Wednesday, May 31, 2023

ChatGPT and the like - an interesting era has begun!

My previous post with thoughts on "Artificial Dumbing" - a phrase I coined - was written almost entirely in the words of ChatGPT! It is amazing how it expands blocks of ideas into wonderfully articulated pieces of writing that largely resonate with what you think and want to communicate, provided you express yourself clearly enough to the tool. I hear a lot else is already possible with Generative AI-based tools (check this out). And I am curious. Looking forward to trying them all, at least the free ones.

What I have experienced through extensive use of ChatGPT of late is that the low-hanging fruit of direct productivity improvement is something everyone must tap into immediately. It lies primarily in the quick generation of specific content, including code, which will be only as precise as the context and details the requester provides. So, if you know what you want to achieve, type your requirement as clearly as possible: basically whatever is in your mind, whatever you would tell yourself you needed to do. ChatGPT might give you a better output in a few seconds than you could produce in a few hours. There are constraints on the kind of input it takes and the kind of output you can derive from it, but you can be creative and push those boundaries to some extent. Also, the way the current ChatGPT works, you can use it to augment you, maybe even do the majority of the work for you, but there will be gaps, and you have to fill those so that the output is complete, coherent and meaningful.
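(For the technically inclined: the same idea can be driven through the API instead of the chat window. Below is a minimal sketch assuming the openai Python package and an API key in the OPENAI_API_KEY environment variable; the requirement text and the model name are purely illustrative, not a recommendation.)

    # Minimal sketch: one clear requirement in, generated content out.
    # Assumes the openai Python package (1.x client) and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()

    requirement = (
        "Write a Python function that reads a CSV of monthly sales figures "
        "and returns the top 5 products by total revenue, with docstrings."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; any chat-capable model works
        messages=[{"role": "user", "content": requirement}],
    )

    print(response.choices[0].message.content)

The clearer and more complete the requirement, the closer the first answer is to what you actually wanted.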

A powerful feature of ChatGPT is its conversational format, and the way it works through multiple threads that behave like separate conversations. For example, if I am creating a report on a specific topic, with many sub-topics, side-topics and nuances to deal with, I can first carry on a conversation with the tool the way I would with an expert: exchange ideas, views and feedback, and through this process reach a point where, like a friend or colleague conversing with me (or perhaps better than any of them, in a way we wouldn't want to acknowledge), the tool actually understands the views and intentions that need to come through in the report. With that framework set, I can ask it to give me the content as I desire: with the structure I want, the nuances I want, the messaging I intend and the tone I prefer. If there are still slight deviations, I can tell it to make the necessary fixes, add or remove material, change the tone, even change the person, pretending to be me or someone else, and regenerate the content. I can make these tweaks many times, and even ask for multiple versions by regenerating responses, sometimes just for the heck of it. It rarely takes many iterations to get what you need. It is not only a huge time saver; it also gives you a quality you might not be able to deliver yourself in a hundred times the time it took. And that is the aspect that both enthralls and worries me.
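(Again for the technically inclined, here is a minimal sketch of that "conversation as context" idea through the API, assuming the openai Python package; the ask helper, the system prompt and the model name are my own illustrative choices, not anything prescribed by the tool.)

    # Minimal sketch: the thread is just a growing list of messages, so every
    # follow-up request ("change the tone", "write it in first person") is
    # interpreted against the whole conversation so far.
    # Assumes the openai Python package (1.x client) and OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()
    thread = [
        {"role": "system", "content": "You are helping me draft a report."},
        {"role": "user", "content": "Here are my rough views on the topic: ..."},
    ]

    def ask(prompt: str) -> str:
        """Append a request to the thread, get a reply, keep both in context."""
        thread.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=thread,
        ).choices[0].message.content
        thread.append({"role": "assistant", "content": reply})
        return reply

    draft = ask("Draft the report with the structure and tone we discussed.")
    revised = ask("Keep the structure, but make it more formal and in first person.")

Each tweak is just one more message appended to the thread, which is why the tool seems to "remember" everything agreed earlier in the conversation.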

The reason it worries me is that if tools like Generative AI become fully integrated into our lives, especially from childhood but also in later years, we will diminish, or never fully develop, our abilities to imagine, create, express, articulate, write, draw and build from scratch, abilities that are so unique to human minds and bodies, because we will have tools that do the job better and more effectively. The tools still have limitations, at least so far and for the near future: they are not capable of surpassing all human ability (what some call the singularity) and can only work from what they copy and learn from, which is the entire body of human creation so far. That means there will still be value in what we can imagine or create beyond anything anyone has ever done, and there are really no boundaries to that if history is anything to go by. But if we are out of practice at the basic level, aren't we dumbing down our faculties in general? How can we train for an Ironman if we rarely jog, swim or cycle?

Or am I looking at it all wrong? The time we save by using tools for mundane tasks can indeed be devoted to pursuing goals of a higher order. But the tools we are looking at have abilities well beyond the mundane, and if we set boundaries on where they play a role, I think we would be trying to suppress the impact of one of our greatest inventions. And something so great will always find its way around the stupidity or rigidity of humans, eventually.

There's another possibility. Human endeavor has always found newer areas and greater challenges. The invention of the wheel, and everything that has enabled us to move faster ever since, has probably made us poorer runners as a whole, since we are less dependent on that skill for survival. Running has become a sport to compete in, for those interested, skilled and trained at it; it has become a form of entertainment that way. For many it is simply fitness. But we certainly don't need to run from an angry tiger to save our lives, or cover long distances on foot. The analogy is compelling, but the key difference with AI is that we are now playing with mental faculties, and that is fairly recent. Maybe a few centuries from now, we'll look back at this moment as a pivot in human civilization that totally transformed our lives and made us live longer, healthier and happier. Or maybe we'll see it as the dark period that destroyed everything we stood for.

We must therefore develop this carefully, but definitely.

How do you think we must shape this? Can we, beyond a point? Where do we draw a line, to be safe? And should we?

I am tempted to ask ChatGPT for an answer...

Sunday, May 28, 2023

Unveiling the Shadow Side of Corporate Decision-Making: The Era of "Artificial Dumbing"

In the age of rapid technological advancement and the pursuit of artificial intelligence, a contrasting phenomenon has emerged within the corporate world, one that can be described as "artificial dumbing": a term I coined, and one that I think captures something that is eating away at human productivity very deeply. This concept refers to deliberately non-intelligent actions taken by corporate executives that serve their personal interests, often at the expense of rational choices and genuine insights. From sales and strategy to delivery and beyond, artificial dumbing casts a shadow on the decision-making landscape. Let us delve deeper into this intriguing and concerning aspect of corporate behavior.

The Prevalence of Artificial Dumbing:
Artificial dumbing pervades various domains within corporate functions, where self-serving motivations can eclipse rational thinking. In sales, executives might resort to manipulative tactics and short-term gains, sacrificing long-term customer relationships. In presales, decisions may be driven by personal biases rather than objective evaluation, hindering the pursuit of optimal solutions. Even in strategy formulation, misguided ambitions and the desire for personal glory can lead to shortsighted plans detached from reality. This trend poses significant challenges to the pursuit of genuine progress and ethical business practices.

The Factors Behind Artificial Dumbing:
Several factors contribute to the propagation of artificial dumbing in corporate decision-making. The pressures of competition, quarterly targets, and the relentless pursuit of individual success create an environment that incentivizes short-term thinking and self-preservation. In addition, organizational structures and hierarchies sometimes prioritize individual achievements over collective wisdom, promoting a culture that rewards personal gain over the common good. Moreover, the abundance of information in today's interconnected world can lead to selective data interpretation, enabling executives to cherry-pick facts that align with their preconceived notions or personal interests.

Consequences and Implications:
The consequences of artificial dumbing can be far-reaching. It erodes trust within organizations, stifles innovation, and limits sustainable growth. Employees who witness such behavior may become disillusioned, and the overall corporate culture may suffer as a result. Moreover, the collective intelligence and potential of organizations remain untapped when decision-making is clouded by self-serving agendas. Ultimately, the negative repercussions extend beyond the corporate realm, impacting stakeholders, customers, and society at large.

Combatting Artificial Dumbing:
Addressing artificial dumbing requires a multi-faceted approach. Organizations should foster a culture of integrity, transparency, and collaboration, emphasizing the importance of long-term success over short-term gains. Encouraging diverse perspectives and empowering employees to challenge flawed decisions can help counteract personal biases. Furthermore, fostering a learning environment that values evidence-based decision-making and critical thinking can help dismantle the allure of artificial dumbing. Leaders must set the example by prioritizing ethical conduct and promoting a collective mindset focused on sustainable progress rather than self-interest.

Conclusion:
Artificial dumbing represents a concerning trend in corporate decision-making, where self-serving actions take precedence over intelligent choices and authentic insights. Recognizing and combating this phenomenon is vital for organizations to foster a culture of ethical decision-making, innovation, and long-term success. By challenging personal biases, fostering collaboration, and prioritizing collective intelligence, businesses can overcome the allure of artificial dumbing and embrace the transformative power of genuine intelligence in their pursuit of a better future.

Short-Termism - Focus on Today at the Cost of Tomorrow

"Strategies don't come out of a formally planned process. Most strategies tend to emerge, as people solve little problems and learn...