
Wednesday, May 31, 2023

ChatGPT and the like - an interesting era has begun!

My previous post with thoughts on "Artificial Dumbing" - a phrase totally coined by me - was pretty much in the words of ChatGPT! It is amazing how it expands blocks of ideas into such wonderfully articulated pieces of writing, which largely resonate with what you think and want to communicate, provided you express yourself well enough to the tool. I hear a lot else is already possible with Generative AI based tools (check this out). And I am curious. Looking forward to trying them all, at least the free ones.

What I have experienced through using ChatGPT extensively of late is that the low-hanging fruit of direct productivity improvements is something everyone must tap into immediately. It lies primarily in the quick generation of specific content, including code, which will be as precise as the context and details provided by the requester. So, if you know what you want to achieve, just type your requirement as clearly as possible - basically whatever is in your mind, whatever you would tell yourself you needed to do. ChatGPT might give you a better output in a few seconds than what you might produce in a few hours. There are constraints on the type of input it takes and the type of output you can derive from it. But then, you can be creative and push the boundaries to some extent. Also, the way the current ChatGPT works, you can use it to augment you, maybe even do the majority of the work for you, but there will be gaps, and you have to fill those so that the output is complete, coherent and meaningful.

A powerful feature of ChatGPT is its conversational format, and the way it works through multiple threads that behave like separate conversations. For example, if I am creating a report on a specific topic, with many sub-topics, side-topics and nuances to deal with, I can first carry on a conversation with the tool as I would with an expert; exchange ideas, views and feedback, and through this process get to a point where, like a human - rather like a friend or colleague conversing with me, or perhaps even better than any of them in a way we wouldn't want to acknowledge - the tool actually understands my views and the intentions that need to come through in the report. With the framework already set, I can then ask the tool to give me the content as I desire, with the structure I want, the nuances I want, the messaging I intend and the tone I desire. And even after that, if there are slight deviations, I can tell the tool to make the necessary fixes, add or remove stuff I don't want, change the tone if I'd like, even change the persona - pretend to be me or someone else - and regenerate the content. I can make these tweaks multiple times, and even ask for many versions by regenerating responses, just for the heck of it sometimes. It doesn't take too many iterations to get what you need. It's not only a huge time saver, but it also gives you a quality that you may not be able to deliver yourself in 100 times the time it took. And that is one aspect that both enthralls and worries me.

The reason it worries me is that if tools like Generative AI become fully integrated into our lives, especially from our childhood but also in later years, we will diminish, or never fully develop, our abilities to imagine, create, express, articulate, write, draw and develop things from scratch - something so unique to human minds and bodies - because we will have tools that do the job better and more effectively. The tools still have limitations, at least for now and the near future: they are not capable of surpassing all human ability - the point they call the singularity - and are only capable of what they can copy and learn from, which is the entire body of human creation so far. That means there would still be value in what we can imagine or create beyond anything anyone has ever done, and there are really no boundaries to that if history is anything to go by. But then, if we are out of practice at the basic level, aren't we generally dumbing down our faculties? How can we hope to race an Ironman if we rarely jog, swim or cycle?

Or am I looking at it all wrong? The time we save by using tools for mundane tasks can indeed be devoted to pursuing goals of a higher order. But the tools we are looking at have abilities beyond the mundane, and if we set boundaries on where they play a role, I think we'd be trying to suppress the impact of one of our greatest inventions. And something so great will always find its way around the stupidity / rigidity of humans, eventually.

There's another possibility. Human endeavor has always found newer areas and greater challenges. The invention of the wheel, and everything since that has enabled us to move faster, has possibly made us poorer runners as a whole, since we are less dependent on that skill for survival. Running has become a sport to compete in, pursued by those interested, willing, skilled and trained at it - a form of entertainment in that way. For many it's for fitness. But we certainly don't need to run from an angry tiger to save our lives, or cover long distances on foot. The analogy is compelling, but the key difference with AI is that we are playing with mental faculties now, and that is fairly recent. Maybe a few centuries from now, we'll look back at this moment as a pivot in human civilization that totally transformed our lives, made us live longer, healthier and happier. Or maybe we'll see it as the dark period that destroyed everything we stood for.

We must therefore develop this carefully, but definitely.

How do you think we must shape this? Can we, beyond a point? Where do we draw a line, to be safe? And should we?

I am tempted to ask ChatGPT for an answer...
