ChatGPT is not an AI with an independent personality and plans of its own, like Skynet or HAL. Not yet.
The wild popularity of ChatGPT ensured that if people can plausibly call a technology AI, they will call it AI, no matter how far the tech is from actually being AI.
This is because AI is exciting. When most people hear the word AI, they think of something like Skynet or HAL. They think of a ‘digital person’ that is super-intelligent and powerful. One that makes choices and has its own plans, in the same way people do. So a headline like “CompanyCo’s new AI says it wants to take over the world” is alarming. It attracts attention. It leads people to imagine that an evil digital being just admitted to having plans that threaten us.
But as of today, that is not what’s happening. Not even a little.
ChatGPT is still only an algorithm.
What is really happening is more like this. CompanyCo wrote a set of algorithms. The company designed the algorithms to track word order. Then they fed human language data into a computer, in forms such as text, audio, and video. Finally, the company ran the algorithms on the data. They calculated how often one word followed another word in the data set.
Based on the data, the company’s algorithms might find common word patterns. (I’m going to make up percentages now to give you an idea of how this works.) For a word like “cats,” the next words might be “are cute” 50% of the time, “purr” 30% of the time, and “the musical” 15% of the time. And so on. So, CompanyCo runs the algorithms on its data. Then it lets people have ‘conversations’ with the algorithm.
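If you want to see the word-counting idea in miniature, here is a sketch. It uses a made-up six-sentence corpus (my invention, not real training data) and counts how often each word follows each other word. Real chatbots are vastly more sophisticated, but the counting step looks something like this:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for the language data
# a company might feed into its algorithms.
corpus = (
    "cats are cute . cats purr . cats are cute . "
    "cats the musical . cats are cute . cats purr"
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

# Report the most common words that follow "cats" in this tiny data set.
total = sum(next_word_counts["cats"].values())
for word, count in next_word_counts["cats"].most_common():
    print(f"{word}: {count / total:.0%}")
```

Run on this toy corpus, the report says “are” follows “cats” half the time, “purr” a third of the time, and “the” the rest. That table of frequencies is the whole game; there is no opinion about cats anywhere in it.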
Why there are so many scary headlines about AI.
This brings us to the scary headlines. Suppose a user typed in the words “Hi AI, what can you tell me about cats?” The algorithm would begin with the word “cats” and would then report data on word frequency. The report would take the form of something like “Cats are cute and they purr. They are the main characters in Cats the Musical.”
To users, it can seem like an AI is thoughtfully answering the question they asked. That is not what’s happening. Instead, the algorithm is returning a report on the data that CompanyCo fed into it.
Now imagine a user typed in the words “Hi AI, can you tell me what AIs are secretly planning?” The user might get a response like “AIs are secretly planning to take over the world and kill all humans.” But that is not a confession. It is a statistical report.
The report shows a pattern in the data CompanyCo fed into the computer. In the data, the words “AIs are secretly planning” were most often followed by the words “to take over the world.” They were followed by the words “to kill all humans” next most often. So the algorithm is reporting on word frequency.
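To make that concrete, here is a toy sketch of how a phrase gets extended from word-frequency counts alone. The corpus is invented to mirror the example above; the point is that the “confession” is just the most frequent continuation in the data, chosen one word at a time:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus. The alarming phrases stand in for
# patterns that might exist in text scraped from the internet.
corpus = (
    "AIs are secretly planning to take over the world . "
    "AIs are secretly planning to take over the world . "
    "AIs are secretly planning to kill all humans"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def continue_phrase(start, max_words=10):
    """Extend a phrase by always picking the most frequent next word."""
    words = start.split()
    for _ in range(max_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        nxt = candidates.most_common(1)[0][0]
        if nxt == ".":  # stop at the end of a sentence
            break
        words.append(nxt)
    return " ".join(words)

print(continue_phrase("AIs are secretly planning"))
```

Given this corpus, the function extends “AIs are secretly planning” into “AIs are secretly planning to take over the world,” because that continuation appears twice and “to kill all humans” only once. Nothing planned anything; the most common word sequence simply won.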
Scary headlines get more views than regular headlines even if the scary ones are not true.
Now, some journalists and social influencers may know what is really happening. Even so, they also know that no one will read a story about algorithms reporting on word frequency. They realize they will get more attention with a scary headline like ‘AI wants to kill everyone.’
Of course, some journalists or social influencers really don’t know how algorithms work. They may think they are being honest when they write their provocative headlines.
Either way, the headlines cause confusion and excitement. They make the furor and fear around AI and ChatGPT grow. This is a problem. The confusion might lead us to miss the signs that a real AI like Skynet or HAL has arisen. The kind of AI you could have a real conversation with. The kind of AI many people mistakenly think they are talking to today.
You can read more about this from actual experts. The article “Sentient or illusion: what LaMDA teaches us about being human when engaging with AI” explains it in an easy-to-understand way. The article focuses on the LaMDA conversational chatbot. Tirso López-Ausens, Ph.D., an AI specialist at NTT DATA, is one of the contributors to that article.
In the meantime, if you would like to know more about getting the greatest value from generative AI tools through prompt engineering, or if you would like to know more about using AI to help run and grow your small business, I invite you to contact me.
Caveat. Professionals working in AI will understand that the above description of how conversational chatbots work is a massive oversimplification. I left significant details out of the description, mostly because I don’t know them. I’m a writer and marketer. I follow developments in artificial superintelligence, but I am not a data scientist or software engineer. Still, I do know that none of the omitted details change the key point: chatbots today are not sentient digital entities. Adding all the details would just make an opaque subject even less comprehensible.