Google just released its long-awaited answer to ChatGPT: Gemini, an AI chatbot that can finally match OpenAI’s offering. Early reviews of Google’s chatbot are slowly rolling out, and reviewers are impressed. However, some can’t shake an eerie feeling that Gemini has more ghosts than a haunted house.
“GPT-4 is full of ghosts. Gemini is also full of ghosts,” said Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, in a Thursday blog post. “There is a weirdness to GPT-4 that isn’t sentience, but also isn’t like talking to a program… It is the illusion of a person on the other end of the line, even though there is nobody there.”
Mollick made his assessment after receiving a month of early access to Gemini’s most advanced model from Google. He is hardly the first person to conclude that an AI chatbot may be sentient, or something close to it. Google fired Blake Lemoine, an engineer working on its large language model LaMDA, in 2022 after he claimed the company’s AI was alive. Scientists were quick to dismiss Lemoine as crazy, but the idea that powerful chatbots are sentient just won’t go away.
When Mollick says he sees a “ghost,” he means the semblance of a person in the hazy fog of AI chatbot text. Yes, chatbots hallucinate constantly and sometimes produce awkward sentences that reveal them to be robots. But through all that, you can vaguely recognize human characteristics. Sometimes you walk away from a conversation with ChatGPT feeling like you just spoke to someone. But it’s just you and the software; the presence you sensed is a ghost.
Mollick notes that Gemini has a different “personality” than ChatGPT. Gemini appears friendlier, more agreeable, and fonder of wordplay. He’s not alone in observing that different chatbots have distinct tones and personalities.
AI detection companies use cadence and tone to identify which AI chatbots people are using. That’s how Pindrop was able to identify ElevenLabs as the AI behind the New Hampshire deepfake robocalls impersonating President Biden. That’s also how Copyleaks, an AI detection company specializing in text, identifies AI chatbots with 99% accuracy.
Microsoft researchers do not outright say GPT-4 is alive, but simply that it shows “sparks” of human-level cognition. In the 2023 study titled “Sparks of Artificial General Intelligence,” Microsoft scientists examined how the LLM behind ChatGPT could understand emotions, explain itself, and reason with people.
Researchers found that GPT-4 was able to pass several “Theory of Mind” tests, which are used to assess the emotional intelligence of children. There were certainly shortcomings in OpenAI’s LLM, but the tests showed sparks of humanity. Microsoft scientists even called into question how we measure “human-level intelligence” altogether.
The people at the Sentience Institute have embraced this outlandish belief as well. They think AI models should be “granted moral consideration,” and the organization works to expand the circle of things treated as sentient. The Sentience Institute believes AI models could one day be mistreated, either incidentally or on purpose, if not given proper consideration.
There’s broad scientific agreement that large language models are not currently alive. However, there’s a growing group of people who seem to believe AI models are not far off from sentience. Call them crazy, or not, but people are seeing ghosts in the machine.