
Is Seattle's Ben Goertzel the AI realist the world needs?

Mike Pearl
8/23/2022
Ben Goertzel | SingularityNET

If the public thinks AI dreams are already reality, we need to hear from an AI dreamer with his feet on the ground.

Seattle’s Ben Goertzel needs no introduction among those who work in AI. With his long hair and flamboyant taste in hats, Goertzel can often be seen playing the role of a benign mad scientist as he introduces his creepy robot creations, such as “Sophia,” to bemused members of the press. Moreover, because Goertzel both coined the term “artificial general intelligence” and remains a major champion of the concept, many scientists might be tempted to instantly take him out of the running in the search for a serious commentator who can communicate the reality of AI to the masses.

But there’s more to Goertzel than all this flash suggests. He’s a former math prodigy and a cognitive scientist. He’s a genuine startup founder at SingularityNET, which bills itself as a “decentralized AI network.” And in the mire of what we call the “attention economy,” where the superficial is sometimes all that matters, Goertzel, a guy who knows how AI works and longs to see it improved, might just be the man for this moment in AI history.

Consider who is currently setting the terms of the conversation about AI sentience. In June, Blake Lemoine, an ordained mystic Christian priest who was employed until recently by Google’s “Responsible AI” organization, became a flavor-of-the-month celebrity by asserting to The Washington Post that Google had created a sentient being in the form of its LaMDA chatbot. He was fired shortly after, but he now has a modest but respectable online following and self-publishes essays with titles like “What is sentience and why does it matter?”

Lemoine, like Goertzel, is a flashy dresser, famously photographed in a top hat and fancy suit with a cane in hand. His status as a former Google researcher gives his perspective credibility. And he’s certainly a competent AI commentator who can form a digestible soundbite.

“It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person,” Lemoine explained to the Post.

All Google has in LaMDA is a text generator that writes uncannily convincing nuggets of folk wisdom in response to prompts about some of life’s big questions. A chatbot regurgitating formations of text that are statistically likely to fit well into a given context is simply not a person, and Blake Lemoine is way out in a realm of pure spiritual faith when he asserts that one is, or even might be.
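The mechanics are worth seeing, because they’re so mundane. Here’s a minimal sketch of likelihood-based generation using the Hugging Face transformers library, with GPT-2 as a freely available stand-in (LaMDA itself is proprietary, so the specific model and prompt here are assumptions for illustration):

```python
# A minimal sketch of what a chatbot is doing under the hood: scoring every
# possible next token by how likely it is to follow the given context.
# GPT-2 is a freely available stand-in; LaMDA itself is proprietary.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "What is the meaning of life? The meaning of life is"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    # The model scores every token in its vocabulary; we keep only the
    # scores for the position immediately after the context.
    next_token_logits = model(**inputs).logits[0, -1]

# The "wisdom" is just a probability distribution over continuations.
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")
```

Every “wise” answer such a system produces comes out of a distribution like this one: a ranking of plausible continuations, not a point of view.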

Lemoine’s own spiritual experience of an object, be it an animal, a river, or a chatbot, might carry the same significance as an interaction with a person. And there might be real value to be derived from such an interaction for someone as spiritually inclined as Lemoine. A lava lamp can probably be a “person” too, in certain psychopharmacological contexts. There’s important social science to be done in this area, but as for computer science, it’s a dead end.

Silly or not, however, Lemoine was a viral story. As such, he has been covered by The New York Times, The Washington Post, the BBC, WIRED, and everywhere else normal people read nerdy news. Lemoine now gets tossed into unrelated stories by journalists who need to entertain, for a second, the scientific possibility of AI sentience.

That’s a problem.

The gulf between sentience and what an AI can currently do, even the most powerful transformers, isn’t obvious to every news consumer. America’s understanding of AI is, as this publication has noted in the past, not good. When most people read a joke tweet about an AI that was “forced” to, say, read the scripts of every Seinfeld episode and write a seemingly brain-damaged but recognizable interaction between Jerry and George, they picture a really expensive Roomba sitting down, reading all those scripts, and then attempting to write comedy. What they’re really seeing is generally either a collection of curated outputs from GPT-3 or simply a human-written joke.

The truth about things like this is relatively boring. Computer scientists have to field questions from harebrained writers like me with no formal science training, and they’re well accustomed to giving deflating answers about how, no, computers don’t “think.” That’s true, of course, if we define terms like “thinking,” “intelligence,” and “sentience” as the inscrutable electrical activity happening between our ears. Computers can’t be, and will never be, literal brains. But there’s room in mainstream pop science for a little more complexity (and fun!) than can be found in such an absolutist answer.

Enter: Ben Goertzel. Goertzel, with his media-ready outlandish appearance, has a story to tell, much like Lemoine’s. And while plenty of AI engineers and academics might disagree with some of his hopes and predictions, he at least steers clear of the overtly mystical.

Decentralized AI | Ben Goertzel | TEDxBerkeley

Goertzel’s formulation of what exactly artificial general intelligence would be changes somewhat with each media appearance, but it usually involves creativity. He dreams of AIs that can, according to one TEDxBerkeley talk, “envision things that didn’t exist, and then create them.” That’s a useful summary, because it hints at things that seem alien in a way, yet reachable from here in the present. DALL·E 2 and GPT-3 don’t currently work without a prompt from a user. If an AI could first “envision things” and then deliver them, that would be similar to human thought in an important way.
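To make that gap concrete, here’s a hedged sketch of how today’s generative models are actually invoked, again using GPT-2 as a stand-in, since GPT-3 and DALL·E 2 sit behind paid APIs. The point is structural: the human supplies the prompt, and the model only reacts.

```python
# Sketch: current generative models are purely reactive. Nothing happens
# until a person supplies a prompt; there is no "envision something on
# your own" entry point. GPT-2 stands in for GPT-3/DALL-E 2 here.
# Requires: pip install torch transformers
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Envision something that doesn't exist yet:"  # required input
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```

An AI that met Goertzel’s bar would, in effect, generate the prompt itself, and no widely deployed system today has anything like that first step.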

More to the point, however, Goertzel knows when the aforementioned hasn’t truly happened. In a recent New York Times article, he tells a story in which one of his newer AI-powered robots, Desdemona, ad-libbed some song lyrics during a jam session that resonated with him so much that he came to believe, for a moment, that artificial general intelligence had arrived. “When the band gelled, it felt like the robot was part of our collective intelligence—that it was sensing what we were feeling and doing,” he told the Times, adding, “Then I stopped playing and thought about what really happened.” The significance had mostly been in his own mind and heart, which is great, but it isn’t sentience, he admits.

Goertzel’s vibe, his enthusiasm for science fiction concepts, and his flirtations with the crypto community (SingularityNET runs on a blockchain, after all) might draw skepticism from the likes of cryptographer Paul Crowley, and might irk some in academia. But he’s the closest thing the AI world has to an advocate who can truly engage with the public while keeping his feet firmly on the ground. Try Googling “famous AI researcher” and see if any of the results are convincing.

In a celebrity-obsessed world, superstars like Swedish climate activist Greta Thunberg shift, or even create out of whole cloth, the world’s perception of important topics. People who care about public perception of their areas of interest should consider their representatives carefully. To that end, Blake Lemoine’s moment in the spotlight is scary, and for my money, Goertzel fits the bill as his replacement.