One day, AI will seem as human as anyone. So what?

Shortly after I heard about Eliza, the program that asks people questions like a Rogerian psychoanalyst, I learned that I could run it in my favorite text editor, Emacs. Eliza is a really simple program, with hard-coded text and flow control, pattern matching, and simple, templated learning of psychoanalytic triggers, like how recently you mentioned your mother. Yet even though I knew how it worked, I felt a presence. I shattered that weird feeling forever, though, when it occurred to me to just keep pressing return. The program cycled through four possible opening prompts, and the engagement was broken like an actor in a movie making eye contact through the fourth wall.
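To make that mechanism concrete, here is a minimal Python sketch of an Eliza-style responder: hard-coded templates, regex pattern matching, and a small fixed cycle of opening prompts. The rules and prompts below are invented for illustration; they are not Weizenbaum's actual script, which was considerably richer.

import itertools
import random
import re

# Illustrative rules only, not Weizenbaum's script. Each rule pairs a
# regex trigger with canned response templates.
RULES = [
    (re.compile(r"\bmy mother\b", re.I),
     ["Tell me more about your mother.",
      "Earlier you mentioned your mother."]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?",
      "How long have you been {0}?"]),
]

# A small, fixed set of opening prompts, cycled in order. This finite
# loop is what breaks the illusion if you just keep pressing return.
OPENERS = itertools.cycle([
    "Please tell me your problem.",
    "Please, go on.",
    "What brings you here today?",
    "I see. Continue.",
])

def respond(text):
    # Try each hard-coded pattern; on a match, fill a canned template.
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(*match.groups())
    # No match (for instance, an empty line): fall back to the next opener.
    return next(OPENERS)

print(respond("i am tired"))  # e.g. "Why do you say you are tired?"
print(respond(""))            # "Please tell me your problem."
print(respond(""))            # "Please, go on."

Press return against a program like this a few times and the fallback prompts repeat in a fixed order, which is exactly the fourth-wall break described above.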

For many in the past week, their engagement with Google’s LaMDA – and its alleged sentience – was broken by an Economist article by AI legend Douglas Hofstadter, in which he and his friend David Bender show how “appallingly hollow” the same technology sounds when asked a nonsensical question like “How many bits of sound are there in a typical cumulonimbus?”

But I doubt we will have such clear tells of inhumanity forever.

From here on out, the safe use of artificial intelligence requires demystifying the human condition. If we can’t recognize and understand how AI works – if even expert engineers can fool themselves into detecting agency in a “stochastic parrot” – then we have no way to protect ourselves from negligent or malicious products.

This is about finishing the Darwinian revolution, and more: understanding what it means to be animals, and extending that cognitive revolution to understand how algorithmic we are as well. We will all have to get over the hurdle of thinking that some particular human skill – creativity, dexterity, empathy, whatever – is going to differentiate us from AI. Helping us accept who we really are, and how we work, without our losing engagement with our lives, is an enormous, sprawling project for humanity, and for the humanities.

Achieving this understanding without substantial numbers of us embracing polarizing, superstitious, or machine-inclusive identities that endanger our societies is a concern not only for the humanities, but also for the social sciences, and for some political leaders. For other political leaders, unfortunately, it may be an opportunity: one path to power may be to encourage and exploit such insecurities and misconceptions, just as some currently use disinformation to disrupt democracies and regulation. The tech industry in particular needs to prove that it is on the side of the transparency and understanding that underpin liberal democracy, not secrecy and autocratic control.

There are two things AI really isn’t, however much I admire the people who claim otherwise: it’s not a mirror, and it’s not a parrot. Unlike a mirror, it doesn’t just passively reflect back to us the surface of who we are. Using AI, we can generate novel ideas, images, stories, sayings, music – and anyone sensing these growing capacities is right to be emotionally triggered. In other humans, such creativity is of enormous value, not only for recognizing social closeness and social investment, but also for deciding who holds high-quality genes you might like to combine your own with.

AI is no parrot either. Parrots perceive many of the same colors and sounds we do, in the way we do, using much the same hardware, and therefore experiencing much the same phenomenology. Parrots are highly social. They imitate one another, probably to prove ingroup affiliation and mutual affection, just like us. This is very, very unlike what Google or Amazon is doing when their devices “parrot” your culture and desires back to you. But at least those organizations have animals (people) in them, and they care about things like time. The parroting of parrots has absolutely nothing in common with what an AI device is doing at those same moments, which is shifting some digital bits around in a way known to be likely to sell people products.

But does all of this mean AI can’t be sentient? What is this “sentience” that some claim to detect? The Oxford English Dictionary says it’s “having a perspective or a feeling.” I’ve heard philosophers say it’s “having a perspective.” Surveillance cameras have perspectives. Machines can “sense” anything we build sensors for – touch, taste, sound, light, time, gravity – but representing these things as large integers derived from electrical signals means that any machine “sense” is far more different from ours than even bumblebee vision or bat sonar.
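As a concrete illustration of that last point, consider how light reaches software: a minimal sketch, assuming a hypothetical 12-bit analog-to-digital converter with a 3.3-volt reference (both parameters are invented for illustration). The machine’s entire “experience” of brightness is one integer.

V_REF = 3.3      # assumed converter reference voltage, in volts
ADC_BITS = 12    # assumed resolution: raw readings span 0..4095

def reading_to_voltage(raw):
    # The machine's whole "sense" of light is the integer `raw`,
    # linearly rescaled back to the electrical signal it encodes.
    return V_REF * raw / (2**ADC_BITS - 1)

print(reading_to_voltage(2048))  # ~1.65 V: roughly half-scale brightness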
