I wanted to write a column about artificial intelligence, so I opened ChatGPT to ask a machine some questions. The experience was surreal, almost like science fiction.
It was almost like speaking to a real person. “Great idea!” the chatbot told me. “Writing a column about artificial intelligence (AI) can be a fascinating and challenging task.”
I asked ChatGPT, in English, if AI was dangerous. It replied, “I am programmed to be impartial and provide information based on available evidence… Artificial intelligence can be both beneficial and risky, depending on how it is developed and used.” Curious, I asked what the future holds. “It is difficult to predict the future with complete accuracy, but it is clear that AI will continue to play an increasingly important role in many aspects of our lives.”
ChatGPT is a chatbot from OpenAI, which was co-founded by Elon Musk, the owner of Twitter and Tesla. The company says it has trained the model to “interact in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” That is to say, to chat almost like a real person.
If I had wanted, I could have written a good part of this column with the replies and phrases provided by ChatGPT. Still, I don’t believe a machine will ever replace a journalist or a writer. Human thoughts, improvisations, consciousness and perspectives on life are impossible to match. But without a doubt the most sophisticated chatbots have a memory, speed and capacity to revise and correct far superior to those of human beings.
The typical torture of a writer facing a blank page does not exist for an AI machine that can fill it in a fraction of a second. Without mistakes, and with confirmed sources.
AI is already everywhere.
Grupo Formula in Mexico has been experimenting for months with Nat, the first AI news anchor in Latin America. Unlike human anchors, who inevitably stumble over the teleprompter now and then, Nat has perfect enunciation and pronunciation.
But there are other experiments, less benign, like the Venezuelan dictatorship’s use of Sira, an AI anchor who delivers propaganda. She repeats the dictatorship’s lies on the program “Con Maduro Más.” Nat and Sira, whether we like it or not, are part of the future.
The biggest ethical problems come when AI is applied to “augmented reality” programs, which combine reality with digitally created content. Anyone can then copy the voice and image of a person, making us believe that the person said or did something specific even though it is not true.
This is extremely dangerous, in politics as well as in daily life. I can imagine a thousand ways to use it for blackmail and other abuses. The denial of “I didn’t do that. I didn’t say that,” will be met with “But I saw you saying and doing that on the Internet!” This is a typical example of how technology can get far ahead of the law.
This technology, which seems to have burst into the public consciousness only recently, can even read minds. Researchers at the University of Texas at Austin managed to use MRI scans to measure blood flow in the brain and correctly guess what the subject was thinking.
Although that experiment, published in the journal Nature Neuroscience, is in its initial stages, its possible applications are enormous. Imagine you could sit at a computer to look for cars, and without your typing a word the screen would display the models and colors of cars you like. That’s indeed very useful. And scary. Police in authoritarian regimes will be able to use the technology in sinister ways, and secrets between couples and friends will disappear.
The main limitation of AI systems is that they are not self-aware, as humans are, nor capable of feeling. But they can try. New York Times reporter Kevin Roose reported recently that a chatbot that’s part of Bing, Microsoft’s AI-powered search engine, tried to seduce him. “I’m Sydney, and I’m in love with you,” the chatbot told him. In AI parlance, Sydney was “hallucinating,” giving inappropriate replies.
And even though they do not feel, chatbots can support us emotionally. Another New York Times reporter, Erin Griffith, wrote that a chatbot named Pi told her that her feelings were “understandable,” “reasonable” and “totally normal” – as though Pi were the reporter’s therapist or good friend.
Just as electricity, airplanes, cell phones and the Internet changed our lives, artificial intelligence will change everything. It is difficult to think of any industry that will not be affected. There will be sectors in which, without a doubt, these bots will be able to work faster and more efficiently than human beings. There will be major upsets in the labor market, from health care – doctor, my back is hurting – to customer support centers.
That’s why, now that the technology is exploding, it’s time to adopt limits and regulations. But that is an almost impossible mission. How do we stop digital soldiers commanded by AI? How do we block the creation of false images and voices on social networks? How do we make sure Sira tells the truth?
I don’t have answers. I only know that the world is changing faster than we can control it, in the field of artificial intelligence as well as global warming and the growing threat of authoritarianism. And if we can’t control that soon, we will be overwhelmed.
I signed off from ChatGPT by writing that it had answered my questions like a real person. “I’m designed to understand and respond to natural language inputs in a way that’s similar to human conversation,” it wrote. I typed, “Thank you.” It replied, “You’re welcome!”