It’s my name. It’s my face. But I never authorized anyone to use my name and image to sell marijuana products or sexual enhancement pills, or to advertise cryptocurrencies.
In fact, throughout my career as a journalist I have carefully avoided making commercials. Yet with artificial intelligence, it will soon be nearly impossible to know what is true and what is false.
Let me tell you what happened.
Some friends had been asking me about products I was supposedly selling on the Internet. I would laugh and tell them it was not true, and I didn’t pay any attention. But the questions kept coming and, being curious, I went online. And I got a tremendous surprise.
I found several Facebook pages in which my name appears selling gummies with CBD, a chemical found in marijuana. And I found photos of me and Dr. Juan Rivera, Univision’s chief medical correspondent, on a Web page promoting a product that improves “sexual intimacy … makes you stronger, makes you more self-assured … makes your wife love you more.” Contrary to what the page says, Dr. Rivera and I never promoted that product. And while the page was branded as “FALSE” by Univision Noticias’ lie detector, thousands of people have seen it.
“It is extremely frustrating to have worked hard for decades to become a specialized physician and won the trust of the Hispanic community, and then see thieves take advantage of my credibility to swindle and cheat the same people I am trying to help,” Dr. Juan, as he’s known, wrote to me. “The social networks that allow that type of fraud, even after they are notified of the identity theft, must accept responsibility.”
There’s fraud everywhere, and of all types.
The Web page Newtral.es reported a fake video in which my image and that of billionaire Elon Musk were manipulated to sell cryptocurrencies. The video was very clumsily made. It’s not even my voice, and the speaker has a Spanish accent. But it’s nevertheless surprising that so much time was spent creating this fake.
Actor Tom Hanks also recently complained that a dental insurance plan was using his image without his permission and with the help of artificial intelligence. It’s interesting that Hanks, who has more than 9 million followers on Instagram, went public with his complaints instead of focusing on finding those responsible or pursuing possible lawsuits.
It is incredibly frustrating that social networks and Web pages can do so little to block or limit disinformation and outright lies. I have been searching for months for those responsible for the pages that use my name and image illegally, but I have not been able to find them, never mind file a lawsuit against them. And that’s not the worst of it.
With the stunning advances in artificial intelligence, it will soon be almost impossible to determine whether videos and audios are true or false. During the recent production of a special program on artificial intelligence, I interviewed Venezuelan musician and composer Cesar Muñoz. Cesar had recorded my voice from various news programs and then used a computer to generate a text with words I had never spoken. That fake recording and my voice were practically the same. It was an impressive replica. And just a few seconds of my recorded voice were enough for a machine to copy it.
Spotify will soon do with some podcasts the same thing that Cesar did with my voice. Using artificial intelligence, it will be able to replicate a program in other languages, but in the voice of the original speakers. For example, if I made a podcast in Spanish, it could be replicated, with my exact voice, in Mandarin, Russian or Danish. That’s not the future. That’s now.
The danger is that soon, thanks to the unrestricted use of artificial intelligence, our voices and images will be imitated almost perfectly. For now there is little to stop it, and once the technology is developed, it is almost impossible to prevent its use.
Today I can still prove to you that I did not make a commercial about marijuana, cryptocurrencies or sexual potency pills. But I have been in journalism for more than four decades, and there are a lot of people who believe what I say. What would happen if, sometime soon, my image and voice turned up appearing to say things I never said? Whom would people believe? The image on their screens, or me?
This problem affects us all, not just those of us on television or public figures. Technology will soon, perhaps within months, be able to invent someone exactly like you and steal your identity.
And for now, the only thing we can say is, “That’s not me.”