The spread of fake news is already a very real problem. Artificial intelligence could make the problem even worse.
That prospect is so frightening that an Elon Musk-backed non-profit called OpenAI has decided not to publicly circulate AI-based text generation technology that enables researchers to spin an all-too-convincing (and, yes, fabricated) machine-written article.
“Due to our concerns about malicious applications of the technology, we are not releasing the trained model,” OpenAI blogged.
Such concerns go beyond just generating misleading news articles. OpenAI worries about deception, bias and large-scale abuse.
For instance, malicious people might exploit the technology to impersonate others online, automate the production of faked social media content, and automate the production of spam or phishing attacks.
Musk himself has been outspoken about AI’s risks to human civilization.
The OpenAI technology, known as GPT-2, is designed to predict the next word given all the previous words it is shown within some text. The language-based model has been trained on a dataset of 8 million web pages.
The model is described as “chameleon-like — it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing.”
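The next-word prediction that GPT-2 performs with a large neural network can be illustrated with a deliberately tiny sketch. The toy bigram model below is a hypothetical simplification for illustration only, not OpenAI's code: it counts which word tends to follow which, then repeatedly picks the likeliest next word.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: GPT-2 does this with a large
# neural network trained on 8 million web pages; here we just count word pairs.
corpus = "the unicorns spoke perfect english and the unicorns lived in the valley".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(seed, length=5):
    """Greedily extend `seed` by repeatedly predicting the likeliest next word."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known continuation for this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

Swap in a larger corpus and the continuations start echoing its style, which is a miniature version of the "chameleon-like" adaptation OpenAI describes.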
Read about fake unicorns
In one example, the researchers fed GPT-2 the following human-written text:
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
The machine generated the rest of the article, a portion of which is printed here:
“The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.
Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.
Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.
Pérez and the others then ventured further into the valley. 'By the time we reached the top of one peak, the water looked blue, with some crystals on top,' said Pérez.
Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.”
AI isn’t perfect
OpenAI concedes that the system isn’t perfect. It took 10 tries, for example, to generate this unicorn article. (To read the rest of the fake AI-generated story go to the OpenAI blog.)
Researchers also point to failure modes such as repetitive text and abrupt, "unnatural" topic switches.
“Overall, we find that it takes a few tries to get a good sample, with the number of tries depending on how familiar the model is with the context,” OpenAI blogged. “When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50 percent of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.”
The company says it will release a smaller version of GPT-2, giving the AI community time to discuss the societal impact of such systems.
As a general rule, though, OpenAI cautions the public to be more skeptical of the text they find online.
The debate about AI continued online. While some voiced concerns, others highlighted the potential. "The article highlights the fears but imagine all the good!!!" tweeted Thomas Anglero, innovation director at IBM Norway.
Follow @edbaig on Twitter
Read or Share this story: https://www.usatoday.com/story/tech/2019/02/15/elon-musks-openai-fake-news-generator-too-scary-release/2880790002/