New Artificial Intelligence Designed to Be Mentally Unstable: What Could Go Wrong?

We tend to think of artificial intelligence entities as flawless intellects, early prototypes of the powerful ‘artilects’ futurists imagine will one day rule our world. We also tend to assume they are immune to unhappy thoughts or feelings. But one company has built a machine-learning system that exhibits the AI equivalent of mental instability, and its creators designed it that way on purpose.

This tortured artist of an AI is called DABUS, short for “Device for the Autonomous Bootstrapping of Unified Sentience.” It was created by computer scientist Stephen Thaler, who used a technique called “generative adversarial networks” to mimic the extreme fluctuations in thought and emotion experienced by humans who suffer from mental illness. His Missouri-based company, Imagination Engines, developed a two-module process: Imagitron infused digital noise into a neural network, prompting DABUS to generate new ideas and content, and a second neural network, Perceptron, was integrated to assess DABUS’s output and provide feedback. Then they added their secret sauce.
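
The article does not reveal Thaler's actual implementation, but the two-module loop it describes can be sketched in a few lines. The toy Python below is a minimal illustration, not Imagination Engines' code: all names, network sizes, and update rules are hypothetical. A small generator network has noise injected into its hidden activations (the Imagitron-like step), and a second "critic" network scores each output (the Perceptron-like step), with the score fed back to raise or lower the noise level.

```python
# Illustrative sketch only: a noisy generator paired with a critic that
# feeds a score back into the noise level. All parameters are made up.
import numpy as np

rng = np.random.default_rng(0)

# Generator: maps an 8-dim latent code to a 16-dim "idea" vector.
W_gen = rng.normal(size=(16, 8))

# Critic: scores a 16-dim idea vector with a single logistic unit.
w_critic = rng.normal(size=16)

def generate(latent, noise_scale):
    """Produce an output, with noise injected into the hidden activations."""
    hidden = W_gen @ latent
    hidden += rng.normal(scale=noise_scale, size=hidden.shape)  # noise infusion
    return np.tanh(hidden)

def critique(idea):
    """Return a quality score in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-w_critic @ idea))

noise_scale = 0.1
for step in range(20):
    latent = rng.normal(size=8)
    idea = generate(latent, noise_scale)
    score = critique(idea)
    # Feedback loop: low scores push the generator toward wilder (noisier)
    # exploration; high scores calm it back down.
    noise_scale = float(np.clip(noise_scale + 0.05 * (0.5 - score), 0.01, 2.0))
    print(f"step {step:2d}  score={score:.2f}  noise={noise_scale:.2f}")
```

In this toy version, low critic scores drive the generator toward noisier, more erratic output and high scores quiet it down, which loosely mirrors the swings between manic over-generation and depressive under-generation described below.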

This method of creating an echo chamber between neural networks is not new or unique. What Thaler and his company are using it for is: deliberately tweaking an AI’s cognitive state to make its artistic output more experimental. Their process triggers ‘unhappy’ associations and fluctuations in rhythm. The result is an AI that exhibits symptoms of insanity.

“At one end, we see all the characteristic symptoms of mental illness: hallucinations, attention deficit and mania,” Thaler says, describing DABUS’s faculties and temperament. “At the other, we have reduced cognitive flow and depression.”

Thaler believes that integrating human-like problem-solving, along with human-like flaws such as mental illness, may significantly enhance an AI’s ability to create innovative artwork and subjective output. While everyone is familiar with the psychedelic, surreal canvases produced by Google’s Deep Dream algorithm, the more measured and meditative work of DABUS may impress in a different way.

Image caption: a few of DABUS’s surreal pieces, born of neural networks

Thaler also believes this technique will improve the abilities of AI in stock market prediction and autonomous robot decision-making. But what are the risks of infusing mental illness into a machine mind? Thaler believes there are limits, but that psychological problems could be just as natural to AI as they are to humans.

“The AI systems of the future will have their bouts of mental illness,” Thaler speculates. “Especially if they aspire to create more than what they know.”


This article originally appeared on The Anti-Media.

Source: http://www.renegadetribune.com/new-artificial-intelligence-designed-mentally-unstable-go-wrong/
