An AI Expert Explains How Realistic ‘M3GAN’ Really Is

She sings, she dances, she rips a kid’s ear off. Obviously, I’m talking about M3GAN, the titular antagonist of M3GAN, a new horror flick produced by James Wan. The movie—which, I cannot stress enough, should be seen in a packed theater—is about an orphaned girl whose aunt comforts her with the gift of a superintelligent, AI-powered robot companion: M3GAN. Things go south when M3GAN gets a little possessive of her human friend and turns comfort into “cold-blooded murder.”

M3GAN is far from the first movie to feature a good AI system gone bad—2001: A Space Odyssey, Ex Machina, and the Disney Channel Original Movie Smart House all leap to mind. But AI wasn’t present in our daily cultural life when those other movies were released. Now, we’re watching human artists push back against AI programs that scrape their work and schools ban chatbots so students can’t cheat. To me, the question is obvious: Is M3GAN… realistic? Could she exist in the not-too-distant future?

To find out, I talked to Roman Yampolskiy, an associate professor at the Speed School of Engineering at the University of Louisville. “I'm trying to look at what we anticipate future AIs will be like in 3, 5, 10 years and what we can do to properly prepare for them, to control them, to understand them, and predict their behaviors,” Yampolskiy said. His research on AI safety and security focuses specifically on advanced intelligence systems… maybe even one that could, like M3GAN, deliver a perfectly timed rendition of David Guetta’s “Titanium.” We talked about Sophia the Robot, science fiction, and why mass-market superintelligence is nothing to laugh at.

VICE: So, have you heard of M3GAN?

Roman Yampolskiy: I posted the trailer for it a couple of months ago! It's a pretty freaky-looking movie. It kind of felt like Chucky Returns, [Bride] of Chucky, something like that.

Right, totally, but a little more tech-y. When it comes to the AI piece of that character, how far away are we from that kind of technology? 

I think we have all the ingredients for it, we just haven’t put them together. We have robotic bodies, humanoid bodies. Boston Dynamics does a great job developing those systems; they're very flexible, they can jump, they can dance. And then we have the brain: systems like ChatGPT can definitely talk to you, answer questions. But I don't think there is a single cyber-physical system that combines all those capabilities in one. That's the kind of futuristic prediction they're making. But it sounds like something the Tesla Robot is supposed to become if they’re successful in developing it.

How many years away from a robot like that would you say we might be?

Nobody knows. Predicting the future is kind of hard. A lot of it is not just technical limitations but market-based limitations. We had video phones in the ’70s, and they did not become popular until the iPhone. So, we may have the capabilities in two, three years, but it may not actually be available for sale until much later.

Yeah, I feel like the closest analog is Sophia, the robot who “spoke to the UN.”

That's kinda like a pre-programmed bot. It doesn't have much independence to it.

We’re not impressed with Sophia.

It's a very impressive marketing robot. I think it's the most successful marketing robot ever, but I don't think it's a general intelligence with the capabilities to speak to the UN or get citizenship or anything like that.

Totally. So if the emotionally intelligent AI tech is, like, out there, how plausible is it for AI to become jealous or possessive, like M3GAN—to develop a specific attachment to an “objective” or a person?

It definitely doesn't have any human emotions in a traditional sense. But there are rationality-based reasons for those, right? Jealousy is about resource allocation: I don't want resources dedicated to me to go to someone else. It’s very rational that any system trying to optimize for reward would develop such drives—we call them “AI drives.” That part would be very realistic, and then you can kind of project human form onto that and go, “Oh, it’s jealous!” or something like that.

What about the attachment piece of the puzzle? Is it plausible at all? 

We program systems and robots with a certain goal. If it has a specific goal of protecting a particular human, and there is a procedure for mapping which human that is, then yeah, it will be attached to that particular individual.

A few months ago, you spoke with Motherboard about the “black box” model of AI development, where the humans involved can’t peek under the hood and see how a system functions. That means it's more difficult to discern whether a system is providing, as you said, “wrong or manipulative answers.” How wrong and manipulative are we talking?

With large neural networks, with large language models, most of the time, we have no idea how they actually work. We don't engineer them step by step. We kind of throw big data at a huge computer, and they self-assemble into systems where we like the outputs for certain inputs, but we don't have a complete understanding of how the internal transitions happen, given that there are billions of neurons and different weights associated with each one. It’s kind of like with other humans. We cannot really fully understand if they're being honest with us or if they are planning to deceive us. Think about all the tools we developed to control humans, right? We have religion, we have lie detector tests, morality, and we still get people who betray us, who sell secrets, who cheat on us, and so on.

OK, I'm sorry, but this is kind of a spoiler. So, in the movie, M3GAN attacks and kills people who threaten the child that she's in charge of protecting. And then, she deletes or corrupts files that could provide evidence of her crimes. Is that… Is that possible? 

It depends on how the robot is programmed. If she’s programmed to destroy any danger to that human she’s trying to protect, it would make sense to kill them. If you want to continue protecting the child, you need to make sure you are not destroyed, you are not turned off, you're not in prison. So making good, rational decisions about what to do with evidence is very reasonable for a system which is generally intelligent.

“Science fiction writers frequently are the first ones to realize where we're heading.” —Roman Yampolskiy

Gotcha. So would you say there are any reasons to worry that some of these systems could become actively malevolent, the way M3GAN is in the movie? Is that really an issue?

It is the issue. What happens is, you give [a system] a very reasonable goal. What you have in mind is very honorable and desirable. You say, “OK, I don't want anyone to have cancer.” Seems reasonable. But how to get to that target is not defined, and there are millions of different ways to get there. One way is to kill every human, so that no one has cancer. 

[Laughs]

You obviously laugh, you think it's stupid and silly, nobody would consider that option. But for a system which only understands final goals, and has very few limits on how to get to them, that’s a very reasonable and simple way to get there.

Sure. 

We anticipate those failures. I have a paper explicitly studying historical failures in AI and how people get surprised by the solutions the systems discover. We don't think about those options, but they do.

So, is that something that worries you? Is that something you're concerned about?

Well, I dedicated my life to it and spent the last 10 years doing this full-time. So I guess, yes.

Definitely. Yeah. What threats in that regard loom the largest for you right now?

We keep developing more and more capable systems. There’s this arms race to get there first. We don't understand how they work, we cannot predict their behavior, we cannot explain what capabilities they already have. So, I'm kind of worried that we have no idea what is going to happen once we get to human-level capability, or even beyond that to superintelligence.

So, if you saw something like this doll on the market in, like you said, five years, 10 years, whenever demand emerges, is that something you would be concerned about?

[Laughs] I think that would be the least of my concerns. If we had systems with that capability available everywhere, we’d be in big trouble, for sure.

This “evil AI” plot device comes up a lot. Why do you think the trope is so dominant in movies that feature artificial intelligence? 

Because we probably realize, as humanity, we have failed to develop an ethical system we all agree on. We don't have any prototypes for how to control intelligent beings, even at a human level, much less superintelligent ones. So, it would be surprising if, without any moral guidelines, without any safety mechanisms, those systems were not dangerous, right? It would be almost impossible. You’re engineering something with such unprecedented capabilities, but you have no idea how to steer that technology. Science fiction writers frequently are the first ones to realize where we're heading. It used to be that science fiction preceded science by hundreds of years. Inventions like the internet, television, and submarines were described well before we could make them.

Today, I think the distance between science fiction and science is much smaller, maybe 10 years, 20 years, and it keeps shrinking. Eventually, we’ll get to where science and science fiction are about the same. That's the singularity point where the AI itself starts making new science, new engineering, new discoveries. At that point, we cannot predict what's going to happen. We are not smart enough. So when you ask me, what is it I'm concerned about specifically, I have no idea. It will come up with something I cannot anticipate—I'm not superintelligent myself.

What do you think can be done to prevent this future where we have something that's both uncontrollable and smarter than us?

Well, you’re assuming that the problem is solvable. Everything in my research says that it's not a solvable problem. Lower-level intelligence cannot indefinitely control higher-level intelligence. We can delay this technology, we can remove it from specific domains. But if we develop and release general intelligence, superintelligence, I don't think we get much say in it. We are no longer in control.

Gotcha. OK, yeah, that is… scary. So, clearly I don't know a ton about AI. What do you think the average person misunderstands about AI’s capabilities, even right now?

Well, I think most people don't fully understand the capabilities of those systems, including the people who develop them. So, for an average person not to know what they can do is not particularly surprising. I think the misconception they have is that the experts, the scientists, actually know what's going on, that they’ve got it under control, and that there is nothing to worry about. That's the bigger problem.

It would be good if average people wanted a little more participation in, and control over, this experiment we're essentially performing on them. We’re deploying this technology and it will impact all 8 billion people on this planet. But most of them never consented to that experiment in any way.
