DeepMind Says It Had Nothing to Do With Research Paper Saying AI Could End Humanity

Image: NurPhoto / Contributor via Getty Images

After a researcher with a position at DeepMind—the machine intelligence firm owned by Google parent Alphabet—co-authored a paper claiming that AI could feasibly wipe out humanity one day, DeepMind is distancing itself from the work. 

The paper was published recently in the peer-reviewed AI Magazine, and was co-authored by researchers at Oxford University and by Marcus Hutter, an AI researcher who works at DeepMind. The first line of Hutter’s website states: “I am Senior Researcher at Google DeepMind in London, and Honorary Professor in the Research School of Computer Science (RSCS) at the Australian National University (ANU) in Canberra.” The paper, which currently lists his affiliations as DeepMind and ANU, runs through a series of thought experiments about humanity’s future with a superintelligent AI that operates on schemes similar to those of today’s machine learning programs, such as reward-seeking. It concludes that this scenario could erupt into a zero-sum game between humans and AI, one that would be “fatal” if humanity loses.


After Motherboard published an article on the paper with the headline “Google Deepmind Researcher Co-Authors Paper Saying AI Will Eliminate Humanity,” the company moved to distance itself from the work and asked Motherboard to remove mention of the company. In a statement to Motherboard, DeepMind claimed that the affiliation was listed in “error” and was being removed (it had not been removed at the time of writing), and that Hutter’s contribution was made solely under the banner of his university position.

“DeepMind was not involved in this work and the paper’s authors have requested corrections to reflect this,” a DeepMind spokesperson told Motherboard in an email. “There are a wide range of views and academic interests at DeepMind, and many on our team also hold university professorships and pursue academic research separate to their work at DeepMind, through their university affiliations.”

The spokesperson said that while DeepMind was not involved in the paper, the company invests effort in guarding against harmful uses of AI, and thinks “deeply about the safety, ethics and wider societal impacts of AI and research[es] and develop[s] AI models that are safe, effective and aligned with human values.”

DeepMind declined to comment on whether it agreed with the conclusions of the paper co-authored by Hutter.


Michael Cohen, one of the co-authors of the paper, also asked Motherboard for a similar correction. Motherboard’s editorial policy is to not correct a headline unless it contains a factual error.

While DeepMind says it’s committed to AI safety and ethics, Google has previously shown that when criticism from people with positions at the company comes on a little too strongly—regardless of whether they also hold outside commitments—it’s all too happy to cut and run.

For example, in 2020 the prominent AI researcher Timnit Gebru—who at the time held a position at Google—co-authored a paper on ethical considerations in large machine learning models. Google demanded that she remove her name from the publication or retract it, and ultimately fired her. Gebru’s ousting prompted Google employees to publish a blog post detailing the events leading up to the firing, including that the paper had actually been approved internally; only after it drew public attention did Google decide it could not be associated with the work and demand that its name be excised. When it wasn’t, the company cut Gebru loose.

In response to Motherboard’s first story about the paper, Gebru tweeted that when she asked Google if she could add her name, under an affiliation that wasn’t Google, to the AI ethics paper that eventually led to her firing, she “was met with laughter.”

Margaret Mitchell, another AI ethicist, who was fired from Google shortly after Gebru, tweeted that the company told them that, as long as they worked there, Google “had FULL say over what we published.”

Holding affiliations in both academia and the private sector is relatively common, and it comes with its own set of fraught ethical concerns, given corporations’ long history of capturing academic research to produce favorable results. What Google has shown is that it will exploit that blurry divide to shake off criticism when it suits the company.
