PSYOP: Public Perception of Autonomous Machines

March 26th, 2016

The vast majority of robotics research is funded by the military industrial complex.

There is a lot of dancing around that fact, but the reality is that governments’ desire for more, cheaper, and deadlier killing machines is at the root of this.

Sure, the corporate drive to shrink payrolls is a large factor, but I see that as a by-product of the military work.

I found this piece interesting because it touches upon the efforts at perception management around autonomous systems: the sugar that helps the Rise of the Machines medicine go down.

With regard to the “soft fascism” described here, don’t miss the recent post on The New Mind Control, which describes an emerging, “unseen dictatorship.”

Via: The Atlantic:

The year is 2016. Robots have infiltrated the human world. We built them, one by one, and now they are all around us. Soon there will be many more of them, working alone and in swarms. One is no larger than a single grain of rice, while another is larger than a prairie barn. These machines can be angular, flat, tubby, spindly, bulbous, and gangly. Not all of them have faces. Not all of them have bodies.

And yet they can do things once thought impossible for machines. They vacuum carpets, zip up winter coats, paint cars, organize warehouses, mix drinks, play beer pong, waltz across a school gymnasium, limp like wounded animals, write and publish stories, replicate abstract expressionist art, clean up nuclear waste, even dream.

Except, wait. Are these all really robots? What is a robot, anyway?

This has become an increasingly difficult question to answer. Yet it’s a crucial one. Ubiquitous computing and automation are occurring in tandem. Self-operating machines are permeating every dimension of society, so that humans find themselves interacting more frequently with robots than ever before—often without even realizing it. The human-machine relationship is rapidly evolving as a result. Humanity, and what it means to be a human, will be defined in part by the machines people design.

“We design these machines, and we have the ability to design them as our masters, or our partners, or our slaves,” said John Markoff, the author of Machines of Loving Grace, and a long-time technology reporter for The New York Times. “As we design these machines, what does it do to the human if we have a class of slaves which are not human but that we treat as human? We’re creating this world in which most of our interactions are with anthropomorphized proxies.”

In the philosopher Georg Wilhelm Friedrich Hegel’s 1807 opus, The Phenomenology of Spirit, there is a passage known as the master-slave dialectic. In it, Hegel argues, among other things, that holding a slave ultimately dehumanizes the master. And though he could not have known it at the time, Hegel was describing our world, too, and aspects of the human relationship with robots.

But what kind of world is that? And as robots grow in numbers and sophistication, what is this world becoming?

There are all kinds of reasons why engineers might want to make a robot appealing this way. For one thing, people are less likely to fear a robot that’s adorable. The people who make autonomous machines, for example, have a vested interest in manipulating public perception of them. If a Google self-driving car is cute, perhaps it will be perceived as more trustworthy. Google’s reported attempts to shed Boston Dynamics, the robotics company it bought in 2013, appear tied to this phenomenon: Bloomberg reported last week that a director of communications instructed colleagues to distance the company’s self-driving car project from Boston Dynamics’s recent foray into humanoid robotics.

It’s clear why Google might not want its adorable autonomous cars associated with powerful human-shaped robots. The infantilization of technology is a way of reinforcing social hierarchy: Humankind is clearly in charge, with sweet-looking technologies obviously beneath them.

When the U.S. military promotes video compilations of robots failing—buckling at the knees, bumping into walls, and tumbling over—at DARPA competitions, it is, several roboticists told me, clearly an attempt to make those robots likeable. (It’s also funny, and therefore disarming, like this absurd voiceover someone added to footage of a robot performing a series of tasks.) The same strategy was used in early publicity campaigns for the first computers. “People who had economic interest in computers had economic interest in making them appear as dumb as possible,” said Atkeson, from Carnegie Mellon. “That became the propaganda—that computers are stupid, that they only do what you tell them.”

What matters, in other words, is who is in control—and how well humans understand that autonomy occurs along a gradient. Increasingly, people are turning over everyday tasks to machines without necessarily realizing it. “People who are between 20 and 35, basically they’re surrounded by a soup of algorithms telling them everything from where to get Korean barbecue to who to date,” Markoff told me. “That’s a very subtle form of shifting control. It’s sort of soft fascism in a way, all watched over by these machines of loving grace. Why should we trust them to work in our interest? Are they working in our interest? No one thinks about that.”

“A society-wide discussion about autonomy is essential,” he added.

Source Article from http://www.cryptogon.com/?p=48508
