Facebook’s New Suicide Detection A.I. Could Put Innocent People Behind Bars


By MassPrivateI

Imagine police knocking on your door because you posted a ‘troubling comment’ on a social media website.

Imagine a judge forcing you to be jailed, sorry, I meant hospitalized, because a computer program found your comment(s) ‘troubling’.

You can stop imagining; this is really happening.

A recent TechCrunch article warns that Facebook’s “Proactive Detection” artificial intelligence (A.I.) uses pattern recognition to flag posts, and that first responders will be contacted if the system deems a person’s comment(s) to express troubling suicidal thoughts.

Facebook also will use AI to prioritize particularly risky or urgent user reports so they’re more quickly addressed by moderators, and tools to instantly surface local language resources and first-responder contact info. (Source)

A private corporation deciding who goes to jail? What could possibly go wrong?

Facebook’s A.I. automatically contacts law enforcement 

Facebook is using pattern recognition and human moderators to decide when to contact law enforcement. In its announcement, the company says it is:

‘Using pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide, and to help respond to reports faster.’

‘Dedicating more reviewers from our Community Operations team to review reports of suicide or self harm.’ (Source)
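
Facebook has not published how its detection model actually works. As a rough illustration of what ‘pattern recognition’ over text can mean in practice, here is a minimal Python sketch of a flag-and-prioritize pipeline. Everything in it, the RISK_PATTERNS table, the weights, the FLAG_THRESHOLD cutoff, is hypothetical and invented for this example; it is a sketch of the general technique, not Facebook’s system.

import re
from dataclasses import dataclass

# Hypothetical phrase patterns and weights. Facebook's real model and
# training data are not public; this hand-built table just illustrates
# the general idea of scoring text against known risk phrases.
RISK_PATTERNS = {
    r"\bwant to die\b": 0.9,
    r"\bkill myself\b": 0.9,
    r"\bno reason to live\b": 0.7,
    r"\bgoodbye everyone\b": 0.4,
    r"\bare you ok(ay)?\b": 0.3,  # concerned replies can also raise a post's score
}

FLAG_THRESHOLD = 0.8  # assumed cutoff; at or above this, the post is queued for review

@dataclass
class Report:
    post_id: int
    text: str
    score: float

def score_post(text: str) -> float:
    """Sum the weights of every risk pattern found in the post."""
    lowered = text.lower()
    return sum(w for pat, w in RISK_PATTERNS.items() if re.search(pat, lowered))

def triage(posts: dict) -> list:
    """Flag posts at or above the threshold and sort them so the
    highest-scoring reports reach human moderators first."""
    flagged = [
        Report(pid, text, score)
        for pid, text in posts.items()
        if (score := score_post(text)) >= FLAG_THRESHOLD
    ]
    return sorted(flagged, key=lambda r: r.score, reverse=True)

if __name__ == "__main__":
    posts = {
        1: "Had a rough week, but the weekend hike helped.",
        2: "I want to die. There's no reason to live anymore.",
        3: "That horror movie made me want to die laughing!",  # a false positive
    }
    for report in triage(posts):
        print(f"post {report.post_id}: score {report.score:.1f} -> escalate to a moderator")

Note the third post in the example: ‘want to die laughing’ trips the same pattern as a genuine cry for help. That kind of false positive is exactly the weakness the junk-science critics cited below are pointing at.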

Facebook admits that they have asked the police to conduct more than ONE HUNDRED wellness checks on people.

Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts. This is in addition to reports we received from people in the Facebook community. (Source)

Why are police conducting wellness checks for Facebook? Are private corporations running police departments? 

Not only do social media users have to worry about a spying A.I., but now they have to worry about thousands of spying Facebook ‘Community Operations’ people who are all too willing to call the police.

Should we trust pattern recognition to determine who gets hospitalized or arrested?

A 2010 CBS News article warns that pattern recognition applied to human behavior is junk science. The article shows how companies use nine rules to convince law enforcement that pattern recognition is accurate.

A 2016 Forbes article used words like ‘nonsense’, ‘far-fetched’, ‘contrived’ and ‘smoke and mirrors’ to describe pattern recognition applied to human behavior.

Cookie-cutter ratios, even if scientifically derived, do more harm than good. Every person is different. Engagement is an individual and unique phenomenon. We are not widgets, nor do we conform to widget formulas. (Source)

Who cares if pattern recognition is junk science, right? At least Facebook is trying to save lives.

Wrong.

An A.I. that determines who might need to be hospitalized or incarcerated can and will be abused.

You can read more from MassPrivateI at his blog, where this article first appeared.
