Facebook Turns to AI to Help Prevent Suicides

Facebook is turning to artificial intelligence to detect if someone might be contemplating suicide.

Facebook already has mechanisms for flagging posts from people thinking about harming themselves. The new feature is intended to detect such posts before anyone reports them.

The service will scan posts and live video using a technique called pattern recognition. For example, comments from friends such as "Are you OK?" can signal that the poster may be having suicidal thoughts.
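Facebook has not published how its pattern recognition works. As a rough sketch only, a text-pattern flagger could look like the Python below; the phrase list, function name, and example post are all hypothetical assumptions, not Facebook's actual system.

```python
import re

# Hypothetical signal phrases; Facebook has not disclosed its real patterns.
CONCERN_PATTERNS = [
    re.compile(r"\bare you ok(ay)?\b", re.IGNORECASE),
    re.compile(r"\bcan i help\b", re.IGNORECASE),
    re.compile(r"\bi'?m worried about you\b", re.IGNORECASE),
]

def flag_for_review(post_text: str, comments: list[str]) -> bool:
    """Return True if the post or any friend comment matches a concern pattern."""
    texts = [post_text] + comments
    return any(p.search(t) for t in texts for p in CONCERN_PATTERNS)

# Example: a friend's comment triggers the flag for human review.
print(flag_for_review("feeling really low lately", ["Are you OK? Message me."]))  # True
```

In practice a learned model rather than a fixed phrase list would do the matching, but the flow is the same: detect a signal, then route the post to human review.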

Facebook has already been testing the feature in the U.S. and is making it available in most other countries. The European Union is excluded, though; Facebook won't say why.

The company is also using AI to prioritize the order in which flagged posts are sent to its human moderators, so that they can quickly alert local authorities in the most urgent cases.
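Facebook has not described how that prioritization works. As an illustrative sketch only, ordering flagged posts by an urgency score can be done with a priority queue; the class, field names, and scores below are assumptions, not Facebook's implementation.

```python
import heapq

class ReviewQueue:
    """Max-priority queue: the highest-risk flagged posts come out first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # insertion counter breaks ties between equal scores

    def add(self, post_id: str, risk_score: float) -> None:
        # heapq is a min-heap, so negate the score to pop the largest first.
        heapq.heappush(self._heap, (-risk_score, self._counter, post_id))
        self._counter += 1

    def next_for_review(self) -> str:
        return heapq.heappop(self._heap)[2]

# Hypothetical usage: the higher-risk post jumps the queue.
queue = ReviewQueue()
queue.add("post-123", risk_score=0.42)
queue.add("post-456", risk_score=0.91)
print(queue.next_for_review())  # post-456
```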