Facebook Suicide Prevention Tool Uses AI To Detect Users At Risk
Facebook's latest suicide prevention tool update no longer relies solely on user reports to detect at-risk users; instead, it uses artificial intelligence to detect when a user may be experiencing suicidal thoughts. The capability also covers Facebook Live, and the company plans to roll it out to users worldwide, with the exception of Europe. This is part of what Facebook calls proactive detection.
Facebook's suicide prevention measures have been around for a while and include features that let users report posts they believe indicate someone is at risk of self-harm. Such a report prompts the social network to connect that user with resources that can offer help, and although the tool has drawn some criticism, it has largely been a welcome addition to the service.
In its most recent announcement about the tool, Facebook explained that it is using pattern recognition technology to spot live videos or posts in which someone may be expressing suicidal thoughts. The company is also expanding the Community Operations team that reviews user reports and improving how it identifies first responders.
Proactive detection is a key way to spot these posts and potentially get the user help as soon as possible. Facebook says it is working to reduce false positives while improving its accuracy in identifying these posts. One signal that aids proactive detection is text in comments on a post that may indicate concern, such as the phrase "Are you okay?"
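To make the comment-signal idea concrete, here is a purely illustrative sketch of phrase-based flagging. Facebook's actual classifier is not public, so the patterns, function names, and scoring here are hypothetical stand-ins for the general pattern-recognition approach the article describes, not the company's implementation.

```python
# Illustrative sketch only: a toy flagger that looks for concerned-sounding
# phrases in comments (like "Are you okay?") and scores a post by how many
# of its comments match. All patterns and thresholds are hypothetical.
import re

CONCERN_PATTERNS = [
    r"\bare you ok(ay)?\b",        # the example phrase from the article
    r"\bcan i help\b",
    r"\bdo you need (to )?talk\b",
]

def flag_comment(text: str) -> bool:
    """Return True if a comment matches any concern pattern (case-insensitive)."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in CONCERN_PATTERNS)

def score_post(comments: list) -> float:
    """Fraction of comments that look concerned; a crude proxy for review priority."""
    if not comments:
        return 0.0
    return sum(flag_comment(c) for c in comments) / len(comments)

comments = ["Are you okay??", "nice photo", "do you need to talk?"]
print(score_post(comments))
```

A real system would use a trained machine-learning model rather than fixed regular expressions, which is precisely why Facebook's stated work on reducing false positives matters: hand-written patterns like these flag concerned friends' boilerplate as easily as genuine distress.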
As it stands, Facebook users can still directly report posts that concern them; those posts are routed to the company's review teams, which are said to "prioritize the most serious reports." Flagged users are offered various help resources, such as in-the-moment support options, helpline numbers, and templates for asking friends for help.
SOURCE: Facebook