This was originally published on Liberal America on March 2, 2017.
Facebook is using artificial intelligence (AI) to prevent suicides among its users. There are already tools in place to report a post by someone who is suicidal, but this will go even further. In a blog post, the company described the new features it will be offering.
They are testing a pattern-recognition algorithm that scans posts for language suggesting suicidal thoughts. The Community Operations team will then review the flagged posts to determine whether the person is in danger. The new features are launching in the United States only, with expansion to more countries to follow.
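To make the two-stage pipeline concrete, here is a deliberately simplified sketch in Python. Everything in it is hypothetical: Facebook has not published its model, and a real system would use machine-learned classifiers rather than a keyword list. The sketch only illustrates the flow the article describes, where an algorithm flags posts and humans review them.

```python
import re

# Hypothetical risk phrases for illustration only -- a real system
# learns these signals from data instead of hard-coding them.
RISK_PATTERNS = [
    r"\bwant to die\b",
    r"\bend it all\b",
    r"\bkill myself\b",
]

def flag_for_review(post: str) -> bool:
    """Stage 1: return True if a post matches any risk pattern,
    meaning it should be routed to human reviewers (the article's
    Community Operations team) for stage 2."""
    text = post.lower()
    return any(re.search(pattern, text) for pattern in RISK_PATTERNS)

posts = [
    "Had a great day at the beach!",
    "I just want to end it all",
]
# Only matching posts are queued for human review.
review_queue = [p for p in posts if flag_for_review(p)]
```

The key design point the article highlights is that the algorithm never acts alone: it only narrows the stream of posts down to a queue that people then judge.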
Facebook’s founder and CEO, Mark Zuckerberg, would also like to use AI to try to spot terrorist posts and other inflammatory content. He said:
“Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization.”
Such a system will take years to develop, because the algorithm must reliably distinguish a news story about an attack from actual propaganda.
I applaud the efforts to prevent suicide. If you or someone you know is having suicidal thoughts, please call the National Suicide Prevention Lifeline, 1-800-273-TALK. You can also use the Crisis Text Line by texting “HERE” to 741-741. Facebook is partnering with organizations around the world to help prevent suicides.