
Facebook Testing AI Tools to Prevent Suicide


Claire.


Facebook is testing AI tools to help prevent suicide.

Facebook is trialing new tools to help with suicide prevention efforts. One approach will use artificial intelligence to identify concerning posts and make it easier for other people to report them.

Facebook says it will use pattern recognition algorithms to spot posts that could indicate someone is suicidal, and will help their friends flag this content by making the option to report posts about “suicide and self injury” more prominent on posts considered potentially concerning. The algorithms are trained on posts that have previously been reported.

It will also use pattern recognition to flag posts “very likely to include thoughts of suicide” so that its community operations team can take action even if the post is not reported. The team will review posts to see if the person appears to be in need of help and provide resources directly if they deem it appropriate.

Read more: New Scientist
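For anyone curious what "pattern recognition algorithms trained on previously reported posts" might mean in principle, here is a toy sketch of that idea: a tiny Naive Bayes-style scorer that learns word frequencies from posts labeled reported vs. not reported, then scores new posts by how much their wording resembles the reported set. This is purely illustrative, using made-up training examples; Facebook's actual models and training data are not public.

```python
from collections import Counter
import math

def tokenize(text):
    return [w.strip(".,!?\"'").lower() for w in text.split()]

class ReportedPostClassifier:
    """Toy sketch of pattern recognition trained on reported posts.
    Learns per-word counts for reported vs. unreported posts, then
    scores new text by a smoothed log-likelihood ratio."""

    def __init__(self):
        self.counts = {True: Counter(), False: Counter()}
        self.totals = {True: 0, False: 0}

    def train(self, text, reported):
        for w in tokenize(text):
            self.counts[reported][w] += 1
            self.totals[reported] += 1

    def score(self, text):
        # Positive score: wording resembles previously reported posts.
        # Add-one smoothing keeps unseen words from zeroing anything out.
        s = 0.0
        for w in tokenize(text):
            p_rep = (self.counts[True][w] + 1) / (self.totals[True] + 2)
            p_ok = (self.counts[False][w] + 1) / (self.totals[False] + 2)
            s += math.log(p_rep / p_ok)
        return s

# Hypothetical training examples, not real data
clf = ReportedPostClassifier()
clf.train("i feel hopeless and want to end it", True)
clf.train("nobody would miss me if i was gone", True)
clf.train("great game last night, what a finish", False)
clf.train("anyone have a good pasta recipe", False)

# A post echoing reported-post wording scores higher than a neutral one
print(clf.score("i feel so hopeless") > clf.score("great pasta recipe"))
```

A real system would use far richer features and human review on top, but the core loop is the same: learn from what was reported before, then rank new posts by similarity.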

3 minutes ago, Claire. said:

so that its community operations team can take action even if the post is not reported.

On the whole, it seems a wonderful idea. My question is, what "action" will be taken? Does someone come knocking on the door? Is the force of law behind that knock? Can the home be searched for "dangerous" tools that might be used for suicide? Can the person be compelled to see a mental health professional? Finally, what if the handy, dandy little algorithm got it wrong? I am concerned about these issues and wouldn't support such tech until they were answered.


12 minutes ago, and then said:

On the whole, it seems a wonderful idea. My question is, what "action" will be taken? Does someone come knocking on the door? Is the force of law behind that knock? Can the home be searched for "dangerous" tools that might be used for suicide? Can the person be compelled to see a mental health professional? Finally, what if the handy, dandy little algorithm got it wrong? I am concerned about these issues and wouldn't support such tech until they were answered.

Those are all good questions. Their solution thus far appears to be twofold: (1) make it easier for that person to contact a friend, and (2) alert them to various resources that offer help and support. How, and if, it expands from there is anyone's guess.

Your second concern, that of the AI tools getting it all wrong, is a legitimate one. Hopefully, the person at the receiving end has a sense of humor and is in no way stigmatized. But at the same time, one wonders if FB is crossing a line with this new technology they're exploring.

