Claire. Posted March 1, 2017 #1

Facebook is testing AI tools to help prevent suicide.

Facebook is trialing new tools to support its suicide prevention efforts. One approach uses artificial intelligence to identify concerning posts and make it easier for other people to report them. Facebook says it will use pattern recognition algorithms to spot posts that could indicate someone is suicidal, and will help their friends flag this content by making the option to report posts about “suicide and self injury” more prominent on posts considered potentially concerning. The algorithms are trained on posts that have previously been reported. Facebook will also use pattern recognition to flag posts “very likely to include thoughts of suicide” so that its community operations team can take action even if the post is not reported. The team will review flagged posts to see whether the person appears to need help and, if they deem it appropriate, provide resources directly.

Read more: New Scientist
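For anyone curious what "pattern recognition trained on previously reported posts" might look like in the simplest possible terms, here is a toy sketch. This is purely illustrative and is not Facebook's actual system; all the example posts, the tokenizer, and the scoring scheme are invented for the illustration. A real system would be far more sophisticated.

```python
# Toy sketch of pattern recognition trained on previously reported posts.
# NOT Facebook's actual system: the data, scoring, and smoothing are all
# invented here purely to illustrate the general idea.
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(flagged_posts, ordinary_posts):
    """Count how often each word appears in flagged vs. ordinary posts."""
    flagged = Counter(w for p in flagged_posts for w in tokenize(p))
    ordinary = Counter(w for p in ordinary_posts for w in tokenize(p))
    return flagged, ordinary

def concern_score(post, flagged, ordinary):
    """Higher score = post resembles previously flagged posts more."""
    score = 0.0
    for w in tokenize(post):
        # Smoothed fraction of this word's occurrences that were in
        # flagged posts; centered at 0 so unseen words contribute nothing.
        score += (flagged[w] + 1) / (flagged[w] + ordinary[w] + 2) - 0.5
    return score

# Hypothetical training data, standing in for user-reported posts.
flagged_examples = ["i cant go on anymore", "no reason to keep going"]
ordinary_examples = ["going to the game tonight", "great dinner with friends"]
f, o = train(flagged_examples, ordinary_examples)
print(concern_score("i cant keep going", f, o) >
      concern_score("great game tonight", f, o))  # the first scores higher
```

A system built this way would only surface posts above some threshold for human review, which is consistent with the article's description of a community operations team making the final call.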
and-then Posted March 1, 2017 #2

3 minutes ago, Claire. said:
so that its community operations team can take action even if the post is not reported.

On the whole, it seems a wonderful idea. My question is: what "action" will be taken? Does someone come knocking on the door? Is the force of law behind that knock? Can the home be searched for "dangerous" tools that might be used for suicide? Can the person be compelled to see a mental health professional? Finally, what if the handy, dandy little algorithm gets it wrong? I am concerned about these issues and wouldn't support such tech until they were answered.
Claire. Posted March 1, 2017 Author #3

12 minutes ago, and-then said:
On the whole, it seems a wonderful idea. My question is: what "action" will be taken? Does someone come knocking on the door? Is the force of law behind that knock? Can the home be searched for "dangerous" tools that might be used for suicide? Can the person be compelled to see a mental health professional? Finally, what if the handy, dandy little algorithm gets it wrong? I am concerned about these issues and wouldn't support such tech until they were answered.

Those are all good questions. Their solution thus far appears to be twofold: (1) make it easier for the person to contact a friend, and (2) point them to various resources that offer help and support. How, and if, it expands from there is anyone's guess. Your second concern, that the AI tools could get it all wrong, is a legitimate one. Hopefully the person on the receiving end has a sense of humor and is in no way stigmatized. But at the same time, one wonders if FB is crossing a line with this new technology they're exploring.