Facebook has been using artificial intelligence (AI) to help prevent suicide and self-harm. The company's head of safety says the AI system, which triggers alerts when activity on Facebook indicates potential risk, was used several times in its first year. While some people support this use of AI, others want the social media giant to be more transparent about how the system operates, and many worry that this development could lead to broader AI surveillance. Listen to hear more about how the AI system works and debate: Should AI be allowed to assess suicide risk on Facebook?
Story Length: 4:29
Socrative users can import these questions using the following code: SOC-1234