Facebook has announced an AI-powered model that will identify offensive comments. It will work in conjunction with human moderators.

The social network Facebook is testing an artificial intelligence (AI) model to identify negative comments, including in closed groups.

Facebook will introduce several new software tools to help the more than 70 million people who lead and moderate groups on the platform. Facebook, which has 2.85 billion monthly users, said at the end of last year that more than 1.8 billion people are active in groups each month and that the platform hosts tens of millions of active groups in total.

Among the new tools, AI will decide when to send “conflict alerts” to those who lead a group. Alerts will be sent to administrators if the AI determines that a conversation in their group is “controversial or unhealthy,” the company said.

Over the years, technology platforms have relied more and more on artificial intelligence to identify problematic content on the internet. Researchers note that such tools can be useful when a site carries so much content that human moderators cannot review all of it.

But AI can err when it comes to the subtleties of communication and context. The methods behind AI-based moderation systems are also rarely disclosed.

A Facebook spokesperson noted that the company’s AI will use several signals to determine when to send a conflict alert, including how quickly members reply to one another and the length of comments on a post. Some administrators have already set up keyword alerts, which can identify topics that could lead to offensive comments, the spokesperson said.
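To make the signal-based approach concrete, here is a minimal sketch of how a conflict-alert heuristic might combine signals like reply speed, comment length, and administrator-configured keywords. Everything in it is an assumption for illustration: the `Comment` structure, the thresholds, and the keyword list are hypothetical, not Facebook’s actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# Hypothetical keyword list a group administrator might configure;
# Facebook's real signals and thresholds are not public.
ADMIN_KEYWORDS = {"scam", "idiot", "spam"}

@dataclass
class Comment:
    author: str
    text: str
    created_at: datetime

def should_send_conflict_alert(comments: List[Comment],
                               reply_gap_seconds: float = 30.0,
                               min_burst: int = 10,
                               max_avg_length: int = 40) -> bool:
    """Heuristic sketch: flag a thread when replies arrive in rapid,
    terse bursts or contain admin-configured keywords."""
    if len(comments) < 2:
        return False

    # Signal 1: how quickly members are replying to one another.
    ordered = sorted(comments, key=lambda c: c.created_at)
    gaps = [
        (later.created_at - earlier.created_at).total_seconds()
        for earlier, later in zip(ordered, ordered[1:])
    ]
    rapid_fire = (
        len(comments) >= min_burst
        and sum(gaps) / len(gaps) < reply_gap_seconds
    )

    # Signal 2: many short comments can indicate a back-and-forth argument.
    avg_length = sum(len(c.text) for c in comments) / len(comments)
    terse_thread = avg_length < max_avg_length

    # Signal 3: keyword alerts configured by the group administrator.
    keyword_hit = any(
        word in ADMIN_KEYWORDS
        for c in comments
        for word in c.text.lower().split()
    )

    return (rapid_fire and terse_thread) or keyword_hit
```

A heuristic like this would only surface threads for review, in line with the announcement that the AI is meant to work alongside human moderators rather than replace them.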