Using artificial intelligence, the social network will track conflicts and insults in comments and report them to community owners. Administrators will also receive new tools to reduce toxicity in discussions
Facebook will begin testing a new system to reduce toxicity in community comments, the company announced in a post on its blog.
The system, called Conflict Alert, will use artificial intelligence to analyze the comments users leave in Facebook communities.
If the system detects disputes or potentially contentious exchanges in the comments, it will flag them and send a notification to the community administrator. In addition, comments containing obscene or otherwise inappropriate language will be flagged with a moderation alert, and information about them will be passed to the group's moderators.
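Facebook has not published any implementation details for Conflict Alert. Purely as an illustration of the flag-and-notify behavior described above, here is a minimal Python sketch; the classifiers, thresholds, and function names (score_conflict, score_obscenity, review_comment) are all assumptions, and the keyword heuristics stand in for whatever models Facebook actually uses.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative thresholds -- the announcement does not say how comments are scored.
CONFLICT_THRESHOLD = 0.7
OBSCENITY_THRESHOLD = 0.5

@dataclass
class Comment:
    author_id: int
    post_id: int
    text: str

def score_conflict(text: str) -> float:
    """Stand-in for an ML model rating how contentious a comment is.
    A crude keyword heuristic is used here for illustration only."""
    hostile = {"idiot", "shut up", "liar"}
    return 1.0 if any(w in text.lower() for w in hostile) else 0.0

def score_obscenity(text: str) -> float:
    """Stand-in for a profanity / inappropriate-language classifier."""
    profane = {"damn", "hell"}
    return 1.0 if any(w in text.lower() for w in profane) else 0.0

def review_comment(comment: Comment, notify: Callable[[str, str], None]) -> None:
    """Flag contentious or obscene comments and alert group staff,
    mirroring the two notification paths the announcement describes."""
    if score_conflict(comment.text) >= CONFLICT_THRESHOLD:
        notify("admin", f"Possible conflict in comments on post {comment.post_id}")
    if score_obscenity(comment.text) >= OBSCENITY_THRESHOLD:
        notify("moderator", f"Moderation alert: comment by user {comment.author_id}")

# Example usage:
review_comment(Comment(42, 7, "You are a liar!"),
               lambda role, msg: print(role, "->", msg))
```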
Community administrators themselves will be given new tools that, according to the developers, will lower the level of toxicity or defuse conflicts entirely. In particular, administrators will be able to limit how often a particular user can comment, as well as how often comments can be left on certain posts.
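The announcement does not specify how these frequency limits work. Assuming "frequency" means a cap on comments per time window, a minimal sliding-window rate limiter keyed on (user, post) might look like the sketch below; the class name, window size, and cap are all hypothetical.

```python
import time
from collections import defaultdict, deque

class CommentRateLimiter:
    """Sliding-window limiter: at most `max_comments` comments per
    `window_seconds` for each (user, post) pair. The concrete numbers
    are illustrative; the announcement gives none."""

    def __init__(self, max_comments: int = 3, window_seconds: float = 600.0):
        self.max_comments = max_comments
        self.window_seconds = window_seconds
        self._history: dict[tuple[int, int], deque[float]] = defaultdict(deque)

    def allow(self, user_id: int, post_id: int) -> bool:
        now = time.monotonic()
        window = self._history[(user_id, post_id)]
        # Drop timestamps that have fallen out of the window.
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        if len(window) >= self.max_comments:
            return False  # user must wait before commenting on this post again
        window.append(now)
        return True

# Example usage:
limiter = CommentRateLimiter(max_comments=2)
print(limiter.allow(1, 99))  # True
print(limiter.allow(1, 99))  # True
print(limiter.allow(1, 99))  # False -- over the limit for this post
```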
Alongside the new comment-analysis system, Facebook group administrators will receive tools to "shape and enhance the culture of communities." They will gain access to per-member statistics showing each member's number of posts and comments, how many of their messages were deleted, and how often they have been blocked.
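The blog post lists these metrics but not how they are represented. One plausible shape for such a per-member record, with every field name an assumption rather than Facebook's actual schema, is:

```python
from dataclasses import dataclass

@dataclass
class MemberActivity:
    """Per-member statistics of the kind the announcement describes.
    Field names are assumptions, not Facebook's actual schema."""
    member_id: int
    posts_created: int = 0
    comments_created: int = 0
    messages_deleted: int = 0   # removed by moderators
    times_blocked: int = 0      # blocks applied to this member
```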
In 2020 and 2021, Facebook censored posts and comments related to politics and the coronavirus pandemic. In particular, the social network banned content featuring speeches by former President Donald Trump.