Twitter will enable automatic filtering of offensive replies
Twitter has unveiled two new features it is working on that would automatically filter potentially offensive replies to tweets and even prevent users who violate its rules from replying.
The new features, still in the concept phase, were announced by Twitter designer Paula Barcante on her personal account to gather community feedback. They aim to "keep potentially harmful content out of the replies" that users receive.
Users who wish to can proactively enable the features to filter potentially offensive interactions, though they are not intended to affect discussions, criticism, and debate.
The first concept, called 'Filter', hides potentially offensive replies to a user's tweets from that user and from others, though each reply remains visible to its author.
Users replying to a tweet would also receive a notification reminding them to avoid offensive and potentially harmful content, such as name-calling and insults, as well as repetitive or unwanted replies.
The second concept, 'Limit', works in a similar way. It applies to accounts with a history of rule-violating behavior and prevents them from replying to the tweets of users who have enabled it.
Twitter has explained that these features would operate fully automatically. To mitigate errors, the social network is studying ways for users to review the filtered content and restore incorrectly blocked replies.