December 4, 2020

Facebook claims that only 0.1% of what its users see is hate speech


Facebook claims that it continues to refine its algorithms to detect and remove hate speech from its platform. In its first report on the global prevalence of hate speech on Facebook, published this Thursday, the social network asserts that it already removes 95% of these toxic posts before its users see or report them. And how prevalent is the hate speech that slips through among all the content circulating on Facebook? According to the company, it amounts to 0.1%.

“In other words, out of every 10,000 views of content on Facebook, 10 to 11 included hate speech,” Arcadiy Kantor, head of integrity at Facebook, said in a statement. “We specifically measure how much harmful content can be seen on Facebook and Instagram because the number of times each post is viewed is not evenly distributed. One piece of content could go viral and be seen by many people in a very short time, while other content could sit on the Internet for a long time and only be seen by a handful of people,” Kantor explains.

Since hate speech depends on language and cultural context, the company’s method for estimating its prevalence within the total volume of content is to send samples of randomly selected posts to teams of reviewers located in different parts of the world. In its report, Facebook specifically cites Spanish as one of the three languages in which automatic detection of hate speech has improved, along with English and Arabic. According to its figures, its automatic detection rate has improved by 70 percentage points since 2017.
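To make the arithmetic behind the 0.1% figure concrete, here is a minimal sketch of a view-weighted prevalence estimate, assuming a random sample of content views labeled by reviewers; the data and function names are hypothetical, and Facebook's actual measurement pipeline is not public.

```python
# Minimal sketch: prevalence is measured over content *views*, not posts,
# so a viral post counts once for every time it is seen.
# All data below is hypothetical.

def hate_speech_prevalence(sampled_views):
    """Return the share of sampled views labeled as hate speech by reviewers."""
    flagged = sum(1 for view in sampled_views if view["is_hate_speech"])
    return flagged / len(sampled_views)

# Example: 10,000 sampled views, 10 of which reviewers labeled as hate speech.
sample = [{"is_hate_speech": i < 10} for i in range(10_000)]

print(f"{hate_speech_prevalence(sample):.2%}")  # -> 0.10%, i.e. 10 per 10,000 views
```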

However, Facebook’s algorithms do not simply make this content disappear. In most cases it is quarantined to be reviewed by a team of human moderators. elDiario.es has reported on the harsh conditions in which these people carry out their work, forced to determine in 30 seconds whether a piece of content violates the social network’s internal rules. In the moderation centers, the 35,000 workers Facebook has dedicated to this task around the world view the mutilations, insults, drugs or sexual abuse that the algorithms flag, and decide whether it can stay on the platform or not. Post-traumatic stress leave is common.

Facebook claims that these teams of moderators are able to review content in 50 languages. The company is expanding this capacity after the UN concluded that the social network played a key role in spreading calls for violence against the Rohingya in Myanmar. Following the United Nations investigation, Facebook acknowledged that it was unable to stop the avalanche because it did not have enough staff capable of reviewing content in Burmese, the country’s official language.

1,300 requests for information from Spain

Facebook has also published this Thursday its semi-annual transparency report, which details the requests made by states to access information about the platform’s users. In the first six months of 2020, Spain submitted 1,353 requests to the company concerning 2,228 users. Facebook disclosed data in 63% of cases.

98.9% of these requests for information from Spain were part of legal proceedings, while 15 were classified as “emergency procedures”. Spain has mechanisms for making this type of administrative request to digital platforms, such as the urgent complaint channel of the Spanish Data Protection Agency, whose mission is to stop videos, like that of the victim in the “La Manada” case, before they go viral.
