Hate speech is easy to find on social networks


Shortly after the synagogue shooting in Pittsburgh, I noticed that the word "Jews" was trending on Twitter. As a researcher and educator who studies social networks, I worried that the violence would spread online, as it has in the past.

The alleged attacker's activity on the social network Gab has drawn attention to that network's role as a hate-filled alternative to more mainstream options such as Facebook and Twitter. The latter are among the social media platforms that have promised to fight hate speech and abuse on their sites.

However, when I explored online activity after the shooting, it quickly became clear to me that the problem is not confined to places like Gab. On the contrary, hate speech remains easy to find on mainstream social networks, including Twitter. I have also identified some additional steps the company could take.

An incomplete response to new hate terms

I expected new threats related to the Pittsburgh shooting to appear online, and there were signs that it was already happening. In a recent anti-Semitic attack, Nation of Islam leader Louis Farrakhan used the word "termites" to describe Jews. I searched for that term, knowing that racists were likely to use it as a code word to avoid detection when expressing anti-Semitism.


Twitter had not suspended Farrakhan's account after yet another of his anti-Semitic statements, and Twitter's search function automatically suggested that I might be looking for the phrase "termite eats bullets." That turns the Twitter search box into a billboard for hate speech.
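Twitter has not said publicly how its search suggestions are generated, but a minimal sketch of a frequency-ranked autocomplete illustrates the risk: unless suggestions are checked against a list of flagged phrases, whatever is searched most often, hateful or not, rises to the top. The query log, denylist and suggest function below are hypothetical stand-ins, not Twitter's actual system.

from collections import Counter

# Hypothetical search log; a real platform would mine aggregated query data.
query_log = [
    "termites in house",
    "termite treatment",
    "termite eats bullets",
    "termite eats bullets",
    "termite eats bullets",
    "termite inspection",
]

# Hypothetical denylist of phrases flagged as hateful.
DENYLIST = {"termite eats bullets"}

def suggest(prefix, log, blocked=frozenset(), k=3):
    """Rank queries matching the prefix by frequency, skipping blocked phrases."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common() if q not in blocked][:k]

print(suggest("termite", query_log))           # the hateful phrase ranks first
print(suggest("termite", query_log, DENYLIST)) # the denylist suppresses it

The point of the sketch is that suppressing a suggestion is a one-line filter once a phrase has been flagged; the hard part is keeping the list current as new code words like "termites" emerge.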

However, the company had apparently adjusted some of its internal algorithms, because no tweets containing anti-Semitic uses of the word "termite" appeared in my search results.

Messages that have gone unnoticed for years

As I continued searching for hate speech and calls for violence against Jews, I found even more disturbing evidence of shortcomings in Twitter's content moderation system. Following the 2016 U.S. presidential election and the discovery that Twitter had been used to influence it, the company said it was investing in machine learning to "detect and mitigate the effect on users of fake, coordinated and automated account activity." Based on what I found, these systems have not detected even clear, direct violent threats and hate speech that have been on the site for years.

  ["Let'skillJewsandkillthemforfun"AsimpleexampleofhatetweetthathasbeenallowedtoremainonTwitterformorethanfouryearsScreencapturedbyJenniferGrygielCCBY-ND[“Matemosjudíosymatémoslospordiversión”UnsencilloejemplodetuitdeodioalquesehapermitidopermanecerenTwitterdurantemásdecuatroañosPantallacapturadaporJenniferGrygielCCBY-ND
["Let'skillJewsandkillthemforfun"AsimpleexampleofhatetweetthathasbeenallowedtoremainonTwitterformorethanfouryearsScreencapturedbyJenniferGrygielCCBY-ND[“Matemosjudíosymatémoslospordiversión”UnsencilloejemplodetuitdeodioalquesehapermitidopermanecerenTwitterdurantemásdecuatroañosPantallacapturadaporJenniferGrygielCCBY-ND

When I reported a tweet posted in 2014 that proposed killing Jews "for fun," Twitter took it down the same day, but its generic automated notice gave no explanation of why the tweet had remained untouched for more than four years.

Hate deceives the system

When I reviewed hateful tweets that had gone undetected for years, I noticed that many contained no text, only an image. Without text, tweets are harder to spot, both for users and for the algorithms Twitter uses to identify hate. But users who deliberately seek out hate speech on Twitter can then scroll through the activity of the accounts they find, seeing even more hateful messages.

[Image text: "Hitlerica, gas them all"] Images containing hateful text are harder for algorithms to identify, but no less dangerous or harmful. Screenshot by Jennifer Grygiel, CC BY-ND
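This gap is easy to see in miniature. Below is a minimal sketch, not Twitter's actual pipeline, of why a text-based keyword filter misses image-only posts; the tweets, term list and file name are hypothetical.

# Sketch of a naive text-only hate filter (hypothetical terms and tweets).
HATE_TERMS = {"kill", "burn"}

def flag_by_text(tweet: dict) -> bool:
    """Flag a tweet only if its text contains a listed term."""
    words = tweet.get("text", "").lower().split()
    return any(term in words for term in HATE_TERMS)

tweets = [
    {"text": "kill them all", "image": None},   # caught: the text matches
    {"text": "", "image": "hateful_meme.png"},  # missed: nothing to scan
]

for t in tweets:
    print(t["image"] or t["text"], "->",
          "flagged" if flag_by_text(t) else "not flagged")

Catching the second post would require an extra step, such as running optical character recognition or an image classifier on the attachment before applying the same check, which is presumably part of why image-based hate is slower to be detected.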

Twitter seems to be aware of this problem: users who report a tweet are encouraged to review other tweets from the same account and to submit more content for review. But that still leaves room for some of it to go undetected.

Help for technology giants in trouble

As I found tweets that, in my opinion, violated Twitter's policies, I reported them. Most were removed quickly, some in less than an hour. But other hateful messages took several days to disappear, and a few text-based tweets have still not been removed despite clearly violating Twitter's policies. That shows the company's content review process is not consistent.

["Arabs, stop fucking around and kill …. in Las Vegas, you fucking incompetents" "Anti-Zionists, focus on killing ….. Stop fucking around. Lives are being lost"] Tweets from January 2015 urging the killing of a specific person, reported on October 24 and 25, 2018, were still visible on October 31, 2018 (edited to obscure the person's name). Screenshot by Jennifer Grygiel, CC BY-ND

Twitter may appear to be getting better at removing harmful content, and it does take down a great deal of content and memes and suspend accounts, but much of that activity is unrelated to hate speech. Instead, much of Twitter's attention has focused on what the company calls "coordinated manipulation," such as bots and networks of fake profiles run by government propaganda units.

In my opinion, the company could take a significant step by enlisting the help of the public, along with researchers and experts like my collaborators and me, to identify hateful content. It is common for technology companies, Twitter included, to pay people who find vulnerabilities in their software. Yet all the company offers users who report problematic content is an automatically generated message saying "thanks." The disparity between how Twitter handles code problems and content reports sends the message that the company prioritizes its technology over its community.

Instead, Twitter could pay users to report content that violates its community guidelines, offering financial rewards for pointing out the social vulnerabilities in its system, just as if those users were helping it identify software or hardware problems. A Facebook executive expressed concern that this potential solution could backfire and generate even more hate on the network, but I believe a rewards program could be structured and designed to avoid that problem.

There is much to do

Twitter's problems go beyond what is posted directly on its own site. Those who post hate speech often exploit another key Twitter feature: the ability to include links to other internet content. That function is central to how people use Twitter, sharing content of mutual interest across the network. But it is also a way of spreading hate speech.

For example, a tweet can look totally innocent, saying "This is funny" and including a link. But the link, pointing to content not hosted on Twitter's servers, opens onto a hate-filled message.

["The Jews will burn @odioalosnegros" "Adof Hitler @quemajudíos", various versions] A surprising number of Twitter profiles have names and handles containing hateful messages. Screenshot by Jennifer Grygiel, CC BY-ND

In addition, Twitter's content moderation system only lets users report hateful and threatening tweets, not accounts whose profiles contain similar messages. Some of these accounts, with photos of Adolf Hitler and names and handles that call for burning Jews, do not even tweet or follow other users. They sometimes seem to exist solely to be found when people search for words in their profiles, turning the search box into yet another delivery system for hate. Although it is impossible to know for certain, these accounts may also be used to communicate with others on Twitter via direct message, using the platform as a covert communication channel.

Without tweets or other public activity, it is impossible for users to report these accounts through the standard content reporting system. But they are just as offensive and dangerous, and they need to be evaluated and moderated just like any other content on the site. As those seeking to spread hate become more skilled, Twitter's community guidelines, and more importantly its efforts to enforce them, must be updated and kept current.

If social networks want to avoid becoming, or remaining, vectors for information warfare and plagues of hateful ideas and memes, they need to be far more active and, at a minimum, have their thousands of full-time content moderation employees do what one professor managed over the course of a weekend.

Jennifer Grygiel is Associate Professor of Communications, Syracuse University

Disclosure statement: Jennifer Grygiel owns a small portfolio of shares in the following social media companies: FB, GOOG, TWTR, BABA, LNKD, YY and SNAP.

This article was originally published on The Conversation. Read the original.



