Practical guide to disinformation in seven steps


Disinformation, now known as false news or, to use the Anglicism directly, fake news, has contaminated public discourse for centuries, even millennia. In our era it has been wielded as a weapon by terrorists, mafia-style harassers and pressure groups that try to destabilize electoral processes in order to widen social fissures, subvert democracy and, in some cases, prop up authoritarian regimes.

The difference from propaganda under the Third Reich or before the French Revolution is that today there are powerful tools with well-known names: Facebook, Twitter and Google, giants under pressure to take action. Drawing on the Bloomberg news agency, here is a brief review.

  • 1. What is disinformation?

It is usually defined as false content disseminated with the specific intention of deceiving, misleading or manipulating. It should not be confused with merely erroneous information, which, however mistaken it may be, does not carry the same intent.

Disinformation can take many forms: conventional news stories, tweets, or Facebook and Instagram posts; paid ads on social networks; and even tendentiously edited recordings distributed on social networks or through messaging applications (especially WhatsApp).

The main creators of fake news have historically been states, which for the same reason have also been the ones responsible for controlling or blocking the channels that did not suit their message. In dictatorships the signs seem easier to identify, but the phenomenon is global and historical. The dissemination of images has been one of the technological developments that has done the most for disinformation, lending an appearance of infallibility to information. The images of a dying cormorant smeared in oil, broadcast and published during the 1991 Gulf War unleashed by Iraq's invasion of Kuwait, were a montage that fit like a glove into the story of ecoterrorism practiced by Saddam Hussein.

And it does not stop there. With the advance of technology, the past year has seen the emergence of so-called deepfakes: videos or audio clips in which, through machine learning, machines are able to put the face of a specific person on the body of another. It is shocking to watch Bill Hader turn into Tom Cruise or Arnold Schwarzenegger.

  • 2. What is different in the Internet era?

The main difference is evident: a brutal process of disintermediation has lowered the barriers to communication. With Facebook and Twitter, modern disinformation providers need only a computer or a smartphone and an Internet connection to reach a potentially huge audience. It is not even necessary to show your face: you can dress up as someone or something else, like a grassroots movement. Technology also permits the deployment of armies of people, known as trolls, and of so-called Internet bots (software that performs automated tasks quickly) to drive large-scale disinformation campaigns.

"Although there is a certain feeling of decentralization, propaganda is still in the hands of those who can really do it, whether they are states, governments or power groups," says Pablo Sapag, professor of the History of Propaganda at the Complutense University of Madrid and author of the book Syria in Perspective: From an internationally mediated crisis to the historic internal dilemma. "Propaganda is only effective if it is done professionally, and those with the capacity to propagandize are not individuals, they are organizations, whether state or otherwise."

This is what is known as state-sponsored trolling: governments deliberately generating waves of digital hatred to defame critical activists or journalists, suppress dissent, undermine political opponents, spread lies and control public opinion, especially when the full power of a state gets down to work.

  • 3. Who is primarily responsible for the misinformation?

This year, researchers at the University of Oxford found evidence of "social media manipulation campaigns" by governments or political parties in 70 countries, up from just 28 countries in 2017. Facebook is, as expected, the main place where disinformation is spread. All eyes (including those of the Oxford researchers) are generally directed toward Russia, but the Oxford report notes that China has become "an important player in the world order of misinformation." Along with those two countries, five others (India, Iran, Pakistan, Saudi Arabia and Venezuela) have used Facebook and Twitter "to influence the global public."

  • 5. What can you tell me about China?

Both Twitter and Facebook revealed in August an information operation backed by the Chinese state and launched worldwide to delegitimize the pro-democracy movement in Hong Kong. On that occasion, Twitter said it had removed 936 accounts that were "deliberately and specifically attempting to sow political discord in Hong Kong." Facebook said it had found a similar operation backed by the Chinese government and had removed fake accounts. The company said it does not want its services "to be used to manipulate people."

  • 6. And since we are, what does Russia do?

A study by Rand Corp. on the conflict in eastern Ukraine, which has claimed about 13,000 lives since 2014, found that the Russian government under President Vladimir Putin conducted a sophisticated social media campaign that included fake news, Twitter bots, untrustworthy comments on websites and the promotion of hashtags to "mobilize support, spread misinformation and hate and try to destabilize the situation." Another Russian effort, already well known, took place during the 2016 U.S. presidential election campaign, reaching millions of American voters with false posts and advertisements that collected data and influenced the vote.

  • 7. How are social media companies responding?

Despite being the main beneficiaries, it has been legislators who have put pressure on the companies. Facebook and Google (a unit of Alphabet Inc.) have begun to require that political ads intended for publication in the United States and Europe reveal who is behind them. Google's YouTube division also claims to have adjusted the algorithms that decide which videos are suggested next, in order to limit recommendations of allegedly fake or inflammatory videos, a move it had resisted for years.

WhatsApp, a subsidiary of Facebook, limits to five the number of people or groups to whom a message can be forwarded. Its parent company says it spent 18 months preparing for the 2019 Indian elections: it blocked and deleted fake accounts, watched for intrusion attempts and partnered with external fact-checkers (although relatively few) to combat so-called fake news. Facebook also recently announced that it has developed artificial intelligence tools to help identify content that is abusive or that violates site policies. Following the March 15 massacre in Christchurch, New Zealand, Facebook, Google and Twitter signed a voluntary agreement with world leaders committing to combat hate speech online.

Finally, Facebook is working on a News tab, a news service curated by human beings, for which it is negotiating with the major US news outlets.
