Thu. Jul 18th, 2019

Facebook says it lacks videos of attacks like the one in New Zealand to prevent their dissemination | Technology



Facebook's artificial intelligence (AI) system for preventing the spread of harmful images was unable to identify the live broadcast of the Christchurch attack in New Zealand, in which 50 people were killed and several dozen injured. Guy Rosen, a vice president at the company, offered an explanation on Thursday: "AI systems are based on the data that trains them, which means thousands of examples of this content are needed to train a system capable of detecting certain types of text, images or videos. This has worked very well with subjects like nudity, terrorist propaganda and graphic violence, of which there are a large number of examples we can use to train our systems." To achieve the same result in situations like New Zealand, the executive adds, "we have to provide our systems with a large volume of data of this type of content, which is difficult because these events [the attack] are fortunately rare."
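Rosen's point about training data can be illustrated with a deliberately naive toy: a bag-of-words detector (an invented example, nothing like Facebook's actual classifiers) only flags content whose vocabulary it has already seen, so its coverage grows with the number of training examples it is given.

```python
# Toy sketch of the data-hunger Rosen describes: a trivial detector
# that only recognizes content resembling its training examples.
# All phrases below are invented placeholders, not real training data.

def train(examples):
    """Collect the vocabulary seen across training examples."""
    vocab = set()
    for text in examples:
        vocab.update(text.lower().split())
    return vocab

def flags(text, vocab, threshold=0.5):
    """Flag text if at least half of its words were seen in training."""
    words = text.lower().split()
    hits = sum(w in vocab for w in words)
    return hits / len(words) >= threshold

# A small training set misses the new content entirely...
small = train(["spam offer", "free spam"])
# ...while a larger one happens to cover its vocabulary.
large = train(["spam offer", "free spam", "cheap pills", "win prize now"])

unseen = "win cheap prize"
print(flags(unseen, small), flags(unseen, large))  # prints: False True
```

A real content classifier generalizes far better than this, but the underlying constraint is the same one Rosen cites: without enough examples of a kind of content, the system has nothing to match new instances against.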


Rosen acknowledges the "incredible importance" of fighting terrorism on the social network and admits that the system "will never be perfect". For that reason he appeals to people as "part of the equation", both the staff responsible for reviewing content and the users who report inappropriate material.

The video of the attack was removed within minutes, but it was viewed 200 times during the live broadcast without a single user reporting it. After the crime, the full recording was played 4,000 times before Facebook was sent a link to a copy of it, 29 minutes after the killing began and 12 minutes after the live broadcast ended. Some 300,000 copies were removed.

Facebook Live, which lets users broadcast events in real time, has also been a gateway for inappropriate content that Facebook tries to control as a priority with its systems. The main targets of prevention are images related to suicide and harmful acts such as self-harm.

Frame of the broadcast made by the attacker of the Al Noor Mosque, in Christchurch (New Zealand). AFP

800 different formats

The New Zealand experience has led the company to review its control and reporting protocols in order to speed up decision-making.

Facebook now acknowledges the difficulty of preventing the video from spreading through other social networks and file-sharing platforms. These are copies that users around the world obtained through Internet applications or services, by capturing the video from web pages or recording their phone screens.

Awareness of the video's existence has driven many people to search for it, Rosen admits, who warns that his company prohibits the dissemination of these images.

During the first 24 hours, Facebook removed 1.2 million videos, but the vice president of the social network acknowledges that groups keep uploading new copies to the Internet, in versions edited to evade automated detection. Up to 800 format variants have been registered. In these circumstances, audio identification technology has been used to refine searches.
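Facebook has not disclosed how its matching systems work, but the general technique behind catching re-encoded copies is perceptual hashing: a compact fingerprint that barely changes when a frame is slightly altered. A minimal sketch of one such fingerprint, the difference hash (dHash), using invented pixel data:

```python
# Illustrative sketch only: a toy "difference hash" (dHash) over a
# grayscale pixel grid. Near-duplicate frames yield hashes that differ
# in few bits, so re-encoded copies can be matched by Hamming distance.
# The pixel data below is invented for the example.

def dhash(pixels, width=9, height=8):
    """64-bit hash: one bit per horizontal neighbor comparison
    in a row-major width x height grid of values 0-255."""
    bits = 0
    for row in range(height):
        for col in range(width - 1):
            left = pixels[row * width + col]
            right = pixels[row * width + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Original frame: a simple brightness gradient.
original = [(col * 28) % 256 for row in range(8) for col in range(9)]
# "Re-encoded" copy: same frame with slight brightness noise.
variant = [min(255, p + (i % 3)) for i, p in enumerate(original)]
# Unrelated frame: the gradient reversed.
unrelated = list(reversed(original))

d_same = hamming(dhash(original), dhash(variant))
d_diff = hamming(dhash(original), dhash(unrelated))
print(d_same, d_diff)  # near-duplicate distance is far smaller
```

Edits that change the picture's structure (cropping, overlays, re-filming a screen) defeat simple hashes like this, which is consistent with the 800 variants the article mentions and with falling back on audio fingerprinting when the visual match fails.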




