
Google wants to use AI and targeting to control the spread of extremism-related content on YouTube

With extremist content increasingly showing up online, and tech giants under pressure to fight it, video streaming giant YouTube has added four more steps to its plan, which should help not only identify and remove such content but also prevent it from being uploaded in the first place.

Kent Walker, Google’s General Counsel, explained: “Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all. Google and YouTube are committed to being part of the solution.”

In a blog post, which also appeared in The Financial Times, he said that Google was “working with government, law enforcement and civil society groups to tackle the problem of extremism online.”

While Google has been working to identify and take down extremism- and terrorism-related content for years, the company believes more needs to be done on this front, and that it needs to be done now.

Walker went on to explain that the current review process involves “thousands of people around the world” who sift through content daily. Google’s engineers have also developed technology that prevents the re-upload of known terrorist content, using what Google calls “image-matching technology”.
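Google has not published how its image-matching works, but the general idea of blocking re-uploads by fingerprinting removed content can be sketched as follows. This is an illustrative assumption only: real systems use perceptual hashes that survive re-encoding and cropping, whereas a cryptographic hash is used here just to keep the example self-contained. All function names are hypothetical.

```python
import hashlib

# Fingerprints of content that human reviewers already removed.
known_fingerprints = set()

def fingerprint(frame_bytes: bytes) -> str:
    """Compute a fingerprint for video data (illustrative: a real system
    would use a perceptual hash robust to re-encoding, not SHA-256)."""
    return hashlib.sha256(frame_bytes).hexdigest()

def register_removed_content(frame_bytes: bytes) -> None:
    """Record the fingerprint of content confirmed as terrorist material."""
    known_fingerprints.add(fingerprint(frame_bytes))

def is_reupload(frame_bytes: bytes) -> bool:
    """Check whether an incoming upload matches previously removed content."""
    return fingerprint(frame_bytes) in known_fingerprints

# Usage: once a video is taken down, any byte-identical re-upload is caught
# before it goes live.
register_removed_content(b"frame-data-of-removed-video")
print(is_reupload(b"frame-data-of-removed-video"))  # True
print(is_reupload(b"some-new-frame-data"))          # False
```

The design point the sketch captures is that matching happens at upload time, so known material never reappears, rather than waiting for it to be flagged again after publication.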

The four additional steps:

First, Google will increase the use of technology to help identify extremism- and terrorism-related content on YouTube. This is harder than it sounds, because the same footage could appear in a news broadcast, where it serves a legitimate informative purpose. The new technology uses video analysis models to identify and differentiate such content, and according to Google it has already been used to “assess more than 50 percent of the terrorism-related content” pulled down over the past six months.
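The differentiation problem described above, where identical footage can be propaganda in one context and journalism in another, is why model scores alone cannot decide the outcome. A minimal sketch of how such scores might be routed, with thresholds and labels that are purely illustrative assumptions and not YouTube's actual pipeline:

```python
# Hypothetical routing of a video-analysis model's score in [0, 1].
# Thresholds are assumptions for illustration only.
REMOVE_THRESHOLD = 0.9   # near-certain match with terrorist propaganda
REVIEW_THRESHOLD = 0.5   # ambiguous; needs a human decision

def route(score: float, is_news_context: bool) -> str:
    """Decide what happens to a video given a model confidence score."""
    if is_news_context:
        # The same footage inside a news broadcast can be legitimate,
        # so context forces a human decision regardless of the score.
        return "human_review"
    if score >= REMOVE_THRESHOLD:
        return "flag_for_removal"
    if score >= REVIEW_THRESHOLD:
        return "human_review"
    return "keep"

print(route(0.95, is_news_context=False))  # flag_for_removal
print(route(0.95, is_news_context=True))   # human_review
print(route(0.20, is_news_context=False))  # keep
```

The point of the sketch is that automation narrows the queue, while borderline and context-sensitive cases still reach the human reviewers described in the next step.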

The second step concerns YouTube’s Trusted Flagger programme. While technology can help identify problematic videos, human experts play a vital role in deciding between “violent propaganda and religious or newsworthy speech”. Walker explained that Trusted Flagger reports are accurate 90 percent of the time, which is why Google will not only identify new areas of concern but also add 50 expert NGOs to the 63 organisations already part of the programme.

The third step targets videos that are inflammatory but do not clearly violate YouTube’s policies; such videos “will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements.” Google’s aim is not to gag freedom of expression but to strike a balance under which such content gets less engagement and becomes harder to find.


The last step concerns the Creators for Change programme, which promotes YouTube voices against hate and radicalisation online. Google is working with Jigsaw to roll out a technology called the “Redirect Method”, which uses the power of targeted online advertising to reach potential ISIS recruits. Once identified, a potential recruit is shown anti-terrorist videos, which, according to Google, “can change their minds about joining.”
