Google wants to use AI and targeting to control the spread of extremism-related content on YouTube
With extremist content showing up online in growing volumes, and tech giants under pressure to fight it, video streaming giant YouTube has added four more steps to its plan, steps meant not only to help identify and remove such content but also to prevent it from being uploaded in the first place.
Kent Walker, Google’s General Counsel, explained: “Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all. Google and YouTube are committed to being part of the solution.”
In a blog post, which also appeared in The Financial Times, he said that Google was “working with government, law enforcement and civil society groups to tackle the problem of extremism online.”
While Google has been working to identify and take down extremism- and terrorism-related content for years, the company believes that more needs to be done on this front, and that it needs to be done now.
Walker went on to explain that the current review process involves “thousands of people around the world” who sift through content daily. Google’s engineers have also developed technology that prevents the re-upload of known terrorist content, using what Google calls “image-matching technology”.
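Google has not published the internals of its image-matching system, but re-upload prevention of this kind is typically built on perceptual fingerprinting: hash the frames of known bad videos, store those hashes in a blocklist, and compare every new upload against it. Below is a minimal, purely illustrative sketch in Python, assuming the open-source imagehash library and a majority-vote matching policy chosen for the example; this is not Google’s actual pipeline.

```python
import imagehash  # open-source perceptual-hashing library

HASH_DISTANCE_THRESHOLD = 6  # assumed tolerance for near-duplicate frames

def fingerprint_frames(frames):
    """Compute a perceptual hash for each sampled frame (PIL Images)."""
    return [imagehash.phash(frame) for frame in frames]

def matches_known_content(upload_hashes, blocklist_hashes):
    """Flag an upload if most of its sampled frames are near-duplicates
    of frames taken from known terrorist content."""
    hits = sum(
        1
        for uh in upload_hashes
        if any(uh - bh <= HASH_DISTANCE_THRESHOLD for bh in blocklist_hashes)
    )
    # Assumed policy: block when a majority of sampled frames match.
    return hits > len(upload_hashes) / 2
```

A production system would also have to survive adversarial edits such as cropping, re-encoding and mirroring, which is why the threshold and frame-sampling strategy matter as much as the hash itself.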
The additional four steps:
First, Google will increase the use of technology to help identify extremism- and terrorism-related content on YouTube. This is harder than it sounds, because the same footage can also appear in a news broadcast, where it serves a legitimate and even useful purpose. The new technology uses video analysis models to tell the two apart and, according to Google, has already been used to “assess more than 50 percent of the terrorism-related content” pulled down over the past six months.
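Google has not disclosed its video analysis models, but the task it describes, separating propaganda from newsworthy reuse of the same footage, is at heart a classification problem over context (titles, descriptions, channel history) as much as pixels. Here is a deliberately toy sketch using scikit-learn on made-up text snippets; the examples and labels are invented for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: identical footage can be framed as propaganda or news,
# so contextual text carries much of the signal.
texts = [
    "glorious victory join our fighters today",            # propaganda framing
    "news report analyses newly released attack footage",  # journalistic framing
    "recruitment video urges viewers to take up arms",     # propaganda framing
    "documentary examines the rise of extremist groups",   # journalistic framing
]
labels = [1, 0, 1, 0]  # 1 = likely violating, 0 = likely newsworthy

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Probability of [newsworthy, violating] for an unseen description.
print(model.predict_proba(["correspondent reports on yesterday's attack"]))
```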
The second step concerns YouTube’s Trusted Flagger programme. While technology can help identify problematic videos, human experts still play a vital role in drawing the line between “violent propaganda and religious or newsworthy speech”. Walker explained that Trusted Flagger reports are accurate 90 percent of the time, which is why Google will not only identify new areas of concern but also add 50 expert NGOs to the 63 organisations already part of the programme.
The third step concerns videos that do not clearly violate YouTube’s policies but contain inflammatory religious or supremacist content. Such videos “will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements.” Google’s idea here is not to gag freedom of expression but to strike a balance: the content stays up, but it gets less engagement and becomes harder to find.
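Mechanically, this middle-ground treatment amounts to flipping feature flags on a video rather than removing it. A hypothetical sketch of such a “limited state” record follows; the field names are illustrative and not YouTube’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class VideoState:
    """Hypothetical per-video enforcement flags, not YouTube's real schema."""
    interstitial_warning: bool = False
    monetised: bool = True
    recommendable: bool = True
    comments_enabled: bool = True
    endorsements_enabled: bool = True

def apply_limited_state(state: VideoState) -> VideoState:
    """Borderline content stays up, but behind a warning and with every
    engagement surface switched off, as the article describes."""
    state.interstitial_warning = True
    state.monetised = False
    state.recommendable = False
    state.comments_enabled = False
    state.endorsements_enabled = False
    return state
```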
The last step concerns the Creators for Change programme, which promotes YouTube voices against hate and radicalisation online. Google is working with Jigsaw to roll out the “Redirect Method”, which uses the power of targeted online advertising to reach potential ISIS recruits. Once detected, a potential recruit is redirected to anti-terrorist videos which, according to Google, “can change their minds about joining.”
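Public descriptions of the Redirect Method boil down to matching search activity against curated at-risk keywords and serving ads that point to counter-narrative playlists. The sketch below is hypothetical: the keywords and playlist URL are placeholders, not real campaign data.

```python
# Hypothetical keyword list; real campaigns use curated, research-driven terms.
AT_RISK_KEYWORDS = {"join the caliphate", "how to make hijrah", "martyrdom"}

# Placeholder URL standing in for a curated counter-narrative playlist.
COUNTER_NARRATIVE_PLAYLIST = "https://youtube.com/playlist?list=EXAMPLE"

def ad_for_query(search_query: str) -> str | None:
    """Return a counter-narrative ad target if the query matches an
    at-risk keyword; otherwise serve no special ad."""
    normalized = search_query.lower()
    if any(keyword in normalized for keyword in AT_RISK_KEYWORDS):
        return COUNTER_NARRATIVE_PLAYLIST
    return None
```

The hard part of the real campaign is keyword curation and measuring whether viewers actually watch the counter-narrative content; the matching step itself is simple.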