YouTube announced plans on Wednesday to remove thousands of videos and channels that advocate neo-Nazism, white supremacy and other bigoted ideologies in an attempt to clean up extremism and hate speech on its popular service.
The new policy will ban “videos alleging that a group is superior in order to justify discrimination, segregation or exclusion,” the company said in a blog post. The prohibition will also cover videos denying that violent events, like the mass shooting at Sandy Hook Elementary School in Connecticut, took place.
YouTube did not name any specific channels or videos that would be banned. But on Wednesday, numerous far-right creators began complaining that their videos had been deleted, or had been stripped of ads, presumably as a result of the new policy.
“It’s our responsibility to protect that, and prevent our platform from being used to incite hatred, harassment, discrimination and violence,” the blog post said.
The decision by YouTube, which is owned by Google, is the latest action by a Silicon Valley company to stem the spread of hate speech and disinformation on its site. A month ago, Facebook evicted seven of its most controversial users, including Alex Jones, the conspiracy theorist and founder of Infowars. Twitter barred Mr. Jones last year.
The companies have come under intense criticism for their delayed reaction to the spread of hateful and false content. At the same time, President Trump and others argue that the giant tech platforms censor right-wing opinions, and the new policies put in place by the companies have inflamed those debates.
The tension was evident on Tuesday, when YouTube said a prominent right-wing creator who used racial language and homophobic slurs to harass a journalist in videos on YouTube did not violate its policies. The decision set off a firestorm online, including accusations that YouTube was giving a free pass to some of its popular creators.
In the videos, that creator, Steven Crowder, a conservative commentator with nearly four million YouTube subscribers, repeatedly insulted Carlos Maza, a journalist from Vox. Mr. Crowder used slurs about Mr. Maza’s Cuban-American ethnicity and sexual orientation. Mr. Crowder said his comments were harmless, and YouTube determined that they did not break its rules.
“Opinions can be deeply offensive, but if they don’t violate our policies, they’ll remain on our site,” YouTube said in a statement about its decision on Mr. Crowder.
On Wednesday, YouTube appeared to backtrack, saying that Mr. Crowder had, in fact, violated its rules, and that his ability to earn money from ads on his channel would be suspended as a result.
“We came to this decision because a pattern of egregious actions has harmed the broader community,” the company wrote on Twitter.
The whiplash-inducing deliberations illustrated a central theme that has defined the moderation struggles of social media companies: Making rules is often easier than enforcing them.
“This is an important and long-overdue change,” Becca Lewis, a research affiliate at the nonprofit organization Data & Society, said about the new policy. “However, YouTube has often executed its community guidelines unevenly, so it remains to be seen how effective these updates will be.”
YouTube’s scale — more than 500 hours of new videos are uploaded every minute — has made it difficult for the company to track rule violations. And the company’s historically lax approach to moderating extreme videos has led to a drumbeat of scandals, including accusations that the site has promoted disturbing videos to children and allowed extremist groups to organize on its platform. YouTube’s automated advertising system has paired offensive videos with ads from major corporations, prompting several advertisers to abandon the site.
The kind of content that will be prohibited under YouTube’s new hate speech policies includes videos that claim Jews secretly control the world, that say women are intellectually inferior to men and therefore should be denied certain rights, or that suggest that the white race is superior to another race, a YouTube spokesman said.
Channels that post some hateful content, but that do not violate YouTube’s rules with the majority of their videos, may receive strikes under YouTube’s three-strike enforcement system, but would not be immediately banned.
The company also said channels that “repeatedly brush up against our hate speech policies” but don’t violate them outright would be removed from YouTube’s advertising program, which allows channel owners to share in the advertising revenue their videos generate.
In addition to tightening its hate speech rules, YouTube announced that it would tweak its recommendation algorithm, the automated software that shows users videos based on their interests and past viewing habits. This algorithm is responsible for more than 70 percent of overall time spent on YouTube, and has been a major engine for the platform’s growth. But it has also drawn accusations of leading users down rabbit holes filled with extreme and divisive content, in an attempt to keep them watching and drive up the site’s usage numbers.
“If the hate and intolerance and supremacy is a match, then YouTube is lighter fluid,” said Rashad Robinson, president of the civil rights nonprofit Color of Change. “YouTube and other platforms have been quite slow to address the structure they’ve created to incentivize hate.”
In response to the criticism, YouTube announced in January that it would recommend fewer objectionable videos, such as those with conspiracy theories about the Sept. 11, 2001, terrorist attacks and vaccine misinformation, a category it called “borderline content.” The YouTube spokesman said on Tuesday that the algorithm changes had resulted in a 50 percent drop in recommendations to such videos in the United States. He declined to share specific data about which videos YouTube considered “borderline.”
“Our systems are also getting smarter about what types of videos should get this treatment, and we’ll be able to apply it to even more borderline videos moving forward,” the company’s blog post said.
Other social media companies have faced criticism for allowing white supremacist content. Facebook recently banned a slew of accounts, including that of Paul Joseph Watson, a contributor to Infowars, and Laura Loomer, a far-right activist. Twitter bars violent extremist groups but allows some of their members to maintain personal accounts — for instance, the Ku Klux Klan was barred from Twitter in August, while its former leader David Duke remains on the service.
Twitter is studying whether the removal of content is effective in stemming the tide of radicalization online. A Twitter spokesman declined to comment on the study.
When Twitter barred Mr. Jones, he responded with a series of videos denouncing the platform’s decision and drumming up donations from his supporters.
YouTube’s ban of white supremacists could prompt a similar cycle of outrage and grievance, said Joan Donovan, the director of the Technology and Social Change Research Project at Harvard. The ban, she said, “presents an opportunity for content creators to get a wave of media attention, so we may see some particularly disingenuous uploads.”
“I wonder to what degree will the removed content be amplified on different platforms, and get a second life?” Ms. Donovan added.
Source: The New York Times