Facebook Has Already Removed Millions Of Pieces Of Terrorist Content In 2018

"We’re under no illusion that the job is done."

Facebook took action on nearly two million pieces of terrorist content in the first quarter of 2018, about twice as much as the previous quarter, the company said.

Facebook is using artificial intelligence and a 200-person counterterrorism team to label or remove the content. According to an internal report released at the end of April, more than 99 percent of the content Facebook takes action on is identified before a user reports it.

"White supremacists used electronic bulletin boards in the 1980s, and the first pro-al-Qaeda website was established in the mid-1990s," Monika Bickert, Vice President of Global Policy Management, wrote for Facebook. "While the challenge of terrorism online isn't new, it has grown increasingly urgent as digital platforms become central to our lives."

Facebook defines terrorism as "any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim." Facebook does not take into account any group's political affiliations, according to its announcement. Violent separatists, religious extremists, and white supremacists are treated the same under its content rules. However, its policies do not apply to governments because of a "general academic and legal consensus [that] nation-states may legitimately use violence under certain circumstances," Facebook said.

Facebook's focus on identifying terrorist content itself is a noticeable change. Previously, companies like Facebook, Google, and YouTube pushed the responsibility of flagging terrorist content onto their users, Bloomberg reported. Notably, the report Facebook cited says it now identifies more than 99 percent of terrorist content before it's flagged by users.

"It's taken time to develop this software – and we're constantly pushing to improve it," Guy Rosen, VP of Product Management, said in a separate post. "We do this by analyzing specific examples of bad content that have been reported and removed to identify patterns of behavior. These patterns can then be used to teach our software to proactively find other, similar problems."

In countries like Myanmar, extremist groups have used Facebook to spread conspiracy theories and encourage ethnic cleansing, as reported by The New York Times. United Nations officials went so far as to say Facebook was accelerating violence in Myanmar. In Sri Lanka, the government has banned Facebook because it has been used to drum up hate against Muslim minorities. ISIS, too, used Facebook and other social media apps to spread propaganda and recruit as far back as 2014, prompting Facebook to develop a more thorough vetting and content control system.

"We're under no illusion that the job is done or that the progress we have made is enough," Bickert wrote. "Terrorist groups are always trying to circumvent our systems, so we must constantly improve. Researchers and our own teams of reviewers regularly find material that our technology misses. But we learn from every misstep, experiment with new detection methods and work to expand what terrorist groups we target."

Cover image via Orlok / Shutterstock.com. Social cover image composited from Orlok's image and the Facebook report button.
