Facebook expanded the data it shares on its removal of terrorist propaganda. Its earlier reports only included data on al-Qaida, ISIS and their affiliates. The latest report shows Facebook detects material posted by extremist groups other than ISIS and al-Qaida at a lower rate than content from those two organizations.
The report is Facebook’s fourth on standards enforcement and the first to include data from Instagram in areas such as child nudity, illicit firearm and drug sales, and terrorist propaganda. The company said it removed 1.3 million instances of child nudity and child sexual exploitation from Instagram during the reported period, much of it before people saw it.
Still, the company’s latest transparency report arrives as regulators around the world continue to call on Facebook, and the rest of Silicon Valley, to be more aggressive in stopping the viral spread of harmful content, such as disinformation, graphic violence and hate speech. A series of high-profile failures over the past year have prompted some lawmakers, including Democrats and Republicans in the United States, to threaten to pass new laws holding tech giants responsible for failing to police their sites and services.
The calls for regulation only intensified after the deadly shooting in Christchurch, New Zealand, in March. Video of the gunman attacking two mosques spread rapidly on social media, including Facebook, evading tech companies’ expensive systems for stopping such content from going viral. On Wednesday, Facebook offered new data about that incident, reporting that it had removed 4.5 million pieces of content related to the attack between March 15, the day it occurred, and September 30, nearly all of which it spotted before users reported it.