
Facebook’s First Transparency Report Shows Majority Of Offending Content Removed Before Being Reported

Facebook has released its first-ever transparency report on Community Standards enforcement, outlining the content it identified as breaking its rules between October 2017 and March 2018. According to the data, Facebook took action on the majority of the offending content before users reported it.

The report is part of the company’s Community Standards initiative, originally announced in April. It covers six violation categories:

  1. Graphic violence
  2. Nudity and sexual activity
  3. Terrorist propaganda from ISIS, al-Qaeda and affiliates
  4. Hate speech
  5. Spam
  6. Fake accounts

According to Facebook, the company uses a combination of machine learning automation and human reviewers to identify content that violates its Community Standards. Facebook has said repeatedly that it plans to have at least 10,000 safety and security professionals on staff by the end of 2018 to work on this initiative.

The transparency report breaks down the number of content violations Facebook took action against in each category during Q4 2017 and Q1 2018, along with the percentage of that content identified before users reported it. It also estimates the prevalence of violating content in the graphic violence and nudity and sexual activity categories, as well as the prevalence of fake accounts.

How many pieces of content or accounts did Facebook take action against?

  • Graphic violence — Q4 2017: 1.2 million | Q1 2018: 3.4 million
  • Nudity and sexual activity — Q4 2017: 21 million | Q1 2018: 21 million
  • Terrorist propaganda from ISIS, al-Qaeda and affiliates — Q4 2017: 1.1 million | Q1 2018: 1.9 million
  • Hate speech — Q4 2017: 1.6 million | Q1 2018: 2.5 million
  • Spam — Q4 2017: 727 million | Q1 2018: 837 million
  • Fake accounts — Q4 2017: 694 million | Q1 2018: 583 million

Percentage of content or accounts identified before users reported them

  • Graphic violence — Q4 2017: 72% | Q1 2018: 86%
  • Nudity and sexual activity — Q4 2017: 94% | Q1 2018: 96%
  • Terrorist propaganda from ISIS, al-Qaeda and affiliates — Q4 2017: 97% | Q1 2018: 99.5%
  • Hate speech — Q4 2017: 24% | Q1 2018: 38%
  • Spam — Q4 2017: 100% | Q1 2018: 100%
  • Fake accounts — Q4 2017: 98.5% | Q1 2018: 99.1%
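
To make the two lists above concrete, here is a minimal sketch, assuming the proactive rate is simply the share of actioned items Facebook flagged before any user report (which is how the report describes the metric). Combining the Q1 2018 action counts with those rates shows, for example, that roughly 86% of the 3.4 million pieces of graphic-violence content, or about 2.9 million items, were flagged before a user reported them. The category names and figures below come straight from the report; the calculation itself is just illustrative arithmetic, not Facebook's methodology.

```python
# Illustrative only: estimate how many actioned items were flagged
# proactively (before any user report) in Q1 2018, using the report's
# published totals and proactive-detection percentages.

# Q1 2018 figures: (total actioned, in millions; proactive rate, in percent)
actioned_q1_2018 = {
    "Graphic violence": (3.4, 86),
    "Nudity and sexual activity": (21, 96),
    "Terrorist propaganda": (1.9, 99.5),
    "Hate speech": (2.5, 38),
    "Spam": (837, 100),
    "Fake accounts": (583, 99.1),
}

for category, (total_millions, proactive_pct) in actioned_q1_2018.items():
    flagged_first = total_millions * proactive_pct / 100
    print(f"{category}: ~{flagged_first:.1f}M of {total_millions}M "
          f"actioned items were flagged before a user report")
```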

Prevalence of content in violation of Facebook’s Community Standards

  • Graphic violence — Q4 2017: 0.16% to 0.19% | Q1 2018: 0.22% to 0.27%
  • Nudity and sexual activity — Q4 2017: 0.06% to 0.08% | Q1 2018: 0.07% to 0.09%
  • Terrorist propaganda from ISIS, al-Qaeda and affiliates — Data unavailable
  • Hate speech — Data unavailable
  • Spam — Data unavailable
  • Fake accounts — Facebook estimates that fake accounts represented approximately 3 to 4 percent of monthly active users (MAU) on Facebook during Q1 2018 and Q4 2017.

Facebook says that it is still refining its internal methodologies for measuring these efforts and expects the numbers to become more precise over time.

Source – Amy Gesenhues
