Facebook, Inc. Common Stock (NASDAQ:FB) Reveals Data in a Show of Transparency Even Though Figures Don't Add Up

Facebook, Inc. (NASDAQ:FB) has for the first time revealed data on how it has been moderating the content on its platform.

In a show of transparency, Facebook unveiled what it termed content moderation guidelines. In its release a fortnight ago, it said it had been using the guidelines to police its platform. Earlier this week, it followed up by publishing data on how well the algorithm it has been using deals with content that breaks those rules.

Coming in the wake of the recent Cambridge Analytica scandal, the rules spell out what is and is not allowed on the social media platform. The lengthy guidelines go into detail on practices such as hate speech, sexual content, threatening language, violence, and terrorist propaganda.

Released Data

The released data was categorized into graphic violence, hate speech, spam, sexual activity, terrorist propaganda, and fake accounts. According to the data, 3.4 million pieces of content contained graphic violence, and 583 million accounts were found to be fake and were disabled by the company. Additionally, 837 million pieces were reported as spam, 2.5 million pieces were viewed as hate speech, and action was taken on 2.5 million pieces of terrorist propaganda.

Facebook indicated that it was using the data to evaluate its own progress. It also expressed the belief that increased transparency leads to accountability and responsibility.


Challenges

Although Facebook uses a combination of user reports, reviews by its own team, and technology to weed out violations, the system is still not foolproof.

The company itself admitted that the artificial intelligence it uses is not very effective at detecting hate speech, for example, because automated programs struggle to understand the context of language. It also admitted that it has a hard time flagging terrorist propaganda.
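To make that limitation concrete, the toy Python sketch below shows why context matters: a purely keyword-based filter treats a genuine attack and a post condemning the very same language identically. The blocklist and example posts are invented for illustration; this is not Facebook's actual system or word list.

    # A minimal, hypothetical sketch of why keyword matching misreads
    # context. The blocklist and example posts are invented; this is
    # not Facebook's actual moderation system.

    FLAGGED_TERMS = {"vermin"}  # hypothetical blocklist

    def naive_flag(post):
        """Flag a post if it contains any blocklisted term, ignoring context."""
        words = {w.strip(".,!?'\"").lower() for w in post.split()}
        return not FLAGGED_TERMS.isdisjoint(words)

    posts = [
        "Those people are vermin and should leave.",            # genuine attack
        "Calling anyone 'vermin' is dehumanizing. Report it.",  # counter-speech
    ]

    for post in posts:
        # Both posts are flagged identically: the matcher cannot tell
        # an attack from a post condemning the same language.
        print(naive_flag(post), "-", post)

Both posts come back flagged, which is one reason a system built this way has to be paired with the user reports and human reviewers mentioned above.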

Critics voiced several concerns, among them how long moderators take to remove violating content. They also felt the issue of fake news was not covered, and that such content is never taken down by the company.
