Facebook has spent an impressive amount of time building and deploying artificial intelligence to reduce, and ultimately eradicate, hate speech on the social network. According to the company, its technology now proactively identifies almost 95% of the hate speech content it removes. The remaining 5%, however, is proving much harder to catch.
On Thursday, Facebook announced that its AI systems detected 94.7% of the 22.1 million pieces of hate speech content removed from the site in the third quarter of 2020. That is up from the same period last year, when its AI flagged 80% of the 6.9 million pieces removed, and from 2017, when it detected just 24%. The figures were published in the latest edition of the company's Community Standards Enforcement Report, which Facebook has issued quarterly since August.
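For context, the proactive detection rate Facebook reports is the share of removed content that its own systems flagged before any user reported it. A quick back-of-the-envelope sketch of that arithmetic in Python (the variable names are ours, not Facebook's):

```python
# Proactive detection rate: the share of removed hate speech that
# Facebook's systems flagged before any user reported it.
removed_q3_2020 = 22_100_000  # pieces of hate speech removed in Q3 2020
proactive_rate = 0.947        # share flagged by AI before a user report

ai_flagged = removed_q3_2020 * proactive_rate
user_reported = removed_q3_2020 - ai_flagged

print(f"Flagged by AI first: {ai_flagged:,.0f}")    # ~20.9 million
print(f"Reported by users:   {user_reported:,.0f}")  # ~1.2 million
```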
The update is timely, coming just days after Facebook CEO Mark Zuckerberg testified before Congress, where he reiterated the company's reliance on algorithms to catch content involving terrorism and child exploitation before anyone sees it.
Like numerous other social networks, Facebook depends on AI to help human moderators police the enormous and continually growing volume of content on its platform and on Instagram, the photo-sharing network it also owns. Deciding which posts and ads to remove is a delicate job, partly because humans remain better at judging the fine line between, say, an artistic nude painting and an exploitative image. Moreover, images and words that are harmless on their own can be hurtful when combined.
During a video call with reporters on Wednesday, Facebook Chief Technology Officer Mike Schroepfer detailed some of the advanced AI tools Facebook uses to flag harmful posts before they start trending, including one that continually updates itself on live data from Facebook's systems rather than on a fixed offline dataset. He said the goal is to keep improving the technology until as few people as possible see harmful content on the social network.
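One way to read "training on live data instead of an offline dataset" is an online-learning loop, in which a classifier updates incrementally as freshly labeled examples stream in rather than being retrained from scratch. Below is a minimal, hypothetical sketch using scikit-learn's partial_fit; everything in it, including the feature pipeline and the toy labels, is our illustration, not Facebook's actual system:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Online learning: the model updates on each fresh batch of labeled posts
# instead of being retrained from scratch on a fixed offline dataset.
vectorizer = HashingVectorizer(n_features=2**18)  # stateless, so stream-safe
model = SGDClassifier(loss="log_loss")            # logistic regression via SGD

def update_on_batch(texts, labels):
    """Incrementally fit on a new batch of (post text, hate/not-hate) labels."""
    X = vectorizer.transform(texts)
    model.partial_fit(X, labels, classes=[0, 1])

def score(texts):
    """Probability that each post is hate speech under the current model."""
    X = vectorizer.transform(texts)
    return model.predict_proba(X)[:, 1]

# Toy usage: two batches arriving over time (labels are illustrative).
update_on_batch(["lovely day at the park", "get out of our country"], [0, 1])
update_on_batch(["happy birthday!", "go back where you came from"], [0, 1])
print(score(["go back to your country"]))
```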
Facebook has previously been criticized for relying too heavily on human contractors, whose work by its very nature exposes them to horrifying content, and for AI that failed to catch violent live streams such as the New Zealand mosque shooting in March 2019.
Schroepfer said, “Obviously, I’m not satisfied until we’re done. And we’re not done.”
The most difficult content for AI to understand is that which depends on subtlety and context, nuances that computers have not yet grasped. According to Schroepfer, Facebook is now working on detecting hateful memes, and the company recently released a publicly available dataset on the subject to help researchers improve detection capabilities.
As an example of hurtful content the AI might miss, he described an image of a cemetery overlaid with the caption “You belong here”. He said, “If I had overlaid text that said, ‘You belong here’, and the background image is a playground, that’s fine. If it’s a graveyard, it may be construed as hate to you, as a targeted class of people.”
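Schroepfer's example illustrates why meme detection is inherently multimodal: the same caption is benign over one image and hateful over another, so text and image must be judged jointly. As a hedged sketch of that idea, not Facebook's actual architecture, a simple PyTorch classifier over fused text and image embeddings might look like this (all dimensions and encoders are assumptions for illustration):

```python
import torch
import torch.nn as nn

class MemeClassifier(nn.Module):
    """Toy multimodal classifier: the caption alone ("You belong here") and
    the image alone (a graveyard) may each be benign; only their combined
    representation carries the hateful meaning."""

    def __init__(self, text_dim=512, image_dim=512, hidden=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),  # joint text+image features
            nn.ReLU(),
            nn.Linear(hidden, 2),                     # benign vs. hateful
        )

    def forward(self, text_emb, image_emb):
        joint = torch.cat([text_emb, image_emb], dim=-1)
        return self.fuse(joint)

# Usage with placeholder embeddings (a real system would obtain these from
# pretrained text and image encoders).
model = MemeClassifier()
text_emb = torch.randn(1, 512)   # e.g., encoding of "You belong here"
image_emb = torch.randn(1, 512)  # e.g., encoding of a graveyard photo
logits = model(text_emb, image_emb)
print(logits.softmax(dim=-1))    # P(benign), P(hateful) for the pair
```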
By Marvellous Iwendi.
Source: CNN