Where AI fits in the battle against hate speech

A conversation with Cornelia Carapcea, who leads our efforts in applying AI in the battle against hate speech, on the progress we've made and what's coming next.

Hate speech is one of the worst forms of unwanted content on our products, and it’s clear the solutions will require a combination of deep human expertise and powerful technology. Facebook’s Community Standards Enforcement Report, released today, shows the advances we’ve made on both sides of this challenge, and highlights progress that’s still to come.

You can see some of those advances here — there's been a rapid evolution. Starting in 2015, our first computer vision classifiers could find only one type of violation, and only in images. Over time we've expanded to multiple types of content: images, text, video, and text in images via OCR. We've also expanded our systems to multiple languages and multiple violation types. These new systems are powered by advances in self-supervised learning, multi-modal learning, and multi-lingual learning coming out of our AI research labs.
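To make the multi-modal idea concrete, here is a minimal sketch of late-fusion classification: per-modality embeddings (say, from a text encoder and an image encoder) are concatenated and scored by a single classifier head. Everything here is illustrative — the function names, dimensions, and random weights are assumptions for the sketch, not Facebook's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(text_emb, image_emb):
    """Late fusion: concatenate per-modality embeddings into one feature vector."""
    return np.concatenate([text_emb, image_emb])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_violation(features, weights, bias):
    """Score a fused feature vector with a logistic classifier head."""
    return sigmoid(features @ weights + bias)

# Toy example: a 16-dim text embedding plus a 16-dim image embedding.
# In a real system these would come from trained encoders.
text_emb = rng.standard_normal(16)
image_emb = rng.standard_normal(16)
weights = rng.standard_normal(32)  # untrained, for illustration only

score = predict_violation(fuse(text_emb, image_emb), weights, bias=0.0)
print(float(score))  # a probability-like score between 0 and 1
```

The appeal of this design is that each encoder can be improved independently (e.g. via self-supervised pre-training, or by adding languages) while the fusion and classification layer stays the same.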

By 2017, a little under a quarter of all the hate speech that was taken down by Facebook and Instagram was first spotted by AI, not humans. That number has increased to about 95% today. 

This week I spoke with Cornelia, who gave me a great rundown of our progress so far and where the work is going. Here's what she had to say:

One thing that really stood out to me was Cornelia’s explanation for why she’s doing this work in the first place. She’s an expert in computer vision and machine learning, two of the hottest fields in technology right now, and could be doing almost anything she wants — her skills are in extremely high demand at virtually any company or university, anywhere in the world. Why, I asked, is she working on teaching computers to recognize some of the ugliest and darkest things that people say on the internet? 

Her answer stuck with me, because it got to the heart of something I think about all the time. In the last decade Facebook has hired many of the world’s best AI researchers and made enormous investments in the field. It’s a place where we think the future of computing will be determined. And for years now, we’ve focused so much of that brainpower on making our platforms safer. What keeps these brilliant people motivated to work on something like hate speech?

“The payoff can be monumental,” Cornelia told me. “If you’re able to have communities that are safe, constructive, and help people connect while keeping safe? That’s immeasurable in my mind.” Her colleagues share that optimism about where their work is taking us, she said. “We wake up in the morning and we are intrinsically motivated to work on this.” 
