How AI is learning to see the bigger picture

The speed at which AI has evolved over the last decade means it’s easy to overlook the significance of individual developments along the way. Things have changed so fast that what seemed like a milestone just a couple of years ago is already outdated. But to understand the progress, it’s important to note those milestones. And as Facebook releases new data today showing AI’s increasing role in enforcing our Community Standards, I wanted to talk about one of those developments.

Last year, our AI team rolled out a new system for automatically predicting whether content on our platforms violates our Community Standards. It was capable of doing something that no automated system could do before: It looked at content more holistically, including images, video, text, and comments in multiple languages, and evaluated it against multiple types of policy violations. It also analyzed the content over time, based on how people interacted with it. 

Previous systems for detecting harmful content could each do one pretty specific thing — one could analyze English-language text posts and predict the likelihood they contained hate speech, while another could look at photos and predict the probability they depicted violent acts. Each could work on one narrow problem area. 

To figure out whether an individual piece of content violated our rules, our software stitched together the output of all these different AI systems — over the years, we’ve built thousands of them — and made a recommendation. As you can imagine, that’s a lot of stitching, and it has fundamental limitations.
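The stitching approach can be sketched roughly like this; the classifiers, keywords, and threshold here are hypothetical stand-ins for illustration, not Facebook's actual models:

```python
# A minimal sketch of the "stitching" approach: each narrow classifier
# scores one violation type on one modality, and a separate layer combines
# their outputs into a single recommendation. (All logic here is a toy
# stand-in for what would really be learned models.)

def hate_speech_score(text):
    # Hypothetical stand-in for a text-only hate-speech classifier.
    return 0.9 if "slur" in text.lower() else 0.1

def violence_score(image_labels):
    # Hypothetical stand-in for an image-only violence classifier.
    return 0.8 if "weapon" in image_labels else 0.05

def stitch(scores, threshold=0.7):
    # The stitching layer: flag the post if any narrow model is confident.
    # Each model sees only its own slice of the post, so signals that only
    # emerge from the combination of modalities are invisible here.
    return max(scores) >= threshold

post = {"text": "for sale, great price", "image_labels": ["weapon", "box"]}
scores = [hate_speech_score(post["text"]), violence_score(post["image_labels"])]
print(stitch(scores))  # True: the image model alone crossed the threshold
```

The limitation is visible in the structure itself: each model sees only its own slice of the post, so violations that emerge only from the combination of modalities slip through.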

With this new system now able to analyze the entire picture on a deeper level, our ability to automatically detect and remove violating content before anyone sees or reports it has increased significantly. 

It has also helped the teams of people who manually review content have a much larger impact. With an improved ability to evaluate content, our automated tools are now doing a much better job identifying priority cases to be sent for human review. In 2020, tests of this new approach led to significant gains in the efficiency of our prioritization system, as our AI tools ensured that our moderators’ time could be focused on the most valuable, higher-impact decisions.
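As a rough illustration of this kind of prioritization (the scoring formula and fields here are assumptions for the sketch, not the production system), flagged posts can be ranked by a combined score before reaching human reviewers:

```python
# A hedged sketch of review prioritization: rank flagged posts so human
# moderators see the cases where their decision matters most first.
# The scoring formula below is illustrative, not Facebook's real one.

def priority(post):
    # Assumed scoring: model confidence times estimated severity and reach.
    return post["violation_prob"] * post["severity"] * post["est_views"]

queue = [
    {"id": "a", "violation_prob": 0.95, "severity": 1, "est_views": 100},
    {"id": "b", "violation_prob": 0.60, "severity": 5, "est_views": 10_000},
    {"id": "c", "violation_prob": 0.30, "severity": 2, "est_views": 50},
]
ranked = sorted(queue, key=priority, reverse=True)
print([p["id"] for p in ranked])  # ['b', 'a', 'c']
```

Note how a lower-confidence but high-severity, high-reach post outranks a near-certain but low-impact one, which is the point of prioritizing reviewer time.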

[Image caption] An image of bullets alone does not necessarily violate our rules; neither do the words in the accompanying text. But when our new system sees the two in combination, it recognizes a potential attempt to sell ammunition, which does violate our rules, and prioritizes the post for human review.
[Image caption] Previous systems would read the text in this image and flag it as potentially containing discriminatory language based on age, a protected category. But by analyzing the whole post, including the text accompanying the image, our new system is less likely to prioritize posts like this for human review, freeing moderation teams to focus on higher-impact cases.
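The ammunition example can be sketched with toy logic; the label names and keyword list are hypothetical, and the real system uses learned models rather than rules like these:

```python
# A toy illustration of why joint image + text analysis catches what
# single-modality models miss: neither signal violates policy alone,
# but the combination suggests a sale. (Hypothetical logic, not the
# production system.)

def holistic_flag(image_labels, text):
    selling_words = ("sale", "price", "shipping", "dm me")
    has_regulated_item = "ammunition" in image_labels
    has_sales_intent = any(w in text.lower() for w in selling_words)
    # Only the combination of a regulated item and sales language is flagged.
    return has_regulated_item and has_sales_intent

print(holistic_flag(["ammunition"], "great price, dm me"))      # True
print(holistic_flag(["ammunition"], "range day with friends"))  # False
print(holistic_flag(["sunset"], "great price, dm me"))          # False
```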

That alone is a significant advance for us, but it reflects a much bigger technological shift. This progress isn’t unique to AI for content moderation, and it isn’t unique to Facebook. You hear similar stories from people across the world who are building cutting-edge AI: Its capabilities are evolving in similarly holistic ways. 

I recently heard Andrej Karpathy, the head of AI at Tesla, describe the evolution of the company’s self-driving systems. Previously, he said, images from each of the car’s multiple cameras and sensors would be analyzed individually, with different AI models identifying features like stop signs and lane markings. Then the output of all those systems would be stitched together by software designed to build up an overall model of what’s happening. 

Today, he said, there’s a much more holistic approach. The car’s AI system ingests input from all those cameras and sensors, and outputs a model of the surrounding environment — the nearby cars and pedestrians, the lane markings, and the traffic lights. And then the software on top applies the rules, like braking for a red light. Over time, the AI has taken on more and more of the work, and produced a deeper, more complete understanding of the environment. 
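That architecture, per-camera detections fused into one environment model with simple rules on top, can be sketched as follows; the data model here is an assumption for illustration, not Tesla's code:

```python
# A simplified sketch of the holistic architecture described above: one
# fusion step builds a single environment model from all camera feeds,
# and plain rules then act on that model.

def fuse(camera_feeds):
    # Stand-in for the neural fusion step: merge every camera's detections
    # into one picture of the surroundings.
    env = {"traffic_lights": [], "pedestrians": [], "lane_markings": []}
    for feed in camera_feeds:
        for kind, items in feed.items():
            env[kind].extend(items)
    return env

def should_brake(env):
    # Rules layer on top of the fused model: brake for a red light
    # or a pedestrian ahead.
    return "red" in env["traffic_lights"] or bool(env["pedestrians"])

feeds = [
    {"traffic_lights": ["red"], "pedestrians": [], "lane_markings": ["left"]},
    {"traffic_lights": [], "pedestrians": [], "lane_markings": ["right"]},
]
env = fuse(feeds)
print(should_brake(env))  # True
```

The key structural point is that the rules never see raw camera output; they see only the single fused model, which is what makes them simple.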

We’ve seen these exact dynamics happen with the AI systems in use at Facebook, including the ones deployed to keep our platforms safe. For example, our AI systems can now build a holistic view of a group or a page by combining an assessment of multiple posts, comments, images, and videos over time. This allows for a much more sophisticated approach than what was possible even a year ago, when AI was limited to evaluating individual pieces of content on a standalone basis.
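One way to picture this kind of assessment over time (the decay formula is an assumption for this sketch, not Facebook's method) is to accumulate per-item signals so that a sustained pattern of borderline content stands out even when no single item does:

```python
# An illustrative sketch of scoring a group holistically: instead of
# judging each post alone, accumulate signals across posts and time.
# The exponential-decay formula is an assumption for illustration.

def group_risk(content_scores, decay=0.9):
    # Older items count less; repeated borderline content still adds up.
    risk = 0.0
    for score in content_scores:  # oldest first
        risk = risk * decay + score
    return risk

# One borderline post in isolation looks low-risk...
print(round(group_risk([0.4]), 2))       # 0.4
# ...but a sustained pattern of borderline posts accumulates.
print(round(group_risk([0.4] * 10), 2))  # 2.61
```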

This evolution of AI isn’t just helping Facebook enforce our Community Standards — it’s also driving progress across many of the hardest challenges in AI. Our computer vision tools are developing a much deeper understanding of images and video, and our translation systems are making leaps in their ability to comprehend multiple languages at once.

Most important, this increasing sophistication shows no sign of slowing down — in fact, research breakthroughs made over the last year suggest that an extraordinary period of progress in AI is still ahead of us.
