Using AI to help increase blood donations

March 25, 2019

In 2017, a team of Facebook engineers researching new product features observed a specific type of post: People using Facebook in India were asking their communities for blood donations.

This activity was the inspiration for our blood donations feature, a tool developed by Facebook engineers to help connect people interested in donating blood with local hospitals, blood banks, and other organizations that provide blood to those in need.


It’s one of the applications of Facebook’s artificial intelligence work highlighted at the inaugural AI for India Summit in Bangalore on March 26. Having launched in India in 2017 with input and feedback from nonprofit organizations, government officials, and health industry experts, Facebook's blood donations feature is now available in Bangladesh, Brazil, and Pakistan, and we plan to expand it to more countries.

To date, more than 35 million people have signed up to be blood donors on Facebook. And when we partnered with blood banks in India and Brazil to conduct in-person surveys in October and November of last year, one in five donors said that Facebook influenced their decision to give blood.

Using the feature, people can sign up on Facebook to become blood donors. Local health workers and others can then post about a need for blood in a particular area, and potential donors nearby receive a notification on their devices.
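To make that flow concrete, here is a minimal sketch of how nearby, opted-in donors might be matched to a request by distance. The Donor record, the notification radius, and the function names are illustrative assumptions, not a description of Facebook's production system.

```python
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Donor:
    user_id: int
    lat: float
    lon: float
    opted_in: bool  # True if the person signed up as a blood donor

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def donors_to_notify(request_lat, request_lon, donors, radius_km=25):
    """Return opted-in donors close enough to a blood request to be notified."""
    return [
        d for d in donors
        if d.opted_in and haversine_km(request_lat, request_lon, d.lat, d.lon) <= radius_km
    ]
```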

The feature has evolved over the past two years, but it began as a complex technical challenge that only AI could tackle. Building this system required a unique approach, which offers an example of how we apply AI to improve technologies used by millions of people every day.

From spontaneous requests to an AI-powered tool

Many countries — including India, Bangladesh, and Pakistan — have too few blood donors to provide everyone with reliable access to safe blood, per data from the World Health Organization. Since people were already coming to Facebook to address this need, we wanted to use AI to help communities do it faster and more effectively.

The feature allows people to set a reminder, contact local blood banks, or alert their friends to the need.

To make more people aware of the blood donations feature, we use AI to help recognize when the content of a Facebook post is related to donating blood. The system can then automatically serve the person or organization a message that explains how the feature works and invites them to participate if they choose.

This sort of task is why AI exists. The AI field of machine learning (ML), in particular, excels at distinguishing a Facebook post that references a need for blood donors from the vast majority of posts that don't. With an effective AI system, someone who posted a Facebook message about the importance of giving blood could be shown information on how to register as a potential donor and could then choose to be notified whenever there was an urgent need nearby.
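As a hedged sketch of how such a classifier might gate that invitation, the snippet below scores a post for blood-donation intent and only surfaces the prompt above a high-precision threshold. The intent_model interface and the threshold value are assumptions for illustration; the sections below describe how the real model was trained.

```python
PROMPT_THRESHOLD = 0.95  # favor precision: an unwanted prompt is costly

def maybe_show_donor_prompt(post_text: str, intent_model) -> bool:
    """Return True only if the post is confidently about blood donation."""
    # intent_model is any binary classifier exposing predict_proba, such as
    # the scikit-learn pipeline sketched later in this post.
    prob_related = intent_model.predict_proba([post_text])[0][1]
    return prob_related >= PROMPT_THRESHOLD
```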

When an organization shares a request for blood donations — whether because our AI system recognized the intent of its post or because it chose to use the blood donations feature directly — people can then respond to nearby donation requests.

For this feature, Facebook engineers faced an obstacle from the very beginning: ML systems need to be trained with relevant examples, but those examples were scarce in this case. Content related to blood donation was common enough to justify a new feature but still relatively rare, with thousands of pieces of relevant text scattered across billions of user-generated posts, and even those examples hadn't been collected into a usable data set. An AI system would have to be sensitive enough to find those (comparatively) few posts but also precise enough to avoid misinterpretations, which could lead to unwanted messages and negative reactions that might limit the feature's impact.

It was, in other words, the AI version of a chicken-and-egg dilemma — a feature that called for a system trained using examples that weren’t readily available.

Solving this challenge required the AI engineers working on the feature to take a different approach.

Using AI to train better AI

To break down the steps that led to the blood donations feature’s final AI model, let’s circle back to the foundation of virtually every kind of AI: training.

Training on data is how AI learns. But it’s more than that, often determining the very structure of systems that have no other input. While humans might use flash cards and textbooks to learn, those are just training aids that build on people’s preexisting knowledge and ability to put new information into context. AI models, on the other hand, have no innate powers of learning and self-improvement. Most train on only the information they’re given, which has to be as specific and as purpose-oriented as possible, because machines have neither the versatility nor the humanlike agency to learn from general data. AI does what it’s trained to do, and, in the case of blood donation-related content, there weren’t enough useful examples to properly train the system.

Our engineers' response to this problem was a hand-tuned approach to AI: they created a simple ML model that could begin to find additional training data, however imperfect. That first model seeded a series of successively better models, each one finding the equivalent of better flash cards and textbooks and passing them on to the next system, with the ultimate goal of an AI system that could reliably identify that someone was expressing interest in giving blood.

One reason this task was so challenging — including finding examples for training purposes — was that text related to blood donation isn’t limited to a single kind of interaction. Relevant content ranges from posts in Groups that mention a potential donor’s location to freeform text posts that specify nothing beyond an interest in blood donations. The AI system must reliably identify the right posts and comments.

The training process started with the system simply looking for posts that contained relevant keywords, such as “blood” and “drive.” It then analyzed that set of posts in order to train a more sophisticated AI system that could recognize relevant posts even when they didn’t contain those exact words. That AI system, in turn, was used to spot more example posts, which were then used to train an even more effective AI system. Creating and deploying this series of fine-tuned ML systems took three days, while a more standard approach would have taken months.
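The sketch below illustrates that bootstrapping idea under stated assumptions: keyword rules produce a first, noisy training set, a weak classifier is trained on it, and that classifier's confident predictions on a larger pool become training data for the next, stronger model. The keywords, example posts, and scikit-learn pipeline are illustrative; the production system operated at a vastly larger scale.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

SEED_KEYWORDS = ("blood", "donor", "drive")

def keyword_label(post: str) -> int:
    """Stage 0: a crude keyword rule that bootstraps the first training set."""
    return int(any(k in post.lower() for k in SEED_KEYWORDS))

def train_round(texts, labels):
    """Train one classifier in the chain on the current labeled set."""
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)
    return model

# Stage 1: a seed set labeled purely by keywords.
seed_posts = [
    "Blood drive at the community center on Sunday",
    "Urgent donor needed, O negative",
    "Weekend football match, who's in?",
    "New phone for sale, barely used",
]
seed_labels = [keyword_label(p) for p in seed_posts]
model = train_round(seed_posts, seed_labels)

# Stage 2: the weak model scores a larger unlabeled pool; its predictions,
# including posts that contain no seed keyword at all, train the next model.
pool = [
    "My cousin needs a transfusion tomorrow, please help",
    "Looking for a roommate near the university",
]
pool_labels = [int(p >= 0.5) for p in model.predict_proba(pool)[:, 1]]
model = train_round(seed_posts + pool, seed_labels + pool_labels)
```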

When we later expanded the feature to work in more countries, our engineers adjusted the system to accommodate additional languages and different character sets. In Pakistan, for example, the AI was trained to recognize posts that were related to blood donation and written either in Urdu characters or in Anglicized Urdu. Training the system to understand the original posts rather than text that had been converted to Latin characters meant it was better able to spot nuances that indicated interest in blood donation.
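One hedged way to handle that, sketched below, is to route posts by script before classification: the Unicode ranges cover the main Arabic-script blocks used for written Urdu, and predominantly Latin text is treated as potentially Anglicized Urdu. The ranges, threshold, and model names here are illustrative assumptions, not the production routing logic.

```python
ARABIC_SCRIPT_RANGES = (
    (0x0600, 0x06FF),  # Arabic
    (0x0750, 0x077F),  # Arabic Supplement
    (0xFB50, 0xFDFF),  # Arabic Presentation Forms-A
    (0xFE70, 0xFEFF),  # Arabic Presentation Forms-B
)

def fraction_urdu_script(text: str) -> float:
    """Fraction of alphabetic characters drawn from Arabic-script blocks."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    in_script = sum(
        any(lo <= ord(c) <= hi for lo, hi in ARABIC_SCRIPT_RANGES) for c in letters
    )
    return in_script / len(letters)

def route_post(text: str) -> str:
    """Pick which classifier variant should score this post."""
    return "urdu_script_model" if fraction_urdu_script(text) > 0.5 else "latin_script_model"
```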

A case study in curated machine learning

Counterintuitive as it might seem, part of our approach to improving accuracy was to allow certain types of false positives into the system during training. Because training data was scarce, it was important to assess the kinds of false positives that were specific to content related to blood donation. For example, a person who posts “I donated blood today and I’m happy I did it” is clearly talking about a relevant activity, but it wouldn’t be appropriate to invite someone to give again so soon after he or she had donated. Though the system wouldn't act on this kind of example, it was useful training data for the AI model.

And since no AI system is perfect, our engineers adjusted the model to allow in another class of example, called near-positive negatives. These were cases in which the content was in the same overall domain (related to organ donation, for example) but not necessarily directly related to blood donation or requests. They helped improve the final AI system’s understanding of the exceptions and fringe examples that reveal what is and isn’t related to blood donation.
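A hedged sketch of the resulting label scheme might look like the following: the model sees classes for already-donated posts and for near-positive negatives such as organ-donation content so it learns those boundaries, but only genuinely actionable classes ever trigger the feature. The class names and example posts are invented for illustration.

```python
# Classes the classifier is trained on; only some of them are actionable.
TRAINING_EXAMPLES = {
    "blood_request":     "Urgent: A+ blood needed at District Hospital tonight",
    "donation_interest": "I want to start giving blood regularly, how do I sign up?",
    "already_donated":   "I donated blood today and I'm happy I did it",       # trained on, never actioned
    "near_positive":     "Please register as an organ donor, it saves lives",  # same domain, different intent
    "unrelated":         "Selling concert tickets, DM me",
}

ACTIONABLE = {"blood_request", "donation_interest"}

def should_trigger_feature(predicted_class: str) -> bool:
    """Only actionable predictions surface the blood donations feature."""
    return predicted_class in ACTIONABLE
```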

This video was created by the blood donations team to give an overview of the product.

This technique — along with the process described earlier of training a series of gradually improving AI systems — allowed the team to create an AI system that’s uniquely suited to this particular project. Applied elsewhere, to surface entirely different kinds of text, this same AI would be consistently off the mark.

This degree of focused optimization is by design, and it's in keeping with how we approach and apply the blood donations feature. It takes into account country-specific issues, such as the ability to analyze posts in Pakistan written in Urdu script. In India and Bangladesh, the two countries where we released the feature before Pakistan, Anglicized posts were less common. Just as the ML was hand-tuned to address the scarcity and sensitivity of blood donation content, the feature represents a customized use of AI that's adjusted on a country-by-country basis.

This curated approach doesn’t fit every application, but it demonstrates that when the need for help is extremely specific, the appropriate AI response is often just as specific. It’s worth considering, both in AI and in other fields, that sometimes the more tailored the use of technology is, the more helpful it can be.
