Navigating trade-offs: A multipronged approach
So how might we navigate trade-offs between safety, privacy, autonomy, and fairness & inclusion? Do we prioritize by prevalence, perceived intensity, potential to encourage off-platform harm, or disparate negative impact on more at-risk populations? Any direction, however well-informed or well-intentioned, has the potential to create negative outcomes of its own. Sometimes, making these decisions feels like a scene from a movie where the protagonist kills one monster only to have a dozen more spawn in its place. Of course, when it comes to optimizations, it’s not all or nothing. Even as we decide to optimize for one thing, we must mitigate the potential resulting harm whenever we can.
As an industry, we must take a multipronged approach to tackling these dilemmas. Based on what we've learned over the years, here are some key actions we've integrated into our approach to responsible innovation:
Empower and educate
Given the enormous impact of the work we do every day, we must equip all developers and designers with the training and tools to hone a responsibility mindset and effectively raise concerns early on. Responsibility is a continuous, all-hands effort and process. For many of us technologists, these weren’t skills or practices we were taught in school, so we must cultivate them in our own work and develop stronger education, support, and incentives across our industry as a whole. For example, in the first days of orientation, we give all new Facebook employees a primer on responsible innovation to send a strong message about how foundational it is to how we work and build. It’s also a core course in all of our technical boot camps and an ongoing part of education for all of our technical employees.
Start the conversations early
Before launching a product, we must evaluate its potential negative impacts to help catch any unresolved issues. But a thorough consideration of potential issues must involve engaging stakeholders early in the product development life cycle, helping teams get out in front of problems by asking foundational questions such as “Should we build this at all?” Early engagement has the added benefit of giving teams more flexibility to grapple with these ambiguities, so the process feels like the helpful setting of guardrails rather than a last-minute roadblock.
Design and build with, not for, our communities
Especially for our thorniest issues, we must continuously build relationships with — and invite into the design process — the diverse set of stakeholders who could be affected by what we build. We must continue to make them key partners in deliberating with our designers and developers; they should feel empowered to give feedback, voice concerns, and propose alternative approaches. Engaging directly with our diverse global stakeholders can shift the conversation in unexpected and productive ways that have real impact on product outcomes.
Show our work
In order to design responsibly, decision-making should be deliberate and consistent. Teams should explicitly call out which values are at play and how various alternatives would optimize for those values. There must be an agreed-upon, transparent process for deliberations and clear metrics for making decisions, in order to avoid the bias that can creep into more ad hoc approaches. Final decisions — and the rationales behind them — should be shared as transparently as possible to provide accountability, help ensure adoption in ways that are consistent with the intent of the decision, and facilitate reevaluation if the factors that led to that decision change in the future. The Oversight Board is a good example of transparency in practice. While they make independent judgments about significant and difficult content decisions, they also publish transparent opinions and openly share the factors they took into consideration when making those decisions — along with broader recommendations about our products, policies, and enforcement systems.
A path forward
As technologists at the helm of some of the most powerful communication tools ever created, it is our responsibility to navigate these dilemmas in ways that create the most responsible outcomes possible, especially for at-risk communities. To do this, we must reprogram our tendency to reduce dilemmas to problems to be solved. We must sit with the discomfort that dilemmas generate and acknowledge that there is often no single right solution. We must engage with a diverse set of stakeholders and experts to ensure that we understand all aspects of the dilemmas before us. And then we must do the hard work of pushing through these barriers to determine — through intentional, rigorous decision-making — what our values drive us to optimize for, and how to mitigate the potential negative impacts of that optimization.
I still deeply believe in the potential of technology to empower people to create a more equitable world. But after 25 years of helping build some of the world’s most powerful communication tools, I no longer believe that good outcomes are certain or inevitable. They are the product of intentional hard work from all of us to proactively safeguard that future by navigating these dilemmas that are so consequential to people, communities, and society as a whole.