To understand Meta’s data centers and how far they've come in the last 10 years, I have to tell you about a napkin. But before I tell you about that, I have to walk you back to the beginning…
In 2008, Meta (then Facebook) was nowhere near the size it is today. Before we built our first data center in Prineville, Oregon, and founded the Open Compute Project, we did what many other companies that need data center capacity do — we leased or rented data center space from colocation providers. This sort of arrangement works fine until the market experiences a major shock … something like the 2008 financial crisis.
The financial crisis hit the data center business right at a time when we were in the middle of a negotiation with one of the big colocation providers. They didn’t want to commit to all this spending until they had a better idea of what 2009 would be like. This was totally understandable from a business perspective, but it put us, as a potential customer, in a rather uncomfortable position.
We ended up making smaller deals, but they weren’t efficient from the standpoint of what we ultimately wanted — a way to handle how rapidly Facebook was growing. On the Infrastructure team, we always wanted infrastructure that would facilitate the growth of the business rather than hold it back. That’s not easy when your plan for the next two years effectively gets thrown in the trash.
That was the moment where we really asked what we could do to ensure that the company had the infrastructure it would need going forward. The only answer was that we had to take control of our data centers, which meant designing and building our own.
In 2009, we started looking at what it would really mean to build and operate our own data centers, and what our goals should be. We knew we wanted the most efficient data center and server ecosystem possible. To do that, we decided to create an infrastructure that was open and modular, with disaggregated hardware and software that was resilient and portable. Having disaggregated hardware — breaking down traditional data center technologies into their core components — makes it easy and efficient to upgrade our hardware as new technologies become available. And having software that can move around and stay resilient during outages allows us to minimize the number of redundant systems and build less physical infrastructure. All of this makes the data centers less expensive to build and operate, and more efficient.