

Meta is embarking on one of its most ambitious infrastructure projects to date, with CEO Mark Zuckerberg announcing plans to build a 5-gigawatt (GW) AI-centric data center as part of the company’s broader mission to power artificial general intelligence (AGI). The initiative will involve a series of large-scale data centers around the world, designed to accommodate future generations of Meta’s AI workloads, including its Llama model family.
The announcement, made during a recent video address, underlines Meta’s aggressive pivot toward building the computational backbone required for next-gen AI systems. According to Zuckerberg, the company intends to create a network of data center clusters, each spanning multiple gigawatts, that will collectively support the development of "superintelligence"—a term he used to describe the powerful general-purpose AI Meta is aiming to achieve.
A Strategic Infrastructure Shift
The 5GW facility, once completed, will be among the largest AI-focused data centers globally. Meta’s goal is to build several such clusters to ensure the scalability and efficiency of AI training across its platforms, including Facebook, Instagram, WhatsApp, and Threads.
The shift to purpose-built AI infrastructure marks a strategic deviation from traditional cloud and data center design. These new centers will be optimized for AI training, inference, and deployment—particularly for large language models (LLMs) like Meta’s open-source Llama 3 and the upcoming Llama 4.
Zuckerberg emphasized that the compute requirements for AGI are orders of magnitude higher than today’s needs. The planned investment reflects this understanding, with hundreds of billions of dollars likely to be spent over the next decade on infrastructure, hardware, and energy optimization.
Ramping Up Compute Power
Meta has already made strides in scaling its compute capabilities. The company is on track to deploy approximately 350,000 NVIDIA H100 GPUs by the end of 2024 for use across its research and production environments. According to Zuckerberg, the overall fleet will amount to roughly 600,000 H100-equivalents of compute once the company’s other GPUs are included.
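For readers unfamiliar with "H100-equivalent" figures, they are typically derived by weighting each GPU family by its relative training throughput. The sketch below is purely illustrative: the fleet sizes (other than the 350,000 H100s cited above) and the throughput ratios are assumptions, not numbers disclosed by Meta.

```python
# Rough sketch of an "H100-equivalent" compute tally.
# All weighting factors and the non-H100 fleet sizes are illustrative
# assumptions, not figures disclosed by Meta.

# Assumed training throughput relative to an H100
relative_throughput = {
    "H100": 1.0,
    "A100": 0.5,   # assumed: roughly half an H100 for LLM training
    "other": 0.3,  # assumed catch-all for older accelerators
}

# Hypothetical fleet composition
fleet = {
    "H100": 350_000,   # figure cited in the article
    "A100": 300_000,   # assumed
    "other": 250_000,  # assumed
}

h100_equivalents = sum(
    count * relative_throughput[gpu] for gpu, count in fleet.items()
)
print(f"Approx. H100-equivalents of compute: {h100_equivalents:,.0f}")
```

With these assumed ratios the tally lands in the high 500,000s, which is how a mixed fleet can be summarized as "roughly 600,000 H100-equivalents."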
These resources will play a pivotal role in training and refining Meta’s suite of AI tools, particularly as the company plans to keep pushing its models into more capable domains, including reasoning, planning, and multi-modal understanding.
Focus on Open-Source AI and Control
Unlike rivals such as OpenAI and Google DeepMind, Meta has been vocal about maintaining transparency and accessibility in its AI models. Zuckerberg reiterated the company’s commitment to open-source AI development, indicating that Meta will continue to publish the weights and architectures of its LLMs.
This approach, he argued, is essential to ensuring broader innovation in the AI ecosystem and giving developers and researchers more control over the tools they use. Meta believes that an open-source future—combined with robust infrastructure—can democratize AI and help build safer and more accountable systems.
Competitive Landscape
Meta’s move comes at a time when major tech companies are racing to secure AI dominance through investments in infrastructure, talent, and model training. Microsoft, Google, Amazon, and NVIDIA have all committed vast resources to building scalable AI platforms, whether through partnerships or internal hardware development.
However, Meta’s long-term vision appears to center around deep vertical integration—from chips to data centers to model deployment—giving the company more control over both performance and cost efficiency.
Energy and Sustainability Considerations
While specific timelines and locations for the data center clusters have yet to be disclosed, energy usage and sustainability are expected to be central considerations. A single gigawatt of continuous power is enough to supply on the order of 800,000 average U.S. homes, making Meta’s 5GW plan a substantial energy undertaking.
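For context, that comparison follows from simple arithmetic. A minimal sketch, assuming an average U.S. household consumption of about 10,500 kWh per year (an assumption about typical residential usage, not a figure from Meta), reproduces the estimate:

```python
# Back-of-the-envelope check of the gigawatt-to-homes comparison.
# The household consumption figure is an assumption based on typical
# U.S. residential usage, not a number from Meta.

ANNUAL_KWH_PER_HOME = 10_500   # assumed average U.S. household usage
HOURS_PER_YEAR = 8_760
avg_household_kw = ANNUAL_KWH_PER_HOME / HOURS_PER_YEAR  # ~1.2 kW average draw

KW_PER_GW = 1_000_000
homes_per_gw = KW_PER_GW / avg_household_kw

planned_gw = 5
print(f"Homes supplied per GW: {homes_per_gw:,.0f}")
print(f"Homes supplied by a {planned_gw} GW campus: {homes_per_gw * planned_gw:,.0f}")
```

Under these assumptions, one gigawatt corresponds to roughly 830,000 homes, so a 5GW campus would draw as much power as several million households.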
Meta previously stated that its data centers would be powered by 100% renewable energy, a commitment that will be tested as it expands globally. The company has also invested in heat recycling, liquid cooling, and custom chip designs to manage power consumption more efficiently.
Outlook
With this latest announcement, Meta is positioning itself not just as a social media giant, but as a frontrunner in AI infrastructure innovation. As competition in the AI arms race intensifies, the company’s bold bet on data centers and compute power may determine how far, and how quickly, its superintelligence ambitions can be realized.