On May 18 (Eastern Time), Meta, the parent company of Facebook, disclosed new developments in its artificial intelligence (AI) projects across multiple announcements. Having made strides in the metaverse and in AI, Meta is now shifting its focus to chips.
AI as the Core Infrastructure
Since breaking ground on its first data center in 2010, Meta has gradually built out a global infrastructure. Its data centers reportedly serve as the engine for Meta's applications (including Facebook, Instagram, Messenger, and WhatsApp), providing services to over 3 billion people daily.
AI is an integral part of Meta's data center systems, from the Big Sur server hardware of 2015, to the development of PyTorch, to the supercomputers it now uses for AI research. Meta is constructing an AI-centered infrastructure. The announced progress includes:
MTIA (Meta Training and Inference Accelerator) is a custom chip designed in-house by Meta to accelerate AI training and inference workloads. By deploying MTIA chips alongside GPUs, Meta expects to deliver better performance, lower latency, and higher efficiency for each workload.
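The co-deployment idea described above can be sketched as a simple routing rule: each workload is matched to the accelerator best suited to it. The device names and the heuristic below are illustrative assumptions for this sketch, not Meta's actual scheduler or API.

```python
# A toy illustration of heterogeneous accelerator placement:
# route each workload to either a custom accelerator pool or GPUs.
# "mtia" and "gpu" are placeholder pool names, and the batch-size
# heuristic is an assumption made up for this example.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str        # "training" or "inference"
    batch_size: int

def pick_device(w: Workload) -> str:
    """Send small-batch inference to the accelerator pool and
    heavier training jobs to GPUs (an invented heuristic)."""
    if w.kind == "inference" and w.batch_size <= 64:
        return "mtia"
    return "gpu"

jobs = [
    Workload("ranking-serve", "inference", 32),
    Workload("model-train", "training", 1024),
]
placements = {j.name: pick_device(j) for j in jobs}
print(placements)  # {'ranking-serve': 'mtia', 'model-train': 'gpu'}
```

The point of the sketch is only that a mixed fleet lets each workload land on the hardware where it runs most efficiently, which is the rationale the announcement gives for pairing MTIA chips with GPUs.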
Next-Generation Data Centers
Meta’s next-generation data center design will support its current products while accommodating the training and inference needs of future generations of AI hardware. The new data centers will feature an AI-optimized design, supporting liquid-cooled AI hardware and high-performance AI networks that connect thousands of AI chips into data center-scale AI training clusters.