To meet the evolving demands of high-performance computing (HPC) and AI-driven data centers, NVIDIA has introduced the MGX platform. This modular server reference architecture is designed to address the complex needs of modern computing environments, offering scalable solutions that adapt to the varying operational requirements of diverse industries. Its future-proof design lets system manufacturers deliver customized server configurations that optimize power, cooling, and budget, equipping organizations to handle next-generation computing workloads.
Overview
NVIDIA MGX offers a modular reference architecture that allows for hundreds of different system configurations. This flexibility facilitates rapid adoption of key platform technologies like CPUs, GPUs, and DPUs with minimal investment in engineering resources. MGX is designed to meet the varied demands of applications ranging from AI and HPC to Omniverse environments, providing system builders with the capability to quickly and cost-effectively create tailored solutions.
Modular Architecture and Bays
MGX's modular design includes various form factors compatible with current and future NVIDIA hardware, including 1U, 2U, and 4U chassis sizes in both air-cooled and liquid-cooled options. The platform supports a full range of NVIDIA GPUs and CPUs, from the latest NVIDIA Grace CPU Superchip and GH200 Grace Hopper Superchip to standard x86 CPUs. It also integrates NVIDIA networking technology with components such as the NVIDIA BlueField®-3 DPU and ConnectX®-7 network adapters.
Key Features and Benefits
Advanced Hardware Compatibility:
MGX supports a broad array of NVIDIA's cutting-edge GPUs and CPUs, including the latest NVIDIA Grace CPU Superchip and GH200 Grace Hopper Superchip. The platform's ability to support multi-generational hardware ensures that system manufacturers can adapt existing designs to incorporate new NVIDIA technologies without the need for expensive redesigns. This multi-generational compatibility enhances the longevity and scalability of the infrastructure, safeguarding investments over time.
Enhanced AI Performance:
MGX leverages NVIDIA's powerful GPUs and CPUs to deliver high performance for AI training and inference, making it ideal for edge computing where quick data processing is crucial. The support for NVIDIA AI Enterprise software further enhances its capabilities by providing a robust framework for developing and deploying AI applications.
Energy & Power Efficiency:
The MGX platform is designed with a strong focus on energy efficiency, addressing the increasing importance of energy-conscious computing in reducing both operational costs and environmental impact. Its modular architecture enhances power management by allowing for customized power delivery that meets the specific requirements of each deployment. This targeted approach not only helps in minimizing the total cost of ownership but also supports sustainable computing practices. Additionally, MGX offers both air and liquid cooling options, optimizing energy consumption while maintaining high performance levels.
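To make the idea of tailoring power delivery to a specific deployment concrete, here is a minimal sketch of how a system builder might model the power budget of a hypothetical MGX-style configuration. This is not an NVIDIA tool; the module names and all wattage figures are placeholder assumptions, not published specifications.

```python
# Illustrative sketch: estimating the power budget of a hypothetical
# modular server configuration. All wattage figures are placeholder
# assumptions, not published NVIDIA specifications.

# module name -> assumed power draw in watts (placeholder values)
MODULES = {
    "cpu_superchip": 500,
    "gpu": 350,
    "dpu": 75,
    "nic": 25,
}

def total_power(config: dict) -> int:
    """Sum assumed per-module draw over a configuration of module counts."""
    return sum(MODULES[name] * count for name, count in config.items())

def fits_budget(config: dict, budget_watts: int) -> bool:
    """Check whether a configuration stays within a chassis power budget."""
    return total_power(config) <= budget_watts

# A hypothetical edge deployment: one CPU superchip, two GPUs, one DPU.
edge_config = {"cpu_superchip": 1, "gpu": 2, "dpu": 1}
print(total_power(edge_config))        # 1275
print(fits_budget(edge_config, 1600))  # True
```

A builder could extend this with per-chassis cooling limits (air vs. liquid) to compare deployment options before committing to hardware.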
NVIDIA MGX as an Edge AI Server
The NVIDIA MGX platform's modular, flexible architecture efficiently meets the diverse demands of data centers, particularly in edge computing scenarios. Combining robust hardware with sophisticated software and networking capabilities, MGX is well suited to edge AI applications. Its versatility ensures that systems are not only scalable but also adaptable to future technological advances, making it an excellent choice for businesses deploying flexible, high-performance, energy-efficient AI solutions at the edge. With support for varied configurations and compatibility with existing infrastructure standards, MGX is a future-proof option for a wide range of industry needs.
Lanner’s ECA-6051, designed for AI acceleration in 5G infrastructure, features CPU boards equipped with either NVIDIA's 72-core Grace Arm Neoverse V2 or an Intel® Xeon® 6 processor, supported by up to 1,536 GB of DDR5 memory. Its compact, 420 mm-deep short chassis with front access and redundant power supplies makes it ideal for space-constrained 5G edge networks. Configured with up to three PCIe 5.0 x16 slots, this edge AI server accommodates a variety of PCIe cards, including NVIDIA's L40S GPU, H100 GPU, BlueField®-3 DPU, and ConnectX®-7 network adapters. With support for multiple GPUs and DPUs, the ECA-6051 boosts AI performance at the 5G edge, enabling rapid deployment of AI-driven applications with minimal latency and giving telecom operators a scalable, high-performance platform for building secure and efficient 5G radio access networks.
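As a rough illustration of how the slot count constrains card choices, the sketch below enumerates card combinations that fit three x16 slots under an add-in-card power budget. This is not a Lanner or NVIDIA configurator: the card names are real products, but the single-slot-per-card assumption and the wattage figures are placeholders for illustration only.

```python
# Illustrative sketch: enumerating PCIe card loadouts for a chassis with
# up to three x16 slots (as on the ECA-6051). Power figures below are
# placeholder assumptions, not vendor specifications.
from itertools import combinations_with_replacement

# card -> assumed board power in watts (placeholder figures)
CARDS = {
    "L40S": 350,
    "H100": 350,
    "BlueField-3": 75,
    "ConnectX-7": 25,
}

def valid_loadouts(max_slots: int, power_budget: int):
    """Yield card combinations that fit the slot count and power budget."""
    for n in range(1, max_slots + 1):
        for combo in combinations_with_replacement(sorted(CARDS), n):
            if sum(CARDS[c] for c in combo) <= power_budget:
                yield combo

# e.g. list all loadouts for three slots under an 800 W add-in-card budget
for combo in valid_loadouts(3, 800):
    print(combo)
```

One dual-GPU-plus-DPU mix, for instance, passes or fails purely on the assumed budget, which is the kind of trade-off an operator would weigh when sizing a space- and power-constrained 5G edge site.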