Enabling Heavy AI Workloads for Edge Infrastructure

A Short-chassis Edge AI Server is a compact yet powerful AI computing solution designed to handle heavy AI workloads directly at the network edge. Unlike data center AI servers, these systems are optimized for deployment in space-constrained edge environments. By bringing high-performance AI computing closer to the data source, short-chassis edge AI servers minimize latency, enhance real-time AI inference capabilities, and improve overall system efficiency.

Lanner provides a comprehensive range of short-depth edge AI servers aimed at empowering heavy AI workloads at the edge. With Lanner's advanced hardware solutions, businesses can achieve faster decision-making, improved operational efficiency, and enhanced security. Additionally, Lanner's edge servers feature a scalable design and can incorporate multi-GPU/Smart NIC support, enabling parallel processing and significantly boosting AI and machine learning performance in the most demanding edge environments.

ECA-5540

5th Gen Intel Xeon Scalable
Max. 1024GB RAM
1x PCIe x16 / 2x PCIe x8 Slots
2 x 2.5" HDD/SSD

ECA-6040

5th Gen Intel Xeon Scalable
Max. 1024GB RAM
2x PCIe x16 / 2x PCIe x8 Slots
4 x 2.5" HDD/SSD

ECA-6051

Intel Xeon 6/NVIDIA GH200
Max. 1536GB RAM
3x PCIe x16 Slots
2 x E1.S SSD

ECA-6051

Designed to accelerate AI training and inference in 5G infrastructure, the ECA-6051 is a 2U short-depth edge AI server featuring the Intel Xeon 6 Processor or NVIDIA GH200 Grace Hopper Superchip. The ECA-6051 aims to enable low-latency multi-access edge computing applications, including video transcoding, factory visual inspection, and RAN intelligent control, in private 5G networks.

Featuring a front-access, 420mm-deep short chassis and redundant power supplies, the ECA-6051 is specifically designed for deployment in space-constrained 5G edge network locations. Configured with up to 3 PCIe 5.0 x16 slots, the modular ECA-6051 can accommodate three PCIe cards, including the NVIDIA L40S GPU, NVIDIA H100 Tensor Core GPU, NVIDIA BlueField-3 DPU, and NVIDIA ConnectX-7 network adapters.

ECA-6040

The ECA-6040 is a powerful 2U short-chassis DU/CU and AI server appliance for 5G RAN virtualization and real-time inferencing at the 5G edge. The platform supports up to 1024GB of DDR5 system memory, 1x OCP 3.0 slot, and 4x PCIe expansion slots (2x FHFL PCIe x16 and 2x low-profile PCIe x8), accommodating multiple NVIDIA GPU cards, such as 2x NVIDIA L40S and 2x L4 GPUs.

Featuring front-access I/O ports, a paint-free design, and IPMI remote management, the ECA-6040 is powered by the 5th Gen Intel Xeon Scalable Processors, with up to 64 cores of computing prowess for supercharged virtualization performance and improved power efficiency.

ECA-5540

The ECA-5540 has been validated as an NVIDIA-Certified System for industrial edge computing. It is designed to accelerate the deployment of AI-driven applications and data-intensive workloads at the network edge, while minimizing the latency caused by transmitting data to centralized servers in the cloud.

Configured with NVIDIA L4 or L40S GPU cards, the NVIDIA-Certified ECA-5540 edge AI server significantly accelerates AI performance for service providers, powering deep learning inference in proximity to data-generating devices in environments such as retail, factories, smart cities, and healthcare.