Realizing A.I. and Machine Learning at the Edge

The strong demand for artificial intelligence (A.I.) and machine learning has contributed to a shift from the cloud to the edge. It has been argued that A.I. computation should move back to the edge, where A.I.-based devices are actually deployed, and that device-based edge computing can run real-time analytics and keep A.I. and machine learning applications on-premises better than the cloud can.

A.I. in the Cloud Data Center

However, most of today’s A.I. and machine learning deployments still rely on the cloud for processing, computing and analytics, because cloud data centers offer high compute, large storage and memory, and strong graphics processing capability. In these deployments, the edge merely collects data from nearby sensors and sends it to the cloud. The major drawback is obvious: when every edge device sends its data to the cloud, the bandwidth becomes congested, causing latency. In fact, bandwidth and latency are two of the major concerns limiting the future growth of A.I. applications.

Take self-driving cars and city surveillance as examples. A self-driving car is like a large portable workstation, consisting of computing devices and electronically controlled units. The computing devices collect all the data from those units in order to report vehicle status, so a moving self-driving car may generate an enormous volume of data each day. If all of that data is sent to the cloud, the bandwidth will be constantly occupied, resulting in significant latency. The consequence is similar for city surveillance. To improve public safety, many local governments have been installing more and more IP cameras. The cameras monitor their surroundings and capture footage, and when all of that footage is streamed upstream, the volume is too overwhelming for the cloud to analyze without latency.
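As a rough back-of-envelope sketch (the camera count and per-stream bitrate below are illustrative assumptions, not figures from any real deployment), a few lines of Python show how quickly raw footage saturates an uplink:

```python
# Back-of-envelope estimate of upstream bandwidth for a city
# surveillance deployment. All figures are illustrative assumptions.

CAMERAS = 1_000       # assumed number of IP cameras in the city
BITRATE_MBPS = 4      # assumed per-camera H.264 stream, ~1080p

total_mbps = CAMERAS * BITRATE_MBPS
daily_tb = total_mbps / 8 * 86_400 / 1_000_000  # Mbit/s -> MB/s -> TB/day

print(f"Aggregate upstream load: {total_mbps} Mbit/s")
print(f"Raw video generated per day: {daily_tb:.1f} TB")
# ~4 Gbit/s sustained and ~43 TB/day -- far more than a typical
# uplink to a remote cloud region can absorb without congestion.
```

Even under these modest assumptions, raw video alone would fully occupy a multi-gigabit uplink around the clock, before any other traffic is considered.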

These two applications make it obvious that A.I. and machine learning consume tremendous computational resources: high-performance CPUs, fast memory, large storage, and graphics or switching capability. Those characteristics are usually found in cloud data centers, which explains why most infrastructure owners still rely on the cloud to execute A.I. algorithms and analytics, even though the latency makes the results less “real-time”.

Fortunately, many key players in both the software and hardware domains have recognized this limitation and proposed more capable, higher-performance edge platforms that can run A.I. algorithms at the edge and perform real-time analytics in response to critical scenarios. In other words, an edge that functions as a distributed data center can greatly relieve upstream bandwidth and eliminate latency issues.
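As a minimal sketch of what such an edge platform could do, assuming a local ONNX model (model.onnx), a 224×224 input layout and a hypothetical alert endpoint, the loop below analyzes footage on the spot and sends only compact event metadata upstream:

```python
# Minimal sketch of an edge analytics loop: run inference locally and
# forward only compact event metadata upstream, never the raw frames.
# "model.onnx", the input shape and the alert endpoint are assumptions.

import json
import urllib.request

import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")   # local model on the edge box
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture(0)                      # attached IP/USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Preprocess to the (assumed) 1x3x224x224 float32 layout.
    blob = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    blob = blob.transpose(2, 0, 1)[np.newaxis, ...]

    scores = session.run(None, {input_name: blob})[0].ravel()
    if scores.max() > 0.9:                     # assumed alert threshold
        event = {"label": int(scores.argmax()), "score": float(scores.max())}
        req = urllib.request.Request(
            "https://cloud.example.com/alerts",  # hypothetical endpoint
            data=json.dumps(event).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)            # a few bytes, not megabits
```

The key design choice is that raw frames never leave the premises; each alert is a few hundred bytes of JSON rather than a continuous multi-megabit video stream.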

Solution: A.I. Computing at the Edge Data Center

Proximity matters. Creating a distributed, localized data center at the edge keeps mission-critical applications on-premises, which greatly improves resilience and reliability.

Turning edge computing into an edge data center requires reconsidering the types of devices deployed. Traditionally, as mentioned earlier, most edge computing adopts low-cost, basic devices that merely collect data from sensors and send it to the cloud. These primitive devices can neither perform real-time analytics nor respond to critical scenarios for infrastructure owners. Therefore, if the edge is expected to keep mission-critical applications on-premises, those primitive devices must be replaced with capable hardware platforms that can act as a true edge data center.

A true edge data center performs processing and analytics on the spot for the data collected in its environment. Applications such as fintech, self-driving cars, smart factories and city surveillance generate tremendous volumes of data each day; an edge data center can cache and aggregate that data, process the packets and execute analytics algorithms locally, as sketched below. Under the edge data center model, network congestion is relieved, latency is reduced and performance is greatly boosted.
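A hedged sketch of that cache-and-aggregate idea, with the buffer size, summary fields and upstream URL all chosen purely for illustration:

```python
# Sketch of the cache-and-aggregate pattern: buffer raw readings locally,
# then ship a compact summary upstream on an interval. Field names and
# the upstream URL are illustrative assumptions.

import json
import time
import urllib.request
from collections import deque
from statistics import mean

buffer = deque(maxlen=100_000)   # local cache on the edge data center

def ingest(reading: float) -> None:
    """Called for every raw sensor reading; data stays on-premises."""
    buffer.append(reading)

def flush_summary() -> None:
    """Send a small aggregate upstream instead of the raw stream."""
    if not buffer:
        return
    summary = {
        "count": len(buffer),
        "mean": mean(buffer),
        "max": max(buffer),
        "ts": time.time(),
    }
    buffer.clear()
    req = urllib.request.Request(
        "https://cloud.example.com/summaries",   # hypothetical endpoint
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Here ingest() absorbs every raw reading locally, while flush_summary() ships only a handful of aggregates on whatever schedule the operator chooses, keeping the uplink nearly idle.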

Network Appliances for A.I. Computing at the Micro Data Center

An optimal edge data center should be scalable, flexible, compute-intensive, modular and built on an open architecture. The main reason is future expansion: when businesses expand their edge services, they need hardware platforms with these characteristics so that they can repeat the same deployments without compatibility or reliability issues.

For instance, Lanner’s HTCA-6200 may be a perfect fit as an NFV-ready data center platform at the edge. The device has been validated as NFV-ready against major infrastructure standards, and NFV is an optimal infrastructure for distributed networking such as the edge. Hardware-wise, the HTCA-6200 is carrier-grade, NEBS-certified and highly modular to meet edge computing requirements. The robust platform delivers powerful processing, large storage and high switching capacity, suiting it to applications that generate enormous volumes of data.

Featured Products


HTCA-6200

High Availability Chassis 2U Telecom Network Appliance with 2 x86 CPU Blades and 2 I/O Blades

CPU: Intel® Xeon® Processor E5-2600 v3/v4 Series
Chipset: Intel C612
