Rise of Edge Computing

The rapid adoption of IoT technology and Big Data infrastructure in recent years is driving enterprises to shift their attention from centralized computing architectures to a distributed, edge-oriented approach. As more data from more sources is generated at the network edge, a growing number of enterprises are adopting edge computing to reduce the latency that arises from transporting data between its sources and central data centers.

In an edge computing architecture, data analytics is performed at the network edge, in proximity to the sources of data generation, such as IoT devices, sensors, tablets, smartphones and other mobile Internet devices. This approach improves the performance of the analytics process, as less data has to be sent to the central cloud.

Edge Servers for Edge Computing

The rising demand for edge computing is largely driven by the widespread deployment of machine learning, Big Data infrastructure, the Internet of Vehicles, industrial robotics and AR/VR facilities in recent years. These deployments generate massive amounts of data as they become connected and “smart”, and transporting all of that edge-generated data to the central cloud for analytics would consume considerable network bandwidth. The resulting latency may indirectly impact business competitiveness, since quicker response times are critical for enterprises to stay ahead of the competition.

To reduce this latency without too much extra infrastructure cost, some companies have positioned their hardware gateways as edge servers, offering network operators hardware alternatives for edge computing deployments. Unlike mainstream gateways, which deal with traffic monitoring and routing, edge servers are designed with considerable computing power and storage capacity to run compute-intensive third-party applications for data analytics at the edge. In other words, edge servers can perform first-stage processing and filtering as soon as data is generated at its source.
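The first-stage processing and filtering described above can be sketched as a simple filter-and-summarize step. This is an illustrative sketch only; the threshold, field names and summary format are hypothetical assumptions, not taken from any particular product.

```python
# Minimal sketch of first-stage edge filtering: forward only anomalous
# readings to the cloud, and reduce the rest to a compact summary.
# The threshold and record fields below are hypothetical.

TEMP_THRESHOLD = 80.0  # hypothetical alert threshold (degrees Celsius)

def filter_readings(readings):
    """Keep only the readings worth forwarding to the central cloud."""
    return [r for r in readings if r["temperature"] > TEMP_THRESHOLD]

def summarize(readings):
    """Aggregate raw readings into a compact summary instead of raw data."""
    temps = [r["temperature"] for r in readings]
    return {
        "count": len(temps),
        "avg_temperature": sum(temps) / len(temps) if temps else None,
    }

readings = [
    {"sensor": "s1", "temperature": 72.5},
    {"sensor": "s2", "temperature": 85.1},
    {"sensor": "s3", "temperature": 79.9},
]

to_cloud = filter_readings(readings)  # only the anomalous reading is sent
summary = summarize(readings)         # compact summary of the full batch
```

The point of the pattern is that only `to_cloud` and `summary` cross the network, rather than every raw reading.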

To further stress the importance of deploying edge servers, it is necessary to take a look at near-future IoT applications such as virtual reality, augmented reality and self-driving cars. These next-generation deployments generate incredible loads of data, especially the self-driving cars: once these automated cars are connected, it is estimated that each generates several terabytes (TB) of data per hour. Since the IoT and 5G eras are expected to deliver very low latency, deploying high-throughput edge servers as close to the data sources as possible is a feasible approach. For instance, when terabytes of data are generated by self-driving cars, a high-throughput edge server like Lanner’s HTCA-6200 can process the incoming data from the connected vehicles, perform real-time analytics and filter the data to the configured destination, for example, the central cloud. The HTCA-6200 comes with dual Intel® Xeon® processor E5-2690 v3/v4 Series on each blade, the Intel® C612 chipset, BCM StrataXGS™ Trident-II BCM56854 Switch Fabric up to 720Gbps, dual network I/O blades at the front panel for switch or Ethernet functions, and a carrier-grade redundancy design, making it capable of handling flooding data from IoT devices and reducing latency.
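To put “several terabytes per hour” in perspective, a back-of-the-envelope conversion shows the sustained bandwidth involved. The 4 TB/hour figure below is a hypothetical assumption within the “several TB” range quoted above, not a measured value.

```python
# Back-of-the-envelope bandwidth estimate, assuming a hypothetical
# 4 TB of data generated per connected vehicle per hour.

TB = 10**12  # terabyte in bytes (decimal convention)

data_per_hour = 4 * TB                      # bytes per vehicle per hour
bytes_per_sec = data_per_hour / 3600        # sustained bytes per second
gbps_per_vehicle = bytes_per_sec * 8 / 1e9  # sustained gigabits per second

# Vehicles a 720 Gbps switch fabric could sustain at line rate
fabric_gbps = 720
vehicles = fabric_gbps / gbps_per_vehicle

print(f"{gbps_per_vehicle:.1f} Gbps per vehicle")
print(f"~{vehicles:.0f} vehicles at line rate")
```

Under these assumptions, each vehicle sustains roughly 9 Gbps, so even a 720 Gbps fabric serves on the order of tens of vehicles at full line rate, which is why first-stage filtering at the edge matters.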

Another benefit of edge servers is that they conserve a significant amount of network resources and bandwidth, which boosts the overall performance of the IoT network infrastructure.

Future Considerations

Facing the transition to the 5G and IoT era, network infrastructure has become increasingly disaggregated. To provide more well-rounded network services, many network operators have deployed white-box hardware with ample computing power, graphics processing and storage at the edge to run third-party open-source applications. At the same time, this opens the door for more players to enter the market, as network operators now have more options for scaling up their IoT infrastructures.

Before the potential of edge computing can be truly realized, network operators have to consider the following challenges:

  1. Hardware capability of edge servers: as mentioned, edge servers have to perform data analytics, so hardware capabilities such as processor, memory, storage and even graphics processing must be considered; the HTCA-6200 mentioned earlier is one example.
  2. Network aggregation: the communication path between the edge and the central cloud must be assured, in case a flood of data exceeds what the edge servers can handle.
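The second challenge, coping with data floods that exceed edge capacity, amounts to an overflow policy: process locally up to capacity and forward the excess upstream. The sketch below is a hypothetical illustration of that policy; the capacity figure and function names are assumptions.

```python
# Hypothetical sketch of an overflow policy: the edge server analyzes
# records up to its local capacity and forwards the excess to the cloud.

EDGE_CAPACITY = 100  # hypothetical max records the edge analyzes per batch

def dispatch(batch):
    """Split a batch into (processed_at_edge, forwarded_to_cloud)."""
    at_edge = batch[:EDGE_CAPACITY]
    to_cloud = batch[EDGE_CAPACITY:]  # overflow goes upstream unprocessed
    return at_edge, to_cloud

small_batch = list(range(40))
big_batch = list(range(250))

edge1, cloud1 = dispatch(small_batch)  # all 40 handled locally
edge2, cloud2 = dispatch(big_batch)    # 100 handled locally, 150 forwarded
```

The design consequence is that the edge-to-cloud link must be provisioned for the worst-case overflow, not just the filtered steady-state traffic.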