Video Management System (VMS) deployments have reached a point where enterprises find it impractical to monitor the resulting stream of video data around the clock. The demand for scalable, real-time video analysis is one of the primary catalysts driving the integration of AI into VMS infrastructure. Faced with these immense volumes of video data, more and more enterprise organizations are finding success pairing the Milestone VMS solution with edge AI technology deployed near the video cameras to enable intelligent video analytics (IVA) applications.

Lanner partners with Milestone to build an API gateway that eases the integration of IVA applications with the Milestone XProtect VMS. Pre-validated with Milestone AI Bridge, the Lanner LEC-2290E, an NVIDIA-certified Edge AI appliance, can host the AI Bridge Docker containers that enable the exchange of data between these IVA applications and the VMS.
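
As a rough illustration of this container-based setup, the sketch below uses the Docker SDK for Python to start a placeholder gateway container on the appliance. The image name, environment variables, and network are hypothetical; the actual images and settings come from Milestone's AI Bridge documentation.

```python
# Illustrative only: start a placeholder "AI Bridge"-style gateway container
# on the edge appliance with the Docker SDK for Python. The image name,
# environment variables, and network are hypothetical placeholders.
import docker

client = docker.from_env()

client.containers.run(
    "registry.example.com/milestone/ai-bridge:latest",  # hypothetical image
    name="ai-bridge",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    environment={
        "VMS_HOST": "xprotect.example.local",            # hypothetical VMS address
        "VMS_PORT": "443",
    },
    network="iva-net",  # shared network with the IVA application container
)
```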

 

Deploy your AI/ML Pipeline in Minutes

Scailable drastically reduces edge-AI pipeline development time from months to hours. With Scailable, data scientists can independently iterate on and optimize AI models, while field operations teams deploy the new models across a diverse fleet of edge devices without re-engineering each device individually.
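
As a minimal sketch of the hand-off this enables (the model and file names are illustrative, and this is not Scailable-specific code), a data scientist exports a trained model to a portable ONNX artifact, which a deployment platform can then roll out to the fleet:

```python
# Minimal sketch: export a trained model to a portable ONNX artifact that a
# deployment platform can push to edge devices. The model, shapes, and file
# name are illustrative.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Stand-in for a trained vision model."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyClassifier().eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example camera-frame shape

torch.onnx.export(
    model,
    dummy_input,
    "tiny_classifier.onnx",
    input_names=["image"],
    output_names=["scores"],
    dynamic_axes={"image": {0: "batch"}},  # allow variable batch size
)
```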

 

Hardware Independent Deployment

Scailable can deploy the same AI model across different generations of AI hardware platforms. For customers that have already deployed Lanner's EAI-I130 (Jetson Xavier) and want to add the EAI-I131 (Jetson Orin) to the fleet going forward, Scailable automatically adjusts the model version for each device, eliminating the need for embedded engineering. This shortens time to market, optimizes hardware selection, and simplifies fleet management.
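
The sketch below illustrates the underlying idea rather than Scailable's internal mechanism: the same ONNX file is loaded on either Jetson generation, and standard ONNX Runtime execution providers optimize it for whatever GPU is present. The model file name is illustrative.

```python
# Idea sketch (not Scailable's internal mechanism): load one ONNX artifact on
# a Jetson Xavier or Jetson Orin and let the runtime's execution providers
# optimize it for the GPU generation that is actually present.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "tiny_classifier.onnx",
    providers=[
        "TensorrtExecutionProvider",  # builds a device-specific engine when TensorRT is available
        "CUDAExecutionProvider",      # generic NVIDIA GPU fallback
        "CPUExecutionProvider",       # last-resort fallback
    ],
)

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
scores = session.run(["scores"], {"image": frame})[0]
print(scores)
```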

 

Over-the-Air Remote Management

Customers managing a fleet of edge-AI devices in the field can use Scailable to deploy updated AI models over the air as soon as those models are released. Scailable eliminates the need for chipset-specific custom development and eases maintenance across the various generations of devices and AI chipsets that may have been introduced into the fleet over time.
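
As a hedged illustration of the general over-the-air pattern, and not Scailable's actual protocol, the sketch below has an edge device poll a hypothetical model registry, download the new ONNX model when the published version changes, and swap it into the running inference session. The URL, JSON fields, and file names are made up.

```python
# Illustrative over-the-air update loop; NOT Scailable's actual protocol.
# The registry URL, JSON fields, and file names are hypothetical.
import time

import onnxruntime as ort
import requests

REGISTRY_URL = "https://models.example.com/fleet/camera-analytics"  # hypothetical
current_version = None
session = None  # active inference session used by the application

while True:
    manifest = requests.get(f"{REGISTRY_URL}/latest.json", timeout=10).json()
    if manifest["version"] != current_version:
        # Download the updated model artifact and hot-swap the session.
        model_bytes = requests.get(manifest["model_url"], timeout=60).content
        with open("model.onnx", "wb") as f:
            f.write(model_bytes)
        session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
        current_version = manifest["version"]
        print(f"Deployed model version {current_version}")
    time.sleep(300)  # poll every five minutes
```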

 

 

Scailable on Lanner’s Edge AI Appliances

Scailable simplifies development and shortens time to market when building Edge AI pipelines for all of Lanner's Rugged Edge AI Appliances powered by NVIDIA, Hailo, and Intel AI architectures, and it facilitates migrating AI models from one architecture to another.

 

NVIDIA Edge AI Appliances:

With native support for NVIDIA Jetson and GPU architectures, the Scailable platform optimizes and deploys AI models across devices such as the EAI-I130, EAI-I131, LEC-2290E, or any Edge AI server featuring an NVIDIA card.

Hailo Edge AI Appliances:

Scailable supports parallel AI processing across the available Hailo AI processors on Lanner's Falcon H8 card, enabling seamless high-performance AI application development on devices such as the LEC-2290H, LEC-7242H, IIoT-I530H, or any Edge AI server equipped with a Falcon H8 card.

Non-accelerated Edge AI Appliances:

For non-accelerated applications, Scailable supports highly efficient AI inference on native Intel architectures, allowing a diverse range of AI models to run smoothly on devices such as the IIoT-I530, IIoT-I531, and any Edge AI server with an Intel CPU running Linux.
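
As a rough sketch of what running the same model on different back ends can look like at the runtime level, the example below picks whichever standard ONNX Runtime execution provider the host offers. Scailable's own tooling abstracts this away, and Hailo accelerators are typically driven through Hailo's own compiler and runtime rather than ONNX Runtime; the model file name is illustrative.

```python
# Rough sketch: run the same ONNX model on whichever back end the host
# offers -- TensorRT/CUDA on NVIDIA systems, OpenVINO on Intel systems,
# plain CPU otherwise. Hailo accelerators typically use Hailo's own
# toolchain instead of ONNX Runtime. The model file name is illustrative.
import onnxruntime as ort

PREFERRED = [
    "TensorrtExecutionProvider",   # NVIDIA Jetson / discrete GPU
    "CUDAExecutionProvider",       # NVIDIA GPU
    "OpenVINOExecutionProvider",   # Intel CPU / iGPU / VPU
    "CPUExecutionProvider",        # universal fallback
]

available = ort.get_available_providers()
providers = [p for p in PREFERRED if p in available]

session = ort.InferenceSession("tiny_classifier.onnx", providers=providers)
print("Running on:", session.get_providers()[0])
```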

 

Ready to Deploy?

 

Request a Demo