Unigen targets edge AI market with high-efficiency, hot-swappable E3.S module

Edge AI hardware manufacturer Unigen has launched the Poptart E3.S AI Module, expanding its AI product portfolio with an industry-standard form factor for air-cooled AI inference servers. 

The module integrates with AMD Genoa servers and Network Optix’s VMS AI software, enabling efficient processing of hundreds of video streams from IP security cameras for applications like smart cities, warehouses, and transportation systems. 

The Poptart E3.S delivers 52 TOPS of inference performance while drawing just 10 W, powered by two Hailo-8 edge AI processors, which Unigen says gives it superior power efficiency over competing solutions. 
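
For context, those two figures translate directly into a performance-per-watt number; the short calculation below uses only the 52 TOPS and 10 W quoted in the announcement.

```python
# Back-of-the-envelope efficiency from the announced figures.
tops = 52    # combined throughput of the two Hailo-8 processors
watts = 10   # stated module power draw
print(f"{tops / watts:.1f} TOPS per watt")  # -> 5.2 TOPS per watt
```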

It supports plug-and-play and hot-swap operation in E3.S slots typically used for SSDs, adding AI capability to large server deployments at lower power consumption than GPU modules. 

The module can run multiple neural networks in parallel and can function as a unified array for large language model workloads and other complex AI tasks. It also ships with a software suite supporting AI frameworks such as TensorFlow, PyTorch, and ONNX, along with a dataflow compiler that simplifies neural network model integration. 
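
As an illustration of the kind of workflow such a toolchain implies, the sketch below exports a PyTorch model to ONNX before handing it off to a dataflow compiler. The export uses standard torch and torchvision APIs; the compile step at the end is only an assumption sketched from Hailo's publicly documented SDK, and the names used there (ClientRunner, translate_onnx_model) are illustrative rather than confirmed by this announcement.

```python
# Sketch: preparing a neural network for an edge AI dataflow compiler.
# Requires: torch, torchvision (pip install torch torchvision).
import torch
import torchvision

# Any trained network would do; ResNet-18 is used purely as a stand-in.
model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX, one of the interchange formats the article says the
# module's software suite accepts alongside TensorFlow and PyTorch models.
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)

# Assumed follow-on step (names are illustrative, not confirmed by the article):
# the dataflow compiler translates, quantizes, and compiles the ONNX graph into
# a binary the module's Hailo-8 processors can execute, roughly:
#   from hailo_sdk_client import ClientRunner
#   runner = ClientRunner(hw_arch="hailo8")
#   runner.translate_onnx_model("resnet18.onnx", "resnet18")
#   hef = runner.compile()
```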

The announcement underscores the industry's growing shift toward specialized, energy-efficient hardware tailored to edge AI workloads.

Unigen also recently unveiled DDR5 memory modules to meet rising demands in AI and industrial applications.
