Room to expand: Gen 4 expansion system for AI edge computing revealed by OSS

One Stop Systems has expanded its PCI Express Gen 4 product line with the introduction of the 4U Pro, a Gen 4 professional expansion platform that delivers twice the performance of the company's popular Gen 3 solutions.

The 4U Pro offers a significantly enhanced and configurable feature set compared to the OSS Gen 4 4UV product launched earlier this year. At only 18.5” deep, the 4U Pro’s proven Gen 4 capabilities enable quick time-to-market for OSS customers with harsh and demanding edge computing applications.

The 4U Pro provides an industry-leading 1 terabit per second of low-latency, externally cabled PCIe interconnect between the latest high-performance servers and AI accelerators, making it ideal for powering large-scale AI data acquisition, training, and inference applications at the edge.

In addition to four Gen 4 x16 host connections, the 4U Pro features multiple slot configurations, including eight x16 slots that can support up to eight NVIDIA A100 Tensor Core GPUs, which deliver the acceleration and flexibility for AI, data analytics and HPC needed to tackle the world’s toughest computing challenges.
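
The 1 terabit-per-second figure lines up with the aggregate raw bandwidth of those four Gen 4 x16 host links. A minimal back-of-the-envelope sketch, assuming the figure refers to unidirectional bandwidth summed across all four cabled links:

```python
# Back-of-the-envelope check of the ~1 Tb/s interconnect figure, assuming it
# refers to raw unidirectional bandwidth summed across the four Gen 4 x16
# cabled host links (128b/130b line-encoding overhead included).
GEN4_GT_PER_LANE = 16.0      # PCIe Gen 4 signalling rate per lane (GT/s)
ENCODING = 128 / 130         # 128b/130b encoding efficiency
LANES_PER_LINK = 16          # x16 link
HOST_LINKS = 4               # four host connections on the 4U Pro

gbps_per_link = GEN4_GT_PER_LANE * ENCODING * LANES_PER_LINK  # ~252 Gb/s
total_gbps = gbps_per_link * HOST_LINKS                       # ~1,008 Gb/s

print(f"{gbps_per_link:.0f} Gb/s per x16 link, "
      f"{total_gbps:.0f} Gb/s (~{total_gbps / 1000:.1f} Tb/s) aggregate")
```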

The 4U Pro can also be configured with 16 x8 slots, which can be populated with 16 of Liqid’s new LQD4500 PCIe Gen 4 NVMe drives to store more than 500 terabytes of data at industry-leading speeds, making the configuration ideal for the most demanding storage challenges.
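
The 500-terabyte figure is consistent with fully populating those slots, assuming the highest-capacity LQD4500 variant at roughly 32 TB per add-in card; the article does not name a specific SKU, so treat this as a rough sketch:

```python
# Rough capacity check for the 16-drive configuration. The 32 TB/card figure
# is an assumption (top-capacity LQD4500 variant); the article does not state
# which drive capacity is used.
DRIVES = 16
TB_PER_DRIVE = 32
print(f"{DRIVES * TB_PER_DRIVE} TB raw capacity")  # 512 TB, i.e. 500+ TB
```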

The 4U Pro also supports and enables the high-performance edge computing ecosystem that includes many other PCIe Gen 4 CPUs, NICs, host adapters and FPGAs — all with OSS’ proven Gen 4 interoperability. This often requires supporting a mix of x8 and x16 slots, which is also available on the 4U Pro. This level of flexibility, combined with OSS host adapters, riser cards and cables, provides a solid Gen 4 offering.
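
For readers wiring up a mixed set of Gen 4 endpoints, one practical sanity check is to confirm that each card behind the expansion chassis actually negotiated its expected link. A minimal sketch on a Linux host, using the kernel’s sysfs PCI attributes and a placeholder device address:

```python
# Minimal sketch (Linux host, placeholder PCI address): read the negotiated
# link speed and width of a device behind the expansion chassis from sysfs.
# A Gen 4 x16 card should report "16.0 GT/s" at width 16; an x8 slot, width 8.
from pathlib import Path

def link_status(bdf: str) -> tuple[str, str]:
    """Return (speed, width) as reported by the kernel for one PCI device."""
    dev = Path("/sys/bus/pci/devices") / bdf
    speed = (dev / "current_link_speed").read_text().strip()
    width = (dev / "current_link_width").read_text().strip()
    return speed, width

# "0000:65:00.0" is a placeholder; substitute an address reported by lspci.
speed, width = link_status("0000:65:00.0")
print(f"negotiated {speed} x{width}")
```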

The 4U Pro sets the standard for high-performance AI workflows at the edge with other key features that include a rugged frame-in-frame design, shock isolation, remote monitoring and control, AC or DC power supplies, OSS management software, and the ability to scale a server to an industry-leading density of 64 GPUs, FPGAs or NVMe drives across multiple systems.

The 4U Pro is compatible with OSS’ Ion Accelerator SAN and NAS storage applications, which deliver the industry’s lowest latency, and works with the widest suite of AI applications and frameworks available through the NVIDIA GPU Cloud (NGC) Ready program. This enables fast time-to-deployment with pre-trained AI models and popular frameworks, such as TensorFlow and PyTorch, which are critical to edge designs.
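
As a minimal sketch of that workflow, assuming an NGC PyTorch container running on a host attached to a fully populated eight-GPU 4U Pro, one might confirm the accelerators are visible to the framework before launching a workload:

```python
# Minimal sketch: verify that all eight A100s attached through the expansion
# chassis are visible to PyTorch (e.g. inside an NGC PyTorch container).
import torch

count = torch.cuda.device_count()
print(f"{count} CUDA device(s) visible")
for i in range(count):
    print(f"  cuda:{i}: {torch.cuda.get_device_name(i)}")

# The expected count assumes the eight x16-slot GPU configuration.
assert count == 8, "expected eight A100 GPUs in this configuration"
```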

Liqid, OSS’ partner in delivering composable infrastructure solutions, recently announced a contract with the Department of Defense (DoD) to provide the world’s largest composable high-performance computer to the U.S. Army. The project combines the OSS 4U Pro with NVIDIA A100 GPUs and Liqid’s LQD4500 FHFL NVMe add-in cards to achieve unparalleled compute and 56 GB/s of data storage throughput, more than double the performance of Liqid’s previous-generation product.
