
Winbond launches CUBE for edge AI computing use cases

Semiconductor memory solutions provider Winbond Electronics Corporation has unveiled a new technology for edge AI computing in mainstream use cases.

The company’s customized ultra-bandwidth elements (CUBE) technology optimizes memory performance for running generative AI in hybrid edge/cloud applications, according to the press release.

The company says CUBE enhances the performance of front-end 3D structures such as chip on wafer (CoW) and wafer on wafer (WoW), as well as back-end 2.5D/3D chip on Si-interposer on substrate and fan-out solutions.

It has been designed to meet the demands of edge AI computing devices, supporting memory densities from 256Mb to 8Gb on a single die. Dies can also be 3D stacked to boost bandwidth while reducing data transfer power consumption.

The technology is suited to applications such as wearable and edge server devices, surveillance equipment, ADAS, and co-robots, with the aim of enabling seamless deployment across various platforms and interfaces.

Features of the new technology include power efficiency and a range of memory capacities from 256Mb to 8Gb per die, based on a 20nm process now and 16nm in 2025.

Winbond says the CUBE IO supports a data rate of up to 2Gbps across a total of 1K IOs. When paired with legacy foundry processes such as 28nm/22nm SoCs, its bandwidth reaches 32GB/s to 256GB/s.
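As a rough sanity check of those figures, the upper end of the quoted range follows from the per-pin rate and IO count. The sketch below assumes "1K IO" means 1,024 IO lines each running at the stated 2Gbps; the exact IO configuration is an assumption, and the 32GB/s lower bound presumably corresponds to narrower or slower configurations.

```python
# Rough bandwidth estimate from the quoted figures (assumptions:
# "1K IO" = 1,024 IO lines, each at the stated 2 Gbps per pin).

io_count = 1024        # assumed interpretation of "a total 1K IO"
per_pin_gbps = 2.0     # "data rate of up to 2Gbps" per IO line

peak_gbps = io_count * per_pin_gbps   # aggregate bandwidth in gigabits per second
peak_gbytes = peak_gbps / 8           # convert to gigabytes per second

print(f"Peak bandwidth: {peak_gbytes:.0f} GB/s")  # prints 256 GB/s, the top of the quoted range
```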

Winbond is also engaging with partner companies to establish the 3DCaaS platform to leverage CUBE’s capabilities.
