MIT engineers develop novel modular AI chip for edge uses
A team of engineers at the Massachusetts Institute of Technology (MIT) has built an artificial intelligence (AI) chip that can be adjusted and upgraded by swapping out a few parts, a design the researchers say will have edge computing applications.
The AI chip works like a set of Lego bricks: its sensors and processors can be swapped out, reducing electronic waste and keeping the chip up to date as hardware improves.
The design of the AI chip alternates layers of sensing and processing elements, with light-emitting diodes (LEDs) enabling optical communication between the layers. Unlike other modular chip designs, which rely on conventional wiring to relay signals between layers, this chip transmits information with light. Eliminating intricate bundles of wires makes the chip far easier to reconfigure.
“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says MIT postdoctoral researcher Hyunseok Kim, a member of the team. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”
The team built the optical communication system from photodetectors and LEDs patterned with tiny pixels. The photodetectors form an image sensor that receives data, while the LEDs transmit data to the next layer. When the image sensor receives a signal, it triggers another layer of photodetectors along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.
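The layered signal relay described above can be loosely modeled in software as a pipeline of interchangeable stages. This is a conceptual sketch only — the stage functions and values are invented for illustration, not taken from the MIT design:

```python
from typing import Callable

# A "layer" takes the previous layer's optical output and emits its own.
Layer = Callable[[list[float]], list[float]]

def run_stack(layers: list[Layer], signal: list[float]) -> list[float]:
    # Pass the signal through each stacked layer in turn; there is no
    # fixed wiring between stages, only a shared interface.
    for layer in layers:
        signal = layer(signal)
    return signal

# Hypothetical stages, standing in for the chip's physical layers:
def image_sensor(x):  return [v / max(x) for v in x]   # normalize input
def denoise(x):       return [round(v, 1) for v in x]  # toy "denoising"
def synapse_array(x): return [v * 2 for v in x]        # toy weighting

stack = [image_sensor, synapse_array]
# "Upgrading" the chip = inserting or swapping a stage, no rewiring:
upgraded = [image_sensor, denoise, synapse_array]
```

Because every stage shares one interface (the optical link, in hardware), adding a capability means inserting a layer rather than redesigning the whole stack.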
To test the chip, the team first assembled a variant with a tiny computing core of about four square millimeters, stacked with three image-recognition ‘blocks’: an image sensor, an optical communication layer, and an artificial synapse array trained to classify the letters M, I, or T. The team then exposed the chip to pixelated images of letters and measured the electrical current that each neural network array produced in response.
The stronger the current, the more likely the image was the letter that array had been trained to recognize. The AI chip correctly identified clear images of each letter but struggled with blurry ones, for instance confusing I with T. The team was then able to swap the chip’s processing layer for a higher-performing “denoising” processor, which improved its image recognition capabilities.
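The strongest-current decision rule described above can be sketched in a few lines of Python. The current values and labels here are hypothetical, chosen only to illustrate why clear images classify cleanly while blurry ones sit near a tie:

```python
def classify(currents: dict[str, float]) -> str:
    """Pick the letter whose synapse array produced the strongest current."""
    return max(currents, key=currents.get)

# A clear image of 'M' drives the M-array's current well above the others.
clear_m = {"M": 8.2, "I": 1.1, "T": 0.9}
print(classify(clear_m))  # -> M

# A blurry image yields similar currents for 'I' and 'T', so the winning
# margin is slim and the classification is less reliable.
blurry = {"M": 0.8, "I": 3.1, "T": 2.9}
print(classify(blurry))   # -> I (only narrowly ahead of T)
```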
“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says Jihoon Kang, another MIT postdoctoral researcher on the team. “We call this a Lego-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”
Among the many potential applications, the team specifically points to edge computing. “As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”
Though the chip currently performs only basic image recognition through its artificial synapses, the team plans to add more sensing and processing capabilities to broaden its use at the edge. The researchers envision applications in cellphone cameras that better recognize complex images, healthcare monitors embedded in wearable electronic skin, and a general-purpose chip platform that gives consumers the flexibility to design the devices they want.