What is edge AI and what is it used for?

Edge computing and edge AI are often discussed hand in hand, but what is behind the term, and what role does edge AI play in the broader edge computing ecosystem?

Edge AI refers to the deployment of AI applications at or near the smart device edge or the constrained device edge, terms that describe both where computing is done and the characteristics of the devices involved. As with edge computing, edge AI brings computation closer to the source where data is generated. For organizations pursuing AI-based projects, edge AI can improve process efficiency and data security.

Among the many uses of edge AI, performing computer vision (including facial recognition) at the edge can restrict machine functions to the workers trained to use them, as well as aid workplace safety.

Technological breakthroughs in deploying AI models at the edge have brought innovative strategies for designing power-efficient AI models that can scale across IoT edge devices. Traditionally, machine learning models were trained before the neural networks were deployed to edge devices. With newer edge AI approaches, however, developers can use efficient models that re-train themselves as they process incoming data, making them more intelligent over time.
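The re-training idea above can be sketched as an online-learning loop. This is a minimal illustration, not code from any real edge framework; the function names and the linear model are assumptions for the example:

```python
# Illustrative sketch of on-device incremental learning (names are
# hypothetical, not from a real edge framework): a linear model is updated
# one stochastic-gradient step at a time as new readings stream in, so the
# device keeps "re-training" without shipping data to the cloud.

def sgd_step(weights, x, y, lr=0.1):
    """One online SGD step for the linear model y_hat = w . x (squared loss)."""
    y_hat = sum(w * xi for w, xi in zip(weights, x))
    err = y_hat - y
    return [w - lr * err * xi for w, xi in zip(weights, x)]

def train_on_stream(samples, n_features, lr=0.1):
    """Consume an iterable of (features, target) pairs, refining the model."""
    weights = [0.0] * n_features
    for x, y in samples:
        weights = sgd_step(weights, x, y, lr)
    return weights

# Toy usage: the device gradually learns the relation y = 2 * x from its stream.
stream = [([1.0], 2.0)] * 100
weights = train_on_stream(stream, n_features=1)
```

Each new sample nudges the weights, so the model keeps adapting after deployment instead of being frozen at training time.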

There is an emerging trend toward federated learning and training machine learning models at the edge to maintain data privacy and security. Deploying AI at the edge benefits users as well as developers by offering a different model for security and privacy. Processing the incoming sensor data on or close to the data source (for example, in facial recognition, where the face is the 'data') means personally identifiable information is analyzed locally rather than being sent across the internet to a central cloud. The reduced cost of sending data and lower latency for processing are added bonuses.
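A minimal sketch of the federated-learning idea, assuming a FedAvg-style scheme with equally weighted clients (the function below is illustrative, not a specific library API): each device trains on its own private data, and only the resulting model weights leave the edge.

```python
# Hedged sketch of federated averaging (FedAvg-style; illustrative only):
# each edge device trains locally on its private data, and the server
# combines the per-client weight vectors element-wise -- the raw,
# personally identifiable data never leaves the device.

def federated_average(client_weights):
    """Element-wise mean of per-client weight vectors (equal client weighting)."""
    n_clients = len(client_weights)
    return [sum(ws) / n_clients for ws in zip(*client_weights)]

# Two devices each contribute locally trained weights; only these are uploaded.
global_weights = federated_average([[1.0, 2.0], [3.0, 4.0]])
```

The averaged global model can then be pushed back down to the devices for the next round of local training.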

Another emerging trend at the constrained device edge is deploying ML inference models on microcontroller-based devices such as smart speakers (for example, Amazon Alexa devices). For uses where more power is available, AI-specific chips are being used. For example, to enable object detection in autonomous vehicles, AI accelerators can be integrated into the edge devices responsible for AI inference. In cases where the deployed AI models encounter errors or anomalies in the data, that data can be transferred to the cloud for further training of the original AI model. This feedback mechanism makes edge AI smarter and faster than traditional AI applications.
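The feedback mechanism described above can be sketched as a confidence-gated fallback. The model, threshold, and function names below are hypothetical placeholders, not part of any real inference runtime:

```python
# Illustrative sketch of the edge-to-cloud feedback loop (names are
# hypothetical): run inference locally, and flag low-confidence or anomalous
# inputs for upload so the central model can later be retrained on them.

def edge_infer(model, x, confidence_threshold=0.8):
    """Return (label, needs_cloud_review) for one input."""
    label, confidence = model(x)
    needs_review = confidence < confidence_threshold
    return label, needs_review

# Toy stand-in model: "confident" on positive readings, uncertain otherwise.
def toy_model(x):
    return ("object", 0.95) if x > 0 else ("unknown", 0.40)

label, upload = edge_infer(toy_model, 1.5)     # confident: handled at the edge
label2, upload2 = edge_infer(toy_model, -1.0)  # uncertain: queued for the cloud
```

Only the flagged inputs travel upstream, so the cloud sees exactly the cases the edge model handled poorly.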

Industries where edge AI has become an essential technology and backbone include manufacturing, healthcare, financial services, and transportation, among many others.
