TPU inference servers for efficient edge data centers

This white paper by Unigen explores the concept of data centers dedicated solely to AI inference.

Up to 90% of AI operations are inference, versus roughly 10% training. Training requires specialized processing to create the neural networks that inference then puts to work, and it is the primary driver of the power demands cited by the IEA above. Inference, by contrast, can be performed far more power-efficiently.

The benefits of developing inference-only data centers can be significant:

– Lower initial cost for inference servers compared with training servers
– Lower Total Cost of Ownership (TCO) over the servers' lifetime (see the back-of-envelope sketch after this list)
– Inference servers with TPUs can be air-cooled, avoiding expensive and difficult-to-deploy liquid cooling schemes
– Data centers with air-cooled servers consume far fewer resources, reducing strain on local power and water supplies
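To make the TCO argument concrete, here is a minimal back-of-envelope sketch of how lifetime cost splits between purchase price and facility energy, with cooling overhead folded in via PUE (Power Usage Effectiveness). All figures below are illustrative assumptions for the sake of the calculation, not numbers from the Unigen white paper.

```python
# Back-of-envelope TCO comparison: air-cooled TPU inference server vs.
# liquid-cooled training server. Every figure here is an assumption
# chosen for illustration, not data from the white paper.

ELECTRICITY_USD_PER_KWH = 0.10   # assumed industrial electricity rate
HOURS_PER_YEAR = 24 * 365
LIFETIME_YEARS = 5

def tco(capex_usd: float, power_kw: float, pue: float) -> float:
    """Lifetime cost = purchase price + facility energy to run and cool it.

    PUE folds cooling overhead into the energy term:
    total facility kWh = IT kWh * PUE.
    """
    energy_kwh = power_kw * HOURS_PER_YEAR * LIFETIME_YEARS * pue
    return capex_usd + energy_kwh * ELECTRICITY_USD_PER_KWH

# Hypothetical configurations:
training_server = tco(capex_usd=250_000, power_kw=10.0, pue=1.4)  # liquid-cooled
inference_server = tco(capex_usd=20_000, power_kw=1.0, pue=1.2)   # air-cooled TPU

print(f"Training server 5-year TCO:  ${training_server:,.0f}")
print(f"Inference server 5-year TCO: ${inference_server:,.0f}")
```

Under these assumed figures, the air-cooled inference server's lower purchase price, lower power draw, and lower cooling overhead compound across the lifetime, which is the mechanism behind the TCO and resource-use claims above.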

This white paper compares the cooling, electrical, HVAC, power, and infrastructure requirements of training servers with those of inference servers.

Download White Paper

Please fill out the brief form below to access and download this white paper.

