
Gcore boosts AI inference with flexible deployment and global edge network

Gcore, a provider of edge AI solutions, has updated its AI product Everywhere Inference, formerly known as Inference at the Edge. The update adds flexible deployment options, including on-premises, Gcore's cloud, public clouds, and hybrid environments, ensuring ultra-low latency for AI applications.

The solution leverages Gcore’s global network of over 180 points of presence for real-time processing, instant deployment, and seamless performance worldwide. 

Seva Vayner, Product Director of Edge Cloud and Edge AI at Gcore, commented: “The update to Everywhere Inference marks a significant milestone in our commitment to enhancing the AI inference experience and addressing evolving customer needs. The flexibility and scalability of Everywhere Inference make it an ideal solution for businesses of all sizes, from startups to large enterprises.”

New features include smart routing, which directs workloads to the nearest compute resource, and multi-tenancy, which runs multiple AI workloads simultaneously to improve resource efficiency. A sketch of the routing idea follows.
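Gcore has not published how its smart routing works internally; a minimal illustrative sketch of the general idea, picking the point of presence with the smallest great-circle distance to the client, is shown below. All names, coordinates, and the distance-based heuristic are assumptions for illustration, not Gcore's API (real systems typically weigh measured latency and load, not just geography).

```python
import math
from dataclasses import dataclass

@dataclass
class PointOfPresence:
    # Hypothetical record for one edge location.
    name: str
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def route_request(client_lat, client_lon, pops):
    """Pick the PoP nearest to the client by great-circle distance."""
    return min(pops, key=lambda p: haversine_km(client_lat, client_lon, p.lat, p.lon))

# Usage: route an inference request from Amsterdam against three sample PoPs.
pops = [
    PointOfPresence("frankfurt", 50.11, 8.68),
    PointOfPresence("singapore", 1.35, 103.82),
    PointOfPresence("ashburn", 39.04, -77.49),
]
print(route_request(52.37, 4.90, pops).name)  # -> frankfurt
```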

The update addresses challenges like compliance with local data regulations, data security, and cost management, offering businesses scalable and adaptable AI solutions. 

Gcore is a global provider of edge AI, cloud, network, and security solutions, with strong infrastructure across six continents and a network capacity exceeding 200 Tbps.

Gcore and Qareeb Data Centres recently formed a strategic partnership to enhance AI and cloud infrastructure in the Gulf Cooperation Council (GCC).
