Run AI tightens ties with Nvidia to launch a full-stack AI solution for enterprises
Israel-based AI resource management solutions provider Run AI recently launched an MLOps Compute Platform based on Nvidia’s DGX systems. The platform was created to help businesses avoid common issues when deploying AI models: according to the company, Run AI MCP is an AI infrastructure platform that consolidates the hardware and software complexities of AI development and deployment into a single solution.

This launch is the latest development in the extended collaboration between Run AI and Nvidia. The MLOps Compute Platform pools compute resources so that a single team can manage and monitor them, and it lets developers use Kubeflow, Airflow, MLflow and other tools through built-in integrations. The platform also ships with Nvidia Base Command. Further, customers can deploy the solution with enterprise-grade support, including direct access to Nvidia and Run AI experts.

“AI offers incredible potential for enterprises to grow sales and reduce costs, and simplicity is key for businesses seeking to develop their AI capabilities,” said Matt Hull, vice president of global AI data center solutions at Nvidia. “As an integrated solution featuring Nvidia DGX systems and the Run AI software stack, Run AI MCP makes it easier for enterprises to add the infrastructure needed to scale their success.”

Run AI believes organizations face challenges and inefficiencies when multiple teams compete for the same limited GPU compute time. A common workaround, shadow AI, in which each team maintains its own dedicated infrastructure, tends to increase idle resources and expenses. The company says the MLOps Compute Platform offers enterprise customers a comprehensive alternative.

According to Run AI, clients and partners have achieved improvements of 200 to 500 percent in GPU utilization and return on investment. The company cites this result as evidence of the MLOps Compute Platform’s ability to streamline enterprise IT processes and eliminate bottlenecks in AI deployment.

“This is a unique, best-in-class hardware/software AI solution that unifies our AI workload orchestration with NVIDIA DGX systems–the universal AI system for every AI workload–to deliver unprecedented compute density, performance and flexibility,” said Omri Geller, co-founder and CEO of Run AI.

Run AI also recently announced that its Atlas platform is certified to run Nvidia AI Enterprise. This certification gives partners and clients confidence in using Nvidia’s end-to-end, cloud-native suite of AI and data analytics software to optimize AI model production. The two companies have a long history of working together; recently, Run AI led a proof of concept enabling multi-cloud GPU flexibility for enterprises that use Nvidia GPUs in the cloud.
