Ridge’s edge cloud rides digital transformation wave as enterprises seek solutions to hyperscale cloud shortcomings

Ridge has been building an edge computing platform optimized for highly distributed, decentralized Kubernetes-based architectures. The company’s approach to edge services gained customer traction and funding over the course of 2020. Now, as the global economy reawakens and executives reassess their digital transformation efforts in 2021, edge computing is increasingly part of plans to build resilient, high-performance digital systems, and Ridge is set to raise its profile in this fast-growing market.

Ridge enables developers to seamlessly deploy and scale their applications and servers anywhere across the globe at the edge. That might sound like a tall order, but co-founder Jonathan Seelig has some experience in that area already: he also co-founded Akamai, the CDN service provider that was the first globally distributed compute platform. Ridge is not a CDN, however; instead, the company aims to offer a scalable edge cloud for cloud-native workloads, serving customers that need enhanced application performance and global data sovereignty. The fully managed Kubernetes service is powered by a global data center network of top-tier service providers for wide geographic distribution.

Edge Industry Review interviewed Jonathan Seelig to talk about how edge computing is evolving and the role Ridge aims to play in the market. This interview has been edited for length and clarity.

Describe your plans for deploying edge infrastructure. How is your approach different from other edge cloud service providers?

Jonathan Seelig (JS): We have built a global computing platform by partnering with service providers in many different places around the world. We already have access to server resources in over 100 locations — and what is unique about Ridge is that we have obtained all of that through those existing service providers. We help them by putting our modern, managed services layer on top of their existing IaaS offerings. That partnership often creates the first PaaS offering of its kind in a market. There is a growing need for infrastructure designed to support cloud-native, latency-sensitive applications. These applications need to deploy globally on a managed container or managed Kubernetes platform — and that’s what Ridge lets them do.
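
For readers unfamiliar with what deploying to a managed Kubernetes platform looks like in practice, here is a minimal, generic sketch using the official Kubernetes Python client. It is not Ridge’s API; the image name, namespace, and cluster context are placeholders, and the same pattern applies to any conformant managed cluster.

```python
# A minimal, generic sketch of deploying a containerized app to a managed
# Kubernetes cluster with the official Python client (pip install kubernetes).
# The image name and namespace are placeholders; this is not Ridge's API.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig supplied by the managed provider
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="edge-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # scale out per location as needed
        selector=client.V1LabelSelector(match_labels={"app": "edge-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="edge-app",
                        image="registry.example.com/edge-app:1.0",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```

Conceptually, repeating the same create call against clusters in different locations is what “deploying globally on a managed Kubernetes platform” amounts to; a managed PaaS layer handles that distribution for the developer.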

MoffettNathanson offered up a critique of some of the proponents of ‘telco edge’ or ‘access edge’ computing, saying that most applications will be served well enough by regional data centers. What’s your take on where the biggest opportunity for edge cloud services will be?

JS: The MoffettNathanson critique is interesting. They basically say that if you can be in a regional data center, you will be close enough to the end user; you don’t need to be at the cell tower. This is similar to an argument made by our partners at DataBank almost two years ago in a blog post. We view the needs of applications as concentric rings of distribution and performance. It’s true that some applications do fine running in a single location, say the East Coast of the US. But there is a large and growing set of applications that require regional distribution to tens or hundreds of locations — and that is the problem that MoffettNathanson acknowledges exists and that Ridge is solving today with its distributed architecture and managed services. Also, some applications will need single-digit millisecond latency and will need to be at the cell tower. We agree that those applications are not mainstream yet. But one thing to remember is that building performant infrastructure often unlocks a wave of innovation that can then take advantage of that performance.
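
A rough back-of-envelope calculation makes the concentric-rings point concrete. Signals in optical fiber propagate at roughly two-thirds the speed of light, about 200 km per millisecond, so distance alone puts a floor under round-trip latency. The sketch below (with illustrative distances, not figures from the interview) shows why a regional data center is close enough for most workloads, while sub-millisecond budgets push compute toward the cell tower.

```python
# Back-of-envelope latency floor from propagation delay alone (no queuing,
# routing, or processing). Distances are illustrative, not from the interview.
FIBER_KM_PER_MS = 200  # light in fiber covers roughly 200 km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Idealized round-trip propagation time over fiber."""
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [
    ("cell tower / metro site", 25),
    ("regional data center", 500),
    ("distant hyperscale region", 2500),
]:
    print(f"{label:26s} ~{round_trip_ms(km):5.2f} ms round trip")
```

On these assumptions, a regional data center a few hundred kilometers away adds only single-digit milliseconds, while a distant hyperscale region adds tens; only the most latency-sensitive applications need placement closer than the regional ring.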

Can you describe some of the applications or use cases that have been deployed on Ridge? How important is low latency in those examples, compared to other benefits such as data control/sovereignty?

JS: Our current customers are pretty evenly split between those that need latency- or throughput-based performance and those that care about data sovereignty. We have had remote browsing, virtual desktop, real-time collaboration, gaming, and e-commerce applications come to us to solve latency challenges in certain markets. We have had a diverse set of applications that care about data sovereignty — those tend to be industry- or company-specific requirements that we help meet.

While low latency is certainly a key benefit of edge computing, there’s some discussion about how big a need there is for a MEC edge cloud, for example, with <5ms latency. What’s the tipping point for an on-premises edge solution, in your view?

JS: We think that on-premises solutions will be implemented more for reasons of data gravity than for performance reasons. We will end up at the point where the difference between a factory’s latency to the public cloud and its latency to the on-prem stack is only a few milliseconds. At that point, the on-prem solution’s primary advantage will be local data residency.

Do you foresee pursuing partnerships with telcos and edge data center providers like Vapor IO and EdgeMicro, for instance?

JS: Absolutely. We have already looked at a number of projects with Vapor. Ridge’s solution sits a little way up the stack from Vapor’s—we sit on top of existing compute offerings, while Vapor offers data center and connectivity services. There is generally going to be another entity that sits between us—but we are all participants in the same ecosystem that is enabling geographic distribution of an application’s infrastructure.

Given your background as a co-founder of Akamai, what role do you see CDNs playing in the edge cloud space? Are they partners or competitors with Ridge?

JS: CDNs did an amazing job of transforming the way the whole market thinks about infrastructure services for content. When we started Akamai, the most popular websites in the world sat in a single data center. That couldn’t be farther from the truth today. And CDNs have been introducing capabilities for relatively lightweight applications at the edge. But Lambda functions, Workers, or WebAssembly are very different from a full-fledged container or Kubernetes infrastructure stack. We see the CDNs playing in a different market than ours.

Ridge emerged last year during the pandemic, a time in which digital transformation efforts in many industries accelerated out of necessity. What did you hear in your conversations with partners and customers in 2020? Did edge computing gain traction as part of digital transformation efforts, or were projects delayed?

JS: I think that the pandemic and the shift in work patterns accelerated the adoption of cloud technologies in general and edge cloud technologies in particular. The big public companies in the space all announced record levels of traffic and revenue—this was true for CDNs as well as cloud computing platforms. It seems pretty clear that Zoom, work from home, and other trends from the pandemic are here to stay.

They are great examples of workloads that edge infrastructure is well suited to handle. Our partners and customers were mostly just dealing with the overall growth they were experiencing—but it was pretty apparent to us that some of that growth was very well suited to edge infrastructure.
