Hivelocity offers enterprises a path for evolution with bare metal, edge compute services
Hivelocity is a provider of bare metal cloud and edge computing solutions at the forefront of digital infrastructure services. But Hivelocity isn't a startup: the company has been in business for over twenty years, long enough to show it knows how to navigate changes in technology and help its customers do the same.
That strategic evolution has been enabled by continual internal development of tools to automate cloud storage and networking infrastructure. The company has also selectively partnered with other companies to bring new services to customers. In April, Hivelocity announced a partnership with cloud storage provider Wasabi Technologies; Hivelocity provides GPU-enabled bare metal compute services, while Wasabi adds its object storage service, as the two aim to provide an alternative to hyperscale cloud service providers.
Another recent collaboration with edge cloud service provider Ridge Cloud supports cloud-native deployments. Integration of Hivelocity’s cloud and bare metal services with Ridge’s cloud platform will enable enterprise customers to deploy cloud-native applications at lower cost, reduced latency, and improved performance.
Richard Nicholas, SVP of corporate development at Hivelocity, talked to Edge Industry Review about the company's start as a hosting company and its evolution to its present state as a global infrastructure service provider.
EdgeIR: Which customers did you initially attract, and how has that evolved?
Richard Nicholas: Hivelocity started about 20 years ago by hosting websites and graduated from there to dedicated servers. Five years ago, we only had compute in one location — that was in Florida. Even then, we saw an interesting trend. If you looked at our customer base, we have customers from all over the world, but the largest concentration of customers outside the United States was in South America. Before edge compute was a phrase, we had customers with latency-sensitive [workloads] already consuming our services in Florida, because it is the closest part of the United States to South America. At the time, the cost per cycle of compute was much lower in the United States versus South America. Those customers who wanted good latency performance also wanted to balance that against price, so they were picking us.
Since then, as we've expanded globally, we've seen a steady drumbeat of customers that wanted compute in a location that was lower latency to other regions. For example, in New York, we saw customers that wanted to do business in Europe; in Los Angeles, we saw customers that wanted to do business in Asia-Pacific.
Today, we have over 33 locations around the world. With our most recent expansion into India, we’re now in Pune, Delhi, and Mumbai. For retail customers, we’ll primarily be offering a presence in Mumbai, but there was a decentralized finance customer who needed multiple locations in India, so we enabled that.
What kinds of shifts in demand have you seen in the last two years?
RN: We saw a big shift, post-COVID. In mid-2020 you saw an explosion in on-demand video-centric use cases where we had customers who were consuming our compute all over the world. They didn’t just want to be in Central Europe. They wanted to be in north Central Europe, ideally in Stockholm and Frankfurt, for example. The reason for that was twofold. They wanted their video content to be very close to where it was being consumed. But more frequently, they needed servers right next to where the video content was being created.
We have a customer who specializes in the delivery of live sports over the internet. Live sports is a very latency-sensitive use case. What they do is put hundreds of servers in the specific city where the broadcasts are occurring — Dallas, Manchester, and places like that. The live video streams are ingested, processed in-market, and then distributed globally over a traditional network.
Another use case we've seen post-COVID is gaming, which is a great bare metal use case. The cost per compute cycle is typically lower on bare metal than in a public cloud. Gaming is essentially a SaaS product when you're running it online, so you want to keep your costs low. It's also latency sensitive — if you're playing football against somebody online, those milliseconds matter. Those customers tend to take servers across the footprint everywhere, from Dallas to Seattle and Los Angeles.
I would also add, from a use case perspective, decentralized finance. Distribution is very important to mitigate the risk of service attacks or anything of that nature. These are customers that will take servers all over your footprint; the more diverse locations, the better. That’s something that we’ve been able to deliver.
To what extent would data management or privacy or regulation also be of interest to your customers?
RN: We're seeing this, of course, but it's a geographic issue. For example, we see a lot of interest in Canada, which has some regulations around data sovereignty where they want data that's born in Canada to stay there. There has been some good demand for servers in Canada. For example, why would you want a server in Toronto versus New York? Geographically, they're pretty close; the latency argument doesn't hold there. Well, the data sovereignty argument ultimately becomes the answer to that question.
Toronto’s a market where we’ve had to expand a bit just to deal with that sort of demand that we didn’t even expect. We see that in Europe where there are also some data sovereignty requirements around staying in the EU. It doesn’t seem to be as pervasive across our customer base.
What's your perspective on infrastructure as code, especially as it relates to the public APIs and network automation you offer customers?
RN: To play on the same field with public cloud providers — for cloud-native workloads to have a home on our infrastructure, we have to offer seamless consumption of our services. It needs to feel the same and you need to be able to use the same tools.
What that means is that we have to deliver automation in the exact same way that you can receive it in the public clouds. We’ve done a material amount of work over the past couple of years to modernize our network deployment systems and the way our servers are deployed so that people can consume our services in a cloud-native manner.
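To make that cloud-native consumption model concrete, here is a minimal sketch of what an API-driven, declarative bare metal provisioning request might look like. The endpoint concept, field names, region codes, and plan identifiers below are invented for illustration and do not reflect Hivelocity's actual API.

```python
import json

def build_provision_request(hostname, region, plan, os_image, tags=None):
    """Assemble the JSON body a client might POST to a hypothetical
    bare metal provisioning endpoint, mirroring the way public-cloud
    IaC tools describe instances declaratively."""
    return {
        "hostname": hostname,
        "region": region,        # illustrative region code, e.g. "NYC1"
        "plan": plan,            # illustrative hardware profile name
        "os_image": os_image,
        "tags": tags or [],      # labels orchestration tools can filter on
    }

# An IaC workflow would emit one such request per server, letting the
# same tooling that manages cloud VMs manage bare metal deployments.
req = build_provision_request(
    hostname="edge-node-01",
    region="NYC1",
    plan="bare-metal-gpu",
    os_image="ubuntu-22.04",
    tags=["video-ingest"],
)
print(json.dumps(req, sort_keys=True))
```

The point of the pattern is that a server becomes a reproducible description rather than a manual ticket, which is what "consuming services in a cloud-native manner" requires.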
One of our largest customers is a video gaming use case. They have tens of thousands of servers all over the world, and some of those are with us. Currently, they’re managing their servers in a non-cloud native way. They don’t use much orchestration to manage their servers today. But they’re moving in that direction, and quickly. We met with them and shared where we were heading from an automation perspective, and they said [these capabilities] put you in a great position to continue growing with us into the future. So that validates some of our thinking around automation.
Large enterprises with resources to hire IT folks are using modern tools and building cloud-native applications with APIs. How do you reach the part of the market that doesn't have a big team, but could still benefit from the automation and bare metal services you offer?
RN: It's going to be difficult for us to convince a firm that doesn't have an infrastructure team to spin up a self-service global compute infrastructure. That doesn't sound like a fit. What we like to do is refer those customers to partners. We try to win partnerships with folks who serve those companies. As they grow, we're able to grow along with them as the underlying infrastructure.
An example is Ridge, which is an asset-light business that doesn't own compute. What they do is hook into numerous infrastructure-as-a-service providers like us using APIs and automation, and offer their customers a menu, sort of a shopping cart. They deliver their customers one unified experience across dozens of underlying infrastructure providers. For us, the victory is to get on board with companies like Ridge and drive workloads through their platform onto ours.
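The aggregation pattern described here can be sketched as a common interface with one adapter per underlying provider. All class names, methods, and region codes below are hypothetical illustrations of the pattern, not Ridge's or Hivelocity's actual software.

```python
from abc import ABC, abstractmethod

class InfraProvider(ABC):
    """Common interface each underlying provider adapter implements."""

    @abstractmethod
    def list_regions(self) -> list[str]:
        ...

class HivelocityAdapter(InfraProvider):
    def list_regions(self) -> list[str]:
        return ["TPA1", "NYC1", "LAX2"]  # invented region codes

class OtherProviderAdapter(InfraProvider):
    def list_regions(self) -> list[str]:
        return ["FRA1", "SIN1"]          # invented region codes

def unified_catalog(providers: list[InfraProvider]) -> list[str]:
    """Merge every provider's regions into one menu for the customer,
    so dozens of backends appear as a single shopping cart."""
    regions = []
    for p in providers:
        regions.extend(p.list_regions())
    return sorted(regions)

menu = unified_catalog([HivelocityAdapter(), OtherProviderAdapter()])
print(menu)
```

The design choice is that the platform owns only the interface; adding a new infrastructure provider means writing one more adapter, not changing the customer-facing experience.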