Three top considerations for containers at the edge
By Walt Noffsinger, Vice President of Product at Section
Some things are just made to go together, like containers and edge computing. Containers package an application such that the software and its dependencies (libraries, binaries, config files, etc.) are isolated from other processes, allowing them to migrate as a unit and avoiding the underlying hardware or OS differences that can cause incompatibilities and errors. In short, containers are lightweight and portable, which makes for faster and smoother deployment to a server — or a network of servers.
Edge computing leverages a distributed compute model to physically move compute, storage, data and applications closer to the user to minimize latency, reduce backhaul and improve availability. Effective edge computing requires efficient deployment (both in terms of time and cost) of an application across many locations and often to many underlying compute platforms.
Containers offer two key benefits when it comes to edge computing:
- Portability makes containers ideal for edge applications as they can be deployed in a distributed fashion without needing to fundamentally rearchitect the underlying application.
- Abstraction makes containers ideal for deployment to heterogeneous, federated compute platforms, which are often found in distributed edge networks.
Kubernetes at the Edge
The above presupposes a suitable edge orchestration framework is present to coordinate the distributed compute platform, and that’s where Kubernetes comes in. It provides the common layer of abstraction required to manage diverse workloads and compute resources. Moreover, it provides the orchestration and scheduling needed to coordinate and scale resources at each discrete location. However, Kubernetes itself does not manage workload orchestration across disparate edge systems.
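To make that common abstraction layer concrete, here is a minimal sketch using the official Kubernetes Python client to scale the same Deployment across several per-location clusters. The kubeconfig context names, namespace and deployment name are assumptions for illustration; real edge platforms layer their own multi-location orchestration on top of primitives like these.

```python
# Minimal sketch: scaling the same workload across several edge clusters with
# the official Kubernetes Python client (pip install kubernetes).
# The context names, namespace and deployment name are illustrative assumptions.
from kubernetes import client, config

EDGE_CONTEXTS = ["edge-nyc", "edge-lon", "edge-syd"]  # one kubeconfig context per location
DEPLOYMENT = "demo-app"
NAMESPACE = "default"

def scale_everywhere(replicas: int) -> None:
    for ctx in EDGE_CONTEXTS:
        # Build a client bound to this edge location's cluster.
        api_client = config.new_client_from_config(context=ctx)
        apps = client.AppsV1Api(api_client)
        # Patch only the replica count; Kubernetes reconciles the rest in place.
        apps.patch_namespaced_deployment_scale(
            name=DEPLOYMENT,
            namespace=NAMESPACE,
            body={"spec": {"replicas": replicas}},
        )
        print(f"{ctx}: {DEPLOYMENT} scaled to {replicas}")

if __name__ == "__main__":
    scale_everywhere(3)
```

The point is that Kubernetes gives every location the same API surface; deciding when and where to run a loop like this is the distributed orchestration problem discussed next.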
Top Considerations When Putting Containers at the Edge
Significant considerations remain when deploying containers at the edge. These can be distilled into three distinct categories: distributed edge orchestration, edge development lifecycle and deployment framework, and edge traffic management and routing.
1) Distributed Edge Orchestration
Moving to a truly distributed compute network adds complexity over typical cloud deployments, and it comes down to one question: how do you maximize efficiency to meet real-time traffic demands without running all workloads in all locations across all networks all the time?
Consider a truly global edge deployment. Ideally, compute resources and workloads are spun up and down in an automated fashion to, at a minimum, follow the sun (roughly), yet remain responsive to local demand in real time. Now add in a heterogeneous edge deployment, where demand and resources are monitored, managed and allocated in an automated fashion to ensure availability across disparate, federated networks. All of this involves workload orchestration, load shedding, fault tolerance, compute provisioning and scaling, messaging frameworks and more. None of this is simple, and the term “automated” is doing a lot of heavy lifting in the above scenarios.
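To make “automated” a little more concrete, the sketch below derives per-region replica counts from local business hours plus a demand reading. The regions, UTC offsets, read_demand() helper and scale_region() call are all hypothetical stand-ins for whatever telemetry and orchestration APIs a real edge platform exposes.

```python
# Deliberately simplified follow-the-sun scaling sketch.
# Regions, read_demand() and scale_region() are hypothetical placeholders
# for a real platform's telemetry and orchestration APIs.
from datetime import datetime, timezone, timedelta

REGIONS = {  # region -> UTC offset in hours (assumed values)
    "us-east": -5,
    "eu-west": 0,
    "ap-southeast": 10,
}

def read_demand(region: str) -> float:
    """Return requests/sec for a region (stubbed; would come from monitoring)."""
    return 120.0  # placeholder value

def desired_replicas(region: str, utc_offset: int) -> int:
    local = datetime.now(timezone.utc) + timedelta(hours=utc_offset)
    baseline = 3 if 7 <= local.hour < 23 else 1    # follow the sun (roughly)
    demand_driven = int(read_demand(region) / 50)  # ~50 req/s per replica (assumed)
    return max(baseline, demand_driven)            # stay responsive to local demand

def scale_region(region: str, replicas: int) -> None:
    """Stub for the platform call that actually adjusts capacity."""
    print(f"{region}: scaling to {replicas} replicas")

for region, offset in REGIONS.items():
    scale_region(region, desired_replicas(region, offset))
```

In production the baseline, thresholds and demand signal would be policy-driven and continuously re-evaluated rather than hard-coded.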
But doing it correctly can accrue significant benefits, lowering costs while increasing performance. Similarly, effective use of federated networks increases availability and fault tolerance (you don’t want to get hit by the next cloud outage) while decreasing vendor lock-in. Finally, it can improve compliance with regulations such as GDPR, which can require that data be stored in specific locations.
2) Edge Development Lifecycle and Deployment Framework
Typical cloud deployments involve a simple calculation: determine which single cloud location will deliver the best performance to the maximum number of users, then connect your code base/repository and automate build and deployment through CI/CD. But what happens when you add hundreds of edge endpoints to the mix, with different microservices being served from different edge locations at different times? How do you decide which edge endpoints your code should be running on at any given time? More importantly, how do you manage the constant orchestration across these nodes among a heterogeneous makeup of infrastructure from a host of different providers?
Effectively managing edge development and deployment requires the same granular, code-level configuration control, automation, and integration that is typical in cloud deployments, but on a massively distributed scale. Some of the more critical components of an edge platform include comprehensive observability so developers have a holistic understanding of the state of an application, and cohesive application lifecycle systems and processes across a distributed edge.
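As a hedged sketch of what that code-level control might look like inside a pipeline, the snippet below selects target edge locations for a microservice from declarative placement rules before handing off to the usual build-and-deploy step. The location catalogue, rule fields and deploy() call are illustrative assumptions, not any particular platform's API.

```python
# Hedged sketch of an edge placement step in a CI/CD pipeline.
# The location catalogue, rule fields and deploy() call are hypothetical.
from dataclasses import dataclass

@dataclass
class EdgeLocation:
    name: str
    region: str
    p95_latency_ms: float  # measured latency from the location to its users

LOCATIONS = [
    EdgeLocation("nyc-1", "na", 18.0),
    EdgeLocation("lon-2", "eu", 22.0),
    EdgeLocation("syd-1", "apac", 95.0),
]

PLACEMENT_RULE = {  # would normally live in the service's repo as config
    "regions": {"na", "eu"},
    "max_p95_latency_ms": 50.0,
    "min_locations": 2,
}

def select_targets(rule, locations):
    targets = [
        loc for loc in locations
        if loc.region in rule["regions"] and loc.p95_latency_ms <= rule["max_p95_latency_ms"]
    ]
    if len(targets) < rule["min_locations"]:
        raise RuntimeError("not enough edge locations satisfy the placement rule")
    return targets

def deploy(image: str, location: EdgeLocation) -> None:
    """Stub for the platform- or cluster-specific deployment call."""
    print(f"deploying {image} to {location.name}")

for loc in select_targets(PLACEMENT_RULE, LOCATIONS):
    deploy("registry.example.com/demo-app:1.2.3", loc)
```

Keeping the placement rule in the service’s own repository means the “where should this run” decision is versioned and reviewed the same way the code is.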
3) Edge Traffic Management and Routing
Deploying containers across a distributed edge fundamentally requires managing a distributed network. This includes DNS routing and failover, TLS provisioning, DDoS protection at layers 3/4/7, BGP/IP address management and more. Moreover, you’ll now need a robust virtualized network monitoring stack that provides the visibility/observability necessary to understand how traffic is (or isn’t) flowing across an edge architecture. And to truly manage distributed networks, infrastructure and operations at the edge, you will likely require an edge operations model with an experienced team of network engineers, platform engineers and DevOps engineers with an emphasis on site reliability engineering (SRE).
This is a challenge for SaaS vendors and others who don’t have full network operations teams and systems on-site and would much prefer to focus their time and energy on delivering an awesome application.
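As one small, hedged illustration of the network glue described above, the sketch below health-checks the endpoints behind an edge hostname and reports which ones should remain in the DNS answer pool. The hostname, endpoint IPs and the eventual DNS update are hypothetical; a real setup would push the healthy set to the DNS provider's API with low TTLs, as part of the monitoring stack rather than a standalone script.

```python
# Hedged sketch: health-check-driven DNS failover across edge endpoints.
# The hostname, endpoint IPs and the actual DNS update are hypothetical.
import requests  # pip install requests

EDGE_ENDPOINTS = {  # endpoint IP -> health-check URL (assumed values)
    "203.0.113.10": "https://203.0.113.10/healthz",
    "203.0.113.20": "https://203.0.113.20/healthz",
}
HOSTNAME = "app.example.com"

def healthy_endpoints() -> list:
    healthy = []
    for ip, url in EDGE_ENDPOINTS.items():
        try:
            # verify=False only because this sketch probes raw IPs directly;
            # the Host header tells the endpoint which site is being checked.
            resp = requests.get(url, timeout=2, headers={"Host": HOSTNAME}, verify=False)
            if resp.status_code == 200:
                healthy.append(ip)
        except requests.RequestException:
            pass  # unreachable endpoints simply drop out of the pool
    return healthy

if __name__ == "__main__":
    pool = healthy_endpoints()
    print(f"{HOSTNAME} should resolve to: {pool or 'fallback origin'}")
    # Next step (not shown): push the healthy set to the DNS provider's API
    # as the new A-record set, with a low TTL so failover takes effect quickly.
```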
Conclusion
Organizations considering containerized edge applications should explore as-a-service edge solutions that leverage the portability of containers for efficient deployment across distributed systems but abstract the complexities of the actual network, workload and compute management. Ideally, this would enable deployment of applications to the edge through simple, familiar processes — much in the same way they would if deploying to a single cloud instance — and also provide the necessary tools and infrastructure to simplify and automate configuration control. Such an approach should give organizations the cost and performance benefits of edge computing while allowing them to concentrate on their core business rather than distributed network management.
About the author
Walt Noffsinger is Section’s Vice President of Product, serving as a key driver and advisor in building out the company’s innovative edge technologies. His experience in the product management, development, design and marketing disciplines provides valuable insights to help drive Section’s growth.
DISCLAIMER: Guest posts are submitted content. The views expressed in this post are that of the author, and don’t necessarily reflect the views of Edge Industry Review (EdgeIR.com).