All along the cell tower: where to place edge and micro data centers?
Edge computing is a growing trend in digital networking: investment, research and development, and now deployments of edge devices and infrastructure promise a new era of speed, scale, and efficiency for digital processes and applications. Edge data centers are the miniature facilities that will house the servers and networking infrastructure that enable data to be stored and processed at the edge rather than in a distant data center.
But what will they look like? Where will they be, and how will that affect their requirements?
For complex or sensitive real-time operations, transmitting data to and from a distant point for processing can result in latency and performance degradation. Further, as mobile and internet of things (IoT) devices proliferate, the amount of data being collected, transmitted, and processed has increased dramatically. Network traffic could create bandwidth availability challenges, as well as connection stability issues, as some applications tolerate very little variation in latency before performance degrades.
Edge computing advocates say these issues can be addressed by moving the point where data is processed closer to the point where it is generated and used.
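To make the distance argument concrete, a back-of-the-envelope Python sketch estimates round-trip propagation delay over fiber for a handful of illustrative distances; the distances and the fiber speed factor are assumptions chosen for illustration, not measurements of any real network.

    # Rough, illustrative estimate of network round-trip time from propagation
    # delay alone (ignores queuing, processing, and protocol overhead).
    SPEED_OF_LIGHT_KM_S = 299_792     # km per second in a vacuum
    FIBER_FACTOR = 0.67               # light in fiber travels roughly 2/3 as fast

    def propagation_rtt_ms(one_way_km: float) -> float:
        """Round-trip propagation delay in milliseconds for a one-way distance."""
        return 2 * one_way_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000

    # Hypothetical distances: a distant cloud region vs. nearer edge sites.
    for label, km in [("distant cloud region", 1500),
                      ("metro aggregation edge", 100),
                      ("access edge (cell tower)", 5)]:
        print(f"{label:>28}: ~{propagation_rtt_ms(km):.2f} ms round trip")

Real-world latency adds queuing, processing, and protocol overhead on top of propagation delay, but the basic geometry is why shortening the path pays off.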
According to the Linux Foundation’s Open Glossary of Edge Computing, the term “edge” commonly refers to three different locations in network architecture: the device or near-device edge, the access edge, and the aggregation edge.
The device edge refers to processing that would traditionally happen on a remote server, as in a cloud service, taking place on the device, using built-in capabilities. The access edge refers to any point at which connected devices interact with a network, from a router to a satellite. The aggregation edge is the first point in a traditional network architecture where remote data is stored.
Clusters of resources near the edge which provide cloud-like capabilities, such as elastic allocation, are sometimes referred to as the “edge cloud.”
An edge data center, therefore, is a general term for any kind of facility or enclosure that contains computing resources that are physically located at the access or aggregation edge, closer to the source and end-user of data than a traditional server environment.
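For readers who want a concrete handle on the taxonomy, the short Python sketch below models the layers as a simple enumeration and tags a hypothetical workload with where each of its stages runs; the class names and the example pipeline are invented for illustration and are not part of the Open Glossary or any standard API.

    from dataclasses import dataclass
    from enum import Enum

    class EdgeLayer(Enum):
        """Layers from the Open Glossary of Edge Computing, device side outward."""
        DEVICE = "device or near-device edge"    # processing on the device itself
        ACCESS = "access edge"                   # first point of network attachment
        AGGREGATION = "aggregation edge"         # first point where remote data is stored
        CORE = "core"                            # traditional / cloud data center

    @dataclass
    class Deployment:
        """Hypothetical record describing where one stage of a workload runs."""
        name: str
        layer: EdgeLayer
        site: str  # e.g. "cell tower base", "central office", "cloud region"

    # Example: a video-analytics pipeline split across layers.
    pipeline = [
        Deployment("frame capture + inference", EdgeLayer.DEVICE, "camera"),
        Deployment("event filtering", EdgeLayer.ACCESS, "micro data center at tower base"),
        Deployment("long-term storage + training", EdgeLayer.CORE, "cloud region"),
    ]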
Where is the edge data center?
The most common conception of edge data centers in the concept’s early stages has been of a server rack located at the bottom of a cell tower. In many cases this is indeed where an edge data center will sit, and it is a useful image for visualizing deployments, but actual implementations will vary widely — telecom central offices (COs), cable company headend facilities, parking garages, building rooftops, and more will all house edge data centers. The common elements are their distribution close to end-users (whether those end-users are human or not) and the relative size constraint that proximity imposes.
The three edge layers will each be developed and deployed with custom-designed data centers. The extent to which data center design will overlap across the layers is a subject of ongoing debate, but deployments at different layers will necessarily differ in some respects.
Core
Edge computing services will sometimes interact with core cloud computing systems, meaning traditional data centers. Edge deployments will generally require connectivity to the core, at least intermittently, but edge data centers are those “downstream” from the core.
Aggregation
The aggregation edge is made up of local or regional edge data centers. These are often Tier II (or Level 2) data centers in Tier II or Tier III markets; they are smaller than hyperscale data centers and bring cloud-like capabilities closer to users, at a scale that serves many services at once. This is also the layer at which CO and headend facilities operate, with architecture and physical configuration similar to traditional data centers in a smaller form.
Access
Micro data centers, a new class of network infrastructure, reside at the access edge, which could be the base of a tower, a small building located outside, or a furniture-sized cabinet discreetly located inside a building or office.
Micro data centers are expected to be commonly deployed to power smart buildings or campuses for institutions and enterprises. The actual server racks could be housed in a closet, in the room that has traditionally held on-premises servers, or even in plain sight, as they are neither noisy nor visually obvious. Some will also be located at traditional network access points, which include cell towers but likely also rooftops and rooms with fiber connections. The TIA (Telecommunications Industry Association) suggests that the necessary hardware is “routinely able to operate” in temperatures ranging from 5 to 40 degrees Celsius (41 to 104 degrees Fahrenheit).
Device
Micro data centers as described above will in some places operate at the device edge, and smaller device-layer resources are sometimes described as nano data centers. Nano data centers tend to operate in application-specific environments and can consist of just one or two servers, supporting an application or set of applications across fewer than 100 VMs or containers.
What do micro data centers look like?
The smaller data centers at the aggregation edge will look similar to traditional data centers and have similar requirements and constraints.
The TIA, for one, defines edge data centers as operating at a “micro” scale ranging from 50 to 150 kW of capacity and notes that they can be interconnected for enhanced local capacity, failure mitigation, and workload migration.
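The interconnection point lends itself to a toy sketch: given several linked edge sites, place a workload at the lowest-latency site that is online and has spare capacity, and fall back to the next site otherwise. The site data and the selection policy below are invented for illustration and are not drawn from the TIA’s documents.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EdgeSite:
        """Toy model of one interconnected edge data center."""
        name: str
        capacity_kw: float    # total IT capacity
        used_kw: float        # capacity already committed
        latency_ms: float     # latency from the workload's users to this site
        online: bool = True

        def headroom_kw(self) -> float:
            return self.capacity_kw - self.used_kw

    def place_workload(sites: list[EdgeSite], demand_kw: float) -> Optional[EdgeSite]:
        """Choose the lowest-latency online site with enough spare capacity."""
        candidates = [s for s in sites if s.online and s.headroom_kw() >= demand_kw]
        return min(candidates, key=lambda s: s.latency_ms) if candidates else None

    sites = [
        EdgeSite("metro-A", capacity_kw=150, used_kw=140, latency_ms=2.0),
        EdgeSite("metro-B", capacity_kw=100, used_kw=40, latency_ms=3.5),
        EdgeSite("metro-C", capacity_kw=50, used_kw=10, latency_ms=6.0, online=False),
    ]
    chosen = place_workload(sites, demand_kw=20)
    print(chosen.name if chosen else "no capacity available")  # -> "metro-B"

Real orchestration systems weigh many more factors, but the example shows why interconnected sites can absorb failures and shifting load in a way a single isolated cabinet cannot.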
Micro data centers, however, are an entirely new form factor for network infrastructure, consisting of a server rack mounted inside a specialized cabinet.
Manufacturers of micro data centers include Schneider Electric brand APC and HPE (Hewlett Packard Enterprise). Their SmartBunker and Edge Center micro data center products, respectively, range in size from roughly 2 x 4 x 4 feet to 3.5 x 5 x 7.5 feet, typically in 23U or 42U (rack unit) configurations, with 3 kW and 8 kW of capacity, respectively. Dell EMC offers a 48U cabinet, with 40U dedicated to IT and the rest providing UPS and fire suppression, and a capacity of 30 kW.
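To put those capacity figures in rough context, the sketch below estimates how many servers each cabinet’s power budget could support, assuming a hypothetical average draw per 1U edge server; the per-server wattage is an assumption, not a vendor specification.

    # Rough estimate of how many servers fit within a micro data center's power budget.
    ASSUMED_SERVER_WATTS = 350  # hypothetical average draw per 1U edge server

    def servers_supported(cabinet_kw: float, server_watts: float = ASSUMED_SERVER_WATTS) -> int:
        """How many servers the power budget allows, ignoring space and cooling."""
        return int(cabinet_kw * 1000 // server_watts)

    for label, kw in [("3 kW cabinet", 3), ("8 kW cabinet", 8), ("30 kW cabinet", 30)]:
        print(f"{label}: roughly {servers_supported(kw)} servers at ~{ASSUMED_SERVER_WATTS} W each")

Space, cooling, and UPS overhead reduce the real number further; in small form factors, power is often the binding constraint before rack units are.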
Industry groups have begun launching initiatives that could influence the development of edge and micro data centers. The Open Compute Project’s (OCP’s) Rack and Power Project is working on specifications and standards to support interoperability and a consistent interface between infrastructure components. The Open19 Foundation is also creating specifications to define a common form factor for servers across the industry for “a new generation of open data centers and edge solutions.”
Power consumption considerations
The amount of power consumed by edge and micro data centers depends significantly on their cooling requirements, which are largely determined by where they are deployed, according to the TIA. The locations of the most common implementation types will tend to be fairly consistent, enabling some generalizations about their relative power requirements.
The TIA says edge data centers will repurpose existing power systems. They may use AC or DC systems, possibly including existing plant in some locations. Locations and size may, however, preclude power redundancy. A mix of traditional mechanical and free cooling systems is expected to be used, and the sizing sketch after the list below shows how those cooling choices translate into total power draw.
· Cell tower base: Since cell towers, light poles, and other structures that may house edge resources are located outdoors, they may have seasonal cooling requirements. They are likely to have limited space for mechanical cooling systems, and will need to utilize the existing plant.
· Smart buildings: Deployments to provide services to the businesses or households in a given building will tend to be located indoors, and in most cases will need to operate in environments which are retrofitted to support them, such as a service closet. Alternatively, they may be located in environments without dedicated infrastructure, which may have limited support for traditional mechanical cooling methods. Technologies for cooling micro data center cabinets with limited power supply and a small form factor may be a potential area for innovation.
· Local area: Deployments serving enterprise campuses, factories, and other larger-scale user bases will generally require and accommodate larger installations, and may offer location options that provide “free” cooling. To the extent that they support larger form factors, the options for mechanical cooling are also likely to be greater than other deployment types.
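As a rough illustration of how location-driven cooling choices feed into total draw, the sketch below applies assumed PUE (power usage effectiveness) values to a fixed IT load; both the PUE figures and the IT load are illustrative assumptions rather than measurements of any particular deployment type.

    # Illustrative total-power estimate: facility power = IT load x PUE.
    # PUE values are assumptions chosen to show the spread between free cooling
    # and constrained mechanical cooling, not measured figures.
    ASSUMED_PUE = {
        "local area site with free cooling": 1.2,
        "indoor smart-building cabinet (mechanical cooling)": 1.6,
        "outdoor tower-base enclosure (seasonal peak)": 1.8,
    }

    def facility_kw(it_load_kw: float, pue: float) -> float:
        """Total facility draw given an IT load and a power usage effectiveness."""
        return it_load_kw * pue

    IT_LOAD_KW = 8.0  # hypothetical micro data center IT load
    for site, pue in ASSUMED_PUE.items():
        print(f"{site}: ~{facility_kw(IT_LOAD_KW, pue):.1f} kW total "
              f"for {IT_LOAD_KW} kW of IT (PUE {pue})")

The spread between the best and worst cases is the cooling overhead, which is why the choice of location and cooling method matters as much as the IT load itself.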
There are many different visions of what edge data centers will be like as edge computing deployments grow. Which of these visions comes closest to how computing capabilities are actually delivered at the edge will only become clear as uncertainties related to applications, implementation, and the evolution of technology are resolved. What does seem certain is that as edge computing takes off, edge data centers will be a major growth area.