CXO Insight: Do we really need Kubernetes at the edge?
Last week I attended Edge Field Day 1, a Tech Field Day event focused on edge computing solutions. Some of the sessions really made me think.
Edge infrastructures are quite different from anything in the data center or cloud: the further from the center you go, the smaller the devices become. Less CPU power, less memory and storage, and limited network connectivity pose serious challenges. And that's before you consider the physical and logical security requirements, which matter less in the data center or cloud, where the perimeter is well protected.
Additionally, many edge devices remain in the field for several years, posing environmental and lifecycle challenges. To further complicate matters, edge computing resources can run mission-critical applications, which must be built for efficiency and resiliency. Containers and Kubernetes (K8s) might be a good fit here, but do you really want to bring the complexity of Kubernetes to the edge?
Assessing the value of Kubernetes at the edge
To be fair, edge Kubernetes has been around for some time. Several vendors now offer Kubernetes distributions optimized for edge use cases, as well as platforms to manage large fleets of small clusters. The ecosystem is growing, and many users are adopting these solutions in the field.
But does edge Kubernetes make sense? Or, more precisely, how far from the cloud-based core can you deploy Kubernetes before it becomes more trouble than it's worth? Kubernetes adds a layer of complexity that must itself be deployed and managed. And there are additional things to keep in mind:
- Even if an application is developed with microservices in mind (such as small containers), it is not always large and complex enough to require a full orchestration layer.
- Kubernetes often needs additional components to ensure redundancy and data persistence. In a resource-constrained scenario where only a few containers are deployed, the orchestration layer might consume more resources than the application itself.
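To make that second point concrete, here is a back-of-the-envelope sketch. All the memory figures are illustrative assumptions, not measurements of any particular distribution; the point is only how quickly the control plane dominates on a small node:

```python
# Rough capacity-planning sketch: what fraction of a small edge node's
# memory does the orchestration layer itself claim? All figures below
# are illustrative assumptions, not measured values for any product.

def orchestration_share(node_mem_mb, control_plane_mb, per_app_mb, n_apps):
    """Return the fraction of in-use memory consumed by orchestration
    rather than by the applications themselves."""
    app_total = per_app_mb * n_apps
    used = control_plane_mb + app_total
    if used > node_mem_mb:
        raise ValueError("workload does not fit on the node")
    return control_plane_mb / used

# A 1 GB edge node, an edge-optimized control plane assumed at ~512 MB,
# and two small application containers at 64 MB each:
share = orchestration_share(node_mem_mb=1024, control_plane_mb=512,
                            per_app_mb=64, n_apps=2)
print(f"orchestration consumes {share:.0%} of the memory in use")  # 80%
```

On a three-node cluster running dozens of containers the same arithmetic flips in Kubernetes' favor, which is exactly the size-dependent trade-off at issue here.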
In the GigaOm report covering this space, we found that most vendors are focused on delivering Kubernetes management at scale. The approaches differ, but all include some form of automation and, lately, GitOps. This addresses infrastructure management, but it doesn't reduce resource consumption or really enable container and application management, which are still concerns at the edge.
Application management can be solved with additional tools, the same ones you use for the rest of the applications on your Kubernetes clusters. Resource consumption, however, cannot be solved as long as Kubernetes stays in the stack. This is particularly true when, instead of three nodes, you have two or one, and maybe that one node is also very small.
Alternatives to Kubernetes at the edge
Back at the Tech Field Day event, one approach I found compelling came from Avassa. They offer an end-to-end container management platform that doesn't need Kubernetes to work. It does everything you would expect from a small container orchestrator at the edge, while removing complexity and unnecessary components.
As a result, the edge-tier component has a small footprint, even compared to edge-optimized Kubernetes distributions. It also implements management and monitoring capabilities that provide visibility into important aspects of the application, including deployment and lifecycle management. At the moment, Avassa offers something quite differentiated, even as other options to take Kubernetes out of the picture, including WebAssembly, begin to emerge.
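To illustrate how small the core problem really is, here is a minimal reconciliation loop of the kind such Kubernetes-free platforms build on: keep a declared set of workloads running and restart whatever dies. The sketch supervises plain processes (`sleep`) so it stays self-contained; a real edge agent would invoke a container runtime such as Podman or Docker instead. This is a hypothetical illustration of the general idea, not Avassa's actual implementation:

```python
import subprocess

# Desired state: workload name -> command. In a real edge agent these
# would be container-runtime invocations; plain `sleep` keeps the
# sketch runnable anywhere. Names and commands are illustrative.
DESIRED = {
    "sensor-reader": ["sleep", "60"],
    "uplink-agent": ["sleep", "60"],
}

def reconcile(running):
    """Start anything declared but not running; restart anything that exited."""
    for name, cmd in DESIRED.items():
        proc = running.get(name)
        if proc is None or proc.poll() is not None:  # missing or dead
            running[name] = subprocess.Popen(cmd)
    return running

running = reconcile({})            # first pass: start everything
running["uplink-agent"].kill()     # simulate a crashed workload
running["uplink-agent"].wait()
running = reconcile(running)       # second pass: the dead one is restarted
print(all(p.poll() is None for p in running.values()))  # True

for p in running.values():         # clean up the demo processes
    p.kill()
```

A production agent adds scheduling, image distribution, secrets, and observability on top, but none of that requires a Kubernetes control plane on every two-node site.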
Key actions and conclusions
In short, many organizations are evaluating solutions in this space, and edge applications are often written to very precise requirements. Containers are the best way to implement them, but containers are not synonymous with Kubernetes.
Before installing Kubernetes at the edge, it's important to check whether it's worth it. If you've already implemented it, you've probably found that its value increases with the size of the application, but decreases with distance from the data center and with the shrinking size and number of edge compute nodes.
Therefore, it may be worth exploring alternatives that simplify the stack and thus improve the TCO of the entire infrastructure. This is even more true if the IT team in charge of the edge infrastructure is small and has to interact daily with the development team. The skills shortage in the industry, particularly around Kubernetes, makes it essential to consider options.
I’m not saying that Kubernetes is unsuitable for edge applications. However, it is important to weigh the pros and cons and establish the best course of action before beginning what can be a challenging journey.