Kubernetes Day-2 Operations – Part III: Network & Traffic Management, Auto Scaling, Associating Pods to Nodes & Integration with Your Legacy VMs
Part III of this series covers some of the major performance bottlenecks Kubernetes clusters run into when operating at scale. A flaky network or a sudden onslaught of traffic is bad for performance in any case. At times, associating pods with nodes can become a major headache for an inexperienced developer. And who told you Kubernetes auto scales out of the box?
Perhaps we should discuss these pain points in more detail.
Network & Traffic Management
Networks are inherently unreliable, and most applications can shrug off the occasional transient fault. When you’re running business-critical applications, however, there is more at stake. A transient fault can trigger an undesirable action if the application encounters it in the midst of a critical transaction. A financial transaction that runs into a transient fault may result in a loss of trust between the parties.
When working with Kubernetes, developers must manually configure traffic management policies for each Node, Service, or Pod. There could be hundreds of those. In addition, they need to add service information to service mesh resources (virtual services, destination rules, and so on).
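To give a feel for the per-service effort involved, here is a minimal sketch of the kind of mesh resources a developer has to hand-write for a single service, assuming an Istio service mesh; the service name `payments` and all values are placeholders:

```yaml
# Hypothetical VirtualService adding a retry policy for one service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: payments
spec:
  hosts:
    - payments
  http:
    - route:
        - destination:
            host: payments
            subset: v1
      retries:
        attempts: 3
        perTryTimeout: 2s
        retryOn: 5xx,connect-failure
---
# Matching DestinationRule with a circuit breaker (outlier detection).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: payments
spec:
  host: payments
  subsets:
    - name: v1
      labels:
        version: v1
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

Multiply this pair of resources by every service in the cluster, and the manual workload becomes clear.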
Network and traffic management are critical to a cloud application, yet Kubernetes makes them a long and tedious job for developers.
Auto Scaling
The growing adoption of containers is only mounting pressure on the developer community to get familiar with Kubernetes autoscaling as soon as possible. Kubernetes doesn’t support auto scaling out of the box; developers have to activate a Kubernetes add-on called Metrics Server (and probably other tools). Configuring Metrics Server shouldn’t be a big task, except that each public cloud provider has its own set of configuration settings.
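Even once Metrics Server is running, scaling doesn’t just happen; the developer still has to define a HorizontalPodAutoscaler per workload. A minimal sketch, assuming a Deployment named `web` already exists and Metrics Server is installed:

```yaml
# Hypothetical HorizontalPodAutoscaler scaling the "web" Deployment
# on CPU utilization; all names and thresholds are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Without Metrics Server (or another metrics source), this object simply sits there and never scales anything, which is exactly the trap the paragraph above describes.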
Hint: Multi-cloud application deployment can be a bad idea if you’re autoscaling Kubernetes and don’t have the technical skills to work through each cloud provider’s complexity, along with how autoscaling behaves on each of them.
Did I mention that after starting a node, you must manually bootstrap it to join the Kubernetes cluster? Being a developer was never this challenging.
Associating Pods to Nodes
As you might expect by now, Kubernetes does nothing to make this association happen automatically. A developer must label nodes, add the applicable selectors to the deployments, and repeat the exercise for other nodes.
They must configure convoluted affinity rules to associate a group of deployments with a group of nodes. This exercise grows in complexity as the pool of nodes becomes more diverse.
1: Add labels to the node objects
2: Use those labels in the node selectors
3: Label all the nodes that have GPUs
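The steps above might be sketched as follows, assuming the GPU nodes were first labeled with something like `kubectl label node <node-name> hardware=gpu`; the Deployment name, label key, and image are all placeholders:

```yaml
# Hypothetical Deployment fragment pinning Pods to GPU-labeled nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: trainer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: trainer
  template:
    metadata:
      labels:
        app: trainer
    spec:
      # Simple form: a nodeSelector matching the label from step 1.
      nodeSelector:
        hardware: gpu
      # Equivalent but more expressive form: a node affinity rule.
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: hardware
                    operator: In
                    values: ["gpu"]
      containers:
        - name: trainer
          image: example/trainer:latest
```

In practice you would use either the nodeSelector or the affinity rule, not both; the affinity form is what grows convoluted as the node pool diversifies.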
As I said, while Kubernetes is a boon for organizations, it is not very friendly to developers. Not only do they have to learn a new technology with a steep learning curve, but they also have to undertake far more manual tasks than ever before. Coding is already laborious; they shouldn’t be spending the whole day associating pods with specific nodes or writing policies.
Integration with VM (Legacy) Services
When deciding on Kubernetes as a platform and making an architectural shift to Docker containers, there is a critical need to manage and secure the communication between services, including integration with VM (legacy) services.
In the use case where your Docker containers need to call one or more VM services running behind a firewall, each such service requires its own manual, usually complex, configuration.
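As one illustration of that per-service configuration, in an Istio mesh each legacy VM endpoint typically needs its own ServiceEntry before in-mesh workloads can reach it; the hostname and port below are placeholders:

```yaml
# Hypothetical ServiceEntry exposing a legacy VM service to mesh workloads.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: legacy-billing-vm
spec:
  hosts:
    - billing.legacy.internal   # placeholder DNS name of the VM service
  location: MESH_EXTERNAL       # the VM runs outside the mesh
  ports:
    - number: 8443
      name: tls
      protocol: TLS
  resolution: DNS
```

Securing the connection (certificates, firewall rules, egress policies) comes on top of this, and it has to be repeated for every VM service.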
CloudPlex addresses the challenges of Kubernetes day-2 operations
When it comes to network and traffic management, developers no longer have to worry about configuring retry policies, circuit breakers, and fault injection. CloudPlex automates the process of creating and configuring the required resources (virtual services, destination rules) for each container. In addition, CloudPlex adds the service information to the service mesh resources automatically.
By fixing these shortcomings of Kubernetes, CloudPlex makes developers’ lives a lot easier.