The 3 biggest Kubernetes deployment mistakes you can make

By Liran Haimovitch

The last decade brought a wave of digital transformation that accelerated the move toward cloud-native tech, specifically Kubernetes. We could talk about this move’s benefits (and downsides!) for more than a few blogs, but that’s not why we’re here. We’re here to talk about the less savory side of cloud-native transformation—you know, all the things that can go wrong. More specifically, the deployment mistakes you can make when moving to Kubernetes, and how to avoid making them in the first place.

As someone who has spent my whole career deep in the coding trenches with developers, I’ve hand-picked the top three mistakes teams make when moving to Kubernetes. So, without further ado, let me share these hard-earned lessons so you can avoid the mistakes like the plague!

Kubernetes deployment mistake #1: Managing Kubernetes from the command line

Kubernetes deployments almost feel like magic the first time you get them working. You use a (hopefully) short YAML file to specify the application you want to run, and Kubernetes just makes it so. Make a change to the file, apply it, and it will update in near real-time.
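As a concrete (hypothetical) example, a minimal Deployment manifest might look like this; the name and image are purely illustrative:

```yaml
# deployment.yaml - a minimal, hypothetical web app Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image works here
          ports:
            - containerPort: 80
```

Apply it with `kubectl apply -f deployment.yaml` and Kubernetes reconciles the cluster to match; edit the file, re-apply, and the change rolls out.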

But as powerful as kubectl is, and as instructive as it can be to explore Kubernetes using it, you should not come to rely on kubectl too much. Of course, you’ll return to it (or its amazing cousin, k9s) when you need to troubleshoot issues in Kubernetes, but don’t use it to manage your cluster.

Kubernetes was made for the configuration-as-code paradigm, and all those YAML files belong in a Git repo. You should commit any and all of your desired changes to a repo and have an automated pipeline deploy the changes to production. Your options range from CI jobs that run kubectl apply on merge to dedicated GitOps controllers such as Argo CD or Flux.
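One popular option is a GitOps controller such as Argo CD, which watches a Git repo and keeps the cluster in sync with its manifests. Here is a sketch of an Argo CD Application; the repo URL, paths, and names are placeholders:

```yaml
# application.yaml - hypothetical Argo CD Application tracking a Git repo
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-manifests.git  # placeholder repo
    targetRevision: main
    path: apps/my-app          # directory holding the YAML manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated: {}              # keep cluster state synced to the repo
```

With this in place, a merged pull request is the deployment; nobody needs to run kubectl against production by hand.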

Kubernetes deployment mistake #2: Forgetting all about resources

Let’s assume all your workloads are up and running with all the goodness of Kubernetes and configuration as code. But now you’re orchestrating containers, not virtual machines. How do you ensure they get the CPU and RAM they need? Through resource allocation!

Resource requests

What happens if you forget to set resource requests?

Kubernetes will schedule your pods (the units that run your workloads, in Kubernetes-speak) as if they need no resources at all, packing them onto a handful of nodes. They won’t get the CPU and memory they need, and the cluster autoscaler has no signal to add nodes when they do.

What are resource requests?

Resource requests tell the scheduler how much CPU and memory you expect your application to consume. When assigning pods to nodes, the scheduler budgets against these requests, placing a pod only on a node with enough unreserved capacity to meet them.

Resource limits

What happens if you forget to set resource limits?

A single pod may consume all the CPU or memory available on the node, causing its neighbors to be starved of CPU or hit Out of Memory errors.

What are resource limits?

Resource limits let the container runtime know how much CPU and memory you allow your application to consume. If your application hits its CPU limit, it gets throttled: it receives that much CPU time but no more. Unfortunately (for the application), if it hits the memory limit, it will be OOMKilled by the container runtime.

So, go ahead and define requests and limits for each of your containers. If you aren’t sure, take an educated guess, and keep in mind that overestimating is safer than underestimating. Either way, monitor the actual resource usage of your pods and containers with your cloud provider’s monitoring or APM tools, and tune the values over time.
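Put together, requests and limits live on each container spec. The values below are example guesses to tune against observed usage, not recommendations:

```yaml
# Container spec fragment - illustrative request/limit values
containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:
        cpu: 250m        # scheduler reserves a quarter of a core
        memory: 256Mi    # scheduler reserves 256 MiB on the node
      limits:
        cpu: 500m        # throttled beyond half a core
        memory: 512Mi    # OOMKilled if usage exceeds 512 MiB
```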

Kubernetes deployment mistake #3: Leaving the developers behind

Immutable infrastructure and clean upgrades. Easy scalability. Highly available, self-healing services. Kubernetes provides you with lots of value directly out of the box. Unfortunately, this value might not be a priority for the developers working on your product. Your developers have other concerns:

  • How do I build and run my code?
  • How do I understand what my code is doing in development, testing, and integration?
  • How do I investigate bugs reported in QA and production environments?

For many of these tasks, Kubernetes pulls the rug out from under the developer. Running development environments locally is much harder because many dev and test workloads are moved to the cloud. The code-level visibility developers rely on is often poor in these environments, and direct access to the application and its filesystem is virtually impossible.

Successful Kubernetes adoption requires the right tools

To lead a successful adoption of a new platform such as Kubernetes, you need everyone to see the value in it. But don’t forget that developers require the right tools to keep up with their code and understand what it’s doing as it’s running.

Get started on your Kubernetes journey with Dynatrace.

The post The 3 biggest Kubernetes deployment mistakes you can make appeared first on Dynatrace news.
