Kubernetes 101 Part 2/4: Containers vs Pods
We’ve already seen how Kubernetes allows you to build scalable distributed applications by allocating work to different worker nodes in your Kubernetes cluster. But how do you define work? And how do you ensure that the dependencies for different units of work are managed?
This is where pods and containers enter the story. Many developers are already familiar with containers from working with Docker or similar tools. Containers allow you to create a well-defined runtime that packages the code you write, along with its operating system and runtime dependencies, into an isolated context. For example, a web server written in Node.js would include the server code written by the developer, the Node.js binary necessary to run it, and all of its dependencies down to the version of the operating system.
Containers let you create isolated units of work that can be run independently. To create complex applications, you often need to combine multiple containers. For example, the web server above might need a database to store long-term information, which would be a separate container that the web server depends on. Or a web application that uses machine learning to label photos might have separate containers for the core server, the model for labeling, and for handling the photos.
This is what pods are designed for. Pods let you take multiple containers and specify how they come together to form your application: which containers depend on which others, and the interfaces they use to communicate.
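As a sketch of the idea, a pod specification can simply list several containers side by side; containers in the same pod share a network namespace, so they can reach each other on localhost. The names and images below are hypothetical:

```
# Hypothetical two-container pod: a web server plus a
# log-collecting sidecar that share the pod's network.
apiVersion: v1
kind: Pod
metadata:
  name: photo-app
spec:
  containers:
  - name: web-server
    image: my-app/node-server
  - name: log-collector
    image: my-app/log-collector
```

Grouping containers this way keeps each image focused on one job while letting Kubernetes schedule and manage them as a single unit.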
In this tutorial, we’ll take a closer look at how to construct containers and combine them into pods. These components are the core building blocks of a Kubernetes architecture, so even if you’re familiar with the basics, it’s worthwhile to invest some time into really understanding these concepts.
At their heart, containers are a controlled execution environment. They allow you to define that environment from the ground up: starting from the operating system to the individual versions of libraries you want to use, to the version of your code you want to add. You also specify the command you want to run.
Developers are most familiar with containers through Docker, the best-known software for managing and running containers. Docker lets you define an image in a Dockerfile, which follows the pattern above. In the following Dockerfile, we start from an Ubuntu base image, install the dependencies for Node.js, and add our code in the file “myapp.js”.
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nodejs
RUN mkdir /var/www
ADD myapp.js /var/www/myapp.js
CMD ["/usr/bin/node", "/var/www/myapp.js"]
To build this image, run the docker build command and give the image a name using the -t flag:
docker build -t my-app/node-server .
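Before handing the image to Kubernetes, you can sanity-check it locally with docker run. Assuming the server listens on port 8080 (an illustrative choice), something like:

```
docker run --rm -p 8080:8080 my-app/node-server
curl http://localhost:8080/
```

The -p flag maps the container's port onto the host, so you can hit the server from a browser or curl; --rm cleans up the container when it exits.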
Docker isn’t the only container tool Kubernetes works with. You can use other runtimes such as containerd or CRI-O, though these are less well-known. The tool that actually runs your containers is called the container runtime, and it’s configured on each node: the kubelet is pointed at the runtime’s CRI (Container Runtime Interface) socket.
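For example, on a node using containerd, the kubelet is typically pointed at the runtime's socket like this (the exact socket path can vary by distribution):

```
kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

Because the runtime sits behind this interface, the pod definitions you write don't change when the runtime does.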
A unit of work in Kubernetes is not a container but a pod. A pod wraps one or more containers and tells Kubernetes how to deploy and run them. You define a pod in a YAML file that specifies the containers in the pod and how to run them, along with any extras such as attached storage volumes or networking parameters.
A sample YAML file for running our node-server container would be:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: my-app/node-server
    command: ["/usr/bin/node", "/var/www/myapp.js"]
This defines a pod named myapp-pod. The specification is simple: it contains only the single container we built earlier, along with the command needed to run the app. This is the simplest kind of pod specification, with one container and no special networking or volumes.
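With the YAML saved to a file (say pod.yaml, an arbitrary name), you would hand it to the cluster with kubectl and then check that the pod is running:

```
kubectl apply -f pod.yaml
kubectl get pods
kubectl describe pod myapp-pod
```

kubectl get pods shows the pod's current status, and kubectl describe reports the scheduling and container-start events, which is usually the first place to look if the pod fails to come up.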