
The Node.js Developer’s Guide To Kubernetes – Part I



In this step-by-step blog post, we will illustrate how to integrate Node.js applications with Docker and run them in Kubernetes clusters, covering the following topics:

  • Dockerizing an existing Node.js application.
  • Deploying it using Docker Compose.
  • Integrating and deploying to a Kubernetes cluster.
  • Inspecting and scaling.

We will work with an existing Node.js application that can be found in the following GitHub repository:


Before proceeding, make sure that your environment satisfies these requirements. Start by installing the following dependencies on your machine:

  • Docker
  • Kubernetes
  • NPM
  • Git

Run the application locally

$>  docker run -d -p 27017:27017 mongo:4.2

The above command will create a MongoDB container and expose it on port 27017 of the host machine.

The next step is to clone the project repository locally and run it using the below commands:

$> git clone
$> cd node-easy-notes-app
$> npm install
$> node server.js

Once the application starts, you can check and verify that it’s running using the following curl command:

$> curl -fs

Dockerize the application

To run our application in a Docker container, we need to create a Dockerfile that describes how to build the Docker image. The Dockerfile instructions should cover the following points:

  • The base Docker image.
  • Copying the application source code to the Docker image.
  • Installing the application’s dependencies.
  • Setting up the application start command.

The below Dockerfile can be used to build the Docker image for the Node.js application. The file needs to be stored in the root directory of the application.

FROM node:12.0-slim
WORKDIR /usr/src/app
COPY . .
RUN npm install
CMD [ "node", "server.js" ]
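A related common practice (not shown above) is to add a .dockerignore file next to the Dockerfile so that the local node_modules directory and Git metadata are not copied into the image; npm install inside the build then produces a clean dependency tree. A typical minimal example:

```
node_modules
npm-debug.log
.git
```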

Building the Docker image for the application can be done using the below command:

$> docker build -t ${namespace}/${imagename}:${tag} .

It’s a common practice to manage the build of Docker images using a makefile. The main benefit behind this recommendation is that a makefile provides a standard interface for building Docker images and hides the complexity of the underlying commands.

This task can be achieved by adding the following Makefile to the repository.

# Image namespace
NAMESPACE ?= default_namespace
# Image name
NAME ?= node-easy-notes-app
# Image default tag
IMAGE_TAG ?= latest

IMAGE_NAME = ${NAMESPACE}/${NAME}:${IMAGE_TAG}

build:
	docker build -t ${IMAGE_NAME} .

Building the Docker image can now be done using one of the following commands, based on your needs:

$> make build # this command will use the default values 
$> IMAGE_TAG=v1 make build # this command will use v1 as an image tag
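If the image also needs to be published to a registry, the same makefile interface extends naturally. A hypothetical push target reusing the variables above (this target is a suggested addition, not part of the original makefile):

```makefile
push: build
	docker push ${IMAGE_NAME}
```

It would be invoked the same way, for example IMAGE_TAG=v1 make push.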

Run the application using Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications.

With Compose, the application’s services are defined and configured in a YAML file. Then, with a single command, services can be created and started.

To be able to run the Node.js application with docker-compose, we need to define two services within the docker-compose file: one for the MongoDB service and the second for the Node.js application. The below compose file can be used to deploy both services.

version: '3.7'

networks:
  nodeJsNet:
    external: false
    name: 'nodeJsNet'

volumes:
  mongo-db:

services:
  mongodb:
    image: mongo:4.2
    restart: always
    networks:
      - nodeJsNet
    volumes:
      - mongo-db:/data/db

  node-easy-notes-app:
    image: node-easy-notes-app:latest
    restart: always
    networks:
      - nodeJsNet
    environment:
      MONGO_URL: 'mongodb://mongodb:27017/easy-notes'
    ports:
      - 8080:3000

As shown in the above snippet, the docker-compose file has the following configurations:

  • Two services are defined, one for MongoDB and the other for the Node.js application.
  • Both applications are connected to the same network, and as a result, they can reach each other.
  • MongoDB is configured to use a volume to store the data. As a result, when the container crashes or exits, the data will not be lost.
  • The Node.js application is configured to use the MongoDB instance via environment variables.
  • The Node.js application is accessible from the host machine on port 8080.
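On the application side, the MONGO_URL variable injected by docker-compose can be read at startup. Below is a minimal sketch of this pattern; the local fallback URL is an assumption for illustration, not taken from the repository's actual configuration:

```javascript
// Prefer the connection string injected by docker-compose; fall back to a
// local MongoDB instance for development outside of containers.
const mongoUrl = process.env.MONGO_URL || 'mongodb://localhost:27017/easy-notes';

console.log(`Connecting to ${mongoUrl}`);
```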

It is easy to learn and deploy Docker services with docker-compose because all the service definitions and configurations can be included in a single, simple YAML file. In addition, docker-compose helps in creating and working with applications in development environments.

However, it is not recommended to use docker-compose for deploying, maintaining, and managing production Docker services because it is missing many features needed for production services, such as support for zero-downtime deployments and running services natively in Docker clusters. On the other hand, container orchestrators such as Docker Swarm and Kubernetes are designed to handle Docker production workloads. Below are some advantages of using container orchestrators:

  • They are built to run complex applications with a large number of microservices.
  • They have native support for many deployment-related features such as zero-downtime deployments and resource management.
  • They support clustering Docker nodes, high availability, and auto-recovery.

In the next part, we will start using Kubernetes, one of the most widely known container orchestration frameworks. Then we will dive deep into the details of deploying and scaling Kubernetes Services.

Kubernetes Overview

Kubernetes (also called K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. Kubernetes is commonly used by many individuals and companies for the following reasons:

  • Open Source: The project is totally free. You can download, install and use it freely. Moreover, you can access and contribute to its source code.
  • Huge community and support: Many individuals and well-known companies such as Red Hat, Google, Microsoft, IBM, and Cisco contribute to the K8s source code. This adds a layer of confidence in the quality of the source code.
  • Managed cloud solutions: Many cloud providers such as AWS, Google and DigitalOcean are providing Kubernetes managed services.
  • A multitude of features: Kubernetes supports a wide range of features such as isolating applications in namespaces, network policies, storage options, rolling updates, and many other capabilities.

Kubernetes consists of multiple components, and each of these components is designed to serve a specific purpose. Below is a brief description of the Kubernetes components:

  • Etcd cluster: stores information about the cluster.
  • Kube-scheduler: responsible for scheduling applications or containers on nodes.
  • Controllers: take care of different functions such as the node controller and the replication controller.
  • Kube-apiserver: responsible for orchestrating all operations within the cluster.
  • Kubelet: the primary node agent. It listens for instructions from the kube-apiserver and manages containers on the registered nodes.
  • Kube-proxy: helps in enabling communication between services within the cluster.
  • Kubectl: the Kubernetes command-line client, used to create, edit, update, delete, and view Kubernetes resources.

More information about the Kubernetes components and their roles can be found on the official Kubernetes website. Below is a diagram that shows a Kubernetes cluster with all the components tied together.


Kubernetes Resources

Using the Kubernetes command-line tool kubectl (or other K8s clients, or the kube-apiserver directly), API resources can be created on Kubernetes.

Kubernetes supports more than 50 different resource types for managing the cluster such as Deployment and Service. You can view the full list using kubectl:

$> kubectl api-resources

It is not mandatory to use all the defined resources to deploy services in a Kubernetes cluster. In fact, the usage of Kubernetes resources is highly dependent on the nature and requirements of the services. Below is a brief summary of the most used resources and the ones that we will be using during this post.

  • A Pod is the basic execution unit of a Kubernetes application. Pods can run one or multiple Docker containers.
  • A ReplicaSet is responsible for setting and managing replications of Pods and maintaining a stable state of the replications.
  • A Deployment provides us with the capability to upgrade the underlying instances seamlessly using rolling updates, undo changes, and pause and resume changes as required.
  • A Service enables the communication between various components within and outside of the application.
  • An Ingress manages external access to the services in a cluster.
  • A PersistentVolume is a piece of storage in the cluster that has been provisioned.
  • A PersistentVolumeClaim is a request for PersistentVolume by a user or an application.
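To make these resource types concrete, below is a minimal sketch of a Deployment and a Service for the sample application. The names, replica count, and ports are illustrative assumptions; the real manifests are built in the next part of this series.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-easy-notes-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-easy-notes-app
  template:
    metadata:
      labels:
        app: node-easy-notes-app
    spec:
      containers:
        - name: node-easy-notes-app
          image: node-easy-notes-app:latest
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: node-easy-notes-app
spec:
  selector:
    app: node-easy-notes-app
  ports:
    - port: 80
      targetPort: 3000
```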

Combining the powerful features of Node.js, Docker, microservices, and Kubernetes

If you are using Node.js to develop your applications, you’re very likely building microservices. Node.js is a lightweight technology that goes well with a microservices architecture. Many built-in Node.js features allow communicating with other services, such as databases, in a performant and fast way.

To obtain the full advantages of a microservices architecture, using Docker and a robust orchestration technology like Kubernetes is certainly the best choice. Combining the performance of Node.js and the resilience of Kubernetes is what we are discovering in this tutorial series. In this first post, we built a local Docker environment for development.

We also started discovering Kubernetes as an alternative to Compose to be used in production environments. In the second part of this series, we are going to discover the internals of Kubernetes and guide you through the details of creating, deploying, and scaling stateless and stateful Node.js applications.

Asad Faizi

Founder CEO, Inc

