GitOps: FluxCD vs. ArgoCD vs. Spinnaker vs. Jenkins X
Software development has transformed from slow, monolithic lifecycles into rapid, flexible ones. Evolving product requirements and rapid technological advancements have created the need for frequent changes to both applications and infrastructure.
Therefore, GitOps has gained popularity by allowing users to incorporate infrastructure changes into their Continuous Integration and Continuous Delivery (CI/CD) pipelines. Furthermore, it helps create end-to-end product delivery processes that manage application and infrastructure changes from a single pipeline. Many tools can power GitOps pipelines; in this article, we will look at some of the leading ones.
What is GitOps?
Before looking at GitOps tools, we need to understand GitOps. In its simplest form, GitOps is the practice of combining DevOps techniques like version control and CI/CD pipelines to automate infrastructure. GitOps allows users to define infrastructure declaratively and manage it with the Git version control system, automating changes to application infrastructure.
Pioneered by Alexis Richardson, the co-founder and CEO of Weaveworks, GitOps began its life as a method for Kubernetes cluster management and application delivery. It revolves around the following four basic principles.
- The entire system is configured in a declarative manner.
- Each system state (configuration) is versioned in Git.
- Approved configuration changes are automatically applied to the underlying infrastructure.
- The desired system state is constantly being monitored, and any divergence is alerted and corrected.
All the infrastructure changes in a delivery pipeline are created declaratively and pushed to a version-controlled repository with Git as the single source of truth. Then these changes are deployed as a part of the overall CI/CD pipeline. GitOps offers significant advantages such as ease of development, quick updates, easy rollbacks, standardization, consistency for infrastructure, and a completely trackable and auditable change history.
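As a simple illustration, the desired state can be nothing more than a plain Kubernetes manifest committed to the repository. The deployment name, namespace, and image below are hypothetical placeholders:

```yaml
# deploy/podinfo.yaml -- a hypothetical desired state stored in Git.
# The GitOps operator applies this file and keeps the cluster in sync with it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: ghcr.io/stefanprodan/podinfo:6.0.0  # a pinned tag keeps the change history auditable
          ports:
            - containerPort: 9898
```

Under this model, bumping the image tag in a pull request is a deployment, and `git revert` is a rollback.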
Comparing GitOps Delivery Tools
When it comes to GitOps delivery tools, most CD tools can be configured to support a GitOps workflow. However, if you are targeting Kubernetes and want to leverage all the benefits offered by GitOps, the best option is a specialized delivery tool built with GitOps in mind. In this section, we will look at four delivery tools that can be used to power a GitOps delivery pipeline: FluxCD, ArgoCD, Spinnaker, and Jenkins X.
FluxCD
Flux is a Continuous Delivery tool developed to manage applications and configurations in a Kubernetes cluster. Flux positions itself as a tool for application deployment and progressive delivery. It is built from the ground up on the native Kubernetes API extension system, with a GitOps Toolkit that directly facilitates GitOps workflows. As a CNCF incubating project, Flux is continuously developed to support the latest Kubernetes features and has been adopted by many organizations.
The Flux GitOps Toolkit comes with five major components, the APIs and controllers that make up the Flux runtime.
- Source Controller
The primary function of the Source controller is artifact acquisition: it fetches configuration sources such as Git repositories, Helm repositories, and S3-compatible buckets and makes them available to the other controllers as artifacts.
- Kustomize Controller
The Kustomize controller powers continuous delivery pipelines for infrastructure and workloads defined as Kubernetes manifests and assembled with Kustomize. It interacts with the Source controller to determine the desired state of the cluster.
- Helm Controller
As the name suggests, the Helm controller manages Helm chart releases with Kubernetes manifests in a declarative manner. It watches for a Kubernetes custom resource named HelmRelease and carries out the necessary modifications according to the state defined in the HelmRelease.
- Notification Controller
The Notification controller handles inbound and outbound events in Flux. It receives events from external systems such as source control platforms, notifies the other controllers of changes, and sends outbound notifications to systems like Slack and Microsoft Teams once the changes are applied.
- Image Automation Controllers
Image automation comes with two components: the image-reflector controller and the image-automation controller. The image-reflector controller scans image repositories and reflects the image metadata into Kubernetes resources, while the image-automation controller updates the YAML files and commits the changes to the Git repository.
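To see how the controllers fit together, here is a sketch of the custom resources they reconcile. The repository URL, names, and namespaces are placeholders, and the API versions reflect the GitOps Toolkit beta APIs, which may differ in newer Flux releases:

```yaml
# A GitRepository source: the Source controller polls this repository for changes.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: app-repo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/app-config  # placeholder repository
  ref:
    branch: main
---
# A Kustomization: the Kustomize controller applies manifests from the source above.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy
  prune: true  # remove cluster resources that were deleted from Git
  sourceRef:
    kind: GitRepository
    name: app-repo
---
# A HelmRelease: the Helm controller reconciles this into a Helm release.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      version: ">=6.0.0"
      sourceRef:
        kind: HelmRepository
        name: podinfo  # assumes a HelmRepository source with this name exists
        namespace: flux-system
  values:
    replicaCount: 2
```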
Core Features of Flux
- Multi-source configurations from Git repositories, Helm charts, Helm repositories, and AWS S3 compatible buckets.
- Native Kustomize and Helm Chart support.
- Integration with Kubernetes Role-Based Access Control.
- Supports policy-driven validation.
- Supports multi-tenancy and multi-cluster setups.
- Out-of-the-box integrations with source control platforms, notification services, etc.
- Automated Image updates and dependency management for infrastructure and workloads.
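The automated image updates mentioned above are driven by their own custom resources. A sketch with placeholder names and image, using the `image.toolkit.fluxcd.io/v1beta1` API (which may differ in newer Flux releases):

```yaml
# Scan the container registry for new tags.
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: app-image
  namespace: flux-system
spec:
  image: ghcr.io/example/app  # placeholder image
  interval: 5m
---
# Select the latest tag matching a semver range.
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: app-policy
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: app-image
  policy:
    semver:
      range: 1.x
---
# Commit the selected tag back to the Git repository.
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: app-automation
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: app-repo  # assumes a GitRepository source with this name
  git:
    checkout:
      ref:
        branch: main
    commit:
      author:
        name: fluxbot
        email: flux@example.com  # placeholder committer identity
    push:
      branch: main
  update:
    path: ./deploy
    strategy: Setters
```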
The main requirement for Flux is a Kubernetes cluster running version 1.19 or newer with the cluster-admin role.
Users can use the Flux CLI tool to install and configure Flux on the Kubernetes cluster. The CLI tool is available for all major operating systems as a standalone package or via package managers like Chocolatey.
Then users can bootstrap Flux on a cluster using the flux bootstrap command and configure it to manage itself via a Git repository. All the required Flux components are publicly available via Docker Hub and GitHub Container Registry, and the bootstrap command will automatically update existing components or install new ones. Flux also integrates with the IaC tool Terraform via the Flux Terraform provider.
Limitations of Flux
- Can only deal with a single repository at a time.
- Only YAML files are supported for configurations.
- By default, Flux ignores directories that look like Helm charts, i.e., those containing Chart.yaml and values.yaml files.
- As Flux itself runs as a container, there might be DNS resolution issues if it is not configured properly.
ArgoCD
ArgoCD is a declarative, Kubernetes-native GitOps continuous deployment tool. It aims to provide a fast and reliable Kubernetes delivery experience while remaining powerful yet simple to use. Configurations and environments in ArgoCD application definitions are declaratively defined and version-controlled, providing an automated, auditable platform for cluster management.
ArgoCD relies on the GitOps pattern to facilitate this continuous deployment. It is implemented as a Kubernetes controller, continuously monitoring state changes and comparing the live state against the desired state within a Git repository. Any deviations from the desired state are marked as out of sync and reported to the users. Then ArgoCD provides the necessary tools to automatically or manually reconfigure the K8s cluster and applications to match the desired state.
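The unit of deployment in ArgoCD is the Application custom resource. A minimal sketch with a placeholder repository and path; the automated sync policy tells the controller to reconcile out-of-sync state on its own:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-config  # placeholder repository
    targetRevision: main
    path: deploy                                    # placeholder path to the manifests
  destination:
    server: https://kubernetes.default.svc          # the cluster ArgoCD runs in
    namespace: default
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual changes that drift from Git
```

Without the `automated` block, ArgoCD only reports drift and waits for a manual sync, matching the manual reconciliation option described above.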
The core components of the ArgoCD architecture are:
- API Server
The API server exposes the API consumed by the Web UI, the CLI, and CI/CD systems. It provides functionality such as application management and monitoring, access control enforcement, repository management, and application operations like rollbacks.
- Repository Server
The repository server is an internal service that keeps a local cache of the Git repository holding the Kubernetes manifests. It is responsible for generating and returning the required manifests when requested by the system.
- Application Controller
The application controller is a Kubernetes controller responsible for continuously monitoring state changes and taking corrective actions to match the desired state. It also handles user-defined hooks for lifecycle events such as PreSync and Sync.
Core Features of ArgoCD
- Support for multiple config management and templating tools like Kustomize, Helm, Ksonnet, and Jsonnet
- Multi-Tenancy and Multi-Cluster support.
- SSO Integration (OAuth2, LDAP, SAML 2.0, GitHub, GitLab, etc…)
- RBAC for authorization
- Easy Rollback/Roll-anywhere for configurations
- Real-time view of application activity via the Web UI
- Audit trails for application events and API calls
The only requirements for ArgoCD are a kubeconfig file and having kubectl installed. Users can then install ArgoCD via the provided standardized installation manifests. ArgoCD comes in two installation variants: a full version with all the supported features and a stripped-down version without the UI, SSO, and multi-cluster features.
The recommended method is to create a separate namespace for ArgoCD and install it there using the installation manifest. ArgoCD also comes with a CLI tool that can be installed on macOS, Linux, and Windows to interact with ArgoCD directly. By default, the ArgoCD API server is not exposed outside the cluster, so it should be exposed via a load balancer service, Kubernetes ingress, or port forwarding.
Limitations of ArgoCD
- Running ArgoCD in a highly available manner is a relatively complex process.
- Performance can be affected when interacting with the source repository in the following scenarios:
- Multiple applications based on custom plugins
- Helm applications pointing to the same directory in a single Git repository
- Multiple Kustomize or Ksonnet applications within a single repository with parameter overrides
Spinnaker
Spinnaker is an open-source, multi-cloud continuous delivery platform. It combines powerful delivery pipeline management features with direct integrations to the major cloud providers to easily manage the application delivery process. Spinnaker can also facilitate GitOps, providing complete infrastructure management capabilities as part of Spinnaker delivery pipelines.
Spinnaker consists of multiple independent microservices, such as Deck (the browser-based UI), Orca (the orchestration engine), and Clouddriver (which manages infrastructure and deployments). It uses the Halyard CLI tool or the Kubernetes Operator to manage the services in the deployment. Spinnaker can be easily adapted to any delivery needs with its plugin framework and many built-in integrations.
Core Features of Spinnaker
- Direct CI integrations, from simple Git triggers to events from other CI tools such as Jenkins or Travis CI.
- Chaos Monkey Integration for application survivability testing.
- Role-based Access Control (OAuth, SAML, LDAP, etc…)
- Manual judgments to enable manual approval for deployments.
- VM bakery to create immutable images via Packer
- Support for Chef and Puppet templates
- Native support for multi-cloud environments
- Restricted execution windows for deployment planning
The primary requirement for Spinnaker is Halyard. This CLI tool is used to manage the Spinnaker deployment throughout its lifecycle. Halyard powers the installation, configuration, and update of Spinnaker in production deployments. While it is possible to install Spinnaker without Halyard, it is not recommended and may lead to unforeseen issues.
Halyard can be installed as a standalone package on Linux (Debian/Ubuntu) and macOS or run as a Docker container. You can then configure the targeted cloud provider using Halyard. There are three options for deploying the Spinnaker environment: a distributed installation on Kubernetes, a local installation using the Debian package, or a local git installation from GitHub. The recommended option is a Kubernetes cluster, which runs all Spinnaker components as individual microservices. The next step is to configure an external storage service, which is required to provide persistent storage; the available external storage services depend on the selected cloud provider.
Once all the requirements are fulfilled and configured, users can pick a version via Halyard and deploy Spinnaker by running the “hal deploy apply” command and afterward connect to the Web UI via the “hal deploy connect” command.
Limitations of Spinnaker
- Initial setup and configuration can be complex.
- Relatively larger learning curve compared to other delivery tools.
- Management strategy and available services are dependent on the cloud provider.
Jenkins X
JenkinsX is the cloud-native derivative of the popular Jenkins CI/CD tool. The primary difference between the two is that JenkinsX is built from the ground up to support cloud-native applications on Kubernetes. Furthermore, JenkinsX brings together services like storage buckets, secret managers, container registries, and Kubernetes itself to provide a simple and effective delivery experience.
JenkinsX implements GitOps workflows as it is based on Git. All configurations and application deployments are managed via a cluster Git repository that contains the desired state of the cluster. A Kubernetes operator running within the cluster queries the Git repository for changes and applies the approved ones. JenkinsX comes with Tekton support for creating cloud-native declarative pipelines.
JenkinsX is also considered more of an opinionated solution, as it makes certain assumptions about how things should be done. This is advantageous in that it enables getting up and running quickly while reducing the learning curve. However, modifying JenkinsX can be an arduous and time-consuming process when a custom implementation is required.
Core Features of JenkinsX
- Offers a complete toolset to facilitate automated cloud-native application delivery.
- Support for environmental promotion via GitOps.
- Automated preview environment creation for each pull request.
- Support for ChatOps
- Multi-Cluster Support
- Direct integrations with Kubernetes, Tekton, Kuberhealthy, etc.
- Addons Library to extend JenkinsX functionality.
Installing JenkinsX can be a relatively involved process, and the exact steps vary depending on the service provider. JenkinsX recommends the IaC tool Terraform for provisioning resources in cloud environments, and it uses the jx CLI tool to install, create, and update JenkinsX deployments.
When installing JenkinsX in an AWS environment, the recommended method is to use the Terraform configuration provided by JenkinsX to provision a Kubernetes environment with JenkinsX pre-installed. Set up the necessary Git repositories for infrastructure and the secrets backend, then install Terraform, the jx CLI, and the AWS CLI on the desired platform. Finally, create a Git token for communication, provide all the necessary values to the Terraform script, and run Terraform to provision the required resources.
Limitations of JenkinsX
- Setup and configuration can be a complex and time-consuming process.
- Limited to Git Projects
- The opinionated nature of the solution might not suit all use cases.
GitOps is changing the way we provision and manage application infrastructure. With infrastructure management now a key part of the delivery process, rapid infrastructure changes are quickly becoming the norm. GitOps-based delivery tools are well suited to safely and efficiently carrying out infrastructure modifications and configuration changes throughout the application lifecycle.