
Cloud Native CICD or CICD Natively in Cloud

Kubernetes (aka K8s) is a popular term these days, and the technology is reshaping how we develop, build and deliver applications, for obvious good reasons. All the major cloud vendors have a K8s offering, and it is also available on on-prem private clouds/data centres (read my post about this). Thus, I can claim that K8s as a platform is cloud native, and that applications running on K8s are also cloud native (provided they meet some other criteria).

In this post I will describe my take on the topic Cloud Native CICD.


Cloud Native CI/CD:

The term "Cloud Native CI/CD" can be interpreted in two ways:

  1. Produce a cloud native deployable (through CI) and deploy/deliver it on a cloud native platform (eg: K8s) (through CD) ☛ There are already good articles on this topic on the internet, so I will skip it in this post.
  2. Run the CI and CD processes themselves natively in K8s (hence, in the cloud) 🫵 This is what's trending in the community and worth your minutes.


CI, CD and Pipelines:

CI 🏗and CD 📀usually involve executing a series of tasks (eg: fetch code, run tests, perform scans, build images, generate/hydrate deployment definitions etc), sequentially or in parallel, in order to produce a runnable artifact of the application for a target platform (eg: K8s) and deploy it onto target environments on that platform with as much automation as possible.

Pipeline🚿is the definition of the "what and how" of the CI and CD tasks to be performed for one or many applications. It can be written as code or created via a GUI. The CI/CD automation tool dictates the language and format of the pipeline.

Put in an oversimplified way: CI and CD processes are workflows (often broken into two separate sections and automated), and pipelines are the definitions of that end-to-end workflow.
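To make the pipeline-as-code idea concrete, here is a hedged sketch of what such a definition can look like, using Tekton's Pipeline format as one example; the task names (`run-tests`, `build-image`) and the parameter value are hypothetical, though `git-clone` exists as a Tekton catalog task:

```yaml
# Illustrative pipeline-as-code sketch (Tekton Pipeline format).
# Task names other than git-clone are hypothetical examples.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-ci-pipeline
spec:
  params:
    - name: git-url
      type: string
  tasks:
    - name: fetch-source            # get code
      taskRef:
        name: git-clone
      params:
        - name: url
          value: $(params.git-url)
    - name: run-tests               # perform tests
      taskRef:
        name: run-tests
      runAfter: [fetch-source]
    - name: build-image             # build image
      taskRef:
        name: build-image
      runAfter: [run-tests]
```

The `runAfter` fields express the sequential/parallel ordering described above; tasks without a mutual `runAfter` relationship may run in parallel.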

Traditionally👨‍🦳, the software used to automate CI and CD tasks followed a server-agent model, where the server provides features such as a GUI, dashboards and pipeline management, and is responsible for scheduling/orchestrating tasks on the agents (typically VMs). With the emergence of clouds and containers, the server component became a managed offering, such as Azure DevOps Pipelines, or a Jenkins server in a Docker container. In this post I will refer to these as 👵 "traditional CI/CD software".

However, in the K8s world👨‍🦱, the server-agent model is somewhat redundant because K8s itself is a scheduler/orchestrator (and a pretty good one). Hence, modern cloud native CI/CD software leverages K8s to provide a more flexible, collaborative and agile way of implementing CI/CD processes, better suited to container-based 👩cloud native applications.

Challenges with traditional CI/CD tools:



  • CI-Challenge #1 - The overhead of too many pipelines 🤯⇒👎 In traditional CI automation software, pipelines are defined as a sequence (eg: step 1, step 2 etc). For one or two applications that is fine. But in the world of micro-service based applications (meaning a lot of apps), with different types of applications, one type of pipeline is unlikely to work for all. To work around this we create several pipeline patterns. Templating can help, but then we have another issue in hand: morphing templates sprawling out of control. This is generally an uphill battle for DevOps engineers using traditional CI automation tools (eg: Jenkins, Azure Pipelines etc).
  • CI-Challenge #2 - Inefficient resource utilisation resulting in increasing cost 💲⇒😟 Servers are bulky and consume resources to run and scale. When loaded with many CI processes executing frequently (which has become the norm for modern applications, eg: micro-services), the red light starts to blink. Technically, we can shift this issue to the cloud, which has "unlimited resources" (or get a cloud managed CI tool), but that will increase cost. Thus scaling becomes an expensive operation, both technically and financially.
  • CI-Challenge #3 - Lack of modularity 🧩⇒👎 With most traditional CI automation tools, the tasks for the CI process are defined per pipeline, which means that when more than one pipeline has similar or identical tasks (eg: scan image, build code etc), they get defined, unnecessarily, several times, once per pipeline. Although copy-pasting pipeline definitions or templating addresses some repeatability, it does not provide modularity. A side effect is that one simple change in one task (eg: use Snyk instead of Trivy for scanning) becomes a huge operational challenge to apply across all the pipelines.
  • CI-Challenge #4 - Lack of reusability ♻⇒👎 Most traditional CI automation tools (eg: Jenkins, Azure DevOps etc) have very nice GUIs and formats for writing pipeline code, but they also create a 1:1 mapping between an application source (code or image) and a pipeline. As a result an organisation, unnecessarily, ends up with hundreds of same/similar pipelines sprawling out of control in the server. This creates maintenance, governance and control headaches for DevOps or Platform Ops teams. And no, templating does not solve this issue (it is yet another band-aid).



  • CD-Challenge #1 - Not K8s native ☸⇒❌ Traditional CD automation tools treat K8s like any other VM based construct or server, and require tools like Docker, kubectl, auth clients etc to be installed and set up. This is unnecessary operational overhead (secret rotation, secret porting, cloud credentials etc).
  • CD-Challenge #2 - Security leaks 🛡⇒⚠ Traditional CD automation tools also require access to K8s through secrets, service accounts and RBAC, as well as cloud credentials. These secrets and credentials need to be stored in the server, which leads to security leaks and operational overheads around secret rotation, secret porting, cloud credentials etc.
  • CD-Challenge #3 - Hindered scaling across clouds 📶⇒🚨 As described in CD-Challenge #1 and #2, the unnecessary operational overheads caused by most traditional CD tools are a real challenge for DevOps and Ops engineers, even with a few K8s clusters and one or two applications. Imagine how it leads to failures and unnecessary technical debt when you scale the numbers beyond 50 (K8s clusters or applications to be deployed to K8s), which is the norm for modern application architecture, or when adopting a different cloud.
  • CD-Challenge #4 - Lack of visibility of deployment status 🙈🤔⇒👎 In traditional CD automation tools, deployment is usually a "kubectl apply" execution in one of the CD steps, which does not provide any visibility into the deployment status (eg: whether it succeeded or failed, whether the container started, what events K8s fired etc). People have built weird, unnecessary verification processes as post-deployment workarounds. This is an anti-pattern and leads to overheads and unnecessary technical debt.
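As a sketch of that visibility gap (the deployment name `my-app` is a hypothetical example), a bare apply returns as soon as the API server accepts the manifests, while the follow-up commands that traditional CD steps bolt on are what actually reveal the rollout state:

```shell
# Returns immediately after the API server accepts the manifest;
# says nothing about whether the pods actually came up.
kubectl apply -f deployment.yaml

# Typical post-deployment workaround: explicitly wait and inspect.
kubectl rollout status deployment/my-app --timeout=120s
kubectl get events --field-selector involvedObject.name=my-app
```

K8s native CD tools watch these resources continuously instead of requiring such bolt-on verification steps.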



Core attributes of K8s native CI/CD:






  1. It is serverless 👍⇒⇒ This significantly reduces the operational overhead of managing and maintaining servers, agents and their connectivity. It also means increased resource efficiency out of the box. Traditional CI/CD tools cannot offer this.
  2. Runs in K8s 👍⇒⇒ This increases portability and scalability by design. It also reduces several operational overheads and sources of chaos, such as spreading credentials and secrets across multiple systems and maintaining additional access controls, which traditional CI/CD software most often requires and which can also lead to security issues.
  3. Promotes a framework for collaboration 👍⇒⇒ DevOps is supposed to be a collaborative framework for the organisation. Many traditional CI/CD tools arguably cannot support that. K8s native CI/CD software (the popular offerings available in the market today) provides a better collaborative framework and ownership for different roles. This leads to a frictionless path to production for applications. More on this later in this post.
  4. Modular and re-usable by design 👍⇒⇒ Modern K8s native CI/CD tools are modular and re-usable by design and are built to support CI/CD for applications at scale (eg: micro-services). More on this later in this post.



The choices available for K8s native CI/CD:

I have come across a handful of K8s native CI/CD automation tools that I consider a good fit in the cloud native world and that remove the challenges with traditional CI/CD tools mentioned above.

Below are the ones:

  1. Tekton:
    Url: https://tekton.dev/
    Pros: Open source project. Advertised as a CI/CD framework but, in my opinion, any automation can be implemented using Tekton; CI/CD processes are just part of it. Offers a flexible framework for DevOps collaboration. OpenShift has adopted it for its Pipelines feature (enterprise backed).
    Cons: By itself (eg: OpenShift Pipelines) it is a lower order K8s native automation tool. Enterprise grade deployment and support is only available when using an OpenShift cluster.
  2. ArgoCD:
    Url: https://argo-cd.readthedocs.io/en/stable/
    Pros: Open source project that has gained a lot of popularity due to its CD capability and GUI. Also has CI capabilities through Argo Workflows. Very flexible and scalable.
    Cons: Completely open source; no enterprise backing available.

  3. Cartographer.sh:
    Url: https://cartographer.sh/
    Pros: Open source project. A higher order CI/CD framework that leverages K8s native tools such as Tekton, Flux, Buildpacks, Kaniko etc. Very flexible, increases re-usability significantly, and offers a flexible yet controlled framework for DevOps collaboration. VMware Tanzu Application Platform has adopted it as its supply chain feature (thus enterprise backed).
    Cons: Complex structure. No official UI like the Tekton Dashboard or ArgoCD console (although the enterprise version available via VMware Tanzu Application Platform has a GUI for it). Lack of a catalogue/library, and dependencies are not packaged with the installation in the open source version.
  4. JenkinsX: (https://jenkins-x.io/) I do not know enough about it to list its pros and cons.
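For flavour, here is a hedged sketch of how ArgoCD expresses CD as a K8s resource rather than a server-side job; the repo URL, paths and app name below are placeholders:

```yaml
# Minimal ArgoCD Application sketch; repoURL, path and names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/my-app.git   # placeholder git repo
    targetRevision: main
    path: k8s/                                    # directory of manifests
  destination:
    server: https://kubernetes.default.svc        # deploy to the same cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from git
      selfHeal: true   # revert drift in the cluster
```

Because the desired state lives in the cluster as a custom resource, the controller continuously reports sync and health status, which is exactly the visibility that CD-Challenge #4 above says traditional tools lack.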



There are already several good articles on Tekton and ArgoCD. Hence, I will skip my take on those here. 

I want to bring your attention to Cartographer. 🤗Although it is the newest of the tools above, it has a lot of potential. 💚


Why I like Cartographer.sh: 





  • Modular 🥇⇒👍 This is possibly the most modular CI/CD framework I have come across so far. Each task is defined as a template and can be used in different pipelines; that is standard and expected. But it goes one step further: it also provides the ability to separate a task definition from its implementation (weirdly interesting, right?). For example, a task definition could be "Run Test", defining how to execute the test run, what to output etc. A task implementation, on the other hand, is "Run Maven Test", "Run Node JS Test" etc, defining what libraries to use, what commands to execute to run the test cases and so on. The significant benefit here is that the DevOps role can define only one test definition for pipelines, whereas developers can plug their own testing implementation into that test definition based on their workload. Of course, the DevOps role can also define the task definition and implementation in one template, like Tekton or ArgoCD.
  • Reusable ♻⇒👍 It is obvious that the modular nature of Cartographer (eg: separate task definition templates, task implementations etc) increases its re-usability. On top of that, it provides the ability to dynamically select a pipeline (aka SupplyChain, as opposed to the tight coupling of one pipeline per git repository), making the pipeline itself reusable. Because of the definition/implementation modularity described above, a single SupplyChain can be re-used for multiple applications. In fact, only three to four of these SupplyChains/pipelines may cover 90% of an organisation's application portfolio. This reduces pipeline sprawl and management chaos significantly.
  • Higher order framework 🏆⇒☝ I consider Cartographer a higher order CI/CD framework that leverages other cloud native tools such as Flux, Buildpacks, Kaniko and Tekton (yes, even Tekton) to define CI and CD processes for applications' path to production. This is great because Cartographer should be able to tackle edge cases without morphing the DevOps processes. It also means that it can be integrated with other tools that are K8s native or external (eg: Prisma Cloud, Snyk, Splunk etc), or even with other CI/CD tools (eg: Azure DevOps, Jenkins etc). Many of these integrations are built into the enterprise version of Cartographer that ships with VMware Tanzu Application Platform (aka TAP).
  • Composable 💧/❄/🫗⇒👍 Due to the nature of Cartographer's components (eg: templates, blueprints) and, on top of that, how they can be dynamically selected (based on selection criteria), I consider Cartographer a highly composable CI/CD automation tool. For example, a test definition can be used in many different pipelines/blueprints, and the test implementation can be dynamically selected based on the type and language of the application passing through the pipeline/blueprint.
  • Flexible 🧘⇒👍 Cartographer provides automation and a framework for implementing CI/CD processes (with great modularity and re-usability by design). It does not impose opinions (although you will find many best practices in blogs like this one), meaning it does not require workarounds for edge cases. Also, since it is higher order and leverages Tekton tasks, it is flexible enough to implement and integrate custom scenarios as well.
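To make the definition/implementation split and the dynamic selection concrete, here is a hedged sketch of a Cartographer ClusterSupplyChain; the selector label and the referenced template names are illustrative, while the resource kinds (`ClusterSupplyChain`, `ClusterSourceTemplate`, `ClusterImageTemplate`) are Cartographer's own:

```yaml
# Illustrative Cartographer supply chain; labels and template names are examples.
apiVersion: carto.run/v1alpha1
kind: ClusterSupplyChain
metadata:
  name: source-test-to-image
spec:
  selector:
    apps.example.com/workload-type: web   # any Workload with this label reuses this chain
  resources:
    - name: source-provider               # definition: "get source"
      templateRef:
        kind: ClusterSourceTemplate
        name: git-repository              # implementation: eg a Flux GitRepository
    - name: tester                        # definition: "run tests"
      templateRef:
        kind: ClusterSourceTemplate
        name: testing-pipeline            # implementation: eg a Tekton pipeline
      sources:
        - resource: source-provider
          name: source
    - name: image-builder                 # definition: "build image"
      templateRef:
        kind: ClusterImageTemplate
        name: kpack-image                 # implementation: eg Buildpacks via kpack
      sources:
        - resource: tester
          name: source
```

Each `templateRef` is the "task definition" slot, and swapping its target template (eg: Maven tests vs Node JS tests) changes the implementation without touching the chain, which is what makes one chain reusable across many applications.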

Cartographer's drawbacks and solutions available to overcome those:

Despite the attributes and benefits I like about Cartographer (as described above), it has some major drawbacks. However, the enterprise edition (Tanzu Application Platform) and free add-ons from the community remove these cons. I describe them below:

  • It is complex 🤯: The task definitions and implementations (aka Cartographer templates) and the CI/CD pipelines (aka Cartographer blueprints) are written as complex, structured YAMLs. On top of that, the lack of a GUI to easily create these YAMLs makes it super difficult to understand and can deter adoption.

    Solution 😇👍: There is an open source GUI for Cartographer called Mercator (https://mercatorthecartographerui.github.io/), which lets users create those complex YAMLs through an easy and intuitive interface. It takes the complexity out of learning through a fuzzy suggestion engine and fast-tracks the adoption path. Mercator is free, online and open source.

  • Lack of a catalogue/library of templates 😒: The open source GitHub repository for Cartographer has some basic sample code for templates and blueprints, but unlike Tekton or Argo it lacks a hub or catalogue (a library of templates and blueprints) from which I can pick the ones I need and get started quickly. This is a real bummer.

    Solution 😇👍: Both Mercator and Tanzu Application Platform provide a library of Cartographer templates and blueprints.

    • Mercator addresses this problem through its library, which is easily customisable from within the GUI itself through a form-like interface with helper text and intellisense along the way. This also simplifies learning and removes an obstacle to adoption.
    • The enterprise version (shipping with Tanzu Application Platform) installs with many pre-built templates and blueprints, which are sufficient for a quick start. Combining these with Mercator for visualisation makes them super easy to understand and adopt.

  • Lack of a dashboard 😒: As it stands today, open source Cartographer is all CLI based. Although it emits pretty good logs and events, they are all verbose output in a shell window. Unlike Tekton and ArgoCD, it lacks a dashboard where things are visually and nicely represented.

    Solution 👍: Tanzu Application Platform has a nice Cartographer dashboard (along with some other cool features).

So, to summarise: open source Cartographer does have a few drawbacks. However, the enterprise version in Tanzu Application Platform and the open source GUI, Mercator, remove these drawbacks and add a lot more on top.

Mercator - The Cartographer.sh UI:

Cartographer truly has a lot of potential and may very well become the gold standard for cloud native CI/CD. But the lack of a better UI was hindering it from realising that potential. As I was facing the same challenges myself, I thought about how to simplify it. As a result, Mercator was created and open sourced for contribution.
  • Free and open 🆓: It is accessible to anyone via https://mercatorthecartographerui.github.io/ and open for community contributions to its library (just submit a git pull request).
  • Flexible 🧘: Anyone can download it and run it privately. The open version is conveniently hosted at https://mercatorthecartographerui.github.io/.
  • Simple 🤗: It is a client side UI, meaning there is no need for a server or compute power to run it. It only creates/edits YAMLs for Cartographer, and the user is responsible for applying them. So, if a mistake is made in the UI, no worries: it is completely isolated from your Cartographer instance. Since Mercator is just an editor, it does not require login or store any information. It is that simple.
  • Intuitive 👏: The UI is super intuitive. It represents the Cartographer components in a nice looking, visual way that is easy on the eye and uses familiar terminology. With its intellisense based helper texts and suggestion boxes, it shows information about each YAML field and what to do next. This way, learning Cartographer is gamified and becomes easy, promoting faster adoption and saving a lot of time creating YAMLs from scratch.


Github: @mercatorthecartographerui
Video: https://www.youtube.com/watch?v=u93R0s8DLJE

Screen shots:

Tanzu Application Platform:

TAP is a licensed product from VMware that addresses modern/cloud native application development and delivery. It uses Cartographer to provide the CI/CD feature of the engineering foundation platform, with pre-built templates and blueprints for a quick start. But it offers a lot more than just CI/CD. Cartographer is one piece of the puzzle (the CI/CD piece) for building and delivering cloud native applications, but there are several other components, such as application architecture, external resources and connectivity (eg: databases, queues etc), self service, insights, API management etc. Tanzu Application Platform brings them together as a platform and provides a mechanism for collaborating on engineering best practices. I will write another post describing the what and how in future. For now I will leave you with this claim: "If you are looking for a foundational platform for engineering cloud native applications, you may want to consider Tanzu Application Platform."


Product page: https://tanzu.vmware.com/application-platform


Conclusion:

I think the best way to conclude this is to leave you with the questions and answers below.
  • Should I switch to K8s native CI/CD practices and automation tools?
    Yes, provided that you have more cloud native applications than legacy web server and VM based applications. If you are on the path to migrating legacy applications to modern cloud native applications, then now is the best time to start using K8s native CI/CD automation tools and phase out traditional CI/CD.
  • But I love my traditional CI/CD automation tools!
    Remain objective. This cannot be influenced by emotion, and the measurement is not how much effort has been put into operationalising them in recent years. Calculate the running costs, operational costs and risks. Understand the TCO (total cost of ownership) across teams and make an informed decision about phasing out the traditional CI/CD automation tool.
  • Is K8s native CI/CD just yet another automation tool?
    Yes, it is. Just like K8s was a couple of years ago, and K8s is now a mainstream platform (if not THE mainstream platform) for running applications. K8s native CI/CD is in the process of becoming the way to build and deploy containerised applications. The traditional CI/CD tools have served us well, but it is time to move on.
  • I do not have containerised applications; do I still need cloud native CI/CD?
    Although cloud native CI/CD is perfectly capable of delivering to traditional web servers in VMs, it will not be very effective there. If you do not have containerised applications yet, there is no need to adopt cloud native CI/CD right away. Look into it when the time is right.
  • I have a mix of legacy and cloud native applications.
    No worries!! Most organisations are in the same situation. My suggestion is to adopt cloud native CI/CD for the cloud native applications and keep the traditional CI/CD tools for the legacy applications (if it is working... let it work). If the TCO of keeping the traditional CI/CD tool is an overhead, then calculate and consider delivering the legacy applications using cloud native CI/CD as well.
  • Which K8s native CI/CD automation tool should I use?
    All three major K8s native CI/CD automation tools work on similar principles, so knowing one is like knowing them all (only the YAML will vary from tool to tool).
    It is your own choice. My opinion is:

    • If using OpenShift exclusively, choose OpenShift Pipelines; otherwise consider the points below.
    • If using it at a smaller scale, consider ArgoCD.
    • If using it at enterprise scale, use Tanzu Application Platform (and Cartographer will come with it).
    • If evaluating or just testing things out, please use Cartographer and Mercator.

Thank You 😁🫡
