Interning at Red Hat — The Challenge

Satyam Bhardwaj
Feb 27, 2022 · 7 min read

“Let’s give you the need to innovate.”

The amazing open-source culture of Red Hat vibrates through every one of its associates. At Red Hat, I met some of the best people I know, and what they had in common was an intrinsic ability to see others' potential and push them to achieve more, enquire more, and learn more. This is one of the reasons you'll find people at Red Hat self-motivated and always ready to lend a helping hand to their colleagues.

Our assignment primarily required using the OpenShift Container Platform (formerly OpenShift Enterprise), which, in short, is an enterprise distribution of Kubernetes, the subject of interest for us in this post.

Check out the previous chapter — The Beginning

An Orchestra Demands the Right Orchestrator — Kubernetes (K8s)

In mid-2014, Google introduced Kubernetes as an open-source descendant of its internal Borg system, and Microsoft, Red Hat, IBM, and Docker soon joined the Kubernetes community.

K8s is a production-grade, open-source container orchestration tool that helps manage containerized (dockerized) applications across multiple deployment environments, whether on-premise, in the cloud, or on virtual machines.

What Features Does K8s Offer?

  • Assures high availability with zero downtime
  • Highly performant and scalable
  • Reliable infrastructure to support data recovery with ease

K8s Architecture

  • A K8s cluster has at least one master node and one or more worker nodes, and each node runs a kubelet process.
    The master node, also known as the control plane, is responsible for managing the worker nodes efficiently.

Master nodes interact with the worker nodes to

  • Schedule the pods
  • Monitor the worker nodes/Pods
  • Start/restart the pods
  • Manage the new worker nodes joining the cluster
  • kubelet is the K8s node agent; it runs on every node and makes it possible for the cluster to communicate with that node and execute tasks (such as running pods) on it.
  • Pod — A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.
    Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
  • Workload — A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on Kubernetes you run it inside a set of pods.
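To make the Pod definition above concrete, here is a minimal sketch in Go using the client-go API types. It only builds a single-container Pod object and prints it; the names and image are placeholders I chose for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The smallest deployable unit in K8s: a Pod wrapping one (or more) containers.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "demo-pod", Labels: map[string]string{"app": "demo"}},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "web", Image: "nginx:1.21"},
			},
		},
	}

	// Print the object as JSON, roughly what gets sent to the Kube API server.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```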

The Elephant in the Room

Everything we have been through so far was part of a warm-up routine. The thing of utmost importance has always been CRDs, or Custom Resource Definitions. K8s empowers users in many ways, and on top of that it gives them the superpower of creating their own CRDs according to their needs and at their convenience.

Custom Resource Definition and Custom Controllers

Let’s start with understanding K8s resources.

A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind; for example, the built-in pods resource contains a collection of Pod objects.

- Almost everything we create in K8s is a resource, e.g. a pod, a service, a secret, a PVC, etc. But what if developers need a custom object or resource based on their specific requirements? This is where a Custom Resource comes into the picture.
- A Custom Resource Definition (CRD) is what you use to define a Custom Resource. This is a powerful way to extend Kubernetes capabilities beyond the default installation.

  • On their own, custom resources let you store and retrieve structured data. When you combine a custom resource with a custom controller, custom resources provide a true declarative API.
  • The Kubernetes declarative API enforces a separation of responsibilities. You declare the desired state of your resource. The Kubernetes controller keeps the current state of Kubernetes objects in sync with your declared desired state. This is in contrast to an imperative API, where you instruct a server what to do.
  • One can deploy and update a custom controller on a running cluster, independently of the cluster’s lifecycle. Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources. The Operator pattern combines custom resources and custom controllers. You can use custom controllers to encode domain knowledge for specific applications into an extension of the Kubernetes API.
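To illustrate the declarative model described above, here is a deliberately simplified reconcile-loop sketch in Go. The Widget resource and its fields are hypothetical placeholders, not the actual MKS types, and a real controller would be driven by watch/informer events rather than a bare loop.

```go
package main

import (
	"log"
	"time"
)

// Hypothetical custom resource: the user only declares desired state in Spec.
type WidgetSpec struct {
	Replicas int
}

type WidgetStatus struct {
	ReadyReplicas int
}

type Widget struct {
	Name   string
	Spec   WidgetSpec   // desired state, declared by the user
	Status WidgetStatus // current state, observed by the controller
}

// reconcile nudges the current state toward the declared desired state.
func reconcile(w *Widget) {
	switch {
	case w.Status.ReadyReplicas < w.Spec.Replicas:
		log.Printf("%s: scaling up %d -> %d", w.Name, w.Status.ReadyReplicas, w.Spec.Replicas)
		w.Status.ReadyReplicas++ // stand-in for "create one more pod"
	case w.Status.ReadyReplicas > w.Spec.Replicas:
		log.Printf("%s: scaling down %d -> %d", w.Name, w.Status.ReadyReplicas, w.Spec.Replicas)
		w.Status.ReadyReplicas-- // stand-in for "delete one pod"
	default:
		log.Printf("%s: in sync", w.Name)
	}
}

func main() {
	w := &Widget{Name: "example", Spec: WidgetSpec{Replicas: 3}}
	for i := 0; i < 5; i++ {
		reconcile(w)
		time.Sleep(100 * time.Millisecond)
	}
}
```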

The Problem

We're ready to understand the problem now. The three of us, as a team, were asked to design and implement an application called Minimal Tekton Server (MKS).

* The core problem was to create Custom Resource Definitions and custom controllers: the CRDs expose a few fields for the user to create new resources, the controllers watch those resources and invoke Tekton APIs to create the corresponding Tekton resources in the cluster, and certain data points about these resources are stored in a database.
* The subproblems included creating a CLI to interact with the server, a UI presenting a minimal dashboard with certain data points about the resources, writing unit tests and E2E tests, deployments, and CI/CD.

How to put together the pieces of a puzzle

I believe the hardest part of solving a puzzle is coming up with the right foundation on which to place the rest of the pieces: maybe not a perfect foundation, but one that requires minimal changes later.

The steps we took to reach the right solution:

1. Create the required CRDs using Go's struct construct.
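As a sketch of what step 1 looks like: a custom resource is declared as plain Go structs with the standard TypeMeta/ObjectMeta embedded, plus code-generation markers. The MksResource name and its spec fields here are illustrative placeholders, not the exact types we used.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// MksResource is an illustrative custom resource served by the MKS server.
type MksResource struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MksResourceSpec   `json:"spec,omitempty"`
	Status MksResourceStatus `json:"status,omitempty"`
}

// MksResourceSpec holds the few fields exposed to the user (placeholder fields).
type MksResourceSpec struct {
	PipelineName string            `json:"pipelineName"`
	Params       map[string]string `json:"params,omitempty"`
}

// MksResourceStatus is filled in by the controller.
type MksResourceStatus struct {
	Phase string `json:"phase,omitempty"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// MksResourceList is a list of MksResource objects.
type MksResourceList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []MksResource `json:"items"`
}
```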

2. Generate the deepcopy functions and clientsets using the Knative codegen generator script.

3. Write the controller (the not-so-easy part). The Kubernetes community offers client-go, the Go client for talking to a K8s cluster.
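A minimal sketch of the controller's starting point with client-go: obtain a rest.Config (in-cluster when deployed, local kubeconfig during development) and build a clientset from it. The fallback logic and kubeconfig path are assumptions for illustration.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Prefer in-cluster config (when the controller runs as a pod),
	// fall back to the local kubeconfig for development.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		home, _ := os.UserHomeDir()
		cfg, err = clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
	}

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Quick sanity check: list pods in the default namespace.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d pods\n", len(pods.Items))
}
```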

4. Write the functions that watch for resources being created, deleted, or updated in the cluster. The K8s APIs offer two constructs for this: the first is Watchers, and the second is Informers, which are built on top of Watchers but add caching and better error handling, and are therefore the recommended way of watching resources.
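A sketch of the informer-based approach, here watching Pods via a shared informer factory. Our actual controller watched the MKS custom resources through a generated informer, but the pattern is the same; assume this runs inside the cluster.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the controller runs inside the cluster
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A shared informer factory caches objects and resyncs every 30 seconds.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("added:", obj.(*corev1.Pod).Name)
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			fmt.Println("updated:", newObj.(*corev1.Pod).Name)
		},
		DeleteFunc: func(obj interface{}) {
			if p, ok := obj.(*corev1.Pod); ok {
				fmt.Println("deleted:", p.Name)
			}
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	cache.WaitForCacheSync(stop, podInformer.HasSynced)

	select {} // block forever; events are handled in the background
}
```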

5. Invoke the Tekton API to serve a request; see tektoncd-CLI Actions.
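For a sense of what invoking Tekton from Go can look like, here is a hedged sketch using the generated clientset from tektoncd/pipeline (v1beta1). The pipeline name and namespace are placeholders, and exact client methods may differ between releases; this is not necessarily how MKS does it.

```go
package main

import (
	"context"
	"fmt"

	"github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1"
	tektonclient "github.com/tektoncd/pipeline/pkg/client/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}

	// Generated Tekton clientset, analogous to client-go's kubernetes.Clientset.
	tc, err := tektonclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Create a PipelineRun referencing an existing Pipeline (name is a placeholder).
	pr := &v1beta1.PipelineRun{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "mks-run-"},
		Spec: v1beta1.PipelineRunSpec{
			PipelineRef: &v1beta1.PipelineRef{Name: "example-pipeline"},
		},
	}

	created, err := tc.TektonV1beta1().PipelineRuns("default").Create(context.TODO(), pr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created PipelineRun:", created.Name)
}
```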

6. Add a database to store the data points about the MKS resources in the cluster. We used Redis, chosen purely out of curiosity and for its simplicity.
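A sketch of writing data points to Redis from Go with the go-redis client. The key layout and the "redis:6379" address (an in-cluster Service named redis) are assumptions for illustration, not necessarily what MKS uses.

```go
package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()

	// "redis:6379" assumes a ClusterIP Service named "redis" in the same namespace.
	rdb := redis.NewClient(&redis.Options{Addr: "redis:6379"})

	// Store a few data points about an MKS resource as a hash.
	key := "mks:resource:example"
	if err := rdb.HSet(ctx, key,
		"pipeline", "example-pipeline",
		"phase", "Running",
		"createdAt", "2022-02-27T10:00:00Z",
	).Err(); err != nil {
		panic(err)
	}

	// Read them back.
	fields, err := rdb.HGetAll(ctx, key).Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(fields)
}
```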

7. Use the Cobra package to implement a CLI that directly calls the APIs exposed by the MKS-server CRDs to create, update, list, get, and delete MKS resources.
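A minimal Cobra sketch: a root command with a single list subcommand. The command names and flags are illustrative; the real CLI calls the APIs exposed by the CRDs rather than printing a stub.

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	rootCmd := &cobra.Command{
		Use:   "mks",
		Short: "CLI for interacting with the MKS server (illustrative sketch)",
	}

	var namespace string

	listCmd := &cobra.Command{
		Use:   "list",
		Short: "List MKS resources in a namespace",
		RunE: func(cmd *cobra.Command, args []string) error {
			// In the real CLI this would call the API exposed by the MKS CRDs,
			// e.g. through a generated clientset; here we only print a stub.
			fmt.Printf("listing MKS resources in namespace %q...\n", namespace)
			return nil
		},
	}
	listCmd.Flags().StringVarP(&namespace, "namespace", "n", "default", "namespace to list from")

	rootCmd.AddCommand(listCmd)

	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```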

8. Build a simple UI dashboard using Go's html/template package.
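A sketch of a minimal dashboard handler with html/template and net/http. The Row fields and hard-coded data stand in for the data points MKS actually stores (which would come from Redis).

```go
package main

import (
	"html/template"
	"log"
	"net/http"
)

// Row is an illustrative view of the data points shown on the dashboard.
type Row struct {
	Name     string
	Pipeline string
	Phase    string
}

var page = template.Must(template.New("dashboard").Parse(`
<h1>MKS Dashboard</h1>
<table border="1">
  <tr><th>Name</th><th>Pipeline</th><th>Phase</th></tr>
  {{range .}}<tr><td>{{.Name}}</td><td>{{.Pipeline}}</td><td>{{.Phase}}</td></tr>{{end}}
</table>`))

func dashboard(w http.ResponseWriter, r *http.Request) {
	// In the real UI these rows would come from Redis; here they are hard-coded.
	rows := []Row{
		{Name: "example", Pipeline: "example-pipeline", Phase: "Running"},
	}
	if err := page.Execute(w, rows); err != nil {
		log.Println("render error:", err)
	}
}

func main() {
	http.HandleFunc("/", dashboard)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```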

9. Deploy the MKS-server, the Redis database, and the UI on the cluster.

  • Deploying the Redis database requires a PVC, so that the associated data is not tied to the lifecycle of a pod but persists in the cluster, a Deployment to manage replicas, and a Service to expose the IP; a ClusterIP Service is enough here.
  • Deploying the UI requires a Docker image built from the latest code, a Deployment that uses this image, and a Service exposed externally to the user via a LoadBalancer.
  • Deploying the MKS-server on the cluster is a little tricky compared to the other two, since we need to grant the custom controller certain privileges so it can reach the Kube API server and handle the resources. This is achieved using RBAC (Role-Based Access Control), which determines whether a user is allowed to perform a given action within a project. Hence, we have a Service Account with Cluster Role Bindings (sketched below). Finally, we build an image from the latest code and deploy it using a Deployment with the created Service Account, plus a Service.
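The RBAC objects from the last bullet, sketched with client-go types (in practice these are usually applied as manifests). The names, namespace, API group "mks.example.dev", and the policy rules are placeholders, not the exact setup we used.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// ServiceAccount the controller pod will run as (names/namespace are placeholders).
	sa := &corev1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{Name: "mks-controller", Namespace: "mks"}}

	// ClusterRole allowing the controller to manage its custom resources and Tekton resources.
	role := &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "mks-controller"},
		Rules: []rbacv1.PolicyRule{
			{APIGroups: []string{"mks.example.dev"}, Resources: []string{"*"}, Verbs: []string{"*"}},
			{APIGroups: []string{"tekton.dev"}, Resources: []string{"pipelineruns", "taskruns"}, Verbs: []string{"get", "list", "watch", "create"}},
		},
	}

	// Bind the ClusterRole to the ServiceAccount.
	binding := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "mks-controller"},
		Subjects:   []rbacv1.Subject{{Kind: "ServiceAccount", Name: "mks-controller", Namespace: "mks"}},
		RoleRef:    rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io", Kind: "ClusterRole", Name: "mks-controller"},
	}

	if _, err := cs.CoreV1().ServiceAccounts("mks").Create(ctx, sa, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if _, err := cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, binding, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```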

Check out the final application — MINITEKS

Life is not easy for any of us.

Just as we thought the problem was finally crackable, it turned out we were wrong.

There's no such thing as a defect-free system, and we all make mistakes, especially when developing a complex system. Testing lets us see what the software does and how well it does it, so that the business can measure the quality of the software before it goes live.

But how exactly do we test CRDs, controllers, and their functions? Life had thrown another challenge at us, but we were more determined than ever.

In the next post, let's talk about how exactly we wrote unit tests for them and, finally, how we implemented Tekton/OpenShift Pipelines CI/CD.

Tekton CI/CD is exciting; it has been my favorite, and a moderately challenging, concept to comprehend.

To be continued…

References —

https://medium.com/litmus-chaos/extend-your-kubernetes-apis-with-crds-58a8d1135fd

https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/

Satyam Bhardwaj

The author is an undergraduate pursuing a Bachelor of Technology in Computer Science. Everything he does, he does out of fascination, and so he gives it his best.