Nullified Productions Bitch

Kubernetes v1.27.2 on RHEL 9 with containerd

I haven’t blogged in a minute so I wanted to make sure that I came with something that might stick to the ribz.

https://mistonline.in/wp/how-to-install-kubernetes-cluster-on-rhel-9/

Here is a tutorial on Kubernetes and RHEL 9, but this person uses CRI-O as the runtime for their containers. I used containerd and runc in my environment. I intend to try out those steps as well, just to see the difference.

Minikube may be a way to go for testing, growing, and deploying production-ready cloud applications for a business.

I am me though.

I am able to deploy custom pods with containers in a controller's default namespace. The preference was to build a 2-worker-node, 1-controller scenario, but I now realize that this will be a journey.

Having said that, I am creating this blog as a placeholder to track my progress, with reference points, as I try to verify this process and translate it into a concise document… just to start out, so don't expect this to be complete yet.

Bear with me as I gather my files for a potential repository creation (I definitely have a Dockerfile I can link to help you with your worker-node build). Hopefully this can provide some nice light content for any other wayward engineers.

Please come back for updates. I am also writing this just to postulate that I am not only a hard worker but I am also pretty intelligent.

Now let's get to it!!!

Things to consider

  • The controller node can't have any swap space. ## Need to confirm whether this is also true for the worker node. ## The worker node has been tested on a ubi9-init image, and I am not sure whether that uses swap space.
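Disabling swap is quick; here is a minimal sketch, assuming your swap entry lives in /etc/fstab (the usual RHEL setup):

# Turn swap off immediately
sudo swapoff -a
# Comment out any fstab swap entries so it stays off after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Verify: the Swap line should read all zeros
free -h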

What to Understand/Download/Install:

Most of the general directions are here:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

I found that I may have strayed off this beaten path, but not by much. Below I try to walk you through an installation and provide my own experience.

Container Runtime – You have a decision to make here. I went with containerd and this document will only reflect steps related to that.

https://containerd.io/

I tried to refer to the installation repositories for dependency resolution as much as possible. runc came from the installation repositories. containerd (1.7.1 as of this writing) comes with its runtime plugins. The networking plugins (cni-plugins) and the container runtime interface debugger (crictl, from the cri-tools repo) are manual gunzips.
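For those manual gunzips, a sketch of what that looks like — the release versions and filenames here are examples, so check each project's releases page for current ones:

# containerd's release tarball unpacks its binaries under /usr/local/bin
sudo tar Cxzvf /usr/local containerd-1.7.1-linux-amd64.tar.gz
# CNI plugins go where the kubelet and containerd expect them by convention
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz
# crictl is a single binary; drop it somewhere on your $PATH
sudo tar Cxzvf /usr/local/bin crictl-v1.27.0-linux-amd64.tar.gz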

Note on crictl – I just realized that crictl is part of the installation repo, but for some reason I have a 1.27 version of it from, I'd like to say, its own repository. I don't really remember.

https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md
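Worth noting: crictl needs to be told which runtime socket to talk to, or it will guess and complain. A minimal /etc/crictl.yaml for containerd, assuming the default socket path, looks something like this:

runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock

After that, crictl ps and crictl info make handy sanity checks.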

Place them somewhere, make them work. Details might be below. Ultimately, at this point all of the executables should be on your $PATH and able to run without specifying their complete paths. Services should be configured to run at startup. The services may still error out, but that may be a configuration issue. This is the crowning part of the shit.
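A quick sanity pass over that, as a sketch (containerd ships a containerd.service unit file you can drop into /etc/systemd/system; the kubelet unit comes with the kubeadm packages):

# Everything should resolve without a full path
command -v containerd runc crictl kubeadm kubelet kubectl
# Enable the services at startup
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
sudo systemctl enable kubelet

The kubelet will crash-loop until kubeadm init hands it a configuration; that part is expected and not one of the configuration issues mentioned above.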

More information about runtime environments:

https://kubernetes.io/docs/setup/production-environment/container-runtimes/

Now

One has to make sure that all of the components are running correctly. This takes navigating around and learning your installation. My #1 issue centered mostly around the API server. kubeadm init generates the static pod manifest for kube-apiserver under /etc/kubernetes/manifests rather than a separate configuration file you can look up. The following link allowed me to figure out how my controller was accessing it.

https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#generate-static-pod-manifests-for-control-plane-components
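A few commands I leaned on while poking at the control plane — nothing exotic, just where to look:

# The static pod manifests kubeadm generated for the control plane
ls /etc/kubernetes/manifests
# What the runtime is actually running (works even before the API server is up)
sudo crictl ps
# Once the API server answers, check the control-plane pods themselves
kubectl -n kube-system get pods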

Containerd Plugin Settings:

/etc/containerd/config.toml

This needs to exist, or else all of your plugins will use default settings, which may work for some. There was not much to do here for me: I commented out 1 line and set the other to true:

#systemd_cgroup = false

under [plugins."io.containerd.grpc.v1.cri"]

SystemdCgroup = true

under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
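By the way, if the file doesn't exist yet, containerd can write one out populated with all of its defaults, which you can then edit; restart the service afterward so it picks the changes up:

containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd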

I put the appropriate directory values where I was asked for CNI information (bin_dir and conf_dir):

[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/opt/cni/bin"
  conf_dir = "/etc/cni/net.d"
  conf_template = ""

Validate your container network bridge information in /etc/cni/net.d. Here is some info about that:

https://github.com/containernetworking/cni

For further insight: I have 2 networks, so I would create 1 file for each. A sketch of one such file follows.
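For reference, a minimal bridge network definition adapted from the containerd/CNI docs — the file name, network name, bridge device, and subnet here are made-up examples, so yours will differ. Saved as something like /etc/cni/net.d/10-mynet.conflist:

{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}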

Module Configuration and Loading:

Create a file (e.g. k8s.conf):

File contents:

overlay
br_netfilter

That file goes here: /etc/modules-load.d/

Create another file (e.g. etc-sysctl.d-k8s.conf):

File contents:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1

That file goes here: /etc/sysctl.d/
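Those two files only take effect at boot; to apply the same settings immediately without rebooting:

sudo modprobe overlay
sudo modprobe br_netfilter
# Re-reads everything under /etc/sysctl.d/
sudo sysctl --system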

If all goes well, you should be able to restart the server and run kubeadm init, which will initialize your cluster.
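As a sketch, here's the init run with a pod network CIDR specified up front (the CIDR is an example; it has to line up with whatever pod network add-on you install afterward), followed by the kubectl setup that kubeadm prints in its own output:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Then, as your regular user, wire up kubectl per the init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config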

That should be it. I have to review these steps a couple dozen times. I will add links to more stuff as I put together different configurations and try to grow this environment. Getting this shit to run is just the beginning.

To start creating containers:

https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/

Troubleshooting Tips:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#pods-in-runcontainererror-crashloopbackoff-or-error-state

Good Commands

To get and tweak the configuration of a deployment (e.g. this pulls the internal coredns deployment configuration, flips allowPrivilegeEscalation, and applies it back):

kubectl -n kube-system get deployment coredns -o yaml | \
  sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
  kubectl apply -f -
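If you just want to read a deployment's configuration without changing anything, drop the pipeline:

kubectl -n kube-system get deployment coredns -o yaml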