How to Configure a Bare Metal Kubernetes Cluster with MetalLB

Overview

In this guide I will show you how to configure your own Kubernetes cluster at home, as well as how to set up an Ingress and load balancer so you can access your services from outside the cluster. This guide assumes you're running Ubuntu 20.04.

Install Docker

You will need to install Docker on each Kubernetes node in your cluster. Docker is our container runtime for the cluster; there are other options, but we'll use Docker for this guide. To install Docker on Ubuntu, run the following commands:

Uninstall Old Versions of Docker

sudo apt-get remove docker docker-engine docker.io containerd runc

Setup The Repository

sudo apt-get update

sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release \
    software-properties-common \
    apt-transport-https
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io

Setup Docker Daemon

You will need to configure the Docker daemon to use systemd for cgroup management, which is what the kubelet expects. To do so, run:

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m" },
  "storage-driver": "overlay2"
}
EOF

Now, enable and restart Docker by running:

sudo systemctl enable docker

sudo systemctl daemon-reload

sudo systemctl restart docker
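To confirm that Docker picked up the new settings, you can check the cgroup driver it reports; it should now say systemd:

docker info | grep -i 'cgroup driver'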

Install kubeadm, kubelet, and kubectl

Kubeadm allows us to quickly and easily create a Kubernetes cluster. Kubectl is the CLI tool that we will use to configure everything in our cluster. Kubelet is an agent that runs on each node and ensures that containers are running in pods.

To install these, run the following commands:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
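As a quick sanity check, you can confirm that each tool installed correctly:

kubeadm version
kubectl version --client
kubelet --version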

Initialize Cluster with kubeadm

Once everything is installed, we'll use kubeadm to initialize the cluster. This will make the current server the master node of the cluster. To do so, run:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The pod network CIDR must be 10.244.0.0/16 for our network overlay to work properly. It is possible to change it, though; I had success following this guide: Is it possible to change CIDR network flannel and Kubernetes - Server Fault

Start Using Your Cluster

After you've initialized your cluster, you have two options. You can manage your cluster as root, or as a regular user.

To manage your cluster as a regular user, run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

To manage your cluster as root, run the following command:

export KUBECONFIG=/etc/kubernetes/admin.conf
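Either way, you can confirm that kubectl can reach the cluster by listing the nodes. Don't worry if the master shows NotReady at this point; it will become Ready once we install the network overlay in the next step.

kubectl get nodes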

Install the Network Overlay

Next we need to install a network overlay for our cluster. You can learn more about Kubernetes cluster networking at Cluster Networking | Kubernetes. We'll install the Flannel network overlay, as it is one of the simplest overlay networks that satisfies the Kubernetes requirements. To install it, run:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
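You can verify that the Flannel pods came up (your master node should transition to Ready shortly after):

kubectl get pods --all-namespaces | grep flannel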

Single Node Testing (Optional)

If you only wish to test and run pods on a single host, or you want to allow the master node to run pods, you will need to remove the NoSchedule taint from the master node. To do so, run this command on your master node:

kubectl taint node <nodehostname> node-role.kubernetes.io/master:NoSchedule-

It is not advisable to remove the NoSchedule taint from the master node in a multi-node cluster, because the worker nodes rely on the master node for instructions. If anything were to cause the master node to become unavailable, new pods could not be scheduled on the worker nodes, nor could the cluster respond to failing pods.
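If you later add worker nodes and want to restore the default behavior, you can re-apply the taint. Note that the trailing - in the previous command removes a taint, while omitting it adds one:

kubectl taint node <nodehostname> node-role.kubernetes.io/master:NoSchedule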

Join Worker Nodes

The first thing you will want to do on a fresh worker node is follow the Install Docker and Install kubeadm, kubelet, and kubectl sections again.

After you've done so, join each worker node to your cluster by running the join command that was printed when you first initialized the cluster. If you didn't manage to copy it, no worries; you can generate a new join command on your master node like so:

kubeadm token create --print-join-command

It will look something like this:

kubeadm join <master-host>:<master-api-port> --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
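Back on the master node, you can confirm that each worker joined; after a minute or so every node should report Ready:

kubectl get nodes -o wide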

MetalLB

Kubernetes does not currently provide an implementation of a network load balancer for bare-metal clusters, so MetalLB comes to the rescue for us bare-metal users.

To install MetalLB on the cluster, we'll use Helm. Install Helm on your master node by running the following commands:

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

After Helm is installed, we'll first need to create a config file with the IP address range we wish MetalLB to use. Create a YAML file named metallb-values.yaml from the following template:

configInline:
  address-pools:
  - name: default
    protocol: layer2
    addresses:
    - <IP-RANGE-OR-CIDR>

For the addresses you can either list a range, such as 192.168.1.200-192.168.1.210, or use CIDR notation, such as 192.168.1.0/24. The IP addresses MUST be in the same subnet as your Kubernetes nodes to work properly. It's also advisable to exclude these addresses from your DHCP scope so that there are no IP address conflicts.
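As a concrete example, here is what metallb-values.yaml might look like for a home network on 192.168.1.0/24, reserving ten addresses for MetalLB (adjust the range to fit your own network):

configInline:
  address-pools:
  - name: default
    protocol: layer2
    addresses:
    - 192.168.1.200-192.168.1.210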

After you've configured your addresses, we'll install MetalLB with Helm and apply the config file above.

helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb -f metallb-values.yaml --create-namespace --namespace metallb

Verify that the MetalLB pods are running:

kubectl get pods -n metallb

That's all we need to do to get MetalLB working! Now any service of type LoadBalancer will be assigned an IP from the pool we've allocated to MetalLB.
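To see this in action right away, here's a minimal sketch of a Service of type LoadBalancer (the name and selector are hypothetical placeholders; we'll deploy a real app in the Create a Test App section below):

apiVersion: v1
kind: Service
metadata:
  # hypothetical name for illustration only
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080

Once applied, kubectl get svc my-app should show an address from your MetalLB pool under EXTERNAL-IP.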

Ingresses

At this point, you could just access your Kubernetes services by setting them to type LoadBalancer, but you'd have to remember each IP address and port to access them! A much better approach is to use Ingresses. If you've ever used a reverse proxy, that is, in a nutshell, what an Ingress is: it routes incoming requests to the appropriate backend service.

Install the Ingress Controller

We will need an Ingress Controller to manage our Ingress resources. We'll use ingress-nginx for this guide and install it with Helm. Run the following command:

helm install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace

Verify that the Ingress Controller pod is running:

kubectl get pods -n ingress-nginx

Create a Test App

Before we create an Ingress, we need something to run! We'll run a simple web server for this guide. Copy the following into a file, e.g. web-server.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/node-hello:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-world

To deploy the web server, run:

kubectl apply -f web-server.yaml

Verify the pod is running:

kubectl get pod

Create Ingress Resource

Currently, we can't access the webpage from outside the cluster, so we need to create an Ingress resource. To do so, copy the following Ingress template into a file named ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  ingressClassName: nginx
  rules:
  - host: myk8swebserver.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-world
            port:
              number: 80

Replace the host value with the FQDN you would like for this service. The idea is that the Ingress Controller will forward any request for "myk8swebserver.com" to the Service running our web server on the cluster. Now apply the Ingress to the cluster:

kubectl apply -f ingress.yaml
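You can confirm the Ingress was created and picked up by the controller (the HOSTS column should show your FQDN):

kubectl get ingress hello-world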

Hosts file

For this simple demonstration, we'll just use our hosts file to map "myk8swebserver.com" to the IP of our Ingress Controller. To find the IP of our Ingress Controller, run:

kubectl get svc -n ingress-nginx

The IP of our Ingress Controller will be under EXTERNAL-IP.

Edit your hosts file to include the following:

<INGRESS-IP> myk8swebserver.com
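Alternatively, you can test the routing without touching your hosts file by passing the Host header explicitly with curl (substitute the EXTERNAL-IP from the previous step):

curl -H "Host: myk8swebserver.com" http://<INGRESS-IP>/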

View the Web Page

Now that we have our app running, our Ingress configured, and our hosts file updated, we should be able to type http://myk8swebserver.com in a browser and see a "Hello Kubernetes!" message.

Enable TLS

You can also use an Ingress for TLS termination: your Ingress Controller handles TLS while your backend speaks plain HTTP. To set this up, there's a minor addition to our Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  ingressClassName: nginx
  tls:
    - hosts:
      - myk8swebserver.com
      # secretName refers to the secret containing the TLS certificate
      # if it does not exist, ingress-nginx will use a self-signed certificate
      secretName: tls-secret
  rules:
  - host: myk8swebserver.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-world
            port:
              number: 80

Once you've added the tls section to your ingress.yaml, apply it again:

kubectl apply -f ingress.yaml

Now you should be able to access your web server at https://myk8swebserver.com. Your browser will warn about an invalid certificate, because ingress-nginx generated a self-signed certificate since we didn't supply one, but traffic will still be encrypted.
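If you have a real certificate and key for your host, you can create the tls-secret referenced above yourself (the file paths here are placeholders):

kubectl create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key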

For future reference, if you have an app that uses HTTPS on the backend, you will need to add this annotation to your Ingress:

nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
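In context, the annotation goes under metadata in your Ingress resource, for example:

metadata:
  name: hello-world
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"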

Conclusion

Congrats! You've successfully created your own Kubernetes cluster! I hope this guide has given you the knowledge necessary to configure your cluster.

Extras

There are a few extras you might want on your cluster. They really make Kubernetes shine in my opinion.

external-dns

External-DNS is able to synchronize our Services and Ingresses with our domain's DNS records so we don't have to add them manually.

charts/bitnami/external-dns at master · bitnami/charts (github.com)
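As a minimal sketch of how it works: once External-DNS is configured for your DNS provider, you annotate a Service (or Ingress) with the hostname you want published, and External-DNS creates the corresponding record. The hostname below is a placeholder:

apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    # placeholder hostname; External-DNS will create this DNS record
    external-dns.alpha.kubernetes.io/hostname: hello.example.com
spec:
  type: LoadBalancer
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 8080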

cert-manager

Cert-manager can request SSL certificates automatically for you when you add new hosts to your Ingress resources, or when your certificates expire. I have a guide on how to set this up: How to Configure Cert-Manager with ZeroSSL on Kubernetes

k8s_gateway

K8s_gateway acts as a DNS server that you can use to access your internal Kubernetes services that you do not wish to expose via External-DNS. For example, if all of your services are on internal.example.com, you could configure a conditional forwarder on your main DNS server to forward all DNS queries for internal.example.com to k8s_gateway's IP address.

k8s_gateway: A CoreDNS plugin to resolve all types of external Kubernetes resources (github.com)
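To illustrate the conditional forwarder idea, if your main DNS server were running dnsmasq, a single line like this would forward all queries for internal.example.com to k8s_gateway (the IP is a placeholder for k8s_gateway's LoadBalancer address):

server=/internal.example.com/192.168.1.205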

Resources

For further knowledge, here are some links to the documentation for the tools mentioned in this guide. The Kubernetes docs are a fantastic resource for learning more about the types of resources in Kubernetes.

NGINX Ingress Controller
Helm
Artifact Hub
Overview of kubectl
kubeadm
kubelet
Docker Documentation
Kubernetes Documentation