Creating the Cluster
Initializing the Kubernetes Control Plane with kubeadm
To set up the Kubernetes control plane, follow these steps:
Initialize the Control Plane
Run the following command on the control plane node. Replace 192.168.1.100 with the IP address of your control plane node:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.100
Set Up Local kubeconfig
After the control plane is initialized, set up the local kubeconfig file to interact with the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
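At this point kubectl should be able to reach the API server. As a quick sanity check (the control plane node will report NotReady until a pod network plugin is installed), run:
kubectl get nodes
kubectl get pods -n kube-system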
Install Flannel as the pod network plugin (its default pod network matches the 10.244.0.0/16 CIDR passed to kubeadm init above):
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Restart Flannel
kubectl -n kube-flannel rollout restart daemonset kube-flannel-ds
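To confirm Flannel is healthy, you can watch the rollout and check that the node becomes Ready:
kubectl -n kube-flannel rollout status daemonset kube-flannel-ds
kubectl get nodes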
Joining Worker Nodes to the Cluster
To join worker nodes to the cluster, follow these steps on each worker node.
Pre-checks
sudo systemctl status containerd
sudo systemctl status kubelet
If either service is not active (running), restart and enable it:
sudo systemctl restart containerd
sudo systemctl enable containerd
sudo systemctl status containerd
sudo systemctl restart kubelet
sudo systemctl enable kubelet
sudo systemctl status kubelet
Get the Join Token
If you missed the token that was displayed when you initialized the cluster, you can look up the currently valid tokens with the following command:
kubeadm token list
If the token has expired, you can generate a new one:
kubeadm token create
Then print the SHA-256 hash of the CA certificate:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
With the token and hash, run the following on the worker node:
sudo su -
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
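Alternatively, kubeadm can create a fresh token and print the complete join command (including the CA cert hash) in one step on the control plane node:
kubeadm token create --print-join-command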
Then, from the control plane node, label the worker node:
kubectl label node kube-worker-1 node-role.kubernetes.io/worker=
- Run the Join Command
On each worker node, run the kubeadm join command provided by the kubeadm init output. This command looks something like this:
sudo kubeadm join 192.168.1.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
- Verify Node Joining
After joining the nodes, verify that they have successfully joined the cluster:
kubectl get nodes
You should see the worker nodes listed along with the control plane node.
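New nodes may show NotReady for a short while until Flannel starts on them; if you want to block until everything is Ready, one option is:
kubectl wait --for=condition=Ready node --all --timeout=300s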
Deploy a Demo Application
You have now initialized a Kubernetes control plane using kubeadm and joined worker nodes to the cluster. This setup provides a foundation for further Kubernetes exploration and application deployment; to confirm it works end to end, deploy a simple demo application.
Deploying Nginx to Test Worker Nodes
Now, let’s deploy an Nginx application to ensure that the worker nodes can pull images and run pods correctly.
- Create Nginx Deployment
Use the following command to create an Nginx deployment:
kubectl create deploy nginx --image nginx:latest
- Verify Nginx Deployment
Check the status of the Nginx deployment to ensure it was successful:
kubectl get pods
You should see a pod with the name nginx running. To get more details about the pod, use:
kubectl describe pod <pod-name>
- Expose Nginx Service
Expose the Nginx deployment as a service to get a Cluster IP:
kubectl expose deployment nginx --port=80 --target-port=80 --type=ClusterIP
- Get Nginx Service IP
Get the Cluster IP assigned to the Nginx service:
kubectl get svc nginx
- Test Nginx Service
Test whether the Nginx service is reachable by making an HTTP request to its Cluster IP from within the cluster. Use a temporary pod for this:
kubectl run tmp-shell --rm -i --tty --image busybox -- /bin/sh
In the temporary shell, use the following commands:
wget -qO- http://<Cluster-IP>
Replace <Cluster-IP> with the actual Cluster IP of the Nginx service.
If everything is set up correctly, you should see the default Nginx welcome page content.
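Assuming cluster DNS (CoreDNS) is running, the same check also works against the service's DNS name from inside the temporary pod:
wget -qO- http://nginx.default.svc.cluster.local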
Finally, remember to delete the demo deployment and service:
kubectl delete svc nginx
kubectl delete deployment nginx
Install a Load Balancer (MetalLB)
MetalLB's layer 2 mode requires strict ARP when kube-proxy runs in IPVS mode. Edit the kube-proxy ConfigMap:
kubectl edit configmap -n kube-system kube-proxy
In the embedded KubeProxyConfiguration (the config.conf key), set:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
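kube-proxy only picks up this change when its pods are recreated. Assuming the usual kubeadm-managed DaemonSet, one way to do that is:
kubectl -n kube-system rollout restart daemonset kube-proxy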
To install MetalLB, apply the manifest:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml
If the node cannot reach GitHub directly, you can download the manifest through an HTTP proxy first (e.g. curl -x 192.168.137.200:7890 -O https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml) and apply the local file instead.
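Before creating the address pool, it can help to wait until the MetalLB pods are ready; the app=metallb label below is the one used by the upstream manifest:
kubectl wait --namespace metallb-system --for=condition=ready pod --selector=app=metallb --timeout=120s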
Then create a file named metalLb-ip.yaml that defines an address pool and a layer 2 advertisement:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.120-192.168.1.160
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
Apply it:
kubectl apply -f metalLb-ip.yaml
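To confirm the resources were created (these are MetalLB CRDs installed by the manifest above), you can list them:
kubectl get ipaddresspools.metallb.io,l2advertisements.metallb.io -n metallb-system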
Verify External IP
Create a test Deployment and expose it as a Service of type LoadBalancer to verify that it receives an external IP:
kubectl create deploy nginx --image nginx:latest
kubectl expose deploy nginx --port 80 --type LoadBalancer
Check the service to get the external IP assigned by MetalLB.
kubectl get svc nginx
The output should look something like this:
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.96.222.240   192.168.1.120   80:30091/TCP   1m
With the external IP assigned by MetalLB (e.g., 192.168.1.120), you can now test access to the Nginx service from your host machine or any device on the same network.
Open a web browser and navigate to:
http://192.168.1.120
You should see the default Nginx welcome page.
Or simply run:
curl http://192.168.1.120
If it succeeds, delete the demo Nginx resources:
kubectl delete deploy nginx
kubectl delete svc nginx
Enable masqueradeAll
Setting masqueradeAll to true tells kube-proxy to source-NAT all traffic sent via Service IPs, which can help when external clients cannot otherwise get return traffic from the cluster. Edit the kube-proxy ConfigMap again:
kubectl edit configmap kube-proxy -n kube-system
In the embedded KubeProxyConfiguration, change masqueradeAll from false to true:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"
    iptables:
      masqueradeAll: true
    nftables:
      masqueradeAll: true
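As with the strictARP change, kube-proxy only reads the new configuration when its pods are restarted; assuming the standard kubeadm DaemonSet:
kubectl -n kube-system rollout restart daemonset kube-proxy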