Install Kubernetes Cluster on AWS EC2 instances


If you are looking to install Kubernetes on AWS EC2 instances from scratch, this guide will make the steps easier. First, let's look at the prerequisites.
You need one AWS EC2 instance for the master node and one for each worker node. Here we have used the Amazon Linux 2 AMI, which is based on CentOS Linux.

The AMI has the required libraries pre-installed, and if you're using another CentOS-based system such as RHEL, most of the commands remain the same. Ensure the required ports are open in the respective security group, and keep both EC2 instances in the same VPC and availability zone.

Connect to both EC2 instances using two PuTTY terminals and decide which EC2 instance is the master and which is the worker. Now let's get into the main part.

Master and Worker nodes:


As the root user, install Docker, then enable the Docker service so that it starts automatically on system restarts.

sudo su
yum install docker -y
systemctl enable docker && systemctl start docker
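Before moving on, a quick sanity check (optional, not part of the original steps) confirms Docker is enabled and running:

```shell
# Check that the Docker daemon is enabled and currently active
systemctl is-enabled docker
systemctl is-active docker

# Print the installed Docker version
docker --version
```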

Now create a yum repo file, and use yum to install the Kubernetes components.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
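To confirm yum can see the new repository before installing anything, you can list the enabled repos (an optional check; "kubernetes" should appear in the output):

```shell
# List enabled repositories; the new "kubernetes" repo should be present
yum repolist enabled
```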

The sysctl command modifies kernel parameters at runtime; here we ensure bridged traffic passes through the kernel's iptables and ip6tables. We also set SELinux (Security-Enhanced Linux) to permissive mode so Kubernetes components can access the host resources they need.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
setenforce 0
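Note that setenforce 0 only lasts until the next reboot. A common follow-up (an extra step beyond the original instructions) is to verify the settings took effect and make the SELinux change persistent:

```shell
# Confirm the bridge netfilter settings are active
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables

# Confirm SELinux is now permissive
getenforce

# Persist permissive mode across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```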

Install kubelet, kubeadm, and kubectl, then enable and start the kubelet daemon.

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
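To confirm all three components installed correctly (an optional check), print their versions:

```shell
# Print the versions of the installed Kubernetes components
kubeadm version -o short
kubectl version --client
kubelet --version
```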


In Master Node:

First, let us initiate the cluster on the master node.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU

# sudo kubeadm init --pod-network-cidr=192.168.0.0/16  # Use this form, without --ignore-preflight-errors, only if enough CPU cores are available

Allow approximately 4 minutes. The master node's output will then display the following commands; copy and paste them into the master's terminal (or copy them from here).

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
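At this point kubectl should be able to reach the API server. A quick verification (optional) on the master:

```shell
# Confirm kubectl can talk to the cluster's control plane
kubectl cluster-info
kubectl get nodes
```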

It will also generate a kubeadm join command with the required tokens; save it for the worker nodes.
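If you lose that output, the join command can be regenerated on the master at any time (note that by default the token it prints expires after 24 hours):

```shell
# Re-create a bootstrap token and print the full join command
kubeadm token create --print-join-command
```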

In Worker nodes:

After the cluster initialization completes on the master node, copy the kubeadm join command from the master node's output and run it on the worker node. (First, switch to the root user.)

sudo su
<kubeadm join command copied from master node>

Then set KUBECONFIG on both the master and worker nodes, so that kubectl commands can be executed.

export KUBECONFIG=/etc/kubernetes/kubelet.conf
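The export only lasts for the current shell session. As a convenience (not required by the original steps), you could persist it in the root user's profile:

```shell
# Persist the KUBECONFIG setting for future root logins
echo 'export KUBECONFIG=/etc/kubernetes/kubelet.conf' >> ~/.bashrc

# Apply it to the current session as well
export KUBECONFIG=/etc/kubernetes/kubelet.conf
```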

Our cluster is now set up, and you can view it by running the command:

kubectl get nodes

The result of the above command will show both nodes in the NotReady status. This is because Kubernetes requires an "overlay network" for pod-to-pod communication to function.
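To see why a node is not ready, you can describe it; before a CNI plugin is installed, the node's Conditions section typically reports that the network plugin is not ready:

```shell
# Replace <node-name> with a name from "kubectl get nodes"
kubectl describe node <node-name>
```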

Third-party CNI plugins create this overlay network; here we use Calico.

Create an Overlay network using Calico CNI plugin on the Master node
For Calico networking, copy and paste the following commands and run them only on the master node.

sudo kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/etcd.yaml
sudo kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/rbac.yaml
sudo kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/calico.yaml

Wait a couple of minutes, then verify that the nodes are ready by running the following on the master node:

kubectl get nodes

You should see a list of two nodes, both of which should be in the READY state.
Next, check the system pods:

kubectl get pods --all-namespaces
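Rather than re-running the command manually, kubectl's -w flag watches pod status live (press Ctrl-C to stop):

```shell
# Watch the system pods until they all reach Running status
kubectl get pods -n kube-system -w
```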

The above command should show a large number of pods. If it doesn't, try refreshing the root profile by running the command below and then re-running the previous command.

sudo su -
kubectl get pods --all-namespaces
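Once all the system pods are Running, an optional smoke test (using a deployment name of our own choosing, not part of the original steps) confirms the cluster can actually schedule workloads:

```shell
# Create a throwaway nginx deployment, confirm its pod schedules, then clean up
kubectl create deployment hello --image=nginx
kubectl get pods -o wide
kubectl delete deployment hello
```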

We hope this installation of Kubernetes on AWS EC2 was easy; if you run into any difficulty, feel free to get assistance.


Written by actsupp-r0cks