Deployment of Kubernetes on AWS

Atul Yadav

2 min read

June 21, 2024

This article walks through installing Kubernetes manually on AWS. The same setup can be built with Terraform, but I would like to show the manual approach first; the next article will show how to do it with Terraform.

Let’s start with the manual deployment. The following things are required:

  1. AWS Account
  2. VPC (Virtual private cloud)
  3. EC2 instances
  4. Proper IAM roles

First, log in to your AWS account. Don’t worry if you don’t have one; it is simple to create an account with AWS, and basic services are in the free tier for one year 🙂

Once you have logged in to the AWS account, go to Services and select VPC.

Create a VPC with the 10.0.0.0/16 CIDR block.

Newly created VPC

Enable DNS hostnames:

Check the enable option shown in the screenshot.

Now it’s time to create the Subnet in the VPC

To allow communication through the IGW, we need to enable public IPs for the EC2 instances.

Check the enable option.

The next step is to create the Internet Gateway (IGW).

Attach the newly created IGW to VPC

After attaching to the VPC

Create route table

Now, in the route table, add a new route to the 0.0.0.0/0 network via the IGW.

Once the new route is created, associate the route table with the subnet. A CLI sketch of the whole networking setup follows below.
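
If you prefer the CLI over the console, the steps above correspond roughly to the sketch below. This is only an illustrative sketch: it assumes the AWS CLI is installed and configured for your account and region, and the 10.0.0.0/24 subnet CIDR is my assumption (it happens to cover the instance IPs that appear later in this article).

# Create the VPC and enable DNS hostnames
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query 'Vpc.VpcId' --output text)
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames '{"Value":true}'

# Subnet with auto-assigned public IPs (the /24 CIDR is an assumption)
SUBNET_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.0.0/24 \
  --query 'Subnet.SubnetId' --output text)
aws ec2 modify-subnet-attribute --subnet-id "$SUBNET_ID" --map-public-ip-on-launch

# Internet Gateway attached to the VPC
IGW_ID=$(aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"

# Route table with a default route via the IGW, associated with the subnet
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID"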

IAM Role

To make Kubernetes work with AWS, we need to create proper IAM roles for the EC2 instances. One EC2 instance will be the master and the other will be a worker.

Go to IAM → Policies, create a policy, and then in the JSON tab add the policy document below.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolume",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteVolume",
        "ec2:DetachVolume",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
        "iam:CreateServiceLinkedRole",
        "kms:DescribeKey"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Click on Review policy and then on Create policy.

Go to Roles and create a role with EC2 as the trusted entity.

On the permissions step, attach the newly created policy.

Review the details and create the role.
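
If you want to script this part, a minimal sketch with the AWS CLI looks roughly like the following. All the names (k8s-master-policy, k8s-master-role, k8s-master-profile) are hypothetical, and the policy JSON above is assumed to be saved as k8s-master-policy.json:

# Trust policy so that EC2 instances can assume the role
cat <<'EOF' > ec2-trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the policy and the role, then attach the policy to the role
POLICY_ARN=$(aws iam create-policy --policy-name k8s-master-policy \
  --policy-document file://k8s-master-policy.json --query 'Policy.Arn' --output text)
aws iam create-role --role-name k8s-master-role --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name k8s-master-role --policy-arn "$POLICY_ARN"

# EC2 instances pick up the role through an instance profile
aws iam create-instance-profile --instance-profile-name k8s-master-profile
aws iam add-role-to-instance-profile --instance-profile-name k8s-master-profile --role-name k8s-master-role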

IAM worker role

Create a policy and a role for the worker node, following the same steps as for the master policy. Use the JSON below in the JSON tab.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}

Repeat the same steps to create the IAM worker role from this policy.
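
The CLI equivalent is the same as for the master role, just with the worker policy and (hypothetical) worker names:

POLICY_ARN=$(aws iam create-policy --policy-name k8s-worker-policy \
  --policy-document file://k8s-worker-policy.json --query 'Policy.Arn' --output text)
aws iam create-role --role-name k8s-worker-role --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name k8s-worker-role --policy-arn "$POLICY_ARN"
aws iam create-instance-profile --instance-profile-name k8s-worker-profile
aws iam add-role-to-instance-profile --instance-profile-name k8s-worker-profile --role-name k8s-worker-role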

Now it’s time to create the EC2 instances: one master node and one worker node.

Select a t2.medium instance type and an Ubuntu AMI. Launch it into the VPC we created and assign the master IAM role.

Review the instance details

Just follow the steps shown in the screenshots and create a security group for the Kubernetes setup; a CLI sketch of a typical rule set follows below.
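
Since the screenshots of the individual rules are not reproduced here, the sketch below shows roughly what a single security group shared by both nodes of a kubeadm + flannel cluster needs to allow. The group name and the exact rule set are my assumptions, not a copy of the screenshots:

# Security group inside the VPC created earlier ($VPC_ID from the sketch above)
SG_ID=$(aws ec2 create-security-group --group-name k8s-sg \
  --description "Kubernetes setup" --vpc-id "$VPC_ID" --query 'GroupId' --output text)

aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22 --cidr 0.0.0.0/0            # SSH
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 6443 --cidr 0.0.0.0/0          # Kubernetes API server
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 2379-2380 --cidr 10.0.0.0/16   # etcd
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 10250-10252 --cidr 10.0.0.0/16 # kubelet, scheduler, controller-manager
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol udp --port 8472 --cidr 10.0.0.0/16        # flannel VXLAN
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 30000-32767 --cidr 0.0.0.0/0   # NodePort services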

Now launch the instance; it will ask for a key pair. Create a new key pair and download it so you can access the instances via SSH.

While the master instance is spinning up, repeat the same steps for the worker node, and assign the worker IAM role to it.

Assign the existing security group to the worker node. The CLI sketch below covers both instances.
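
For reference, launching both instances from the CLI would look roughly like this. The AMI ID is a placeholder for an Ubuntu AMI in your region, and the key pair, tag, and profile names are hypothetical, matching the earlier sketches:

# Key pair for SSH access
aws ec2 create-key-pair --key-name k8s-key --query 'KeyMaterial' --output text > k8s-key.pem
chmod 400 k8s-key.pem

# Master node (ami-xxxxxxxx is a placeholder Ubuntu AMI)
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.medium \
  --key-name k8s-key --subnet-id "$SUBNET_ID" --security-group-ids "$SG_ID" \
  --iam-instance-profile Name=k8s-master-profile \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=k8s-master}]'

# Worker node: same command, with the worker instance profile
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.medium \
  --key-name k8s-key --subnet-id "$SUBNET_ID" --security-group-ids "$SG_ID" \
  --iam-instance-profile Name=k8s-worker-profile \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=k8s-worker}]'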

Verify that both instances are up and running. Also, check the public IPs, which will be used to SSH into the instances.

SSH into both instances, or ping them to check that the networking is working fine.

Voilà!!

Now the AWS infrastructure for the Kubernetes setup is done. Let’s start.

Log in to the Kubernetes master server using SSH. To save time, I will paste all the working logs in one go.

Perform the below steps on both EC2 instances

root@ip-10-0-0-76:~# apt update && apt -y upgrade

Add Docker and Kubernetes repositories and other required packages

root@ip-10-0-0-76:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - && \
  add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" && \
  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list && \
  apt update && apt install -y docker-ce kubelet kubeadm kubectl

Set the hostname on both instances so that it matches the EC2 private DNS name, which the AWS cloud provider integration expects.

root@ip-10-0-0-76:~# hostname
ip-10-0-0-76
root@ip-10-0-0-76:~# curl http://169.254.169.254/latest/meta-data/local-hostname
ip-10-0-0-76.ap-south-1.compute.internal

Set the above FQDN as the hostname:

root@ip-10-0-0-76:~# hostnamectl set-hostname ip-10-0-0-76.ap-south-1.compute.internal
root@ip-10-0-0-76:~# hostname
ip-10-0-0-76.ap-south-1.compute.internal

On the master node

root@ip-10-0-0-193:~# hostname
ip-10-0-0-193
root@ip-10-0-0-193:~# curl http://169.254.169.254/latest/meta-data/local-hostname
ip-10-0-0-193.ap-south-1.compute.internal
root@ip-10-0-0-193:~# hostnamectl set-hostname ip-10-0-0-193.ap-south-1.compute.internal
root@ip-10-0-0-193:~# hostname
ip-10-0-0-193.ap-south-1.compute.internal
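
As a small convenience, the two steps can be combined into a single command on each node, using the same metadata endpoint as above:

hostnamectl set-hostname "$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)"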

Kubernetes cluster setup on master node

Create the yml file in the /etc/kubernetes/ directory on the master node. All of these files are present on GitHub; if needed, you can fork them from there.

 

GitHub: atul7107/Kubernetessetup on github.com (this repo is for setting up the K8s cluster on AWS manually).
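
If you do not want to pull the repo, a kubernetes.yml along the following lines should work for this setup. The service subnet matches the 10.100.0.1 API service IP that appears in the init output below; the pod subnet (flannel's default) and the cloud-provider: aws flags are my assumptions for an AWS-integrated cluster, so the file in the repo may differ:

cat <<'EOF' > /etc/kubernetes/kubernetes.yml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  name: ip-10-0-0-193.ap-south-1.compute.internal
  kubeletExtraArgs:
    cloud-provider: aws
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
networking:
  serviceSubnet: 10.100.0.0/16
  podSubnet: 10.244.0.0/16
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws
EOF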

Initialize the cluster using this config:

root@ip-10-0-0-193:/etc/kubernetes# kubeadm init --config /etc/kubernetes/kubernetes.yml
W0517 08:43:52.369633 32117 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-10-0-0-193.ap-south-1.compute.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.100.0.1 10.0.0.193]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ip-10-0-0-193.ap-south-1.compute.internal localhost] and IPs [10.0.0.193 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ip-10-0-0-193.ap-south-1.compute.internal localhost] and IPs [10.0.0.193 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0517 08:44:25.643916 32117 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0517 08:44:25.645034 32117 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.002152 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ip-10-0-0-193.ap-south-1.compute.internal as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ip-10-0-0-193.ap-south-1.compute.internal as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zf38t6.64yuqvq4mfb921gb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.193:6443 --token zf38t6.64yuqvq4mfb921gb \
    --discovery-token-ca-cert-hash sha256:b4b5290600a7e9a150c54799aa2d0e524c46219822ea9872376b4bd8b909103a

Kubeconfig file:

Now it’s time to set up the kubeconfig file, which kubectl will use to communicate with the cluster.

root@ip-10-0-0-193:/etc/kubernetes# mkdir -p $HOME/.kube
root@ip-10-0-0-193:/etc/kubernetes# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@ip-10-0-0-193:/etc/kubernetes# chown ubuntu:ubuntu $HOME/.kube/config

Check the node status:

Check the cluster configuration info using config view:
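
The screenshots of these checks are not reproduced here; the commands behind them are the standard ones below. At this point the master typically still shows as NotReady, because no CNI plugin has been installed yet:

root@ip-10-0-0-193:/etc/kubernetes# kubectl get nodes
root@ip-10-0-0-193:/etc/kubernetes# kubectl config view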

Flannel CNI installation: flannel is the CNI (Container Network Interface) plugin used here for pod networking; it is what allows pods on the two nodes to communicate with each other.

Execute the below command on the master node:

root@ip-10-0-0-193:/etc/kubernetes# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
root@ip-10-0-0-193:/etc/kubernetes#

Now check the status of the nodes again; the master should be in the Ready state.

Attaching the worker node to the master node

Create a file node.yml on the worker node with a JoinConfiguration; a sketch is shown below.

Run the command on the worker node:
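
For reference, a node.yml along these lines matches the join command printed by kubeadm init above. The worker node name and the cloud-provider flag are my assumptions, and the file in the GitHub repo may differ:

cat <<'EOF' > node.yml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.0.0.193:6443"
    token: "zf38t6.64yuqvq4mfb921gb"
    caCertHashes:
      - "sha256:b4b5290600a7e9a150c54799aa2d0e524c46219822ea9872376b4bd8b909103a"
nodeRegistration:
  name: ip-10-0-0-76.ap-south-1.compute.internal
  kubeletExtraArgs:
    cloud-provider: aws
EOF

kubeadm join --config node.yml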

Check the status of the nodes on the master server; the worker node will be up and running.

Finally, we are done with the deployment of the Kubernetes cluster with a master and a worker node. In the next part, I will cover deploying an application in pods and accessing it externally.

Hope you liked it, and don’t forget to clap for the article 🙂

Happy Learning 🙂