Controllers are core components of Kubernetes: controller processes monitor Kubernetes objects and take action to keep them in the desired state. In this post, I am going to discuss the Replication Controller and the ReplicaSet.
In a typical production scenario, we run our services on multiple instances to achieve high availability. In Kubernetes, that means running multiple Pods of the same application, so that even if one of the Pods fails, our application continues to serve requests from the other Pods.
As you see in the image above, the application ‘User-Service’ is running in 3 Pods.
Replication Controller
The Replication Controller allows us to run multiple instances of a single Pod in a Kubernetes cluster.
When a running Pod fails, the Replication Controller automatically spins up a replacement Pod.
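This self-healing behavior is easy to observe. A minimal sketch, assuming the ‘employee-service-rc’ controller created later in this post is running (the Pod name below is illustrative; your suffixes will differ):

```shell
# Delete one of the Pods managed by the Replication Controller.
kubectl delete pod employee-service-rc-bq7qz

# The controller notices the actual count (2) is below the desired count (3)
# and immediately creates a replacement Pod with a new random suffix.
kubectl get pods
```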
Replication Controller vs Replica Set
Replication Controller is an older concept that has been replaced by ReplicaSet. Kubernetes recommends using ReplicaSet to set up replication of Pods.
The flowchart below depicts the working of a ReplicaSet (or) a Replication Controller.
Let’s create pods using the replication controller.
Step 1: Create ‘employeeServiceReplicationController.yml’ file.
employeeServiceReplicationController.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: employee-service-rc
  labels:
    app: employee-service-rc
    author: krishna
    serviceType: webservice
spec:
  template:
    metadata:
      name: employee-service
      labels:
        app: employee-service
        author: krishna
        serviceType: webservice
    spec:
      containers:
      - name: employee-service-container
        image: jboss/wildfly
  replicas: 3
Step 2: Execute the below command to create pods from the yml file.
kubectl create -f employeeServiceReplicationController.yml
$kubectl create -f employeeServiceReplicationController.yml
replicationcontroller/employee-service-rc created
Step 3: Execute below command to see all the replication controllers.
kubectl get replicationcontroller
$kubectl get replicationcontroller
NAME                  DESIRED   CURRENT   READY   AGE
employee-service-rc   3         3         3       88s
As you see in the output, ‘employee-service-rc’ has been created. The desired number of Pods is 3, the current count is 3, and all 3 are in the Ready state.
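For more detail than this summary table, `kubectl describe` shows the controller's selector, its Pod template, and the events recorded for each Pod it created:

```shell
# Inspect the Replication Controller created in Step 2.
kubectl describe replicationcontroller employee-service-rc
```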
Step 4: Execute the below command to see the list of pods.
$kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
employee-service-rc-bq7qz   1/1     Running   0          3m12s
employee-service-rc-cqj8f   1/1     Running   0          3m12s
employee-service-rc-w9zvg   1/1     Running   0          3m12s
You can check the IP addresses of pods using -o wide option.
$kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
employee-service-rc-bq7qz   1/1     Running   0          3m37s   172.17.0.7   minikube   <none>           <none>
employee-service-rc-cqj8f   1/1     Running   0          3m37s   172.17.0.8   minikube   <none>           <none>
employee-service-rc-w9zvg   1/1     Running   0          3m37s   172.17.0.6   minikube   <none>           <none>
As you can see from the output, each Pod is assigned a different internal IP address.
Also observe that the Pod names are prefixed with the Replication Controller name, ‘employee-service-rc’.
Step 5: You can delete the replication controller by executing the below command.
kubectl delete replicationcontroller {replicationController}
$kubectl delete replicationcontroller employee-service-rc
replicationcontroller "employee-service-rc" deleted
$kubectl get pods
NAME                        READY   STATUS        RESTARTS   AGE
employee-service-rc-bq7qz   0/1     Terminating   0          6m32s
employee-service-rc-cqj8f   0/1     Terminating   0          6m32s
$kubectl get pods
No resources found in default namespace.
Let’s create the same Pods using a ReplicaSet.
Step 1: Define ‘employeeServiceReplicationSet.yml’ file.
employeeServiceReplicationSet.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: employee-service-replica-set
  labels:
    app: employee-service-replica-set
    author: krishna
    serviceType: webservice
spec:
  template:
    metadata:
      name: employee-service
      labels:
        app: employee-service
        author: krishna
        serviceType: webservice
    spec:
      containers:
      - name: employee-service-container
        image: jboss/wildfly
  replicas: 3
  selector:
    matchLabels:
      app: employee-service
As you can see, the definition files of ReplicationController and ReplicaSet are almost identical; the only difference is the ‘selector’ section I added. The 'selector' section defines which Pods fall under this ReplicaSet.
Why is the selector section needed?
A ReplicaSet can also manage Pods that were not created by this ReplicaSet definition file. If any existing Pods match the selector, the ReplicaSet takes those Pods into account as well when maintaining the desired number of replicas.
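To see this in action, consider a standalone Pod whose labels match the ReplicaSet's selector (a sketch; the Pod name here is made up). If the ReplicaSet is already at its desired count, it treats this Pod as an excess replica and terminates one Pod; otherwise it simply adopts it toward the count:

```shell
# A standalone Pod carrying the same 'app: employee-service' label that the
# ReplicaSet's selector matches on.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: employee-service-standalone
  labels:
    app: employee-service
spec:
  containers:
  - name: employee-service-container
    image: jboss/wildfly
EOF
```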
Step 2: Execute the below command to create pods from the definition file.
kubectl create -f employeeServiceReplicationSet.yml
$kubectl create -f employeeServiceReplicationSet.yml
replicaset.apps/employee-service-replica-set created
You can execute the command ‘kubectl get replicaset’ to see the list of ReplicaSets.
Step 3: Get all the Pods.
$kubectl get pods -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
employee-service-replica-set-262wm   1/1     Running   0          2m26s   172.17.0.8   minikube   <none>           <none>
employee-service-replica-set-nv6tx   1/1     Running   0          2m26s   172.17.0.6   minikube   <none>           <none>
employee-service-replica-set-xnqs2   1/1     Running   0          2m26s   172.17.0.7   minikube   <none>           <none>
As you see in the output, the names of the Pods start with the ReplicaSet name, ‘employee-service-replica-set’.
How does a ReplicaSet monitor the Pods?
Using the selector definition that we provided, the ReplicaSet knows which Pods it should monitor.
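Because the link between a ReplicaSet and its Pods is just the label selector, you can use the same selector from the command line to list exactly the Pods the ReplicaSet is watching:

```shell
# List only the Pods matching the ReplicaSet's selector.
kubectl get pods -l app=employee-service

# The ReplicaSet's own view, including the events it generated, is visible via:
kubectl describe replicaset employee-service-replica-set
```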
How can I update the number of replicas to 5?
Approach 1: Open the definition file and update the number of replicas to 5.
employeeServiceReplicationSet.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: employee-service-replica-set
  labels:
    app: employee-service-replica-set
    author: krishna
    serviceType: webservice
spec:
  template:
    metadata:
      name: employee-service
      labels:
        app: employee-service
        author: krishna
        serviceType: webservice
    spec:
      containers:
      - name: employee-service-container
        image: jboss/wildfly
  replicas: 5
  selector:
    matchLabels:
      app: employee-service
Run ‘kubectl replace -f {definitionFile}’ to update the replicaset.
$kubectl replace -f employeeServiceReplicationSet.yml
replicaset.apps/employee-service-replica-set replaced
Now, you can see there are 5 pods running.
$kubectl get pods -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
employee-service-replica-set-262wm   1/1     Running   0          10m   172.17.0.8    minikube   <none>           <none>
employee-service-replica-set-clfnf   1/1     Running   0          28s   172.17.0.9    minikube   <none>           <none>
employee-service-replica-set-nv6tx   1/1     Running   0          10m   172.17.0.6    minikube   <none>           <none>
employee-service-replica-set-xnqs2   1/1     Running   0          10m   172.17.0.7    minikube   <none>           <none>
employee-service-replica-set-zwj29   1/1     Running   0          28s   172.17.0.10   minikube   <none>           <none>
Approach 2: Use ‘kubectl scale --replicas=10 -f employeeServiceReplicationSet.yml’ to scale the replicas to 10.
$kubectl scale --replicas=10 -f employeeServiceReplicationSet.yml
replicaset.apps/employee-service-replica-set scaled
$kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
employee-service-replica-set-262wm   1/1     Running   0          15m
employee-service-replica-set-58fnx   1/1     Running   0          30s
employee-service-replica-set-6ph2l   1/1     Running   0          30s
employee-service-replica-set-clfnf   1/1     Running   0          5m27s
employee-service-replica-set-hpjtv   1/1     Running   0          30s
employee-service-replica-set-nv6tx   1/1     Running   0          15m
employee-service-replica-set-tqsm4   1/1     Running   0          30s
employee-service-replica-set-xnqs2   1/1     Running   0          15m
employee-service-replica-set-z7s7t   1/1     Running   0          30s
employee-service-replica-set-zwj29   1/1     Running   0          5m27s
Approach 3: Scale by specifying the ReplicaSet name directly.
For example, the below command reduces the Pods from 10 to 3.
$kubectl scale --replicas=3 replicaset employee-service-replica-set
replicaset.apps/employee-service-replica-set scaled
$kubectl get pods
NAME                                 READY   STATUS        RESTARTS   AGE
employee-service-replica-set-262wm   1/1     Running       0          17m
employee-service-replica-set-58fnx   0/1     Terminating   0          2m2s
employee-service-replica-set-6ph2l   0/1     Terminating   0          2m2s
employee-service-replica-set-clfnf   0/1     Terminating   0          6m59s
employee-service-replica-set-hpjtv   0/1     Terminating   0          2m2s
employee-service-replica-set-nv6tx   1/1     Running       0          17m
employee-service-replica-set-tqsm4   0/1     Terminating   0          2m2s
employee-service-replica-set-xnqs2   1/1     Running       0          17m
employee-service-replica-set-zwj29   0/1     Terminating   0          6m59s
As you see in the output, some Pods are in the Terminating status. Wait for some time and query the Pods again; you will see that only 3 Pods remain in the Running status.
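Instead of polling with repeated `kubectl get pods` calls, you can watch the Pods terminate live with the `--watch` flag:

```shell
# Stream Pod status changes to the terminal until interrupted (Ctrl+C).
kubectl get pods --watch
```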
$kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
employee-service-replica-set-262wm   1/1     Running   0          17m
employee-service-replica-set-nv6tx   1/1     Running   0          17m
employee-service-replica-set-xnqs2   1/1     Running   0          17m
How to delete a ReplicaSet?
Execute the below command to remove the ReplicaSet.
$kubectl delete replicaset employee-service-replica-set
replicaset.apps "employee-service-replica-set" deleted
Query Pods to confirm no Pods exist.
$kubectl get pods
No resources found in default namespace.
Can I move Pods from one Replication Controller (or ReplicaSet) to another?
Pods created by a ReplicaSet or Replication Controller are not tightly coupled to it. By changing a Pod's labels, you can remove the Pod from the monitoring eye of the Replication Controller (or) ReplicaSet. Similarly, by changing the labels back, you can bring a Pod under the monitoring eye of a Replication Controller (or) ReplicaSet.
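A sketch of this, assuming the ReplicaSet from earlier is running (the Pod name is illustrative):

```shell
# Overwrite the 'app' label so the Pod no longer matches the ReplicaSet's
# selector. The ReplicaSet drops it and spins up a replacement Pod to keep
# the desired count.
kubectl label pod employee-service-replica-set-262wm app=orphaned --overwrite

# Relabel it back and the ReplicaSet takes it under management again (and
# deletes one excess Pod to return to the desired count).
kubectl label pod employee-service-replica-set-262wm app=employee-service --overwrite
```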