Using 'readinessProbe' and 'livenessProbe', we can monitor the health of a container in a Pod.
'kubelet' uses readiness probes to know when a container is ready to start accepting traffic.
What about livenessProbe?
'livenessProbe' is used to check pod liveness: if the liveness probe fails, the kubelet restarts the container.
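Besides the 'httpGet' handler used throughout this post, Kubernetes probes also support 'exec' (run a command inside the container) and 'tcpSocket' (open a TCP connection) handlers. A minimal sketch, where the command and file path are purely illustrative:

```yaml
# exec handler: the container is considered healthy if the command exits with status 0
livenessProbe:
  exec:
    command:
      - cat
      - /tmp/healthy        # illustrative path, not part of this tutorial
  initialDelaySeconds: 10
# tcpSocket handler: the container is considered ready if a TCP connection to the port succeeds
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 10
```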
Example
livenessProbe:
  initialDelaySeconds: 10
  timeoutSeconds: 1
  httpGet:
    path: /
    port: 80
readinessProbe:
  initialDelaySeconds: 10
  httpGet:
    path: /
    port: 80
user-service-with-probe.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-with-probe
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: jboss/wildfly
          ports:
            - containerPort: 80
          readinessProbe:
            initialDelaySeconds: 10
            httpGet:
              path: /
              port: 80
          livenessProbe:
            initialDelaySeconds: 10
            timeoutSeconds: 1
            httpGet:
              path: /
              port: 80
Step 1: Create a deployment from the 'user-service-with-probe.yml' file.
$kubectl create -f user-service-with-probe.yml
deployment.apps/user-service-with-probe created
Step 2: Get all pods.
$kubectl get pods
NAME READY STATUS RESTARTS AGE
user-service-with-probe-6b5756456f-t4gz7 0/1 Running 0 38s
This is a happy scenario. Let’s misconfigure the readiness probe and check the health.
readinessProbe:
  initialDelaySeconds: 10
  httpGet:
    path: /
    port: 90
As you can see, the readinessProbe port is set to 90, but the application is running on port 80. Kubernetes will check port 90 to confirm whether the application is ready to accept traffic.
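The reason a readiness failure matters: a pod that is not Ready is removed from the endpoints of any Service that selects it, so it receives no traffic (but is not restarted). A hedged sketch of such a Service, which is not part of the original manifests and whose name is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld-svc       # illustrative name, not part of this tutorial
spec:
  selector:
    app: helloworld          # matches the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 80
```

Only pods whose readiness probes pass appear in this Service's endpoints; a NotReady pod simply stops receiving traffic.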
user-service-with-misconfigured-ready-probe.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-with-misconfigured-ready-probe
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: jboss/wildfly
          ports:
            - containerPort: 80
          readinessProbe:
            initialDelaySeconds: 10
            httpGet:
              path: /
              port: 90
          livenessProbe:
            initialDelaySeconds: 10
            timeoutSeconds: 1
            httpGet:
              path: /
              port: 80
$kubectl create -f user-service-with-misconfigured-ready-probe.yml
deployment.apps/user-service-with-misconfigured-ready-probe created
Let’s see the status of pods.
$kubectl get pods
NAME READY STATUS RESTARTS AGE
user-service-with-misconfigured-ready-probe-5498fcdbb4-b9zc7 0/1 Running 0 44s
user-service-with-probe-6b5756456f-t4gz7 1/1 Running 0 27m
As you can see from the output, the pod 'user-service-with-misconfigured-ready-probe-5498fcdbb4-b9zc7' was started 44 seconds ago, but it is still not ready (0/1).
Let's get more information using the 'describe' command.
$kubectl describe pod/user-service-with-misconfigured-ready-probe-5498fcdbb4-b9zc7
Once you execute the above command, the last line of the output looks like the one below.
Warning Unhealthy 4s (x9 over 84s) kubelet, minikube Readiness probe failed: Get http://172.17.0.5:90/: dial tcp 172.17.0.5:90: connect: connection refused
From the output, you can confirm that Kubernetes checked the readiness of the application 9 times over 84 seconds, and every attempt failed with a 'connection refused' error.
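The nine attempts over 84 seconds correspond to the default probe period of 10 seconds. Readiness probes accept the same tuning fields as liveness probes; a sketch with the defaults spelled out explicitly:

```yaml
readinessProbe:
  httpGet:
    path: /
    port: 90
  initialDelaySeconds: 10
  periodSeconds: 10      # default: probe every 10 seconds
  failureThreshold: 3    # consecutive failures before the pod is marked NotReady
  successThreshold: 1    # consecutive successes before it is marked Ready again
```

Note that a failing readiness probe never restarts the container, which is why the RESTARTS column stays at 0 above.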
Misconfigure livenessProbe
livenessProbe:
  initialDelaySeconds: 10
  timeoutSeconds: 1
  failureThreshold: 3
  periodSeconds: 6
  httpGet:
    path: /
    port: 90
As you can see in the above snippet, the application is actually running on port 80, but we are checking liveness on port 90.
failureThreshold: 3
Kubernetes tries 3 times before giving up and restarting the container.
periodSeconds: 6
The probe is performed every 6 seconds. With these settings, the third consecutive failure occurs roughly 10 + 2 × 6 = 22 seconds after the container starts, after which the kubelet restarts it.
user-service-with-misconfigured-live-probe.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-with-misconfigured-live-probe
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: jboss/wildfly
          ports:
            - containerPort: 80
          readinessProbe:
            initialDelaySeconds: 10
            httpGet:
              path: /
              port: 80
          livenessProbe:
            initialDelaySeconds: 10
            timeoutSeconds: 1
            failureThreshold: 3
            periodSeconds: 6
            httpGet:
              path: /
              port: 90
Let’s create a deployment from ‘user-service-with-misconfigured-live-probe.yml’ file.
$kubectl create -f user-service-with-misconfigured-live-probe.yml
deployment.apps/user-service-with-misconfigured-live-probe created
Let’s see the deployments.
$kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
user-service-with-misconfigured-live-probe 1/1 1 1 33s
user-service-with-misconfigured-ready-probe 0/1 1 0 11m
user-service-with-probe 1/1 1 1 37m
As you can see, 'user-service-with-misconfigured-live-probe' appears to be up and running (1/1) for the moment.
Let’s see the pod status.
$kubectl get pods
NAME READY STATUS RESTARTS AGE
user-service-with-misconfigured-live-probe-7b7466b497-bwrq7 0/1 Running 3 99s
user-service-with-misconfigured-ready-probe-5498fcdbb4-b9zc7 0/1 Running 0 12m
user-service-with-probe-6b5756456f-t4gz7 1/1 Running 0 38m
As you can see from the output, the pod 'user-service-with-misconfigured-live-probe-7b7466b497-bwrq7' has already been restarted 3 times.
Wait for some time and query the pods again.
$kubectl get pods
NAME READY STATUS RESTARTS AGE
user-service-with-misconfigured-live-probe-7b7466b497-bwrq7 0/1 CrashLoopBackOff 4 2m48s
user-service-with-misconfigured-ready-probe-5498fcdbb4-b9zc7 0/1 Running 0 13m
user-service-with-probe-6b5756456f-t4gz7 1/1 Running 0 39m
You will observe that the pod status is now 'CrashLoopBackOff': Kubernetes backs off between restart attempts because the container keeps failing its liveness probe.
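In real deployments this pattern often appears when a liveness probe fires before a slow-starting application (such as WildFly) has finished booting. Newer Kubernetes versions support a 'startupProbe' that holds off the other probes until it succeeds; a hedged sketch with illustrative thresholds:

```yaml
startupProbe:
  httpGet:
    path: /
    port: 80
  failureThreshold: 30   # illustrative: allow up to 30 × 10 = 300 seconds to boot
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  timeoutSeconds: 1
```

Until the startup probe succeeds, liveness and readiness checks are disabled, so a slow boot is not mistaken for a dead container.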