Wednesday 22 July 2020

Kubernetes: Limit the resources used by a Pod

By default, a Pod can consume as much of the available resources (like CPU and memory) on its node as it needs to perform its work. We can manage this resource utilization by setting the ‘pod.spec.containers.resources’ property, which has two parts: ‘requests’, the amount the scheduler reserves for the container, and ‘limits’, the hard cap enforced at runtime.

 

How to express CPU limits?

CPU limits are expressed in millicores, also written millicpu (1 millicore = 1/1000 of a CPU core). In a Pod definition, the suffix ‘m’ denotes millicores.

 

Example

250 millicores is 0.25 CPU core

500 millicores is 0.5 CPU core

1000 millicores is 1 CPU core
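
In a Pod spec, the same CPU quantity can be written either with the ‘m’ (millicore) suffix or as a plain decimal number of cores. A minimal fragment for illustration (not a complete definition; a full Pod file appears later in this post):

resources:
  limits:
    cpu: "500m"   # 500 millicores
    # equivalent: cpu: 0.5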

 

While scheduling, ‘kube-scheduler’ uses the requested values to ensure that a Pod is placed only on a node that has sufficient resources available.
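
If you want to check what a node can offer, and how much of it is already reserved, run ‘kubectl describe’ on the node and look for the ‘Allocatable’ and ‘Allocated resources’ sections in its output (the node name here is the minikube node used later in this post):

$kubectl describe node minikube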

 

How to express memory limits?

Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value:

 

128974848, 129e6, 129M, 123Mi
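
Decoding the suffixes makes the equivalence explicit (a quick sanity check):

129e6 = 129 × 10^6         = 129,000,000 bytes
129M  = 129 × 1000 × 1000  = 129,000,000 bytes
123Mi = 123 × 1024 × 1024  = 128,974,848 bytes

So ‘123Mi’ is exactly the plain integer 128974848, while ‘129M’ and ‘129e6’ are roughly 0.02% larger.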

 

Let's create a Pod with two containers.

 

resourceLimitation.yml

apiVersion: v1
kind: Pod
metadata:
  name: resource-limitation-demo
  labels:
    app: hello-world
    author: krishna
    serviceType: user-service
spec:
  containers:
    - name: user-service-terminal
      image: busybox
      command:
        - sleep
        - "30"            # the container exits once the 30-second sleep completes
      resources:
        requests:         # the scheduler reserves this much for the container
          memory: "64Mi"
          cpu: "250m"
        limits:           # hard cap; exceeding the memory limit gets the container OOM-killed
          memory: "128Mi"
          cpu: "500m"
    - name: user-service
      image: jboss/wildfly
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"

Let’s create a Pod using the above definition file.

$kubectl create -f resourceLimitation.yml 
pod/resource-limitation-demo created

Let’s query the Pods.

$kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
resource-limitation-demo   2/2     Running   2          57s

You can check the configured resource limits using the ‘kubectl describe’ command.

$kubectl describe pods resource-limitation-demo
Name:         resource-limitation-demo
Namespace:    default
Priority:     0
Node:         minikube/192.168.99.100
Start Time:   Sun, 07 Jun 2020 12:16:21 +0530
Labels:       app=hello-world
              author=krishna
              serviceType=user-service
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
  IP:  172.17.0.6
Containers:
  user-service-terminal:
    Container ID:  docker://8e4a064300931f3d5e618a1cb37da68019e0a4fe5d319f0041564a271b6510bb
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      30
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 07 Jun 2020 12:17:04 +0530
      Finished:     Sun, 07 Jun 2020 12:17:34 +0530
    Ready:          False
    Restart Count:  1
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:        250m
      memory:     64Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-w7rp7 (ro)
  user-service:
    Container ID:   docker://ccb785bbf8b3f0bef7700455a147966d6f47c5180245adaf16853e6b949afe7b
    Image:          jboss/wildfly
    Image ID:       docker-pullable://jboss/wildfly@sha256:67a4f90b213bc2600d08d90e82df58be83813d118d163fbcc8765b6eeaade7e6
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    0
      Started:      Sun, 07 Jun 2020 12:17:09 +0530
      Finished:     Sun, 07 Jun 2020 12:17:32 +0530
    Ready:          False
    Restart Count:  1
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:        250m
      memory:     64Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-w7rp7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-w7rp7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-w7rp7
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  <unknown>          default-scheduler  Successfully assigned default/resource-limitation-demo to minikube
  Normal   Pulled     48s (x2 over 83s)  kubelet, minikube  Successfully pulled image "busybox"
  Normal   Created    48s (x2 over 83s)  kubelet, minikube  Created container user-service-terminal
  Normal   Started    47s (x2 over 83s)  kubelet, minikube  Started container user-service-terminal
  Normal   Pulling    47s (x2 over 83s)  kubelet, minikube  Pulling image "jboss/wildfly"
  Normal   Created    42s (x2 over 79s)  kubelet, minikube  Created container user-service
  Normal   Pulled     42s (x2 over 79s)  kubelet, minikube  Successfully pulled image "jboss/wildfly"
  Normal   Started    42s (x2 over 79s)  kubelet, minikube  Started container user-service
  Warning  BackOff    17s (x2 over 18s)  kubelet, minikube  Back-off restarting failed container
  Warning  BackOff    17s                kubelet, minikube  Back-off restarting failed container
  Normal   Pulling    6s (x3 over 90s)   kubelet, minikube  Pulling image "busybox"

You can see the configured resource limits under the ‘Limits:’ section of this output. Notice also why the containers keep restarting: ‘user-service-terminal’ terminates normally once its 30-second sleep completes (Reason: Completed), while ‘user-service’ is killed for exceeding its 128Mi memory limit (Reason: OOMKilled), since WildFly needs more memory than that. The kubelet restarts both, which is why the Pod shows ‘CrashLoopBackOff’ and a non-zero restart count.
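
To compare the configured limits against the containers’ actual consumption, you can use ‘kubectl top’, which reports live usage (this assumes a metrics source, such as minikube’s ‘metrics-server’ addon, is enabled):

$kubectl top pod resource-limitation-demo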

 

You can delete the Pod by executing the below command.

 


$kubectl delete pod resource-limitation-demo
pod "resource-limitation-demo" deleted

If you want to know how these resources are managed under the hood, read about Linux control groups (cgroups), which the kubelet and container runtime use to enforce CPU and memory limits.
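
For example, on a node using cgroup v1 (typical of Docker-based minikube setups like this one), the memory limit of a container surfaces as a file under ‘/sys/fs/cgroup’. The path below is only illustrative, since the exact layout depends on the runtime and cgroup driver:

$cat /sys/fs/cgroup/memory/kubepods/burstable/pod<pod-uid>/<container-id>/memory.limit_in_bytes
134217728

134217728 bytes is exactly 128Mi (128 × 1024 × 1024), the limit configured above.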

