
pod/liveness created

controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness 1/1 Running

controlplane $ kubectl describe pod liveness | tail -n 15

SecretName: default-token-9v5mb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m44s default-scheduler Successfully assigned default/liveness to node01
Normal Pulled 68s (x3 over 3m35s) kubelet, node01 Container image "alpine:3.5" already present on machine
Normal Created 68s (x3 over 3m35s) kubelet, node01 Created container healtcheck
Normal Started 68s (x3 over 3m34s) kubelet, node01 Started container healtcheck
Warning Unhealthy 23s (x9 over 3m3s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Killing 23s (x3 over 2m53s) kubelet, node01 Container healtcheck failed liveness probe, will be restarted

We can also see in the cluster events that whenever cat /tmp/healthy fails, the container is recreated:

controlplane $ kubectl get events
controlplane $ kubectl get events | grep pod/liveness
13m Normal Scheduled pod/liveness Successfully assigned default/liveness to node01
13m Normal Pulling pod/liveness Pulling image "alpine:3.5"
13m Normal Pulled pod/liveness Successfully pulled image "alpine:3.5"
10m Normal Created pod/liveness Created container healtcheck
10m Normal Started pod/liveness Started container healtcheck
10m Warning Unhealthy pod/liveness Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
10m Normal Killing pod/liveness Container healtcheck failed liveness probe, will be restarted
10m Normal Pulled pod/liveness Container image "alpine:3.5" already present on machine
8m32s Normal Scheduled pod/liveness Successfully assigned default/liveness to node01
4m41s Normal Pulled pod/liveness Container image "alpine:3.5" already present on machine
4m41s Normal Created pod/liveness Created container healtcheck
4m41s Normal Started pod/liveness Started container healtcheck
2m51s Warning Unhealthy pod/liveness Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
5m11s Normal Killing pod/liveness Container healtcheck failed liveness probe, will be restarted
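If we only want to track the restart counter itself, it can be read straight from the pod status via jsonpath (a small sketch; the field path is standard in the Kubernetes Pod API, and the pod name is the one from above):

controlplane $ kubectl get pod liveness -o jsonpath='{.status.containerStatuses[0].restartCount}'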

Let's take a look at the readiness probe. Success of this probe indicates that the application is ready to accept requests, so the service can route traffic to it:

controlplane $ cat << EOF > readiness.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: readiness
spec:
  replicas: 2
  selector:
    matchLabels:
      app: readiness
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
      - name: readiness
        image: python
        args:
        - /bin/sh
        - -c
        - sleep 15 && (hostname > health) && python -m http.server 9000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 1
          periodSeconds: 5
EOF

controlplane $ kubectl create -f readiness.yaml
deployment.apps/readiness created

controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness-fd8d996dd-cfsdb 0/1 ContainerCreating 0 7s
readiness-fd8d996dd-sj8pl 0/1 ContainerCreating 0 7s

controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness-fd8d996dd-cfsdb 0/1 Running
readiness-fd8d996dd-sj8pl 0/1 Running

controlplane $ kubectl exec -it readiness-fd8d996dd-cfsdb -- curl localhost:9000/health
readiness-fd8d996dd-cfsdb

Our containers answer requests, although note that the pods never become Ready: the probe looks for /tmp/healthy, while the startup command creates the file health in the working directory. Let's route traffic to them:

controlplane $ kubectl expose deploy readiness \
--type=LoadBalancer \
--name=readiness \
--port=9000 \
--target-port=9000
service/readiness exposed

controlplane $ kubectl get svc readiness
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
readiness LoadBalancer 10.98.36.51 <pending> 9000:32355/TCP 98s

controlplane $ curl localhost:9000
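The $IP variable in the next command is not defined anywhere on this page; since EXTERNAL-IP stays <pending> outside a cloud provider, one way to set it (an assumption, not a step shown in the original) is to take the service's ClusterIP:

controlplane $ IP=$(kubectl get svc readiness -o jsonpath='{.spec.clusterIP}')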

controlplane $ for i in {1..5}; do curl $IP:9000/health; done
1
2
3
4
5

Each container starts with a delay. Let's check what happens if one of the containers is restarted, and whether traffic gets redirected to it:

controlplane $ kubectl get pods

NAME READY STATUS RESTARTS AGE

readiness-5dd64c6c79-9vq62 0/1 CrashLoopBackOff 6 15m

readiness-5dd64c6c79-sblvl 0/1 CrashLoopBackOff 6 15m

kubectl exec -it .... -c .... bash -c "rm -f health"
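A concrete variant of this placeholder command, assuming the file to remove is the health file that hostname > health created in the container's working directory (the pod name here is illustrative):

controlplane $ kubectl exec -it readiness-5dd64c6c79-9vq62 -- /bin/sh -c "rm -f health"
controlplane $ kubectl get endpoints readiness   # a pod whose readiness probe now fails drops out of the service endpoints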

controlplane $ for i in {1..5}; do echo $i; done
1
2
3
4
5

controlplane $ kubectl delete deploy readiness
deployment.apps "readiness" deleted

Consider a situation when a container becomes temporarily unavailable:

(hostname > health) && (python -m http.server 9000 &) && sleep 60 && rm health && sleep 60 && (hostname > health) && sleep 6000

/bin/sh -c 'sleep 60; python -m http.server 9000 & PID=$!; sleep 60; kill -9 $PID'
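As a sketch, the first of these one-liners could be plugged into the readiness Deployment from above; assuming everything else in the manifest stays the same, only the args block changes:

        args:
        - /bin/sh
        - -c
        # serve for 60 s, make the probe fail for 60 s, then recover (the first one-liner above)
        - (hostname > health) && (python -m http.server 9000 &) && sleep 60 && rm health && sleep 60 && (hostname > health) && sleep 6000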

By default, the container enters the Running state as soon as its process starts, regardless of whether the application in it is actually ready:

esschtolts@cloudshell:~/bitrix (essch)$ cat health_check.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: healtcheck
  name: healtcheck
spec:
  containers:
  - name: healtcheck
    image: alpine:3.5
    args:
    - /bin/sh
    - -c
    - sleep 12; touch /tmp/healthy; sleep 10; rm -rf /tmp/healthy; sleep 60
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 15
      periodSeconds: 5

The container starts after about 3 seconds, and 5 seconds later the readiness check begins, repeating every 5 seconds. On the second check (at the 15th second of the pod's life) the readiness command cat /tmp/healthy succeeds, since the file has been created by then. At that moment the livenessProbe check also begins, and on its second run (at the 25th second) it fails, because /tmp/healthy has already been removed; after that the container is considered unhealthy and is recreated.

esschtolts@cloudshell:~/bitrix (essch)$ kubectl create -f health_check.yaml && sleep 4 && kubectl get pods && sleep 10 && kubectl get pods && sleep 10 && kubectl get pods
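The same picture can also be watched live, without chaining sleep and kubectl get, using the standard --watch flag:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --watch   # streams READY/RESTARTS changes as the probes fire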