

nginx-65899c769f-szwtd 1/1 Running
nginx-65899c769f-zs6g5 1/1 Running

As we can see, as soon as the POD became unavailable (its deletion began), a replacement started being created, and the cluster soon restored its structure. Now that we have finished our experiments, let's delete the virtual machines along with the cluster:

esschtolts@cloudshell:~ (essch)$ gcloud container clusters delete mycluster --zone europe-north1-a;

The following clusters will be deleted.

- [mycluster] in [europe-north1-a]

Do you want to continue (Y/n)? Y

Deleting cluster mycluster … done.

Deleted [https://container.googleapis.com/v1/projects/essch/zones/europe-north1-a/clusters/mycluster].

esschtolts@cloudshell:~ (essch)$ gcloud container clusters list --filter=name=mycluster

To summarize: we created a cluster and a load balancer with just two commands, run and expose, and now we can go to the balancer's IP address and see the NGINX welcome page in the browser. The cluster also heals itself: we emulated a pod failure by deleting it, and it was created again.
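For reference, a minimal sketch of what those two commands could look like with the kubectl of that period (flag values are illustrative; kubectl run created a Deployment in versions before 1.18):

kubectl run nginx --image=nginx                                  # creates the nginx Deployment
kubectl expose deployment nginx --type=LoadBalancer --port=80    # creates a Service with an external IP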

Cluster Reproducibility

Let's take another look at the situation from the previous chapter, in which we created a cluster, deleted a replica, and it recovered. The point is that we do not manage the cluster directly with commands; rather, the commands create descriptions of the required cluster configuration and place them in distributed storage, after which the state of the nodes is maintained in accordance with those descriptions. We can also retrieve and edit these descriptions, or write them ourselves and then upload them to the distributed storage. This allows us to save the state to disk in the form of YAML files and restore it back, as is often done when moving from a production server to a test one. In addition, we gain the ability to customize the state more flexibly, since we are no longer limited to commands.

esschtolts@cloudshell:~ (essch)$ kubectl get deployment/nginx --output=yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-12-16T10:23:26Z
  generation: 1
  labels:
    run: nginx
  name: nginx
  namespace: default
  resourceVersion: "1612985"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
  uid: 9fb3ad6a-011c-11e9-bfaa-42010aa60088
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:26Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:28Z
    message: ReplicaSet "nginx-64f497f8fd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Much of this is superfluous for us, so I will delete the unnecessary fields and keep only the essential ones:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
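Such a shortened description can be saved to a file and applied back to the cluster; a minimal sketch, assuming the file name nginx-deployment.yaml (the name is arbitrary):

kubectl get deployment/nginx --output=yaml > nginx-deployment.yaml   # save the current description to disk
kubectl apply -f nginx-deployment.yaml                               # create or update the Deployment from the edited file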

You can also create a template for virtual machine instances and a managed group of them directly with gcloud:

gcloud services enable compute.googleapis.com --project=${PROJECT}

gcloud beta compute instance-templates create-with-container ${TEMPLATE} \
  --machine-type=custom-1-4096 \
  --image-family=cos-stable \
  --image-project=cos-cloud \
  --container-image=gcr.io/kuar-demo/kuard-amd64:1 \
  --container-restart-policy=always \
  --preemptible \
  --region=${REGION} \
  --project=${PROJECT}

gcloud compute instance-groups managed create ${TEMPLATE} \
  --base-instance-name=${TEMPLATE} \
  --template=${TEMPLATE} \
  --size=${CLONES} \
  --region=${REGION} \
  --project=${PROJECT}
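These commands rely on shell variables that are assumed to be set beforehand; a sketch with illustrative values (the project and region are taken from the examples above, the rest are arbitrary):

export PROJECT=essch          # GCP project ID
export REGION=europe-north1   # region for the instance group
export TEMPLATE=kuard-demo    # name of the template and the group
export CLONES=3               # number of instances in the group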

High service availability

To ensure high availability, traffic needs to be redirected to a standby instance if the application crashes. It is also often important that the load is distributed evenly, since a single instance of the application cannot handle all the traffic. To do this, a cluster is created; as an example, let's take a more complex image so that we can examine a larger number of nuances:

esschtolts@cloudshell:~/bitrix (essch)$ cat deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxlamp
spec:
  selector:
    matchLabels:
      app: lamp
  replicas: 1
  template:
    metadata:
      labels:
        app: lamp
    spec:
      containers:
      - name: lamp
        image: mattrayner/lamp:latest-1604-php5
        ports:
        - containerPort: 80

esschtolts@cloudshell:~/bitrix (essch)$ cat loadbalancer.yaml

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - name: front
    port: 80
    targetPort: 80
  selector:
    app: lamp
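Both manifests are applied in the usual way; a sketch, assuming the file names shown above:

kubectl apply -f deployment.yaml     # creates the nginxlamp Deployment
kubectl apply -f loadbalancer.yaml   # creates the frontend LoadBalancer Service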

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods

NAME                         READY   STATUS    RESTARTS   AGE
nginxlamp-7fb6fdd47b-jttl8   2/2     Running

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get svc

NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                       AGE
frontend     LoadBalancer   10.55.242.137   35.228.73.217   80:32701/TCP,8080:32568/TCP   4m
kubernetes   ClusterIP      10.55.240.1     <none>          443/TCP                       48m
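The external IP from the listing can be checked straight from the shell; a sketch using the address shown above (it will differ in your cluster):

curl -I http://35.228.73.217   # expect an HTTP response from the lamp container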

Now we can create identical copies of our clusters, for example, for Production and Development, but balancing will not work as expected: the balancer finds PODs by label, and PODs in both the production and development clusters match that label. Placing the clusters in different projects will not be an obstacle either; although for many tasks this is a big plus, it is not in the case of clusters for developers and for production. Namespaces are used to delimit the scope. We already use them implicitly: when we list PODs without specifying a scope, the default namespace is shown, and PODs from the system namespaces are not included:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace

NAME          STATUS   AGE
default       Active   5h
kube-public   Active   5h
kube-system   Active

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=kube-system

NAME                                      READY   STATUS    RESTARTS   AGE
event-exporter-v0.2.3-85644fcdf-tdt7h     2/2     Running
fluentd-gcp-scaler-697b966945-bkqrm       1/1     Running
fluentd-gcp-v3.1.0-xgtw9                  2/2     Running
heapster-v1.6.0-beta.1-5649d6ddc6-p549d   3/3     Running
kube-dns-548976df6c-8lvp6                 4/4     Running
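To actually separate the environments, dedicated namespaces can be created and the manifests applied into them; a minimal sketch with illustrative namespace names:

kubectl create namespace develop
kubectl create namespace production
kubectl apply -f deployment.yaml -f loadbalancer.yaml --namespace=develop   # a separate copy for developers
kubectl get pods --namespace=develop                                        # list PODs only from that scope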