
Now we need to connect to the cluster:

$ gcloud container clusters get-credentials b --zone europe-north1-a --project essch

$ kubectl port-forward nginxlamp-74c8b5b7f-d2rsg 8080:8080

Forwarding from 127.0.0.1:8080 -> 8080

Forwarding from [::1]:8080 -> 8080

$ google-chrome http://localhost:8080 # this won't work in Google Cloud Shell
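Since a browser is not available in Google Cloud Shell, the forwarded port can be checked with curl from a second terminal tab instead (a sketch, assuming the port-forward above is still running):

$ curl http://localhost:8080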

$ kubectl expose deployment nginxlamp --type="LoadBalancer" --port=8080

To use kubectl locally, you need to install gcloud and then use it to install kubectl with the gcloud components install kubectl command, but let's not complicate the first steps for now.
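For reference, the installation itself comes down to a couple of commands (a sketch, assuming gcloud is already installed and authorized):

$ gcloud components install kubectl
$ kubectl version --client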

In the Services section of the admin panel, the POD is accessible not only through the front-end balancer service, but also through the Deployment's internal balancer. Although this setup survives re-creation, a config is more maintainable and more transparent.
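As a sketch of such a config, the same Deployment could be described declaratively and applied right from the shell (the nginx image here is a stand-in for whatever image nginxlamp was built from):

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxlamp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxlamp
  template:
    metadata:
      labels:
        app: nginxlamp
    spec:
      containers:
      - name: app
        image: nginx  # stand-in image for illustration
        ports:
        - containerPort: 8080
EOF

Such a file can be kept under version control and re-applied after any change.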

It is also possible to adjust the number of nodes automatically depending on the load, for example, on the number of containers with declared resource requirements, using the keys --enable-autoscaling --min-nodes=1 --max-nodes=2.
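Applied to the default node pool of an existing cluster, it might look like this (a minimal sketch, using the mycluster name and zone from the example below):

$ gcloud container clusters update mycluster \
--enable-autoscaling --min-nodes=1 --max-nodes=2 \
--zone europe-north1-a --node-pool default-pool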

Simple cluster in GCP

There are two ways to create a cluster: through the Google Cloud Platform graphical interface or through its API with the gcloud command. Let's see how this can be done through the UI. Next to the menu, click on the drop-down list and create a separate project. In the Kubernetes Engine section, choose to create a cluster. Let's give it a name, 2 CPUs, the europe-north1 zone (the data center in Finland is the closest one to St. Petersburg) and the latest version of Kubernetes. After creating the cluster, click on Connect. The same can also be done from the command line:

gcloud container clusters create mycluster --zone europe-north1-a

After a while (it took me two and a half minutes), 3 virtual machines are brought up, the operating system is installed on them, and a disk is mounted. Let's check:

esschtolts@cloudshell:~ (essch)$ gcloud container clusters list --filter=name=mycluster

NAME LOCATION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS

mycluster europe-north1-a 35.228.37.100 n1-standard-1 1.10.9-gke.5 3 RUNNING

esschtolts@cloudshell:~ (essch)$ gcloud compute instances list

NAME MACHINE_TYPE EXTERNAL_IP STATUS

gke-mycluster-default-pool-43710ef9-0168 n1-standard-1 35.228.73.217 RUNNING

gke-mycluster-default-pool-43710ef9-39ck n1-standard-1 35.228.75.47 RUNNING

gke-mycluster-default-pool-43710ef9-g76k n1-standard-1 35.228.117.209 RUNNING

Let's connect to the cluster:

esschtolts@cloudshell:~ (essch)$ gcloud projects list

PROJECT_ID NAME PROJECT_NUMBER

agile-aleph-203917 My First Project 546748042692

essch app 283762935665

esschtolts@cloudshell:~ (essch)$ gcloud container clusters get-credentials mycluster \
--zone europe-north1-a \
--project essch

Fetching cluster endpoint and auth data.

kubeconfig entry generated for mycluster.
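To make sure kubectl is now pointed at the new cluster, we can check the current context (a sketch; GKE context names follow the gke_PROJECT_ZONE_CLUSTER pattern):

$ kubectl config current-context
gke_essch_europe-north1-a_mycluster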

We don't have anything running in the cluster yet:

esschtolts@cloudshell:~ (essch)$ kubectl get pods

No resources found.

Let's create a cluster of three nginx replicas:

esschtolts@cloudshell:~ (essch)$ kubectl run nginx --image=nginx --replicas=3

deployment.apps "nginx" created

Let's check its composition:

esschtolts@cloudshell:~ (essch)$ kubectl get deployments --selector=run=nginx

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE

nginx 3 3 3 3 14s

esschtolts@cloudshell:~ (essch)$ kubectl get pods --selector=run=nginx

NAME READY STATUS RESTARTS AGE

nginx-65899c769f-9whdx 1/1 Running 0 14s

nginx-65899c769f-szwtd 1/1 Running 0 14s

nginx-65899c769f-zs6g5 1/1 Running 0 14s

Let's check how the three replicas are distributed across the three nodes:

esschtolts@cloudshell:~ (essch)$ kubectl describe pod nginx-65899c769f-9whdx | grep Node:

Node: gke-mycluster-default-pool-43710ef9-g76k/10.166.0.5

esschtolts@cloudshell:~ (essch)$ kubectl describe pod nginx-65899c769f-szwtd | grep Node:

Node: gke-mycluster-default-pool-43710ef9-39ck/10.166.0.4

esschtolts@cloudshell:~ (essch)$ kubectl describe pod nginx-65899c769f-zs6g5 | grep Node:

Node: gke-mycluster-default-pool-43710ef9-g76k/10.166.0.5
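By the way, the same picture can be obtained with a single command instead of three: with the -o wide flag, kubectl get pods also prints the node of each POD (a sketch):

$ kubectl get pods -o wide --selector=run=nginx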

As we can see, two of the replicas ended up on the same node: by default the scheduler does not guarantee a perfectly even spread. Now let's install the load balancer:

esschtolts@cloudshell:~ (essch)$ kubectl expose deployment nginx --type="LoadBalancer" --port=80

service "nginx" exposed

Let's check that it was created:

esschtolts@cloudshell:~ (essch)$ kubectl get svc --selector=run=nginx

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

nginx LoadBalancer 10.27.245.187 <pending> 80:31621/TCP 11s

esschtolts@cloudshell:~ (essch)$ sleep 60;

esschtolts@cloudshell:~ (essch)$ kubectl get svc --selector=run=nginx

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

nginx LoadBalancer 10.27.245.187 35.228.212.163 80:31621/TCP 1m
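Instead of copying the external IP by hand, it can be extracted with the same JSONPath mechanism that we will use for the POD names below (a sketch):

$ SVC_IP=$(kubectl get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $SVC_IP
35.228.212.163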

Let's check that it works:

esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 2> /dev/null | grep h1

<h1>Welcome to nginx!</h1>

In order not to copy the full names every time, let's save them in variables (more about the JSONPath format in the Go documentation: https://golang.org/pkg/text/template/#pkg-overview):

esschtolts@cloudshell:~ (essch)$ pod1=$(kubectl get pods -o jsonpath={.items[0].metadata.name});

esschtolts@cloudshell:~ (essch)$ pod2=$(kubectl get pods -o jsonpath={.items[1].metadata.name});

esschtolts@cloudshell:~ (essch)$ pod3=$(kubectl get pods -o jsonpath={.items[2].metadata.name});

esschtolts@cloudshell:~ (essch)$ echo $pod1 $pod2 $pod3

nginx-65899c769f-9whdx nginx-65899c769f-szwtd nginx-65899c769f-zs6g5
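As a sketch, all three names can also be captured at once into a Bash array instead of three separate variables:

$ pods=($(kubectl get pods -o jsonpath='{.items[*].metadata.name}'))
$ echo ${pods[0]} ${pods[1]} ${pods[2]}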

Let's change the page in each POD by copying a unique page into each replica, and verify the balancing by looking at how requests are distributed across the PODs:

esschtolts@cloudshell:~ (essch)$ echo 1 > test.html;

esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod1}:/usr/share/nginx/html/index.html

esschtolts@cloudshell:~ (essch)$ echo 2 > test.html;

esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod2}:/usr/share/nginx/html/index.html

esschtolts@cloudshell:~ (essch)$ echo 3 > test.html;

esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod3}:/usr/share/nginx/html/index.html

esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 && curl 35.228.212.163:80 && curl 35.228.212.163:80

3

2

1

esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 && curl 35.228.212.163:80 && curl 35.228.212.163:80

3

1

1
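The same check can be written as a loop, so the number of requests is easy to change (a sketch; the IP is the one issued to our balancer above):

$ for i in $(seq 1 6); do curl -s 35.228.212.163; done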

Let's check the failover of the cluster by deleting one POD:

esschtolts@cloudshell:~ (essch)$ kubectl delete pod ${pod1} && kubectl get pods && sleep 10 && kubectl get pods

pod "nginx-65899c769f-9whdx" deleted

NAME READY STATUS RESTARTS AGE

nginx-65899c769f-42rd5 0/1 ContainerCreating 0 1s

nginx-65899c769f-9whdx 0/1 Terminating 0 54m

nginx-65899c769f-szwtd 1/1 Running 0 54m

nginx-65899c769f-zs6g5 1/1 Running 0 54m

NAME READY STATUS RESTARTS AGE

nginx-65899c769f-42rd5 1/1 Running 0 11s

nginx-65899c769f-szwtd 1/1 Running 0 54m

nginx-65899c769f-zs6g5 1/1 Running 0 54m
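To watch the replacement POD being created in real time, the listing can be kept open with the --watch flag instead of re-running it after a sleep (a sketch):

$ kubectl get pods --watch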