
module.nodejs.kubernetes_service.nodejs

essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_deployment.nodejs

Removed module.nodejs.kubernetes_deployment.nodejs

Successfully removed 1 resource instance(s).

essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_service.nodejs

Removed module.nodejs.kubernetes_service.nodejs

Successfully removed 1 resource instance(s).

essh@kubernetes-master:~/node-cluster$ ./terraform apply

module.Kubernetes.google_container_cluster.node-ks: Refreshing state… [id=node-ks]

module.Kubernetes.google_container_node_pool.node-ks-pool: Refreshing state… [id=europe-west2-a/node-ks/node-ks-pool]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Terraform Cluster Reliability and Automation

For a general overview of automation, see https://codelabs.developers.google.com/codelabs/cloud-builder-gke-continuous-deploy/index.html#0. Here we will dwell on it in more detail. Now, if we run ./terraform destroy and try to recreate the entire infrastructure from the beginning, we will get errors: Terraform does not know that the resources inside the cluster can only be created after the cluster itself and its endpoint are ready. To express these dependencies, we first take the deployment's label into a local value:

locals {
  app = kubernetes_deployment.nodejs.metadata.0.labels.app
}

Now we can add the dependencies depends_on = [var.endpoint] and depends_on = [kubernetes_deployment.nodejs] to the code.

A service unavailability error may also appear: Error: Get https://35.197.228.3/api/v1…: dial tcp 35.197.228.3:443: co
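To make the ordering concrete, here is a minimal sketch of how these pieces might fit together inside the nodejs module. The variable name endpoint and the depends_on placements follow the text above; the resource bodies are abbreviated assumptions, not the verbatim configuration:

variable "endpoint" {} # the cluster endpoint handed down from the Kubernetes module

resource "kubernetes_deployment" "nodejs" {
  # ... metadata and spec as before ...
  # do not create anything inside the cluster until the endpoint is known
  depends_on = [var.endpoint]
}

locals {
  # reading the label makes everything that uses local.app
  # implicitly depend on the deployment
  app = kubernetes_deployment.nodejs.metadata.0.labels.app
}

resource "kubernetes_service" "nodejs" {
  # ... metadata and spec (the selector uses local.app) as before ...
  depends_on = [kubernetes_deployment.nodejs]
}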

Now let's move on to the problem of the reliability of the container, whose main process we have so far been running from the command shell. The first thing to do is to separate building the application from launching the container: move the entire process of creating the service into the creation of an image, which can be tested and from which a service container can be created. So let's create an image:

essh@kubernetes-master:~/node-cluster$ cat app/server.js

const http = require('http');

const server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end(`Nodejs_cluster is working! My host is ${process.env.HOSTNAME}`);
});

server.listen(80);

essh@kubernetes-master:~/node-cluster$ cat Dockerfile

FROM node:12
WORKDIR /usr/src/
ADD ./app /usr/src/
RUN npm install
# note: EXPOSE is purely informational; the server above actually listens on port 80
EXPOSE 3000
ENTRYPOINT ["node", "server.js"]

essh@kubernetes-master:~/node-cluster$ sudo docker image build -t nodejs_cluster .

Sending build context to Docker daemon 257.4MB

Step 1/6 : FROM node:12
 ---> b074182f4154
Step 2/6 : WORKDIR /usr/src/
 ---> Using cache
 ---> 06666b54afba
Step 3/6 : ADD ./app /usr/src/
 ---> Using cache
 ---> 13fa01953b4a
Step 4/6 : RUN npm install
 ---> Using cache
 ---> dd074632659c
Step 5/6 : EXPOSE 3000
 ---> Using cache
 ---> ba3b7745b8e3
Step 6/6 : ENTRYPOINT ["node", "server.js"]
 ---> Using cache
 ---> a957fa7a1efa
Successfully built a957fa7a1efa
Successfully tagged nodejs_cluster:latest

essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster

nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB
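Since the whole service now lives in the image, it can be smoke-tested locally before being pushed anywhere. A minimal check might look like the sketch below; the container name nodejs_test and host port 8080 are arbitrary choices for the example:

# start the freshly built image, publishing container port 80 (where server.js listens) on host port 8080
sudo docker run -d --name nodejs_test -p 8080:80 nodejs_cluster:latest
# the response should read "Nodejs_cluster is working! My host is <container id>"
curl http://localhost:8080/
# remove the test container
sudo docker rm -f nodejs_test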

Now let's push our image to the GCP registry rather than Docker Hub, because we immediately get a private repository to which our services automatically have access:

essh@kubernetes-master:~/node-cluster$ IMAGE_ID="nodejs_cluster"

essh@kubernetes-master:~/node-cluster$ sudo docker tag $IMAGE_ID:latest gcr.io/$PROJECT_ID/$IMAGE_ID:latest

essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster

nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB

gcr.io/node-cluster-243923/nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB

essh@kubernetes-master:~/node-cluster$ gcloud auth configure-docker

gcloud credential helpers already registered correctly.

essh@kubernetes-master:~/node-cluster$ docker push gcr.io/$PROJECT_ID/$IMAGE_ID:latest

The push refers to repository [gcr.io/node-cluster-243923/nodejs_cluster]

194f3d074f36: Pushed

b91e71cc9778: Pushed

640fdb25c9d7: Layer already exists

b0b300677afe: Layer already exists

5667af297e60: Layer already exists

84d0c4b192e8: Layer already exists

a637c551a0da: Layer already exists

2c8d31157b81: Layer already exists

7b76d801397d: Layer already exists

f32868cde90b: Layer already exists

0db06dff9d9a: Layer already exists

latest: digest: sha256:912938003a93c53b7c8f806cded3f9bffae7b5553b9350c75791ff7acd1dad0b size: 2629

essh@kubernetes-master:~/node-cluster$ gcloud container images list

NAME

gcr.io/node-cluster-243923/nodejs_cluster

Only listing images in gcr.io/node-cluster-243923. Use --repository to list images in other repositories.

Now we can see it in the GCP admin panel: Container Registry -> Images. Let's replace our container's inline code with a reference to this image. For production, the launched image must be pinned to a version, to avoid it being updated automatically when the system re-creates PODs, for example, when a POD is transferred from one node to another because the machine hosting our node is taken down for maintenance. For development, it is better to use the latest tag, which updates the service whenever the image is updated.

When you update the image, the service needs to be re-created, that is, deleted and created again, since otherwise terraform will simply update the parameters rather than re-create the container from the new image. Also, if we update the image and mark the service as modified with the command ./terraform taint ${NAME_SERVICE}, our service will simply be updated, which can be seen with the command ./terraform plan. Therefore, for now, to update we have to use the commands ./terraform destroy -target=${NAME_SERVICE} and ./terraform apply, and the names of the services can be found in ./terraform state list (the full update cycle is sketched after the listing below):

essh@kubernetes-master:~/node-cluster$ ./terraform state list

data.google_client_config.default

module.kubernetes.google_container_cluster.node-ks

module.kubernetes.google_container_node_pool.node-ks-pool

module.Nginx.kubernetes_deployment.nodejs
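Putting it all together, the update cycle described above might look like this. The NAME_SERVICE value is an illustrative assumption; substitute the address that ./terraform state list actually prints for the resource you want re-created:

# Terraform address of the resource that runs the containers (example value, take yours from the listing)
NAME_SERVICE="module.nodejs.kubernetes_deployment.nodejs"

# rebuild the image and push it to the registry under the same tag
sudo docker image build -t nodejs_cluster .
sudo docker tag nodejs_cluster:latest gcr.io/$PROJECT_ID/nodejs_cluster:latest
docker push gcr.io/$PROJECT_ID/nodejs_cluster:latest

# delete and re-create the resource so its PODs start from the new image
./terraform destroy -target=${NAME_SERVICE}
./terraform apply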