
if docker ps | grep -q myapp
then
    docker start myapp
else
    if ! docker images | grep -q myimage
    then
        docker build -t myimage .
    fi
    docker run -d --name myapp -p 80:80 myimage bash
fi

… And to create it, you first need to delete the container, if it exists:

if docker ps | grep -q myapp
then
    docker rm -f myapp
fi
if ! docker images | grep -q myimage
then
    docker build -t myimage .
fi
docker run -d --name myapp -p 80:80 myimage bash

… It is clear that the general parameters, the name of the image and of the container, need to be moved out into variables, that you need to check that the Dockerfile is there and is valid, and only after that delete the container, and much more. To understand the real scale, without going into the interaction of containers, the cloning (scaling) of these groups and the like, I will just mention that the docker run command can exceed one to two dozen lines: for example, a dozen forwarded ports, mounted folders, memory and processor limits, co…
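A rough sketch of what such a parameterized wrapper might look like is given below; the variable names and the Dockerfile check are illustrative assumptions, not a listing from the book:

#!/bin/bash
# Illustrative sketch: the names and checks below are assumptions, not from the original text.
CONTAINER=myapp
IMAGE=myimage

# Refuse to continue if there is no build context.
if [ ! -f Dockerfile ]; then
    echo "Dockerfile not found" >&2
    exit 1
fi

# Remove the old container, if any, so that docker run can recreate it.
if docker ps -a | grep -q "$CONTAINER"; then
    docker rm -f "$CONTAINER"
fi

# Build the image only when it is missing.
if ! docker images | grep -q "$IMAGE"; then
    docker build -t "$IMAGE" .
fi

docker run -d --name "$CONTAINER" -p 80:80 "$IMAGE" bash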

# docker-compose
version: "3"

services:
  myapp:
    container_name: myapp
    image: myimage
    build: .
    ports:
      - 80:80

… To start it, run docker-compose up -d, and to rebuild it from scratch, docker-compose down; docker-compose up -d. Moreover, when the configuration changes and a complete rebuild is not needed, it will simply be updated.
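For example, a typical session might look like the following; it assumes the config above has been saved as docker-compose.yml in the current directory:

# Start (or update) the services described in docker-compose.yml
docker-compose up -d

# Full rebuild: tear everything down and bring it back up
docker-compose down
docker-compose up -d

# After editing docker-compose.yml, re-running up -d recreates
# only the services whose configuration has actually changed
docker-compose up -d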

Now that we have simplified the management of a single container, let's work with a group. Here, for us, only the config itself will change:

# docker-compose
version: "3"

services:
  mysql:
    image: mysql
  nginx:
    image: nginx
    ports:
      - 80:80
  myapp:
    container_name: myapp
    build: .
    depends_on:
      - mysql
    image: myimage
    links:
      - mysql:db
      - nginx:nginx

… Here we see the whole picture at once; the containers are co…
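With such a config, the whole group is managed by the same commands as a single container; a quick sketch (the service names are the ones from the config above):

# Bring up the whole group in the background
docker-compose up -d

# List the containers of this project and their state
docker-compose ps

# Follow the logs of one service from the group
docker-compose logs -f myapp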

Service Discovery

With the growth of a cluster, the probability that nodes will fail increases, and manually detecting what has happened becomes more complicated; Service Discovery systems are designed to automate the detection of newly appeared services and of their disappearance. But for the cluster to be able to detect its state, given that the system is decentralized, the nodes must be able to exchange messages with each other and elect a leader; examples are Consul, ETCD and ZooKeeper. We will consider Consul based on the following features: the whole program is a single file, it is extremely easy to use and configure, it has a high-level interface (ZooKeeper does not have one; it is believed that over time third-party applications implementing it should appear), and it is written in a language that is not demanding on machine resources (Consul is written in Go, ZooKeeper in Java); we will disregard its weaker support in other systems, such as, for example, ClickHouse (which supports ZooKeeper by default).
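As a small illustration of this leader election, once an agent is running, any node can be asked who the current leader is and which server peers it knows about via Consul's HTTP API; the address 127.0.0.1:8500 below is just the default local agent address, assumed here for illustration:

# Ask the local agent which node is currently the Raft leader
curl -s http://127.0.0.1:8500/v1/status/leader

# List the server peers participating in leader election
curl -s http://127.0.0.1:8500/v1/status/peers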

Let's check the distribution of information between the nodes using the distributed key-value storage, that is, if we add records on one node, they should spread to the other nodes, and there should be no hard-coded master node. Since Consul consists of a single executable file, download it from the official website at https://www.consul.io/downloads.html on each node:

wget https://releases.hashicorp.com/consul/1.3.0/consul_1.3.0_linux_amd64.zip -O consul.zip

unzip consul.zip

rm -f consul.zip
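A typical next step, not shown in the original listing, is to make the binary executable and put it on the PATH; the /usr/local/bin location below is just a common convention:

chmod +x consul
sudo mv consul /usr/local/bin/
consul version   # sanity check that the binary runs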

Now we need to start one node, for now, as the master with consul -server -ui, and the others as slaves, also with consul -server -ui. After that, we will stop the Consul running in master mode and launch it again as an equal; as a result, the Consul nodes will re-elect a temporary leader, and in case of its failure they will hold the election again. Let's check the work of our cluster with consul members:

consul members

Now let's check the distribution of information in our storage:

curl -X PUT -d 'value1' .....:8500/v1/kv/group1/key1
curl -s .....:8500/v1/kv/group1/key1
curl -s .....:8500/v1/kv/group1/key1
curl -s .....:8500/v1/kv/group1/key1
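The KV API also lets you list and delete keys; a short sketch, with CONSUL_ADDR standing in for one of the node addresses elided above:

CONSUL_ADDR=.....   # substitute the address of any node
# List all keys under group1 recursively
curl -s "$CONSUL_ADDR:8500/v1/kv/group1/?recurse"
# Delete the key
curl -X DELETE "$CONSUL_ADDR:8500/v1/kv/group1/key1"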

Let's set up service monitoring; for more details, see the documentation https://www.consul.io/docs/agent/options.html#telemetry, and on that topic .... https://medium.com/southbridge/monitoring-consul-with-statsd-exporter-and-prometheus-bad8bee3961b
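As a rough sketch of what enabling telemetry might look like (the statsd address below is an assumption for illustration; see the linked documentation for the exact options):

# Write a config fragment that sends Consul metrics to a local statsd (assumed address)
cat > telemetry.json <<'EOF'
{
  "telemetry": {
    "statsd_address": "127.0.0.1:8125"
  }
}
EOF

# Point the agent at the directory with this fragment
consul agent -dev -config-dir=.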

To avoid configuring anything, we will use the container in development mode, with the IP address already configured as 172.17.0.2:

essh@kubernetes-master:~$ mkdir consul && cd $_
essh@kubernetes-master:~/consul$ docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
Unable to find image 'consul:latest' locally
latest: Pulling from library/consul
e7c96db7181b: Pull complete
3404d2df15cb: Pull complete
1b2797650ac6: Pull complete
42eaf145982e: Pull complete
cef844389e8c: Pull complete
bc7449359c58: Pull complete
Digest: sha256:94cdbd83f24ec406da2b5d300a112c14cf1091bed8d6abd49609e6fe3c23f181
Status: Downloaded newer image for consul:latest
c6079f82500a41f878d2c513cf37d45ecadd3fc40998cd35020c604eb5f934a1
essh@kubernetes-master:~/consul$ docker inspect dev-consul | jq '.[] | .NetworkSettings.Networks.bridge.IPAddress'
"172.17.0.4"
essh@kubernetes-master:~/consul$ docker run -d --name=consul_follower_1 -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.4
8ec88680bc632bef93eb9607612ed7f7f539de9f305c22a7d5a23b9ddf8c4b3e
essh@kubernetes-master:~/consul$ docker run -d --name=consul_follower_2 -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.4
babd31d7c5640845003a221d725ce0a1ff83f9827f839781372b1fcc629009cb
essh@kubernetes-master:~/consul$ docker exec -t dev-consul consul members
Node          Address          Status  Type    Build  Protocol  DC   Segment
53cd8748f031  172.17.0.5:8301  left    server  1.6.1  2         dc1  <all>
8ec88680bc63  172.17.0.5:8301  alive   server  1.6.1  2         dc1  <all>
babd31d7c564  172.17.0.6:8301  alive   server  1.6.1  2         dc1  <all>
essh@kubernetes-master:~/consul$ curl -X PUT -d 'value1' 172.17.0.4:8500/v1/kv/group1/key1
true
essh@kubernetes-master:~/consul$ curl $(docker inspect dev-consul | jq -r '.[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/v1/kv/group1/key1
[
    {
        "LockIndex": 0,
        "Key": "group1/key1",
        "Flags": 0,
        "Value": "dmFsdWUx",
        "CreateIndex": 277,
        "ModifyIndex": 277
    }
]
essh@kubernetes-master:~/consul$ firefox $(docker inspect dev-consul | jq -r '.[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/ui

Along with determining the location of containers, it is necessary to provide authorization; key-value stores are used for this.

dockerd -H fd:// --cluster-store=consul://192.168.1.6:8500 --cluster-advertise=eth0:2376

* --cluster-store – the key-value store from which data about keys can be obtained

* --cluster-advertise – the address under which this daemon is saved (advertised) in that store
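The same pair of options can also be set in the daemon configuration file instead of on the command line; a minimal sketch, assuming the Consul address from above and the default /etc/docker/daemon.json location:

# Put the options into the daemon config and restart the daemon
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "cluster-store": "consul://192.168.1.6:8500",
  "cluster-advertise": "eth0:2376"
}
EOF
sudo systemctl restart docker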