
}

connection {
  host        = "${google_compute_address.static-ip-address.address}"
  type        = "ssh"
  user        = "essh"
  timeout     = "2m"
  private_key = "${file("~/node-cluster/node-cluster")}"
  # agent = "false"
}

provisioner "file" {
  source      = "client.js"
  destination = "~/client.js"
}

provisioner "remote-exec" {
  inline = [
    "cd ~ && echo 1 > test.txt"
  ]
}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
google_compute_address.static-ip-address: Creating...
google_compute_address.static-ip-address: Creation complete after 5s [id=node-cluster-243923/europe-north1/static-ip-address]
google_compute_instance.cluster: Creating...
google_compute_instance.cluster: Still creating... [10s elapsed]
google_compute_instance.cluster: Creation complete after 12s [id=cluster]
null_resource.cluster: Creating...
null_resource.cluster: Provisioning with 'file'...
null_resource.cluster: Provisioning with 'remote-exec'...

null_resource.cluster (remote-exec): Connecting to remote host via SSH...
null_resource.cluster (remote-exec): Host: 35.228.82.222
null_resource.cluster (remote-exec): User: essh
null_resource.cluster (remote-exec): Password: false
null_resource.cluster (remote-exec): Private key: true
null_resource.cluster (remote-exec): Certificate: false
null_resource.cluster (remote-exec): SSH Agent: false
null_resource.cluster (remote-exec): Checking Host Key: false
null_resource.cluster (remote-exec): Connected!

null_resource.cluster: Creation complete after 7s [id=816586071607403364]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

esschtolts@cluster:~$ ls /home/essh/
client.js  test.txt

essh@kubernetes-master:~/node-cluster$ sudo ./terraform destroy
[sudo] password for essh:

google_compute_address.static-ip-address: Refreshing state... [id=node-cluster-243923/europe-north1/static-ip-address]
google_compute_instance.cluster: Refreshing state... [id=cluster]
null_resource.cluster: Refreshing state... [id=816586071607403364]

Enter a value: yes

null_resource.cluster: Destroying... [id=816586071607403364]
null_resource.cluster: Destruction complete after 0s
google_compute_instance.cluster: Destroying... [id=cluster]
google_compute_instance.cluster: Still destroying... [id=cluster, 10s elapsed]
google_compute_instance.cluster: Still destroying... [id=cluster, 20s elapsed]
google_compute_instance.cluster: Destruction complete after 27s
google_compute_address.static-ip-address: Destroying... [id=node-cluster-243923/europe-north1/static-ip-address]
google_compute_address.static-ip-address: Destruction complete after 8s

To deploy the entire project, we can add it to a repository and then deliver it to the virtual machine by copying the installation script there and running it.
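For example, the same null_resource could clone the repository on the machine and launch the installation script through a remote-exec provisioner. This is only a sketch, assuming git is available on the instance; the repository URL and the install.sh script are placeholders, not part of this project:

resource "null_resource" "deploy" {
  connection {
    host        = "${google_compute_address.static-ip-address.address}"
    type        = "ssh"
    user        = "essh"
    private_key = "${file("~/node-cluster/node-cluster")}"
  }

  # Clone the project repository (placeholder URL) and run its
  # installation script (hypothetical install.sh) on the machine.
  provisioner "remote-exec" {
    inline = [
      "git clone https://example.com/node-cluster.git ~/project",
      "cd ~/project && chmod +x install.sh && ./install.sh"
    ]
  }
}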

Moving on to Kubernetes

In the minimal version, creating a cluster of three nodes looks like this:

essh@kubernetes-master:~/node-cluster/Kubernetes$ cat main.tf
provider "google" {
  credentials = "${file("../kubernetes_key.json")}"
  project     = "node-cluster-243923"
  region      = "europe-north1"
}

resource "google_container_cluster" "node-ks" {
  name               = "node-ks"
  location           = "europe-north1-a"
  initial_node_count = 3
}

essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform init
essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform apply



The cluster was created in 2:15. After I added two more zones, europe-north1-b and europe-north1-c, to europe-north1-a and set the number of instances created per zone to one, the cluster was created in 3:13, because for higher availability the nodes were spread across different data centers: europe-north1-a, europe-north1-b, and europe-north1-c:

provider "google" {

credentials = "$ {file (" ../ kubernetes_key.json ")}"

project = "node-cluster-243923"

region = "europe-north1"

}

resource "google_container_cluster" "node-ks" {

name = "node-ks"

location = "europe-north1-a"

node_locations = ["europe-north1-b", "europe-north1-c"]

initial_node_count = 1

}

Now let's split our cluster into two parts: the control cluster with Kubernetes and the cluster for our PODs. Both will be distributed across three data centers. The cluster for our PODs can autoscale under load up to two nodes per zone (from three to six nodes in total):

essh@kubernetes-master:~/node-cluster/Kubernetes$ cat main.tf
provider "google" {
  credentials = "${file("../kubernetes_key.json")}"
  project     = "node-cluster-243923"
  region      = "europe-north1"
}

resource "google_container_cluster" "node-ks" {
  name               = "node-ks"
  location           = "europe-north1-a"
  node_locations     = ["europe-north1-b", "europe-north1-c"]
  initial_node_count = 1
}

resource "google_container_node_pool" "node-ks-pool" {
  name       = "node-ks-pool"
  cluster    = "${google_container_cluster.node-ks.name}"
  location   = "europe-north1-a"
  node_count = "1"

  node_config {
    machine_type = "n1-standard-1"
  }

  autoscaling {
    min_node_count = 1
    max_node_count = 2
  }
}

Let's see what happened and look for the IP address of the cluster entry point:

essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters list
NAME     LOCATION         MASTER_VERSION  MASTER_IP     MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
node-ks  europe-north1-a  1.12.8-gke.6    35.228.20.35  n1-standard-1  1.12.8-gke.6  6          RECONCILING
essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters describe node-ks | grep '^endpoint'
endpoint: 35.228.20.35
essh@kubernetes-master:~/node-cluster/Kubernetes$ ping 35.228.20.35 -c 2
PING 35.228.20.35 (35.228.20.35) 56(84) bytes of data.
64 bytes from 35.228.20.35: icmp_seq=1 ttl=59 time=8.33 ms
64 bytes from 35.228.20.35: icmp_seq=2 ttl=59 time=7.09 ms

--- 35.228.20.35 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 7.094/7.714/8.334/0.620 ms

By adding variables, which I put into a separate file just for clarity, we can parameterize our config for different uses, for example, to create test and production clusters. A variable is referenced as var.name_value and interpolated into strings similarly to JS: ${var.name_value}; built-ins such as path.root can be used the same way. The variables file is listed below, and a sketch of a parameterized main.tf follows it.

essh@kubernetes-master:~/node-cluster/Kubernetes$ cat variables.tf
variable "region" {
  default = "europe-north1"
}

variable "project_name" {
  type    = string
  default = ""
}

variable "gce_key" {
  default = "./kubernetes_key.json"
}

variable "node_count_zone" {
  default = 1
}
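For illustration, the provider and cluster definitions shown earlier could consume these variables as follows. This is only a sketch: the resource arguments are taken from the earlier main.tf, and only the hard-coded literals are replaced with var.* references:

provider "google" {
  credentials = "${file(var.gce_key)}"
  project     = "${var.project_name}"
  region      = "${var.region}"
}

resource "google_container_cluster" "node-ks" {
  name               = "node-ks"
  location           = "europe-north1-a"
  node_locations     = ["europe-north1-b", "europe-north1-c"]
  initial_node_count = "${var.node_count_zone}"
}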

They can be passed with the -var switch, for example: sudo ./terraform apply -var="project_name=node-cluster-243923".
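The values can also be kept in *.tfvars files and selected with the -var-file switch, which is convenient for the test and production clusters mentioned above. The file name and numbers below are only an example, not part of the project:

# production.tfvars (example values)
project_name    = "node-cluster-243923"
node_count_zone = 2

essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform apply -var-file=production.tfvars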

essh@kubernetes-master:~/node-cluster/Kubernetes$ cp ../kubernetes_key.json .