zone = "us-central1-a"
}
$ cd gce
$ terraform init
$ terraform apply
$ cd ..
For distributed work, let's store the state of the infrastructure in AWS S3 (other data can be kept there as well), but, for safety, in a different region:
terraform {
  backend "s3" {
    bucket = "tfstate"
    key    = "terraform.tfstate"
    region = "us-east-2"
  }
}
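For team work, the S3 state is usually combined with locking so that two simultaneous terraform apply runs cannot overwrite each other; a minimal sketch, assuming a DynamoDB table named tfstate-lock has been created beforehand (the table name is an assumption):
terraform {
  backend "s3" {
    bucket         = "tfstate"
    key            = "terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "tfstate-lock" # hypothetical table used only for the lock records
  }
}
After changing the backend, run terraform init again so that the existing local state is migrated to S3.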
provider "kubernetes" {
host = "https://104.196.242.174"
username = "ClusterMaster"
password = "MindTheGap"
}
resource "kubernetes_pod" "my_pod" {
spec {
container {
image = "Nginx: 1.7.9"
name = "Nginx"
port {
container_port = 80
}
}
}
}
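To reach the pod from outside the cluster, it is usually paired with a service; a minimal sketch, assuming the app = "nginx" label from the pod metadata above (the service name and type are assumptions, not part of the original example):
resource "kubernetes_service" "my_service" {
  metadata {
    name = "nginx" # hypothetical service name
  }
  spec {
    selector = {
      app = "nginx" # must match the pod labels
    }
    port {
      port        = 80
      target_port = 80
    }
    type = "NodePort" # expose the service on each node's IP
  }
}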
Commands:
terraform init # downloads dependencies according to the configs and checks them
terraform validate # syntax check
terraform plan # shows in detail how the infrastructure will be changed and why, for example,
# whether only the service's meta information changes or the service itself is re-created, which is often unacceptable for databases
terraform apply # applies the changes
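Whether a resource is updated in place or destroyed and re-created is indicated by the symbols in the plan legend; roughly (the exact wording varies between Terraform versions):
+ create
~ update in-place
-/+ destroy and then create replacement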
The common part for all providers is the core.
$ which aws
$ aws configure # https://www.youtube.com/watch?v=IxA1IPypzHs
$ cat aws.tf
# https://www.terraform.io/docs/providers/aws/r/instance.html
resource "aws_instance" "ec2instance" {
ami = "$ {var.ami}"
instance_type = "t2.micro"
}
resource "aws_security_group" "instance_gc" {
…
}
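To see the address of the created instance, an output can be declared next to the resource; a minimal sketch (the output name is arbitrary):
output "ec2instance_public_ip" {
  # public IP of the aws_instance defined above
  value = "${aws_instance.ec2instance.public_ip}"
}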
$ cat run.sh
export AWS_ACCESS_KEY_ID="anaccesskey"
export AWS_SECRET_ACCESS_KEY="asecretkey"
export AWS_DEFAULT_REGION="us-west-2"
terraform plan
terraform apply
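The var.ami referenced in aws.tf must also be declared, for example in a separate variables file; a minimal sketch (the file name and description are assumptions, the actual value is passed in at plan/apply time):
$ cat variables.tf
variable "ami" {
  description = "AMI id for the EC2 instance" # hypothetical description
}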
$ cat gce.tf # https://www.terraform.io/docs/providers/google/index.html#
# Google Cloud Platform Provider
provider "google" {
  credentials = "${file("account.json")}"
  project     = "phalcon"
  region      = "us-central1"
}
# https://www.terraform.io/docs/providers/google/r/app_engine_application.html
resource "google_project" "my_project" {
name = "My Project"
project_id = "your-project-id"
org_id = "1234567"
}
resource "google_app_engine_application" "app" {
project = "$ {google_project.my_project.project_id}"
location_id = "us-central"
}
# google_compute_instance
resource "google_compute_instance" "default" {
  name         = "test"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"
  tags         = ["foo", "bar"]
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  // Local SSD disk
  scratch_disk {
  }
  network_interface {
    network = "default"
    access_config {
      // Ephemeral IP
    }
  }
  metadata = {
    foo = "bar"
  }
  metadata_startup_script = "echo hi > /test.txt"
  service_account {
    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
  }
}
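To find out which ephemeral IP the instance received, the access_config can be referenced from an output; a minimal sketch (the output name is arbitrary):
output "default_external_ip" {
  # ephemeral external IP attached through access_config above
  value = "${google_compute_instance.default.network_interface.0.access_config.0.nat_ip}"
}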
Extensibility is provided by the external data source, whose program can be, for example, a bash script:
data "external" "python3" {
  program = ["python3"]
}
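The external data source expects the program to read a JSON object from stdin and print a JSON object of string values to stdout; a minimal sketch with a bash script (the file name query.sh and the result key are assumptions):
$ cat query.sh
#!/usr/bin/env bash
# consume the query JSON from stdin and answer with a flat JSON map of strings
cat > /dev/null
echo '{"answer": "42"}'

data "external" "example" {
  program = ["bash", "${path.module}/query.sh"] # hypothetical script
}
# the value is then available as "${data.external.example.result["answer"]}"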
Building a cluster of machines with Terraform
Clustering with Terraform is covered in the section "Building Infrastructure in GCP". Now let's pay more attention to the cluster itself rather than to the tools for creating it. I created a project named node-cluster through the GCE admin panel (it is displayed in the interface header). I downloaded the key for Kubernetes via IAM and administration -> Service accounts -> Create a service account, selecting the Owner role during creation, and put it into the project folder as kubernetes_key.json:
essh@kubernetes-master:~/node-cluster$ cp ~/Downloads/node-cluster-243923-bbec410e0a83.json ./kubernetes_key.json
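The same key can also be created from the command line; a rough sketch with the gcloud CLI (the service account e-mail is an assumption):
$ gcloud iam service-accounts keys create kubernetes_key.json \
  --iam-account node-cluster@node-cluster-243923.iam.gserviceaccount.com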
Downloaded terraform:
essh@kubernetes-master:~/node-cluster$ wget https://releases.hashicorp.com/terraform/0.12.2/terraform_0.12.2_linux_amd64.zip > /dev/null 2> /dev/null
essh@kubernetes-master:~/node-cluster$ unzip terraform_0.12.2_linux_amd64.zip && rm -f terraform_0.12.2_linux_amd64.zip
Archive: terraform_0.12.2_linux_amd64.zip
  inflating: terraform
essh@kubernetes-master:~/node-cluster$ ./terraform version
Terraform v0.12.2
Added the GCE provider and started downloading the "drivers" to it:
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
  credentials = "${file("kubernetes_key.json")}"
  project     = "node-cluster"
  region      = "us-central1"
}
essh@kubernetes-master:~/node-cluster$ ./terraform init
Initializing the backend …
Initializing provider plugins …
- Checking for available provider plugins …
- Downloading plugin for provider "google" (terraform-providers/google) 2.8.0 …
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "…" constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.google: version = "~> 2.8"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
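Following that recommendation, the provider block can be pinned to the installed major version; a minimal sketch:
provider "google" {
  version     = "~> 2.8"
  credentials = "${file("kubernetes_key.json")}"
  project     = "node-cluster"
  region      = "us-central1"
}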
Add a virtual machine:
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
  credentials = "${file("kubernetes_key.json")}"
  project     = "node-cluster-243923"
  region      = "europe-north1"
}
resource "google_compute_instance" "cluster" {
  name         = "cluster"
  zone         = "europe-north1-a"
  machine_type = "f1-micro"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    network = "default"
    access_config {}
  }
}
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# google_compute_instance.cluster will be created
+ resource "google_compute_instance" "cluster" {
+ can_ip_forward = false
+ cpu_platform = (known after apply)
+ deletion_protection = false
+ guest_accelerator = (known after apply)
+ id = (known after apply)
+ instance_id = (known after apply)
+ label_fingerprint = (known after apply)
+ machine_type = "f1-micro"
+ metadata_fingerprint = (known after apply)