
google_compute_instance.terraform: Still destroying … (ID: terraform, 1m20s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 1m30s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 1m40s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 1m50s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 2m0s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 2m10s elapsed)

google_compute_instance.terraform: Still destroying … (ID: terraform, 2m20s elapsed)

google_compute_instance.terraform: Destruction complete after 2m30s

Apply complete! Resources: 0 added, 0 changed, 1 destroyed.

Building infrastructure in AWS

To create a configuration for an AWS cluster, create a separate folder for it, and move the previous configuration into a parallel folder:

esschtolts@cloudshell:~/terraform (agile-aleph-203917)$ mkdir gcp
esschtolts@cloudshell:~/terraform (agile-aleph-203917)$ mv main.tf gcp/main.tf
esschtolts@cloudshell:~/terraform (agile-aleph-203917)$ mkdir aws
esschtolts@cloudshell:~/terraform (agile-aleph-203917)$ cd aws

A role is the analogue of a user, only not for people but for services, and in our case these are the EKS servers. To me, though, roles are less like users and more like groups: for example, a group for creating a cluster, a group for working with a database, and so on. Only one role can be assigned to a server, but a role can contain multiple permissions (policies). As a result, we do not need to work with logins and passwords, tokens, or certificates (storing them, transferring them, restricting access to them); we only specify the rights, either in the web console (IAM) or through the API (and, derived from it, in the configuration). Our cluster needs these rights so that it can configure and replicate itself, since it consists of standard AWS services. To manage the cluster components AWS EC2 (servers), AWS ELB (Elastic Load Balancer) and AWS KMS (Key Management Service, key management and encryption), it needs AmazonEKSClusterPolicy access; for monitoring it needs AmazonEKSServicePolicy, which covers CloudWatch Logs (monitoring by logs), Route 53 (creating records in the network zone) and IAM (rights management). I did not describe the role in the config, but created it through IAM following the documentation: https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html#create-service-role.
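The same role can also be created from the command line instead of the web console. Below is a minimal sketch, assuming the AWS CLI is configured and using the role name ServiceRoleForAmazonEKS2 that the config later references; the file name eks-trust.json is an arbitrary choice:

# trust policy that lets the EKS service assume the role
cat > eks-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name ServiceRoleForAmazonEKS2 \
  --assume-role-policy-document file://eks-trust.json
aws iam attach-role-policy --role-name ServiceRoleForAmazonEKS2 \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name ServiceRoleForAmazonEKS2 \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy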

For greater reliability, the nodes of a Kubernetes cluster should be located in different availability zones, that is, in different data centers. Each region contains several zones in order to maintain fault tolerance while keeping latency (server response time) low for the local population. It is important to note that some regions may be represented by several instances within the same country, for example, us-east-1 in US East (N. Virginia) and us-east-2 in US East (Ohio); such regions are distinguished by number. At the time of writing, the creation of an EKS cluster was available only in the us-east regions.
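Instead of hard-coding zone names, the zones available in the selected region can also be queried from the provider. A minimal sketch (the data source label "available" is an arbitrary name):

# returns the zone names of the region set in the provider block, e.g. us-east-1a, us-east-1b, ...
data "aws_availability_zones" "available" {}

# a subnet could then pick a zone by index instead of a literal name:
# availability_zone = "${data.aws_availability_zones.available.names[0]}"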

For the developer, in its simplest form, the VPC boils down to declaring a named subnet as a dedicated resource.

Let's write the configuration according to the documentation www.terraform.io/docs/providers/aws/r/eks_cluster.html:

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ cat main.tf
provider "aws" {
  access_key = "${var.token}"
  secret_key = "${var.key}"
  region     = "us-east-1"
}

# Params
variable "token" {
  default = ""
}

variable "key" {
  default = ""
}

# EKS
resource "aws_eks_cluster" "example" {
  enabled_cluster_log_types = ["api", "audit"]
  name     = "example"
  role_arn = "arn:aws:iam::177510963163:role/ServiceRoleForAmazonEKS2"

  vpc_config {
    subnet_ids = ["${aws_subnet.subnet_1.id}", "${aws_subnet.subnet_2.id}"]
  }
}

output "endpoint" {
  value = "${aws_eks_cluster.example.endpoint}"
}

output "kubeconfig-certificate-authority-data" {
  value = "${aws_eks_cluster.example.certificate_authority.0.data}"
}

# Role
data "aws_iam_policy_document" "eks-role-policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["eks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "tf_role" {
  name               = "tf_role"
  assume_role_policy = "${data.aws_iam_policy_document.eks-role-policy.json}"

  tags = {
    tag-key = "tag-value"
  }
}

resource "aws_iam_role_policy_attachment" "attach-cluster" {
  # reference the role resource so Terraform creates the role before attaching the policy
  role       = "${aws_iam_role.tf_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role_policy_attachment" "attach-service" {
  role       = "${aws_iam_role.tf_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
}

# Subnet
resource "aws_subnet" "subnet_1" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = {
    Name = "Main"
  }
}

resource "aws_subnet" "subnet_2" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"

  tags = {
    Name = "Main"
  }
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}
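One more point worth keeping in mind: for Kubernetes to place load balancers into these subnets, AWS has historically expected them to carry a cluster tag. A hedged sketch of how subnet_1 could be tagged, reusing the cluster name from the config above:

resource "aws_subnet" "subnet_1" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = {
    Name = "Main"
    # "shared" means the subnet may be used by this cluster as well as by other resources
    "kubernetes.io/cluster/example" = "shared"
  }
}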

After 9 minutes 44 seconds, I got a ready-made self-supporting infrastructure for a Kubernetes cluster:

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform apply -var="token=AKIAJ4SYCNH2XVSHNN3A" -var="key=huEWRslEluynCXBspsul3AkKlin1ViR9+Mo
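To actually talk to the new cluster with kubectl, the endpoint and certificate-authority outputs above can be turned into a kubeconfig. A minimal sketch, assuming a reasonably recent AWS CLI and an installed kubectl:

# writes (or merges) a context for the cluster into ~/.kube/config
aws eks update-kubeconfig --region us-east-1 --name example
kubectl get svc    # should list the default kubernetes service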

Now let's delete (it took me 10 minutes 23 seconds):

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform destroy -var="token=AKIAJ4SYCNH2XVSHNN3A" -var="key=huEWRslEluynCXBspsul3AkKlin1ViR9+Mo

Destroy complete! Resources: 7 destroyed.
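Both commands above pass the credentials as -var arguments, which leaves them in the shell history. A sketch of the same runs using Terraform's TF_VAR_ environment variables instead (the names match the variables declared in the config):

export TF_VAR_token="AKIA..."   # picked up by variable "token"
export TF_VAR_key="..."         # picked up by variable "key"
./../terraform apply
./../terraform destroy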

Establishing the CI/CD process

Amazon provides (aws.amazon.com/ru/devops/) a wide range of DevOps tools built for its cloud infrastructure:

* AWS CodePipeline – the service lets you assemble, in a visual editor, a chain of stages built from other services that the code must pass through before it reaches production, for example build and test.

* AWS CodeBuild – the service provides an auto-scaling build queue. This may be needed for compiled programming languages, when adding features or making changes requires a lengthy recompilation of the entire application and a single build server becomes a bottleneck when rolling out changes (see the sketch after this list).

* AWS CodeDeploy – automates deployment and rollback in case of errors.

* AWS CodeStar – the service combines the main features of the previous services.
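As an illustration of how such a build queue might be declared next to the rest of the infrastructure, here is a minimal, hypothetical aws_codebuild_project; the role, repository URL and build image are placeholders rather than anything configured in this chapter:

resource "aws_codebuild_project" "app_build" {
  name         = "app-build"                           # hypothetical project name
  service_role = "${aws_iam_role.codebuild_role.arn}"  # hypothetical role with CodeBuild permissions

  artifacts {
    type = "NO_ARTIFACTS"
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "aws/codebuild/standard:2.0"
    type         = "LINUX_CONTAINER"
  }

  source {
    type     = "GITHUB"
    location = "https://github.com/example/app.git"    # hypothetical repository
  }
}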

Setting up remote control

Artifact server

aws s3 ls s3://name_bucket
aws s3 sync s3://name_bucket name_folder --exclude "*.tmp"   # files from the bucket will be downloaded into the folder, for example, a website
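The bucket itself can also be kept under Terraform instead of being created by hand; a minimal sketch reusing the same placeholder name:

resource "aws_s3_bucket" "artifacts" {
  # placeholder name matching the commands above
  bucket = "name_bucket"
}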

Now we need to download the AWS provider plugin:

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform init | grep success

Terraform has been successfully initialized!

Now we need to get access to AWS. To do that, click on your user name in the header of the web interface; next to My Account, the My Security Credentials item will appear. Select it and go to Access Keys -> Create New Access Key. Let's create the EKS (Elastic Kubernetes Service):