
-v $(pwd)/prometheus0_eu1.yml:/etc/prometheus/prometheus.yml \
--name prometheus-0-sidecar-eu1 \
-u root \
quay.io/thanos/thanos:v0.7.0 \
sidecar \
--http-address 0.0.0.0:19090 \
--grpc-address 0.0.0.0:19190 \
--reloader.config-file /etc/prometheus/prometheus.yml \
--prometheus.url http://127.0.0.1:9090
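To make sure the sidecar is up, its HTTP endpoint can be polled, and a Thanos Query component can be pointed at its gRPC address. This is a minimal sketch under the flags above; the query container name and its port 19192 are illustrative:

curl -s http://127.0.0.1:19090/metrics | head
docker run -d --net=host --rm \
--name prometheus-0-query \
quay.io/thanos/thanos:v0.7.0 \
query \
--http-address 0.0.0.0:19192 \
--store 127.0.0.1:19190

The Thanos Query web interface then accepts PromQL at http://127.0.0.1:19192 and fans each query out to every registered store, in this case the sidecar.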

Notifications are an important part of monitoring. A notification consists of a firing trigger and a provider. A trigger is, as a rule, a condition written in PromQL and evaluated by Prometheus. When the trigger fires (the metric meets the condition), Prometheus signals the provider to send a notification. The standard provider is Alertmanager, which is capable of sending messages to various receivers, such as email and Slack.

For example, the metric "up", which takes the value 0 or 1, can be used to send a message if the server has been down for more than 1 minute. For this, a rule is written:

groups:
  - name: example
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 1m

When the metric is equal to 0 for more than 1 minute, the trigger fires and Prometheus sends the alert to Alertmanager, which determines what to do with the event. We can specify that, when the InstanceDown event is received, a message should be sent by email. To do this, Alertmanager is configured as follows:

global:
  smtp_smarthost: 'localhost:25'
  smtp_from: '[email protected]'
route:
  receiver: example-email
receivers:
  - name: example-email
    email_configs:
      - to: '[email protected]'

Alertmanager itself will use the mail protocol available on this computer, so a mail server must be installed for it to work. Take Simple Mail Transfer Protocol (SMTP) as an example: to test delivery, let's install a console mail server, sendmail, alongside Alertmanager.
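Before provoking a real outage, delivery can also be checked by pushing a test alert into Alertmanager by hand. A minimal sketch, assuming Alertmanager listens on its default port 9093 with the configuration above; the instance label is illustrative:

curl -XPOST http://localhost:9093/api/v1/alerts -d '[
  {"labels": {"alertname": "InstanceDown", "instance": "test"}}
]'

If sendmail is working, a letter should arrive at [email protected].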

Fast and clear analysis of system logs

The OpenSource full-text search engine Lucene is used for quick search in logs. On its basis, two low-level products were built: Solr and Elasticsearch, which are quite similar in capabilities but differ in usability and license. Many popular bundles are built on them, for example, delivery sets with Elasticsearch: ELK (Elasticsearch (Apache Lucene), Logstash, Kibana), EFK (Elasticsearch, Fluentd, Kibana), and products such as GrayLog2. Both GrayLog2 and the bundles (ELK/EFK) are actively used because they require little configuration even outside test benches; for example, EFK can be deployed to a Kubernetes cluster with practically a single command:

helm install efk-stack stable/elastic-stack --set logstash.enabled=false --set fluentd.enabled=true --set fluentd-elastics

An alternative that has not yet received wide adoption is the family of systems built on the previously considered Prometheus, for example PLG (Promtail as the agent, Loki as the Prometheus-like log storage, Grafana as the renderer).
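A sketch of deploying PLG with Helm, assuming the loki-stack chart, which bundles Promtail, Loki and optionally Grafana; the repository URL and value names are assumptions and may differ between chart versions:

helm repo add loki https://grafana.github.io/loki/charts
helm install plg loki/loki-stack --set grafana.enabled=true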

Comparison of Elasticsearch and Solr (the systems are comparable; a query sketch follows the list):

Elastic:

** Commercial with open source and the ability to contribute (subject to approval);

** Supports more complex queries, more analytics, out-of-the-box support for distributed queries, a more complete RESTful JSON API, chaining, machine learning, SQL (paid);

*** Full-text search;

*** Real-time index;

*** Monitoring (paid);

*** Monitoring via Elastic HQ;

*** Machine learning (paid);

*** Simple indexing;

*** More data types and structures;

** Lucene engine;

** Parent-child (JOIN);

** Natively scalable;

** Documentation since 2010;

Solr:

** OpenSource;

** High speed with JOIN;

*** Full-text search;

*** Real-time index;

*** Monitoring in the admin panel;

*** Machine learning through modules;

*** Input data: Word, PDF and others;

*** Requires a schema for indexing;

*** Data: nested objects;

** Lucene engine;

** JSON join;

** Scalable: SolrCloud (configuration) && ZooKeeper (configuration);

** Documentation since 2004.
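The difference in query interfaces is visible on the same full-text search. A sketch, assuming both engines run locally on their default ports (9200 and 8983) and hold an index/core named logs:

# Elasticsearch: Query DSL in a JSON body
curl -s 'http://localhost:9200/logs/_search' -H 'Content-Type: application/json' -d '{"query": {"match": {"message": "error"}}}'
# Solr: the standard query parser in URL parameters
curl -s 'http://localhost:8983/solr/logs/select?q=message:error'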

Nowadays, microservice architecture is used more and more, which, thanks to loose coupling, allows services to be developed and deployed independently. But the system as a whole becomes harder to analyze because of its distribution. To analyze its overall condition, logs are used, collected in a centralized place and converted into an understandable form. The need also arises to analyze other data, for example, the NGINX access_log to collect metrics about site traffic, or the mail server log to detect attempts to guess a password, etc. Take ELK as an example of such a solution. ELK stands for a bundle of three products: Logstash, Elasticsearch and Kibana, the first and last of which are heavily focused on the central one and provide ease of use. More generally, ELK is called Elastic Stack, since the log preparation tool Logstash can be replaced by analogs such as Fluentd or Rsyslog, and the Kibana renderer can be replaced by Grafana. For example, although Kibana provides great analysis capabilities, Grafana provides notifications when events occur and can be used in conjunction with other products, for example cAdvisor, which analyzes the state of the system and individual containers.

ELK products can be installed on their own, downloaded as self-contained containers for which communication needs to be configured, or as a single container.
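A sketch of the separate-container variant, assuming the vendor's 7.x images and a shared Docker network; the names and version are illustrative:

docker network create elk
docker run -d --name elasticsearch --net elk -p 9200:9200 \
  -e discovery.type=single-node \
  docker.elastic.co/elasticsearch/elasticsearch:7.10.1
docker run -d --name kibana --net elk -p 5601:5601 \
  -e ELASTICSEARCH_HOSTS=http://elasticsearch:9200 \
  docker.elastic.co/kibana/kibana:7.10.1

Kibana then becomes available at http://localhost:5601 and reaches Elasticsearch by its container name.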

For Elasticsearch to work properly, the data needs to come in JSON format. If the data is submitted in text format (each log record written on one line, separated from the previous one by a line break), then Elasticsearch can provide only full-text search over it, since the record will be interpreted as a single line. To transmit logs in JSON format, there are two options: either configure the product under investigation to output in this format (for NGINX, for example, such a possibility exists), but often this is impossible, since there is already an accumulated database of logs, and traditionally they are written in text format. For such cases, post-processing of logs from text format to JSON is necessary, which is handled by Logstash. It is important to note that if it is possible to transfer data immediately in structured form (JSON, XML and others), then this should be done, because with detailed parsing any deviation from the format will lead to inoperability, and with superficial parsing we lose valuable information. In any case, parsing in this system is a bottleneck, although it can be scaled to a limited extent per service or per log file. Fortunately, more and more products are starting to support structured logging; for example, the latest versions of NGINX support logs in JSON format.
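For NGINX this is done by declaring a custom log format; a sketch for nginx.conf, assuming NGINX 1.11.8 or newer, where the escape=json parameter of log_format appeared (the set of fields is illustrative):

log_format json_combined escape=json
  '{'
    '"time_local":"$time_local",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":"$status",'
    '"body_bytes_sent":"$body_bytes_sent"'
  '}';
access_log /var/log/nginx/access.log json_combined;

Each record then arrives as a single JSON object that Elasticsearch can index field by field.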

For systems that do not support this format, conversion can be performed by programs such as Logstash, Filebeat and Fluentd. The first is included in the vendor's standard Elastic Stack delivery and can be installed as part of the single ELK Docker container. It supports fetching data from files, the network and the standard stream both at input and output and, most importantly, the native Elasticsearch protocol. Logstash monitors log files based on modification date or receives data over the network (telnet) from distributed systems, for example containers, and after transformation sends it to the output, usually to Elasticsearch.
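A minimal sketch of such a pipeline configuration, assuming a classic one-line NGINX access log; the file path and index name are illustrative:

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
filter {
  # split the text line into named fields
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "nginx-logs"
  }
}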

It is simple and comes standard with the Elastic Stack, making it easy and hassle-free to configure. But because of the Java machine inside, it is heavy and not very functional, although it supports plugins, for example, synchronization with MySQL for sending new data. Filebeat provides slightly more options. An enterprise tool for everything