

We already have a few layers, but neural networks used in practice can have many more. Networks with more than four layers are commonly called deep neural networks (DNN). There can be a great many of them: ResNet has 152 layers, and it is far from the deepest network. But, as you may have noticed, the number of layers is not chosen on the principle of "the more the better" but found by prototyping: an excessive number degrades quality because the signal attenuates as it passes through the layers, unless special solutions are applied, such as forwarding data past layers with subsequent summation (skip connections). Examples of neural network architectures include ResNeXt, SENet, DenseNet, Inception-ResNet-V2, Inception-V4, Xception, NASNet, MobileNetV2, ShuffleNet, and SqueezeNet. Most of these networks are designed for image analysis: images tend to contain the greatest amount of detail and demand the largest number of operations from these networks, which determines their depth. We will consider one such architecture when creating a digit classification network – LeNet-5, created in 1998.
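The idea of forwarding data past a layer with subsequent summation can be shown in a few lines. This is an illustrative hand-written NumPy sketch, not code from any of the networks listed: a degenerate layer that outputs zeros loses the signal entirely, while a residual block still passes it through via the skip connection.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def plain_block(x, w):
    # an ordinary fully connected layer with a ReLU activation
    return relu(w @ x)

def residual_block(x, w):
    # same layer, but the input is added back ("forwarded with summation")
    return relu(w @ x) + x

x = np.array([1.0, -2.0, 3.0])
w = np.zeros((3, 3))          # a degenerate layer that produces nothing

print(plain_block(x, w))      # all zeros: the signal is lost
print(residual_block(x, w))   # equals x: the skip connection preserves it
```

Stacking many such blocks is what lets very deep networks like ResNet train without the signal (and its gradient) fading away.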

If we need to recognize not just a single digit or letter but a sequence of them, with the meaning inherent in it, we need a co

Not all combinations are successful; some only allow solving narrow problems. As complexity increases, an ever smaller percentage of possible architectures prove successful, and the successful ones bear their own names.

In general, there are networks that differ fundamentally in structure and principles:

* feedforward networks

* convolutional neural networks

* recurrent neural networks

* autoencoders (classic, sparse, variational, denoising)

* deep belief networks

* generative adversarial networks – a contest between two networks: a generator and a discriminator (evaluator)

* neural Turing machines – a neural network with a memory block

* Kohonen neural networks – for unsupervised learning

* various architectures of cyclic neural networks: the Hopfield network, Markov chains, the Boltzmann machine

Let us consider in more detail the most commonly used ones, namely feedforward, convolutional, and recurrent networks.

Feedforward networks:

* two inputs and one output – Perceptron (P)

* two inputs, two fully co

* three inputs, two layers of four fully co

* deep neural networks

* extreme learning machine (ELM) – a network with random connections
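A minimal sketch of the first item in the list above, assuming nothing beyond the definition: a perceptron takes a weighted sum of its two inputs and passes it through a threshold. The weights here are chosen by hand (to compute logical AND), not learned.

```python
import numpy as np

def step(x):
    # threshold activation: fires only when the weighted sum is positive
    return 1.0 if x > 0 else 0.0

def perceptron(inputs, weights, bias):
    # two inputs -> one output: a weighted sum through a threshold
    return step(np.dot(weights, inputs) + bias)

# Hand-picked weights that make the perceptron compute logical AND:
# the sum exceeds the bias only when both inputs are 1.
w, b = np.array([1.0, 1.0]), -1.5
for a in (0.0, 1.0):
    for c in (0.0, 1.0):
        print(a, c, "->", perceptron(np.array([a, c]), w, b))
```

Stacking such units into fully connected layers gives the deeper feedforward networks from the same list.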

Convolutional neural networks:

* traditional convolutional neural networks (CNN) – image classification

* deconvolutional (unfolding) neural networks – image generation from a given class

* deep convolutional inverse graphics networks (DCIGN) – combine convolutional and deconvolutional parts

Recurrent neural networks:

* recurrent neural networks – networks with memory in their neurons, for analyzing sequences in which order matters, such as text, sound, and video

* Long Short-Term Memory (LSTM) networks – a development of recurrent networks in which neurons can separate data worth remembering in long-lived memory from data worth forgetting, and delete the latter from their memory



* deep residual networks – networks with connections that forward data past several layers with subsequent summation

* gated recurrent units (GRU)
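The memory mechanism described above can be sketched as follows. This is a hand-written illustration, not the exact LSTM/GRU equations: a plain recurrent step mixes the current input with the previous hidden state, while a GRU-style update gate blends the old memory with a new candidate, deciding what to keep and what to overwrite.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_step(x, h, wx, wh):
    # plain recurrent step: new memory mixes input and previous state
    return np.tanh(wx @ x + wh @ h)

def gru_like_step(x, h, wx, wh, wz_x, wz_h):
    z = sigmoid(wz_x @ x + wz_h @ h)   # update gate: keep vs forget
    h_new = np.tanh(wx @ x + wh @ h)   # candidate memory
    return z * h + (1.0 - z) * h_new   # blend old and new memory

rng = np.random.default_rng(1)
dim = 3
h = np.zeros(dim)                      # the cell's memory
mats = [rng.normal(scale=0.5, size=(dim, dim)) for _ in range(4)]
for x in rng.normal(size=(5, dim)):    # process a sequence of 5 inputs
    h = gru_like_step(x, h, *mats)
print(h)                               # state carried across the sequence
```

A full GRU additionally has a reset gate, and an LSTM keeps a separate cell state with input, forget, and output gates, but the gating idea is the same.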

Basics for writing networks.

Until 2015, scikit-learn led by a wide margin, with Caffe catching up, but with the release of TensorFlow the latter immediately became the leader. Over time it only widened the gap, to two or three times by 2020, when it had more than 140 thousand projects on GitHub while its closest competitor had just over 45 thousand. In 2020, in descending order of popularity, come Keras, scikit-learn, PyTorch (Facebook), Caffe, MXNet, XGBoost, Fastai, Microsoft CNTK (Cognitive Toolkit), Darknet, and some other lesser-known libraries. The most popular are the PyTorch and TensorFlow libraries. PyTorch is good for prototyping, learning, and trying out new models. TensorFlow is popular in production environments, and its low-level nature is addressed by Keras.

* Facebook's PyTorch is a good option for learning and prototyping due to its high level and support for various environments; its dynamic graph can give advantages in training. Used by Twitter and Salesforce.

* Google's TensorFlow originally had a static computation graph; now a dynamic one is also supported. Used in Gmail, Google Translate, Uber, Airbnb, and Dropbox. To attract its use in the Google cloud, the Google TPU (Tensor Processing Unit) hardware processor is being deployed for it.

* Keras is a high-level wrapper providing more abstraction over TensorFlow, Theano, or CNTK. A good option for learning. For example, it allows you not to specify the dimensions of layers, computing them itself and letting the developer focus on the architecture of the layers. It is usually used on top of TensorFlow; code written for it is also supported by Microsoft CNTK.

There are also more specialized frameworks:

* Apache MXNet (Amazon) and Gluon, a high-level add-on for it. MXNet is a framework with an emphasis on scaling and supports integration with Hadoop and Cassandra. C++, Python, R, Julia, JavaScript, Scala, Go, and Perl are supported.

* Microsoft CNTK has integrations with Python, R, and C#, since most of its code is written in C++. But the fact that its core is written in C++ does not mean that CNTK will train a model in C++ while TensorFlow does it in Python (which would be slow): TensorFlow builds graphs whose execution is already carried out in C++. What distinguishes CNTK from Google's TensorFlow is that it was originally designed to run on Azure clusters with multiple graphics processors, but the situation has now leveled out and TensorFlow supports clusters too.

* Caffe2 is a framework for mobile environments.

* So

* DL4J (Deep Learning for Java) is a framework with an emphasis on Java Enterprise Edition. Good support for BigData in Java: Hadoop and Spark.

The situation with the speed of availability of new pre-trained models is different, and so far PyTorch is leading. In terms of support for environments, in particular public clouds, support is better in the frameworks promoted by the vendors of those clouds: TensorFlow support is better in Google Cloud, MXNet in AWS, CNTK in Microsoft Azure, DL4J on Android, Core ML on iOS. By languages, almost all of them have Python support in common; in addition, TensorFlow supports JavaScript, C++, Java, Go, C#, and Julia.

Many frameworks support TensorBoard rendering. It is a comprehensive Web interface for multi-level visualization of model state and the learning process and for its debugging. To co