ISSN (Print): 2306-2053

Indexed in eLIBRARY.RU

RSCI impact factor: 0.784

Abstract

The article analyzes the shift of the scientific community's attention from classical pattern-recognition tools based on machine learning methods without neural networks to convolutional neural networks as the best-performing approach to classifying objects in images. The concepts of a neural network and a convolutional neural network are considered in detail: examples of one-layer and two-layer neural networks are given, a principle for building a neural network of arbitrary depth is shown, the distinctive features of convolutional neural networks are analyzed, and the types of layers a convolutional neural network can contain are described. The process of training a neural network on the Fashion MNIST dataset with Google's TensorFlow software package is examined: a convolutional network architecture is selected and the network is trained to recognize various types of clothing, reaching 90% accuracy on the Fashion MNIST test set. Analysis of the TensorFlow package showed that its high-level API makes it simple to build, configure, and train neural networks of any complexity, including convolutional ones, which makes TensorFlow easy to integrate and use in one's own development.
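The abstract does not reproduce the exact architecture used in the article, so the following is only a minimal sketch of how a convolutional network for Fashion MNIST (28×28 grayscale images, 10 clothing classes) might be built and trained with TensorFlow's high-level Keras API; the layer sizes and epoch count here are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf


def build_model():
    # A small CNN for 28x28 grayscale Fashion MNIST images, 10 classes.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),   # convolutional layer
        tf.keras.layers.MaxPooling2D(),                      # pooling layer
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),                           # to fully connected part
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),     # class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    # Fashion MNIST ships with Keras; pixel values are scaled to [0, 1].
    (x_train, y_train), (x_test, y_test) = \
        tf.keras.datasets.fashion_mnist.load_data()
    x_train = x_train[..., None] / 255.0
    x_test = x_test[..., None] / 255.0

    model = build_model()
    model.fit(x_train, y_train, epochs=5, validation_split=0.1)
    model.evaluate(x_test, y_test)
```

A comparable two-convolution architecture typically reaches roughly the 90% test accuracy reported in the article after a few epochs, though the exact figure depends on the chosen layers and training settings.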

Keywords

neural network, convolutional neural network, GPU, CPU, neuron, activation function, TensorFlow, loss function, quality metric.

Bagaev I.I. (2020) Concept analysis of the neural network and the convolutional neural network, convolutional neural network training using the TensorFlow module. Software of systems in the industrial and social fields, 8 (1): 15-22. DOI: 10.18503/2306-2053-2020-8-1-15-22.