A method for classifying vocal speech sounds is proposed, based on the author's soundlet Bayesian neural network; it takes into account the structure of the quasi-periodic signal and allows vocal speech sound patterns of different lengths to be compared. Methods for creating patterns and forming reference patterns, as well as a model for their classification, are developed.
Developing software components for human speech recognition in intelligent computer systems is an urgent task. At its core lies the construction of effective methods that provide fast training of pattern recognition models together with a high probability, adequacy, and speed of speech signal recognition. Existing speech pattern recognition systems use the following approaches: logical, metric, Bayesian, artificial neural network, and structural. Existing methods and models are usually based on hidden Markov models or the dynamic programming algorithm DTW. Artificial neural networks have the following disadvantages: training that can take months; storage of a large number of reference patterns (sounds and words) as well as weighting coefficients; long recognition time; a detection probability below 95 %; and the need for hundreds of thousands of training patterns. To remedy these shortcomings, this article describes a method for classifying vocal speech sounds against reference patterns on the basis of soundlets. The work improves the approach to detecting vocal sounds, allowing single-sound patterns of different lengths and different amplitude swings to be generalized, which increases the efficiency of vocal speech sound classification. The author introduces the notion of a vocal sound pattern and a method for creating it. The method for generating the set of reference patterns is further developed; it is characterized by the use of soundlet and soundlet-mapping collections, which increases the efficiency of the reference pattern generation procedure. On the basis of the soundlet and soundlet-mapping collections, the method of vocal sound classification is improved, based on the soundlet Bayesian neural network (SBNN).
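The DTW algorithm mentioned above is the classic dynamic-programming technique for matching speech patterns of different lengths. A minimal sketch follows; the function and variable names are illustrative and are not taken from the paper.

```python
def dtw_distance(a, b):
    """Return the DTW alignment cost between two 1-D sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cost of aligning the prefix a[:i] with the prefix b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local sample distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]
```

Because the warping path may stretch either sequence, a pattern and its time-dilated copy (e.g. `[1, 2, 3]` vs. `[1, 2, 2, 3]`) align with zero cost, which is exactly why DTW is used to compare utterances of different durations.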
The proposed SBNN model has the following characteristics: the neurons of the input layer correspond to the components of the vector describing the test pattern; the neurons of the first (hidden) layer correspond to the reference patterns; the neurons of the second (output) layer correspond to the sounds; adaptation to the voice characteristics of a particular operator is carried out by adding reference-pattern vectors to the model; each neuron of the first (hidden) layer processes information based on normalized distances between a reference sound pattern and a test sound pattern; the weights of the connections between neurons of the first (hidden) and second (output) layers are equal to 1 or 0, so these weights do not require a training procedure; the outputs of the neurons of the first (hidden) layer are aggregated by taking the maximum; and in the second (output) layer, posterior probabilities are calculated by Bayes' formula, which determines the probability that a test pattern represents a given vocal sound. Numerical studies are conducted on the vocal sounds of the TIMIT database, comparing artificial neural networks such as MLP, RBFNN, GRNN, PNN, and RMLP with the author's SBNN. The study shows that the author's method provides the highest probability of classification. The algorithms can be used to solve problems associated with speech recognition in information systems, vibration signal analysis in intelligent technical diagnostics systems, speaker identification in security systems, and phonoscopic examination.
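The classification scheme described above (distance-based hidden neurons, per-sound aggregation by maximum, and Bayesian posteriors in the output layer) can be sketched as follows. This is only an illustration of the general structure: the similarity kernel `exp(-d)` and all names are assumptions, not the paper's exact formulas.

```python
import math

def classify(test, references, priors):
    """Posterior probability of each sound for a test pattern.

    references: {sound: list of reference pattern vectors} (hidden layer)
    priors:     {sound: prior probability P(sound)}
    """
    likelihoods = {}
    for sound, patterns in references.items():
        scores = []
        for ref in patterns:
            # hidden neuron: normalized Euclidean distance reference vs. test
            d = math.sqrt(sum((r - t) ** 2 for r, t in zip(ref, test)))
            d /= math.sqrt(len(test))
            scores.append(math.exp(-d))     # distance turned into similarity
        likelihoods[sound] = max(scores)    # aggregation by maximum
    # output layer, Bayes' formula: P(sound | test) ∝ P(test | sound) P(sound)
    evidence = sum(likelihoods[s] * priors[s] for s in likelihoods)
    return {s: likelihoods[s] * priors[s] / evidence for s in likelihoods}
```

Note that the 0/1 connection weights of the second layer are implicit here: each hidden score contributes only to its own sound's class, so no weight training is needed, and adapting to a new speaker amounts to appending vectors to `references`.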