System Identification with BP Neural Network

Ran Zhang 21210124
Control Theory and Control Engineering
Intelligent Control and Intelligent Systems

Abstract

This article introduces a method of using a BP (Back-Propagation) neural network to realize system identification. We study three systems with different system functions and analyze the effects of different parameters of the BP neural network.

Key words

MLP (Multi-Layered Perceptron), Neurons, Hidden Layer, BP Neural Network

Algorithm Introduction

Neurons (or neurodes) form the central nervous system in the brains of animals and human beings, and the networks in the human brain carry out higher mental activities. An artificial neural network, often just called a neural network, is a mathematical model inspired by biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation. In most cases a neural network is an adaptive system that changes its structure during a learning phase. Neural networks are used to model complex relationships between inputs and outputs, or to find patterns in data.

The BP neural network is one of the basic artificial neural networks. It is based on the MLP architecture. By training on samples of the system, the algorithm produces a neural network model that approximates the real system.

(1) MLP

The Multi-Layered Perceptron network is a supervised learning method; its architecture is shown in Figure 1.

Figure 1 The structure of MLP

The signal is transferred in one fixed direction. There is no connection between neurons in the same layer, while the neurons of adjacent layers are fully connected, and each connection between adjacent layers has a weight. In each hidden (or output) layer, every neuron applies an activation function to the weighted sum of the outputs of the previous layer.
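The layer-by-layer forward computation described above can be sketched in a few lines. The report's simulations used MATLAB; this is an equivalent NumPy sketch, with the layer sizes (1 input, 5 hidden, 1 output) and random initialization chosen only for illustration.

```python
import numpy as np

def sigmoid(s):
    """Logistic activation f(s) = 1 / (1 + exp(-s))."""
    return 1.0 / (1.0 + np.exp(-s))

def mlp_forward(x, W1, th1, W2, th2):
    """One-hidden-layer MLP forward pass.

    Each neuron applies its activation to the weighted sum of the
    previous layer's outputs minus its own threshold.
    """
    h = sigmoid(W1 @ x - th1)  # hidden layer: sigmoid activation
    y = W2 @ h - th2           # output layer: linear activation
    return h, y

# Illustrative 1-5-1 network with random weights and thresholds.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 1)); th1 = rng.normal(size=5)
W2 = rng.normal(size=(1, 5)); th2 = rng.normal(size=1)
h, y = mlp_forward(np.array([0.3]), W1, th1, W2, th2)
```

Because the hidden activation is a sigmoid, every hidden output lies strictly between 0 and 1, while the linear output neuron is unbounded.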
After several such layer-by-layer computations, the model generates a set of outputs. There are many choices for the activation function, such as the linear function, the Sigmoid function and so on. Generally, we choose a Sigmoid function f(s) = 1 / (1 + e^(-s)) as the activation function.

(2) BP Neural Network

Based on the MLP network, the weights of each connection are adjusted using the error fed back from the next layer; this is the error feedback method. The BP algorithm is derived from the steepest gradient descent method. Referring to Figure 1, for the q-th sample we define the cost (performance) function as

E_q = (1/2) * Σ_j (d_j − y_j)²,

where d_j is the desired output of the q-th sample and y_j is the real output of the network.

According to the steepest gradient descent method, the adjustment of the weight of each connection is as follows.

For the output layer:

Δw_ij = η δ_j x_i,  with  δ_j = (d_j − y_j) f'(s_j).

For the hidden and input layers:

Δw_ij = η δ_j x_i,  with  δ_j = f'(s_j) Σ_k δ_k w_jk.

In the formulas above, f is the activation function, f' is its derivative, s_j is the difference between the weighted sum of the inputs of neuron j and its threshold, and η is the learning rate.

Turning to the threshold of each neuron, a similar formula can be derived:

Δθ_j = −η δ_j.

When the network has been trained with all the samples once, the algorithm finishes one epoch. Then the performance index is calculated; if the index meets the accuracy requirement, training ends, otherwise another training epoch starts.

Experiments and Analysis

Based on the algorithm introduced above, we choose three systems with different system functions.

(1) We choose an MLP model with 1 hidden layer, and apply different numbers of neurons in the hidden layer to study the effect of the number of neurons. We chose 9 sets of uniform data to train the network, and then tested the network with 361 sets of uniform data. Matlab is chosen as the simulation tool.
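The epoch loop and the delta-rule updates above can be put together into a minimal training sketch. This is a NumPy illustration, not the report's MATLAB code: the target function sin(πx), the 1-5-1 layer sizes, the learning rate η = 0.5, and the stopping threshold are all assumptions chosen for the example; only the 9 uniform training samples follow the report's setup.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Illustrative target function (the report's actual system functions
# are not reproduced here).
f = lambda x: np.sin(np.pi * x)

rng = np.random.default_rng(1)
n_hidden, eta = 5, 0.5
W1 = rng.normal(scale=0.5, size=(n_hidden, 1)); th1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(1, n_hidden)); th2 = np.zeros(1)

X = np.linspace(-1.0, 1.0, 9)   # 9 uniform training samples
D = f(X)

for epoch in range(5000):       # one epoch = one pass over all samples
    E = 0.0
    for x, d in zip(X, D):
        # Forward pass: s is the weighted sum minus the threshold.
        s1 = W1[:, 0] * x - th1
        h = sigmoid(s1)
        y = W2 @ h - th2        # linear output neuron, so f'(s) = 1
        e = d - y
        E += 0.5 * float(e @ e)
        # Backward pass: delta rule for output and hidden layers.
        delta2 = e                                        # (d - y) * f'(s)
        delta1 = h * (1.0 - h) * (W2.T @ delta2)          # f'(s) * sum(delta * w)
        W2 += eta * np.outer(delta2, h)
        th2 -= eta * delta2
        W1[:, 0] += eta * delta1 * x
        th1 -= eta * delta1
    if E < 1e-4:                # performance index reached: stop training
        break
```

The online (sample-by-sample) update shown here is the form the derivation above gives; batch accumulation of the Δw over an epoch is an equally valid variant.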
The performance index is set in advance. The results are shown below.

Note: Due to the existence of zeros in the desired output, the relative error becomes huge in the area near those zeros, and that makes the relative error useless for judging the performance of the network. As a result, we compute the absolute error to characterize the performance, since the desired output is the same in all cases.

a) 3 neurons in the hidden layer ()
Figure 2 Plots of training convergence and functions
Figure 3 Absolute error between network output and desired output

b) 5 neurons in the hidden layer ()
Figure 4 Plots of training convergence and functions
Figure 5 Absolute error between network output and desired output

c) 5 neurons in the hidden layer ()
Figure 6 Plots of training convergence and functions
Figure 7 Absolute error between actual output and desired output

d) 7 neurons in the hidden layer ()
Figure 8 Plots of training convergence and functions
Figure 9 Absolute error between network output and desired output

The ranges of the axes are set to be the same so as to make visual comparison more convenient. From the results s