Deep Neural Network — Sung Kim

Recap: Machine Learning Basics
- Linear Regression
- Logistic Regression (binary classification)
- Softmax Classification

Regression: Linear Hypothesis
H(x) = Wx + b
Cost function: cost(W, b) = (1/m) Σᵢ (H(xᵢ) − yᵢ)²
Goal: minimize cost.

What cost(W) looks like:
- W = 1: cost(W) = 0
- W = 0: cost(W) = 4.67
- W = 2: cost(W) = 4.67

How to minimize cost: gradient descent. Formal definition:
W := W − α · ∂cost(W)/∂W

Classification
The linear regression hypothesis does not fit binary labels, so logistic regression passes it through a sigmoid:
H(x) = 1 / (1 + e^(−Wᵀx))
Cost per example, case for y = 1 and case for y = 0:
c(H(x), y) = −log(H(x)) if y = 1; −log(1 − H(x)) if y = 0

Multiple labels (a, b, c): Softmax function, with cross-entropy cost:

```python
hypothesis = tf.nn.softmax(tf.matmul(X, W) + b)

# Cross entropy cost/loss
cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)
```

Today
- XOR with logistic regression? (binary classification)
- Solution: Deep Neural Network
- Challenges of DNN: computing gradients (chain rule and backpropagation), vanishing gradients (ReLU), overfitting (regularization, dropout)

XOR dataset:

```python
x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_data = np.array([[0], [1], [1], [0]], dtype=np.float32)
```

XOR with logistic regression:

```python
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

W = tf.Variable(tf.random_normal([2, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Hypothesis using sigmoid: tf.div(1., 1. + tf.exp(-tf.matmul(X, W)))
hypothesis = tf.sigmoid(tf.matmul(X, W) + b)

# cost/loss function
cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
train = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)

# Accuracy computation: True if hypothesis > 0.5 else False
predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))

# Launch graph
with tf.Session() as sess:
    # Initialize TensorFlow variables
    sess.run(tf.global_variables_initializer())
    for step in range(10001):
        sess.run(train, feed_dict={X: x_data, Y: y_data})
        if step % 100 == 0:
            print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}),
                  sess.run(W))

    # Accuracy report
    h, c, a = sess.run([hypothesis, predicted, accuracy],
                       feed_dict={X: x_data, Y: y_data})
    print("\nHypothesis: ", h, "\nCorrect: ", c, "\nAccuracy: ", a)
```

But it doesn't work! The result:
Hypothesis: [0.5, 0.5, 0.5, 0.5], Correct: [0, 0, 0, 0], Accuracy: 0.5

One logistic regression unit cannot separate XOR. How about multiple logistic regression units?

XOR using NN: forward propagation. (Try it at http://playground.tensorflow.org)
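As a sanity check on the forward-propagation idea, here is a NumPy sketch in which two sigmoid units (an OR-like and a NAND-like feature) feed a third (an AND), which together compute XOR. All weight values here are hand-picked for illustration, not taken from the slides:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hand-picked weights (illustrative):
# hidden unit 1 ~ OR(x1, x2), hidden unit 2 ~ NAND(x1, x2),
# output unit ~ AND(h1, h2) -- together: XOR.
W1 = np.array([[10.0, -10.0],
               [10.0, -10.0]])
b1 = np.array([-5.0, 15.0])
W2 = np.array([[10.0],
               [10.0]])
b2 = np.array([-15.0])

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float64)
layer1 = sigmoid(X @ W1 + b1)           # forward propagation, layer 1
hypothesis = sigmoid(layer1 @ W2 + b2)  # forward propagation, layer 2
predicted = (hypothesis > 0.5).astype(int).ravel()
print(predicted)  # [0 1 1 0] -- the XOR truth table
```

A trained network need not land on these exact values; any W1, b1, W2, b2 that carve the input plane the same way would do.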
Can you find another W and b for the XOR?

Forward propagation in a NN. Recap: multinomial classification already stacked several logistic units side by side; a neural net also stacks them in depth. How can we learn W and b from training data?

Neural Net: one logistic unit:

```python
W = tf.Variable(tf.random_normal([2, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Hypothesis using sigmoid: tf.div(1., 1. + tf.exp(-tf.matmul(X, W)))
hypothesis = tf.sigmoid(tf.matmul(X, W) + b)
```

Neural Net: two layers:

```python
W1 = tf.Variable(tf.random_normal([2, 2]), name='weight1')
b1 = tf.Variable(tf.random_normal([2]), name='bias1')
layer1 = tf.sigmoid(tf.matmul(X, W1) + b1)

W2 = tf.Variable(tf.random_normal([2, 1]), name='weight2')
b2 = tf.Variable(tf.random_normal([1]), name='bias2')
hypothesis = tf.sigmoid(tf.matmul(layer1, W2) + b2)
```

NN for XOR:

```python
x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_data = np.array([[0], [1], [1], [0]], dtype=np.float32)

X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

W1 = tf.Variable(tf.random_normal([2, 2]), name='weight1')
b1 = tf.Variable(tf.random_normal([2]), name='bias1')
```
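The graph code above targets TensorFlow 1.x (tf.placeholder, tf.Session), which does not run on TensorFlow 2.x as written. As a library-free companion, here is a NumPy sketch of the same 2-2-1 network trained by gradient descent, with the chain rule (the backpropagation the deck mentions) applied by hand. The seed, learning rate, and step count are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(42)  # illustrative seed

x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float64)
y_data = np.array([[0], [1], [1], [0]], dtype=np.float64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-2-1 network, random-normal init as in the slides
W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)
lr = 0.5  # illustrative learning rate

def forward(X):
    layer1 = sigmoid(X @ W1 + b1)
    return layer1, sigmoid(layer1 @ W2 + b2)

def cost(h, Y):
    eps = 1e-7  # avoid log(0)
    return -np.mean(Y * np.log(h + eps) + (1 - Y) * np.log(1 - h + eps))

_, h = forward(x_data)
initial_cost = cost(h, y_data)

for step in range(10000):
    layer1, h = forward(x_data)
    # Output-layer error: sigmoid + cross-entropy simplifies to (h - Y).
    d2 = (h - y_data) / len(x_data)
    gW2 = layer1.T @ d2
    gb2 = d2.sum(axis=0)
    # Chain rule back through the hidden sigmoid layer.
    d1 = (d2 @ W2.T) * layer1 * (1.0 - layer1)
    gW1 = x_data.T @ d1
    gb1 = d1.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, h = forward(x_data)
final_cost = cost(h, y_data)
predicted = (h > 0.5).astype(np.float64)
accuracy = float((predicted == y_data).mean())
print("cost:", initial_cost, "->", final_cost, "accuracy:", accuracy)
```

With only two hidden units, gradient descent can occasionally settle in a poor local minimum depending on the initialization; the cost still decreases even then, which is what the training loop demonstrates.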