MATLAB Deep Learning Tools


Training a Deep Neural Network for Digit Classification

This example shows how to use Neural Network Toolbox™ to train a deep neural network to classify images of digits. Neural networks with multiple hidden layers can be useful for solving classification problems with complex data, such as images. Each layer can learn features at a different level of abstraction. However, training neural networks with multiple hidden layers can be difficult in practice.

One way to effectively train a neural network with multiple layers is by training one layer at a time. You can achieve this by training a special type of network known as an autoencoder for each desired hidden layer. This example shows you how to train a neural network with two hidden layers to classify digits in images. First you train the hidden layers individually in an unsupervised fashion using autoencoders. Then you train a final softmax layer, and join the layers together to form a deep network, which you train one final time in a supervised fashion.

Contents:
- Dataset
- Training the first autoencoder
- Visualizing the weights of the first autoencoder
- Training the second autoencoder
- Training the final softmax layer
- Forming a stacked neural network
- Fine tuning the deep neural network
- Summary

Dataset

This example uses synthetic data throughout, for training and testing. The synthetic images have been generated by applying random affine transformations to digit images created using different fonts. Each digit image is 28-by-28 pixels, and there are 5,000 training examples. You can load the training data, and view some of the images.

% Load the training data into memory
[xTrainImages,tTrain] = digittrain_dataset;

% Display some of the training images
clf
for i = 1:20
    subplot(4,5,i);
    imshow(xTrainImages{i});
end

The labels for the images are stored in a 10-by-5000 matrix, where in every column a single element will be 1 to indicate the class that the digit belongs to, and all other elements in the column will be 0. It should be noted that if the tenth element is 1, then the digit image is a zero.

Training the first autoencoder

Begin by training a sparse autoencoder on the training data without using the labels. An autoencoder is a neural network which attempts to replicate its input at its output. Thus, the size of its input will be the same as the size of its output. When the number of neurons in the hidden layer is less than the size of the input, the autoencoder learns a compressed representation of the input.

Neural networks have weights randomly initialized before training. Therefore the results from training are different each time. To avoid this behavior, explicitly set the random number generator seed.

rng('default')

Set the size of the hidden layer for the autoencoder. For the autoencoder that you are going to train, it is a good idea to make this smaller than the input size.

hiddenSize1 = 100;

The type of autoencoder that you will train is a sparse autoencoder. This autoencoder uses regularizers to learn a sparse representation in the first layer. You can control the influence of these regularizers by setting various parameters:

L2WeightRegularization controls the impact of an L2 regularizer for the weights of the network (and not the biases). This should typically be quite small.

SparsityRegularization controls the impact of a sparsity regularizer, which attempts to enforce a constraint on the sparsity of the output from the hidden layer. Note that this is different from applying a sparsity regularizer to the weights.

SparsityProportion is a parameter of the sparsity regularizer. It controls the sparsity of the output from the hidden layer. A low value for SparsityProportion usually leads to each neuron in the hidden layer specializing by only giving a high output for a small number of training examples. For example, if SparsityProportion is set to 0.1, this is equivalent to saying that each neuron in the hidden layer should have an average output of 0.1 over the training examples. This value must be between 0 and 1. The ideal value varies depending on the nature of the problem.

Now train the autoencoder, specifying the values for the regularizers that are described above.

autoenc1 = trainAutoencoder(xTrainImages,hiddenSize1, ...
    'MaxEpochs',400, ...
    'L2WeightRegularization',0.004, ...
    'SparsityRegularization',4, ...
    'SparsityProportion',0.15, ...
    'ScaleData',false);

You can view a diagram of the autoencoder. The autoencoder is comprised of an encoder followed by a decoder. The encoder maps an input to a hidden representation, and the decoder attempts to reverse this mapping to reconstruct the original input.

view(autoenc1)

Visualizing the weights of the first autoencoder

The mapping learned by the encoder part of an autoencoder can be useful for extracting features from data. Each neuron in the encoder has a vector of weights associated with it which will be tuned to respond to a particular visual feature. You can view a representation of these features.

plotWeights(autoenc1);

You can see that the features learned by the autoencoder represent curls and stroke patterns from the digit images. The 100-dimensional output from the hidden layer of the autoencoder is a compressed version of the input, which summarizes its response to the features visualized above. Train the next autoencoder on a set of these vectors extracted from the training data. First, you must use the encoder from the trained autoencoder to generate the features.

feat1 = encode(autoenc1,xTrainImages);

Training the second autoencoder

After training the first autoencoder, you train the second autoencoder in a similar way. The main difference is that you use the features that were generated from the first autoencoder as the training data in the second autoencoder. Also, you decrease the size of the hidden representation to 50, so that the encoder in the second autoencoder learns an even smaller representation.
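The one-hot label layout described in the Dataset section is easy to get wrong because of the digit-zero convention (row 10 means digit 0). A minimal NumPy sketch of that convention; the helper `digit_to_column` is hypothetical, not part of the toolbox:

```python
import numpy as np

def digit_to_column(d):
    """Return the 10-element one-hot column for digit d (0-9), MATLAB-style.

    The labels form a 10-by-5000 matrix: one column per image, exactly one 1
    per column. Row 10 (index 9) stands for the digit 0, rows 1-9 for 1-9.
    """
    col = np.zeros(10, dtype=int)
    col[9 if d == 0 else d - 1] = 1   # digit 0 lives in the last row
    return col

print(digit_to_column(0))  # → [0 0 0 0 0 0 0 0 0 1]
```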
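SparsityProportion, as described above, is a target for each hidden neuron's average output over the training set. A NumPy sketch of that interpretation, using synthetic activations (the Beta-distributed array is a stand-in with mean 0.1, not real encoder output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend hidden-layer activations: 100 neurons x 5000 training examples,
# drawn so activations are mostly near zero (a "sparse" code) with mean 0.1.
acts = rng.beta(a=0.5, b=4.5, size=(100, 5000))

# SparsityProportion = 0.1 says: each neuron's mean output should be ~0.1.
mean_per_neuron = acts.mean(axis=1)
print(round(float(mean_per_neuron.mean()), 2))
```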
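The greedy layer-wise scheme above, where each encoder is trained on the previous encoder's output rather than on the raw images, can be sketched shape-wise as follows. Nothing here is trained: the weights are random and a ReLU stands in for the toolbox's logistic hidden units, purely to show how the 784 → 100 → 50 dimensions chain together:

```python
import numpy as np

rng = np.random.default_rng(1)

# 28x28 images flattened to 784-vectors, one column per training example.
x = rng.random((784, 5000))

# Encoder 1: 784 -> 100 (hiddenSize1 in the example).
W1 = rng.standard_normal((100, 784)) * 0.01
feat1 = np.maximum(W1 @ x, 0)        # stands in for encode(autoenc1, ...)

# Encoder 2: 100 -> 50, trained on feat1, never on x.
W2 = rng.standard_normal((50, 100)) * 0.01
feat2 = np.maximum(W2 @ feat1, 0)

print(x.shape, feat1.shape, feat2.shape)  # → (784, 5000) (100, 5000) (50, 5000)
```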
