The Improvements of BP Neural Network Learning Algorithm


Proceedings of ICSP 2000

Wen Jin-Wei, Zhao Jia-Li, Luo Si-Wei and Han Zhen
Department of Computer Science & Technology, Northern Jiaotong University, Beijing, 100044, P.R. China
Email: jw-wen@263.net

ABSTRACT

The back-propagation (BP) algorithm is a well-known method of training multilayer feed-forward artificial neural networks (FFANNs). Although the algorithm is successful, it has some disadvantages. Because the BP neural network adopts the gradient method, the problems of slow learning convergence and easy convergence to local minima cannot be avoided. In addition, the selection of the learning factor and the inertial factor, which are usually determined by experience, affects the convergence of the BP neural network. The effective application of the BP neural network is therefore limited. In this paper a new method of avoiding local minima in the BP algorithm is proposed, by means of gradually adding training data and hidden units. The paper also proposes a new model of controllable feed-forward neural network.

Keywords: neural network; BP algorithm; scale conversion; properties of networks; local minimum

1: BP algorithm

The back-propagation neural network is one of the most widely applied kinds of neural network. It is based on the gradient descent method, which minimises the sum of the squared errors between the actual and the desired output values. Because the BP neural network adopts the gradient method, the problems of slow learning convergence and easy convergence to local minima cannot be avoided, and the selection of the learning factor and the inertial factor, usually determined by experience, affects its convergence. The basic formulas of the BP algorithm are:

W(n) = W(n-1) + \Delta W(n)    (1)

\Delta W(n) = -\eta \, \partial E / \partial W(n-1) + \alpha \, \Delta W(n-1)    (2)

In the formulas, W is a weight, \eta is the learning rate, \partial E / \partial W is the gradient of the error function, and \Delta W(n-1) is the weight increment of the previous iteration.

According to the Kolmogorov theorem, a three-layer BP network with the sigmoid function as its excitation function can approximate any continuous function to any precision, but it has some disadvantages: (1) mathematically, training is a nonlinear gradient optimization problem, so the local minimum problem must exist; (2) the learning convergence velocity is slow; (3) the network structure is a feed-forward structure, not a nonlinear dynamical system, so it is only a nonlinear mapping system; (4) the selection of the iterative step and the inertial factor is determined by experience, and an incorrect selection may bring about network oscillation or even paralysis, stopping convergence. Among these problems, how to select \eta is important. The learning rate is selected as a constant: the larger this constant, the larger the changes in the weights. In order to offer the most rapid learning, the learning rate is often chosen as large as possible; in reality, a large learning rate may cause oscillation. To overcome this problem a small learning rate is recommended, but a small learning rate may get the network stuck at a local minimum before it has learned the whole training set. This paper presents an adaptive back-propagation algorithm which updates the learning rate and inertia factor automatically, based on the rate of change of the training error. This algorithm obtains a much faster convergence rate than the standard BP algorithm.
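To make formulas (1) and (2) concrete, the following minimal Python sketch (not from the paper; the function name bp_update and the default values eta=0.5 and alpha=0.9 are illustrative assumptions) performs one gradient-descent-with-momentum weight update:

    import numpy as np

    def bp_update(W, dW_prev, grad_E, eta=0.5, alpha=0.9):
        """One BP weight update, following formulas (1) and (2)."""
        # Formula (2): dW(n) = -eta * dE/dW(n-1) + alpha * dW(n-1)
        dW = -eta * grad_E + alpha * dW_prev
        # Formula (1): W(n) = W(n-1) + dW(n)
        return W + dW, dW

Here grad_E stands for \partial E / \partial W(n-1), obtained by back-propagating the output error, and the inertia term \alpha \Delta W(n-1) carries part of the previous step into the current one, smoothing the descent direction.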
2: Learning Rate and Inertia Factor

From formulas (1) and (2) it can be seen that \alpha decides how much the historical weight amendment affects the current weight amendment. Theoretical analysis and simulation experiments show that at the beginning of network training, in order to offer the most rapid learning, the learning rate and the inertia factor are often chosen as large as possible, which accelerates convergence. However, when the error becomes very small, a large learning rate may cause oscillation. This means it is unsafe to use a single fixed learning rate and inertia factor. The more difficult problem is that no \eta and \alpha exist which are general-purpose and stable across diverse learning problems. An improved BP algorithm is therefore necessary.

3: Adaptive BP Algorithm

The main idea of the adaptive BP algorithm is as follows. If the sign of the gradient \partial E / \partial W(n) is opposite in two consecutive iterations, the current error exceeds that of the previous iteration and the current iteration is invalid, so it is time to reduce the learning rate and inertia factor. If the sign of the gradient \partial E / \partial W(n) is the same in two consecutive iterations, the rate of descent is slow, and the learning rate and inertia factor can be increased. On this basis there are many ways to adjust the learning rate and inertia factor. For example, we can describe the change value as D:

D = 1 / ValueIn    (3)

where ValueIn is the size of the input, i.e. the number of neurons connected to neuron i from the previous layer, so that the modified weight correction is

\Delta W'(n) = \Delta W(n) \cdot D    (4)

It is also possible to halve the learning rate or double it at each adjustment:

\Delta \eta(n) = \varepsilon \, \Delta \eta(n-1)    (5)

In the formula, \varepsilon should be kept within a range; computation shows that 0.01 \le \varepsilon \le 0.1 is an appropriate value. The test quantity \Delta is defined as

\Delta = (\partial E / \partial W(n)) \cdot (\partial E / \partial W(n-1))    (6)

When \Delta < 0 the learning rate is reduced; when \Delta > 0 the learning rate is increased. The adaptive BP algorithm is then (with \eta(n) changing according to (5)):

W(n+1) = W(n) - \eta(n) Z(n)    (7)

Z(n) = \partial E / \partial W(n) + \alpha \, \Delta W(n-1), \quad 0 \le \alpha \le 1    (8)

Fig. 1 shows an error curve of the descent process of the adaptive BP algorithm, where n is the number of iterations and E is the learning error. From Fig. 1 it can be concluded that the convergence of the adaptive BP algorithm is stable, and the numerical experimental results show the model to be effective.

[Fig. 1: error curve of the descent process of the adaptive BP algorithm; horizontal axis: iteration number n; vertical axis: learning error E.]
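The adjustment rule of this section can be sketched in Python as follows. This is an illustration under stated assumptions, not the authors' exact procedure: the factors down=0.5 and up=2.0 follow the halve-or-double heuristic mentioned above, and the caps eta_max and alpha_max are added here so that the condition 0 \le \alpha \le 1 of formula (8) keeps holding.

    def adaptive_bp_step(W, dW_prev, grad, grad_prev, eta, alpha,
                         down=0.5, up=2.0, eta_max=1.0, alpha_max=1.0):
        """One adaptive BP step, following formulas (6)-(8)."""
        delta = np.sum(grad * grad_prev)   # formula (6), reduced to a scalar
        if delta < 0:                      # gradient flipped sign: error rose
            eta, alpha = eta * down, alpha * down
        elif delta > 0:                    # steady descent: enlarge the steps
            eta = min(eta * up, eta_max)
            alpha = min(alpha * up, alpha_max)
        Z = grad + alpha * dW_prev         # formula (8)
        W_next = W - eta * Z               # formula (7)
        return W_next, -eta * Z, eta, alpha

Reducing formula (6) to a single scalar with np.sum is one possible reading; the rule could equally be applied per weight, which the source does not specify.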

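As a quick closing illustration (an assumed toy setup, not the paper's experiment), the sketch above can be driven on the quadratic error E(W) = 0.5 * ||W||^2, whose gradient is W itself; it reuses np and adaptive_bp_step from the previous listing:

    W = np.array([1.0, -2.0])
    dW = np.zeros_like(W)
    grad_prev = W.copy()
    eta, alpha = 0.1, 0.5
    for n in range(20):
        grad = W.copy()                # dE/dW = W for this toy error
        W, dW, eta, alpha = adaptive_bp_step(W, dW, grad, grad_prev, eta, alpha)
        grad_prev = grad
        # E falls overall while eta is rescaled up and down by the rule
        print(n, float(0.5 * (W @ W)), eta)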