Research and Application on Improved BP Neural Network Algorithm

Rong Xie, Xinmin Wang, Yan Li, Kairui Zhao
School of Automation, Northwestern Polytechnical University, Xi'an, China
xierong2005@tom.com, wxmin@nwpu.edu.cn, liyan@nwpu.edu.cn, zhaokairui@nwpu.edu.cn

Abstract—Because the standard BP neural network algorithm requires many iterations and adjusts its weights slowly, improvements are made to it. The momentum term of the weight adjustment rule is modified so that the weights are adjusted more quickly and the adjustment process becomes smoother. Simulation of a concrete example is used to calculate and compare the iteration counts of the improved BP neural network algorithms. Finally, with a certain type of airplane as the controlled object, the improved BP neural network algorithm is used to design the control law for control command tracking; the simulation results show that the improved algorithm achieves a faster convergence rate and better tracking accuracy.

Keywords—improved BP neural network; weight adjustment; learning rate; convergence rate; momentum term

I. INTRODUCTION

The artificial neural network (ANN) was developed on the basis of research on complex biological neural networks. The human brain consists of about 10^11 highly interconnected units called neurons, and each neuron has about 10^4 connections [1]. By imitating biological neurons, the neuron can be expressed mathematically, the concept of the artificial neural network is introduced, and different network types can be defined by the different interconnections of the neurons. The use of artificial neural networks is an important area of intelligent control.

According to the different types of neuron connections, neural networks can be divided into several classes. This paper studies the feed-forward neural network; because the feed-forward network uses error back propagation in the weight training process, it is also known as the back propagation neural network, or BP network for short [2, 3]. The BP neural network is a core part of the feed-forward neural network family and realizes a special non-linear transformation from the input space to the output space. Although the BP neural network has a mature theory and wide application, it still has several problems: the convergence rate is slow, many iterations are needed, and the real-time performance is poor. It is necessary to improve the standard BP neural network algorithm to solve these problems and achieve better performance.

II. STRUCTURE AND ALGORITHM OF THE STANDARD BP NEURAL NETWORK

A. Structure of the BP neural network

The standard structure of a typical three-layer feed-forward network is shown in Figure 1.

Figure 1. The standard structure of a typical three-layer feed-forward network

B. Algorithm of the BP neural network

The flowchart of the standard BP neural network algorithm is shown in Figure 2 [4]. In each training cycle the neuron outputs o_i = g(net_i) are computed from the weighted inputs, the squared output errors are accumulated into the total error E, the connection weights w and v are adjusted with the learning rate η, and training stops once E falls below the prescribed maximum error E_max.

Figure 2. Flowchart of the standard BP neural network algorithm

III. IMPROVEMENT OF THE STANDARD BP NEURAL NETWORK ALGORITHM

The convergence rate of the standard BP algorithm is slow and its iterations are many, both of which degrade the rapidity of the control system. In this paper, the learning rule of the standard BP algorithm is improved to accelerate the training speed of the neural network. For the standard BP algorithm, the weight adjustment is calculated as

\Delta W(n) = -\eta \frac{\partial E(n)}{\partial W(n)}    (1)

In formula (1), η is the learning rate, ΔW(n) is the weight adjustment value of the nth iteration, E(n) is the error of the nth iteration, and W(n) is the connection weight of the nth iteration.
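To make the update rule concrete, the following minimal Python sketch applies the gradient-descent adjustment of formula (1). The single linear neuron, the squared-error criterion E = 0.5(y - x·W)^2, the learning rate value, and all variable names are assumptions made for illustration, not settings taken from the paper.

```python
import numpy as np

def standard_bp_update(W, grad_E, eta=0.1):
    """Formula (1): Delta W(n) = -eta * dE(n)/dW(n)."""
    delta_W = -eta * grad_E        # weight adjustment of the nth iteration
    return W + delta_W, delta_W    # updated weights and the adjustment itself

# Illustrative usage: one linear neuron y_hat = x . W with squared error
# E = 0.5 * (y - y_hat)^2, so dE/dW = -(y - y_hat) * x.
x = np.array([1.0, 0.5])
y = 1.0
W = np.array([0.2, -0.1])

for n in range(3):                  # a few training iterations
    y_hat = x @ W
    grad_E = -(y - y_hat) * x       # gradient of the error with respect to W
    W, delta_W = standard_bp_update(W, grad_E, eta=0.1)
```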
From formula (1), the learning rate η influences the weight adjustment value ΔW(n) and therefore the convergence rate of the network. If the learning rate η is too small, convergence becomes very slow; if it is too large, the excessive weight adjustment makes the convergence process oscillate around the minimum point. To solve this problem, a momentum term is added to formula (1):

\Delta W(n) = -\eta \frac{\partial E(n)}{\partial W(n)} + \alpha \Delta W(n-1)    (2)

In formula (2), αΔW(n-1) is the momentum term, ΔW(n-1) is the weight adjustment value generated by the (n-1)th iteration, and α is the smoothing coefficient, whose value lies between 0 and 1. Formula (2) is an improvement of formula (1) and raises the convergence rate of the neural network to a certain degree, but the effect is not obvious. To accelerate the convergence of the neural network further, the weight adjustment formula needs to be improved again.

Sign-function BP algorithm: the momentum term of formula (2) is multiplied by the function sign(M ∂E(n)/∂W(n)), giving the improved formula

\Delta W(n) = -\eta \frac{\partial E(n)}{\partial W(n)} + \alpha \Delta W(n-1)\,\mathrm{sign}\!\left(M \frac{\partial E(n)}{\partial W(n)}\right)    (3)

Tanh-function BP algorithm: the momentum term of formula (2) is multiplied by the function tanh(M ∂E(n)/∂W(n)), giving the improved formula

\Delta W(n) = -\eta \frac{\partial E(n)}{\partial W(n)} + \alpha \Delta W(n-1)\,\tanh\!\left(M \frac{\partial E(n)}{\partial W(n)}\right)    (4)

In formulas (3) and (4), η is the learning rate, ΔW(n) is the weight adjustment value of the nth iteration, E(n) is the error of the nth iteration, W(n) is the connection weight of the nth iteration, and M is a self-defined constant; a sketch of these update rules is given below. The improved BP algorithm has
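For illustration only, the sketch below implements the three adjustment rules of formulas (2), (3) and (4) side by side. The element-wise application of sign and tanh to the gradient, the default values of η, α and M, the single-neuron usage example, and the function names are assumptions of this sketch, not settings taken from the paper.

```python
import numpy as np

def momentum_update(grad_E, prev_delta_W, eta=0.1, alpha=0.9):
    """Formula (2): Delta W(n) = -eta*dE/dW + alpha*Delta W(n-1)."""
    return -eta * grad_E + alpha * prev_delta_W

def sign_bp_update(grad_E, prev_delta_W, eta=0.1, alpha=0.9, M=10.0):
    """Formula (3): momentum term scaled element-wise by sign(M*dE/dW)."""
    return -eta * grad_E + alpha * prev_delta_W * np.sign(M * grad_E)

def tanh_bp_update(grad_E, prev_delta_W, eta=0.1, alpha=0.9, M=10.0):
    """Formula (4): momentum term scaled element-wise by tanh(M*dE/dW)."""
    return -eta * grad_E + alpha * prev_delta_W * np.tanh(M * grad_E)

# Illustrative usage on a single linear neuron with squared error.
x = np.array([1.0, 0.5])
y = 1.0
W = np.array([0.2, -0.1])
delta_W = np.zeros_like(W)          # Delta W(0): no previous adjustment yet

for n in range(3):
    grad_E = -(y - x @ W) * x       # dE(n)/dW(n) for the squared error
    delta_W = tanh_bp_update(grad_E, delta_W)   # or sign_bp_update / momentum_update
    W = W + delta_W                 # apply the adjustment of formula (4)
```

In this reconstruction, the tanh factor shrinks the momentum contribution as the gradient approaches zero, while the sign factor keeps the momentum's magnitude and only flips the sign of each component according to the current gradient.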