Reinforcement Learning: Theory, Algorithms and Applications


ZHANG Rubo, GU Guochang, LIU Zhaode and WANG Xingce
(Department of Computer Science, Harbin Engineering University, Harbin 150001, P. R. China)

Control Theory and Applications, Vol. 17, No. 5, Oct. 2000. Article ID: 1000-8152(2000)05-0637-06. Received 1999-02-26; revised 2000-01-10.

Abstract: The term "reinforcement learning" comes from behavioral psychology, which regards behavior learning as a trial-and-error process by which the states of the environment are mapped into corresponding actions. First, the main algorithms, temporal difference, Q-learning and adaptive heuristic critic, are introduced in detail. Then the applications of reinforcement learning are presented. Finally, some current research topics in reinforcement learning are discussed.

Keywords: reinforcement learning; temporal difference; Q-learning; adaptive heuristic critic; intelligent control system

1 Introduction

Reinforcement learning takes its name from behavioral psychology, where learning is viewed as a trial-and-error process through which the states of the environment are gradually mapped into corresponding actions. It is distinct from both unsupervised learning and supervised learning: the learner is never told which action is correct, but must discover, by trying actions and observing a scalar reward (reinforcement) signal, which actions are most valuable. The learner and its environment together form a reinforcement learning system (RLS). At each step the RLS senses the state of the environment, selects an action, and receives an immediate reinforcement; its objective is to learn a policy, a mapping from states to actions, that maximizes the cumulative reinforcement received over time. This paper reviews the development of reinforcement learning, introduces its main algorithms, surveys its applications, and discusses directions for further research.

2 Development history and research status of reinforcement learning

The study of reinforcement learning can be traced back to the 1950s and 1960s, and the field revived strongly in the 1980s. The term itself appeared as early as Minsky's doctoral dissertation on neural-analog reinforcement systems [1]. Bush and Mosteller built stochastic models of learning [2], and Widrow, Hoff and Rosenblatt studied trial-and-error learning in adaptive elements [3,4]. Waltz and Fu introduced the idea into control engineering in 1965 with a heuristic approach to reinforcement learning control systems [5]. Samuel's checkers-learning program embodied ideas close to the later temporal difference methods [6]. In 1973, Widrow, Gupta and Maitra extended the Widrow-Hoff (LMS) rule to "learning with a critic", i.e., punish/reward learning [7]. Related work on learning control was carried out by Saridis [8].

In the 1980s reinforcement learning developed rapidly. Barto and his colleagues proposed the associative reward-penalty (A_{R-P}) algorithm [9]. In 1983, Barto et al. combined an associative search element (ASE) with an adaptive critic element (ACE) to solve the pole-balancing problem, which grew into the adaptive heuristic critic (AHC) method [10]. Sutton studied the AHC architecture in depth in his 1984 doctoral dissertation [11]. In 1988 Sutton published "Learning to Predict by the Methods of Temporal Differences" in Machine Learning [12], which formulated the temporal difference (TD) family of prediction methods; Dayan later proved the convergence of TD(lambda) [13], and TD methods have since been studied extensively [14-18].

Watkins proposed Q-learning [19] and proved its convergence. Jing Peng and Williams developed incremental multi-step Q-learning [20], and Szepesvari analyzed the convergence rate of Q-learning [21]. Related work was contributed by Werbos [22] and Singh [23]; Singh and Sutton proposed reinforcement learning with replacing eligibility traces [24]. Schwartz proposed R-learning, an average-reward counterpart of Q-learning [25]; Mahadevan compared R-learning with Q-learning in his study of average-reward reinforcement learning [26]. Tadepalli and Ok proposed the model-based average-reward method H-learning [27]. Machine Learning devoted special issues to reinforcement learning in 1992 and 1996 [17,18,20,24,26,28-30], and Robotics and Autonomous Systems published a special issue on reinforcement learning in robotics in 1995 [31,32]. In China, research on reinforcement learning has also been active since the mid-1990s, including survey work [33-35], studies of TD methods [36], related learning algorithms [37-40], and Q-learning and its applications [41,42].

3 Main algorithms of reinforcement learning

3.1 Temporal difference method

Sutton proposed the temporal difference (TD) method in 1988 [12]. Consider a multi-step prediction problem: the learner observes a sequence x_1, x_2, ..., x_m followed by an outcome z, and after each observation x_t it outputs a prediction p(t) of z. Each prediction is computed from the observation and a modifiable weight vector w. After the sequence, the weights are updated by

  W \leftarrow W + \sum_{t=1}^{m} \Delta w_t.  (1)

Define the prediction error at step t as

  E_t = \frac{1}{2} [z - p(t)]^2.  (2)

Gradient descent on E_t gives

  \Delta w_t = -\alpha \frac{\partial E_t}{\partial w} = \alpha [z - p(t)] \frac{\partial p(t)}{\partial w} = \alpha [z - p(t)] \nabla_w p(t),  (3)

where alpha is the learning rate. This supervised form requires waiting until z is known. The error z - p(t), however, can be written as a telescoping sum of successive prediction differences,

  z - p(t) = \sum_{k=t}^{m} [p(k+1) - p(k)],  with the convention p(m+1) = z.

Substituting into (1), exchanging the order of summation, and relabeling the indices,

  W \leftarrow W + \alpha \sum_{t=1}^{m} \sum_{k=t}^{m} [p(k+1) - p(k)] \nabla_w p(t)
    = W + \alpha \sum_{k=1}^{m} \sum_{t=1}^{k} [p(k+1) - p(k)] \nabla_w p(t)
    = W + \alpha \sum_{t=1}^{m} [p(t+1) - p(t)] \sum_{k=1}^{t} \nabla_w p(k),

so the increment at each step is

  \Delta w_t = \alpha [p(t+1) - p(t)] \sum_{k=1}^{t} \nabla_w p(k).  (4)

Update (4), which can be computed incrementally as each new prediction arrives, is the TD(1) procedure. Introducing an exponential decay factor lambda (0 <= lambda <= 1) over past gradients yields the general TD(lambda) family:

  \Delta w_t = \alpha [p(t+1) - p(t)] \sum_{k=1}^{t} \lambda^{t-k} \nabla_w p(k).  (5)

3.2 Q-learning algorithm [19]

Assume the environment is a finite Markov process. At each step the RLS observes the current state s_t, selects an action a_t from the action set A, and receives an immediate reward r_t; the environment then moves to the next state with probability

  Prob[s_{t+1} = s' | s_t, a_t] = P[s_t, a_t, s'].

Under a policy pi that maps states to actions, the value of state s_t is

  V^{\pi}(s_t) = r(\pi(s_t)) + \gamma \sum_{s_{t+1} \in S} P[s_t, \pi(s_t), s_{t+1}] \, V^{\pi}(s_{t+1}),  (6)

where gamma is the discount factor. The optimal policy pi* satisfies

  V^{\pi^*}(s_t) = \max_{a \in A} \{ r(s_t, a) + \gamma \sum_{s_{t+1} \in S} P[s_t, a, s_{t+1}] \, V^{\pi^*}(s_{t+1}) \}.  (7)

Computing pi* directly from (7) requires the transition model P. To avoid this, Watkins defined a Q value for each state-action pair; the optimal Q function satisfies

  Q(s_t, a_t) = r_t + \gamma \max_{a \in A} Q(s_{t+1}, a).  (8)

Watkins proved that Q-learning converges to the optimal Q function provided each value Q(s, a_i) is represented exactly, as in a lookup table, and every state-action pair is updated infinitely often. The Q-learning training rule is

  Q(s_t, a_t) \leftarrow r_t + \gamma \max_{a \in A} Q(s_{t+1}, a).  (9)

In practice an incremental, error-driven form is used [40]:

  \Delta Q = r_t + \gamma \max_{a \in A} Q(s_{t+1}, a) - Q(s_t, a_t),  (10)

where Delta Q measures how far the current estimate Q(s_t, a_t) is from its one-step target, and Q(s_t, a_t) is adjusted by a fraction of Delta Q.

3.3 Adaptive heuristic critic algorithm

The structure of the AHC algorithm is shown in Fig. 1: it consists of an associative search network (ASN), which selects actions, and an adaptive critic network (ACN), which evaluates them [11].

(Fig. 1  Structure of the adaptive heuristic critic learning system)

3.3.1 Neural network implementation of the AHC algorithm for discrete actions

Suppose the RLS has a finite action set A = {a_1, a_2, ..., a_M}. For each action a_i the ASN outputs a quantity m(a_i) that governs the selection of a_i, and the ACN outputs a prediction p(t) of future reinforcement. The ASN weights W_a are adjusted to increase p(t):

  \Delta w_a(t) = \alpha \frac{\partial p(t)}{\partial W_a(t)} = \alpha \frac{\partial p(t)}{\partial m(a_i)} \frac{\partial m(a_i)}{\partial W_a(t)}.  (11)

The factor \partial p(t) / \partial m(a_i) cannot be computed exactly; it is approximated [40] by

  \frac{\partial p(t)}{\partial m(a_i)} \approx [r(t) + \gamma p(t+1) - p(t)] [1 - m(a_i)],  (12)

where m(a_i) is the ASN output for the action a_i actually selected and [r(t) + gamma p(t+1) - p(t)] is the TD error, which serves as the internal reinforcement. The factor \partial m(a_i) / \partial W_a(t) is obtained from the ASN itself. The ACN is trained to minimize the squared TD error

  E_c = \frac{1}{2} [r(t) + \gamma p(t+1) - p(t)]^2,  (13)

which by gradient descent gives

  \Delta w_c(t) = -\alpha \frac{\partial E_c}{\partial W_c(t)} = \alpha [r(t) + \gamma p(t+1) - p(t)] \frac{\partial p(t)}{\partial W_c(t)}.  (14)

When the ACN is implemented with a CMAC network, \partial p(t) / \partial W_c(t) can be computed directly.

3.3.2 AHC algorithms for continuous actions

When the action variables a_i are continuous, the discrete scheme above no longer applies; instead the ASN outputs real-valued action components directly, and the ACN is the same as in the discrete case. The ASN weights W_a are updated by

  \Delta w_a = \alpha \frac{\partial p(t)}{\partial W_a} = \alpha \frac{\partial p(t)}{\partial A} \frac{\partial A}{\partial W_a},  (15)

where alpha is the learning rate. Exploration is provided by stochastic real-valued (SRV) units [43]: each output is perturbed by Gaussian noise whose standard deviation shrinks as the predicted reinforcement grows,

  \sigma(t) = F[p(t)] = \frac{K}{1 + e^{p(t)}}.  (16)

The executed action components are then drawn as

  a_k^r \sim N(a_k, \sigma(t)),  k = 1, 2, ..., M,  (17)

where N(., .) denotes the normal distribution. The gradient \partial p(t) / \partial a_k is estimated by

  \frac{\partial p(t)}{\partial a_k} \approx [r(t) + \gamma p(t+1) - p(t)] \, \frac{a_k^r(t-1) - a_k(t-1)}{\sigma(t)},  (18)

where (a_k^r - a_k) / sigma is the normalized exploration perturbation. Substituting this estimate into (15), with \partial A / \partial W_a computed from the ASN, completes the update.
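As a concrete illustration of the algorithms above, consider first the TD(lambda) update (5) with a linear predictor p(t) = w . x_t, for which grad_w p(t) = x_t. The following minimal sketch applies each increment online as the next prediction arrives; the function name, the array-based interface and the default step sizes are illustrative assumptions, not part of the original paper:

```python
import numpy as np

def td_lambda_episode(X, z, w, alpha=0.1, lam=0.9):
    """One pass of linear TD(lambda) over an observation sequence.

    X     : (m, d) array whose rows are the observations x_1 ... x_m
    z     : scalar outcome revealed after the sequence
    w     : (d,) float weight vector; predictions are p(t) = w . x_t
    alpha : learning rate alpha in Eqs. (3)-(5)
    lam   : decay factor lambda in Eq. (5); lam = 1 recovers TD(1), Eq. (4)
    """
    m, _ = X.shape
    e = np.zeros_like(w)              # eligibility trace: sum_k lam^(t-k) grad_w p(k)
    for t in range(m):
        p_t = w @ X[t]                # current prediction p(t)
        e = lam * e + X[t]            # for a linear predictor, grad_w p(t) = x_t
        # next prediction, with the convention p(m+1) := z
        p_next = w @ X[t + 1] if t + 1 < m else z
        w = w + alpha * (p_next - p_t) * e   # the TD(lambda) increment, Eq. (5)
    return w
```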
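The one-step Q-learning rule (9)-(10) is equally compact in tabular form. In the sketch below, the Gym-style environment interface (reset/step) and the epsilon-greedy exploration constant are assumptions added for illustration; the paper does not prescribe a particular exploration strategy here:

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular one-step Q-learning (Watkins).

    `env` is assumed to expose reset() -> s and step(a) -> (s_next, r, done);
    the Q table stores one value per state-action pair, as the convergence
    proof requires.
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy choice: mostly exploit, occasionally explore
            if np.random.rand() < eps:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            target = r if done else r + gamma * np.max(Q[s_next])
            Q[s, a] += alpha * (target - Q[s, a])   # Eq. (10) as an increment
            s = s_next
    return Q
```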
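The AHC updates (11)-(14) pair an actor (the ASN) with a critic (the ACN), both driven by the internal reinforcement r(t) + gamma p(t+1) - p(t). The sketch below uses linear function approximation and a softmax stand-in for the ASN's action strengths m(a_i); the paper's neural networks and CMAC critic would replace these pieces, so the feature-vector interface and all names here are illustrative assumptions:

```python
import numpy as np

def select_action(W_a, phi_s):
    """ASN stand-in: softmax over linear action strengths m(a_i)."""
    prefs = W_a @ phi_s
    m = np.exp(prefs - prefs.max())
    m /= m.sum()
    return np.random.choice(len(m), p=m), m

def ahc_update(W_a, w_c, phi_s, a, m, r, phi_s_next,
               alpha_a=0.05, alpha_c=0.1, gamma=0.95):
    """Update ASN and ACN weights after executing action a and observing r, s'.

    w_c is the critic weight vector, so p(t) = w_c . phi_s plays the
    role of the ACN's prediction.
    """
    # internal reinforcement: the TD error r(t) + gamma*p(t+1) - p(t)
    delta = r + gamma * (w_c @ phi_s_next) - (w_c @ phi_s)
    w_c += alpha_c * delta * phi_s                     # critic update, Eq. (14)
    # actor update in the spirit of Eqs. (11)-(12): reinforce the chosen
    # action in proportion to delta and to 1 - m(a_i)
    W_a[a] += alpha_a * delta * (1.0 - m[a]) * phi_s   # cf. Eq. (12)
    return delta
```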
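Finally, the SRV exploration scheme of Eqs. (16)-(18) ties the search width to the critic's prediction: the higher the expected reinforcement p(t), the narrower the Gaussian search. A minimal sketch follows; the constant K and the function names are assumptions, and the returned gradient estimate would be fed into Eq. (15) through the ASN's own backward pass:

```python
import numpy as np

def srv_action(a_mean, p, K=1.0):
    """Draw an exploratory action, Eqs. (16)-(17): high predicted
    reinforcement p narrows the search width sigma."""
    sigma = K / (1.0 + np.exp(p))                # Eq. (16)
    a_actual = np.random.normal(a_mean, sigma)   # Eq. (17), component-wise
    return a_actual, sigma

def srv_gradient_estimate(a_actual, a_mean, sigma, r, p, p_next, gamma=0.95):
    """Estimate dp/da_k as in Eq. (18): TD error times the
    normalized perturbation (a_k^r - a_k) / sigma."""
    delta = r + gamma * p_next - p               # internal reinforcement
    return delta * (a_actual - a_mean) / sigma   # Eq. (18)
```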
4 Application of reinforcement learning

4.1 Application in game playing

Game playing was one of the earliest testbeds for reinforcement learning; Samuel's checkers player is the classic example. More recently, Tesauro applied reinforcement learning to backgammon, a game with roughly 10^20 states. By combining TD learning with a BP (backpropagation) neural network and training through self-play, his program reached the level of top human players [44,45].

4.2 Application in control systems

Reinforcement learning has been studied extensively for control [46-49]. The standard benchmark is the pole-balancing (inverted pendulum) problem: a pole hinged to a wheeled cart must be kept upright by applying forces to the cart, and the controller must learn to do so without a model of the system dynamics. Barto et al. [10] solved this problem with the ASE and ACE elements, and later studies have built on their formulation [50].

4.3 Application in robots

Robotics is a natural application area for reinforcement learning [51-57]. Hee Rak Beom applied reinforcement learning to mobile-robot navigation [58], Winfried Ilg used it to learn the gait of a walking machine [31], and Sebastian Thrun studied reinforcement learning for autonomous mobile robots [32]. In each case the control strategy is learned directly from interaction with the environment.

4.4 Application in scheduling

Scheduling is another promising application. Crites and Barto applied reinforcement learning to elevator dispatching for a bank of 4 elevators serving a 10-floor building [59]. The state space of this problem contains more than 10^22 states, far beyond the reach of exact dynamic programming. Using Q-learning with neural networks as function approximators, their learned dispatcher outperformed the best known heuristic elevator-control algorithms.

5 Conclusion

Reinforcement learning has become an active research field [60-63], and its range of applications continues to expand [64-69]. Several problems remain open. 1) The learning speed of the main algorithms, AHC, TD and Q-learning, is still low, and their convergence properties need further study. 2) Combining reinforcement learning with behavior-based architectures of the kind advocated by Brooks is a promising way to build autonomous agents that learn in real environments. 3) A reinforcement learner must balance exploration, trying new actions to gain information, against exploitation, using current knowledge to obtain reward, and how best to strike this balance is still unresolved. With further research on these problems, reinforcement learning can be expected to play an increasingly important role in intelligent control systems.

References

[1] Minsky M L. Theory of Neural-Analog Reinforcement Systems and Its Application to the Brain-Model Problem [D]. New Jersey, USA: Princeton University, 1954
[2] Bush R R, Mosteller F. Stochastic Models for Learning [M]. New York: Wiley, 1955
[3] Widrow B, Hoff M E. Adaptive switching circuits [A]. In: Anderson J A, Rosenfeld E, eds. Neurocomputing: Foundations of Research [M]. Cambridge, MA: The MIT Press, 1988: 126-134
[4] Rosenblatt F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms [M]. Washington DC: Spartan Books, 1961
[5] Waltz M D, Fu K S. A heuristic approach to reinforcement learning control systems [J]. IEEE Trans. on Automatic Control, 1965, 10(3): 390-398
[6] Samuel A L. Some studies in machine learning using the game of checkers [J]. IBM Journal on Research and Development, 1967, 11: 601-617
[7] Widrow B, Gupta N K, Maitra S. Punish/reward: learning with a critic in adaptive threshold systems [J]. IEEE Trans. on Systems, Man, and Cybernetics, 1973
