Hidden-Mode Markov Decision Processes for Nonstationary Sequential Decision Making

Samuel P. M. Choi, Dit-Yan Yeung, and Nevin L. Zhang
Department of Computer Science, Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
{pmchoi, dyyeung, lzhang}@cs.ust.hk

1 Introduction

Problem formulation is often an important first step for solving a problem effectively. In sequential decision problems, the Markov decision process (MDP) (Bellman 1957b; Puterman 1994) is a model formulation that has been commonly used, due to its generality, flexibility, and applicability to a wide range of problems. Despite these advantages, there are three necessary conditions that must be satisfied before the MDP model can be applied; that is,

1. The environment model is given in advance (a completely-known environment).
2. The environment states are completely observable (fully-observable states, implying a Markovian environment).
3. The environment parameters do not change over time (a stationary environment).

These prerequisites, however, limit the usefulness of MDPs. In the past, research efforts have been made towards relaxing the first two conditions, leading to different classes of problems as illustrated in Figure 1.

[Fig. 1. Categorization into four related problems with different conditions: when the model of the environment is completely known, observable states give an MDP and partially observable states give a POMDP; when the model is partially unknown, they give traditional RL and hidden-state RL, respectively. Note that the degree of difficulty increases from left to right and from upper to lower.]

This paper mainly addresses the first and third conditions, whereas the second condition is only briefly discussed. In particular, we are interested in a special type of nonstationary environments that repeat their dynamics in a certain manner. We propose a formal model for such environments. We also develop algorithms for learning the model parameters and for computing optimal policies. Before we proceed, let us briefly review the four categories of problems shown in Figure 1 and define the terminology that will be used in this paper.

1.1 Four Problem Types

Markov Decision Process

MDP is the central framework for all the problems we discuss in this section. An MDP formulates the interaction between an agent and its environment. The environment consists of a state space, an action space, a probabilistic state transition function, and a probabilistic reward function. The goal of the agent is to find, according to its optimality criterion, a mapping from states to actions (i.e., a policy) that maximizes the long-term accumulated rewards. This policy is called an optimal policy. In the past, several methods for solving Markov decision problems have been developed, such as value iteration and policy iteration (Bellman 1957a).

Reinforcement Learning

Reinforcement learning (RL) (Kaelbling et al. 1996; Sutton and Barto 1998) is originally concerned with learning to perform a sequential decision task based only on scalar feedbacks, without any knowledge about what the correct actions should be. Around a decade ago researchers realized that RL problems could naturally be formulated as incompletely known MDPs. This realization is important because it enables one to apply existing MDP algorithms to RL problems. This has led to research on model-based RL. The model-based RL approach first reconstructs the environment model by collecting experience from its interaction with the world, and then applies conventional MDP methods to find a solution. On the contrary, model-free RL learns an optimal policy directly from the experience. It is this second approach that accounts for the major difference between RL and MDP algorithms. Since less information is available, RL problems are in general more difficult than the MDP ones.
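To make the contrast between the two routes concrete, here is a minimal sketch, not taken from the paper: it solves a small, hypothetical two-state MDP by value iteration (the model-based route, once a model is available or has been estimated) and then learns the same task model-free with tabular Q-learning from simulated experience. The transition and reward tables, discount factor, learning rate, exploration rate, and number of steps are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 2-state, 2-action MDP (not from the paper).
# T[s, a, s'] = P(s' | s, a);  R[s, a] = expected immediate reward.
n_states, n_actions = 2, 2
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
gamma = 0.95  # discount factor

# --- Model-based route: value iteration on the (known or estimated) model ---
def value_iteration(T, R, gamma, tol=1e-8):
    V = np.zeros(n_states)
    while True:
        # Q(s, a) = R(s, a) + gamma * sum_s' T(s, a, s') V(s')
        Q = R + gamma * (T @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V_star, policy_vi = value_iteration(T, R, gamma)

# --- Model-free route: tabular Q-learning directly from experience ---
Q = np.zeros((n_states, n_actions))
alpha, epsilon = 0.1, 0.1   # learning rate and exploration rate
s = 0
for _ in range(50_000):
    # epsilon-greedy action selection
    a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
    s_next = rng.choice(n_states, p=T[s, a])
    # Q-learning update: move Q(s, a) towards r + gamma * max_a' Q(s', a')
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("value iteration:", V_star, policy_vi)
print("Q-learning     :", Q.max(axis=1), Q.argmax(axis=1))
```

With enough experience the greedy policy extracted from the learned Q-table should coincide with the value-iteration policy, even though the learner never saw the transition and reward tables directly; that is the sense in which model-free RL trades model knowledge for experience.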
Partially Observable Markov Decision Process

The assumption of having fully-observable states is sometimes impractical in the real world. Inaccurate sensory devices, for example, could make this condition difficult to hold true. This concern leads to studies on extending MDP to partially-observable MDP (POMDP) (Monahan 1982; Lovejoy 1991; White III 1991). A POMDP basically introduces two additional components to the original MDP, i.e., an observation space and an observation probability function. Observations are generated based on the current state and the previous action, and are governed by the observation function. The agent is only able to perceive observations, but not the states themselves. As a result, past observations become relevant to the agent's choice of actions. Hence, POMDPs are sometimes referred to as non-Markovian MDPs. Traditional approaches to POMDPs (Sondik 1971; Cheng 1988; Littman et al. 1995b; Cassandra et al. 1997; Zhang et al. 1997) maintain a probability distribution over the states, called the belief state (a belief-update sketch is shown at the end of this section). This essentially transforms the problem into an MDP with an augmented (and continuous) state space. Unfortunately, solving POMDP problems exactly is known to be intractable in general (Papadimitriou and Tsitsiklis 1987; Littman et al. 1995a).

Hidden-State Reinforcement Learning

Recently, research has been conducted on the case where the environment is both incompletely known and partially observable. This type of problems is sometimes referred to as hidden-state reinforcement learning, incomplete perception, perceptual aliasing, or non-Markovian reinforcement learning. Hidden-state RL algorithms can also be classified into model-based and model-free approaches. For the former, a variant of the Baum-Welch algorithm (Chrisman 1992) is typically used for model reconstruction, and hence turns the problem into a conventional POMDP. Optimal policies can then be computed by using existing POMDP algorithms. For the latter, research efforts are diverse, ranging from state-free stochastic policy (Jaakkola et al. ...
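The belief state referred to above is maintained by a simple Bayes update: after taking action a and receiving observation o, the new belief b'(s') is proportional to O(o | s', a) times the sum over s of T(s' | s, a) b(s). The sketch below is not from the paper; the tiny two-state transition and observation tables are made-up assumptions, and it only illustrates the update that, applied repeatedly, turns a POMDP into a continuous-state belief MDP.

```python
import numpy as np

# Hypothetical 2-state, 2-action, 2-observation POMDP components.
# T[a, s, s'] = P(s' | s, a);  O[a, s', o] = P(o | s', a).
T = np.array([[[0.9, 0.1], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
O = np.array([[[0.8, 0.2], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])

def belief_update(b, a, o):
    """Bayes update of the belief state after action a and observation o.

    b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s).
    """
    predicted = b @ T[a]             # sum_s b(s) T(s' | s, a), one entry per s'
    unnormalized = O[a][:, o] * predicted
    norm = unnormalized.sum()
    if norm == 0.0:                  # observation impossible under current belief
        raise ValueError("zero-probability observation")
    return unnormalized / norm

# Example: start maximally uncertain and track the belief over a few steps.
b = np.array([0.5, 0.5])
for a, o in [(0, 1), (1, 0), (0, 0)]:
    b = belief_update(b, a, o)
    print("action", a, "observation", o, "belief", b)
```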
