Markov Decision Process (MDP)

Email: yangzhang@whut.edu.cn

Outline
• Markov Process
• Markov Reward Process (MRP)
• Markov Decision Process (MDP)
• Solving an MDP: Bellman equation, value iteration, policy iteration
• A worked MDP example
• An MDP application: the mobile energy gateway

Markov process

• Markov property: P(X_{t+1} | X_t, X_{t-1}, X_{t-2}, …) = P(X_{t+1} | X_t). Given the current state X_t, the next state X_{t+1} is independent of the earlier history X_{t-1}, X_{t-2}, ….
• A stochastic process (X_t, t ∈ I) with this property is a Markov process.
• [Figure: a chain over states 0 to 5 with transition probabilities such as P(4 | 3).]
• Example: random walk.
• A second-order dependence P(X_{t+1} | X_t, X_{t-1}, X_{t-2}, …) = P(X_{t+1} | X_t, X_{t-1}) can be reduced to first order by augmenting the state: let S_t = (X_t, X_{t-1}); with two possible outcomes s and r, S_t ∈ {(s,s), (s,r), (r,s), (r,r)}.

Markov reward process (MRP)

• MRP = Markov process + reward/utility function + discount factor.
• An MRP is a tuple ⟨S, P, U, γ⟩: the state set S, the state-transition probabilities P, the reward function U, and the discount factor γ.
• [Figure: running example, a chain over states 0 to 5 with immediate rewards 20, 5, 0, 6, 2 and 9 attached to the states and transition probabilities 0.1/0.9, 0.2/0.8 and 1.0 on the edges; in particular u(S=3) = 6, u(S=4) = 2, u(S=5) = 9, and state 3 moves to state 4 with probability 0.2 and to state 5 with probability 0.8.]
• Besides the immediate reward u(S), we care about the long-run return: H(S) denotes the expected total discounted reward collected when starting from state S ("start from here").

MRP solution: backward induction

• Terminal states: H(S=4) = u(S=4) = 2 and H(S=5) = u(S=5) = 9.
• One step back: H(S=3) = u(S=3) + γ·[0.2·H(S=4) + 0.8·H(S=5)] = 6 + γ·[0.2·2 + 0.8·9], with γ ∈ [0, 1).
• Working backwards in the same way gives H(S=2), H(S=1), ….
• In general H(S_t) = E[u(S_t) + γ·H(S_{t+1})], i.e.
  H(S) = u(S) + γ·Σ_{S'∈S} P(S, S')·H(S')
  which is the Bellman equation of an MRP.
• Absorbing state: a state that, once entered, is never left (self-transition probability 1.0). [Figure: variant of the example chain containing absorbing states with self-transition probability 1.0.]

MRP solution: value iteration

• Initialize H(S) ← 0 for all S ∈ S.
• Repeatedly apply H(S) ← u(S) + γ·Σ_{S'∈S} P(S, S')·H(S') until H(S) converges.

Markov decision process (MDP)

• MDP = Markov process + actions + reward functions + discount factor.
• [Figure: from the current state 1, action A1 gives reward 20 and action A2 gives reward 5; the possible future states 2 and 3 are reached with probabilities 0.1 and 0.9.]
• An MDP is a tuple ⟨S, A, P, U, γ⟩: the state set S, the action set A, the (now action-dependent) state-transition probabilities P, the reward function U, and the discount factor γ.
• Variants include the constrained MDP (CMDP) and the partially observable MDP (POMDP).
• Modelling a problem as an MDP means identifying the states, the available actions/decisions, the action-dependent transition probabilities and the rewards, i.e. specifying ⟨S, A, P, U, γ⟩.

Policy

• A policy maps states to actions, π: S ↦ A; the optimal policy is written π*: S ↦ A.

Bellman equation

• For an MRP: H(S) = u(S) + γ·Σ_{S'∈S} P(S, S')·H(S').
• For an MDP:
  H(S, A) = u(S, A) + γ·Σ_{S'∈S} P(S, A, S')·U(S')
  U(S) = max_{A∈A} H(S, A)
  π*(S) = argmax_{A∈A} H(S, A)

Value iteration algorithm

• Backward induction only works from terminal or absorbing states; in general the self-referential Bellman equation is solved iteratively.
• Initialize U_0(S) ← 0 for all S ∈ S, then iterate the Bellman update until U(S) converges:
  H_{n+1}(S, A) = u(S, A) + γ·Σ_{S'∈S} P(S, A, S')·U_n(S')
  U_{n+1}(S) = max_{A∈A} H_{n+1}(S, A)
• Full procedure:
  For each state S: U_0(S) ← 0
  Repeat until convergence:
    For each state S:
      For each action A: compute H_{n+1}(S, A) = u(S, A) + γ·Σ_{S'∈S} P(S, A, S')·U_n(S')
      Compute and store π*_{n+1}(S) = argmax_A H_{n+1}(S, A)
      Compute and store U_{n+1}(S) = max_{A∈A} H_{n+1}(S, A)
  Return π*(S) and U(S) for all S ∈ S

Policy iteration algorithm

• An alternative to value iteration, also built on the Bellman equation: start from an initial policy π_0(S) for all S ∈ S, then alternate policy evaluation and policy improvement to obtain π_{n+1}: S ↦ A for all S ∈ S.
• Both algorithms rely on the principle of optimality; each iteration costs O(|A|·|S|²), where |A| is the number of actions and |S| the number of states, and the solution is a fixed point of the Bellman update (an equation of the form f(x) = x).

A worked MDP example

• States S = {0, 1, 2, 3}, actions A = {Left, Right}.
• Reward: -1 for every step moved; discount factor: 0.5.
• Transition matrices (rows = current state, columns = next state; state 0 is absorbing):
  P(A = Left)  = [[1 0 0 0], [1 0 0 0], [0 1 0 0], [0 0 1 0]]
  P(A = Right) = [[1 0 0 0], [0 0 1 0], [0 0 0 1], [0 0 0 1]]
• Running value iteration period by period (a code sketch follows the table):
  Period 1:  H = (0.0, 0.0, 0.0, 0.0)      actions for states 1 to 3: ←/→, ←/→, ←
  Period 2:  H = (0.0, -1.0, -1.0, -1.0)   actions for states 1 to 3: ←, ←/→, ←
  Period 3:  H = (0.0, -1.0, -1.5, -1.5)   actions for states 1 to 3: ←, ←, ←
  …
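The table above can be reproduced with a few lines of NumPy. This is a minimal sketch rather than part of the original slides: it assumes the transition matrices listed above and encodes the reward as -1 per step for states 1 to 3 and 0 at the absorbing state 0, which is the reading of "-1 for every step moved" that matches the period-by-period values.

```python
import numpy as np

# Transition matrices of the example: P[a][s][s'], a = 0 (Left), a = 1 (Right).
P = np.array([
    [[1, 0, 0, 0],          # Left: state 0 is absorbing,
     [1, 0, 0, 0],          # every other state moves one step towards 0
     [0, 1, 0, 0],
     [0, 0, 1, 0]],
    [[1, 0, 0, 0],          # Right: states 1 and 2 move one step towards 3,
     [0, 0, 1, 0],          # states 0 and 3 stay put
     [0, 0, 0, 1],
     [0, 0, 0, 1]],
], dtype=float)

# Reward: -1 per step until the absorbing state 0 is reached (0 at state 0 itself);
# this is the assumption that matches the period-by-period values above.
u = np.array([0.0, -1.0, -1.0, -1.0])
gamma = 0.5

U = np.zeros(4)                                              # U_0(S) = 0 for all S
for period in range(100):
    H = u[:, None] + gamma * np.einsum('ast,t->sa', P, U)   # H_{n+1}(S, A)
    policy = H.argmax(axis=1)                                # pi*_{n+1}(S) = argmax_A H_{n+1}(S, A)
    U_next = H.max(axis=1)                                   # U_{n+1}(S) = max_A H_{n+1}(S, A)
    if np.allclose(U_next, U):                               # stop once the values no longer change
        break
    U = U_next

print(U)       # [ 0.   -1.   -1.5  -1.75]
print(policy)  # [0 0 0 0]  -> Left everywhere (ties broken towards Left)
```

The iteration settles at U = (0, -1, -1.5, -1.75) with the all-Left policy, consistent with the trend shown in the table.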
An MDP application: the mobile energy gateway

• Scenario: wireless charging via RF energy transfer. [Image credit: Forbes]
• RF energy transmission basics: RF energy Tx/Rx, Friis' formula, beamforming; commercial hardware such as the Powercaster Tx and Rx.
• System model:
  • Electricity chargers: at different fixed locations, e.g., power outlets, base stations.
  • End users of energy: those who need energy but are not covered by chargers.
  • Mobile energy gateway: moving and charging/transferring energy (wirelessly).
• Buy/sell energy:
  • The energy gateway buys from chargers (charging); each charger asks a certain price when charging.
  • The energy gateway sells to end users (transferring); more users, more payments; a near user gets more energy, thus higher payments.

MDP formulation

• State: S = {S = (L, E, N, P)}; the number of end users N decides the end-user payment.
• Action: A = {A = 0, 1, 2}.
• Together with the transition probabilities and the reward function, this specifies the tuple ⟨S, A, P, U, γ⟩.
• Reward: the payment collected from the n-th nearest end user depends on its random distance l; the density f(n, l | N) of that distance is expressed through the Beta functions B(n + 2/3, N - n + 1) and B(N - n + 1, n), and the expected payment R(n, E_S) is obtained by integrating the per-user payment over this density (the energy a user receives, and hence its payment, falls off with the square of the distance). Summing over n gives the overall payment.
• Transition probabilities: action-dependent matrices; for example, one row of P(A = 1) contains the entries 0.3 and 0.7, while the corresponding row of P(A = 0) contains 1.0 and 0.0.

Solving and evaluating the model

• Solve with the Bellman equation and the value iteration algorithm.
• pymdptoolbox can solve MDPs in Python (a usage sketch is given at the end of this section); mdptoolbox is the Matlab counterpart.
• Baseline schemes used for comparison:
  • Greedy scheme [GRDY]: maximizing immediate utility.
  • Random scheme [RND]: randomly taking any action (i.e., 0, 1, 2) from the action set.
  • Location-aware scheme [LOCA]: charging at a charger, transferring at end users.
  • Location-aware random scheme [LRND]: randomly taking 0 and 1 at the charger side; randomly taking 0 and 2 at the end-user side.
• [Figure: numerical results comparing the MDP policy with the greedy scheme and the other baselines.]
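As a concrete illustration of the pymdptoolbox route mentioned above, the following sketch feeds the small Left/Right example from earlier (the full energy-trading matrices are not reproduced in this preview) to the toolbox; the reward array uses the same -1-per-step assumption as before, and the call follows pymdptoolbox's ValueIteration interface.

```python
import numpy as np
import mdptoolbox.mdp   # pymdptoolbox: pip install pymdptoolbox

# Transitions: shape (A, S, S); rewards: shape (S, A).
P = np.array([
    [[1, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]],   # A = 0: Left
    [[1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 1]],   # A = 1: Right
], dtype=float)
R = np.array([[0, 0], [-1, -1], [-1, -1], [-1, -1]], dtype=float)

vi = mdptoolbox.mdp.ValueIteration(P, R, discount=0.5)
vi.run()
print(vi.V)       # optimal values U(S)
print(vi.policy)  # optimal policy pi*(S) as action indices (0 = Left, 1 = Right)
```

pymdptoolbox also provides a PolicyIteration class taking the same (P, R, discount) arguments if policy iteration is preferred; the Matlab mdptoolbox mentioned above offers equivalent solvers.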
