
Representation Of Limit Values For Nonexpansive Stochastic Control And Stochastic Differential Games

Posted on: 2019-08-01    Degree: Doctor    Type: Dissertation
Country: China    Candidate: N. N. Zhao    Full Text: PDF
GTID: 1360330572956687    Subject: Probability theory and mathematical statistics
Abstract/Summary:
In ergodic control problems and stochastic control problems one studies the limit of the value function λV_λ as the discount factor λ tends to zero. These problems have been well studied in the literature, and the assumptions used there guarantee that the value function λV_λ converges uniformly to a constant as λ→0; for details the reader may consult Arisawa [3], Arisawa, Lions [4], Artstein, Gaitsgory [5], Basak, Borkar, Ghosh [11], Borkar, Gaitsgory [18], Buckdahn, Ichihara [27], Lions, Papanicolaou, Varadhan [75], and Richou [93]. On the other hand, Buckdahn, Goreac, Quincampoix [23], Quincampoix, Renault [92], and Cannarsa, Quincampoix [28] used a nonexpansivity assumption, which differs from the ergodic case: under it the limit value function can depend on the initial value x. Building on these works, we define the value function V_λ through the associated discounted cost functional with infinite time horizon and use PDE methods to study the convergence of the value function under the nonexpansivity condition. In our work the radial monotonicity of the Hamiltonian plays a key role. The objective of the first part of this work is to study these problems under the nonexpansivity assumption, under which the limit function is not necessarily constant. Our discussion goes beyond the stochastic control problem with infinite time horizon and also covers V_λ
given by a second-order Hamilton-Jacobi-Bellman (HJB) equation which is not necessarily associated with a stochastic control problem. Moreover, the stochastic control case considerably generalizes earlier works by considering cost functionals defined through a backward stochastic differential equation (BSDE) with infinite time horizon, and we give an explicit representation formula for the limit of λV_λ as λ→0.

In the second part of this thesis we study this problem for the lower value function V_λ of a stochastic differential game with recursive cost, i.e., the cost functional is defined through a BSDE with infinite time horizon. Unlike the ergodic control approach, we are interested in the case where the limit can be a function depending on the initial condition. For this we extend the so-called nonexpansivity assumption from control problems to stochastic differential games, and we derive that λV_λ(·) is bounded and Lipschitz uniformly with respect to λ>0. Using PDE methods and assuming radial monotonicity of the Hamiltonian of the associated Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation, we obtain the monotone convergence of λV_λ(·) and characterize its limit W_0 as the maximal viscosity subsolution of a limit PDE. Using BSDE methods we prove that W_0 satisfies a uniform dynamic programming principle involving a supremum and an infimum with respect to time, and this is the key to an explicit representation formula for W_0.

Let us introduce the content and structure of this thesis. Chapter 1 is the introduction. In Chapter 2 we consider general HJB equations which are not necessarily related to a stochastic control problem. We define the value function V_λ(x) via the solution of a BSDE on the infinite time interval [0,∞). Unlike the analysis of the limit function in ergodic stochastic problems under an ergodicity condition, we first introduce the nonexpansivity condition, then study the monotone convergence of λV_λ
under the nonexpansivity assumption, and finally characterize the limit of the value function V_λ as the maximal viscosity solution of some HJB equation. The novelty of this chapter: we introduce a new stochastic nonexpansivity condition in the stochastic control framework and establish its relationship with the nonexpansivity condition. Moreover, we give a characterisation of the limit of the value function which extends the results for control problems in Cannarsa and Quincampoix [28] to stochastic control problems.

In Chapter 3 we use the same framework as Chapter 2, but now with the Hamiltonian H related to the stochastic control problem; we study the limit behaviour of the optimal value of a discounted cost functional with infinite time horizon as the discount factor λ>0 tends to zero. The novelty of this chapter: our main results characterize the value function V_λ as the unique viscosity solution on Θ̄ of the associated HJB equation and establish the dynamic programming principle (DPP) for V_λ. Moreover, still in the stochastic control case, the HJB equation satisfied by w_0(x) is studied and an explicit formula for w_0(x) is given with the help of the g-expectation, a nonlinear expectation introduced by Peng in [87]. Chapters 2 and 3 of the present thesis are based on: J. Li, N. Zhao. Representation of asymptotic values for nonexpansive stochastic control systems. Submitted, https://arxiv.org/abs/1708.02335.

In Chapter 4 we generalize the results of Chapter 2 to stochastic differential games; that is, we consider general HJBI equations which are not necessarily related to a stochastic differential game. Inspired by the paper of Buckdahn and Li [20], the associated cost functional is recursive, i.e., it is defined through a backward stochastic differential equation, but unlike [20] now with infinite time horizon. We mainly study the characterisation of the limit of the lower value function V_λ. Unlike Chapter 2, the coefficient ψ
of the backward stochastic differential equation discussed in this chapter depends also on y. We also introduce a new nonexpansivity condition in the setting of our stochastic differential game. The novelty of this chapter: we prove the existence and uniqueness of the solution of the BSDE on the infinite time interval when the Lipschitz condition on the driving coefficient ψ with respect to y is replaced by a continuity and a monotonicity condition, and we show that the nonexpansivity condition implies the new stochastic nonexpansivity condition introduced for our stochastic differential games. Then we show that the value function is no longer restricted to being the constrained viscosity solution of an HJB equation on a compact set Θ̄ ⊂ R^N: the lower value function V_λ defined in our work is the unique viscosity solution of a second-order HJBI equation on all of R^N. Moreover, we give a characterisation of the limit function under the radial monotonicity condition.

In Chapter 5, using the same framework as Chapter 4 but now with the Hamiltonian H related to the stochastic differential game, we study the limit behaviour of the lower value function V_λ of a discounted cost functional with infinite time horizon, supposing that our nonexpansivity condition is satisfied; the coefficient ψ of the BSDE, unlike in the stochastic control framework, depends also on y and on a second control process v. We characterize V_λ as a viscosity solution of the associated HJBI equation. Moreover, to give a stochastic characterisation of W_0(x) = lim_{λ→0} λV_λ(x), we first present the dynamic programming principle (DPP) of the lower value function V_λ,
which allows us to derive it with the help of the notion of backward stochastic semigroup introduced by Peng [88] and extended to stochastic differential games in [20]. We then prove the representation formula for W_0. The novelty of this chapter: we generalize the results of Chapter 3 to the stochastic differential game; we prove the representation formula for the limit of the lower value function V_λ by using the notions of the backward stochastic semigroup and the limit backward stochastic semigroup. Chapters 4 and 5 of the present thesis are based on: R. Buckdahn, J. Li, N. Zhao. Representation of limit values for nonexpansive stochastic differential games. Preprint.

This thesis consists of five chapters; we now give an outline of its structure and main conclusions. Chapter 1: Introduction; Chapter 2: Characterisation of asymptotic values for general Hamilton-Jacobi-Bellman equations; Chapter 3: Representation of asymptotic values for nonexpansive stochastic control; Chapter 4: Characterisation of limit values for general Hamilton-Jacobi-Bellman-Isaacs equations; Chapter 5: Representation of limit values for nonexpansive stochastic differential games.

Chapter 2: We define the value function of the stochastic control system with infinite time horizon and introduce the new stochastic nonexpansivity condition for stochastic control problems; moreover, we give a characterisation of the limit of the value function. Stochastic control system: given a function ψ: R^N × R^d × U → R, for any λ>0 we consider a BSDE on the infinite time interval [0,∞) and the associated controlled stochastic system. Lemma 2.1.1: Under our standard assumptions (H1), for every control u ∈ U, the controlled stochastic system has a unique R^N-valued continuous, F-adapted solution X^{x,u} = (X_t^{x,u})_{t≥0}. Moreover, for all T>0 and k≥2 there is a constant C_k(T)>0 such that E[sup_{0≤t≤T} |X_t^{x,u}|^k] ≤ C_k(T)(1+|x|^k). Proposition 2.1.1: Under the assumptions (H1) and (H2), the above BSDE on the infinite time interval [0,∞) has a unique solution (Y^{λ,x,u}, Z^{λ,x,u}) ∈
L_F^∞(0,∞;R) × H²_loc(R^d); moreover, |Y_t^{λ,x,u}| ≤ M/λ, t≥0. We now introduce the value function V_λ(x) := inf_{u∈U} Y_0^{λ,x,u}, x ∈ Θ̄. In order to study the value function V_λ and its limit, we introduce the new stochastic nonexpansivity condition and establish its relationship with the nonexpansivity condition. Proposition 2.2.1: Under the assumptions (H1) and (H2), the nonexpansivity condition (H3) implies the stochastic nonexpansivity condition (H4). Next we give properties of the value function V_λ. Lemma 2.3.1: Suppose that (H1), (H2) and (H3) hold. Then the family of functions {λV_λ}_{λ>0} is equicontinuous and equibounded on Θ̄; indeed, the corresponding uniform estimates hold, for the constants M>0 defined in (H2), for all λ>0 and all x, x′ ∈ Θ̄.

In this chapter we consider a Hamiltonian H: R^N × R^N × S^N → R not necessarily related to a stochastic control problem, where S^N denotes the set of symmetric N×N matrices; the Hamiltonian H is assumed to be uniformly continuous. We consider the PDE λV(x) + H(x, DV(x), D²V(x)) = 0, x ∈ Θ̄. Theorem 2.4.3: Suppose that, in addition to (A_Θ̄), (AN) and (H), the Hamiltonian H satisfies the radial monotonicity condition (H5). For all λ>0, let V_λ be the constrained viscosity solution of the above PDE such that λV_λ ∈ Lip_M(Θ̄). Then (i) λ ↦ λV_λ(x) is nondecreasing for every x ∈ Θ̄; (ii) the limit lim_{λ→0+} λV_λ(x) exists for every x ∈ Θ̄; (iii) the convergence in (ii) is uniform on Θ̄. Lemma 2.4.1: Let H(x,p,A) be convex in (p,A) ∈ R^N × S^N. Then the following are equivalent: i) the radial monotonicity condition (H5) holds for H(x,·,·); ii) H(x, l′p, l′A) ≥ H(x, lp, lA), 0 ≤ l ≤ l′, (p,A) ∈ R^N × S^N; iii) H(x,p,A) ≥ H(x,0,0), (p,A) ∈ R^N × S^N. Theorem 2.4.4: Under the same assumptions as in Theorem 2.4.3, for all λ>0 let V_λ be the unique constrained viscosity solution of the PDE such that λV_λ ∈
Lip_{M_0}(Θ̄), for some M_0>0 large enough and independent of λ; then the limit of λV_λ satisfies a corresponding limit identity, which Corollaries 2.4.1 and 2.4.2 make explicit, for all x ∈ Θ̄, under the same assumptions.

Chapter 3: We mainly study the limit behaviour of the optimal value of a discounted cost functional with infinite time horizon as the discount factor λ>0 tends to zero. We first prove that V_λ is the unique constrained viscosity solution of some HJB equation, and then give an explicit formula for w_0(x) := lim_{λ→0} λV_λ(x). Within the framework of Chapter 2 we consider the Hamiltonian H of the control form H(x,p,A) = max_{u∈U} { ⟨−p, b(x,u)⟩ − ½ tr(σσ*(x,u)A) − ψ(x, pσ(x,u), u) }, where (x,p,A) ∈ R^N × R^N × S^N. Proposition 3.1.1: Under the assumptions (H1), (H2) and (H3), the value function V_λ is a viscosity solution of the Hamilton-Jacobi-Bellman equation λV(x) + H(x, DV(x), D²V(x)) = 0, x ∈ Θ̄, where H(x,p,A) is defined as above. To prove Proposition 3.1.1, following Peng's method, we need to recall the notion of the stochastic backward semigroup from Peng [88]. Stochastic backward semigroup: given the initial value x at time t = 0 of the SDE introduced in Chapter 2, u(·) ∈ U and ξ ∈ L²(Ω, F_t, P), we define a stochastic backward semigroup: for given λ>0, x ∈ Θ̄, u ∈ U and t ∈ R_+, we put G_{0,t}^{λ,x,u}[ξ] := Ỹ_0^λ, where (Ỹ_s^λ)_{s∈[0,t]} is the unique solution of the associated BSDE over [0,t]. Proposition 3.1.2 (DPP): Under the assumptions (H1), (H2) and (H3), for all λ>0, x ∈ R^N and t>0, the dynamic programming principle holds. Proposition 3.1.3: Assume (H1). Let H1, H2: R^N × R^N × S^N → R be two Hamiltonians of the above form with ψ=ψ1 and ψ=ψ2, respectively, where ψ1 and ψ2 are assumed to satisfy (H2). Suppose that u ∈ USC(Θ̄) is a viscosity subsolution of the PDE with Hamiltonian H1 and v ∈
LSC(Θ̄) is a viscosity supersolution of the PDE with Hamiltonian H2; then the corresponding comparison inequality between u and v holds. Theorem 3.2.1: Suppose that the assumptions (H1), (H2) and (H3) hold and, moreover, that there is a concave increasing function ρ: R_+ → R_+ with ρ(0+)=0 controlling the continuity of ψ in the control variable (recall that d is the metric we consider on the control state space U). Then, along a suitable subsequence 0<λ_n→0, the uniform limit w(x) = lim_{λ→0+} λV_λ(x) exists and is a viscosity solution, in the sense of Definition 2.4.1, of the limit equation associated with h, where h(x,p,A) = max_{u∈U} { ⟨−p, b(x,u)⟩ − ½ tr(σσ*(x,u)A) − ψ(x, pσ(x,u), u) }. The function ρ is described in Chapter 3. Theorem 3.2.2: Suppose that the assumptions (H1), (H3) and (H5) hold true. Consider the case ψ(x,z,u) = ψ1(x,u) + g(z), where ψ1: Θ̄ × U → R is bounded (by M) and uniformly continuous, while g: R^d → R is Lipschitz (with Lipschitz constant K_z), positively homogeneous, concave and satisfies g(0)=0. For ξ ∈ L²(F_t), we consider the BSDE y_s = ξ + ∫_s^t g(z_r) dr − ∫_s^t z_r dW_r, s ∈ [0,t], and define the nonlinear expectation ε^g[ξ] := y_0. Then the uniform limit exists and is represented in terms of ε^g.

Chapter 4: We extend the results of Chapter 2 to the stochastic differential game, i.e., to the lower value function V_λ defined by a discounted cost functional with infinite time horizon in the stochastic differential game. We first prove that the nonexpansivity condition implies the new stochastic nonexpansivity condition introduced for our stochastic differential games; then we characterize the limit function W_0 := lim_{λ→0} λV_λ as the maximal viscosity subsolution of some HJBI equation. For any given λ>0 we consider a BSDE on the infinite time interval [0,∞). Proposition 4.1.1: Under the assumptions (A1), this BSDE on [0,∞) has a unique solution (Y^λ, Z^λ) ∈ L_F^∞(0,∞;R) × H²_loc(R^d); moreover, |Y_t^λ| ≤ M/λ, t≥0. Before proving this proposition we introduce a technical lemma; for similar methods the reader may refer to Barlow, Perkins [10], Du, Li, Wei [40] and Lepeltier, San Martin [70]. Lemma 4.1.1: Let ψ: R_+ × Ω
× R × R^d → R satisfy assumption (A1) and, for all n≥1, put ψ_n(t, y, z) := sup_{y′∈R} { ψ(t, y′, z) − n|y − y′| }, in the spirit of Lepeltier, San Martin [70]. Then, for all n≥1, ψ_n: R_+ × Ω × R × R^d → R satisfies (A1), is Lipschitz with respect to y with Lipschitz constant n, and converges pointwise to ψ; this pointwise convergence is non-increasing and bounded by M. The above lemma reduces the proof of Proposition 4.1.1 to the following lemma. Lemma 4.1.2: Let the coefficient ψ satisfy (A1) and be Lipschitz with respect to y with Lipschitz constant K_y. Then the above BSDE has a unique solution (Y^λ, Z^λ) ∈ L_F^∞(0,∞;R) × H²_loc(R^d); moreover, |Y_t^λ| ≤ M/λ, t≥0. The uniqueness is a direct consequence of the following comparison result. Lemma 4.1.3: Let the coefficients ψ_i, i=1,2, satisfy (A1), suppose they are Lipschitz with respect to y (with some Lipschitz constant K_y), and let ψ1 ≤ ψ2. Then, if (Y^i, Z^i) denotes the solution of the above BSDE with coefficient ψ_i, we have Y^1_t ≤ Y^2_t, t≥0.

Stochastic differential game: for each couple of controls (u,v) ∈ U × V we consider the controlled stochastic system and, for any λ>0, x ∈ R^N and (u,v) ∈ U × V, the following BSDE on the infinite time interval [0,∞). Lemma 4.2.1: Under the assumptions (C1), for all couples of controls (u,v) ∈ U × V, the controlled stochastic system has a unique R^N-valued continuous, F-adapted solution X^{x,u,v} = (X_t^{x,u,v})_{t≥0}. Moreover, for all T>0 and k ≥
2, there is a constant C_k(T)>0 such that E[sup_{0≤t≤T} |X_t^{x,u,v}|^k] ≤ C_k(T)(1+|x|^k). From Proposition 4.1.1 we know that there is a unique solution (Y^{λ,x,u,v}, Z^{λ,x,u,v}) ∈ L_F^∞(0,∞;R) × H²_loc(R^d) with |Y_t^{λ,x,u,v}| ≤ M/λ, t≥0, and we can define the lower and the upper value functions associated with the stochastic differential game: the lower value function V_λ and the upper value function U_λ. Then we introduce the new stochastic nonexpansivity condition, extending that of [23] and of Chapter 2 to our setting of stochastic differential games, and relate it to the nonexpansivity condition as follows. Theorem 4.3.1: Under the assumptions (C1) and (C2), the nonexpansivity condition (C3) implies the stochastic nonexpansivity condition (C4). Next we give properties of the lower value function V_λ. Lemma 4.4.1: Suppose that (C1), (C2) and (C3) are satisfied. Then the family of functions {λV_λ}_{λ>0} is equicontinuous and equibounded on R^N; indeed, explicit bounds hold in terms of the constants from (C2).

In this chapter we make a more general discussion by considering a Hamiltonian H: R^N × R × R^N × R^N × S^N → R which is not necessarily related to our stochastic differential game; here S^N denotes the set of symmetric N×N matrices, and H is assumed to be uniformly continuous. Theorem 4.5.1: Let the Hamiltonian H satisfy the assumptions (AH), (H) and the radial monotonicity condition (RM). Then (i) λ ↦ λV_λ(x) is nondecreasing for all x ∈ R^N; (ii) the limit W_0(x) := lim_{λ→0+} λV_λ(x) exists for all x ∈ R^N; (iii) the convergence in (ii) is uniform on compacts of R^N. Lemma 4.5.1: Let H(x,r,p,A) be convex in (p,A) ∈ R^N × S^N. Then the following are equivalent: i) the radial monotonicity condition (RM) is satisfied by H(x,r,·,·); ii) H(x, r, l′p, l′A) ≥ H(x, r, lp, lA), 0 ≤ l ≤ l′, (p,A) ∈ R^N × S^N; iii) H(x,r,p,A) ≥ H(x,r,0,0), (p,A) ∈ R^N × S^N. Theorem 4.5.2: Under the same assumptions as in Theorem 4.5.1, for every λ>0 let V_λ denote the unique viscosity solution of the PDE λV(x) + H(x, λV(x), DV(x), D²V(x)) = 0, x ∈ R^N, such that λV_λ ∈
Lip_{M_0}(R^N), for some M_0>0 large enough and independent of λ; then W_0 satisfies the corresponding limit identity. Corollary 4.5.2: If, in addition to the assumptions of Theorem 4.5.2, a further condition holds for all x, then W_0(·) must be constant on R^N.

Chapter 5: We study the convergence problem of the value function for the stochastic differential game introduced in Chapter 4. We first prove that the lower value function V_λ is the unique viscosity solution of some Hamilton-Jacobi-Bellman-Isaacs (HJBI, for short) equation; then we give the representation formula for W_0 := lim_{λ→0} λV_λ by using the dynamic programming principle of V_λ in our stochastic differential game. Within the framework of Chapter 4 we consider the Hamiltonian H of the Isaacs form. Theorem 5.1.1: Under the assumptions (C1), (C2) and (C3), the lower value function V_λ of the stochastic differential game is a viscosity solution of the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation λV(x) + H(x, λV(x), DV(x), D²V(x)) = 0, x ∈ R^N, where H(x,y,p,A) is defined as above; moreover, the solution is unique in the class of uniformly continuous functions on R^N. Backward stochastic semigroup: let ψ: R^N × R × R^d × U × V → R satisfy the assumption (C2). Then, given λ>0 and (x,u,v) ∈ R^N × U × V, we define for any finite time horizon t≥0 the backward stochastic semigroup through the unique solution of the associated BSDE over [0,t]. The following dynamic programming principle for the lower value function V_λ is formulated by means of the backward stochastic semigroup introduced above. Proposition 5.2.1 (DPP): Under our standard assumptions (C1), (C2) and (C3), the lower value function V_λ of the stochastic differential game satisfies the corresponding DPP, for all t>0, x ∈ R^N and all λ>0. The limit backward stochastic semigroup: passing to the limit in the coefficients, we define the limit backward stochastic semigroup through its generating BSDE. Lemma 5.2.1 establishes its basic properties under the assumptions (C2) and (C2′). Remark 5.2.2: In fact, as the properties stated in the lemma show, the limit semigroup defines a conditional g-expectation, introduced and first studied by Peng [87]; in particular, for s = 0 we obtain the g-expectation ε^g[ξ], ξ ∈ L²(F_t;R), 0 ≤
s ≤ t. The interested reader is referred to that paper. Using the limit backward stochastic semigroup introduced above, we give the following dynamic programming principle for the limit value function W_0. Theorem 5.2.1: Suppose that the assumptions (C1), (C2), (C2′), (C3) and (RM) hold true. Then W_0(x) = lim_{λ→0} λV_λ(x), x ∈ R^N, satisfies the DPP. Moreover, if z ↦ ψ(x,z,u) is concave for all (x,u), together with a further condition stated there, then W_0(·) has the representation formula given in the theorem (recall that ψ_0(x) = min_{u∈U} ψ(x,0,u)). Theorem 5.2.2: Assume that (C1), (C2), (C2′), (C3) and (RM) are satisfied. Then we have the following strong version of the DPP: ...
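For orientation, the central objects described above can be summarized in one schematic display. This is an illustrative reconstruction, not a verbatim statement from the thesis: b and σ are the coefficients of the controlled state equation, ψ is the driver of the recursive cost, U is the set of admissible controls and W the driving Brownian motion, all as in Chapter 2.

```latex
\begin{aligned}
  &\text{controlled state:} &
    dX^{x,u}_s &= b(X^{x,u}_s,u_s)\,ds + \sigma(X^{x,u}_s,u_s)\,dW_s,
    \qquad X^{x,u}_0 = x,\\[2pt]
  &\text{recursive discounted cost:} &
    Y_t &= Y_T + \int_t^T \bigl(\psi(X^{x,u}_s,Z_s,u_s)-\lambda Y_s\bigr)\,ds
            - \int_t^T Z_s\,dW_s,\quad 0\le t\le T,\\[2pt]
  &\text{value function:} &
    V_\lambda(x) &= \inf_{u\in\mathcal U} Y^{\lambda,x,u}_0,\\[2pt]
  &\text{limit problem:} &
    \lambda V_\lambda(x) &\;\longrightarrow\; W_0(x)
    \qquad\text{as } \lambda\to 0^+ .
\end{aligned}
```

Under an ergodicity assumption the limit W_0 is a constant; under the nonexpansivity assumption studied here it may genuinely depend on the initial state x, which is what the representation formulas of Chapters 3 and 5 describe.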
Keywords/Search Tags: Stochastic nonexpansivity condition, Value function, Limit value, BSDE, Stochastic control system, Stochastic differential games, Radial monotonicity of Hamiltonians, Backward stochastic semigroup, The limit backward stochastic semigroup