
Some Backward Problems In Stochastic Control And Game Theory

Posted on: 2009-07-17 | Degree: Doctor | Type: Dissertation
Country: China | Candidate: Z Y Yu | Full Text: PDF
GTID: 1100360245494125 | Subject: Probability theory and mathematical statistics
Abstract/Summary:
Backward stochastic differential equations (BSDEs) describe how to steer a system to an anticipated objective in a randomly perturbed environment. The theory of BSDEs is widely applied in stochastic control and game theory, mathematical finance, partial differential equations, and the theory of nonlinear expectations built upon it. The objective of this thesis is to improve and enrich the theory of BSDEs and to study the corresponding backward problems in stochastic control and game theory. In these problems, BSDEs are used to describe either the cost (or utility) functionals or the control systems themselves. The theory of BSDEs plays the key role here, and a BSDE is itself a kind of stochastic control problem, so improving and enriching the theory of BSDEs advances the study of the related control and game problems. Chapters 2 and 3 of this thesis are devoted to the theory of BSDEs.

In Chapter 2 we obtain a foundational conclusion: for BSDEs, uniqueness of the solution and continuous dependence of the solution on the data are equivalent. When the coefficient g of the BSDE satisfies the Lipschitz condition, the continuous dependence is expressed by an a priori estimate of the form

E[ sup_{0≤t≤T} |y_t^ζ − y_t^{ζ'}|^2 + ∫_0^T |z_t^ζ − z_t^{ζ'}|^2 dt ] ≤ C E|ζ − ζ'|^2,

from which fruitful results are derived. Our result, which can be regarded as an analogue of this estimate in some sense, provides a useful method for studying BSDEs with non-Lipschitz coefficients.

Unlike a (forward) stochastic differential equation, a BSDE has as its solution a pair of adapted processes (Y, Z). Up to now most research has focused on the first component Y, but understanding Z is equally important. In Chapter 3 we study some basic properties of the second component Z, which may be interpreted as a risk-adjustment factor or a control strategy: boundedness, the backward stochastic viability property, and comparison properties. In the pricing theory of contingent claims, Z represents the portfolio. Our results characterize clearly whether the portfolio process is positive or negative, provide bounds for it, and yield comparison results between portfolios. As another application of the boundedness of Z, we treat a stochastic game problem raised by Bensoussan and Frehse [6].

In stochastic control theory there are cost functionals described by the solution of a BSDE. For instance, in utility theory economists use solutions of BSDEs to describe recursive utilities, and maximizing such a utility leads to a recursive optimal control problem, studied by Peng [59; 74]. In practice an investor sometimes requires his/her utility to be larger than a given function of the wealth; this recursive utility with an obstacle constraint is described by the solution of a reflected BSDE, and correspondingly a recursive optimal control problem with an obstacle constraint on the cost functional arises. In financial markets, when the loan interest rate is higher than the deposit interest rate, the pricing problem of American contingent claims is an example of this kind of control problem. In Chapter 4 of this thesis we consider this recursive optimal control problem with an obstacle constraint on the cost functional. We establish the dynamic programming principle and prove that the value function is the unique viscosity solution of the corresponding HJB equation. This work is inspired by Peng [74].
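The object at the center of the thesis is the pair of adapted processes (Y, Z) solving a BSDE. To make it concrete, the following is a minimal numerical sketch based on a standard least-squares Monte Carlo discretization, with conditional expectations replaced by polynomial regression; the function solve_bsde, the regression degree and the linear test driver are illustrative assumptions, not constructions from the thesis.

```python
import numpy as np

# Minimal least-squares Monte Carlo solver for a one-dimensional BSDE
#     -dY_t = f(t, Y_t, Z_t) dt - Z_t dW_t,   Y_T = xi(W_T),
# on a uniform time grid.  Conditional expectations E[ . | W_{t_i}] are
# approximated by polynomial regression on W_{t_i}.  All names and
# parameters are illustrative, not taken from the thesis.

def solve_bsde(f, xi, T=1.0, n_steps=50, n_paths=20000, deg=3, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

    Y = xi(W[:, -1])                       # terminal condition Y_T = xi(W_T)
    for i in range(n_steps - 1, -1, -1):
        x, t = W[:, i], i * dt             # Markovian state at time t_i
        if i == 0:                         # F_0 is trivial: plain sample means
            Z = np.full(n_paths, np.mean(Y * dW[:, i]) / dt)
            EY = np.full(n_paths, np.mean(Y))
        else:                              # regression-based conditional expectations
            Z = np.polyval(np.polyfit(x, Y * dW[:, i], deg), x) / dt
            EY = np.polyval(np.polyfit(x, Y, deg), x)
        Y = EY + f(t, EY, Z) * dt          # explicit backward Euler step
    return Y[0], Z[0]                      # values at t = 0

# Sanity check with a linear driver f(t, y, z) = a*y + b*z and Y_T = W_T,
# whose exact solution satisfies Y_0 = exp(a*T) * b * T and Z_0 = exp(a*T).
a, b, T = 0.2, 0.5, 1.0
y0, z0 = solve_bsde(lambda t, y, z: a * y + b * z, lambda w: w, T=T)
print("numerical (Y_0, Z_0) =", (y0, z0))
print("exact     (Y_0, Z_0) =", (np.exp(a * T) * b * T, np.exp(a * T)))
```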
Since a BSDE is a well-defined dynamic system, it is natural and appealing, first at the theoretical level, to consider stochastic control and game problems in which BSDEs are used to describe the control systems. We call these backward stochastic control problems and backward stochastic game problems. As for applications: a person who, subject to achieving an anticipated objective, wishes to minimize his/her cost (or maximize his/her utility) faces a backward stochastic control problem; several persons who cooperate to achieve a common goal while keeping their own individual interests play a cooperation game that can be viewed as a backward stochastic game. However, the literature on backward stochastic control is quite sparse, and the field of backward stochastic games was blank before this thesis. In Chapter 5 we consider one important class of backward stochastic control and game problems (we also consider the general partially coupled forward-backward case): the linear-quadratic (LQ) problems. We obtain the explicit form of the unique optimal control (for the control problem) and of the unique Nash equilibrium point (for the game problem).

This thesis consists of five chapters. In the following we list its main results.

Chapter 1: We introduce the problems studied in Chapters 2 to 5.

Chapter 2: We study the equivalence between uniqueness and continuous dependence of the solution for BSDEs with continuous coefficients. Like the corresponding property of ordinary differential equations (ODEs), this is a foundational conclusion for the theory of BSDEs. The main results are Theorem 2.2.1 for a simple case and Theorem 2.3.4 for the general case.

Theorem 2.2.1. If Assumptions (H2.1)-(H2.3) hold for g, then the following two statements are equivalent.
(i) Uniqueness: BSDE (2.1) has a unique solution.
(ii) Continuous dependence with respect to ζ: for any {ζn}n=1∞, ζ ∈ L2(Ω, FT, P; R) with ζn → ζ in L2(Ω, FT, P; R) as n → ∞, the solutions (yζn(·), zζn(·)) converge to (yζ(·), zζ(·)), where (yζ(·), zζ(·)) is any solution of BSDE (2.1) and (yζn(·), zζn(·)) are any solutions of the BSDEs (g, T, ζn).

Theorem 2.3.4. If gλ satisfies (H2.1')-(H2.4'), then the following statements are equivalent:
(iii) Uniqueness: BSDE (2.8) has a unique solution when λ = λ0, that is, the solution of (gλ0, T, ζλ0) is unique.
(iv) Continuous dependence with respect to g and ζ: for any ζλ, ζλ0 ∈ L2(Ω, FT, P; R) with ζλ → ζλ0 in L2(Ω, FT, P; R) as λ → λ0, the solutions (yλ(·), zλ(·)) of BSDEs (2.8) converge to (yλ0(·), zλ0(·)), where (yλ(·), zλ(·)) are any solutions of BSDEs (2.8) and (yλ0(·), zλ0(·)) is any solution of BSDE (2.8) when λ = λ0.

Chapter 3: Using Malliavin calculus, we study several properties of the second component Z of the solution of a BSDE: boundedness, the backward stochastic viability property (BSVP), and a comparison property.

Proposition 3.2.1. (Boundedness) Let Assumptions (A3.1) and (A3.2) hold. If Dθζ and Dθg are bounded, then DθY is bounded by a constant C; in particular, Zθ = DθYθ is bounded.

Theorem 3.2.7. (BSVP) Suppose that g satisfies (A3.1)-(A3.3). If, for all 0 ≤ θ ≤ t ≤ T, all z ∈ Rm×d×d and all y ∈ Rm×d, the squared distance dK2(·) is twice differentiable at y and g satisfies the corresponding viability condition, then the solution Z of BSDE (3.1) enjoys the BSVP in K.

Theorem 3.2.12. (Comparison Property) Suppose that g1 and g2 satisfy (A3.1)-(A3.3). For any 0 ≤ θ ≤ τ ≤ T and any ζ1, ζ2 ∈ (D1,2)m ∩ L2(Ω, Fτ, P) with Dθζ1 ≥ Dθζ2, let (Yi, Zi) (i = 1, 2) be the unique solutions of BSDEs (3.19) over the time interval [0, τ]. If, for any t ∈ [0, τ], y, y' ∈ Rm×d and z, z' ∈ Rm×d×d, the corresponding inequality between g1 and g2 holds, then Zt1 ≥ Zt2 for t ∈ [0, τ].

We then apply these theoretical results to mathematical finance. In the pricing theory of contingent claims, Z represents the portfolio. Our results characterize clearly whether the portfolio process is positive or negative, provide bounds for it, and yield comparison results between portfolios.

At the end of this chapter, we study a stochastic nonzero-sum differential game problem coming from Bensoussan and Frehse [6]. In [6], using partial differential equation methods, they were only able to treat the game in the Markovian case. Using Malliavin calculus and the boundedness of Z, we obtain the explicit form of a Nash equilibrium in the non-Markovian case, which has practical meaning.

Theorem 3.5.2. Under Assumptions (H3.2)-(H3.5), u* = (u1*, ..., ui*, ..., uN*), where ui* is defined by (3.57), is a Nash equilibrium point for the stochastic nonzero-sum differential game problem; that is, Ji(x, u*) = Yi*(0) and Ji(x, u*) ≤ Ji(x, ui, (u-i)*) for the ith component ui of any admissible control u, i = 1, 2, ..., N, where (Yi*(·), Zi*(·)) is a solution of BSDEs (3.56).
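The financial reading of Z described above, namely that Z corresponds to the hedging portfolio in contingent claim pricing, can be illustrated in the simplest complete market. The sketch below works in the Black-Scholes model, where the pricing BSDE has Y_t = u(t, S_t) and Z_t = σ S_t u_x(t, S_t), and uses the closed-form delta of a European call to check the sign and bound of Z along a simulated path; this is an elementary illustration with made-up parameters, not the Malliavin-calculus argument of Chapter 3.

```python
import numpy as np
from scipy.stats import norm

# Black-Scholes illustration: for a European call, the Z-component of the
# pricing BSDE is Z_t = sigma * S_t * u_x(t, S_t) = sigma * S_t * N(d1),
# so Z_t / sigma is the wealth invested in the stock.  Since N(d1) lies in
# (0, 1), Z is positive and bounded by sigma * S_t -- the kind of sign and
# bound statements referred to above.  Parameters are illustrative.

def call_delta(t, s, K, T, r, sigma):
    d1 = (np.log(s / K) + (r + 0.5 * sigma**2) * (T - t)) / (sigma * np.sqrt(T - t))
    return norm.cdf(d1)

T, K, r, sigma, s0, n = 1.0, 1.0, 0.02, 0.3, 1.0, 250
dt = T / n
rng = np.random.default_rng(1)
dW = rng.normal(0.0, np.sqrt(dt), n)
tgrid = np.linspace(0.0, T, n + 1)
S = s0 * np.exp(np.concatenate([[0.0], np.cumsum((r - 0.5 * sigma**2) * dt + sigma * dW)]))
t, S = tgrid[:-1], S[:-1]                 # drop maturity so that T - t > 0 in d1

Z = sigma * S * call_delta(t, S, K, T, r, sigma)   # Z_t along the simulated path
print("Z stays positive:          ", bool(np.all(Z > 0)))
print("Z is bounded by sigma*S_t: ", bool(np.all(Z <= sigma * S)))
```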
Chapter 4: We study a recursive optimal control problem with an obstacle constraint on the cost functional, i.e. the cost functional of the control system is described by the solution of a reflected BSDE with one lower barrier. More precisely, we consider a controlled (forward) stochastic system, and the associated cost functional is given by the first component Yt,x;v(·) of the solution (Yt,x;v(·), Zt,x;v(·), Kt,x;v(·)) of a reflected BSDE. We maximize the cost functional and define the value function u(t, x) as the supremum of the cost functional over the admissible controls (see (4.10)).

This kind of recursive optimal control problem has practical meaning in financial markets: when the loan interest rate is higher than the deposit interest rate, the pricing problem of American contingent claims is an example of this kind of control problem. An interesting question is whether the celebrated dynamic programming principle still holds for this kind of optimal control problem. We prove some properties of reflected BSDEs and, following the idea and framework of Peng [74], use these properties and some analysis techniques to obtain the deterministic property and the continuity of the value function u as well as the general dynamic programming principle (DPP).

Proposition 4.2.6. (Deterministic Property) Under Assumptions (H4.2.1)-(H4.2.4), the value function u(t, x) defined in (4.10) is a deterministic function.

Lemma 4.2.8. (Continuity in x) For each t ∈ [0, T] and x, x' ∈ Rn, the difference |u(t, x) − u(t, x')| is controlled by a Lipschitz-type estimate in |x − x'|.

Theorem 4.2.11. (DPP) Under Assumptions (H4.2.1)-(H4.2.4), the value function u(t, x) obeys the dynamic programming principle: for each 0 < δ ≤ T − t, u(t, x) equals the supremum, over admissible controls v on [t, t + δ], of the value obtained by running the reflected backward system on [t, t + δ] with terminal data u(t + δ, Xt+δt,x;v).

Proposition 4.2.12. (Continuity in t) Under Assumptions (H4.2.1)-(H4.2.4), the value function u(t, x) defined by (4.10) is continuous in t.

Finally, using the penalization method and some techniques from the theory of viscosity solutions, we prove that u(t, x) is the unique viscosity solution of the corresponding general Hamilton-Jacobi-Bellman (HJB) equation with an obstacle, equation (4.20).

Theorem 4.3.6. (Existence) Assume that b, σ, g, Φ and h satisfy (H4.2.1)-(H4.2.4). Then u defined by (4.10) is a viscosity solution of HJB equation (4.20).

Theorem 4.3.10. (Uniqueness) Assume that b, σ, g, Φ and h satisfy (H4.2.1)-(H4.2.4). Then there exists at most one viscosity solution of HJB equation (4.20) in the class of continuous functions with at most polynomial growth at infinity.
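The obstacle constraint and the dynamic programming principle of Chapter 4 have a familiar discrete-time analogue: the binomial pricing of an American option, where the value at each node is the maximum of the immediate payoff (the obstacle) and the discounted conditional expectation of the next values (the DPP step). The sketch below uses a Cox-Ross-Rubinstein tree with illustrative parameters; it is only an analogy for intuition, not the reflected-BSDE construction of the thesis.

```python
import numpy as np

# Discrete analogue of "reflected BSDE + dynamic programming": the value of an
# American put on a binomial tree satisfies, at every node,
#     V = max( payoff, discounted expectation of the next-step values ),
# i.e. a backward induction reflected at the obstacle.  Parameters are illustrative.

def american_put_crr(s0=1.0, K=1.0, r=0.02, sigma=0.3, T=1.0, n=500):
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))            # Cox-Ross-Rubinstein up factor
    d = 1.0 / u
    disc = np.exp(-r * dt)
    p = (np.exp(r * dt) - d) / (u - d)         # risk-neutral up probability

    S = s0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(K - S, 0.0)                 # obstacle (payoff) at maturity
    for i in range(n - 1, -1, -1):
        S = s0 * u ** np.arange(i, -1, -1) * d ** np.arange(0, i + 1)
        cont = disc * (p * V[:-1] + (1 - p) * V[1:])   # DPP step: continuation value
        V = np.maximum(K - S, cont)                    # reflection at the obstacle
    return V[0]

print("American put value:", american_put_crr())
```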
Chapter 5: First, we consider the linear-quadratic (LQ) game problem for BSDEs. This kind of game problem, which generalizes the corresponding control problem of Lim and Zhou [47], can be used to describe cooperation games. For notational simplicity we consider only two players. The system is a linear BSDE controlled by both players, and each player is equipped with a quadratic cost functional. The problem is to find a pair (u1(·), u2(·)), called a Nash equilibrium point of the game, such that each ui(·) minimizes the ith cost functional when the other player's control is fixed. We link this problem to a linear FBSDE coupled through the initial condition. Using the continuation method, we obtain an existence and uniqueness result for this kind of FBSDE.

Theorem 5.1.3. Let (H5.1.1) and (H5.1.3) hold. Then there exists a unique adapted solution (X, Y, Z) of FBSDE (5.1).

Applying this result together with a transformation, we solve the backward LQ game problem and obtain the explicit form of the unique Nash equilibrium point.

Theorem 5.1.6. The pair (ut1, ut2) = ((N1)-1(B1)τxt1, (N2)-1(B2)τxt2), t ∈ [0, T], is a Nash equilibrium point for the above game problem, where (xt1, xt2, yt, zt) is the solution of the FBSDE (5.7) with different dimensions.

Second, using the same idea and method, we consider a more general problem, the LQ control and game problems for partially coupled FBSDEs, and obtain the corresponding conclusions.

Theorem 5.2.2. Assume (H5.2.1) and (H5.2.2). Then there exists a unique adapted solution (X, Q, P, Y, K, Z) of the FBSDE with double dimensions (DFBSDE) (5.10).

Theorem 5.2.4. The mapping ut = -Rt-1(Btτpt + Dtτkt - Htτqt), t ∈ [0, T], is the unique optimal control for the linear-quadratic control problem (5.17)-(5.18), where (xt, qt, pt, yt, kt, zt) is the solution of the DFBSDE (5.19).

Theorem 5.2.7. Assume that the dimension of x equals that of y: n = m.
(a) If Dt1 ≡ 0, Dt2 ≡ 0, Ht1 ≡ 0 and Ht2 ≡ 0 in system (5.20) and, for i = 1, 2, the matrix-valued process Bti(Rti)-1(Bti)τ is independent of t and satisfies the corresponding compatibility condition, then the mapping (ut1, ut2) = (-(Rt1)-1(Bt1)τpt1, -(Rt2)-1(Bt2)τpt2), t ∈ [0, T], is the unique Nash equilibrium point for the game problem (5.20)-(5.21), where (xt, qt1, qt2, pt1, pt2, yt, kt1, kt2, zt) is the unique solution of TFBSDE (5.23).
(b) If Bt1 ≡ 0, Bt2 ≡ 0, Ht1 ≡ 0 and Ht2 ≡ 0 in system (5.20) and, for i = 1, 2, the matrix-valued process Dti(Rti)-1(Dti)τ is independent of t and satisfies the corresponding compatibility condition, then the mapping (ut1, ut2) = (-(Rt1)-1(Dt1)τkt1, -(Rt2)-1(Dt2)τkt2), t ∈ [0, T], is the unique Nash equilibrium point for the game problem (5.20)-(5.21), where (xt, qt1, qt2, pt1, pt2, yt, kt1, kt2, zt) is the unique solution of TFBSDE (5.23).
(c) If Bt1 ≡ 0, Bt2 ≡ 0, Dt1 ≡ 0 and Dt2 ≡ 0 in system (5.20) and, for i = 1, 2, the matrix-valued process Hti(Rti)-1(Hti)τ is independent of t and satisfies the corresponding compatibility condition, then the mapping (ut1, ut2) = ((Rt1)-1(Ht1)τqt1, (Rt2)-1(Ht2)τqt2), t ∈ [0, T], is the unique Nash equilibrium point for the game problem (5.20)-(5.21), where (xt, qt1, qt2, pt1, pt2, yt, kt1, kt2, zt) is the unique solution of TFBSDE (5.23).
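The structure of the LQ results above, in which each player's stationarity condition is coupled with the other's and the coupled system is solved as a whole, already shows up in a finite-dimensional toy problem. The sketch below computes the Nash equilibrium of a one-step linear-quadratic game by stacking the two players' first-order conditions into a single linear system and then checks that unilateral deviations do not pay off; the matrices, dimensions and cost functions are illustrative assumptions, not the FBSDE machinery of Chapter 5.

```python
import numpy as np

# Toy finite-dimensional analogue of a two-player LQ Nash problem: a one-step
# linear "system" x = A x0 + B1 u1 + B2 u2 with quadratic costs
#     J_i(u1, u2) = 0.5 * x' Q_i x + 0.5 * u_i' R_i u_i,  i = 1, 2.
# A Nash point satisfies the coupled first-order conditions
#     B_i' Q_i x + R_i u_i = 0,  i = 1, 2,
# which stack into one linear system.  All matrices are illustrative.

rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.normal(size=(n, n))
B1, B2 = rng.normal(size=(n, m)), rng.normal(size=(n, m))
Q1, Q2 = np.eye(n), 2.0 * np.eye(n)        # positive semidefinite state weights
R1, R2 = np.eye(m), np.eye(m)              # positive definite control weights
x0 = rng.normal(size=n)

def cost(i, u1, u2):
    x = A @ x0 + B1 @ u1 + B2 @ u2
    Q, R, u = (Q1, R1, u1) if i == 1 else (Q2, R2, u2)
    return 0.5 * x @ Q @ x + 0.5 * u @ R @ u

# stacked first-order conditions for (u1, u2)
M = np.block([[R1 + B1.T @ Q1 @ B1, B1.T @ Q1 @ B2],
              [B2.T @ Q2 @ B1, R2 + B2.T @ Q2 @ B2]])
rhs = -np.concatenate([B1.T @ Q1 @ A @ x0, B2.T @ Q2 @ A @ x0])
u1, u2 = np.split(np.linalg.solve(M, rhs), 2)

# Nash check: unilateral deviations cannot lower a player's own cost
for _ in range(5):
    assert cost(1, u1 + 0.1 * rng.normal(size=m), u2) >= cost(1, u1, u2)
    assert cost(2, u1, u2 + 0.1 * rng.normal(size=m)) >= cost(2, u1, u2)
print("Nash equilibrium: u1 =", u1, " u2 =", u2)
```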
Keywords/Search Tags: backward stochastic differential equation, reflected backward stochastic differential equation, forward-backward stochastic differential equation, dynamic programming principle, viscosity solution, stochastic optimal control, stochastic game