
Backward Stochastic Differential Equations,G-Expectations And Related Topics

Posted on: 2013-01-12    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Q Lin    Full Text: PDF
GTID: 1110330374480799    Subject: Financial mathematics and financial engineering
Abstract/Summary:
The motivation for studying backward stochastic differential equations (BSDEs, for short) came originally from stochastic optimal control theory, and the theory of linear BSDEs was first studied by Bismut [9]. Pardoux and Peng [82] introduced nonlinear BSDEs of the form Y_t = ξ + ∫_t^T g(s, Y_s, Z_s) ds − ∫_t^T Z_s dW_s, t ∈ [0, T], where W is a Brownian motion. They obtained the existence and uniqueness of the solution under a Lipschitz condition on the driving coefficient. Since this pioneering work, the theory of BSDEs has developed quickly and dynamically. In particular, many works have weakened the Lipschitz condition on the driving coefficient and the integrability assumption on the terminal condition (see e.g. Bahlali [2], Briand and Confortola [21], Briand and Hu [23], Darling and Pardoux [30], El Karoui and Huang [37], Hamadene [46], Jia [58], Kobylanski [62], Lepeltier and San Martin [67], Mao [80] and the references therein). On the other hand, various forms of BSDEs have been developed and studied. These include reflected BSDEs (see e.g. El Karoui, Kapoudjian, Pardoux, Peng and Quenez [38]), forward-backward SDEs (see e.g. Hu and Peng [51], Hu and Yong [52], Ma, Protter and Yong [78], Pardoux and Tang [85], Peng and Wu [102], Wu [118] and the references therein) and BSDEs with jumps (see e.g. Barles, Buckdahn and Pardoux [5], Situ [108], Tang and Li [112], Wu [118] and the references therein). The development of the theory of BSDEs has been heavily stimulated by its applications.
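As a concrete illustration of such equations, the following sketch (our own illustrative code, not part of the thesis) solves a one-dimensional BSDE with a Lipschitz driver by backward induction on a binomial-tree approximation of W. For the linear driver g(y, z) = ay + bz with terminal condition ξ = W_T, the closed-form initial value Y_0 = bT e^{aT} is available for comparison.

```python
import math

def solve_bsde(g, xi, T=1.0, n=200):
    """Solve Y_t = xi(W_T) + int_t^T g(Y_s, Z_s) ds - int_t^T Z_s dW_s
    by backward induction on a binomial-tree approximation of W."""
    dt = T / n
    sdt = math.sqrt(dt)
    # terminal layer: node k at time T carries W_T = (2k - n) * sqrt(dt)
    y = [xi((2 * k - n) * sdt) for k in range(n + 1)]
    for i in range(n - 1, -1, -1):
        prev = []
        for k in range(i + 1):
            ey = 0.5 * (y[k] + y[k + 1])          # E[Y_{i+1} | current node]
            z = (y[k + 1] - y[k]) / (2.0 * sdt)   # martingale part over the Brownian increment
            prev.append(ey + g(ey, z) * dt)       # explicit backward Euler step
        y = prev
    return y[0]  # Y_0

# linear driver g(y, z) = a*y + b*z with terminal condition xi = W_T;
# the closed-form initial value is Y_0 = b*T*exp(a*T)
a, b, T = 0.1, 0.5, 1.0
y0 = solve_bsde(lambda y, z: a * y + b * z, lambda w: w, T=T)
```

The scheme converges at rate O(1/n) for this linear driver; the same backward-induction loop works for any Lipschitz g.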
Thus the theory of BSDEs has become a powerful tool in the study of partial differential equations (see e.g. Buckdahn and Hu [13,14], Pardoux [81], Pardoux, Pradeilles and Rao [84], Peng [87,91]), in mathematical finance (see e.g. Chen and Epstein [26], Delbaen, Peng and Rosazza Gianin [32], El Karoui, Peng and Quenez [39], El Karoui and Quenez [40,41], Yong [122] and the references therein), as well as in stochastic control (see e.g. Kohlmann and Tang [63], Kohlmann and Zhou [64], Peng [87,88,89,91], Yong and Zhou [123] and the references therein) and in stochastic differential games (see e.g. Buckdahn, Cardaliaguet and Quincampoix [11], Buckdahn, Hu and Li [15], Buckdahn and Li [16], Hamadene [45], Hamadene and Lepeltier [47], Hamadene, Lepeltier and Peng [48] and the references therein). Generalizing the concept of BSDEs, Pardoux and Peng [83] introduced backward doubly stochastic differential equations (BDSDEs, for short). These equations involve not only a forward stochastic Ito integral with respect to the Brownian motion W but also a backward stochastic Ito integral with respect to a Brownian motion B independent of W. The notion of g-expectations was introduced by Peng [90] by means of a BSDE. Consider the one-dimensional BSDE (0.0.2). Under conditions on the driver g which guarantee that BSDE (0.0.2) has a unique solution (Y, Z), the g-expectation of ξ is defined as E_g[ξ] := Y_0. g-expectations can be regarded as dynamic risk measures (see Delbaen, Peng and Rosazza Gianin [32], Rosazza Gianin [105]).
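A minimal numerical sketch (illustrative code, with the driver g(y, z) = μ|z| chosen by us) shows the defining property E_g[ξ] := Y_0 at work and the resulting non-additivity of the g-expectation: E_g[W_T] = E_g[−W_T] = μT, so E_g[ξ] + E_g[−ξ] > 0.

```python
import math

def g_expectation(g, xi, T=1.0, n=200):
    """E_g[xi] := Y_0, where (Y, Z) solves the BSDE with driver g and
    terminal value xi(W_T); computed on a binomial approximation of W."""
    dt = T / n
    sdt = math.sqrt(dt)
    y = [xi((2 * k - n) * sdt) for k in range(n + 1)]
    for i in range(n - 1, -1, -1):
        prev = []
        for k in range(i + 1):
            ey = 0.5 * (y[k] + y[k + 1])          # conditional mean of next layer
            z = (y[k + 1] - y[k]) / (2.0 * sdt)   # discrete Z component
            prev.append(ey + g(ey, z) * dt)
        y = prev
    return y[0]

mu, T = 0.3, 1.0
g = lambda y, z: mu * abs(z)                  # sublinear driver (our choice)
e_plus = g_expectation(g, lambda w: w, T)     # E_g[W_T]
e_minus = g_expectation(g, lambda w: -w, T)   # E_g[-W_T]
# both equal mu*T, so E_g[xi] + E_g[-xi] = 2*mu*T > 0: E_g is not additive
```

The strict sublinearity seen here is exactly what makes g-expectations usable as dynamic risk measures.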
However, g-expectations have the limitation that the involved uncertain probability measures are absolutely continuous with respect to a reference probability measure. Motivated by g-expectations, risk measures and superhedging in finance, Peng introduced a new notion of nonlinear expectation, the so-called G-expectation (see [94], [95], [97], [99] and [100]), which is associated with the nonlinear heat equation ∂u/∂t − G(D²u) = 0, where D²u is the Hessian matrix of u, i.e., D²u = (∂²u/∂x_i∂x_j)_{i,j=1}^n, G(A) = (1/2) sup_{γ∈Γ} tr(γγᵀA) for A ∈ S^n, S^n denotes the space of n×n symmetric matrices, and Γ is a given non-empty, bounded and closed subset of R^{n×n}. In the special case when Γ is a singleton, the G-expectation coincides with the classical linear expectation. Together with the notion of G-expectations, Peng also introduced the related G-normal distribution and the G-Brownian motion. The G-Brownian motion is a stochastic process with stationary and independent increments, and its quadratic variation process is, unlike the classical case of a linear expectation, a non-deterministic process. Moreover, an Ito calculus for the G-Brownian motion has been developed in [94], [95], [97] and [99]. The law of large numbers and the central limit theorem under nonlinear expectations were obtained by Peng [95], [98] and [101]. The development of the theory of BSDEs, g-expectations and G-expectations has created powerful tools for work in stochastic analysis and its applications in finance. I use, in particular, the method of BSDEs in my studies of Nash equilibrium payoffs for nonzero-sum stochastic differential games. Since the pioneering work of Isaacs [53], differential games and stochastic differential games have been investigated by many authors. Fleming and Souganidis [42] were the first to study zero-sum stochastic differential games in a rigorous manner; they showed that the lower and the upper value functions of such games satisfy the dynamic programming principle and coincide under the Isaacs condition.
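The one-dimensional case of the nonlinear heat equation above can be explored numerically. The sketch below (our own finite-difference experiment; grid parameters and initial data are illustrative choices) uses G(a) = (σ̄²a⁺ − σ̲²a⁻)/2, i.e. Γ = [σ̲, σ̄], and shows the characteristic asymmetry of the G-expectation: convex initial data propagate with the largest variance σ̄², concave data with the smallest σ̲².

```python
# One-dimensional G-heat equation  du/dt = G(d2u/dx2)  with
#   G(a) = 0.5 * (sig_max**2 * a^+  -  sig_min**2 * a^-),
# i.e. Gamma = [sig_min, sig_max]; explicit finite differences.
sig_min, sig_max = 0.3, 1.0
L, nx = 4.0, 161                      # grid on [-L, L]
dx = 2 * L / (nx - 1)
steps = 200
dt = 0.4 * dx * dx / sig_max ** 2     # stable explicit time step
t_end = steps * dt

def G(a):
    return 0.5 * (sig_max ** 2 * max(a, 0.0) - sig_min ** 2 * max(-a, 0.0))

def solve(phi):
    u = [phi(-L + i * dx) for i in range(nx)]
    for _ in range(steps):
        uxx = [(u[i - 1] - 2 * u[i] + u[i + 1]) / (dx * dx)
               for i in range(1, nx - 1)]
        u = [u[0]] + [u[i] + dt * G(uxx[i - 1])
                      for i in range(1, nx - 1)] + [u[-1]]
    return u[nx // 2]                 # value at x = 0 at time t_end

u_convex = solve(lambda x: x * x)     # convex data: u(t,0) ~  sig_max^2 * t
u_concave = solve(lambda x: -x * x)   # concave data: u(t,0) ~ -sig_min^2 * t
```

With a singleton Γ (sig_min == sig_max) both runs would reduce to the classical heat equation, matching the remark above that the G-expectation then coincides with the linear expectation.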
Based on the framework developed in Fleming and Souganidis [42], many authors have studied stochastic differential games. Among them, Tang and Hou [113] studied switching games of stochastic differential systems. Biswas [10] investigated zero-sum stochastic differential games with jump-diffusions by generalizing the approach of Fleming and Souganidis [42] from stochastic differential games without jumps to those with jumps. Recently, Buckdahn and Li [16] investigated zero-sum stochastic differential games with the help of a Girsanov transformation argument and a BSDE approach. Buckdahn, Hu and Li [15] studied stochastic differential games with jumps. This doctoral thesis is devoted to the study of Nash equilibrium payoffs for stochastic differential games, to selected problems in the rather recent theory of G-expectations, and to backward doubly stochastic differential equations with non-Lipschitz coefficients. (I) In Chapter 1 and Chapter 2, we study Nash equilibrium payoffs for stochastic differential games with nonlinear cost functionals. For nonzero-sum deterministic differential games, Kononenko [65] in the framework of positional strategies and Tolwinski, Haurie and Leitmann [114] in the framework of Friedman strategies obtained an existence result and a characterization of Nash equilibrium payoffs. Recently, Buckdahn, Cardaliaguet and Rainer [12] generalized these results to stochastic differential games. On the other hand, Bensoussan and Frehse [8] and Mannucci [79] obtained Nash equilibrium payoffs for stochastic differential games by using parabolic partial differential equations, while Hamadene, Lepeltier and Peng [48] obtained the existence of a Nash equilibrium point for nonzero-sum stochastic differential games with the help of backward stochastic differential equations, in a framework of the type control against control.
Both of the latter approaches rely heavily on the non-degeneracy assumption on the diffusion coefficients. I study Nash equilibrium payoffs for nonzero-sum stochastic differential games via the theory of backward stochastic differential equations. The cost functionals of both players are defined through controlled BSDEs. In order to give the players a symmetric tool, we consider nonzero-sum stochastic differential games of the type NAD strategy against NAD strategy, where NAD strategy stands for nonanticipative strategy with delay. In a first step I studied Nash equilibrium payoffs for nonzero-sum stochastic differential games with decoupled nonlinear cost functionals (their coupling consists only in the interaction through the control processes of both players). In a second step I studied Nash equilibrium payoffs for nonzero-sum stochastic differential games with coupled nonlinear cost functionals. I obtain an existence theorem and a characterization theorem for Nash equilibrium payoffs of nonzero-sum stochastic differential games with nonlinear cost functionals. The results extend earlier ones by Buckdahn, Cardaliaguet and Rainer [12] and are based on a backward stochastic differential equation approach. The generalization of the earlier results of Buckdahn, Cardaliaguet and Rainer [12] concerns the following aspects: firstly, our cost functionals are defined by controlled backward stochastic differential equations, and the admissible control processes may depend on events occurring before the beginning of the stochastic differential game; thus, our cost functionals are not necessarily deterministic. Secondly, since our cost functionals are nonlinear, we cannot apply the methods used in [12].
Instead, we make use of the notion of stochastic backward semigroups introduced by Peng [91] and of the theory of backward stochastic differential equations. (I.1) In Chapter 1, we establish the existence and a characterization of Nash equilibrium payoffs for stochastic differential games with decoupled nonlinear cost functionals. Let (Ω, F, P) be the classical Wiener space, i.e., for a given terminal time T > 0 we take Ω = C_0([0, T]; R^d), the space of continuous functions h: [0, T] → R^d with h(0) = 0, endowed with the supremum norm, and we let P be the Wiener measure on the Borel σ-field B(Ω) over Ω; it is the unique probability measure with respect to which the coordinate process B_t(ω) = ω_t, ω ∈ Ω, t ∈ [0, T], is a d-dimensional standard Brownian motion. We define the filtration F = {F_t}_{t∈[0,T]} as the filtration generated by the coordinate process B and completed by all P-null sets, i.e., F_t = σ{B_s, s ≤ t} ∨ N_P, where N_P denotes the collection of all P-null sets on Ω. Let U and V be two compact metric spaces. Here U is considered as the control state space of the first player, and V as that of the second one. The associated sets of admissible controls are denoted by U and V, respectively: U is formed by all U-valued F-progressively measurable processes, and V by all V-valued F-progressively measurable processes. For a given initial time t ∈ [0, T], an initial state x ∈ R^n and admissible controls u(·) ∈ U and v(·) ∈ V, we consider the stochastic control system (0.0.4): dX_s^{t,x;u,v} = b(s, X_s^{t,x;u,v}, u_s, v_s) ds + σ(s, X_s^{t,x;u,v}, u_s, v_s) dB_s, s ∈ [t, T], X_t^{t,x;u,v} = x. We suppose that, for all x ∈ R^n, b(·, x, ·, ·) and σ(·, x, ·, ·) are continuous in (t, u, v), and that b(t, ·, u, v) and σ(t, ·, u, v) are Lipschitz, uniformly with respect to (t, u, v) ∈ [0, T] × U × V. For arbitrarily given admissible controls u(·) ∈ U and v(·) ∈ V, we consider the following BSDEs, j = 1, 2, t ≤ s ≤ T: Y_s^j = Φ_j(X_T^{t,x;u,v}) + ∫_s^T f_j(r, X_r^{t,x;u,v}, Y_r^j, Z_r^j, u_r, v_r) dr − ∫_s^T Z_r^j dB_r, where X^{t,x;u,v} is introduced by equation (0.0.4). The cost functional of the jth player, j = 1, 2, is defined by J_j(t, x; u, v) := Y_t^j. For all (α, β) ∈ A_{t,T} × B_{t,T}, there exists a unique couple of controls (u, v) ∈ U_{t,T} × V_{t,T} such that (α(v), β(u)) = (u, v).
This allows us to define J_j(t, x; α, β) := J_j(t, x; u, v), j = 1, 2. Let us suppose that, for j = 1, 2 and (x, y, z) ∈ R^n × R × R^d, f_j(t, x, y, z, ·, ·) is continuous in (t, u, v), and that Φ_j(·) and f_j(t, ·, ·, ·, u, v) are Lipschitz, uniformly with respect to (t, u, v) ∈ [0, T] × U × V. In addition, in order to simplify the arguments, we also suppose that all the coefficients are bounded. We assume that the Isaacs condition holds in the following sense: for all (t, x, y, p) ∈ [0, T] × R^n × R × R^n and A ∈ S^n, the lower and the upper Hamiltonians associated with the two players coincide. Under the Isaacs condition, the lower and the upper value functions coincide for all (t, x) ∈ [0, T] × R^n. We give the definition of a Nash equilibrium payoff for stochastic differential games. Definition. A couple (e_1, e_2) ∈ R^2 is called a Nash equilibrium payoff at the point (t, x) if, for any ε > 0, there exists (α_ε, β_ε) ∈ A_{t,T} × B_{t,T} such that, for all (α, β) ∈ A_{t,T} × B_{t,T}, J_1(t, x; α_ε, β_ε) ≥ J_1(t, x; α, β_ε) − ε and J_2(t, x; α_ε, β_ε) ≥ J_2(t, x; α_ε, β) − ε, P-a.s., and the payoffs (J_1(t, x; α_ε, β_ε), J_2(t, x; α_ε, β_ε)) converge to (e_1, e_2) as ε tends to 0. We obtain the following characterization and existence of Nash equilibrium payoffs for nonzero-sum stochastic differential games. Theorem 1.3.16. Let (t, x) ∈ [0, T] × R^n. Under the Isaacs condition, (e_1, e_2) ∈ R^2 is a Nash equilibrium payoff at the point (t, x) if and only if, for all ε > 0, there exist u_ε ∈ U_{t,T} and v_ε ∈ V_{t,T} such that the characterizing approximation inequalities hold for all s ∈ [t, T] and j = 1, 2. Theorem 1.3.19. Under the Isaacs condition, there exists a Nash equilibrium payoff. (I.2) In Chapter 2, we obtain the existence and a characterization of Nash equilibrium payoffs for stochastic differential games with coupled nonlinear cost functionals. In Chapter 1 we generalize the result of [12] by investigating Nash equilibrium payoffs for nonzero-sum stochastic differential games with nonlinear cost functionals; there, however, the cost functionals of the two players are defined by a system of decoupled BSDEs. An open problem was how to study stochastic differential games whose cost functionals are defined by two coupled BSDEs.
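The sup-inf structure behind the Isaacs condition mentioned above can be checked numerically for simple Hamiltonians. The toy examples below (hypothetical, chosen by us; not the Hamiltonians of the thesis) show a separable Hamiltonian for which sup_u inf_v and inf_v sup_u coincide, and a coupled bilinear one for which they do not.

```python
# Numerical check of the Isaacs condition  sup_u inf_v H = inf_v sup_u H
# over finite control grids.
def upper_lower(H, U, V):
    lower = max(min(H(u, v) for v in V) for u in U)  # sup_u inf_v
    upper = min(max(H(u, v) for u in U) for v in V)  # inf_v sup_u
    return lower, upper

grid = [i / 100.0 - 1.0 for i in range(201)]  # discretization of [-1, 1]

# separable Hamiltonian: the Isaacs condition holds (both values are 1.5)
lo1, up1 = upper_lower(lambda u, v: 2.0 * u - v * v + 0.5, grid, grid)

# coupled bilinear Hamiltonian on a two-point control set: it fails
lo2, up2 = upper_lower(lambda u, v: u * v, [-1.0, 1.0], [-1.0, 1.0])
```

In general only lower <= upper is guaranteed; the Isaacs condition is precisely the equality of the two, which is what makes the upper and lower value functions coincide.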
In Chapter 2 we investigate Nash equilibrium payoffs for two-player nonzero-sum stochastic differential games whose cost functionals are defined by a system of coupled backward stochastic differential equations. Let the underlying probability space (Ω, F, P) be the completed product of the Wiener space (Ω1, F1, P1) and the Poisson space (Ω2, F2, P2). Concerning the Wiener space (Ω1, F1, P1): Ω1 = C0(R; R^d) is the set of continuous functions from R to R^d with value zero at 0, endowed with the topology generated by uniform convergence on compacts; F1 is the Borel σ-algebra over Ω1, completed by the Wiener measure P1, under which the d-dimensional coordinate processes B_s(ω) = ω_s, s ∈ R+, ω ∈ Ω1, and B_{−s}(ω) = ω(−s), s ∈ R+, ω ∈ Ω1, are two independent d-dimensional Brownian motions. We denote by {F_s^B, s ≥ 0} the natural filtration generated by B and augmented by all P1-null sets. The Poisson space (Ω2, F2, P2) is introduced in such a way that a probability measure P2 can be defined over (Ω2, F′) under which {N_t}_{t≥0} and {N_{−t}}_{t≥0} are two independent Poisson processes with intensity λ; F2 denotes the completion of F′ with respect to P2, augmented by the P2-null sets. Moreover, we put F = F1 ⊗ F2, completed with respect to P = P1 ⊗ P2, and the filtration F = {F_t}_{t≥0} is generated by the coordinate processes and augmented by all P-null sets. Let us now be more precise: the dynamics of our two-player nonzero-sum stochastic differential game is given by the process N^{t,i} and a doubly controlled stochastic system. For 0 ≤ t ≤ s ≤ T and i = 1, 2, we let N_s^{t,i} = m(i + N_s − N_t), where m(j) = 1 if j is odd, and m(j) = 2 if j is even. The control u = {u_s}_{s∈[t,T]} (resp., v = {v_s}_{s∈[t,T]}) is supposed to be F-predictable and takes its values in a compact metric space U (resp., V). The set of these controls is denoted by U_{t,T} (resp., V_{t,T}). The processes N and N^{t,i} turn out to be crucial in our approach: they allow us to transform the system of two coupled BSDEs into an equivalent system of two decoupled BSDEs. More precisely, we consider a system of coupled BSDEs defining the cost
functionals of both players, where the pair of processes appearing there is the unique solution of the above BSDE. Although formally reduced to the framework of the preceding work, but now with jumps, this work has its own difficulties, coming from the presence of jump processes and, most of all, from the necessity of working over stochastic intervals. For every couple of NAD strategies (α, β) ∈ A_{t,T} × B_{t,T}, there exists a unique couple of controls (u, v) ∈ U_{t,T} × V_{t,T} such that (α(v), β(u)) = (u, v). This allows us to define J_i(t, x; α, β) := J_i(t, x; u, v), as well as the value functions of both players of the associated zero-sum stochastic differential games, i.e., the lower value functions U_i, i = 1, 2, and the upper value functions W_i, i = 1, 2. In our approach we need a probabilistic interpretation of coupled systems of Hamilton-Jacobi-Bellman-Isaacs equations, which is also of independent interest. Theorem 2.4.2. The value functions U = (U1, U2) and W = (W1, W2) are viscosity solutions of the corresponding coupled Isaacs equations, whose Hamiltonians are defined for (t, x, y1, y2, p, A, u, v) ∈ [0, T] × R^n × R × R × R^n × S^n × U × V. A crucial step in the proof of this result is a dynamic programming principle for stopping times. Theorem 2.3.11. For any stopping time τ with 0 ≤ t < τ ≤ T, x ∈ R^n and i = 1, 2, the dynamic programming principle holds, where the strategies range over the sets of NAD strategies over the stochastic interval [[t, τ]]. We obtain the following characterization and existence of Nash equilibrium payoffs for nonzero-sum stochastic differential games with coupled nonlinear cost functionals. Theorem 2.5.6. For (t, x) ∈ [0, T] × R^n, a couple (e1, e2) ∈ R^2 is a Nash equilibrium payoff at the point (t, x) if and only if, for all ε > 0, there exists a couple (u_ε, v_ε) ∈ U_{t,T} × V_{t,T} such that the characterizing approximation inequalities hold for all δ ∈ [0, T − t] and j = 1, 2. Theorem 2.5.9. Under the Isaacs condition, there exists a Nash equilibrium payoff at (t, x), for all (t, x) ∈ [0, T] × R^n. Let us explain the difficulties. In comparison with [12] and Chapter 1, the first difficulty was to obtain a dynamic programming principle for a system of two coupled BSDEs.
To overcome this difficulty, we associate with this system an auxiliary one whose cost functionals coincide with ours. This leads to the new problem that we need a dynamic programming principle for this system not only for deterministic times but also for stopping times. The method used in Buckdahn and Hu [14] to obtain the dynamic programming principle for stopping times in control problems is not applicable here, because in the framework of stochastic differential games the monotonicity arguments of [14] no longer work. Finally, in comparison with Chapter 1, the presence of jump terms adds a supplementary complexity. (II) In Chapter 3 we study the notion of local time and obtain the Tanaka formula for the G-Brownian motion. Moreover, we establish the joint continuity of the local time of the G-Brownian motion and determine its quadratic variation. Let Ω = C0(R+) be the space of all real-valued continuous functions (ω_t)_{t∈R+} with ω_0 = 0, equipped with the distance ρ(ω1, ω2) = Σ_{i=1}^∞ 2^{−i} [(max_{t∈[0,i]} |ω1_t − ω2_t|) ∧ 1]. We denote by B(Ω) the Borel σ-algebra on Ω. We also set, for each t ∈ [0, ∞), Ω_t := {ω_{·∧t} : ω ∈ Ω} and F_t := B(Ω_t). Let H be a linear space of real-valued functions defined on Ω such that, if X_i ∈ H, i = 1, …, d, then φ(X_1, …, X_d) ∈ H for all φ ∈ C_{b,Lip}(R^d). Peng [94] constructed a G-Brownian motion on a sublinear expectation space (Ω, L_G^p(Ω), E), where L_G^p(Ω) is the Banach space defined as the closure of H with respect to the norm ‖X‖_p := E[|X|^p]^{1/p}, 1 ≤ p ≤ ∞. In this space the coordinate process B_t(ω) = ω_t, t ∈ [0, ∞), ω ∈ Ω, is a G-Brownian motion. Moreover, there exists a weakly compact family P of probability measures on (Ω, B(Ω)) such that E[X] = max_{P∈P} E_P[X], and we consider the associated Choquet capacity c(A) := sup_{P∈P} P(A), A ∈ B(Ω). A set A ⊂ Ω is called polar if c(A) = 0. A property is said to hold quasi-surely (q.s.) if it holds outside a polar set. The G-Brownian motion B is a continuous P-martingale for every P ∈ P, and its quadratic variation process ⟨B⟩ is continuous and increasing outside a polar set N.
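Before turning to the G-setting, the classical situation with one fixed probability P can be illustrated by simulation: the occupation-time density (1/2ε)∫_0^1 1_{|B_s|<ε} ds and the Tanaka residual |B_1| − ∫_0^1 sgn(B_s) dB_s both approximate the local time L_1^0, whose mean is E|B_1| = √(2/π). The code below is a rough Monte Carlo sketch; step count, ε, sample size and the convention sgn(0) = 1 are our illustrative choices.

```python
import math
import random

random.seed(1)
n_paths, n_steps, eps = 1000, 2000, 0.05
dt = 1.0 / n_steps
sdt = math.sqrt(dt)
occ_sum, tanaka_sum = 0.0, 0.0
for _ in range(n_paths):
    b = 0.0
    occ, integral = 0.0, 0.0
    for _ in range(n_steps):
        if abs(b) < eps:
            occ += dt                               # time spent near level 0
        db = sdt * random.gauss(0.0, 1.0)           # Brownian increment
        integral += (1.0 if b >= 0 else -1.0) * db  # Euler sum for int sgn(B) dB
        b += db
    occ_sum += occ / (2.0 * eps)                    # occupation-density estimate of L_1^0
    tanaka_sum += abs(b) - integral                 # Tanaka residual |B_1| - int sgn dB
occ_mean = occ_sum / n_paths
tanaka_mean = tanaka_sum / n_paths
exact = math.sqrt(2.0 / math.pi)                    # E[L_1^0] = E|B_1|
```

Under one fixed P both estimators converge to the same object; the point of Chapter 3 is precisely that in the G-setting no single dominating P is available, so such P-wise constructions no longer suffice.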
Let us now give a definition of the local time of B which is independent of the underlying probability measure P ∈ P: for all a ∈ R, t ∈ [0, T] and ω ∈ Ω\N we define L_t^a accordingly. Consider the stochastic integral ∫_0^t sgn(B_s − a) dB_s under P ∈ P. Under P, the G-Brownian motion is a continuous square integrable martingale, so this stochastic integral under P is well-defined; we emphasize that the resulting process M^P is defined only P-a.s. Let L^{a,P} be the local time associated with B under P, defined through the Tanaka relation under P. It is well known that L^{a,P} admits a P-modification that is continuous in (t, a). Thus, due to the definition of L^a, L^a has a continuous P-modification for every P ∈ P. However, P is not dominated by a single probability measure. Thus the question whether we can find a continuous modification of L^a, or even a jointly continuous modification of (a, t) → L_t^a, under E, i.e., with respect to all P ∈ P simultaneously, is a nontrivial one which cannot be solved within the framework of classical stochastic analysis. Another question of independent importance is that of the integrability of sgn(B_· − a) in the framework of G-expectations. Indeed, we have considered the stochastic integral of sgn(B_· − a) under each P ∈ P separately; the main difficulty is that the above family of probability measures is not dominated by any one of its members. To overcome this difficulty, we show that sgn(B_· − a) belongs to a suitable space of processes integrable with respect to the G-Brownian motion B. The following proposition is very important for our approach. Proposition 3.4.6. For any real a, all δ > 0 and t ≥ 0, we have an estimate with a constant C which depends on t but neither on δ nor on a; moreover, if σ > 0, a complementary estimate also holds. Corollary 3.4.7. Analogous statements hold for any real number a and t ≥ 0. We obtain the Tanaka formula for the G-Brownian motion as follows. Theorem 3.4.9. For each a ∈ R, the process sgn(B_· − a) belongs to M_*^2(0, T). Moreover, for all t ∈ [0, T], we have |B_t − a| = |B_0 − a| + ∫_0^t sgn(B_s − a) dB_s + L_t^a, where L^a is an increasing process.
L^a is called the local time of the G-Brownian motion at level a. We show that the local time of the G-Brownian motion has a jointly continuous modification; the proof uses an approximation method which differs from the classical case. Theorem 3.4.13. There exists a jointly continuous modification of (a, t) → L_t^a, t ∈ [0, T]. Moreover, (a, t) → L_t^a is Hölder continuous of order γ for all γ < 1/2. We also determine the quadratic variation of the local time of the G-Brownian motion. Theorem 3.5.4. Let σ > 0. Then for all p > 1 and a < b we have, along a sequence of partitions of the interval [a, b], the stated convergence, uniformly with respect to t ∈ [0, T]. (III) In Chapter 4, we derive a representation of symmetric G-martingales as stochastic integrals with respect to the G-Brownian motion. A celebrated result of Levy [69] and Doob [35] states that a classical continuous martingale M is a Brownian motion if and only if its quadratic variation process is the deterministic function ⟨M⟩_t = t, t ≥ 0. Recently, a martingale characterization of the G-Brownian motion has been obtained in [120] and [121]. In [35], Doob showed that a continuous square integrable martingale can be represented as a stochastic integral with respect to a Brownian motion. We obtain a representation of symmetric G-martingales as stochastic integrals with respect to the G-Brownian motion, which generalizes the Levy characterization of the G-Brownian motion in [121]. Our result differs from those in Soner, Touzi and Zhang [109] and Song [110], where the G-Brownian motion is given in advance, while we have to construct a G-Brownian motion for which our representation holds. The main result is the following representation of continuous symmetric G-martingales as stochastic integrals with respect to the G-Brownian motion, which generalizes the martingale characterization of the G-Brownian motion established by Xu [121]. Theorem 4.4.3. Let f ∈ M_G^2(0, T) be such that E[∫_0^T |f_s|^4 ds] < ∞.
Suppose, moreover, that there exists a (small enough) constant δ with 0 < δ < |f| and that the following holds: (i) M is a continuous symmetric G-martingale; (ii), (iii) two further associated processes are G-martingales. Then there exists a G-Brownian motion B such that M_t = ∫_0^t f_s dB_s for all t ∈ [0, T]. (IV) In Chapter 5, we obtain a Tychonoff-type uniqueness theorem for the G-heat equation. Theorem 5.3.2. Let (H) be satisfied and let u1, u2 ∈ C(Q) be solutions of (5.1.2) in the strip Q = (0, T) × R^n with u1(0, x) = u2(0, x) = φ(x). If there are two positive constants c1, c2 such that the corresponding growth condition holds uniformly for t ∈ [0, T], then u1 = u2 in Q. (V) In Chapter 6, we study one-dimensional BDSDEs with non-Lipschitz coefficients. We obtain a uniqueness theorem for solutions of BDSDEs with continuous coefficients; moreover, an existence theorem and a comparison theorem for solutions of BDSDEs with discontinuous coefficients are established. Since the work of Pardoux and Peng [83], there have been several works relaxing the Lipschitz condition on the coefficient. Shi, Gu and Liu [107] showed that the one-dimensional BDSDE (0.0.3) has at least one solution if f is continuous and of linear growth in (y, z) and {f(t, 0, 0)}_{t∈[0,T]} is bounded. Under the assumptions that f is bounded, left-continuous and non-decreasing in y and Lipschitz in z, I established in [71] an existence theorem for the one-dimensional BDSDE (0.0.3). In [72] I proved that the one-dimensional BDSDE (0.0.3) has at least one solution if the coefficient f is left-Lipschitz and left-continuous in y and Lipschitz in z. We obtain an existence result for the one-dimensional BDSDE (0.0.3) where f is left-Lipschitz and left-continuous in y and uniformly continuous in z. Since f is only uniformly continuous in z, we cannot apply the comparison theorems for solutions of BDSDEs in [107] and [72].
In order to obtain such an existence theorem, we first establish a comparison theorem for solutions of BDSDEs when f is Lipschitz in y and uniformly continuous in z. We have the following existence theorem for BDSDEs with continuous coefficients. Theorem 6.3.3. Under the assumptions (H7) and (H8'), the BDSDE with data (f, g, T, ξ) has a minimal (resp., maximal) solution, in the sense that, for any other solution (y, z) of the BDSDE with data (f, g, T, ξ), the minimal (resp., maximal) solution lies below (resp., above) y. We establish a uniqueness theorem and a comparison theorem for BDSDEs under the conditions that f is Lipschitz in y and uniformly continuous in z; the comparison theorem plays an important role in the proof of Theorem 6.4.2. Theorem 6.3.6. Under the assumptions (H4) and (H10), the BDSDE (0.0.3) has a unique solution (Y, Z) ∈ S^2(0, T; R) × M^2(0, T; R^d). Theorem 6.3.8. Suppose that the BDSDEs with data (f1, g, T, ξ1) and (f2, g, T, ξ2) have solutions (y1, z1) and (y2, z2), respectively. If f1 satisfies (H4) and (H10), ξ1 ≤ ξ2 a.s. and f1(t, y_t^2, z_t^2) ≤ f2(t, y_t^2, z_t^2) dP dt-a.s. (resp., f2 satisfies (H4) and (H10) and f1(t, y_t^1, z_t^1) ≤ f2(t, y_t^1, z_t^1) dP dt-a.s.), then y_t^1 ≤ y_t^2 a.s. for all t ∈ [0, T]. We also have the following existence theorem. Theorem 6.4.2. Under the assumptions (H5), (H6) and (H9), the BDSDE with data (f, g, T, ξ) has a solution. Moreover, if f satisfies (H4) and (H10), then the BDSDE with data (f, g, T, ξ) has a minimal solution, in the sense that any other solution of the BDSDE with data (f, g, T, ξ) dominates it.
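The mechanism behind such comparison theorems can be seen in a deliberately degenerate toy case: when the coefficient does not depend on z and the data are deterministic, the backward equation reduces to y_t = ξ + ∫_t^T f(s, y_s) ds, and monotonicity in (f, ξ) is visible directly. The sketch below (our own illustrative choices f1 ≤ f2, ξ1 = ξ2) is not the BDSDE setting of the thesis, only the basic ordering effect.

```python
# Backward Euler for y_t = xi + int_t^T f(s, y_s) ds, the deterministic
# degenerate case of a backward equation; illustrates the comparison property:
# f1 <= f2 and xi1 <= xi2 imply y1 <= y2 on all of [0, T].
T, n = 1.0, 1000
dt = T / n

def backward_euler(f, xi):
    y = [0.0] * (n + 1)
    y[n] = xi                                  # terminal condition
    for i in range(n - 1, -1, -1):
        t = (i + 1) * dt
        y[i] = y[i + 1] + f(t, y[i + 1]) * dt  # step backward in time
    return y

y1 = backward_euler(lambda t, y: -y, 1.0)        # f1(t, y) = -y,       xi1 = 1
y2 = backward_euler(lambda t, y: -y + 0.5, 1.0)  # f2(t, y) = -y + 0.5, xi2 = 1
ordered = all(a <= b + 1e-12 for a, b in zip(y1, y2))
```

Here y1(0) approximates e^{-T}; the proof of the genuine stochastic comparison theorem replaces this elementary monotonicity with estimates on the backward doubly stochastic dynamics.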
Keywords/Search Tags: Backward stochastic differential equations, Backward doubly stochastic differential equations, Stochastic differential games, G-expectations, G-Brownian motion, G-heat equation, Nash equilibrium payoffs, Dynamic programming principle, Local time