
Mean-field Stochastic Systems And Their Applications

Posted on: 2015-06-20    Degree: Doctor    Type: Dissertation
Country: China    Candidate: R M Xu    Full Text: PDF
GTID: 1220330467461110    Subject: Probability theory and mathematical statistics
Abstract/Summary:
In this paper, we discuss the well-posedness and the related stochastic control problems of two kinds of mean-field stochastic systems, namely mean-field backward doubly stochastic differential equations (mean-field BDSDEs) and mean-field backward stochastic evolution equations (mean-field BSEEs), together with the equilibrium HJB equation for mean-field stochastic control systems and the probabilistic interpretation of Sobolev solutions of McKean-Vlasov partial differential equations. This paper consists of four parts. In the first part, we establish the existence and uniqueness of solutions to mean-field BDSDEs with globally (locally) monotone coefficients, as well as the comparison theorem for these equations; furthermore, we give the probabilistic representation of the solutions of a class of stochastic partial differential equations by virtue of mean-field BDSDEs. In the second part, we obtain the existence and uniqueness of mild solutions to mean-field BSEEs in Hilbert spaces under a condition weaker than the Lipschitz one, and we study the solvability of a class of backward stochastic partial differential equations of mean-field type with initial-boundary values. In the third part, we mainly study the maximum principle for mean-field backward doubly stochastic control systems, the maximum principle for BSPDE control systems of mean-field type, and the equilibrium HJB equation for mean-field stochastic control systems. The fourth part is devoted to the probabilistic representation of Sobolev solutions to McKean-Vlasov PDEs via mean-field FBSDEs.

Let us introduce the main content and explain the organization of this thesis.

In Chapter 1, the Introduction gives the research background and an overview of the topics of Chapters 2 to 5.

In Chapter 2, we study the existence and uniqueness of solutions to mean-field BDSDEs with globally (locally) monotone coefficients and give the probabilistic representation of the solutions of a class of stochastic partial differential equations. First, the existence and uniqueness result for solutions of mean-field BDSDEs with globally monotone coefficients is established. On this basis, using an approximating sequence, we prove that mean-field BDSDEs with locally monotone coefficients admit a unique solution. The comparison theorem for mean-field BDSDEs is also established. Furthermore, we give the probabilistic representation of the solutions of a class of stochastic partial differential equations by virtue of mean-field BDSDEs, which can be viewed as a stochastic Feynman-Kac formula for SPDEs of mean-field type.

Chapter 3 investigates the existence and uniqueness of mild solutions to mean-field BSEEs in Hilbert spaces under a condition weaker than the Lipschitz one. As an intermediate step, the existence and uniqueness result for mild solutions of mean-field BSEEs under a Lipschitz condition is also established. As an application, we study the solvability of a class of backward stochastic partial differential equations of mean-field type with initial-boundary values.
In Chapter 4, we mainly study the maximum principle for mean-field backward doubly stochastic control systems, the maximum principle for BSPDE control systems of mean-field type, and the equilibrium HJB equation for mean-field stochastic control systems. Firstly, we consider an optimal control problem described by mean-field BDSDEs. Under the assumption of a convex action space, we give both the maximum principle and a verification theorem for the optimal control by the convex variation method. Then a maximum principle for optimal control problems governed by a BSPDE control system of mean-field type with an initial state constraint is presented. In this control system, the control domain need not be convex, and the coefficients, both in the state equation and in the cost functional, depend on the law of the BSPDE solution as well as on the state and the control. Due to the initial state constraint, we first apply Ekeland's variational principle to convert the original control problem into a free initial state optimal control problem; then a spike variation approach is used to derive necessary optimality conditions for the control problem in the form of a maximum principle in the mean-field framework. A linear-quadratic optimal control problem is given to illustrate our theoretical results. Finally, the value function of the mean-field stochastic control system is time inconsistent in the sense that the Bellman optimality principle does not hold. To address this problem, an equilibrium control is introduced, and we prove that the equilibrium value function satisfies an equilibrium HJB equation.

Chapter 5 is devoted to giving the probabilistic representation of Sobolev solutions to McKean-Vlasov PDEs. We first prove that the solution of the McKean-Vlasov SDE, denoted by X_s^{t,x}, x ∈ R^d, is a.s. a stochastic flow of C²-diffeomorphisms with respect to the initial value x. Moreover, the inverse of the stochastic flow satisfies a backward SDE of mean-field type. Using the inverse flow X_{t,s}(y), we introduce a process Φ_t(s, x) to act as a test function; this process admits a semimartingale decomposition. A probabilistic interpretation of the Sobolev solution of McKean-Vlasov PDEs is then given by means of mean-field FBSDEs.

In the sequel, we list the main results of this dissertation.

1. Mean-field backward doubly stochastic differential equations and related SPDEs

In this section, we consider mean-field backward doubly stochastic differential equations (mean-field BDSDEs) of the form (0.0.26), where {W_t, 0 ≤ t ≤ T} and {B_t, 0 ≤ t ≤ T} are two mutually independent standard Brownian motions with values in R^d and R^l respectively, defined on a complete probability space (Ω, F, P), and T is a fixed positive number throughout this paper. The integral with respect to dW_t is a forward Itô integral, and the integral with respect to dB_t denotes a backward Itô integral. With our notation, the coefficients of (0.0.26) are interpreted as follows:
E'[φ(s, Y'_s, Z'_s, Y_s, Z_s)](ω) = E'[φ(s, Y'_s, Z'_s, Y_s(ω), Z_s(ω))].
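The displayed equation (0.0.26) itself is not reproduced in this abstract. For orientation only, a mean-field BDSDE with data (ζ, f, g) is usually written in the following standard form (a sketch under the assumption that the thesis follows the usual Pardoux-Peng/Buckdahn-Li formulation):

Y_t = ζ + ∫_t^T E'[f(s, Y'_s, Z'_s, Y_s, Z_s)] ds + ∫_t^T E'[g(s, Y'_s, Z'_s, Y_s, Z_s)] dB_s − ∫_t^T Z_s dW_s,  0 ≤ t ≤ T,

where the dB_s-integral is the backward Itô integral and (Y', Z') denotes an independent copy of (Y, Z), over which E' integrates.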
The following theorem presents the existence and uniqueness result for solutions of mean-field BDSDEs with globally monotone coefficients.

Theorem 0.1. For any random variable ζ ∈ L²(Ω, F_T, P; R^n), under Assumptions (A2.1)-(A2.5), the mean-field BDSDE (0.0.26) admits a unique solution (Y, Z) ∈ S_F²([0,T]; R^n) × H_F²(0,T; R^{n×d}).

When the monotonicity condition on the coefficients is weakened to a local one, a sequence of functions {f_m}_{m=1}^∞ approximating the coefficient f is constructed, where for any given m ∈ N, f_m is globally monotone. If we denote by (Y^m, Z^m) the solution of (0.0.26) corresponding to the coefficients (f_m, g), then (Y^m, Z^m) is a Cauchy sequence whose limit is the unique solution of the mean-field BDSDE (0.0.26) with coefficients (f, g). We summarize this as follows.

Theorem 0.2. Let (A2.1), (A2.2), (A2.3')-(A2.5') hold. Assume moreover that the quantitative condition (0.0.27) holds, which requires a certain expression in the local monotonicity constants to tend to 0 as N → ∞, where θ is an arbitrarily fixed constant such that 0 < θ < 1 − 2α. Then the mean-field BDSDE (0.0.26) has a unique solution (Y, Z) ∈ S_F²([0, T]; R^n) × H_F²(0, T; R^{n×d}).

Now we discuss the comparison theorem for mean-field BDSDEs. We only consider one-dimensional mean-field BDSDEs, i.e., n = 1.

Theorem 0.3 (Comparison Theorem). Consider the mean-field BDSDEs (0.0.28) and (0.0.29) and assume that they satisfy the conditions of Theorem 0.2. Let (Y¹, Z¹) and (Y², Z²) be the solutions of (0.0.28) and (0.0.29), respectively. Moreover, for the two generators f_1 and f_2, we suppose:
(i) one of the two generators is independent of z';
(ii) one of the two generators is nondecreasing in y'.
Then, if ζ_1 ≤ ζ_2 a.s. and f_1(t, y', z', y, z) ≤ f_2(t, y', z', y, z) a.s., it holds that Y_t¹ ≤ Y_t² a.s. for all t ∈ [0, T].

Finally, we give the probabilistic representation of the solutions of a class of stochastic partial differential equations by virtue of mean-field BDSDEs. We investigate the system of quasilinear backward stochastic partial differential equations (0.0.30), called McKean-Vlasov SPDEs, for (t, x) ∈ [0, T] × R^d, where σ̄* is the transpose of σ̄ defined by σ̄ := E[σ(s, X_s^{0,x_0}, x)], and L is a second-order differential operator given componentwise by (Lu)_i = (L u_i)_{1≤i≤n}, with diffusion matrix a(t, x) := (a_{i,j})(t, x) = E[σ(t, X_t^{0,x_0}, x)] E[σ(t, X_t^{0,x_0}, x)]*. Here u(t, x): [0, T] × R^d → R^n is the unknown function, and {B_t, 0 ≤ t ≤ T} is an l-dimensional Brownian motion defined on a given complete probability space (Ω, F, P). The process X^{0,x_0}, starting from x_0 at time t = 0, is the solution of a class of stochastic differential equations (SDEs), and E denotes expectation with respect to P.

The main result of the last section of Chapter 2 provides the relationship between the solutions of the SPDE (0.0.30) and those of mean-field BDSDEs.

Theorem 0.4. Suppose that conditions (A2.7) and (A2.8) hold. Let {u(t, x); 0 ≤ t ≤ T, x ∈ R^d} be an F_{t,T}^B-measurable random field such that u(t, x) satisfies Eq. (0.0.30) and, for each (t, x), u ∈ C^{0,2}([0, T] × R^d; R^n) a.s. Moreover, assume that f, g ∈ C([0, T] × R^d × R^d × R^n × R^n × R^{n×d} × R^{n×d}) for a.s. ω ∈ Ω. Then u(t, x) = Y_t^{t,x}, where {(Y_s^{t,x}, Z_s^{t,x}); t ≤ s ≤ T}_{t≥0, x∈R^d} is the unique solution of the mean-field BDSDE (2.4.3) and
Y_s^{t,x} = u(s, X_s^{t,x}),   Z_s^{t,x} = E'[σ(s, (X_s^{0,x_0})', X_s^{t,x})]* · ∇u(s, X_s^{t,x}).   (0.0.31)
Formula (0.0.31) generalizes the stochastic Feynman-Kac formula to SPDEs of mean-field type.
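The coefficients of (0.0.30) and of the associated forward dynamics involve expectations such as E[σ(t, X_t^{0,x_0}, x)] taken over the law of the process. Numerically, such mean-field dynamics are commonly approximated by an interacting particle system in which the expectation is replaced by an empirical mean. The sketch below is a generic illustration of this idea, not an implementation from the thesis; the coefficient functions b and sigma, all parameter values, and the toy example are our own assumptions.

```python
import numpy as np

def simulate_mean_field_sde(b, sigma, x0, T, n_steps, n_particles, seed=0):
    """Euler-Maruyama particle approximation of a one-dimensional
    McKean-Vlasov SDE whose coefficients are expectations over the law,
    E[b(t, X'_t, x)] and E[sigma(t, X'_t, x)]; each expectation is replaced
    by the empirical mean over the particle cloud."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_particles, float(x0))
    for k in range(n_steps):
        t = k * dt
        # empirical means over the cloud approximate the E[...] terms
        drift = np.array([b(t, x, xi).mean() for xi in x])
        diffusion = np.array([sigma(t, x, xi).mean() for xi in x])
        x = x + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x

# toy coefficients (illustrative only): the drift pulls each particle toward the mean of the law
particles_T = simulate_mean_field_sde(
    b=lambda t, cloud, xi: cloud - xi,                 # E'[X' - x] = E[X'] - x
    sigma=lambda t, cloud, xi: 0.3 * np.ones_like(cloud),
    x0=1.0, T=1.0, n_steps=200, n_particles=500)
print(particles_T.mean(), particles_T.std())
```

Each particle interacts with the whole cloud only through the empirical means, which is the usual propagation-of-chaos approximation of the E[·] terms; in this naive form the cost per time step is O(N²) in the number of particles.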
2. Mean-field backward stochastic evolution equations in Hilbert spaces

In this section, we investigate a new type of backward stochastic evolution equations in Hilbert spaces, which we call mean-field BSEEs (0.0.32), where W(s) is a cylindrical Wiener process and A is the generator of a strongly continuous semigroup e^{tA} on H, t ≥ 0. The precise interpretation of the given measurable mapping f involves the pair (ω', ω) ∈ Ω × Ω; therefore the coefficient depends on the stochastic process (Y(·), Z(·)) as well as on the law of this process (in the form of an expectation).

The definition of a mild solution is as follows.

Definition 0.1. We say that a pair of adapted processes (Y, Z) is a mild solution of the mean-field BSEE (0.0.32) if (Y, Z) ∈ S_F²([0,T]; H) × H_F²([0,T]; L(Γ, H)) and (Y, Z) satisfies the corresponding mild (variation-of-constants) equation for all t ∈ [0, T].

With the contraction mapping theorem, we first establish the existence and uniqueness result for mild solutions of the mean-field BSEE (0.0.32) under a Lipschitz condition.

Theorem 0.5. For any random variable ζ ∈ L²(Ω, F_T, P; H), under conditions (A3.1) and (A3.2), the mean-field BSEE (0.0.32) admits a unique mild solution (Y, Z) ∈ S_F²([0,T]; H) × H_F²([0,T]; L(Γ, H)).

Under a more general condition, weaker than the Lipschitz one, we construct a Cauchy sequence by a Picard-type iteration and obtain the following result.

Theorem 0.6. Assume that (A3.2) and (A3.3) hold. Then there exists a unique mild solution (Y, Z) to the mean-field BSEE (0.0.32).

Using this existence and uniqueness result, we study the solvability of a class of backward stochastic partial differential equations of mean-field type with initial-boundary values.
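The mild equation in Definition 0.1 is not displayed in this abstract. For orientation, the standard variation-of-constants form of such an equation reads as follows (a sketch under the assumption that the thesis uses the usual mild formulation, with E' denoting expectation over the independent copy as before):

Y(t) = e^{(T−t)A} ζ + ∫_t^T e^{(s−t)A} E'[f(s, Y'(s), Z'(s), Y(s), Z(s))] ds − ∫_t^T e^{(s−t)A} Z(s) dW(s),  0 ≤ t ≤ T.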
3. Maximum principle and equilibrium HJB equations for mean-field stochastic control systems

In Chapter 4, we mainly study the maximum principle for mean-field backward doubly stochastic control systems, the maximum principle for BSPDE control systems of mean-field type, and the equilibrium HJB equation for mean-field stochastic control systems. The remarkable feature of these mean-field stochastic control systems is that the coefficients, both in the state equation and in the cost functional, depend on the law of the state process (in the form of an expectation) as well as on the state and the control.

Ⅰ. Maximum principle for mean-field backward doubly stochastic control systems

We consider the mean-field backward doubly stochastic control system (0.0.33). For a nonempty convex subset U ⊂ R^k (k ∈ N+), let 𝒰 = {v ∈ L_F²(0,T; U) | v_t(ω', ω): [0,T] × Ω × Ω → U, t ∈ [0,T]}. Our optimal control problem is to minimize the cost functional (0.0.34) over 𝒰. For an optimal control u(·), i.e., one attaining the infimum of the cost functional, the corresponding (Y(·), Z(·)) is called an optimal state process.

For any v(·) ∈ 𝒰, let u_t^θ = u_t + θ(v_t − u_t), 0 ≤ θ ≤ 1; then u^θ(·) ∈ 𝒰 since U is convex. For ψ = f, g, h, we use the notation ψ(t) := ψ(t, (Y_t)', (Z_t)', Y_t, Z_t, u_t) for simplicity. We introduce the variational equation (0.0.36); from the convergence property of its solution, we obtain the following variational inequality.

Lemma 0.1. Assume that (A4.1.1)-(A4.1.3) hold. Then, for every v(·) ∈ 𝒰, the variational inequality (0.0.37) holds.

In order to derive the maximum principle, we introduce the adjoint equation (0.0.38), where for ψ = f, g, h we set ψ(t) = ψ(t, Y_t, Z_t, Y'_t, Z'_t, (u_t)'). We define the Hamiltonian function H by
H(t, y', z', y, z, p, q, v) := f(t, y', z', y, z, v) p + g(t, y', z', y, z, v) q + h(t, y', z', y, z, v).
Applying Itô's formula and combining it with the variational inequality (0.0.37), we derive necessary conditions for optimality in the form of a maximum principle.

Theorem 0.7. Suppose that (A4.1.1)-(A4.1.3) hold. Let u(·) be an optimal control of the mean-field backward doubly stochastic control problem subject to (0.0.33)-(0.0.34), and let (Y(·), Z(·)) be the corresponding trajectory. Then
E'[H_v(t, Y'_t, Z'_t, Y_t, Z_t, p(t), q(t), u_t)(v_t − u_t)] ≥ 0,  for all v(·) ∈ 𝒰, a.e., a.s.,   (0.0.39)
where (p(·), q(·)) is the solution of the adjoint equation (0.0.38).

The maximum condition in terms of H (see (0.0.39)), together with some convexity assumptions, also yields sufficient conditions for optimality of the mean-field backward doubly stochastic control problem.

Theorem 0.8 (Verification Theorem). Assume that conditions (A4.1.1)-(A4.1.4) hold, and suppose that H(t, y', z', y, z, p, q, v) is convex with respect to (y', z', y, z, v). Let u(·) ∈ 𝒰 with corresponding trajectory (Y_t, Z_t), and let (p(·), q(·)) be the solution of the adjoint equation (0.0.38). If, for any t ∈ [0, T],
E'[H(t, Y'_t, Z'_t, Y_t, Z_t, p(t), q(t), u_t)] = min_{v(·)∈𝒰} E'[H(t, Y'_t, Z'_t, Y_t, Z_t, p(t), q(t), v_t)]   (0.0.40)
holds, then u is an optimal control for the stochastic control problem (0.0.33)-(0.0.34).

We apply our maximum principle to study a backward doubly stochastic linear-quadratic problem of mean-field type, and the corresponding optimal control is derived.

Ⅱ. Maximum principle for BSPDEs of mean-field type

Based on the results on mean-field BSEEs in Hilbert spaces in Chapter 3, we study optimal control problems for BSPDE systems of mean-field type with an initial state constraint.

Let O ⊂ R^n be a bounded open set with smooth boundary ∂O and let U, the space of controls, be a separable real Hilbert space. We denote 𝒰_ad = {v(·) ∈ L_F²(0,T; U) | v_t(ω', ω): [0,T] × Ω × Ω → U is F⊗F_t-progressively measurable}; an element of 𝒰_ad is called an admissible control. For any v ∈ 𝒰_ad, we consider the controlled BSPDE system (0.0.41) in the state space H = L²(O) (with norm |·| and scalar product ⟨·,·⟩), where A is a partial differential operator. The cost functional is given by (0.0.42). Our purpose is to minimize the functional J(·) over 𝒰_ad subject to the state constraint (0.0.43), where Φ: O × H × H → R. An admissible control u ∈ 𝒰_ad which satisfies (0.0.44) is called optimal.

Now fix v ∈ 𝒰_ad and introduce a metric d(·,·) on the set S of admissible controls, defined in terms of m, the Lebesgue measure on R. The following lemma allows us to convert the original control problem into a free initial state optimal control problem.

Lemma 0.2. (S, d(·,·)) is a complete metric space. Let (Y, Z) be the mild solution of equation (0.0.41) corresponding to the control v, and let J_ρ be the penalized functional built from the cost functional and the state constraint. Then J_ρ is continuous and bounded on S.

It is easy to check that 0 ≤ inf_{v(·)∈S} J_ρ(v(·)) ≤ J_ρ(u(·)) = ρ. According to Ekeland's variational principle, there exists u_ρ(·) ∈ S such that
(ⅰ) J_ρ(u_ρ(·)) ≤ ρ,
(ⅱ) d(u_ρ(·), u(·)) ≤ ρ,
(ⅲ) J_ρ(u_ρ(·)) ≤ J_ρ(v(·)) + ρ d(u_ρ(·), u(·)) for all v(·) ∈ S.   (0.0.45)
To investigate the original control problem with the state constraint, it suffices to study the resulting free initial state optimal control problem.

Let P(t) be the solution of the adjoint equation (0.0.46), where for φ = f, h we use the notations φ(t) = φ(t, x, Y_t(x), Z_t(x), (Y_t(x))', (Z_t(x))', (u_t)') and φ(t) = φ(t, x, (Y_t(x))', (Z_t(x))', Y_t(x), Z_t(x), u_t). Since the control domain is not necessarily convex, a spike variation approach is used to derive necessary optimality conditions for the BSPDE control system in the form of a maximum principle.

Theorem 0.9. Suppose that assumptions (A4.2.1)-(A4.2.3) hold, that u(·) is an optimal control, and that (Y(·), Z(·)) is the corresponding optimal state trajectory for the BSPDE control system (0.0.41)-(0.0.42) with the initial state constraint (0.0.43). Then there exists P(t) ∈ S_F²([0, T]; K) satisfying (0.0.46) such that
H(t, Y'_t, Z'_t, Y_t, Z_t, v_t, P(t)) ≥ H(t, Y'_t, Z'_t, Y_t, Z_t, u_t, P(t)), a.e., a.s., for all v ∈ 𝒰_ad,
where H: [0, T] × H × L(Γ, H) × H × L(Γ, H) × U × K → R is the Hamiltonian function defined by
H(t, y', z', y, z, v, p) = l_1 h(t, y', z', y, z, v) + p f(t, y', z', y, z, v).

As an application, a linear-quadratic optimal control problem is given to illustrate our theoretical results; an explicit optimal control is obtained in this example.

Ⅲ. The equilibrium HJB equation for mean-field stochastic control problems

For the stochastic control problem of mean-field type, the cost functional J is a (possibly) nonlinear function of the expected value, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold.
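A standard illustration of such nonlinear dependence on expectations (ours, given only for orientation; it is not taken from the thesis) is the conditional mean-variance criterion

J(t, x; u) = E_t[X_T^u] − (γ/2)(E_t[(X_T^u)²] − (E_t[X_T^u])²),  γ > 0.

Because the variance is a nonlinear function of conditional expectations, J(t, x; u) cannot be written as E_t of a fixed terminal payoff; the tower property then fails to produce a dynamic programming recursion, and a control that is optimal at time t generally ceases to be optimal when re-evaluated at later times.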
To address this problem, an equilibrium control is therefore introduced, and we prove that the equilibrium value function satisfies an extended Hamilton-Jacobi-Bellman equation, called the equilibrium HJB equation.

For any u ∈ 𝒰, we consider a mean-field SDE parameterized by the initial condition (t, x) ∈ [0, T] × R. The expected cost functional is given in terms of the conditional expectation E_t[·] = E[·|F_t] and a deterministic, continuously differentiable, positive function Q(s, t) such that Q(t, t) = 1.

Now we can obtain a recursion for J(t, x; u), which plays a key role in deriving the extended HJB equation.

Lemma 0.3. For any s ∈ [t, T), the value function J satisfies a recursion relating J(t, x; u) to J(s, ·; u).

We also need the definition of an equilibrium control.

Definition 0.2. Choose a fixed u ∈ 𝒰 and a given real number ε > 0. For an arbitrarily chosen initial point (t, x), define the perturbed control u^ε from u on [t, t + ε); if the associated first-order inequality holds for all admissible perturbations in 𝒰, we say that u is an equilibrium control.

We denote by X̄ and ū the equilibrium state trajectory and the equilibrium control, respectively. The equilibrium value function v(·,·) is defined by v(t, x) = J(t, x; ū).

The following theorem, which is the main result of this section, shows that the equilibrium value function v satisfies an extended HJB equation.

Theorem 0.10. Suppose that conditions (A4.3.1)-(A4.3.2) hold. If the equilibrium value function v(·,·) ∈ C^{1,2}([0,T] × R^n), then v(·,·) solves the equilibrium HJB equation with boundary condition v(T, X_T) = F(X_T, E(X_T)) + G(X_T). The auxiliary function f, defined by f(t, x) = E_t[X_T], satisfies an associated equation in which ū is the equilibrium control.

Remark 0.2. To distinguish the expectations in the functions b, σ and h, we replace x with X_t in u(t, X_t).
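Definition 0.2 follows the weak, first-order equilibrium formulation that is standard for time-inconsistent control problems. Since the corresponding displays are not reproduced in this abstract, the usual form is recorded here only as a sketch, with û the candidate control and v an arbitrary admissible action (both symbols are ours):

u^ε_s = v for s ∈ [t, t + ε),  u^ε_s = û_s for s ∈ [t + ε, T],

and û is called an equilibrium control if

liminf_{ε↓0} (J(t, x; û) − J(t, x; u^ε)) / ε ≥ 0  for every such perturbation and every initial point (t, x).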
4. Probabilistic interpretation for Sobolev solutions of McKean-Vlasov partial differential equations

Chapter 5 is devoted to giving the probabilistic representation of Sobolev solutions to McKean-Vlasov PDEs via mean-field FBSDEs. We consider the McKean-Vlasov PDE (0.0.49), where
Lu(t, x) := E[b(t, X_t^{0,x_0}, x)] · ∇u(t, x) + ½ tr(E[σ(t, X_t^{0,x_0}, x)] E[σ(t, X_t^{0,x_0}, x)]* D²u(t, x)),
with D²u the Hessian of u. Here X^{0,x_0} is the solution of the McKean-Vlasov SDE (0.0.50) with (t, x) = (0, x_0). For Φ ∈ C_c^{1,∞}([0, T] × R^d), we use the notation (a_{k,j})(s, x) := E[σ(X_s^{0,x_0}, x)] E[σ(X_s^{0,x_0}, x)]*.

The definition of a Sobolev solution of Eq. (0.0.49) is as follows.

Definition 0.3. We say that u ∈ H is a Sobolev solution of the McKean-Vlasov PDE (0.0.49) with final condition E[Φ(X_T^{0,x_0}, ·)] if the corresponding weak (variational) relation holds for each Φ ∈ C_c^{1,∞}([0, T] × R^d).

Stochastic flow techniques are a key element in giving the probabilistic interpretation of the Sobolev solution of McKean-Vlasov PDEs. Due to the influence of the mean-field term in Eq. (0.0.50), the inverse flow of the process (X_s^{t,x})_{t≤s≤T} differs from that of classical SDEs.

Proposition 0.1. Assume that condition (A5.3) holds. Then the Itô process (X_s^{t,x}; x ∈ R^d) defined by the McKean-Vlasov SDE (0.0.50) is a.s. a stochastic flow of C²-diffeomorphisms. Moreover, the inverse of the flow, denoted by X_{t,s,ω}(y) (here ω emphasizes the dependence on the sample point; we may omit it when no confusion arises), satisfies a backward SDE of mean-field type, in which ∇E'[σ((X_r^{0,x_0})', X_{t,r,ω}(y))] denotes the derivative of the function σ with respect to its second variable.

Using the inverse flow X_{t,s}(y), we can introduce a process to act as a test function. Let J(X_{t,s}(y)) denote the determinant of the Jacobian matrix of X_{t,s}(y), which is positive and satisfies J(X_{t,t}(y)) = 1. For Φ ∈ C_c^∞(R^d), a process Φ_t: Ω × [t, T] × R^d → R is defined by Φ_t(s, y) := Φ(X_{t,s}(y)) J(X_{t,s}(y)).

For any v ∈ L²(R^d), the composition of v with the stochastic flow X_{t,s}(ω) is defined by (v ∘ X_{t,s}(·), Φ) := (v, Φ_t(s, ·)); indeed, this follows by a change of variables. Since (Φ_t(s, x))_{t≤s} is a stochastic process, we may not use it directly as a test function, because the integral ∫_t^T (u(s, ·), ∂_s Φ_t(s, ·)) ds has no meaning. However, Φ_t(s, x) is a semimartingale and we have the following decomposition of Φ_t(s, x).

Lemma 0.4. For every function Φ ∈ C_c^∞(R^d), Φ_t(s, x) admits a semimartingale decomposition, in which C* denotes the adjoint of the operator C.

We connect the McKean-Vlasov PDE (0.0.49) with the mean-field FBSDE (0.0.52). The following proposition allows us to link the solution of the McKean-Vlasov PDE (0.0.49) with the associated mean-field FBSDE in a natural way.

Proposition 0.2. Suppose that (A5.3)-(A5.4) hold, and let u ∈ H be a Sobolev solution of the McKean-Vlasov PDE (0.0.49). Then for s ∈ [t, T] and Φ ∈ C_c^∞(R^d), the corresponding relation holds, where the integral ∫_t^s (u(r, ·), dΦ_t(r, ·)) is well defined thanks to the semimartingale decomposition result.

Based on the equivalence of norms and the above proposition, the existence and uniqueness result for the Sobolev solution of the McKean-Vlasov PDE (0.0.49) is obtained through the solution of the mean-field FBSDE (0.0.52). In addition, the probabilistic interpretation of the Sobolev solution of the McKean-Vlasov PDE (0.0.49) is presented.

Theorem 0.11. Assume that conditions (A5.3)-(A5.4) hold. Then there exists a unique Sobolev solution u ∈ H of the McKean-Vlasov PDE (0.0.49). Moreover, we have the probabilistic representation u(t, x) = Y_t^{t,x}, where (Y_s^{t,x}, Z_s^{t,x}) is the solution of the mean-field FBSDE (0.0.52) and, for all s ∈ [t, T],
Y_s^{t,x} = u(s, X_s^{t,x}),   Z_s^{t,x} = E[σ(X_s^{0,x_0}, X_s^{t,x})]* · ∇u(s, X_s^{t,x}).
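For orientation, consider the following special case (ours, not stated in the thesis): assume (0.0.49) has the usual backward form ∂_t u + Lu + f = 0 with the final condition of Definition 0.3, take f ≡ 0, and let the final datum Φ(x', x) = φ(x) be independent of its first argument. Then (0.0.49) reduces to a linear backward Kolmogorov equation with averaged coefficients,

∂_t u(t, x) + E[b(t, X_t^{0,x_0}, x)] · ∇u(t, x) + ½ tr(E[σ(t, X_t^{0,x_0}, x)] E[σ(t, X_t^{0,x_0}, x)]* D²u(t, x)) = 0,  u(T, ·) = φ,

and the representation u(t, x) = Y_t^{t,x} of Theorem 0.11 collapses to the classical Feynman-Kac formula u(t, x) = E[φ(X_T^{t,x})], since the backward component of (0.0.52) with zero generator is a conditional expectation of its terminal value.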
Keywords/Search Tags: mean-field, backward doubly stochastic differential equations, backward stochastic evolution equations, McKean-Vlasov SPDEs, McKean-Vlasov PDEs, BSPDEs, stochastic maximum principle, time-inconsistent, Hamilton-Jacobi-Bellman equation, Sobolev solution