
Optimal Control Problem For Stochastic Delayed Systems And Applications

Posted on: 2011-05-02    Degree: Doctor    Type: Dissertation
Country: China    Candidate: L Chen    Full Text: PDF
GTID: 1100360305451709    Subject: Probability theory and mathematical statistics
Abstract/Summary:
It has been recognized in recent years that many real-world problems should be modelled by stochastic dynamical systems whose evolution depends on the past history of the state. Such models are often referred to as stochastic differential delay equations (SDDEs for short). Due to their wide applications in engineering, the life sciences and finance (see e.g. [3; 11; 43; 44; 33]), SDDEs have become a popular topic in modern research. This thesis is dedicated to the study of controlled systems with delay arising in finance and other areas.

A delay term may arise in a control problem when there is a time lag between observation and regulation, or an aftereffect of the control. In 2000, Øksendal and Sulem [47] investigated a class of problems where the wealth X(t) at time t is given by an SDDE. In their model, not only the present value X(t) but also X(t - δ) and a sliding average of previous values affect the growth at time t. Because of the specific structure of the dependence on the past that they considered, they were able to reduce the problem to finite dimensions. They proved maximum principles for such models and applied them to solve problems related to finance, but the assumptions they require are relatively strong.

In practice, the observed history of the state may consist of only finitely many points, so we first consider systems involving finitely many delayed points; moreover, the delay points can be time-varying. We pay particular attention to systems involving delays in both the state variable and the control variable, and we derive a maximum principle for this kind of controlled system with delay under more general conditions. As an application, we apply our result to a production and consumption choice problem with delay in economics, for which the explicit solution is given; moreover, numerical results show the effects of different time delays. The main novelty of our method is that we introduce a new type of backward stochastic differential equation (BSDE for short), the anticipated BSDE (see Peng and Yang [55]), as our adjoint equation. To the best of our knowledge, this is the first attempt to study the stochastic optimal control problem with delay in this way. We also consider forward-backward systems with time-varying delay, i.e. stochastic delayed systems with recursive utility in which the delay δ is a function of time t. This result extends that of [65].

Fully coupled forward-backward stochastic differential equations (FBSDEs for short) were studied in Hu and Peng [29], Peng and Wu [54], Yong [69], among others. They are encountered in the linear-quadratic (LQ) problem (see [62; 67]) and in mathematical finance when a large investor is considered (see Cvitanic and Ma [15]). A new type of forward-backward stochastic differential equation, with Itô stochastic delay equations as forward equations and anticipated BSDEs as backward equations, arose in our study of the maximum principle for systems with delay. We obtain existence and uniqueness results for these general FBSDEs under suitable conditions.

The LQ regulatory problem involving stochastic delay equations was studied in Kolmanovskii and Maizenberg [34]. We consider the LQ problem with delays in both state and control, and find the feedback regulator independently by the FBSDE method and by the value function method.
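For orientation, a prototypical stochastic LQ problem with a single discrete delay δ > 0 in both the state and the control can be sketched as follows. This is only a generic sketch: the coefficient matrices A, A_1, B, B_1, the weights Q, R, G and the initial data φ, η are placeholders introduced here for illustration and are not taken from the thesis.

\[
\begin{aligned}
  dX(t) &= \bigl[\,A\,X(t) + A_1\,X(t-\delta) + B\,u(t) + B_1\,u(t-\delta)\,\bigr]\,dt + \sigma\,dW(t), \qquad t \in [0,T],\\
  X(t)  &= \varphi(t),\ t \in [-\delta,0], \qquad u(t) = \eta(t),\ t \in [-\delta,0),\\
  J(u(\cdot)) &= \tfrac{1}{2}\,\mathbb{E}\Bigl[\int_0^T \bigl(X(t)^{\top}Q\,X(t) + u(t)^{\top}R\,u(t)\bigr)\,dt + X(T)^{\top}G\,X(T)\Bigr],
\end{aligned}
\]

where J is to be minimized over admissible controls; the feedback regulator can then be sought either through the adjoint FBSDE or through the value function, as described above.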
We also apply our results on the general FBSDEs to the LQ nonzero-sum stochastic differential game problem with delay.

In the general case, we treat a stochastic recursive optimal control problem in which both the controlled state dynamics and the recursive utility may depend on a segment of the history, not only on several points; that is, the system is described by a stochastic functional differential equation (SFDE for short). For such problems we prove that the value function satisfies a Bellman-type dynamic programming principle. Because of the various forms of path dependence, closed forms of Itô's formula or the Dynkin formula are generally hard to obtain, so deriving the Hamilton-Jacobi-Bellman (HJB) equation is not easy. Using the weak infinitesimal generator introduced in Mohammed [44] and the joint quadratic variations in Fuhrman et al. [24], we obtain an infinite-dimensional HJB equation and show that the value function is a viscosity solution of this equation.

Finally, as an application of our results, we consider a class of dynamic advertising problems under uncertainty in the presence of carryover and distributed forgetting effects, which is also discussed in Gozzi and Marinelli [26]. We deal with the problem using a maximum principle in infinite dimensions, a method different from theirs and first introduced in Hu and Peng [28]. This part also generalizes our results in Section 2.3.

The thesis consists of five chapters. In the following we list the main results.

Chapter 1: We introduce the problems studied in Chapters 2 to 5.

Chapter 2: We study the stochastic optimal control problem for the system with delay given by (2.3)-(2.5). We give the existence and uniqueness of the solution for this type of SDDE, and then study the stochastic optimal control problem in which the domain of the control is convex. We have the following stochastic maximum principle.

Theorem 2.2.6. Let u(·) be an optimal control of the stochastic optimal control problem with delay subject to (2.3)-(2.5), and let X(·) be the corresponding optimal trajectory. Then the maximum condition (?) holds for all 0 < t < T, where the Hamiltonian function is defined by (?) and (p(·), z(·)) is the solution of the anticipated BSDE (2.12), which serves as the adjoint equation.

Moreover, we obtain the following sufficient optimality result.

Theorem 2.2.8. Suppose u(·) ∈ A, let X(·) be the corresponding trajectory, and let p(t) and z(t) be the solution of the adjoint equation (2.12). If (H2.5)-(H2.6) and (2.13) (or (2.16)) hold for u(·), then u(·) is an optimal control for the stochastic delayed optimal control problem (2.3)-(2.5).

We then apply our maximum principle to study a production and consumption choice optimization problem with delay, in which the capital χ(·) of the investor satisfies a delayed SDE and the problem is to choose the consumption rate c(t) so as to maximize a given performance functional; together these constitute problem (2.17)-(2.18). The explicit solution of this optimal control problem is given by the following proposition, and numerical results with different delays are shown in Figure 1 and Figure 2.

Proposition 2.2.9. For the production and consumption choice problem (2.17)-(2.18), the optimal consumption rate is c*(t) = (?), where p(t) is of the form (2.20).

Subsequently, we assume that the domain of the controls is non-convex, that the problem involves recursive utility, and that the delays are time-varying. In this case the control variable does not enter the diffusion coefficient σ, i.e. the system is described by a forward-backward equation with delay, and we have the following result.

Theorem 2.3.6. Let u(·) be an optimal control and (χ(·), y(·), z(·)) be the corresponding trajectory. Then, for all 0 ≤ t ...
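Schematically, the forward-backward system with time-varying delay referred to above can be written in the following generic form. The coefficients b, σ, f, Φ, the bound \bar{δ} on the delay function δ(·), and the initial path φ are assumptions made here for illustration rather than the thesis's own equations; as stated above, the control does not enter the diffusion coefficient σ.

\[
\begin{aligned}
  dX(t) &= b\bigl(t, X(t), X(t-\delta(t)), u(t), u(t-\delta(t))\bigr)\,dt + \sigma\bigl(t, X(t), X(t-\delta(t))\bigr)\,dW(t), \qquad t \in [0,T],\\
  X(t)  &= \varphi(t), \qquad t \in [-\bar{\delta}, 0],\\
  -dy(t) &= f\bigl(t, X(t), X(t-\delta(t)), y(t), z(t), u(t)\bigr)\,dt - z(t)\,dW(t), \qquad y(T) = \Phi(X(T)),
\end{aligned}
\]

with the recursive utility J(u(·)) = y(0) as the quantity to be optimized over admissible controls u(·).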
Keywords/Search Tags:System with delay, Stochastic differential delay equation, Stochastic optimal control with delay, Maximum principle, Forward-backward stochastic differential equation, Stochastic LQ problem with delay, Dynamic programming principle