
Some Problems About Optimal Control Theory Of Stochastic Differential Equations

Posted on: 2016-03-31    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Q X Wang    Full Text: PDF
GTID: 1220330473461755    Subject: Applied Mathematics
Abstract/Summary:
The maximum principle, formulated and derived by Pontryagin and his group in the 1950s, is truly a milestone of optimal control theory. It states that any optimal control, along with the optimal state trajectory, must solve the so-called Hamiltonian system, which is a two-point boundary value problem (also called a forward-backward differential equation), together with a maximum condition on a function called the Hamiltonian. The mathematical significance of the maximum principle lies in the fact that maximizing the Hamiltonian is much easier than solving the original control problem, which is infinite-dimensional.

The original version of Pontryagin's maximum principle was for deterministic problems. In deriving the maximum principle, one first slightly perturbs an optimal control by means of a spike variation, then considers the first-order term in a Taylor expansion with respect to this perturbation. By sending the perturbation to zero, one obtains a variational inequality, and the final desired result then follows from duality. However, one encounters an essential difficulty when the diffusion term also depends on the control: the Itô stochastic integral ∫_t^{t+ε} σ dW is only of order ε^{1/2}, so the usual first-order variation method fails. To overcome this difficulty, one needs to study the second-order terms in the Taylor expansion of the spike variation, which leads to a stochastic maximum principle.

In this thesis, we consider several types of maximum principle. In Chapter 1, we introduce the development of stochastic differential equations and some basic definitions. In Chapter 2, we consider the maximum principle for delayed stochastic differential equations driven by fractional Brownian motions. In Section 2.2, we give some definitions and basic facts of fractional calculus. In Section 2.3, we prove the well-posedness of the solutions. As the main result of Chapter 2, we give the necessary condition of the maximum principle in Section 2.4, stated as follows.

Theorem 0.0.2. Assume that b and σ satisfy (A1) and (A2), and that (x*(t), u*(t)) is the optimal pair of Problem (C). Then the associated anticipated backward stochastic differential equation holds, together with the corresponding maximum condition.

In Section 2.5, we give a linear-quadratic problem for this system as an application. Here, we use the Ekeland variational principle and the convexity of the problem to prove the existence and uniqueness of the solution, and then give the necessary conditions of the maximum principle. These two theorems are stated as follows.

Theorem 0.0.3. Let (x(t), u(t)) ∈ Λ be the solution of the corresponding system, where A(t), Ā(t), B(t), C(t), C̄(t), D(t) ≤ L, |x0(t)| ≤ F, Q(t), Q̄(t), R(t), S(t) are positive-definite matrices, and L, F are positive constants. Then there exists a unique optimal control pair (x*(t), u*(t)).

Theorem 0.0.4. Let (x(t), u(t)) ∈ Λ be the optimal control pair of the corresponding system; then the necessary conditions of the maximum principle hold.

In Chapter 3, we introduce a stochastic equation driven by an α-stable process. Analogously to Chapter 2, we give the well-posedness of the solution as follows.

Theorem 0.0.5. Let b and σ be measurable functions satisfying (H1) and (H2), and let T > 0 be independent of X(0). Then the stochastic differential equation admits a unique solution.

In Section 3.3, we give some estimates of the solution.
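Before the conditions of Section 3.4, it may help to recall the general shape of these objects in the classical Brownian-motion setting. The display below is only a textbook-style sketch (a standard Peng-type maximum principle with one-dimensional Brownian motion and a cost to be minimized); the coefficients b, σ, f, h and the specific formulas are generic illustrations and are not taken from the fractional-Brownian or α-stable systems studied in this thesis.

\[
\begin{aligned}
&\text{state:} && dx(t) = b(t,x(t),u(t))\,dt + \sigma(t,x(t),u(t))\,dW(t), \qquad x(0)=x_0,\\
&\text{cost:} && J(u) = \mathbb{E}\Big[\int_0^T f(t,x(t),u(t))\,dt + h(x(T))\Big] \;\longrightarrow\; \min,\\
&\text{Hamiltonian:} && H(t,x,u,p,q) = \langle p,\, b(t,x,u)\rangle + \langle q,\, \sigma(t,x,u)\rangle - f(t,x,u),\\
&\text{first-order adjoint:} && dp(t) = -H_x\big(t,x^*(t),u^*(t),p(t),q(t)\big)\,dt + q(t)\,dW(t), \qquad p(T) = -h_x\big(x^*(T)\big),\\
&\text{second-order adjoint:} && dP(t) = -\Big(b_x^\top P + P\,b_x + \sigma_x^\top P\,\sigma_x + \sigma_x^\top Q + Q\,\sigma_x + H_{xx}\Big)\,dt + Q(t)\,dW(t), \qquad P(T) = -h_{xx}\big(x^*(T)\big),\\
&\text{maximum condition:} && H\big(t,x^*(t),u^*(t),p(t),q(t)\big) - H\big(t,x^*(t),u,p(t),q(t)\big)\\
& && \quad - \tfrac12\,\big[\sigma(t,x^*(t),u^*(t)) - \sigma(t,x^*(t),u)\big]^\top P(t)\,\big[\sigma(t,x^*(t),u^*(t)) - \sigma(t,x^*(t),u)\big] \;\ge\; 0
\end{aligned}
\]

for all u ∈ U, almost every t ∈ [0, T], almost surely, with the coefficient derivatives in the second-order adjoint equation evaluated along (t, x*(t), u*(t)). The appearance of the second-order adjoint pair (P(t), Q(t)) is exactly the price paid for a control-dependent diffusion: since the spike variation only produces terms of order ε^{1/2} in the Itô integral, the second-order expansion described above cannot be avoided.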
In Section 3.4, we give the necessary and sufficient conditions of the maximum principle, stated as follows.

Theorem 0.0.6. Assume that b and σ satisfy (H1) and (H2), and that u*(t) ∈ U[0, T] is the optimal control of the state equation. Then (y(t), z(t)) is the solution of the corresponding dual equation, and the maximum condition holds.

In Section 3.5, we give a linear-quadratic problem as an application and obtain the explicit expression of the optimal control.

Theorem 0.0.8. If the stochastic Riccati equation admits a solution, then the stochastic LQ problem in Section 3.5 is well-posed.
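For comparison with Theorem 0.0.8, the following sketch records how a Riccati equation produces the explicit optimal control in the classical Brownian-driven LQ problem with deterministic coefficients; the matrices A, B, C, D, Q, R, G and the formulas below are standard textbook assumptions used only for illustration, not the α-stable system of Section 3.5.

\[
\begin{aligned}
&\text{state:} && dx(t) = \big(A x(t) + B u(t)\big)\,dt + \big(C x(t) + D u(t)\big)\,dW(t), \qquad x(0)=x_0,\\
&\text{cost:} && J(u) = \mathbb{E}\Big[\int_0^T \big(x(t)^\top Q\, x(t) + u(t)^\top R\, u(t)\big)\,dt + x(T)^\top G\, x(T)\Big],\\
&\text{Riccati equation:} && \dot P(t) + A^\top P + P A + C^\top P C + Q
  - \big(P B + C^\top P D\big)\big(R + D^\top P D\big)^{-1}\big(B^\top P + D^\top P C\big) = 0, \qquad P(T) = G,\\
&\text{optimal feedback:} && u^*(t) = -\big(R + D^\top P(t) D\big)^{-1}\big(B^\top P(t) + D^\top P(t) C\big)\, x^*(t).
\end{aligned}
\]

As long as the Riccati equation admits a solution with R + D^⊤P(t)D invertible, the optimal control is the linear state feedback above, which parallels Theorem 0.0.8, where solvability of the stochastic Riccati equation gives well-posedness of the LQ problem in Section 3.5 and an explicit expression for the optimal control.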
Keywords/Search Tags: Stochastic differential equations, Optimal control, Pontryagin maximum principle, Dual equation, Variational equation, α-stable, Delay, Fractional calculus