
Necessary Conditions Of Optimal Control Governed By Stochastic And Deterministic Distributed Parameter Systems

Posted on: 2013-01-25
Degree: Doctor
Type: Dissertation
Country: China
Candidate: H Q Yu
Full Text: PDF
GTID: 1110330371980926
Subject: Probability theory and mathematical statistics
Abstract/Summary:
In this paper, the necessary conditions for optimal control governed by stochastic and deterministic distributed parameter systems are studied. The main results are presented in the form of Pontryagin's maximum principle.

Chapter 1 is the introduction. We first recall the origin of the maximum principle for optimal control problems and the history of optimal control theory, in particular for systems governed by deterministic and stochastic distributed parameter equations, and then summarize the main content of the paper.

In Chapter 2, the boundary control problem for semilinear stochastic parabolic equations with Neumann boundary conditions is studied. Under a super-parabolicity condition, the existence and uniqueness of weak solutions to the state and adjoint equations with non-homogeneous boundary conditions are established by the method of Galerkin approximation. Using these results, necessary optimality conditions for control systems under convex constraints are derived by the method of convex perturbation.

Chapter 3 considers a stochastic control problem in which the dynamics are given by a controlled backward stochastic heat equation with Neumann boundary control and boundary noise. By defining a suitable notion of mild solution of the state equation, the existence and uniqueness of this solution are proved. A global maximum principle for the control problem is then presented. At the end of the chapter, the main result is applied to a backward linear-quadratic control problem, in which an optimal control is obtained explicitly as a feedback of the solution to a forward-backward stochastic partial differential equation.

In Chapter 4, we present the maximum principle for optimal control governed by a backward stochastic evolution equation.
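For orientation, the general shape of a stochastic maximum principle of the kind established in Chapter 2 can be sketched as follows. The notation here is purely illustrative (a state operator A, coefficients f and σ, running cost ℓ, terminal cost h, and a convex control set U are generic placeholders, not the exact objects of the thesis):

```latex
% Schematic stochastic maximum principle under convex control constraints
% (illustrative notation only; not the precise setting of the thesis).
% Forward state equation:
\[
  dy(t) = \bigl[Ay(t) + f(y(t),u(t))\bigr]\,dt + \sigma(y(t),u(t))\,dW(t),
  \qquad y(0) = y_0 .
\]
% Cost functional to be minimized over admissible controls u(\cdot)\in U:
\[
  J(u) = \mathbb{E}\int_0^T \ell(y(t),u(t))\,dt + \mathbb{E}\,h(y(T)) .
\]
% First-order adjoint pair (p,q) solving a backward stochastic PDE:
\[
  dp(t) = -\bigl[A^{*}p(t) + f_y^{*}p(t) + \sigma_y^{*}q(t)
          + \ell_y\bigr]\,dt + q(t)\,dW(t),
  \qquad p(T) = h_y(y(T)) .
\]
% With the Hamiltonian
% H(y,u,p,q) = \langle p, f(y,u)\rangle + \langle q, \sigma(y,u)\rangle
%              + \ell(y,u),
% the convex-perturbation argument yields the variational inequality:
\[
  \bigl\langle H_u(\bar y(t),\bar u(t),p(t),q(t)),\, v-\bar u(t)
  \bigr\rangle \;\ge\; 0
  \quad \text{for all } v\in U,\ \text{a.e. } t,\ \mathbb{P}\text{-a.s.}
\]
```

The convex-perturbation method referred to in the abstract consists in perturbing an optimal control along directions v − ū that stay in the convex set U, which is what produces a variational inequality of this type rather than a pointwise maximum condition.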
This equation is driven by an infinite-dimensional martingale in a separable Hilbert space and involves an unbounded, time-dependent linear differential operator. The maximum principle is proved by the convex variational method.

Chapter 5 deals with Pontryagin's principle for optimal control problems for the 2D Navier-Stokes equations with integral state constraints and coupled integral control-state constraints. As an application of the main result, necessary conditions for local solutions in the sense of L^r (r > 2) are also obtained.

In Chapter 6, we continue to study controlled fluid dynamic systems in which the constraints again take the form of coupled integral control-state constraints. Two techniques are used to obtain the results: ε-perturbation of the admissible control set and diffuse perturbation of the admissible control. With these tools, necessary conditions for an L^2-local optimal solution of the control problem are derived under suitable assumptions. At the end of the chapter, the results are applied to special fluid dynamic systems, including fluid systems driven through their boundary and magnetohydrodynamics (MHD).
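As a rough illustration of the constrained problems treated in Chapters 5 and 6, a prototype has the following form (the function spaces, constraint sets, and the precise meaning of local solutions are specified in the thesis; all symbols below, such as the control operator B and the constraint maps g and G, are schematic):

```latex
% Schematic constrained optimal control problem for the 2D Navier--Stokes
% equations (illustrative notation only).
% State equation with viscosity \nu, pressure \pi, and control action Bu:
\[
  \partial_t y - \nu\,\Delta y + (y\cdot\nabla)y + \nabla\pi = Bu,
  \qquad \nabla\cdot y = 0, \qquad y(0) = y_0 .
\]
% Cost to minimize over admissible controls u:
\[
  J(y,u) = \int_0^T \ell\bigl(t, y(t), u(t)\bigr)\,dt ,
\]
% subject to an integral state constraint and a coupled integral
% control-state constraint, with S and K closed convex sets:
\[
  \int_0^T g\bigl(t, y(t)\bigr)\,dt \in S,
  \qquad
  \int_0^T G\bigl(t, y(t), u(t)\bigr)\,dt \in K .
\]
```

In a Pontryagin principle for such a problem, each integral constraint contributes a Lagrange multiplier that enters the adjoint system, which is why the perturbation techniques mentioned in the abstract (ε-perturbation of the admissible control set and diffuse perturbation of the control) are needed to handle the state and control-state couplings simultaneously.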
Keywords/Search Tags: Optimal control, Distributed parameter system, Necessary condition, Maximum principle, Stochastic partial differential equation, Fluid dynamic system, Backward stochastic partial differential equation, Mixed control-state constraints