
Stochastic Optimal Control Problems On Time Scales

Posted on: 2022-09-02  Degree: Doctor  Type: Dissertation
Country: China  Candidate: Y J Zhu  Full Text: PDF
GTID: 1480306311967189  Subject: Statistics
Abstract/Summary:
In 1988, Hilger introduced the theory of time scales in his Ph.D. thesis in order to unify continuous and discrete analysis. Since then, the theory has attracted worldwide attention because of its flexible time structure and broad range of applications. In reality, the time variables of many processes are neither classically continuous nor uniformly discrete. Consider, for example, a simple series circuit consisting of a resistor, a capacitor, and a self-inductance coil: when the capacitor is closed periodically at a fixed frequency, the rate of change of the current is naturally described by the derivative on a time scale. Time scales are therefore widely applicable, and they have received considerable attention in recent years. At the same time, real control systems contain random factors that often cannot be ignored. It is therefore of great significance to study stochastic optimal control problems in the time-scales framework, especially for systems whose time variables have a complex structure.

This thesis investigates optimal control problems governed by stochastic Δ-differential systems on arbitrary time scales. Compared with the classical continuous-time and discrete-time settings, the study of optimal control on time scales not only establishes a unified control theory containing the continuous and discrete cases, avoiding repetitive study and clarifying the differences and relations between them, but also provides theoretical guidance for control systems whose time variables consist of intervals and isolated points in practical engineering problems. We focus on two kinds of stochastic optimal control problems. The first is the optimal control of stochastic linear systems on time scales, where we investigate the stochastic linear quadratic (LQ) optimal control problem and the mean-field stochastic LQ optimal control problem. The second is the optimal control of nonlinear stochastic systems on time scales, for which the dynamic programming principle and the maximum principle are established. The main results of the thesis are as follows.

In Chapter 1, we present the research background and outline the contents of the thesis. In Chapter 2, we introduce the results from time-scale theory needed in the subsequent chapters.

In Chapter 3, we focus on quadratic-cost optimal control problems governed by a class of stochastic linear control systems on time scales. To solve this problem, a product rule for stochastic processes is established, and the Riccati Δ-differential equation (RΔE) and an auxiliary linear equation are introduced via the completion-of-squares method. Under suitable conditions, the optimal control is given in linear feedback form. Motivated by this, we investigate the mean-field stochastic LQ optimal control problem on time scales. In contrast to existing control problems on time scales, the control system and the cost functional contain expectation terms of the state and the control. For the state equation, the existence and uniqueness of the solution are proved by an iterative method. The feedback expression of the optimal control is given in terms of the solutions of two coupled RΔEs. In addition, we discuss the existence and uniqueness of the solution of the RΔE and give a necessary and sufficient condition for its solvability.

In Chapter 4, we study the dynamic programming principle for stochastic nonlinear optimal control problems on time scales. To this end, the chain derivative of a multivariate composite function is defined and its chain rule is established. On this basis, Itô's formula for stochastic processes is reconstructed. With the help of Itô's formula, the optimality principle and the Hamilton-Jacobi-Bellman (HJB) equation of the optimal control problem are obtained. It should be pointed out that the HJB equation, a second-order partial Δ-differential equation containing an expectation, is much more complex than the corresponding equations in the previous literature; the discontinuity at isolated points is what causes the expectation to appear in the HJB equation. We also apply the dynamic programming principle to the stochastic LQ control problem.

In Chapter 5, two kinds of stochastic nonlinear control systems are considered and the corresponding maximum principles are established. The first is the optimal control problem of a stochastic Δ-differential system. Assuming that the control domain is a convex set, the duality relation is established by the product rule and a suitable form of the adjoint equation is derived; the maximum principle is then obtained by a variational method. When the result degenerates to the discrete-time case, it appears inconsistent with existing results; we analyze this phenomenon and prove the equivalence of the two results. In addition, the application of the stochastic maximum principle to the stochastic LQ control problem is discussed. The second control system is governed by a stochastic Δ-differential equation (SΔE) with a conditional-expectation term. We prove the existence and uniqueness of the solution of the SΔE by an iterative method; compared with existing results for this kind of equation, our equation is more complex because of the conditional-expectation term. Using a convex variational method, the variational equations of the control system and related estimates are given, which enables us to derive the variational inequality. We then obtain the corresponding adjoint equation by duality. From the variational inequality and the adjoint equation, we obtain the necessary condition for optimal control, namely the maximum principle. Its discrete-time specialization is also a new result.

In Chapter 6, we apply the theoretical results to problems in financial mathematics and to seasonal population models. A basic problem in financial mathematics and mathematical economics is the choice of investment strategy, and the mean-variance portfolio has been widely studied. We reconstruct the classical continuous-time and discrete-time mean-variance portfolio models in the time-scales framework and find some new phenomena. The population of seasonal mosquitoes has both continuous and discrete characteristics; based on this, a control model of the mosquito population is established on time scales. The result shows that a pulse control at the beginning of the dormancy period can suppress the number of mosquitoes in the following year.
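To make the central notion concrete for readers unfamiliar with time scales, the following minimal Python sketch (an illustration added to this summary, not code from the thesis) numerically evaluates the Δ-derivative: at a right-scattered point it is the forward difference (f(σ(t)) − f(t)) / μ(t), where σ is the forward jump operator and μ the graininess, while at a right-dense point it reduces to the ordinary derivative, approximated here by a crude central difference. Representing a time scale as a finite sorted list of points is an assumption of the sketch.

```python
import bisect

def sigma(ts, t, tol=1e-9):
    """Forward jump operator sigma(t) = inf{s in T : s > t} on a finite,
    sorted time scale ts; returns t itself at the maximum of the scale."""
    i = bisect.bisect_right(ts, t + tol)
    return ts[i] if i < len(ts) else t

def delta_derivative(f, ts, t, tol=1e-9):
    """Delta-derivative of f at t on the time scale ts: forward difference
    at right-scattered points, numerical derivative at right-dense points."""
    s = sigma(ts, t)
    mu = s - t                      # graininess mu(t)
    if mu > tol:                    # right-scattered (isolated) point
        return (f(s) - f(t)) / mu
    h = 1e-6                        # right-dense: crude central difference,
    return (f(t + h) - f(t - h)) / (2 * h)  # assumes f defined near t

# T = 0.5*Z: for f(t) = t^2 the Delta-derivative is 2t + mu(t) = 2t + 0.5
ts = [0.5 * k for k in range(10)]
print(delta_derivative(lambda t: t * t, ts, 1.0))  # → 2.5
```

On T = R the scattered branch is never taken and the formula collapses to the classical derivative; on T = hZ it collapses to the forward difference quotient, which is exactly the unification of continuous and discrete calculus described above.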
Keywords/Search Tags: Time Scales, Linear Quadratic Optimal Control, Mean-Field, Riccati Equation, Dynamic Programming Principle, Itô's Formula, HJB Equation, Maximum Principle, Mean-Variance Investment Portfolio, Species Model