Dynamic programming and variational inequalities in singular stochastic control

Posted on: 1993-11-09
Degree: Ph.D.
Type: Thesis
University: Brown University
Candidate: Zhu, Hang
GTID: 2470390014996227
Subject: Mathematics

Abstract/Summary:

I study a general formulation of stochastic control problems in which the controls are of bounded variation on a finite time interval, with possible jumps or singularly continuous displacements. The dynamic programming (Hamilton-Jacobi-Bellman) equation takes the form of a variational inequality, which is non-classical in stochastic control because it imposes a pointwise constraint on the gradient of the value function.

The thesis begins with a few examples illustrating dynamic optimization models with bounded variation controls. In Chapter 2, I give a complete proof of the dynamic programming principle for the singular stochastic control model and verify that the optimal value function is a viscosity solution of the DP variational inequality. In Chapters 3 and 4, I discuss both analytic and weak characterizations of the value function for nondegenerate as well as degenerate control problems. As a major result, I explore the notion of weak (viscosity) solution and prove a comparison principle for weak solutions of the DP variational inequality. I also develop an approach, resting on regularization techniques via perturbation and penalization, that yields various analytic and weak characterizations of the value function, in particular as the unique solution of its DP variational inequality.

In the last two chapters, the focus turns to numerical approximation and to the application of convex analysis to the stochastic control model with bounded variation controls. The Markov chain approximation method is discussed in full detail and shown to yield a monotone, stable, and consistent finite-difference scheme for the DP variational inequality. The convex duality approach is extended by imposing a finite-fuel constraint on the bounded variation controls, whereby the dynamic optimization problem reduces to a static minimization problem over a convex, weak*-compact space of vector-valued occupation measures at each level of available fuel.

The main goal of this thesis is to establish a theoretical framework for the study of stochastic control problems with bounded variation controls. The methods and approaches adapted and developed in this context exhibit important applications of recent developments in several areas, including viscosity solution theory for nonlinear second-order partial differential equations, the Markov chain approximation method for optimal stochastic control, and the convex duality approach in optimal control theory.

Keywords/Search Tags: Stochastic control, Variation, Dynamic programming, Control problems, Convex
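For orientation, the pointwise gradient constraint mentioned in the abstract can be made concrete in a standard one-dimensional, infinite-horizon discounted example from the singular control literature, in which an increasing control pushes the state downward at a proportional cost c per unit of displacement. This is an illustrative special case under assumed notation (drift b, diffusion coefficient sigma, running cost f, discount rate beta), not necessarily the exact formulation treated in the thesis:

```latex
% Representative DP variational inequality for a 1-D singular control problem
% (illustrative special case; b, \sigma, f, \beta, c are assumed model data)
\min\Bigl\{\, \beta v(x) - \tfrac{1}{2}\sigma^{2}(x)\,v''(x) - b(x)\,v'(x) - f(x),\;\; c - v'(x) \Bigr\} = 0,
\qquad x \in \mathbb{R}.
```

The first argument of the minimum is the usual HJB operator, which vanishes in the region where no control is exerted; the second encodes the pointwise constraint v'(x) <= c, and wherever that constraint is active it is optimal to displace the state with the singular control.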
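The Markov chain approximation method replaces the controlled diffusion by a locally consistent Markov chain on a grid and solves the resulting discrete dynamic programming equation. The following is a minimal sketch for the illustrative one-dimensional example above, assuming hypothetical drift, volatility, and running-cost functions, crude reflecting boundaries, and plain value iteration as the solver; it is meant only to convey the structure of a monotone, stable, and consistent scheme of Kushner-Dupuis type, not to reproduce the scheme developed in the thesis.

```python
import numpy as np

def solve_singular_vi(x_min=0.0, x_max=4.0, n=201, beta=0.5, c=1.0,
                      b=lambda x: 0.2 - 0.1 * x,      # hypothetical drift
                      sigma=lambda x: 0.3 + 0.0 * x,  # hypothetical volatility
                      f=lambda x: x ** 2,             # hypothetical running cost
                      tol=1e-8, max_iter=50_000):
    """Value iteration for the discretized variational inequality
    min{ beta*v - (1/2)*sigma^2*v'' - b*v' - f,  c - v' } = 0
    via a locally consistent Markov chain approximation (a sketch)."""
    x = np.linspace(x_min, x_max, n)
    h = x[1] - x[0]
    bb, ss, ff = b(x), sigma(x), f(x)

    # Locally consistent transition probabilities and interpolation intervals.
    q = ss ** 2 + h * np.abs(bb)           # normalizing factor
    dt = h ** 2 / q                        # state-dependent time step
    p_up = (0.5 * ss ** 2 + h * np.maximum(bb, 0.0)) / q
    p_dn = (0.5 * ss ** 2 + h * np.maximum(-bb, 0.0)) / q
    disc = np.exp(-beta * dt)              # per-step discount factor

    V = np.zeros(n)
    for _ in range(max_iter):
        V_new = V.copy()
        # Diffusion branch: discounted expectation over the chain plus running cost.
        cont = disc[1:-1] * (p_up[1:-1] * V[2:] + p_dn[1:-1] * V[:-2]) + ff[1:-1] * dt[1:-1]
        # Singular-control branch: instantaneous push one grid cell down at cost c*h.
        push = V[:-2] + c * h
        V_new[1:-1] = np.minimum(cont, push)
        V_new[0] = V_new[1]                # crude reflecting boundary at x_min
        V_new[-1] = V_new[-2] + c * h      # pushing down is always admissible at x_max
        if np.max(np.abs(V_new - V)) < tol:
            return x, V_new
        V = V_new
    return x, V

if __name__ == "__main__":
    grid, value = solve_singular_vi()
    print(value[::50])                     # a few samples of the approximate value function
```

The elementwise minimum mirrors the two branches of the variational inequality: a discounted expectation over the chain's transition probabilities (the no-action, or diffusion, branch) and the cost of an instantaneous displacement to the neighboring grid point (the singular-control branch). The scheme is monotone because each branch is nondecreasing in the values at the other grid points, which is the property exploited by convergence arguments for such approximations.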