
Deterministic and stochastic optimal control and cell mapping methods

Posted on: 2003-05-16
Degree: Ph.D
Type: Dissertation
University: University of Delaware
Candidate: Crespo, Luis Guillermo
Full Text: PDF
GTID: 1460390011484180
Subject: Engineering
Abstract/Summary:
In the present document, the optimal control problem of deterministic and stochastic systems is studied. A numerical approach based on Bellman's principle of optimality is developed and used to generate global feedback control solutions. The solution method allows us to study strongly nonlinear systems and control problems with state and control constraints.

In the process, the phase space is discretized into a countable number of regions, called cells. For the deterministic problem, the point-to-point dynamics of the system is then replaced by a cell-to-cell dynamics using the Simple Cell Mapping method. The admissible control range and the time interval are also discretized.

These discretizations allow us to apply a dynamic programming strategy to solve the optimal control problem with fixed terminal conditions. By making a sequence of interrelated decisions, control solutions that drive the system from all initial conditions in the domain of interest to the target set are found progressively.

In the deterministic case, the existing solution approach has been modified in order to guarantee its convergence and reduce its computational demands. The method is then validated and applied to various nonlinear systems. Extensions that allow us to consider multiply connected state domains and to solve the fixed-final-time optimal control problem and the tracking control problem are proposed as well.

In the stochastic case, a novel solution approach based on the stochastic version of Bellman's principle of optimality is proposed. The method makes use of the moment equations of the state variables and the cumulant-neglect closure method of order two. In this manner, the short-time Gaussian approximation (STGA) is used to describe the transient behavior of the system response.
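The cell-mapping and dynamic-programming ideas above can be illustrated with a minimal sketch. The system, cell count, control set, and target tolerance below are illustrative assumptions, not values from the dissertation: the scalar system x' = u stands in for a generic dynamic system, its state space is discretized into cells, each admissible control maps a cell center to a successor cell, and backward value iteration over the cell-to-cell map recovers a global feedback law driving every cell toward the target set.

```python
import numpy as np

# Minimal sketch of cell mapping + dynamic programming for a
# minimum-time problem on the illustrative scalar system x' = u.
N, dt = 101, 0.05
centers = np.linspace(-1.0, 1.0, N)      # cell centers on [-1, 1]
controls = np.array([-1.0, 0.0, 1.0])    # discretized admissible controls
target = np.abs(centers) < 0.02          # cells forming the target set

def successor(i, u):
    """Cell-to-cell map: one integration step from the cell center."""
    x_next = centers[i] + u * dt
    return int(np.clip(round((x_next + 1.0) / 2.0 * (N - 1)), 0, N - 1))

cost = np.where(target, 0.0, np.inf)     # cost-to-go per cell
policy = np.zeros(N)                     # feedback control per cell
for _ in range(N):                       # value iteration to a fixed point
    for i in range(N):
        if target[i]:
            continue
        for u in controls:
            c = dt + cost[successor(i, u)]
            if c < cost[i]:
                cost[i], policy[i] = c, u

# The resulting policy pushes every cell toward the origin.
print(policy[10], policy[90])
```

Because the value iteration runs over cells rather than over a continuum of states, the same sweep produces the feedback law for all initial conditions in the domain of interest at once, which is what makes the solution global.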
The method allows us to generate feedback laws to control strongly nonlinear systems that are externally and/or parametrically excited, when state and control constraints are present, and when the diffusion term is constant or state dependent.

Once the global control solution is found, the generalized cell mapping (GCM) method is used to study the time evolution of the probability density function of the system response. In this way, the transient and stationary effects of the control law can be evaluated. The solution method is validated by comparing the numerical solution of a particular dynamic system with its analytical control solution, available for this particular case. The methodology is then applied to several systems subject to random excitation. The effects of the control bounds on the system response are studied numerically, revealing the bifurcation processes induced by the control.

In order to study dynamic systems whose moment equations for the state variables cannot be closed analytically, Taylor expansions about each cell center are used to approximate the behavior of the vector field locally throughout the admissible domain. This practice leads to very accurate results. Thanks to the original discretization of the state space, such an extension can be implemented easily without considerably increasing the computational demands of the method.

By transforming the original dynamic equations and the admissible state space to a new domain, reflecting boundary conditions can be considered. The new problem can be solved using the proposed approach. Once the global feedback control solution is found, it can be transformed back to the original domain, provided that the transformation is invertible. In this way we introduce the use of transformations of variables in the framework of optimal control with fixed final-state terminal conditions.
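The GCM step above can also be sketched briefly. Everything specific in this example is an assumption for illustration: the controlled system is taken to be the linear stochastic system x' = -x plus additive noise (standing in for a system under an already-computed feedback law), and the one-step transition probabilities between cells are estimated by sampling. Propagating the cell probability vector through the transition matrix then traces the transient evolution of the response PDF toward its stationary form.

```python
import numpy as np

# Minimal sketch of generalized cell mapping (GCM): estimate a
# cell-to-cell transition probability matrix by sampling, then
# propagate the cell probability vector of the response.
rng = np.random.default_rng(0)
N, dt, sigma = 81, 0.05, 0.3
centers = np.linspace(-2.0, 2.0, N)

def step(x):
    """One Euler-Maruyama step of the illustrative controlled system
    x' = -x (a stabilizing feedback is assumed) with additive noise."""
    return x - x * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)

def to_cell(x):
    return np.clip(np.round((x + 2.0) / 4.0 * (N - 1)).astype(int), 0, N - 1)

# P[i, j] ~ Prob(cell i -> cell j) over one time step.
samples = 500
P = np.zeros((N, N))
for i in range(N):
    js = to_cell(step(np.full(samples, centers[i])))
    np.add.at(P[i], js, 1.0 / samples)

# Start from a PDF concentrated at x = 1.5 and iterate to stationarity.
p = np.zeros(N)
p[to_cell(np.array([1.5]))[0]] = 1.0
for _ in range(400):
    p = p @ P

mean = float(p @ centers)   # stationary mean of the controlled response
```

The same transition matrix serves for both the transient analysis (intermediate iterates of `p`) and the stationary analysis (the fixed point of `p = p @ P`), which is why GCM pairs naturally with the cell discretization already built for the control solution.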
Keywords/Search Tags: Optimal control, Method, Deterministic, Stochastic, Cell mapping, State, System, Conditions