
A study of differential dynamic programming for optimal control

Posted on: 1991-03-28 | Degree: Ph.D | Type: Dissertation
University: The University of Iowa | Candidate: Lin, Tshen-Chan | Full Text: PDF
GTID: 1470390017451869 | Subject: Engineering
Abstract/Summary:
A differential dynamic programming (DDP) approach for solving general optimal control problems is developed. The approach is a successive-approximation technique based on dynamic programming rather than the calculus of variations, and it applies to optimal control of both linear and nonlinear systems. Starting from Bellman's principle of optimality, the Hamilton-Jacobi-Bellman equation is derived, and the differential dynamic programming technique is built on that equation. Both discrete-time and continuous-time formulations are developed. In the continuous-time approach, a set of differential equations is solved backward in time to construct the control law; in the discrete-time approach, a set of linear algebraic equations is solved at each grid point.

For unconstrained problems, the discrete-time and continuous-time DDP methods are developed and compared. A comparison with the nonlinear programming (NLP) approach is also given.

For constrained problems, two DDP approaches are developed. The first solves a quadratic programming problem at every time step to construct the control law. The second uses the multiplier method to aggregate all of the point-wise constraints and then applies the unconstrained DDP approach to construct the control law.

A method for the simultaneous design of control and structural systems is developed. The problem is divided into two subproblems: structural design and control design. A general nonlinear programming approach is used to solve the structural design subproblem, and the differential dynamic programming technique is used to design the control system.

Several numerical examples demonstrate the DDP technique and the procedure for simultaneous design of control and structural systems.

It is concluded that the DDP approach is more efficient than the NLP approach for solving optimal control problems, and that it can handle nonlinear problems that cannot be treated by pole allocation or linear feedback control laws. Among the DDP approaches, the continuous-time approach is superior to the discrete-time approach in terms of numerical stability, efficiency, and flexibility, and it is recommended for optimal control of constrained and/or nonlinear problems.
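As context for the abstract, the Hamilton-Jacobi-Bellman equation it refers to can be written in its standard continuous-time form; the notation below follows the textbook convention and is not necessarily the dissertation's:

```latex
% Standard continuous-time Hamilton-Jacobi-Bellman equation for a value function V(x,t),
% dynamics \dot{x} = f(x,u,t), and running cost L(x,u,t), with terminal condition
% V(x,t_f) = \phi(x(t_f)):
-\frac{\partial V}{\partial t}(x,t)
  = \min_{u}\left[\, L(x,u,t)
  + \left(\frac{\partial V}{\partial x}(x,t)\right)^{\!\mathsf{T}} f(x,u,t) \right]
```

To illustrate the discrete-time statement that a set of linear algebraic equations is solved at each grid point, the following is a minimal sketch, not the dissertation's code, of a DDP backward/forward pass on a linear-quadratic subproblem, where the backward sweep reduces to the familiar Riccati recursion. The double-integrator dynamics, cost weights, and horizon are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): backward sweep of discrete-time
# DDP on a linear-quadratic subproblem. The "linear algebraic equations" solved at each
# grid point yield the feedback gain K_k; the forward pass applies u_k = -K_k x_k.
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])    # discrete double integrator: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])      # scalar control (acceleration)
Q = np.diag([1.0, 0.1])                  # running state weight
R = np.array([[0.01]])                   # running control weight
Qf = np.diag([10.0, 1.0])                # terminal state weight
N = 50                                   # number of grid points

# Backward pass: quadratic value function V_k(x) = x' P_k x, gain from a linear solve.
P = Qf
gains = []
for k in reversed(range(N)):
    Quu = R + B.T @ P @ B                # Hessian of the stage Q-function in u
    Qux = B.T @ P @ A                    # cross term between u and x
    K = np.linalg.solve(Quu, Qux)        # linear algebraic equations at this grid point
    P = Q + A.T @ P @ A - Qux.T @ K      # value-function (Riccati) recursion
    gains.append(K)
gains.reverse()

# Forward pass: roll the resulting feedback control law out from an initial state.
x = np.array([1.0, 0.0])
for k in range(N):
    u = -gains[k] @ x
    x = A @ x + B @ u
print("final state:", x)
```

For a genuinely nonlinear problem, DDP repeats such backward/forward passes around successively improved nominal trajectories, using local quadratic expansions of the cost and dynamics in place of the fixed A, B, Q, R above.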
Keywords/Search Tags: Optimal control, Dynamic programming, DDP, Approach, Developed, Nonlinear, Technique