Course Basics
Credit Hours 03
Lecture(s): 1 per week, 150 min each
Recitation/Lab (per week): –
Tutorial (per week): –
Course Distribution
Core/Elective: Elective (Electrical Engineering majors)
Open for Student Category: All
Closed for Student Category: None
COURSE DESCRIPTION
The course prepares students to do independent work at the frontiers of systems theory and control engineering. It
builds on standard linear systems theory to explore issues related to optimization, estimation, and adaptation.
Students will learn to formulate and appreciate fundamental limitations in control, filtering, and estimation.
Topics include a review of linear control systems, static constrained optimization, calculus of variations, dynamic
optimization, Bellman's principle of optimality, the maximum principle, two-point boundary value problems and Riccati
equations, the linear quadratic regulator (LQR), learning and adaptation in controllers, and policy- and value-iteration.
COURSE PREREQUISITE(S)
EE561 Digital Control Systems, or by permission of the instructor.
COURSE OBJECTIVES
1. Understand fundamental limits of control and estimation.
2. Interpret, reproduce, and create deep mathematical results for advanced control engineering.
Examination Detail
Midterm Exam
Yes/No: Yes
Combine/Separate:
Duration: 120 min
Preferred Date:
Exam Specifications:
Final Exam
Yes/No: Yes
Combine/Separate:
Duration: 180 min
Exam Specifications:
COURSE OVERVIEW
Week 2: Review of pole placement techniques, full-state feedback, observer design, reference tracking, stabilization. [Franklin]
Week 3: Discrete-time optimal control (LQR) as a constrained optimization problem, Riccati equation, origins of the two-point boundary value problem. [Franklin, Bryson]
Week 4: Discrete-time optimal control (contd.): LQR steady-state optimal control, symmetric root-locus interpretation for optimality. Examples. [Franklin, Bryson]
Week 5: Differentiability and calculus of variations. [Bamieh]
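The discrete-time LQR material above centers on the backward Riccati recursion. As a minimal sketch of that computation (the dynamics, cost weights, and horizon below are illustrative placeholders, not taken from the course), the finite-horizon feedback gains can be obtained as follows:

```python
import numpy as np

# Illustrative double-integrator-like system; not from the course materials.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)           # state weighting
R = np.array([[1.0]])   # input weighting
N = 50                  # horizon length

P = Q.copy()            # terminal cost P_N = Q
gains = []
for _ in range(N):
    # K_k = (R + B' P_{k+1} B)^{-1} B' P_{k+1} A
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    # Riccati update: P_k = Q + A' P_{k+1} (A - B K_k)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)

K_ss = gains[-1]        # for long horizons K_k approaches the steady-state gain
```

For a long enough horizon the recursion converges, and the steady-state gain stabilizes the closed loop (all eigenvalues of `A - B @ K_ss` inside the unit circle), which is the steady-state LQR behavior discussed in week 4.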
Textbook(s)/Supplementary Readings
The course will be taught from a combination of textbooks and course notes. The following books and course notes will
be used as reference.
Dynamic Optimization by Arthur Bryson. Addison Wesley, 1999.
Optimal Control and Linear Quadratic Problems. Lecture notes by Bassam Bamieh, 2001.
Reinforcement Learning and Approximate Dynamic Programming by Frank Lewis and Derong Liu, 2013.