This textbook offers a concise yet rigorous introduction to calculus of variations and optimal control theory, and is a self-contained resource for graduate students in engineering, applied mathematics, and related subjects. Designed specifically for a one-semester course, the book begins with calculus of variations, preparing the ground for optimal control. It then gives a complete proof of the maximum principle and covers key topics such as the Hamilton-Jacobi-Bellman theory of dynamic programming and linear-quadratic optimal control. Calculus of Variations and Optimal Control Theory also traces the historical development of the subject and features numerous exercises, notes and references at the end of each chapter, and suggestions for further study.
Daniel Liberzon is associate professor of electrical and computer engineering at the University of Illinois, Urbana-Champaign. He is the author of Switching in Systems and Control.
"A very scholarly and concise introduction to optimal control theory. Liberzon nicely balances rigor and accessibility, and provides fascinating historical perspectives and thought-provoking exercises. A course based on this book will be a pleasure to take."--Andrew R. Teel, University of California, Santa Barbara
"A very scholarly and concise introduction to optimal control theory. Liberzon nicely balances rigor and accessibility, and provides fascinating historical perspectives and thought-provoking exercises. A course based on this book will be a pleasure to take."--Andrew R. Teel, University of California, Santa Barbara
Contents
Preface .......... xiii
1 Introduction .......... 1
2 Calculus of Variations .......... 26
3 From Calculus of Variations to Optimal Control .......... 71
4 The Maximum Principle .......... 102
5 The Hamilton-Jacobi-Bellman Equation .......... 156
6 The Linear Quadratic Regulator .......... 180
7 Advanced Topics .......... 200
Bibliography .......... 225
Index .......... 231
1.1 OPTIMAL CONTROL PROBLEM
We begin by describing, very informally and in general terms, the class of optimal control problems that we want to eventually be able to solve. The goal of this brief motivational discussion is to fix the basic concepts and terminology without worrying about technical details.
The first basic ingredient of an optimal control problem is a control system. It generates possible behaviors. In this book, control systems will be described by ordinary differential equations (ODEs) of the form
ẋ = f(t, x, u), x(t0) = x0 (1.1)
where x is the state taking values in R^n, u is the control input taking values in some control set U ⊂ R^m, t is time, t0 is the initial time, and x0 is the initial state. Both x and u are functions of t, but we will often suppress their time arguments.
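To make the system (1.1) concrete, the following sketch integrates a control system by forward Euler. The particular dynamics f, the control law u, and the step size are illustrative assumptions, not taken from the text.

```python
# A minimal sketch of simulating a control system x' = f(t, x, u) by forward
# Euler integration. The dynamics, control law, and step size below are
# illustrative assumptions.

def simulate(f, u, t0, x0, tf, dt=1e-3):
    """Integrate x' = f(t, x, u(t, x)) from t0 to tf; return final (t, x)."""
    t, x = t0, list(x0)
    while t < tf:
        ut = u(t, x)            # evaluate the control at the current (t, x)
        dx = f(t, x, ut)        # evaluate the vector field
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        t += dt
    return t, x

# Example: scalar system x' = -x + u with constant control u = 1.
# Starting from x(0) = 0, the state approaches the equilibrium x = 1.
tF, xF = simulate(lambda t, x, u: [-x[0] + u], lambda t, x: 1.0, 0.0, [0.0], 5.0)
```

Each choice of control function u generates one "possible behavior" of the system: a state trajectory x(·) on [t0, tf].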
The second basic ingredient is the cost functional. It associates a cost with each possible behavior. For given initial data (t0, x0), the behaviors are parameterized by control functions u. Thus, the cost functional assigns a cost value to each admissible control. In this book, cost functionals will be denoted by J and will be of the form
J(u) := ∫_{t0}^{tf} L(t, x(t), u(t)) dt + K(tf, xf) (1.2)
where L and K are given functions (running cost and terminal cost, respectively), tf is the final (or terminal) time which is either free or fixed, and xf := x(tf) is the final (or terminal) state which is either free or fixed or belongs to some given target set. Note again that u itself is a function of time; this is why we say that J is a functional (a real-valued function on a space of functions).
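Given a control and its state trajectory, the cost (1.2) can be approximated numerically as a running-cost integral plus a terminal cost. The trapezoidal quadrature and the test problem below are illustrative assumptions.

```python
# A minimal sketch of evaluating the cost functional (1.2) for a given
# control: J(u) = integral of L(t, x, u) dt plus terminal cost K(tf, xf).
# The trapezoid rule and the sample data are illustrative assumptions.

def cost(L, K, ts, xs, us):
    """Approximate J on a time grid ts with states xs and controls us."""
    J = 0.0
    for i in range(len(ts) - 1):
        h = ts[i + 1] - ts[i]
        J += 0.5 * h * (L(ts[i], xs[i], us[i]) + L(ts[i + 1], xs[i + 1], us[i + 1]))
    return J + K(ts[-1], xs[-1])

# Minimal-time cost: L = 1, K = 0, so J equals the elapsed time tf - t0.
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
xs = us = [0.0] * len(ts)       # states/controls are irrelevant when L = 1
J = cost(lambda t, x, u: 1.0, lambda t, x: 0.0, ts, xs, us)   # J = 2.0
```

Note that the minimal-time problem (L ≡ 1, K ≡ 0) reduces J to tf − t0, which is exactly the objective in Example 1.1 below.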
The optimal control problem can then be posed as follows: Find a control u that minimizes J(u) over all admissible controls (or at least over nearby controls). Later we will need to come back to this problem formulation and fill in some technical details. In particular, we will need to specify what regularity properties should be imposed on the function f and on the admissible controls u to ensure that state trajectories of the control system are well defined. Several versions of the above problem (depending, for example, on the role of the final time and the final state) will be stated more precisely when we are ready to study them. The reader who wishes to preview this material can find it in Section 3.3.
It can be argued that optimality is a universal principle of life, in the sense that many, if not most, processes in nature are governed by solutions to some optimization problems (although we may never know exactly what is being optimized). We will soon see that fundamental laws of mechanics can be cast in an optimization context. From an engineering point of view, optimality provides a very useful design principle, and the cost to be minimized (or the profit to be maximized) is often naturally contained in the problem itself. Some examples of optimal control problems arising in applications include the following:
• Send a rocket to the moon with minimal fuel consumption.
• Produce a given amount of chemical in minimal time and/or with minimal amount of catalyst used (or maximize the amount produced in given time).
• Bring sales of a new product to a desired level while minimizing the amount of money spent on the advertising campaign.
• Maximize throughput or accuracy of information transmission over a communication channel with a given bandwidth/capacity.
The reader will easily think of other examples. Several specific optimal control problems will be examined in detail later in the book. We briefly discuss one simple example here to better illustrate the general problem formulation.
Example 1.1 Consider a simple model of a car moving on a horizontal line. Let x ∈ R be the car's position and let u be the acceleration which acts as the control input. We put a bound on the maximal allowable acceleration by letting the control set U be the bounded interval [-1, 1] (negative acceleration corresponds to braking). The dynamics of the car are ẍ = u. In order to arrive at a first-order differential equation model of the form (1.1), let us relabel the car's position x as x1 and denote its velocity ẋ by x2. This gives the control system ẋ1 = x2, ẋ2 = u with state (x1, x2)^T ∈ R^2. Now, suppose that we want to "park" the car at the origin, i.e., bring it to rest there, in minimal time. This objective is captured by the cost functional (1.2) with the constant running cost L ≡ 1, no terminal cost (K ≡ 0), and the fixed final state xf = (0, 0)^T. We will solve this optimal control problem in Section 4.4.1. (The basic form of the optimal control strategy may be intuitively obvious, but obtaining a complete description of the optimal control requires some work.)
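Example 1.1 can be explored numerically before the theory is developed. The sketch below simulates the double integrator under the intuitively natural bang-bang strategy (full acceleration toward the origin, then full braking); the initial state (1, 0) and the switch time are illustrative assumptions, and the genuine optimal control is derived in Section 4.4.1.

```python
# A numerical sketch of Example 1.1: the double integrator x1' = x2, x2' = u
# under a candidate bang-bang control. The initial state (1, 0) and the
# switching time of 1 time unit are illustrative assumptions, not the
# general optimal law.

def drive(x1, x2, u, T, dt=1e-4):
    """Integrate the double integrator for time T under constant control u."""
    steps = round(T / dt)
    for _ in range(steps):
        x1 += dt * x2   # position update
        x2 += dt * u    # velocity update
    return x1, x2

# From rest at x1 = 1: accelerate toward the origin with u = -1 for 1 time
# unit, then brake with u = +1 for 1 time unit; the car comes to rest
# (approximately) at the origin, so this control parks the car in 2 time units.
x1, x2 = drive(1.0, 0.0, -1.0, 1.0)
x1, x2 = drive(x1, x2, +1.0, 1.0)
```

That this particular switch time works can be checked by hand: under u = -1 the state reaches (1/2, -1) at t = 1, and under u = +1 it reaches (0, 0) at t = 2. Whether 2 time units is actually the minimum is precisely the question the maximum principle answers.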
In this book we focus on the mathematical theory of optimal control. We will not undertake an in-depth study of any of the applications mentioned above. Instead, we will concentrate on the fundamental aspects common to all of them. After finishing this book, the reader familiar with a specific application domain should have no difficulty reading papers that deal with applications of optimal control theory to that domain, and will be prepared to think creatively about new ways of applying the theory.
We can view the optimal control problem as that of choosing the best path among all paths feasible for the system, with respect to the given cost function. In this sense, the problem is infinite-dimensional, because the space of paths is an infinite-dimensional function space. This problem is also a dynamic optimization problem, in the sense that it involves a dynamical system and time. However, to gain appreciation for this problem, it will be useful to first recall some basic facts about the more standard static finite-dimensional optimization problem, concerned with finding a minimum of a given function f : ...