Many devices (we call them dynamical systems, or simply systems) behave like black boxes: they receive an input, this input is transformed following some laws (usually a differential equation), and an output is observed. The problem is to regulate the input in order to control the output, that is, to obtain a desired output. Such a mechanism, where the input is modified according to the measured output, is called feedback. The study and design of such automatic processes is called control theory. As we will see, the term system embraces any device, and control theory has a wide variety of applications in the real world. Control theory is an interdisciplinary domain at the junction of differential and difference equations, system theory, and statistics. Moreover, the solution of a control problem involves many topics of numerical analysis and leads to many interesting computational problems: linear algebra (QR, SVD, projections, Schur complement, structured matrices, localization of eigenvalues, computation of the rank, Jordan normal form, Sylvester and other equations, systems of linear equations, regularization, etc.), root localization for polynomials, inversion of the Laplace transform, computation of the matrix exponential, approximation theory (orthogonal polynomials, Padé approximation, continued fractions and linear fractional transformations), optimization, least squares, dynamic programming, etc. So, control theory is also a good excuse for presenting various (sometimes unrelated) issues of numerical analysis and the procedures for their solution. This book is not a book on control.
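The feedback mechanism described above can be illustrated with a minimal numerical sketch: a proportional controller drives a simple first-order system toward a desired output. The system, the gain, and all parameter values here are hypothetical choices made purely for illustration, not taken from the book.

```python
def simulate(setpoint=1.0, a=0.5, k=2.0, dt=0.01, steps=2000):
    """Feedback sketch: the input u is recomputed from the measured
    output x at every step, for the toy system x' = -a*x + u.
    All parameter values are illustrative."""
    x = 0.0  # measured output, initially at rest
    for _ in range(steps):
        error = setpoint - x   # compare desired and measured output
        u = k * error          # feedback: input depends on the output
        x += dt * (-a * x + u) # Euler step of the system dynamics
    return x

# The output settles at k*setpoint/(a + k) = 0.8, not exactly at the
# setpoint: a purely proportional feedback leaves a steady-state error.
print(round(simulate(), 3))
```

The residual error shown here is one reason practical designs add integral action or solve an optimization problem for the gain, which is where the numerical machinery listed above enters.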
This book presents numerical methods and computational aspects for linear integral equations. Such equations occur in various areas of applied mathematics, physics, and engineering. The material covered in this book, though not exhaustive, offers useful techniques for solving a variety of problems. Historical information covering the nineteenth and twentieth centuries is available in fragments in Kantorovich and Krylov (1958), Anselone (1964), Mikhlin (1967), Lonseth (1977), Atkinson (1976), Baker (1978), Kondo (1991), and Brunner (1997). Integral equations are encountered in a variety of applications in many fields including continuum mechanics, potential theory, geophysics, electricity and magnetism, kinetic theory of gases, hereditary phenomena in physics and biology, renewal theory, quantum mechanics, radiation, optimization, optimal control systems, communication theory, mathematical economics, population genetics, queueing theory, and medicine. Most of the boundary value problems involving differential equations can be converted into problems in integral equations, but there are certain problems which can be formulated only in terms of integral equations. A computational approach to the solution of integral equations is, therefore, an essential branch of scientific inquiry.
The authors present analytical methods for the synthesis of linear stationary and periodic optimal control systems, and develop effective computational algorithms for the synthesis of optimal regulators and filters. The procedures of Youla-Jabr-Bongiorno (1976) and Desoer-Lin-Murray-Saeks (1980) are special cases of the general procedure presented here. The monograph also includes original computational algorithms (solutions of ordinary and generalized Lyapunov and Riccati equations, polynomial matrix factorization) and illustrates the effectiveness of these algorithms with examples in the field of numerical methods for the optimization of linear controlled systems.
This book presents and demonstrates stabilizer design techniques that can be used to solve stabilization problems with constraints. These methods have their origins in convex programming and stability theory. However, to provide a practical capability in stabilizer design, the methods are tailored to the special features and needs of this field. Hence, the main emphasis of this book is on the methods of stabilization, rather than on optimization and stability theory. The text is divided into three parts. Part I contains some background material. Part II is devoted to the behavior of control systems, taking examples from mechanics to illustrate the theory. Finally, Part III deals with nonlocal stabilization problems, including a study of the global stabilization problem.
A survey is given of the state of the art in the theory and numerical solution of general autonomous linear quadratic optimal control problems (continuous and discrete) with differential-algebraic equation constraints. It incorporates the newest developments on differential-algebraic equations, Riccati equations, and invariant subspace problems. In particular, it gives a decision chart that can be used to select the right numerical method according to the special properties of the problem. The book closes a gap between mathematical theory, numerical solution, and engineering application. The mathematical tools are kept as basic as possible in order to address the different groups of readers: mathematicians and engineers.