An overview of the computational issues and the statistical, numerical, and algebraic properties of recent advances in TLS and EIV modeling, together with new generalizations and applications. Experts from several disciplines prepared overview papers, which were presented at the conference and are included in this book.
In response to a growing interest in Total Least Squares (TLS) and Errors-In-Variables (EIV) modeling by researchers and practitioners, well-known experts from several disciplines were invited to prepare overview papers and present them at the third international workshop on TLS and EIV modeling, held in Leuven, Belgium, August 27-29, 2001. These invited papers, representing two-thirds of the book, together with a selection of the other contributions presented, give a complete overview of the main scientific achievements in TLS and Errors-In-Variables modeling since 1996. In this way, the book nicely complements two earlier books on TLS (SIAM 1991 and 1997). Not only computational issues, but also statistical, numerical, and algebraic properties are described, as well as many new generalizations and applications. Given the growing interest in these techniques, it is strongly believed that this book will aid and stimulate users in applying the new techniques and models correctly to their own practical problems.
This is the first book devoted entirely to total least squares. The authors give a unified presentation of the TLS problem: its basic principles are described, its algebraic, statistical, and sensitivity properties are discussed, and generalizations are presented. Applications are surveyed to facilitate its use in an even wider range of fields. Whenever possible, comparisons are made with the well-known least squares method. A basic knowledge of numerical linear algebra and matrix computations, together with some notion of elementary statistics, is required of the reader; however, some background material is included to make the book reasonably self-contained.
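To make the contrast with ordinary least squares concrete, the following sketch (an illustration by this summary, not code from the book; the toy data and variable names are assumptions) computes the classical TLS solution of an overdetermined system Ax ≈ b from the SVD of the augmented matrix [A b] and compares it with the least squares solution.

```python
# Minimal sketch: classical TLS for Ax ≈ b via the SVD of [A b].
# Illustrative only; the toy data and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 2
x_true = np.array([1.5, -0.7])

A_exact = rng.normal(size=(m, n))
b_exact = A_exact @ x_true
A = A_exact + 0.05 * rng.normal(size=(m, n))   # noise in the data matrix
b = b_exact + 0.05 * rng.normal(size=m)        # noise in the right-hand side

# Ordinary least squares: only b is assumed noisy.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Total least squares: right singular vector of [A b] belonging to the
# smallest singular value, scaled so its last component equals -1.
C = np.column_stack([A, b])
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]
x_tls = -v[:n] / v[n]

print("LS :", x_ls)
print("TLS:", x_tls)
```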
This volume collects refereed contributions based on the presentations made at the Sixth Workshop on Advanced Mathematical and Computational Tools in Metrology, held at the Istituto di Metrologia “G. Colonnetti” (IMGC), Torino, Italy, in September 2003. It provides a forum for metrologists, mathematicians, and software engineers that encourages a more effective synthesis of skills, capabilities, and resources, and it promotes collaboration in the context of EU programmes, EUROMET and EA projects, and MRA requirements. It contains articles by an important, worldwide group of metrologists and mathematicians involved in measurement science and, together with the five previous volumes in this series, constitutes an authoritative source for the mathematical, statistical, and software tools necessary to modern metrology. The proceedings have been selected for coverage in the Index to Scientific & Technical Proceedings (ISTP / ISI Proceedings, print and CD-ROM versions) and in CC Proceedings, Engineering & Physical Science.
This book presents an overview of the different errors-in-variables (EIV) methods that can be used for system identification, and readers will explore the properties of EIV problems. Such problems play an important role when the purpose is to determine the physical laws that describe the process, rather than to predict or control its future behaviour; they typically occur when the aim of the modelling is to gain physical insight into a process. Identifiability of the model parameters in EIV problems is a non-trivial issue, and sufficient conditions for identifiability are given. The author covers various modelling aspects that, taken together, lead to a solution, including the characterization of the noise properties, the extension to multivariable systems, and continuous-time models. The book focuses on finding solution methods that are compatible with the set of noisy data, something that traditional approaches, such as (total) least squares, do not provide. A number of identification methods for the EIV problem are presented. Each method is accompanied by a detailed analysis based on statistical theory, and the relationships between the different methods are explained. A multitude of methods are covered, including instrumental variables methods, methods based on bias compensation, covariance matching methods, and prediction error and maximum-likelihood methods. The book shows how many of the methods can be applied in either the time or the frequency domain and provides special methods adapted to the case of periodic excitation. It concludes with a chapter specifically devoted to practical aspects and user perspectives that will facilitate the transfer of the theoretical material to application in real systems. Errors-in-Variables Methods in System Identification gives readers the possibility of recovering the true system dynamics from noisy measurements while solving over-determined systems of equations, making it suitable for statisticians and mathematicians alike. The book also serves as a reference for researchers and computer engineers because of its detailed treatment of EIV problems.
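As a rough illustration of why ordinary least squares fails in the EIV setting and how an instrumental-variables estimate can help, here is a small hypothetical simulation (not taken from the book; the model, noise levels, and names are assumptions): the regressor is observed with noise, so least squares is biased towards zero, while an IV estimate using a second independent measurement of the same regressor as instrument remains consistent.

```python
# Toy errors-in-variables example: LS is biased when the regressor is noisy;
# an instrumental-variables (IV) estimate using a second noisy copy is not.
# All names and noise levels here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 50_000
a_true = 2.0

x = rng.normal(size=N)                 # true (unobserved) regressor
y = a_true * x + 0.1 * rng.normal(size=N)

x1 = x + 0.5 * rng.normal(size=N)      # noisy measurement used as regressor
x2 = x + 0.5 * rng.normal(size=N)      # independent noisy copy -> instrument

a_ls = (x1 @ y) / (x1 @ x1)            # attenuated towards zero
a_iv = (x2 @ y) / (x2 @ x1)            # consistent despite regressor noise

print(f"true {a_true:.3f}  LS {a_ls:.3f}  IV {a_iv:.3f}")
```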
This book is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to applications of the theory, ranging from systems and control theory to psychometrics; special knowledge of the application fields is not required. The second edition of Low-Rank Approximation is a thoroughly edited and extensively rewritten revision. It contains new chapters and sections that introduce the topics of: • variable projection for structured low-rank approximation; • missing data estimation; • data-driven filtering and control; • stochastic model representation and identification; • identification of polynomial time-invariant systems; and • blind identification with a deterministic input model. The book is complemented by a software implementation of the methods presented, which makes the theory directly applicable in practice. In particular, all numerical examples in the book are included in demonstration files and can be reproduced by the reader, giving hands-on experience with the theory and methods detailed. In addition, each chapter is completed with a section of exercises, to which complete solutions are provided, and MATLAB®/Octave examples that help the reader assimilate the theory on a chapter-by-chapter basis. Low-Rank Approximation (second edition) is a broad survey of the theory and applications of low-rank approximation that will be of direct interest to researchers in system identification, control and systems theory, numerical linear algebra, and optimization. The supplementary problems and solutions also render it suitable for use in teaching graduate courses in those subjects.
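The sketch below is not the variable-projection solver described in the book or its accompanying software; it is a minimal Cadzow-style alternating-projection heuristic (a known suboptimal approach), included only to illustrate what structured low-rank approximation asks for: the nearest matrix of a given rank that also keeps, in this case, Hankel structure. The signal, window length, and function names are assumptions.

```python
# Minimal Cadzow-style sketch of Hankel structured low-rank approximation:
# alternate between rank truncation (SVD) and projection back onto the set
# of Hankel matrices (anti-diagonal averaging). Suboptimal heuristic only.
import numpy as np

def hankel(w, L):
    """L x (len(w)-L+1) Hankel matrix built from the series w."""
    K = len(w) - L + 1
    return np.array([w[i:i + K] for i in range(L)])

def unhankel(H):
    """Average the anti-diagonals back into a series."""
    L, K = H.shape
    w = np.zeros(L + K - 1)
    counts = np.zeros(L + K - 1)
    for i in range(L):
        for j in range(K):
            w[i + j] += H[i, j]
            counts[i + j] += 1
    return w / counts

def cadzow(w, L, rank, iters=50):
    for _ in range(iters):
        H = hankel(w, L)
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank truncation
        w = unhankel(H_low)                            # restore Hankel structure
    return w

rng = np.random.default_rng(2)
t = np.arange(64)
clean = np.cos(0.3 * t)                  # its Hankel matrix has rank 2
noisy = clean + 0.2 * rng.normal(size=t.size)
denoised = cadzow(noisy, L=16, rank=2)
print("error before:", np.linalg.norm(noisy - clean).round(3),
      " after:", np.linalg.norm(denoised - clean).round(3))
```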
A comprehensive treatment of numerical linear algebra from the standpoint of both theory and practice. The fourth edition of Gene H. Golub and Charles F. Van Loan's classic is an essential reference for computational scientists and engineers, in addition to researchers in the numerical linear algebra community. Anyone whose work requires the solution of a matrix problem and an appreciation of its mathematical properties will find this book to be an indispensable tool. This revision is a cover-to-cover expansion and renovation of the third edition. It now includes an introduction to tensor computations and brand new sections on: • fast transforms; • parallel LU; • discrete Poisson solvers; • pseudospectra; • structured linear equation problems; • structured eigenvalue problems; • large-scale SVD methods; and • polynomial eigenvalue problems. Matrix Computations is packed with challenging problems, insightful derivations, and pointers to the literature: everything needed to become a matrix-savvy developer of numerical methods and software. The second most cited math book of 2012 according to MathSciNet, the book has placed in the top 10 since 2005.
The first book of its kind, Power Converters and AC Electrical Drives with Linear Neural Networks systematically explores the application of neural networks in the field of power electronics, with particular emphasis on the sensorless control of AC drives. It presents the classical space-vector theory used in the identification and control of electrical drives and power converters, and examines the improvements that can be attained when using linear neural networks. The book integrates power electronics and electrical drives with artificial neural networks (ANN). Organized into four parts, it first deals with voltage source inverters and their control. It then covers AC electrical drive control, focusing on induction and permanent magnet synchronous motor drives. The third part examines theoretical aspects of linear neural networks, particularly the neural EXIN family. The fourth part highlights original applications in electrical drives and power quality, ranging from neural-based parameter estimation and sensorless control to distributed generation systems from renewable sources and active power filters. Simulation and experimental results are provided to validate the theories. Written by experts in the field, this state-of-the-art book requires basic knowledge of electrical machines and power electronics, as well as some familiarity with control systems, signal processing, linear algebra, and numerical analysis. Offering multiple paths through the material, the text is suitable for undergraduate and postgraduate students, theoreticians, practicing engineers, and researchers involved in applications of ANNs.
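As a rough indication of what a "linear neural network" means in this context, the snippet below is a generic ADALINE-style sketch (an assumption of this summary, not an algorithm reproduced from the book) that uses the LMS rule to estimate the parameters of a linear model online from streaming noisy samples, the kind of building block behind neural-based parameter estimation.

```python
# Sketch of an ADALINE-style linear neuron trained with the LMS rule,
# estimating the parameters of a linear model from streaming noisy samples.
# Data, learning rate, and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
theta_true = np.array([0.8, -0.3])      # "physical" parameters to recover
w = np.zeros(2)                         # neuron weights
mu = 0.05                               # LMS learning rate

for _ in range(20_000):
    u = rng.normal(size=2)              # regressor (e.g. measured signals)
    d = theta_true @ u + 0.01 * rng.normal()   # noisy target sample
    e = d - w @ u                       # prediction error
    w += mu * e * u                     # LMS / Widrow-Hoff update

print("estimated:", w.round(3), " true:", theta_true)
```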
The presentation of a novel theory in orthogonal regression. The literature on neural-based algorithms is often dedicated to principal component analysis (PCA) and considers minor component analysis (MCA) a mere consequence. Breaking the mold, Neural-Based Orthogonal Data Fitting is the first book to start with the MCA problem and arrive at important conclusions about the PCA problem. The book proposes several neural networks, all endowed with a complete theory that not only explains their behavior but also compares them with existing neural and traditional algorithms. EXIN neurons, which are of the authors' invention, are introduced, explained, and analyzed. Further, the book studies the algorithms as a differential geometry problem, a dynamic problem, a stochastic problem, and a numerical problem, and it demonstrates the novel aspects of its main theory, including applications in computer vision and linear system identification. The book shows both the derivation of the TLS EXIN from the MCA EXIN and the original derivation, and it: • introduces TLS problems and sketches their history and applications; • presents MCA EXIN and compares it with other existing approaches; • introduces the TLS EXIN neuron and the SCG and BFGS acceleration techniques and compares them with TLS GAO; • outlines the GeTLS EXIN theory for generalizing and unifying the regression problems; and • establishes the GeMCA theory, starting with the identification of GeTLS EXIN as a generalized eigenvalue problem. In dealing with the mathematical and numerical aspects of EXIN neurons, the book is mainly theoretical; all the algorithms, however, have been used in analyzing real-time problems and yield accurate solutions. Neural-Based Orthogonal Data Fitting is useful for statisticians, applied mathematics experts, and engineers.
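The link between minor component analysis and TLS that the book builds on can be stated compactly: the TLS solution of Ax ≈ b is read off from the minor eigenvector (smallest eigenvalue) of the augmented covariance [A b]ᵀ[A b]. The batch sketch below (illustrative data and names; it is the underlying identity, not the EXIN neuron's online learning rule) shows only that relationship, mirroring the SVD-based TLS computation given earlier through the eigen-decomposition that neural MCA algorithms approximate.

```python
# Batch illustration of the MCA/TLS link: the minor eigenvector of
# [A b]^T [A b] (smallest eigenvalue) yields the TLS solution of Ax ≈ b.
# Not the EXIN neuron's online rule, just the underlying identity.
import numpy as np

rng = np.random.default_rng(4)
m, n = 200, 3
x_true = rng.normal(size=n)

A0 = rng.normal(size=(m, n))
A = A0 + 0.05 * rng.normal(size=(m, n))        # noisy data matrix
b = A0 @ x_true + 0.05 * rng.normal(size=m)    # noisy right-hand side

C = np.column_stack([A, b])
evals, evecs = np.linalg.eigh(C.T @ C)  # eigenvalues in ascending order
v = evecs[:, 0]                         # minor component
x_tls = -v[:n] / v[n]

print("TLS via minor component:", x_tls.round(3))
print("reference x_true       :", x_true.round(3))
```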