This second, much enlarged edition by Lehmann and Casella of Lehmann's classic text on point estimation maintains the outlook and general style of the first edition. All of the topics are updated, while an entirely new chapter on Bayesian and hierarchical Bayesian approaches is provided, and there is much new material on simultaneous estimation. Each chapter concludes with a Notes section which contains suggestions for further study. This is a companion volume to the second edition of Lehmann's "Testing Statistical Hypotheses".
when certain parameters in the problem tend to limiting values (for example, when the sample size increases indefinitely, the intensity of the noise approaches zero, etc.). To address the problem of asymptotically optimal estimators, consider the following important case. Let X₁, X₂, …, Xₙ be independent observations with the joint probability density f(x, θ) (with respect to Lebesgue measure on the real line) which depends on the unknown parameter θ ∈ Θ ⊂ R¹. It is required to derive the best (asymptotic) estimator θ*ₙ(X₁, …, Xₙ) of the parameter θ. The first question which arises in connection with this problem is how to compare different estimators or, equivalently, how to assess their quality: in terms of the mean square deviation from the parameter, or perhaps in some other way. The presently accepted approach to this problem, resulting from A. Wald's contributions, is as follows: introduce a nonnegative function wₙ(θ₁, θ₂), θ₁, θ₂ ∈ Θ (the loss function); given two estimators θ*₁,ₙ and θ*₂,ₙ, the estimator for which the expected loss (risk) E_θ wₙ(θ*ⱼ,ₙ, θ), j = 1 or 2, is smallest is called the better of the two with respect to wₙ at the point θ (here E_θ denotes the expectation evaluated under the assumption that the true value of the parameter is θ). Obviously, such a method of comparison is not without its defects.
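The risk-based comparison described above can be sketched with a small simulation. This is a minimal illustration, not drawn from the text: it assumes normally distributed data with unit variance, squared-error loss w(t, θ) = (t − θ)², and compares two standard estimators of the mean (sample mean vs. sample median) by approximating their risks at a fixed θ via Monte Carlo.

```python
import random
import statistics

def risk(estimator, theta, n=50, reps=20000, seed=0):
    """Approximate the risk E_theta w(estimator(X), theta) by simulation,
    with squared-error loss w(t, theta) = (t - theta)**2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        x = [rng.gauss(theta, 1.0) for _ in range(n)]
        total += (estimator(x) - theta) ** 2
    return total / reps

theta = 2.0
r_mean = risk(statistics.mean, theta)      # risk of the sample mean
r_median = risk(statistics.median, theta)  # risk of the sample median
print(r_mean, r_median)
```

For normal data the sample mean comes out better at every θ (its risk is about 1/n, versus roughly π/(2n) for the median), illustrating a comparison "with respect to wₙ at the point θ"; for other distributions the ordering can reverse, which is one reason the pointwise comparison is not without its defects.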
This book introduces readers to the fundamentals of estimation and dynamical system theory, and their applications in the field of multi-source information fused autonomous navigation for spacecraft. The content is divided into two parts: theory and application. The theory part (Part I) covers the mathematical background of navigation algorithm design, including parameter and state estimation methods, linear fusion, centralized and distributed fusion, observability analysis, Monte Carlo methods, and linear covariance analysis. In turn, the application part (Part II) focuses on autonomous navigation algorithm design for different phases of deep space missions, which involves multiple sensors, such as inertial measurement units, optical image sensors, and pulsar detectors. By concentrating on the relationships between estimation theory and autonomous navigation systems for spacecraft, the book bridges the gap between theory and practice. A wealth of helpful formulas and various types of estimators are also included to help readers grasp basic estimation concepts and to offer them a ready-reference guide.
Written by one of the main figures in twentieth-century statistics, this book provides a unified treatment of first-order large-sample theory. It discusses a broad range of applications, including introductions to density estimation, the bootstrap, and the asymptotics of survey methodology. The book is written at an elementary level, making it accessible to most readers.
Theory and Methods of Statistics covers essential topics for advanced graduate students and professional research statisticians. This comprehensive resource covers many important areas in one manageable volume, including core subjects such as probability theory, mathematical statistics, and linear models, and various special topics, including nonparametrics, curve estimation, multivariate analysis, time series, and resampling. The book presents subjects such as "maximum likelihood and sufficiency," and is written with an intuitive, heuristic approach to build reader comprehension. It also includes many probability inequalities that are not only useful in the context of this text, but also serve as a resource for investigating convergence of statistical procedures.
- Codifies foundational information in many core areas of statistics into a comprehensive and definitive resource
- Serves as an excellent text for select master's and PhD programs, as well as a professional reference
- Integrates numerous examples to illustrate advanced concepts
- Includes many probability inequalities useful for investigating convergence of statistical procedures
These volumes present a selection of Erich L. Lehmann's monumental contributions to Statistics. These works are multifaceted. His early work included fundamental contributions to hypothesis testing, the theory of point estimation, and, more generally, to decision theory. His work in Nonparametric Statistics was groundbreaking. His fundamental contributions in this area include results that came to assuage the anxiety of statisticians who were skeptical of nonparametric methodologies, and his work on concepts of dependence has created a large literature. The two volumes are divided into chapters of related works. Invited contributors have critiqued the papers in each chapter, and the reprinted group of papers follows each commentary. A complete bibliography, which contains links to recorded talks by Erich Lehmann that are freely accessible to the public, and a list of Ph.D. students are also included. These volumes belong in every statistician's personal collection and are a required holding for any institutional library.