This book deals with shrinkage regression estimators obtained by shrinking the ordinary least squares (OLS) estimator towards the origin. In particular, the author's main concern is to compare the sampling properties of a family of Stein-rule (SR) estimators with those of a family of minimum mean squared error (MMSE) estimators.
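To make the shrinkage concrete: a Stein-rule estimator multiplies the OLS estimate by a data-driven factor less than one, pulling it toward the origin, while the MMSE family derives its shrinkage factor by minimizing an operational estimate of mean squared error. The Python sketch below is illustrative rather than code from the book; the function names and the default choice of the constant `a` are assumptions, with `a = (p-2)/(n-p+2)` being the classical choice under weighted quadratic loss.

```python
import numpy as np

def stein_rule(X, y, a=None):
    """One standard form of a Stein-rule (SR) estimator, shrinking OLS
    toward the origin (illustrative sketch, not the book's notation)."""
    n, p = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)    # OLS estimator
    s = np.sum((y - X @ b) ** 2)                 # residual sum of squares
    if a is None:
        a = (p - 2) / (n - p + 2)                # classical choice, needs p >= 3
    shrink = 1.0 - a * s / (b @ (X.T @ X) @ b)   # data-driven shrinkage factor
    return shrink * b

def mmse_shrinkage(X, y):
    """One operational minimum-MSE shrinkage of OLS under weighted
    quadratic loss (illustrative form)."""
    n, p = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.sum((y - X @ b) ** 2) / (n - p)  # unbiased variance estimate
    q = b @ (X.T @ X) @ b
    return (q / (q + p * sigma2)) * b            # shrinkage factor in (0, 1)
```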
This book provides a self-contained introduction to shrinkage estimation for matrix-variate normal distribution models. More specifically, it presents recent techniques and results for estimating mean and covariance matrices in a high-dimensional setting, where the sample covariance matrix is singular. Such high-dimensional models can be analyzed using the same arguments as low-dimensional models, yielding a unified approach to both high- and low-dimensional shrinkage estimation. The unified shrinkage approach not only integrates modern and classical shrinkage estimation but is also required for further development of the field. Beginning with the notion of decision-theoretic estimation, the book explains matrix theory, group invariance, and other mathematical tools for finding better estimators. It also includes examples of shrinkage estimators that improve on standard estimators, such as least squares, maximum likelihood, and minimum risk invariant estimators, and discusses the historical background of, and related topics in, decision-theoretic estimation of parameter matrices. The book is useful for researchers and graduate students in mathematical statistics, as well as in other fields that require data-analysis skills.
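For intuition about why shrinkage matters when the sample covariance matrix is singular (n ≤ p), here is a minimal NumPy sketch of linear shrinkage toward a scaled identity. It is not taken from the book: the weight `alpha` is a fixed user-chosen constant here, whereas the literature (e.g. Ledoit-Wolf) derives data-driven weights.

```python
import numpy as np

def shrink_covariance(X, alpha=0.2):
    """Linear shrinkage of the sample covariance toward a scaled identity.
    The blend is well-conditioned even when n <= p makes S singular."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)   # sample covariance (singular if n <= p)
    mu = np.trace(S) / p          # average eigenvalue sets the target's scale
    return (1 - alpha) * S + alpha * mu * np.eye(p)
```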
A guide to the systematic analytical results for ridge, LASSO, preliminary test, and Stein-type estimators, with applications. Theory of Ridge Regression Estimation with Applications offers a comprehensive guide to the theory and methods of estimation. Ridge regression and LASSO are at the center of all penalty estimators in a range of standard models that are used in many applied statistical analyses. Written by noted experts in the field, the book contains a thorough introduction to penalty and shrinkage estimation and explores the role that ridge, LASSO, and logistic regression play in the computer-intensive areas of neural networks and big data analysis. Designed to be accessible, the book presents detailed coverage of the basic terminology related to various models, such as the location and simple linear models, and to normal and rank theory-based ridge, LASSO, preliminary test, and Stein-type estimators. The authors also include problem sets to enhance learning. This book is a volume in the Wiley Series in Probability and Statistics that provides essential and invaluable reading for all statisticians. This important resource:
- Offers theoretical coverage and computer-intensive applications of the procedures presented
- Contains solutions and alternate methods for prediction accuracy and selecting model procedures
- Is the first book to focus on ridge regression, unifying past research with current methodology
- Uses R throughout the text and includes a companion website containing convenient data sets
Written for graduate students, practitioners, and researchers in various fields of science, Theory of Ridge Regression Estimation with Applications is an authoritative guide to the theory and methodology of statistical estimation.
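As a pointer to what the two central penalty estimators actually compute: ridge has a well-known closed form, while the LASSO is generally fit iteratively but reduces to soft-thresholding of OLS under an orthonormal design. The book uses R; the NumPy sketch below is an illustrative stand-in.

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator: solves (X'X + lam * I) beta = X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def soft_threshold(b, lam):
    """LASSO solution under an orthonormal design: soft-threshold the OLS
    coefficients. (In general the LASSO has no closed form.)"""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)
```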
This book provides a coherent framework for understanding shrinkage estimation in statistics. The term refers to modifying a classical estimator by moving it closer to a target, which may be known a priori or arise from a model; the goal is to construct estimators with improved statistical properties. The book focuses primarily on point and loss estimation of the mean vector of multivariate normal and spherically symmetric distributions. Chapter 1 reviews the statistical and decision-theoretic terminology and results that are used throughout the book. Chapter 2 is concerned with estimating the mean vector of a multivariate normal distribution under quadratic loss from a frequentist perspective. In Chapter 3 the authors take a Bayesian view of shrinkage estimation in the normal setting. Chapter 4 introduces the general classes of spherically and elliptically symmetric distributions; point and loss estimation for these broad classes are studied in subsequent chapters. In particular, Chapter 5 extends many of the results from Chapters 2 and 3 to spherically and elliptically symmetric distributions. Chapter 6 considers the general linear model with spherically symmetric error distributions when a residual vector is available. Chapter 7 then considers the problem of estimating a location vector constrained to lie in a convex set; much of the chapter is devoted to two types of constraint sets, balls and polyhedral cones. In Chapter 8 the authors focus on loss estimation and data-dependent evidence reports. Appendices cover a number of technical topics, including weakly differentiable functions; examples where Stein's identity doesn't hold; Stein's lemma and Stokes' theorem for smooth boundaries; harmonic, superharmonic, and subharmonic functions; and modified Bessel functions.
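The canonical example behind Chapter 2 is the James-Stein estimator, which dominates the observation itself as an estimator of a p-variate normal mean (known variance, p ≥ 3) under quadratic loss. A short sketch, with `sigma2` assumed known:

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """James-Stein estimator of a p-variate normal mean (p >= 3):
    shrink the observation toward the origin."""
    p = x.size
    factor = 1.0 - (p - 2) * sigma2 / np.sum(x ** 2)
    return factor * x

def james_stein_plus(x, sigma2=1.0):
    """Positive-part variant: clip the factor at zero so shrinkage
    never reverses the sign of x (it dominates plain James-Stein)."""
    p = x.size
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / np.sum(x ** 2))
    return factor * x
```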
"This volume presents in detail the fundamental theories of linear regression analysis and diagnosis, as well as the relevant statistical computing techniques so that readers are able to actually model the data using the techniques described in the book. This book is suitable for graduate students who are either majoring in statistics/biostatistics or using linear regression analysis substantially in their subject area." --Book Jacket.
The book covers the basic theory of linear regression models and presents a comprehensive survey of different estimation techniques as alternatives and complements to least squares estimation. Proofs are given for the most relevant results, and the presented methods are illustrated with the help of numerical examples and graphics. Special emphasis is placed on practicability and possible applications. The book is rounded off by an introduction to the basics of decision theory and an appendix on matrix algebra.
The objective of this book is to compare the statistical properties of penalty and non-penalty estimation strategies for some popular models. Specifically, it considers full-model, submodel, penalty, pretest, and shrinkage estimation techniques for three regression models, before presenting the asymptotic properties of the non-penalty estimators and their asymptotic distributional efficiency comparisons. Further, the risk properties of the non-penalty and penalty estimators are explored through a Monte Carlo simulation study. Showcasing examples based on real datasets, the book will be useful for students and applied researchers in a host of fields. The book's level of presentation and style make it accessible to a broad audience. It offers clear, succinct expositions of each estimation strategy and, more importantly, clearly describes how to use each strategy for the problem at hand. The book and its individual chapters are largely self-contained, so anyone interested in a particular topic or area of application may read only that specific chapter. It is specially designed for graduate students who want to understand the foundations and concepts underlying penalty and non-penalty estimation and its applications, is well suited as a textbook for senior undergraduate and graduate courses surveying penalty and non-penalty estimation strategies, and can also serve as a reference for related subjects, including courses on meta-analysis. Professional statisticians will likewise find it a valuable reference work.
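As a generic sketch of how pretest and shrinkage strategies combine a full-model and a submodel estimate (argument names are hypothetical; this is not code from the book): the pretest rule switches between the two estimates based on a test of the restriction, while the Stein-type rule moves smoothly between them.

```python
import numpy as np
from scipy import stats

def pretest_estimator(full_fit, sub_fit, test_stat, df, alpha=0.05):
    """Keep the restricted (submodel) estimate when the restriction is not
    rejected; otherwise use the full-model estimate. `test_stat` is, e.g.,
    a Wald statistic with `df` degrees of freedom."""
    critical = stats.chi2.ppf(1 - alpha, df)
    return sub_fit if test_stat <= critical else full_fit

def shrinkage_estimator(full_fit, sub_fit, test_stat, df):
    """Stein-type shrinkage between the two estimates (one standard form;
    requires df >= 3). Large test statistics give weight to the full model."""
    w = 1.0 - (df - 2) / test_stat
    return sub_fit + w * (full_fit - sub_fit)
```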
In the last ten years, there has been increasing interest and activity in the general area of partially linear regression smoothing in statistics, and many methods and techniques have been proposed and studied. This monograph provides an up-to-date presentation of the state of the art in partially linear regression techniques. The emphasis is on methodology rather than theory, with a particular focus on applications of partially linear regression techniques to various statistical problems, including least squares regression, asymptotically efficient estimation, bootstrap resampling, censored data analysis, linear measurement error models, nonlinear measurement models, and nonlinear and nonparametric time series models.
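One standard route for the partially linear model y = Xβ + g(t) + ε is the double-residual (Robinson-type) estimator: remove the dependence on t by kernel smoothing, then apply least squares to the residuals. A minimal sketch under simple assumptions (scalar t, Gaussian kernel, fixed bandwidth; names are illustrative, not the monograph's code):

```python
import numpy as np

def nw_smooth(t, v, h):
    """Nadaraya-Watson smoother of v on a scalar covariate t (Gaussian kernel)."""
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)   # row-normalize the kernel weights
    return w @ v

def partially_linear_fit(X, t, y, h=0.1):
    """Double-residual estimator for y = X @ beta + g(t) + e:
    smooth y and each column of X on t, then run OLS on the residuals."""
    y_res = y - nw_smooth(t, y, h)
    X_res = X - nw_smooth(t, X, h)
    beta, *_ = np.linalg.lstsq(X_res, y_res, rcond=None)
    g_hat = nw_smooth(t, y - X @ beta, h)   # recover the nonparametric part
    return beta, g_hat
```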
The essential introduction to the theory and application of linear models—now in a valuable new edition. Since most advanced statistical tools are generalizations of the linear model, it is necessary to first master the linear model in order to move forward to more advanced concepts. The linear model remains the main tool of the applied statistician and is central to the training of any statistician regardless of whether the focus is applied or theoretical. This completely revised and updated new edition successfully develops the basic theory of linear models for regression, analysis of variance, analysis of covariance, and linear mixed models. Recent advances in the methodology related to linear mixed models, generalized linear models, and the Bayesian linear model are also addressed. Linear Models in Statistics, Second Edition includes full coverage of advanced topics, such as mixed and generalized linear models, Bayesian linear models, two-way models with empty cells, geometry of least squares, vector-matrix calculus, simultaneous inference, and logistic and nonlinear regression. Algebraic, geometrical, frequentist, and Bayesian approaches to both the inference of linear models and the analysis of variance are also illustrated. Through the expansion of relevant material and the inclusion of the latest technological developments in the field, this book provides readers with the theoretical foundation to correctly interpret computer software output as well as effectively use, customize, and understand linear models. This modern Second Edition features:
- New chapters on Bayesian linear models as well as random and mixed linear models
- Expanded discussion of two-way models with empty cells
- Additional sections on the geometry of least squares
- Updated coverage of simultaneous inference
The book is complemented with easy-to-read proofs, real data sets, and an extensive bibliography. A thorough review of the requisite matrix algebra has been added for transitional purposes, and numerous theoretical and applied problems have been incorporated, with selected answers provided at the end of the book. A related Web site includes additional data sets and SAS® code for all numerical examples. Linear Models in Statistics, Second Edition is a must-have book for courses in statistics, biostatistics, and mathematics at the upper-undergraduate and graduate levels. It is also an invaluable reference for researchers who need to gain a better understanding of regression and analysis of variance.