This textbook offers an accessible and comprehensive overview of statistical estimation and inference that reflects current trends in statistical research. It draws on three main themes throughout: finite-sample theory, asymptotic theory, and Bayesian statistics. The authors include a chapter on estimating equations as a means of unifying a range of useful methodologies, including generalized linear models, generalized estimating equations, quasi-likelihood estimation, and conditional inference. They also use a standardized set of assumptions and tools throughout, imposing regularity conditions, which results in a more coherent and cohesive volume. Written for a graduate-level audience, this text can be used in a one- or two-semester course.
This book offers a brief course in statistical inference that requires only a basic familiarity with probability and with matrix and linear algebra. Ninety problems with solutions make it an ideal choice for self-study, as well as a helpful review of a wide-ranging topic with important uses for professionals in business, government, public administration, and other fields. 2011 edition.
A concise, easily accessible introduction to descriptive and inferential techniques. Statistical Inference: A Short Course offers a presentation of the essentials of basic statistics for readers seeking to acquire a working knowledge of statistical concepts, measures, and procedures. The author presents tests of the assumptions of randomness and normality and provides nonparametric methods for situations where parametric approaches might not work. The book also explores how to determine a confidence interval for a population median, and provides coverage of ratio estimation, randomness, and causality. To ensure a thorough understanding of all key concepts, Statistical Inference provides numerous examples and solutions along with complete and precise answers to many fundamental questions, including: How do we determine that a given dataset is actually a random sample? With what level of precision and reliability can a population parameter be estimated from a sample? How are probabilities determined, and are they the same thing as odds? How can we predict the level of one variable from that of another? What is the strength of the relationship between two variables? The book is organized to present fundamental statistical concepts first, with later chapters exploring more advanced topics and additional statistical tests, such as tests of distributional hypotheses, multinomial chi-square statistics, and the chi-square distribution. Each chapter includes appendices and exercises, allowing readers to test their comprehension of the presented material. Statistical Inference: A Short Course is an excellent book for courses on probability, mathematical statistics, and statistical inference at the upper-undergraduate and graduate levels. It also serves as a valuable reference for researchers and practitioners who would like to develop further insight into essential statistical tools.
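As a concrete illustration of one topic the book raises, the sketch below shows the classical distribution-free confidence interval for a population median, built from order statistics and the Binomial(n, 1/2) distribution. This is a minimal sketch, not material from the book; the function name median_ci and the simulated exponential data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def median_ci(x, conf=0.95):
    """Distribution-free CI for the population median via order statistics.

    The interval [X_(k+1), X_(n-k)] (1-based order statistics) has coverage
    at least `conf` whenever 2 * P(Binomial(n, 1/2) <= k) <= 1 - conf.
    """
    x = np.sort(np.asarray(x))
    n = x.size
    alpha = 1 - conf
    # Largest k whose binomial tail stays within alpha/2; ppf returns the
    # smallest k with CDF >= alpha/2, so step back if it overshoots.
    k = int(stats.binom.ppf(alpha / 2, n, 0.5))
    if stats.binom.cdf(k, n, 0.5) > alpha / 2:
        k -= 1
    k = max(k, 0)  # for very small n this falls back to the widest interval
    return x[k], x[n - 1 - k]  # x[k] is the (k+1)-th order statistic

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=50)  # skewed data; true median is 2*ln 2
print(median_ci(sample))
```

For n = 50 at 95% confidence this reproduces the standard tabulated interval [X_(18), X_(33)], and no normality assumption is needed.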
Introductory Statistical Inference develops the concepts and intricacies of statistical inference. Beginning with a review of probability concepts, the book discusses topics such as sufficiency, ancillarity, point estimation, minimum variance estimation, confidence intervals, multiple comparisons, and large-sample inference. It introduces two-stage sampling, fitting a straight line to data, tests of hypotheses, nonparametric methods, and the bootstrap method, and it features worked examples of statistical principles as well as exercises with hints. This text is suitable for courses in probability and statistical inference at the upper-undergraduate and graduate levels.
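To make the bootstrap mentioned above concrete, here is a minimal sketch of the percentile bootstrap, one standard variant (the book's own treatment may differ in details); the function name bootstrap_ci and the simulated lognormal data are illustrative assumptions.

```python
import numpy as np

def bootstrap_ci(x, stat=np.median, n_boot=5000, conf=0.95, seed=0):
    """Percentile bootstrap CI: resample the data with replacement,
    recompute the statistic on each resample, and take quantiles of
    the resulting replicates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    reps = np.array([stat(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    alpha = 1 - conf
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=1.0, size=100)  # skewed sample
print(bootstrap_ci(data))  # 95% interval for the sample median
```

The percentile method is the simplest variant; refinements such as the BCa interval additionally correct for bias and skewness in the replicate distribution.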
Taken literally, the title "All of Statistics" is an exaggeration. But in spirit the title is apt, as the book covers a much broader range of topics than a typical introductory book on mathematical statistics. Statistics, data mining, and machine learning are all concerned with collecting and analyzing data, and this book is for people who want to learn probability and statistics quickly. It is suitable for graduate or advanced undergraduate students in computer science, mathematics, statistics, and related disciplines. The book includes modern topics like nonparametric curve estimation, bootstrapping, and classification, topics that are usually relegated to follow-up courses. The reader is presumed to know calculus and a little linear algebra; no previous knowledge of probability and statistics is required.
This classic textbook builds theoretical statistics from the first principles of probability theory. Starting from the basics of probability, the authors develop the theory of statistical inference using techniques, definitions, and concepts that are statistical and are natural extensions and consequences of previous concepts. It covers all topics from a standard inference course, including distributions, random variables, data reduction, point estimation, hypothesis testing, and interval estimation. Features: the classic graduate-level textbook on statistical inference; develops elements of statistical theory from first principles of probability; written in a lucid style accessible to anyone with some background in calculus; covers all key topics of a standard course in inference; hundreds of examples throughout to aid understanding; an extensive set of graduated exercises in each chapter. Statistical Inference, Second Edition is primarily aimed at graduate students of statistics, but can be used by advanced undergraduate students majoring in statistics who have a solid mathematics background. It also stresses the more practical uses of statistical theory, being more concerned with understanding basic statistical concepts and deriving reasonable statistical procedures than with formal optimality considerations. This is a reprint of the second edition originally published by Cengage Learning, Inc. in 2001.
Based on the authors' lecture notes, this text presents concise yet complete coverage of statistical inference theory, focusing on the fundamental classical principles. Unlike related textbooks, it combines the theoretical basis of statistical inference with a useful applied toolbox that includes linear models. Suitable for a second-semester undergraduate course on statistical inference, the text offers proofs to support the mathematics without requiring any use of measure theory, illustrates core concepts using cartoons, and provides solutions to all examples and problems.
Theory of Statistical Inference is designed as a reference on statistical inference for researchers and students at the graduate or advanced undergraduate level. It presents a unified treatment of the foundational ideas of modern statistical inference, and would be suitable for a core course in a graduate program in statistics or biostatistics. The emphasis is on the application of mathematical theory to the problem of inference, leading to an optimization theory that allows the choice of the statistical methods yielding the most efficient use of data. The book shows how a small number of key concepts, such as sufficiency, invariance, stochastic ordering, decision theory, and vector space algebra, play a recurring and unifying role. The volume can be divided into four parts. Part I provides a review of the required distribution theory. Part II introduces the problem of statistical inference, including the definitions of the exponential family and of invariant and Bayesian models; basic concepts of estimation, confidence intervals, and hypothesis testing are introduced here. Part III constitutes the core of the volume, presenting a formal theory of statistical inference. Beginning with decision theory, this section then covers uniformly minimum variance unbiased (UMVU) estimation, minimum risk equivariant (MRE) estimation, and the Neyman-Pearson test. Finally, Part IV introduces large-sample theory. This section begins with stochastic limit theorems, the δ-method, the Bahadur representation theorem for sample quantiles, large-sample U-estimation, the Cramér-Rao lower bound, and asymptotic efficiency. A separate chapter is then devoted to estimating equation methods. The volume ends with a detailed development of large-sample hypothesis testing, based on the likelihood ratio test (LRT), the Rao score test, and the Wald test. Notable features include treatment of linear and nonlinear regression models, ANOVA models, generalized linear models (GLM), and generalized estimating equations (GEE). An introduction to decision theory (including risk, admissibility, classification, Bayes and minimax decision rules) is presented, and the importance of this sometimes overlooked topic to statistical methodology is emphasized. The volume stresses throughout the important role that group theory and invariance can play in statistical inference. Nonparametric (rank-based) methods are derived by the same principles used for parametric models and are therefore presented as solutions to well-defined mathematical problems, rather than as robust heuristic alternatives to parametric methods. Each chapter ends with a set of theoretical and applied exercises integrated with the main text; problems involving R programming are included. Appendices summarize the necessary background in analysis, matrix algebra, and group theory.
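As one small, concrete instance of the large-sample testing theory described above, here is a minimal sketch of a likelihood ratio test for the rate of an exponential model, using Wilks' chi-square approximation. It is written in Python rather than R for self-containedness; the function name exp_lrt and the simulated data are illustrative assumptions, not material from the book.

```python
import numpy as np
from scipy import stats

def exp_lrt(x, lam0):
    """Likelihood ratio test of H0: lambda = lam0 for i.i.d.
    Exponential(lambda) data. By Wilks' theorem, under H0 the statistic
    2*(loglik(MLE) - loglik(lam0)) is asymptotically chi-square, df = 1."""
    x = np.asarray(x)
    n, s = x.size, x.sum()
    lam_hat = n / s                        # MLE of the rate parameter
    loglik = lambda lam: n * np.log(lam) - lam * s
    lrt = 2 * (loglik(lam_hat) - loglik(lam0))
    return lrt, stats.chi2.sf(lrt, df=1)   # statistic and asymptotic p-value

rng = np.random.default_rng(2)
x = rng.exponential(scale=1 / 1.5, size=200)  # true rate is 1.5
print(exp_lrt(x, lam0=1.0))                   # typically rejects H0: lambda = 1
```

The Rao score and Wald tests mentioned above are asymptotically equivalent to this statistic under the null hypothesis.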
A balanced treatment of Bayesian and frequentist inference. Statistical Inference: An Integrated Approach, Second Edition presents an account of the Bayesian and frequentist approaches to statistical inference. Now with an additional author, this second edition places a more balanced emphasis on both perspectives than the first edition. New to the second edition: new material on empirical Bayes and penalized likelihoods and their impact on regression models; expanded material on hypothesis testing, the method of moments, bias correction, and hierarchical models; more examples and exercises; and more comparison between the approaches, including their similarities and differences. Designed for advanced undergraduate and graduate courses, the text thoroughly covers statistical inference without delving too deeply into technical details. It compares the Bayesian and frequentist schools of thought and explores procedures that lie on the border between the two. Many examples illustrate the methods and models, and exercises are included at the end of each chapter.
This book offers a detailed history of parametric statistical inference. Covering the period between James Bernoulli and R.A. Fisher, it examines binomial statistical inference; statistical inference by inverse probability; the central limit theorem and linear minimum variance estimation by Laplace and Gauss; error theory, skew distributions, correlation, and sampling distributions; and the Fisherian Revolution. Lively biographical sketches of many of the main characters are featured throughout, including Laplace, Gauss, Edgeworth, Fisher, and Karl Pearson. Also examined are the roles played by de Moivre, James Bernoulli, and Lagrange.