A wide range of topics and perspectives in the field of statistics are brought together in this volume. The contributions originate from invited papers presented at an international conference which was held in honour of C. Radhakrishna Rao, one of the most eminent statisticians of our time and a distinguished scientist.
This textbook provides a comprehensive introduction to statistical principles, concepts and methods that are essential in modern statistics and data science. The topics covered include likelihood-based inference, Bayesian statistics, regression, statistical tests and the quantification of uncertainty. Moreover, the book addresses statistical ideas that are useful in modern data analytics, including bootstrapping, modeling of multivariate distributions, missing data analysis, causality as well as principles of experimental design. The textbook includes sufficient material for a two-semester course and is intended for master’s students in data science, statistics and computer science with a rudimentary grasp of probability theory. It will also be useful for data science practitioners who want to strengthen their statistics skills.
This classic textbook builds theoretical statistics from the first principles of probability theory. Starting from the basics of probability, the authors develop the theory of statistical inference using techniques, definitions, and concepts that are statistical as well as natural extensions and consequences of previous concepts. It covers all topics from a standard inference course, including distributions, random variables, data reduction, point estimation, hypothesis testing, and interval estimation.

Features:
- The classic graduate-level textbook on statistical inference
- Develops elements of statistical theory from first principles of probability
- Written in a lucid style accessible to anyone with some background in calculus
- Covers all key topics of a standard course in inference
- Hundreds of examples throughout to aid understanding
- Each chapter includes an extensive set of graduated exercises

Statistical Inference, Second Edition is primarily aimed at graduate students of statistics, but can also be used by advanced undergraduate students majoring in statistics who have a solid mathematics background. The book stresses the practical uses of statistical theory: it is more concerned with understanding basic statistical concepts and deriving reasonable statistical procedures than with formal optimality considerations. This is a reprint of the second edition originally published by Cengage Learning, Inc. in 2001.
Now in its second edition, this introductory statistics textbook conveys the essential concepts and tools needed to develop and nurture statistical thinking. It presents descriptive, inductive and explorative statistical methods and guides the reader through the process of quantitative data analysis. This revised and extended edition features new chapters on logistic regression; simple random sampling, including bootstrapping; and causal inference. The text is primarily intended for undergraduate students in disciplines such as business administration, the social sciences, medicine, politics, and macroeconomics. It features a wealth of examples, exercises and solutions with computer code in the statistical programming language R, as well as supplementary material that will enable the reader to quickly adapt the methods to their own applications.
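The bootstrap mentioned above resamples the observed data with replacement to approximate the sampling distribution of a statistic. As a minimal sketch (in Python rather than the book's R, with made-up data and a hypothetical `bootstrap_se` helper), one can estimate the standard error of the sample median like this:

```python
import random
import statistics

def bootstrap_se(data, stat, n_resamples=2000, seed=0):
    """Estimate the standard error of `stat` by resampling with replacement."""
    rng = random.Random(seed)
    n = len(data)
    replicates = []
    for _ in range(n_resamples):
        # Draw a resample of the same size as the original data.
        sample = [data[rng.randrange(n)] for _ in range(n)]
        replicates.append(stat(sample))
    # The spread of the replicates estimates the standard error.
    return statistics.stdev(replicates)

# Hypothetical data: standard error of the sample median.
data = [2.1, 3.4, 1.9, 5.0, 4.2, 3.3, 2.8, 4.7, 3.9, 2.5]
se = bootstrap_se(data, statistics.median)
```

The same resampling loop works for any plug-in statistic (mean, correlation, regression coefficient), which is why the bootstrap appears in several of the books described here.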
Mounting failures of replication in social and biological sciences give a new urgency to critically appraising proposed reforms. This book pulls back the cover on disagreements between experts charged with restoring integrity to science. It denies two pervasive views of the role of probability in inference: to assign degrees of belief, and to control error rates in a long run. If statistical consumers are unaware of assumptions behind rival evidence reforms, they can't scrutinize the consequences that affect them (in personalized medicine, psychology, etc.). The book sets sail with a simple tool: if little has been done to rule out flaws in inferring a claim, then it has not passed a severe test. Many methods advocated by data experts do not stand up to severe scrutiny and are in tension with successful strategies for blocking or accounting for cherry picking and selective reporting. Through a series of excursions and exhibits, the philosophy and history of inductive inference come alive. Philosophical tools are put to work to solve problems about science and pseudoscience, induction and falsification.
This book presents recently developed statistical methods and theory required for the application of the tools of functional data analysis to problems arising in geosciences, finance, economics and biology. It is concerned with inference based on second order statistics, especially those related to functional principal component analysis. While it covers inference for independent and identically distributed functional data, its distinguishing feature is an in-depth coverage of dependent functional data structures, including functional time series and spatially indexed functions. Specific inferential problems studied include two-sample inference, change-point analysis, tests for dependence in data and in model residuals, and functional prediction. All procedures are described algorithmically, illustrated on simulated and real data sets, and supported by a complete asymptotic theory. The book can be read at two levels. Readers interested primarily in methodology will find detailed descriptions of the methods and examples of their application. Researchers interested also in mathematical foundations will find carefully developed theory. The organization of the chapters makes it easy for the reader to choose an appropriate focus. The book introduces the requisite, and frequently used, Hilbert space formalism in a systematic manner. This will be useful to graduate or advanced undergraduate students seeking a self-contained introduction to the subject. Advanced researchers will find novel asymptotic arguments.
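In practice, functional principal component analysis is often computed from curves observed on a common grid: center the curves and take the leading eigenfunctions of the sample covariance operator. The sketch below (a simplified illustration, not the book's procedure; the `fpca` helper and the sinusoid data are hypothetical) does this via an SVD of the centered data matrix:

```python
import numpy as np

def fpca(curves, n_components=2):
    """Functional PCA on curves observed on a common grid.

    curves: (n_curves, n_gridpoints) array; each row is one function
    sampled at the same grid points. Returns the mean function, the
    leading eigenfunctions (rows), and the per-curve scores.
    """
    mean = curves.mean(axis=0)
    centered = curves - mean
    # The right singular vectors of the centered data matrix are the
    # eigenvectors of the sample covariance, i.e. discretized
    # eigenfunctions of the covariance operator.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfunctions = vt[:n_components]
    scores = centered @ eigenfunctions.T
    return mean, eigenfunctions, scores

# Hypothetical example: 30 noisy sinusoids on a grid of 50 points.
rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
curves = np.array([np.sin(2 * np.pi * grid) * rng.normal(1, 0.2)
                   + rng.normal(0, 0.05, grid.size) for _ in range(30)])
mean, phi, scores = fpca(curves, n_components=2)
```

Each curve is then approximated by the mean plus a small number of score-weighted eigenfunctions, which is the dimension reduction underlying the second-order inference the book develops.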
Taken literally, the title "All of Statistics" is an exaggeration. But in spirit, the title is apt, as the book does cover a much broader range of topics than a typical introductory book on mathematical statistics. This book is for people who want to learn probability and statistics quickly. It is suitable for graduate or advanced undergraduate students in computer science, mathematics, statistics, and related disciplines. The book includes modern topics like non-parametric curve estimation, bootstrapping, and classification, topics that are usually relegated to follow-up courses. The reader is presumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. Statistics, data mining, and machine learning are all concerned with collecting and analysing data.
A text for the non-majors introductory statistics service course. The chapters, including the Web site material, can be organized into one- or two-semester sequences; algebra is the mathematics prerequisite. Web site chapters on quality control and time series, plus business applications appearing regularly throughout the work, make it suitable for business statistics courses on some campuses. The text combines lucid, statistically engaging exposition; graphic, aptly applied examples; and realistic exercise settings to take students past the mechanics of introductory-level statistical techniques and into the realm of practical data analysis and inference-based problem solving.
Now available in paperback, this book covers some recent developments in statistical inference. It provides methods applicable in problems involving nuisance parameters such as those encountered in comparing two exponential distributions or in ANOVA without the assumption of equal error variances. The generalized procedures are shown to be more powerful in detecting significant experimental results and in avoiding misleading conclusions.