Emphasizing concepts rather than recipes, An Introduction to Statistical Inference and Its Applications with R provides a clear exposition of the methods of statistical inference for students who are comfortable with mathematical notation. Numerous examples, case studies, and exercises are included. R is used to simplify computation and create figures.
This classic textbook builds theoretical statistics from the first principles of probability theory. Starting from the basics of probability, the authors develop the theory of statistical inference using techniques, definitions, and concepts that are statistical in nature and that arise as natural extensions and consequences of previous concepts. It covers all topics from a standard inference course, including distributions, random variables, data reduction, point estimation, hypothesis testing, and interval estimation.

Features:
- The classic graduate-level textbook on statistical inference
- Develops elements of statistical theory from first principles of probability
- Written in a lucid style accessible to anyone with some background in calculus
- Covers all key topics of a standard course in inference
- Hundreds of examples throughout to aid understanding
- Each chapter includes an extensive set of graduated exercises

Statistical Inference, Second Edition is primarily aimed at graduate students of statistics, but it can also be used by advanced undergraduate statistics majors with a solid mathematics background. It stresses the more practical uses of statistical theory, concentrating on understanding basic statistical concepts and deriving reasonable statistical procedures rather than on formal optimality considerations. This is a reprint of the second edition originally published by Cengage Learning, Inc. in 2001.
An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance, marketing, and astrophysics over the past twenty years. The book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, deep learning, survival analysis, multiple testing, and more. Color graphics and real-world examples are used to illustrate the methods presented. The book is aimed at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. Four of the authors co-wrote An Introduction to Statistical Learning, With Applications in R (ISLR), which has become a mainstay of undergraduate and graduate classrooms worldwide, as well as an important reference book for data scientists. One of the keys to its success was that each chapter contains a tutorial on implementing the analyses and methods presented in the R scientific computing environment. However, in recent years Python has become a popular language for data science, and there has been increasing demand for a Python-based alternative to ISLR. Hence, this book (ISLP) covers the same materials as ISLR but with labs implemented in Python. These labs will be useful both for Python novices and for experienced users.
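For a sense of what a Python lab of this kind involves, here is a minimal sketch, not taken from the book itself: it fits and evaluates a linear regression with scikit-learn, where the synthetic data, variable names, and parameter values are illustrative assumptions (the book's own labs use real data sets and the ISLP package).

    # A minimal sketch of a Python lab step: fit and evaluate a linear
    # regression. The synthetic data below is an illustrative assumption.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))  # two predictors
    y = 3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LinearRegression().fit(X_train, y_train)

    print("coefficients:", model.coef_)    # should be near 1.5 and -2.0
    print("intercept:", model.intercept_)  # should be near 3.0
    print("test R^2:", model.score(X_test, y_test))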
Based on the authors' lecture notes, this text presents concise yet complete coverage of statistical inference theory, focusing on the fundamental classical principles. Unlike related textbooks, it combines the theoretical basis of statistical inference with a useful applied toolbox that includes linear models. Suitable for a second-semester undergraduate course on statistical inference, the text offers proofs to support the mathematics and does not require any use of measure theory. It illustrates core concepts using cartoons and provides solutions to all examples and problems.
A Balanced Treatment of Bayesian and Frequentist Inference

Statistical Inference: An Integrated Approach, Second Edition presents an account of the Bayesian and frequentist approaches to statistical inference. Now with an additional author, this second edition places a more balanced emphasis on both perspectives than the first edition.

New to the Second Edition:
- New material on empirical Bayes and penalized likelihoods and their impact on regression models
- Expanded material on hypothesis testing, method of moments, bias correction, and hierarchical models
- More examples and exercises
- More comparison between the approaches, including their similarities and differences

Designed for advanced undergraduate and graduate courses, the text thoroughly covers statistical inference without delving too deeply into technical details. It compares the Bayesian and frequentist schools of thought and explores procedures that lie on the border between the two. Many examples illustrate the methods and models, and exercises are included at the end of each chapter.
Taken literally, the title "All of Statistics" is an exaggeration. But in spirit, the title is apt, as the book does cover a much broader range of topics than a typical introductory book on mathematical statistics. This book is for people who want to learn probability and statistics quickly. It is suitable for graduate or advanced undergraduate students in computer science, mathematics, statistics, and related disciplines. The book includes modern topics like non-parametric curve estimation, bootstrapping, and classification, topics that are usually relegated to follow-up courses. The reader is presumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. Statistics, data mining, and machine learning are all concerned with collecting and analysing data.
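As a taste of one of those modern topics, the following is a minimal sketch of the nonparametric bootstrap in Python; it is not drawn from the book, and the sample data and number of replicates are illustrative assumptions.

    # A minimal sketch of the nonparametric bootstrap: estimate the
    # standard error of the sample median by resampling with replacement.
    # The data and replicate count below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.exponential(scale=2.0, size=100)  # stand-in for a real sample

    B = 2000  # number of bootstrap replicates
    medians = np.array([
        np.median(rng.choice(data, size=data.size, replace=True))
        for _ in range(B)
    ])
    print("bootstrap SE of the median:", medians.std(ddof=1))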
The twenty-first century has seen a breathtaking expansion of statistical methodology, both in scope and in influence. 'Big data', 'data science', and 'machine learning' have become familiar terms in the news, as statistical methods are brought to bear upon the enormous data sets of modern science and commerce. How did we get here? And where are we going? This book takes us on an exhilarating journey through the revolution in data analysis following the introduction of electronic computation in the 1950s. Beginning with classical inferential theories - Bayesian, frequentist, Fisherian - individual chapters take up a series of influential topics: survival analysis, logistic regression, empirical Bayes, the jackknife and bootstrap, random forests, neural networks, Markov chain Monte Carlo, inference after model selection, and dozens more. The distinctly modern approach integrates methodology and algorithms with statistical inference. The book ends with speculation on the future direction of statistics and data science.
This book is based upon lecture notes developed by Jack Kiefer for a course in statistical inference he taught at Cornell University. The notes were distributed to the class in lieu of a textbook, and the problems were used for homework assignments. Relying only on modest prerequisites of probability theory and calculus, Kiefer's approach to a first course in statistics is to present the central ideas of the modern mathematical theory with a minimum of fuss and formality. He is able to do this by using a rich mixture of examples, pictures, and mathematical derivations to complement a clear and logical discussion of the important ideas in plain English. The straightforwardness of Kiefer's presentation is remarkable in view of the sophistication and depth of his examination of the major theme: How should an intelligent person formulate a statistical problem and choose a statistical procedure to apply to it? Kiefer's view, in the same spirit as Neyman and Wald, is that one should try to assess the consequences of a statistical choice in some quantitative (frequentist) formulation and ought to choose a course of action that is verifiably optimal (or nearly so), without regard to the perceived "attractiveness" of certain dogmas and methods.
The study of spatial processes and their applications is an important topic in statistics that finds wide application, particularly in computer vision and image processing. This book is devoted to statistical inference in spatial statistics and is intended for specialists needing an introduction to the subject and to its applications. One theme of the book is the demonstration of how these techniques give new insights into classical procedures (including new examples in likelihood theory) and into newer statistical paradigms such as Monte Carlo inference and pseudo-likelihood. Professor Ripley also stresses the importance of edge effects and of the lack of a unique asymptotic setting in spatial problems. Throughout, the author discusses the foundational issues posed and the difficulties, both computational and philosophical, that arise. The final chapters consider image restoration and segmentation methods and the averaging and summarising of images. Thus, the book will appeal widely to researchers in computer vision and image processing, to those applying microscopy in biology, geology, and materials science, and to statisticians interested in the foundations of their discipline.