This book is a collection of essays on the foundations of Statistical Inference. The sequence in which the essays have been arranged makes it possible to read the book as a single contemporary discourse on the likelihood principle, the paradoxes that attend its violation, and the radical deviation from classical statistical practices that its adoption would entail. The book can also be read, with the aid of the notes, as a chronicle of the development of Basu's ideas.
Interpreting statistical data as evidence, Statistical Evidence: A Likelihood Paradigm focuses on the law of likelihood, fundamental to solving many of the problems associated with interpreting data in this way. Statistics has long neglected this principle, resulting in a seriously defective methodology. This book redresses the balance, explaining why science has continued to rely on this methodology despite its well-known defects. After examining the strengths and weaknesses of the work of Neyman and Pearson and the Fisher paradigm, the author proposes an alternative paradigm which provides, in the law of likelihood, the explicit concept of evidence missing from the other paradigms. At the same time, this new paradigm retains the elements of objective measurement and control of the frequency of misleading results, features which made the old paradigms so important to science. The likelihood paradigm leads to statistical methods that have a compelling rationale and an elegant simplicity, no longer forcing the reader to choose between frequentist and Bayesian statistics.
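For concreteness, the law of likelihood referred to above can be stated in its standard form (a generic statement, not a quotation from the book): observed data support one hypothesis over another exactly when the likelihood ratio exceeds one, and the size of that ratio measures the strength of the evidence.

```latex
% Law of likelihood (standard form; notation is generic, not the book's):
% data x favour hypothesis H1 over H2 when the likelihood ratio exceeds 1,
% and the size of the ratio measures the strength of the evidence.
\[
  x \text{ supports } H_1 \text{ over } H_2
  \iff
  \frac{L(H_1; x)}{L(H_2; x)} = \frac{P(x \mid H_1)}{P(x \mid H_2)} > 1 .
\]
```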
This book presents a unified approach to parametric estimation, confidence intervals, hypothesis testing, and statistical modeling, all based on the likelihood function. Addressed to upper-undergraduate and first-year graduate students in mathematical statistics, it ties the chapters on estimation, confidence intervals, hypothesis testing, and statistical models together through a unifying focus on the likelihood function, and emphasizes the important ideas in statistical modeling, such as sufficiency, exponential family distributions, and large-sample properties. Mathematical Statistics: An Introduction to Likelihood Based Inference makes advanced topics accessible and understandable, and covers many topics in more depth than typical mathematical statistics textbooks. It includes numerous examples, case studies, a large number of exercises ranging from drill-and-skill to extremely difficult problems, and many of the important theorems of mathematical statistics along with their proofs. In addition to the connected chapters mentioned above, the book covers likelihood-based estimation, with emphasis on multidimensional parameter spaces and range-dependent support. It also includes a chapter on confidence intervals, which contains examples of exact confidence intervals along with the standard large-sample confidence intervals based on the MLE and bootstrap confidence intervals. There’s also a chapter on parametric statistical models featuring sections on non-iid observations, linear regression, logistic regression, Poisson regression, and linear models.
* Prepares students with the tools needed to be successful in their future work in statistics and data science
* Includes practical case studies with real-life data collected from Yellowstone National Park, the Donner party, and the Titanic voyage
* Emphasizes the important ideas in statistical modeling, such as sufficiency, exponential family distributions, and large-sample properties
* Includes sections on Bayesian estimation and credible intervals
* Features examples, problems, and solutions
Mathematical Statistics: An Introduction to Likelihood Based Inference is an ideal textbook for upper-undergraduate and graduate courses in probability, mathematical statistics, and/or statistical inference.
This book provides a unified introduction to a variety of computational algorithms for likelihood and Bayesian inference. In this second edition, I have attempted to expand the treatment of many of the techniques discussed, as well as include important topics such as the Metropolis algorithm and methods for assessing the convergence of a Markov chain algorithm. Prerequisites for this book include an understanding of mathematical statistics at the level of Bickel and Doksum (1977), some understanding of the Bayesian approach as in Box and Tiao (1973), experience with conditional inference at the level of Cox and Snell (1989), and exposure to statistical models as found in McCullagh and Nelder (1989). I have chosen not to present the proofs of convergence or rates of convergence, since these proofs may require substantial background in Markov chain theory which is beyond the scope of this book. However, references to these proofs are given. There has been an explosion of papers in the area of Markov chain Monte Carlo in the last five years. I have attempted to identify key references, though due to the volatility of the field some work may have been missed.
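As a concrete illustration of the Metropolis algorithm mentioned above, here is a minimal random-walk sketch in Python; it is a generic textbook version, not code from the book, and the target density and proposal scale are illustrative assumptions.

```python
import numpy as np

def metropolis(log_target, x0, n_iter=10_000, proposal_sd=1.0, seed=None):
    """Minimal random-walk Metropolis sampler for a one-dimensional target.

    log_target: function returning the log density (up to an additive constant).
    """
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        proposal = x + proposal_sd * rng.standard_normal()
        # Accept with probability min(1, target(proposal) / target(x)),
        # evaluated on the log scale for numerical stability.
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Illustrative target: a standard normal log-density (up to a constant).
draws = metropolis(lambda x: -0.5 * x**2, x0=0.0, seed=1)
```

Assessing whether such a chain has converged to its target distribution is the kind of question the convergence-diagnostic methods mentioned above are meant to address.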
It is an honor to be asked to write a foreword to this book, for I believe that it and other books to follow will eventually lead to a dramatic change in the current statistics curriculum in our universities. I spent the 1975-76 academic year at Florida State University in Tallahassee. My purpose was to complete a book on Statistical Reliability Theory with Frank Proschan. At the time, I was working on total time on test processes. At the same time, I started attending lectures by Dev Basu on statistical inference. It was Lehmann's hypothesis testing course, and Lehmann's book was the text. However, I noticed something strange: Basu never opened the book. He was obviously not following it. Instead, he was giving a very elegant, measure-theoretic treatment of the concepts of sufficiency, ancillarity, and invariance. He was interested in the concept of information: what it meant, and how it fitted in with contemporary statistics. As he looked at the fundamental ideas, the logic behind their use seemed to evaporate. I was shocked. I didn't like priors. I didn't like Bayesian statistics. But after the smoke had cleared, that was all that was left. Basu loves counterexamples. He is like an art critic in the field of statistical inference. He would find a counterexample to the Bayesian approach if he could. So far, he has failed in this respect.
This richly illustrated textbook covers modern statistical methods with applications in medicine, epidemiology and biology. The first part of the book discusses the importance of statistical models in applied quantitative research and the central role of the likelihood function, describing likelihood-based inference from a frequentist viewpoint and exploring the properties of the maximum likelihood estimate, the score function, the likelihood ratio and the Wald statistic. In the second part of the book, likelihood is combined with prior information to perform Bayesian inference. Topics include Bayesian updating, conjugate and reference priors, Bayesian point and interval estimates, Bayesian asymptotics and empirical Bayes methods. It includes a separate chapter on modern numerical techniques for Bayesian inference, and also addresses advanced topics, such as model choice and prediction from frequentist and Bayesian perspectives. This revised edition of the book “Applied Statistical Inference” has been expanded to include new material on Markov models for time series analysis. It also features a comprehensive appendix covering the prerequisites in probability theory, matrix algebra, mathematical calculus, and numerical analysis, and each chapter is complemented by exercises. The text is primarily intended for graduate statistics and biostatistics students with an interest in applications.
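For reference, the frequentist quantities named in the first part are standard; in generic notation (which may differ from the book's), for a scalar parameter with log-likelihood and maximum likelihood estimate they are:

```latex
% Standard definitions (generic notation, not necessarily the book's):
% score function, observed Fisher information, and the Wald and
% likelihood ratio statistics for testing H0: theta = theta_0.
\[
  S(\theta) = \frac{d}{d\theta}\,\ell(\theta), \qquad
  I(\theta) = -\frac{d^2}{d\theta^2}\,\ell(\theta),
\]
\[
  W = (\hat\theta - \theta_0)^2\, I(\hat\theta), \qquad
  \Lambda = 2\bigl\{\ell(\hat\theta) - \ell(\theta_0)\bigr\},
\]
% both approximately chi-squared with one degree of freedom in regular models.
```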
Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources.
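To make the construction concrete, the standard empirical likelihood ratio for a mean (stated here from standard accounts, not quoted from the book) profiles multinomial weights placed on the observed sample:

```latex
% Empirical likelihood ratio for the mean of i.i.d. observations x_1,...,x_n.
% Weights w_i place a multinomial distribution on the sample; the unconstrained
% maximum of prod(n w_i) is 1, attained at w_i = 1/n.
\[
  R(\mu) = \max\Bigl\{ \prod_{i=1}^{n} n w_i \;:\;
    \sum_{i=1}^{n} w_i x_i = \mu,\;\; w_i \ge 0,\;\; \sum_{i=1}^{n} w_i = 1 \Bigr\},
\]
% and the set {mu : R(mu) >= r_0} is a confidence region whose shape is
% determined by the data rather than by a parametric model.
```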
This broad text provides a complete overview of most standard statistical methods, including multiple regression, analysis of variance, experimental design, and sampling techniques. Assuming a background of only two years of high school algebra, this book teaches intelligent data analysis and covers the principles of good data collection.
* Provides a complete discussion of analysis of data, including estimation, diagnostics, and remedial actions
* Examples contain graphical illustrations for ease of interpretation
* Intended for use with almost any statistical software
* Examples are worked to a logical conclusion, including interpretation of results
* A complete Instructor's Manual is available to adopters
Taken literally, the title "All of Statistics" is an exaggeration. But in spirit, the title is apt, as the book does cover a much broader range of topics than a typical introductory book on mathematical statistics. This book is for people who want to learn probability and statistics quickly. It is suitable for graduate or advanced undergraduate students in computer science, mathematics, statistics, and related disciplines. The book includes modern topics like non-parametric curve estimation, bootstrapping, and classification, topics that are usually relegated to follow-up courses. The reader is presumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. Statistics, data mining, and machine learning are all concerned with collecting and analysing data.