The papers in this volume represent the most timely and advanced contributions to the 2014 Joint Applied Statistics Symposium of the International Chinese Statistical Association (ICSA) and the Korean International Statistical Society (KISS), held in Portland, Oregon. The contributions cover new developments in statistical modeling and clinical research, including model development, model checking, and innovative clinical trial design and analysis. Each paper was peer-reviewed by at least two referees and an editor. The conference was attended by over 400 participants from academia, industry, and government agencies in North America, Asia, and Europe. It offered three keynote speeches, seven short courses, 76 parallel scientific sessions, student paper sessions, and social events.
Mounting failures of replication in social and biological sciences give a new urgency to critically appraising proposed reforms. This book pulls back the cover on disagreements between experts charged with restoring integrity to science. It denies two pervasive views of the role of probability in inference: to assign degrees of belief, and to control error rates in a long run. If statistical consumers are unaware of assumptions behind rival evidence reforms, they can't scrutinize the consequences that affect them (in personalized medicine, psychology, etc.). The book sets sail with a simple tool: if little has been done to rule out flaws in inferring a claim, then it has not passed a severe test. Many methods advocated by data experts do not stand up to severe scrutiny and are in tension with successful strategies for blocking or accounting for cherry picking and selective reporting. Through a series of excursions and exhibits, the philosophy and history of inductive inference come alive. Philosophical tools are put to work to solve problems about science and pseudoscience, induction and falsification.
This concise yet thorough book, enhanced with simulations and graphs to build the intuition of readers, serves as a comprehensive treatment of the fundamentals of probability and statistical inference. Models for Probability and Statistical Inference was written over a five-year period. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping. Ideal as a textbook for a two-semester sequence on probability and statistical inference, the book's early chapters cover probability, with discussions of: discrete models and random variables; discrete distributions including binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses modes of convergence of sequences of random variables, with special attention to convergence in distribution. The second half of the book addresses statistical inference, beginning with a discussion of point estimation, followed by coverage of consistency and confidence intervals. Further areas of exploration include: distributions defined in terms of the multivariate normal, chi-square, t, and F (central and non-central); the one- and two-sample Wilcoxon tests, together with methods of estimation based on both; linear models with a linear space-projection approach; and logistic regression. Each section contains a set of problems ranging in difficulty from simple to more complex, and selected answers as well as proofs of almost all statements are provided. An abundance of figures, in addition to helpful simulations and graphs produced by the statistical package S-PLUS®, is included to help build the intuition of readers.
There have been major developments in the field of statistics over the last quarter century, spurred by the rapid advances in computing and data-measurement technologies. These developments have revolutionized the field and have greatly influenced research directions in theory and methodology. Increased computing power has spawned entirely new areas of research in computationally intensive methods, allowing us to move away from narrowly applicable parametric techniques based on restrictive assumptions to much more flexible and realistic models and methods. These computational advances have also led to the extensive use of simulation and Monte Carlo techniques in statistical inference. All of these developments have, in turn, stimulated new research in theoretical statistics. This volume provides an up-to-date overview of recent advances in statistical modeling and inference. Written by renowned researchers from across the world, it discusses flexible models, semi-parametric methods and transformation models, nonparametric regression and mixture models, survival and reliability analysis, and re-sampling techniques. With its coverage of methodology and theory as well as applications, the book is an essential reference for researchers, graduate students, and practitioners.
The past decades have transformed the world of statistical data analysis, with new methods, new types of data, and new computational tools. Modern Statistics with R introduces you to key parts of this modern statistical toolkit. It teaches you:

- Data wrangling: importing, formatting, reshaping, merging, and filtering data in R.
- Exploratory data analysis: using visualisations and multivariate techniques to explore datasets.
- Statistical inference: modern methods for testing hypotheses and computing confidence intervals.
- Predictive modelling: regression models and machine learning methods for prediction, classification, and forecasting.
- Simulation: using simulation techniques for sample size computations and evaluations of statistical methods.
- Ethics in statistics: ethical issues and good statistical practice.
- R programming: writing code that is fast, readable, and (hopefully!) free from bugs.

No prior programming experience is necessary. Clear explanations and examples are provided to accommodate readers at all levels of familiarity with statistical principles and coding practices. A basic understanding of probability theory can enhance comprehension of certain concepts discussed within this book. In addition to plenty of examples, the book includes more than 200 exercises, with fully worked solutions available at: www.modernstatisticswithr.com.
Linear regression with one predictor variable; Inferences in regression and correlation analysis; Diagnostics and remedial measures; Simultaneous inferences and other topics in regression analysis; Matrix approach to simple linear regression analysis; Multiple linear regression; Nonlinear regression; Design and analysis of single-factor studies; Multi-factor studies; Specialized study designs.
Models and likelihood are the backbone of modern statistics and data analysis. The coverage is unrivaled, with sections on survival analysis, missing data, Markov chains, Markov random fields, point processes, graphical models, simulation and Markov chain Monte Carlo, estimating functions, asymptotic approximations, local likelihood, and spline regressions, as well as on more standard topics. Anthony Davison blends theory and practice to provide an integrated text for advanced undergraduate and graduate students, researchers, and practitioners. Its comprehensive coverage makes this the standard text and reference in the subject.
Written specifically for graduate students and practitioners beginning social science research, Statistical Modeling and Inference for Social Science covers the essential statistical tools, models and theories that make up the social scientist's toolkit. Assuming no prior knowledge of statistics, this textbook introduces students to probability theory, statistical inference and statistical modeling, and emphasizes the connection between statistical procedures and social science theory. Sean Gailmard develops core statistical theory as a set of tools to model and assess relationships between variables - the primary aim of social scientists - and demonstrates the ways in which social scientists express and test substantive theoretical arguments in various models. Chapter exercises guide students in applying concepts to data, extending their grasp of core theoretical concepts. Students will also gain the ability to create, read and critique statistical applications in their fields of interest.
This lively and engaging book explains the things you have to know in order to read empirical papers in the social and health sciences, as well as the techniques you need to build statistical models of your own. The discussion in the book is organized around published studies, as are many of the exercises. Relevant journal articles are reprinted at the back of the book. Freedman makes a thorough appraisal of the statistical methods in these papers and in a variety of other examples. He illustrates the principles of modelling, and the pitfalls. The discussion shows you how to think about the critical issues, including the connection (or lack of it) between the statistical models and the real phenomena. The book is written for advanced undergraduates and beginning graduate students in statistics, as well as students and professionals in the social and health sciences.