Design and Analysis of Time Series Experiments develops methods and models for the analysis and interpretation of time series experiments while also addressing recent developments in causal modeling. Unlike other time series texts, it integrates the statistical issues of design, estimation, and interpretation with foundational validity issues. Drawing on examples from criminology, economics, education, pharmacology, public policy, program evaluation, public health, and psychology, this text addresses researchers and graduate students across the behavioral, biomedical, and social sciences.
Hailed as a landmark in the development of experimental methods when it appeared in 1975, Design and Analysis of Time-Series Experiments is available again after several years of being out of print. Gene V Glass, Victor L. Willson, and John M. Gottman have carried forward the design and analysis of perhaps the most powerful and useful quasi-experimental design identified by their mentors in the classic Campbell & Stanley text Experimental and Quasi-Experimental Designs for Research (1966). In an era when governments seek to resolve questions of experimental validity by fiat and the label "Scientifically Based Research" is appropriated for only certain privileged experimental designs, nothing could be more appropriate than to bring back the classic text that challenges doctrinaire opinions of proper causal analysis. Glass, Willson & Gottman introduce and illustrate an armamentarium of interrupted time-series experimental designs that offer some of the most powerful tools for discovering and validating causal relationships in social and education policy analysis. Drawing on the ground-breaking statistical analytic tools of Box & Jenkins, the authors extend the comprehensive autoregressive integrated moving average (ARIMA) model to accommodate significance testing and estimation of the effects of interventions in real-world time series. Designs and full statistical analyses are richly illustrated with actual examples from education, behavioral psychology, and sociology.
Featuring engaging examples from diverse disciplines, this book explains how to use modern approaches to quasi-experimentation to derive credible estimates of treatment effects under the demanding constraints of field settings. Foremost expert Charles S. Reichardt provides an in-depth examination of the design and statistical analysis of pretest-posttest, nonequivalent groups, regression discontinuity, and interrupted time-series designs. He details their relative strengths and weaknesses and offers practical advice about their use. Reichardt compares quasi-experiments to randomized experiments and discusses when and why the former might be a better choice. Modern methods for elaborating a research design to remove bias from estimates of treatment effects are described, as are tactics for dealing with missing data and noncompliance with treatment assignment. Throughout, mathematical equations are translated into words to enhance accessibility.
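To make the regression-discontinuity design concrete, here is a minimal sketch in Python using simulated data and a simple local linear specification; the statsmodels library, the bandwidth, and all variable names are illustrative assumptions, not Reichardt's own analysis.

```python
# A minimal sketch of a sharp regression-discontinuity estimate.
# The simulated data and all parameter choices are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, cutoff = 500, 0.0
running = rng.uniform(-1, 1, n)              # assignment (running) variable
treated = (running >= cutoff).astype(float)  # sharp assignment rule
outcome = 1.0 + 0.8 * running + 2.0 * treated + rng.normal(0, 0.5, n)

# Local linear regression with separate slopes on each side of the cutoff,
# restricted to a bandwidth around the cutoff.
bandwidth = 0.5
mask = np.abs(running - cutoff) <= bandwidth
X = np.column_stack([treated, running - cutoff, treated * (running - cutoff)])
X = sm.add_constant(X)
fit = sm.OLS(outcome[mask], X[mask]).fit()
print(fit.params[1])  # estimated treatment effect at the cutoff (~2.0)
```

The coefficient on the treatment indicator estimates the jump in the regression function at the cutoff, which is the causal quantity the design identifies.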
Interrupted Time Series Analysis develops a comprehensive set of models and methods for drawing causal inferences from time series. It provides example analyses of social, behavioral, and biomedical time series to illustrate a general strategy for building AutoRegressive Integrated Moving Average (ARIMA) impact models. Additionally, the book supplements the classic Box-Jenkins-Tiao model-building strategy with recent auxiliary tests for transformation, differencing, and model selection. Not only does the text discuss new developments, including the prospects for widespread adoption of Bayesian hypothesis testing and synthetic control group designs, but it also makes optimal use of graphical illustrations in its examples. With forty completed example analyses that demonstrate the implications of model properties, Interrupted Time Series Analysis will be a key interdisciplinary text in classrooms, workshops, and short courses for researchers familiar with time series data or cross-sectional regression analysis but with limited background in the structure of time series processes and experiments.
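As a concrete illustration of an ARIMA impact model of the kind the book builds, the sketch below fits an ARIMA noise model with a step intervention entered as an exogenous regressor; the simulated series, the Python/statsmodels implementation, and the chosen model order are assumptions for illustration, not the book's own code.

```python
# A minimal sketch of an interrupted time series impact model:
# an abrupt, permanent level shift added to an AR(1) noise process,
# estimated by regression with ARIMA(1,0,0) errors.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n, t0 = 120, 80                        # series length, intervention onset
step = (np.arange(n) >= t0).astype(float)

# Simulate an AR(1) process, then impose a level shift of +3 at t0.
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal(0, 1.0)
y += 3.0 * step

model = SARIMAX(y, exog=step, order=(1, 0, 0))
result = model.fit(disp=False)
print(result.params)  # the exog coefficient estimates the level shift (~3)
```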
This bestselling professional reference has helped over 100,000 engineers and scientists carry out successful experiments. The new edition includes more software examples taken from the three most dominant programs in the field: Minitab, JMP, and SAS. Several chapters have been expanded with additional material, including new developments in robust design and factorial designs. New examples and exercises are also presented to illustrate the use of designed experiments in service and transactional organizations. Engineers will be able to apply this information to improve the quality and efficiency of working systems.
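For readers new to the subject, a minimal sketch of the simplest designed experiment such a text treats, a two-level full factorial, might look like the following; the factor names and response values are hypothetical, and the effect calculation is the standard contrast of averages rather than anything specific to this book.

```python
# A minimal sketch of a 2^3 full factorial design in coded (-1/+1) units,
# with main effects estimated as contrast averages. Data are made up.
import itertools
import numpy as np

factors = ["temperature", "pressure", "catalyst"]  # illustrative names
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Hypothetical responses, one per run, aligned with the rows of `design`.
y = np.array([45.0, 71.0, 48.0, 65.0, 68.0, 60.0, 66.0, 63.0])

# Main effect of each factor: mean response at +1 minus mean at -1.
for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"{name}: {effect:+.2f}")
```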
Oehlert's text is suitable either for a service course for non-statistics graduate students or for statistics majors. Unlike most texts for the one-term graduate/upper-level course on experimental design, Oehlert's new book offers a superb balance of both analysis and design, presenting three practical themes to students:
• when to use various designs
• how to analyze the results
• how to recognize various design options
Also, unlike older texts, the book is fully oriented toward the use of statistical software in analyzing experiments.
This book describes methods for designing and analyzing experiments conducted using a computer code (a computer experiment) and, when possible, a physical experiment. Computer experiments continue to increase in popularity as surrogates for and adjuncts to physical experiments. Since the publication of the first edition, there have been many methodological advances and software developments to implement these new methodologies. The computer experiments literature has emphasized the construction of algorithms for various data analysis tasks (design construction, prediction, sensitivity analysis, and calibration, among others) and the development of web-based repositories of designs for immediate application. While written at a level accessible to readers with Masters-level training in Statistics, the book presents its material in sufficient detail to be useful for practitioners and researchers. New to this revised and expanded edition:
• An expanded presentation of basic material on computer experiments and Gaussian processes with additional simulations and examples
• A new comparison of plug-in prediction methodologies for real-valued simulator output
• An enlarged discussion of space-filling designs, including Latin hypercube designs (LHDs), near-orthogonal designs, and nonrectangular regions
• A chapter-length description of process-based designs for optimization, improving overall fit, quantile estimation, and Pareto optimization
• A new chapter describing graphical and numerical sensitivity analysis tools
• Substantial new material on calibration-based prediction and inference for calibration parameters
• Lists of software that can be used to fit the models discussed in the book, to aid practitioners
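As a rough illustration of the design-then-emulate workflow such a book describes, the sketch below draws a Latin hypercube design, runs a toy stand-in for a computer code at the design points, and fits a Gaussian-process surrogate; the SciPy and scikit-learn calls are one possible implementation under those assumptions, not software from the book's own lists.

```python
# A minimal sketch: Latin hypercube design + Gaussian-process surrogate.
# The "simulator" is a toy function standing in for a real computer code.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=20)                       # 20 design points in [0, 1]^2

def simulator(x):                              # stand-in for a computer code
    return np.sin(6 * x[:, 0]) + x[:, 1] ** 2

y = simulator(X)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X, y)

x_new = np.array([[0.5, 0.5]])
mean, sd = gp.predict(x_new, return_std=True)  # prediction with uncertainty
print(mean, sd)
```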
We shall examine the validity of 16 experimental designs against 12 common threats to valid inference. By experiment we refer to that portion of research in which variables are manipulated and their effects upon other variables observed. It is well to distinguish the particular role of this chapter. It is not a chapter on experimental design in the Fisher (1925, 1935) tradition, in which an experimenter having complete mastery can schedule treatments and measurements for optimal statistical efficiency, with complexity of design emerging only from that goal of efficiency. Insofar as the designs discussed in the present chapter become complex, it is because of the intransigency of the environment: because, that is, of the experimenter’s lack of complete control.
W. Newton Suter argues that what is important in a changing education landscape is the ability to think clearly about research methods, reason through complex problems, and evaluate published research. He explains how to evaluate data and establish its relevance.
In this volume, originally published in 1992, the editors pursue three main goals: to take stock of progress in the development of data-analysis procedures for single-subject research; to clearly explain errors of application and consider them within the context of new theoretical and empirical information of the time; and to closely examine new developments in the analysis of data from single-subject or small-n experiments. To meet these goals, the book provides examples of applicable single-subject research data analysis. It presents a wide variety of topics and perspectives, in the hope that readers will select the data-analysis strategies that best reflect their methodological approaches, statistical sophistication, and philosophical beliefs. These strategies include visual analysis, nonparametric tests, time-series experiments, applications of statistical procedures for multiple behaviors, applications of meta-analysis in single-subject research, and discussions of issues related to the application and misapplication of selected techniques.