This book examines the consequences of misspecification for the interpretation of likelihood-based methods of statistical estimation and inference. The analysis concludes with an examination of methods by which the possibility of misspecification can be empirically investigated.
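For orientation, a standard result in this literature (sketched here as background, not quoted from the book): if the assumed density f(y; θ) is misspecified relative to the true density g, the quasi-maximum likelihood estimator converges not to a "true" parameter but to the pseudo-true value

\[ \theta^{*} = \arg\max_{\theta} \, \mathbb{E}_{g}\!\left[\log f(y;\theta)\right], \]

equivalently the minimizer of the Kullback-Leibler divergence from g to f(·; θ). It is this gap between θ* and any structural parameter of interest that makes the interpretation of likelihood-based inference under misspecification delicate.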
This book was first published in 2007. The small-sample properties of estimators and tests are frequently unknown or too complex to be useful. Much econometric theory is therefore developed for very large (asymptotic) samples, on the assumption that the asymptotic behaviour of estimators and tests adequately represents their properties in small samples. Refined asymptotic methods adopt an intermediate position, using asymptotic expansions to provide improved approximations to small-sample behaviour. Dedicated to the memory of Michael Magdalinos, whose work is a major contribution to this area, this book contains chapters directly concerned with refined asymptotic methods. In addition, there are chapters focusing on new asymptotic results; the exploration through simulation of the small-sample behaviour of estimators and tests in panel data models; and improvements in methodology. With contributions from leading econometricians, this collection will be essential reading for researchers and graduate students concerned with the use of asymptotic methods in econometric analysis.
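As a concrete illustration of what "refined asymptotics" means (a textbook sketch, not drawn from the volume itself): where a first-order result approximates the distribution of a standardized statistic T_n by its normal limit, an Edgeworth expansion adds correction terms in powers of n^{-1/2},

\[ P(T_n \le x) = \Phi(x) + n^{-1/2}\, p_1(x)\,\phi(x) + n^{-1}\, p_2(x)\,\phi(x) + O(n^{-3/2}), \]

where Φ and φ are the standard normal cdf and density and p_1, p_2 are polynomials whose coefficients depend on the moments of the data. Truncating after the correction terms typically tracks small-sample behaviour better than the plain normal limit.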
Mounting failures of replication in social and biological sciences give a new urgency to critically appraising proposed reforms. This book pulls back the cover on disagreements between experts charged with restoring integrity to science. It denies two pervasive views of the role of probability in inference: to assign degrees of belief, and to control error rates in a long run. If statistical consumers are unaware of assumptions behind rival evidence reforms, they can't scrutinize the consequences that affect them (in personalized medicine, psychology, etc.). The book sets sail with a simple tool: if little has been done to rule out flaws in inferring a claim, then it has not passed a severe test. Many methods advocated by data experts do not stand up to severe scrutiny and are in tension with successful strategies for blocking or accounting for cherry picking and selective reporting. Through a series of excursions and exhibits, the philosophy and history of inductive inference come alive. Philosophical tools are put to work to solve problems about science and pseudoscience, induction and falsification.
'The editors of the new SAGE Handbook of Regression Analysis and Causal Inference have assembled a wide-ranging, high-quality, and timely collection of articles on topics of central importance to quantitative social research, many written by leaders in the field. Everyone engaged in statistical analysis of social-science data will find something of interest in this book.' - John Fox, Professor, Department of Sociology, McMaster University. 'The authors do a great job in explaining the various statistical methods in a clear and simple way - focussing on fundamental understanding, interpretation of results, and practical application - yet being precise in their exposition.' - Ben Jann, Executive Director, Institute of Sociology, University of Bern. 'Best and Wolf have put together a powerful collection, especially valuable in its separate discussions of uses for both cross-sectional and panel data analysis.' - Tom Smith, Senior Fellow, NORC, University of Chicago. Edited and written by a team of leading international social scientists, this Handbook provides a comprehensive introduction to multivariate methods. The Handbook focuses on regression analysis of cross-sectional and longitudinal data with an emphasis on causal analysis, thereby covering a large number of different techniques, including selection models, complex samples, and regression discontinuities. Each Part starts with a non-mathematical introduction to the method covered in that section, giving readers a basic knowledge of the method's logic, scope and unique features. Next, the mathematical and statistical basis of each method is presented, along with advanced aspects. Using real-world data from the European Social Survey (ESS) and the Socio-Economic Panel (GSOEP), the book provides a comprehensive discussion of each method's application, making this an ideal text for PhD students and researchers embarking on their own data analysis.
The Econometric Analysis of Network Data serves as an entry point for advanced students, researchers, and data scientists seeking to perform effective analyses of networks, especially inference problems. It introduces the key results and ideas in an accessible yet rigorous way. Although it is a multi-contributor reference, the work is tightly focused and disciplined, accommodating varied specialties within a single authorial voice.
This User’s Guide is a resource for investigators and stakeholders who develop and review observational comparative effectiveness research protocols. It explains how to (1) identify key considerations and best practices for research design; (2) build a protocol based on these standards and best practices; and (3) judge the adequacy and completeness of a protocol. Eleven chapters cover all aspects of research design, including: developing study objectives, defining and refining study questions, addressing the heterogeneity of treatment effect, characterizing exposure, selecting a comparator, defining and measuring outcomes, and identifying optimal data sources. Checklists of guidance and key considerations for protocols are provided at the end of each chapter. The User’s Guide was created by researchers affiliated with AHRQ’s Effective Health Care Program, particularly those who participated in AHRQ’s DEcIDE (Developing Evidence to Inform Decisions About Effectiveness) program. Chapters were subject to multiple internal and external independent reviews. For more information, please consult the Agency website: www.effectivehealthcare.ahrq.gov
It is increasingly common for analysts to seek out the opinions of individuals and organizations using attitudinal scales such as degree of satisfaction or importance attached to an issue. Examples include levels of obesity, seriousness of a health condition, attitudes towards service levels, opinions on products, voting intentions, and the degree of clarity of contracts. Ordered choice models provide a relevant methodology for capturing the sources of influence that explain the choice made amongst a set of ordered alternatives. The methods have evolved to a level of sophistication that can allow for heterogeneity in the threshold parameters, in the explanatory variables (through random parameters), and in the decomposition of the residual variance. This book brings together contributions in ordered choice modeling from a number of disciplines, synthesizes developments over the last fifty years, and suggests useful extensions to account for the wide range of sources of influence on choice.
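For readers new to the area, the canonical setup (a standard formulation, sketched here for context rather than taken from the book) posits a latent variable crossed by ordered thresholds:

\[ y_i^{*} = x_i'\beta + \varepsilon_i, \qquad y_i = j \;\iff\; \mu_{j-1} < y_i^{*} \le \mu_j, \qquad \mu_0 < \mu_1 < \cdots < \mu_J, \]

with an ordered probit or ordered logit obtained by taking ε_i to be normal or logistic. The heterogeneity described above enters by letting the thresholds μ_j or the coefficients β vary randomly across respondents.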
Offers a radically new approach to inference with nonexperimental data when the statistical model is ambiguously defined. Examines the process of model searching and its implications for inference. Identifies six different varieties of specification searches, discussing the inferential consequences of each in detail.
This book presents statistical methods for analysis of the duration of events. The primary focus is on models for single-spell data, events in which individual agents are observed for a single duration. Some attention is also given to multiple-spell data. The first part of the book covers model specification, including both structural and reduced-form models and models with and without neglected heterogeneity. The book next deals with likelihood-based inference about such models, with sections on full and semiparametric specification. A final section treats graphical and numerical methods of specification testing. This is the first published exposition of current econometric methods for the study of duration data.
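A minimal sketch of the single-spell framework (standard background, assumed here rather than quoted): with duration density f(t), survivor function S(t) = P(T > t), and hazard λ(t) = f(t)/S(t), an observation with duration t_i and censoring indicator d_i (1 if the spell is observed to end, 0 if censored) contributes

\[ L_i = f(t_i)^{d_i}\, S(t_i)^{1-d_i} \]

to the likelihood. Neglected heterogeneity is typically handled by mixing the hazard over an unobserved multiplicative term, which is one reason specification choices matter so much in this setting.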