This book collects and unifies statistical models and methods that have been proposed for analyzing interval-censored failure time data. It provides the first comprehensive coverage of the topic and complements existing books on right-censored data. The focus is on nonparametric and semiparametric inference, though parametric and imputation approaches are also described. The book serves as an up-to-date reference both for researchers working on the analysis of interval-censored failure time data and for those who need to analyze interval-censored data to answer substantive questions.
Missing data pose challenges to real-life data analysis. Simple ad hoc fixes, like deletion or mean imputation, work only under highly restrictive conditions that are often not met in practice. Multiple imputation replaces each missing value by multiple plausible values; the variability between these replacements reflects our ignorance of the true (but missing) value. Each completed data set is then analyzed by standard methods, and the results are pooled to obtain unbiased estimates with correct confidence intervals. Multiple imputation is a general approach that also inspires novel solutions to old problems by reformulating the task at hand as a missing-data problem. This is the second edition of a popular book on multiple imputation, focused on explaining the application of methods through detailed worked examples using the MICE package developed by the author. The new edition incorporates recent developments in this fast-moving field. This class-tested book avoids mathematical and technical details as much as possible: formulas are accompanied by verbal statements that explain them in accessible terms. The book sharpens the reader's intuition on how to think about missing data, and provides all the tools needed to execute a well-grounded quantitative analysis in the presence of missing data.
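The impute-analyze-pool workflow described above can be sketched in a few lines. The following is an illustrative toy only, assuming a simple hot-deck draw from the observed values to impute a sample mean; it is not the chained-equations algorithm of the MICE package. The pooling step follows Rubin's rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sample with some values missing completely at random.
x = rng.normal(loc=10.0, scale=2.0, size=200)
missing = rng.random(200) < 0.3
x_obs = x[~missing]

m = 20  # number of imputations
estimates, variances = [], []
for _ in range(m):
    # Naive illustrative imputation: draw replacements from the observed
    # values (a simple hot-deck scheme, NOT MICE's chained equations).
    draws = rng.choice(x_obs, size=missing.sum(), replace=True)
    completed = np.concatenate([x_obs, draws])
    estimates.append(completed.mean())
    variances.append(completed.var(ddof=1) / completed.size)

# Rubin's rules: pool the m estimates and their variances.
q_bar = np.mean(estimates)     # pooled point estimate
w = np.mean(variances)         # within-imputation variance
b = np.var(estimates, ddof=1)  # between-imputation variance
t = w + (1 + 1 / m) * b        # total variance of the pooled estimate
```

The key point is visible in the last line: the total variance `t` exceeds the naive within-imputation variance `w` by a term reflecting the between-imputation spread, which is exactly how multiple imputation propagates our uncertainty about the missing values into the confidence interval.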
Multistate Models for the Analysis of Life History Data provides the first comprehensive treatment of multistate modeling and analysis, including parametric, nonparametric and semiparametric methods applicable to many types of life history data. Special models such as illness-death, competing risks and progressive processes are considered, as well as more complex models. The book provides both theoretical development and illustrations of analysis based on data from randomized trials and observational cohort studies in health research. It features:
- Discussion of a wide range of applications of multistate models.
- Methods for both continuously and intermittently observed life history processes.
- A thorough discussion of conditionally independent censoring and observation processes.
- Models with random effects and joint models for two or more multistate processes.
- Discussion and illustration of software for multistate analysis available in R.
The target audience includes those engaged in research and applications involving multistate models.
Survival Analysis Using S: Analysis of Time-to-Event Data is designed as a text for a one-semester or one-quarter course in survival analysis for upper-level or graduate students in statistics, biostatistics, and epidemiology. Prerequisites are a standard pre-calculus first course in probability and statistics, and a course in applied linear regression models. No prior knowledge of S or R is assumed. A wide choice of exercises is included, some intended for more advanced students with a first course in mathematical statistics. The authors emphasize parametric log-linear models, while also detailing nonparametric procedures along with model building and data diagnostics. Medical and public health researchers will find the discussion of cut point analysis with bootstrap validation, competing risks and the cumulative incidence estimator, and the analysis of left-truncated and right-censored data invaluable. The bootstrap procedure checks the robustness of the cut point analysis and determines the cut point(s). In a chapter written by Stephen Portnoy, censored regression quantiles - a new nonparametric regression methodology (2003) - is developed to identify important forms of population heterogeneity and to detect departures from traditional Cox models. By generalizing the Kaplan-Meier estimator to regression models for conditional quantiles, this method provides a valuable complement to traditional Cox proportional hazards approaches.
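Since the Kaplan-Meier estimator is the starting point that censored regression quantiles generalize, a minimal sketch of the product-limit calculation for right-censored data may help fix ideas. The function name and the five-observation data set below are invented for illustration; real analyses would use an established implementation such as R's survival package.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate S(t) at each distinct event time.

    times  : observed times (event or censoring)
    events : 1 if the event occurred, 0 if right-censored
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    event_times = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(times >= t)               # still under observation just before t
        d = np.sum((times == t) & (events == 1))   # events occurring exactly at t
        s *= 1.0 - d / at_risk                     # product-limit update
        surv.append((t, s))
    return surv

# Tiny worked example: events at times 2, 4, 6; censoring at 3 and 5.
est = kaplan_meier([2, 3, 4, 5, 6], [1, 0, 1, 0, 1])
```

Censored subjects leave the risk set without triggering a survival drop, which is precisely how the estimator uses partial information from censored observations.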
Survival Analysis with Interval-Censored Data: A Practical Approach with Examples in R, SAS, and BUGS provides the reader with a practical introduction to the analysis of interval-censored survival times. Although many theoretical developments have appeared in the last fifty years, interval censoring is often ignored in practice, and many analysts are unaware of the impact of dealing with it inappropriately. In addition, the necessary software is at times difficult to trace. This book fills the gap between theory and practice. Features:
- Provides an overview of frequentist as well as Bayesian methods.
- Includes a focus on practical aspects and applications.
- Extensively illustrates the methods with examples using R, SAS, and BUGS. Full programs are available on a supplementary website.
The authors: Kris Bogaerts is project manager at I-BioStat, KU Leuven. He received his PhD in science (statistics) at KU Leuven on the analysis of interval-censored data. He has gained expertise in a great variety of statistical topics with a focus on the design and analysis of clinical trials. Arnošt Komárek is associate professor of statistics at Charles University, Prague. His areas of expertise cover mainly survival analysis, with an emphasis on interval-censored data, and classification based on longitudinal data. He is past chair of the Statistical Modelling Society and editor of Statistical Modelling: An International Journal. Emmanuel Lesaffre is professor of biostatistics at I-BioStat, KU Leuven. His research interests include Bayesian methods, longitudinal data analysis, statistical modelling, analysis of dental data, interval-censored data, misclassification issues, and clinical trials. He is the founding chair of the Statistical Modelling Society, past president of the International Society for Clinical Biostatistics, and a fellow of the ISI and the ASA.
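To make the idea of handling interval censoring properly concrete, here is a minimal sketch (not taken from the book) of how interval-censored observations enter a parametric likelihood. Under an assumed exponential model with rate lam, an event known only to lie in the interval (L, R] contributes P(L < T <= R) = exp(-lam*L) - exp(-lam*R), so the rate can be estimated by maximizing this likelihood directly instead of imputing a single event time. The periodic-inspection setup and the grid search are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = 0.5

# True event times are exponential, but each subject is only examined at
# visits one time unit apart, so we only record the interval (L, R]
# known to contain the event -- classic interval censoring.
t = rng.exponential(scale=1 / true_rate, size=2000)
left = np.floor(t)
right = left + 1.0

def loglik(lam):
    # Each interval-censored observation contributes
    # log( exp(-lam*L) - exp(-lam*R) ) to the log-likelihood.
    return np.sum(np.log(np.exp(-lam * left) - np.exp(-lam * right)))

# Simple grid search for the maximum likelihood estimate of the rate.
grid = np.linspace(0.05, 2.0, 400)
rate_hat = grid[np.argmax([loglik(g) for g in grid])]
```

With this likelihood the estimate recovers the true rate despite no exact event time ever being observed, which is the essential contrast with ad hoc fixes such as treating an imputed point within the interval as if it were the observed time.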
In the statistical domain, certain topics have received considerable attention during the last decade or so, necessitated by the growth and evolution of data and by theoretical challenges. This growth has invariably been accompanied by computational advances, which have presented end users as well as researchers with the opportunities needed to handle data and implement modelling solutions for statistical purposes. Showcasing the interplay among a variety of disciplines, this book offers pioneering theoretical and applied solutions to practice-oriented problems. As a carefully curated collection of contributions from prominent international thought leaders, it fosters collaboration between statisticians and biostatisticians and provides an array of thought processes and tools to its readers. The book thereby creates an understanding and appreciation of recent developments, as well as an implementation of these contributions, within the broader framework of both academia and industry. Computational and Methodological Statistics and Biostatistics is composed of three main themes:
• Recent developments in theory and applications of statistical distributions;
• Recent developments in supervised and unsupervised modelling;
• Recent developments in biostatistics;
and also features programming code and accompanying algorithms that enable readers to replicate and implement the methodologies. This monograph therefore provides a concise point of reference for a variety of current trends and topics within the statistical domain. With interdisciplinary appeal, it will be useful to researchers, graduate students, and practitioners in statistics, biostatistics, clinical methodology, geology, data science, and actuarial science, amongst others.
The past decades have transformed the world of statistical data analysis, with new methods, new types of data, and new computational tools. Modern Statistics with R introduces you to key parts of this modern statistical toolkit. It teaches you:
- Data wrangling: importing, formatting, reshaping, merging, and filtering data in R.
- Exploratory data analysis: using visualisations and multivariate techniques to explore datasets.
- Statistical inference: modern methods for testing hypotheses and computing confidence intervals.
- Predictive modelling: regression models and machine learning methods for prediction, classification, and forecasting.
- Simulation: using simulation techniques for sample size computations and evaluations of statistical methods.
- Ethics in statistics: ethical issues and good statistical practice.
- R programming: writing code that is fast, readable, and (hopefully!) free from bugs.
No prior programming experience is necessary. Clear explanations and examples are provided to accommodate readers at all levels of familiarity with statistical principles and coding practices. A basic understanding of probability theory can enhance comprehension of certain concepts discussed within this book. In addition to plenty of examples, the book includes more than 200 exercises, with fully worked solutions available at: www.modernstatisticswithr.com.
Making complex methods accessible to applied researchers without an advanced mathematical background, the authors present the essence of new techniques, as well as classical ones, and apply them to data. Practical suggestions for implementing the various methods are set off in a series of practical notes at the end of each section, while technical details of the derivations are sketched in the technical notes. The book will thus be useful for investigators who need to analyse censored or truncated lifetime data, and as a textbook for a graduate course in survival analysis, the only prerequisite being a standard course in statistical methodology.
Although standard mixed effects models are useful in a range of studies, other approaches must often be used in conjunction with them when studying complex or incomplete data. Mixed Effects Models for Complex Data discusses commonly used mixed effects models and presents appropriate approaches to address dropouts, missing data, measurement errors, censoring, and outliers. For each class of mixed effects model, the author reviews the corresponding class of regression model for cross-sectional data.
An overview of general models and methods, along with motivating examples: after presenting real data examples and outlining general approaches to the analysis of longitudinal/clustered data and incomplete data, the book introduces linear mixed effects (LME) models, generalized linear mixed models (GLMMs), nonlinear mixed effects (NLME) models, and semiparametric and nonparametric mixed effects models. It also includes general approaches for the analysis of complex data with missing values, measurement errors, censoring, and outliers.
Self-contained coverage of specific topics: subsequent chapters delve more deeply into missing data problems, covariate measurement errors, and censored responses in mixed effects models. Focusing on incomplete data, the book also covers survival and frailty models, joint models of survival and longitudinal data, robust methods for mixed effects models, marginal generalized estimating equation (GEE) models for longitudinal or clustered data, and Bayesian methods for mixed effects models.
Background material: in the appendix, the author provides background information, such as likelihood theory, the Gibbs sampler, rejection and importance sampling methods, numerical integration methods, optimization methods, the bootstrap, and matrix algebra.
Failure to properly address missing data, measurement errors, and other issues in statistical analyses can lead to severely biased or misleading results. This book explores the biases that arise when naïve methods are used and shows which approaches should be used to achieve accurate results in longitudinal data analysis.
Interval-Censored Time-to-Event Data: Methods and Applications collects the most recent techniques, models, and computational tools for interval-censored time-to-event data. Top biostatisticians from academia, the biopharmaceutical industry, and government agencies discuss how these advances are impacting clinical trials and biomedical research. Divided into three parts, the book begins with an overview of interval-censored data modeling, including nonparametric estimation, survival functions, regression analysis, multivariate data analysis, competing risks analysis, and other models for interval-censored data. The next part presents interval-censored methods for current status data, Bayesian semiparametric regression analysis of interval-censored data with monotone splines, Bayesian inferential models for interval-censored data, an estimator for identifying the causal effect of treatment, and consistent variance estimation for interval-censored data. In the final part, the contributors use Monte Carlo simulation to assess biases in progression-free survival analysis and to correct bias in interval-censored time-to-event applications. They also present adaptive decision-making methods to optimize the rapid treatment of stroke, explore practical issues in using weighted log-rank tests, and describe how to use two R packages. A practical guide for biomedical researchers, clinicians, biostatisticians, and graduate students in biostatistics, this volume covers the latest developments in the analysis and modeling of interval-censored time-to-event data. It shows how up-to-date statistical methods are used in biopharmaceutical and public health applications.