This book collects and unifies statistical models and methods that have been proposed for analyzing interval-censored failure time data. It provides the first comprehensive coverage of the topic and complements existing books on right-censored data. The focus is on nonparametric and semiparametric inference, but parametric and imputation approaches are also described. The book provides an up-to-date reference both for researchers working on the analysis of interval-censored failure time data and for those who need to analyze interval-censored data to answer substantive questions.
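Interval censoring means that each event time is known only to lie within an interval (L, R], with right-censoring as the special case R = infinity. A minimal sketch of how such observations are typically encoded (the function and labels are illustrative, not taken from the book):

```python
import math

def censoring_type(left, right):
    """Classify an observation whose event time lies in (left, right].

    left == right            -> exactly observed event time
    left == 0, finite right  -> left-censored (event before the first visit)
    finite left, right = inf -> right-censored (event after the last visit)
    otherwise                -> interval-censored
    """
    if left == right:
        return "exact"
    if left == 0:
        return "left-censored"
    if math.isinf(right):
        return "right-censored"
    return "interval-censored"

# Each subject contributes the narrowest interval known to contain the
# event time, e.g. the last negative and first positive clinic visits.
observations = [(0, 4), (3, 7), (5, math.inf), (6, 6)]
types = [censoring_type(l, r) for l, r in observations]
```

Right-censored data are thus the special case the existing books cover; interval-censored methods must handle all four cases at once.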
Interval-Censored Time-to-Event Data: Methods and Applications collects the most recent techniques, models, and computational tools for interval-censored time-to-event data. Top biostatisticians from academia, the biopharmaceutical industry, and government agencies discuss how these advances are affecting clinical trials and biomedical research. Divided into three parts, the book begins with an overview of interval-censored data modeling, including nonparametric estimation, survival functions, regression analysis, multivariate data analysis, competing risks analysis, and other models for interval-censored data. The next part presents interval-censored methods for current status data, Bayesian semiparametric regression analysis of interval-censored data with monotone splines, Bayesian inferential models for interval-censored data, an estimator for identifying the causal effect of a treatment, and consistent variance estimation for interval-censored data. In the final part, the contributors use Monte Carlo simulation to assess biases in progression-free survival analysis and to correct bias in interval-censored time-to-event applications. They also present adaptive decision-making methods to optimize the rapid treatment of stroke, explore practical issues in using weighted logrank tests, and describe how to use two R packages. A practical guide for biomedical researchers, clinicians, biostatisticians, and graduate students in biostatistics, this volume covers the latest developments in the analysis and modeling of interval-censored time-to-event data. It shows how up-to-date statistical methods are used in biopharmaceutical and public health applications.
Panel count data occur in studies of recurrent events, or event history studies, when the study subjects are observed only at discrete time points. By recurrent events, we mean events that can occur multiple times or repeatedly; examples include disease infections and hospitalizations in medical studies, and warranty claims of automobiles or system breakdowns in reliability studies. Many other fields, such as demography, economics, and the social sciences, also yield event history data. When the study subjects are observed continuously, the resulting data are usually referred to as recurrent event data. This book collects and unifies statistical models and methods that have been developed for analyzing panel count data, providing the first comprehensive coverage of the topic. The main focus is on methodology, but for the benefit of the reader, applications of the methods to real data are also discussed, along with the numerical calculations. A great deal of literature exists on the analysis of recurrent event data; this book fills the corresponding void for panel count data. It provides an up-to-date reference for scientists conducting research on the analysis of panel count data and will also be instructional for those who need to analyze panel count data to answer substantive research questions. In addition, it can be used as a text for a graduate course in statistics or biostatistics that assumes a basic knowledge of probability and statistics.
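The distinction between recurrent event data and panel count data can be made concrete: under continuous observation one records the exact event times, while under discrete observation one records only the cumulative event count at each visit. A minimal sketch of the reduction from one to the other (the function name and data are illustrative):

```python
from bisect import bisect_right

def to_panel_counts(event_times, visit_times):
    """Reduce exact recurrent-event times to panel count data:
    the cumulative number of events observed by each visit time."""
    event_times = sorted(event_times)
    return [bisect_right(event_times, v) for v in visit_times]

# Under continuous follow-up we would see the exact event times ...
events = [1.2, 2.5, 2.9, 6.1]
# ... but with visits only at these discrete times, the panel count
# data retain just the cumulative counts, not the times themselves.
visits = [2.0, 4.0, 8.0]
counts = to_panel_counts(events, visits)  # [1, 3, 4]
```

The information loss in this reduction is exactly why methods developed for recurrent event data do not carry over directly to panel count data.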
Kosorok’s brilliant text provides a self-contained introduction to empirical processes and semiparametric inference. These powerful research techniques are surprisingly useful for developing methods of statistical inference for complex models and for understanding the properties of such methods. This is an authoritative text that covers all the bases, and also a friendly and gradual introduction to the area. The book can be used both as a research reference and as a textbook.
Making complex methods more accessible to applied researchers without an advanced mathematical background, the authors present the essence of new techniques available, as well as classical techniques, and apply them to data. Practical suggestions for implementing the various methods are set off in a series of practical notes at the end of each section, while technical details of the derivation of the techniques are sketched in the technical notes. This book will thus be useful for investigators who need to analyze censored or truncated lifetime data, and as a textbook for a graduate course in survival analysis, the only prerequisite being a standard course in statistical methodology.
This book focuses on statistical methods for the analysis of discrete failure times. Failure time analysis is one of the most important fields in statistical research, with applications affecting a wide range of disciplines, in particular, demography, econometrics, epidemiology and clinical research. Although there is a large variety of statistical methods for failure time analysis, many techniques are designed for failure times that are measured on a continuous scale. In empirical studies, however, failure times are often discrete, either because they have been measured in intervals (e.g., quarterly or yearly) or because they have been rounded or grouped. The book covers well-established methods like life-table analysis and discrete hazard regression models, but also introduces state-of-the-art techniques for model evaluation, nonparametric estimation and variable selection. Throughout, the methods are illustrated by real-life applications, and relationships to survival analysis in continuous time are explained. Each section includes a set of exercises on the respective topics. Various functions and tools for the analysis of discrete survival data are collected in the R package discSurv that accompanies the book.
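Life-table analysis, the most classical of the well-established methods mentioned, estimates a discrete hazard h_j = d_j / n_j in each interval (deaths divided by the number at risk) and multiplies the complements 1 - h_j to obtain the survival function. A minimal sketch of that computation (this is an illustration, not the discSurv implementation):

```python
def life_table_survival(deaths, at_risk):
    """Discrete-time survival from interval death counts d_j and
    numbers at risk n_j: S_j = prod over k <= j of (1 - d_k / n_k)."""
    surv, s = [], 1.0
    for d, n in zip(deaths, at_risk):
        s *= 1.0 - d / n  # multiply by the interval's conditional survival
        surv.append(s)
    return surv

# 100 subjects: 10 die in the first interval, then 9 of the remaining 90
# die in the second, giving discrete hazards 0.1 and 0.1.
s = life_table_survival([10, 9], [100, 90])  # approximately [0.9, 0.81]
```

Grouping continuous failure times into quarterly or yearly intervals produces exactly the d_j and n_j counts this estimator consumes, which is why the discrete view arises so often in practice.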
Although standard mixed effects models are useful in a range of studies, other approaches must often be used in conjunction with them when studying complex or incomplete data. Mixed Effects Models for Complex Data discusses commonly used mixed effects models and presents appropriate approaches to address dropouts, missing data, measurement errors, censoring, and outliers. For each class of mixed effects model, the author reviews the corresponding class of regression model for cross-sectional data.

An overview of general models and methods, along with motivating examples: After presenting real data examples and outlining general approaches to the analysis of longitudinal/clustered data and incomplete data, the book introduces linear mixed effects (LME) models, generalized linear mixed models (GLMMs), nonlinear mixed effects (NLME) models, and semiparametric and nonparametric mixed effects models. It also includes general approaches for the analysis of complex data with missing values, measurement errors, censoring, and outliers.

Self-contained coverage of specific topics: Subsequent chapters delve more deeply into missing data problems, covariate measurement errors, and censored responses in mixed effects models. Focusing on incomplete data, the book also covers survival and frailty models, joint models of survival and longitudinal data, robust methods for mixed effects models, marginal generalized estimating equation (GEE) models for longitudinal or clustered data, and Bayesian methods for mixed effects models.

Background material: In the appendix, the author provides background information, such as likelihood theory, the Gibbs sampler, rejection and importance sampling methods, numerical integration methods, optimization methods, the bootstrap, and matrix algebra.

Failure to properly address missing data, measurement errors, and other issues in statistical analyses can lead to severely biased or misleading results.
This book explores the biases that arise when naïve methods are used and shows which approaches should be used to achieve accurate results in longitudinal data analysis.
This book is an accessible, practical and comprehensive guide for researchers from multiple disciplines, including biomedicine, epidemiology, engineering and the social sciences. Written for accessibility, this book will appeal to students and researchers who want to understand the basics of survival and event history analysis and apply these methods without getting entangled in mathematical and theoretical technicalities. Inside, readers are offered a blueprint for their entire research project, from data preparation to model selection and diagnostics. Engaging, easy to read, functional and packed with enlightening examples, ‘hands-on’ exercises, conversations with key scholars and resources for both students and instructors, this text allows researchers to quickly master advanced statistical techniques. It is written from the perspective of the ‘user’, making it suitable as both a self-learning tool and a graduate-level textbook. Also included are up-to-date innovations in the field, including advancements in the assessment of model fit, unobserved heterogeneity, recurrent events and multilevel event history models. Practical instructions are also included for using the statistical programs R, Stata and SPSS, enabling readers to replicate the examples described in the text.
This book presents the statistical analysis of clustered survival data. Such data are encountered in many scientific disciplines, including human and veterinary medicine, biology, epidemiology, public health and demography. A typical example is the time to death in cancer patients, with patients clustered in hospitals. Frailty models provide a powerful tool for analyzing clustered survival data. Different methods based on the frailty model are described, and it is demonstrated how they can be used to analyze clustered survival data. All programs used for the examples are available on the Springer website.
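The idea behind a shared frailty model is that each cluster (e.g. a hospital) carries an unobserved positive random effect that multiplies the hazard of every subject in it, inducing within-cluster correlation in the survival times. A minimal simulation sketch under a gamma frailty with mean 1 and an exponential baseline hazard (all parameter values and names are illustrative, not from the book):

```python
import random

def simulate_shared_frailty(n_clusters, cluster_size, base_rate=0.1,
                            frailty_var=0.5, seed=42):
    """Simulate clustered survival times: within cluster i, every subject's
    hazard is w_i * base_rate, where w_i ~ Gamma(1/var, var) has mean 1.
    Returns a list of (cluster_id, survival_time) pairs."""
    rng = random.Random(seed)
    shape = 1.0 / frailty_var  # mean shape*scale = 1, variance = frailty_var
    data = []
    for i in range(n_clusters):
        w = rng.gammavariate(shape, frailty_var)  # shared within the cluster
        for _ in range(cluster_size):
            # Given the frailty, times are exponential with rate w * base_rate.
            data.append((i, rng.expovariate(w * base_rate)))
    return data

times = simulate_shared_frailty(n_clusters=5, cluster_size=4)
```

A high-frailty cluster tends to produce uniformly short survival times, which is precisely the dependence that analyses ignoring clustering would miss.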
Data collected on the time to an event, such as the death of a patient in a medical study, are known as survival data. The methods for analyzing survival data can also be used to analyze data on the time to events such as the recurrence of a disease or relief from symptoms. Modelling Survival Data in Medical Research begins with an introduction to survival analysis and a description of four studies in which survival data were obtained. These and other data sets are then used to illustrate the techniques presented in the following chapters, including the Cox and Weibull proportional hazards models; accelerated failure time models; models with time-dependent variables; interval-censored survival data; model checking; and use of statistical packages. Designed for statisticians in the pharmaceutical industry and medical research institutes, and for numerate scientists and clinicians analyzing their own data sets, this book also meets the need for an intermediate text which emphasizes the application of the methodology to survival data arising from medical studies.
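In the Weibull proportional hazards model mentioned above, covariates scale a Weibull baseline hazard multiplicatively, so a coefficient of log 2 on a binary covariate doubles the hazard at every time point. A minimal sketch of the hazard function (parameter names and values are illustrative):

```python
import math

def weibull_ph_hazard(t, x, beta, lam=0.05, gamma=1.5):
    """Hazard h(t | x) = lam * gamma * t**(gamma - 1) * exp(beta . x):
    a Weibull baseline hazard scaled by the covariate effect exp(beta . x)."""
    linpred = sum(b * xi for b, xi in zip(beta, x))
    return lam * gamma * t ** (gamma - 1) * math.exp(linpred)

# With beta = [log 2] and a binary treatment indicator, treated subjects
# (x = [1]) have twice the hazard of controls (x = [0]) at every time t.
h0 = weibull_ph_hazard(2.0, [0], [math.log(2)])
h1 = weibull_ph_hazard(2.0, [1], [math.log(2)])  # h1 is twice h0
```

The same proportional structure underlies the Cox model, which leaves the baseline hazard unspecified instead of assuming a Weibull form.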