This book illustrates numerous statistical practices that are commonly used by medical researchers but have severe flaws that may not be obvious. For each example, it provides one or more alternative statistical methods that avoid misleading or incorrect inferences. The technical level is kept to a minimum to make the book accessible to non-statisticians. At the same time, because many of the examples describe methods used routinely by medical statisticians with formal statistical training, the book appeals to a broad readership in the medical research community.
Clinical trials are used to elucidate the most appropriate preventive, diagnostic, or treatment options for individuals with a given medical condition. Perhaps the most essential feature of a clinical trial is that it uses results from a limited sample of research participants to determine whether an intervention is safe and effective, or how it compares with a control treatment. Sample size is a crucial component of any clinical trial. A trial with a small number of research participants is more prone to variability and carries a considerable risk of failing to demonstrate the effectiveness of an intervention that really is effective. This risk arises in phase I (safety and pharmacologic profiles), phase II (pilot efficacy evaluation), and phase III (extensive assessment of safety and efficacy) trials. Although phase I and II studies may have smaller sample sizes, they usually have adequate statistical power, which is the committee's definition of a "large" trial. Even a trial with eight participants may have adequate statistical power, where statistical power is the probability of rejecting the null hypothesis when it is false. Small Clinical Trials assesses the current methodologies and the appropriate situations for the conduct of clinical trials with small sample sizes. It assesses the published literature on strategies such as (1) meta-analysis to combine disparate information from several studies, including Bayesian techniques such as the confidence profile method, and (2) alternatives such as assessing therapeutic results in a single treated population (e.g., astronauts) by sequentially measuring whether the intervention falls above or below a preestablished probability outcome range and meets predesigned specifications, as opposed to demonstrating incremental improvement.
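The report's point about power and sample size lends itself to a quick calculation. Below is a minimal sketch, not taken from the report, of how the power of a two-sample t-test depends on effect size when the sample is very small; reading "eight participants" as eight per arm, and the effect sizes and alpha level used here, are illustrative assumptions.

```python
# Statistical power: the probability of rejecting the null hypothesis
# when it is false. With tiny samples, power is adequate only if the
# true effect is very large.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Two-sample t-test with 8 participants per arm (an assumption for
# illustration), at a conventional alpha of 0.05.
for effect_size in (0.5, 1.0, 2.0):  # standardized mean differences (Cohen's d)
    power = analysis.power(effect_size=effect_size, nobs1=8, alpha=0.05)
    print(f"d = {effect_size}: power = {power:.2f}")
```

Running this shows power climbing from well under 50% at a moderate effect to near certainty at a very large one, which is exactly the circumstance in which a small trial can be adequately powered.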
Dynamic Treatment Regimes: Statistical Methods for Precision Medicine provides a comprehensive introduction to statistical methodology for the evaluation and discovery of dynamic treatment regimes from data. Researchers and graduate students in statistics, data science, and related quantitative disciplines with a background in probability, statistical inference, and popular statistical modeling techniques will be prepared for further study of this rapidly evolving field. A dynamic treatment regime is a set of sequential decision rules, each corresponding to a key decision point in a disease or disorder process; each rule takes patient information as input and returns the treatment option the patient should receive. A treatment regime thus formalizes how a clinician synthesizes patient information and selects treatments in practice. Treatment regimes are of obvious relevance to precision medicine, which involves tailoring treatment selection to patient characteristics in an evidence-based way. Of critical importance to precision medicine is estimation of an optimal treatment regime: one that, if used to select treatments for the patient population, would lead to the most beneficial outcome on average. Key methods for estimation of an optimal treatment regime from data are motivated and described in detail. A dedicated companion website presents full accounts of application of the methods using a comprehensive R package developed by the authors. The authors’ website www.dtr-book.com includes updates, corrections, new papers, and links to useful websites.
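To make the idea concrete, here is a minimal sketch of one-stage Q-learning, one common approach in this family of methods. The simulated data, model, and variable names are illustrative assumptions; the book's companion software is an R package, and this Python analogue is not it.

```python
# One-stage Q-learning: fit a model for E[Y | X, A] with a
# treatment-by-covariate interaction, then choose, for each patient,
# the treatment with the larger predicted outcome.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)              # a single patient covariate
a = rng.integers(0, 2, size=n)      # randomized treatment, 0 or 1
# Simulated outcome: treatment helps when x > 0, harms when x < 0.
y = 1.0 + 0.5 * x + a * (2.0 * x) + rng.normal(size=n)

# Q-function model with main effects and an interaction term.
design = np.column_stack([x, a, a * x])
q_model = LinearRegression().fit(design, y)

def decision_rule(x_new: float) -> int:
    """Return the treatment whose predicted outcome is larger."""
    q0 = q_model.predict([[x_new, 0, 0.0]])[0]
    q1 = q_model.predict([[x_new, 1, x_new]])[0]
    return int(q1 > q0)

print(decision_rule(-1.0), decision_rule(1.0))  # typically 0 and 1
```

The estimated rule recovers the tailoring built into the simulation: treat when the covariate is positive, withhold when it is negative, which is the sense in which a regime maps patient information to a treatment choice.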
Randomized clinical trials are the primary tool for evaluating new medical interventions. Randomization provides a fair comparison between treatment and control groups, balancing out, on average, the distributions of known and unknown factors among the participants. Unfortunately, these studies often suffer from a substantial amount of missing data, which reduces the benefit provided by randomization and introduces potential biases into the comparison of the treatment groups. Missing data can arise for a variety of reasons, including the inability or unwillingness of participants to attend scheduled evaluations; in some studies, some or all data collection ceases when participants discontinue study treatment. Existing guidelines for the design and conduct of clinical trials, and for the analysis of the resulting data, provide only limited advice on how to handle missing data, so analyses of data with an appreciable amount of missing values tend to be ad hoc and variable. The Prevention and Treatment of Missing Data in Clinical Trials concludes that a more principled approach to design and analysis in the presence of missing data is both needed and possible. Such an approach focuses on two critical elements: (1) careful design and conduct to limit the amount and impact of missing data and (2) analysis that makes full use of information on all randomized participants and is based on careful attention to the assumptions about the nature of the missing data that underlie estimates of treatment effects. In addition to its highest-priority recommendations, the book offers more detailed recommendations on the conduct of clinical trials and techniques for the analysis of trial data.
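As one concrete example of an analysis that uses information from all randomized participants, the sketch below performs a simple multiple imputation. It is illustrative only: the report surveys a range of principled methods rather than prescribing this one, and the simulated data, missingness rate, and number of imputations are assumptions.

```python
# Multiple imputation: draw several completed data sets, analyze each,
# and pool the results rather than discarding participants with
# missing outcomes.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
n = 200
baseline = rng.normal(size=n)
outcome = 0.8 * baseline + rng.normal(size=n)
outcome[rng.random(n) < 0.2] = np.nan   # 20% of outcomes missing

data = np.column_stack([baseline, outcome])

# Pool point estimates across imputations (Rubin's rules would also
# combine the within- and between-imputation variances; only the
# point estimates are shown here).
estimates = []
for seed in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    completed = imputer.fit_transform(data)
    estimates.append(completed[:, 1].mean())

print(f"pooled mean outcome: {np.mean(estimates):.3f}")
```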
Reliably optimizing a new treatment in humans is a critical first step in clinical evaluation, since choosing a suboptimal dose or schedule may lead to failure in later trials. At the same time, if promising preclinical results do not translate into a real treatment advance, it is important to determine this quickly and terminate the clinical evaluation process to avoid wasting resources. Bayesian Designs for Phase I–II Clinical Trials describes how phase I–II designs can serve as a bridge, or protective barrier, between preclinical studies and large confirmatory clinical trials. It illustrates many of the severe drawbacks of conventional methods used for early-phase clinical trials and presents numerous Bayesian designs for human clinical trials of new experimental treatment regimes. Written by research leaders from the University of Texas MD Anderson Cancer Center, this book shows how Bayesian designs for early-phase clinical trials can explore, refine, and optimize new experimental treatments. It emphasizes the importance of basing decisions on both efficacy and toxicity.
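The flavor of such designs can be conveyed with a toy example. Below is a minimal sketch assuming a Beta-Binomial model for the toxicity rate and a simple posterior-probability stopping rule; the prior, the 30% toxicity limit, and the 0.80 cutoff are illustrative assumptions, not design parameters from the book, which bases decisions jointly on efficacy and toxicity rather than on toxicity alone.

```python
# Bayesian safety monitoring: update a Beta posterior for the toxicity
# rate as patients accrue, and suspend accrual if the rate is probably
# above a prespecified limit.
from scipy.stats import beta

prior_a, prior_b = 0.5, 0.5        # weak Beta(0.5, 0.5) prior (assumed)
toxicities, patients = 4, 10       # outcomes observed so far (assumed)

posterior = beta(prior_a + toxicities, prior_b + patients - toxicities)

# Posterior probability that the true toxicity rate exceeds 30%.
p_too_toxic = posterior.sf(0.30)
print(f"P(toxicity rate > 0.30 | data) = {p_too_toxic:.2f}")
if p_too_toxic > 0.80:
    print("Stopping rule triggered: suspend accrual.")
```

The appeal of the Bayesian formulation is that this check can be rerun after every patient, so the design adapts continuously instead of waiting for a fixed interim analysis.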
It is not news that each of us grows old. What is relatively new, however, is that the average age of the American population is increasing. More and better information is required to assess, plan for, and meet the needs of a graying population. The Aging Population in the Twenty-First Century examines social, economic, and demographic changes among the aged, as well as many health-related topics: health promotion and disease prevention; quality of life; health care system financing and use; and the quality of care, especially long-term care. Recommendations for increasing and improving the data available, as well as for ensuring timely access to them, are also included.
Contents: Understanding risk -- Putting risk in perspective -- Risk charts: a way to get perspective -- Judging the benefit of a health intervention -- Not all benefits are equal: understand the outcome -- Consider the downsides -- Do the benefits outweigh the downsides? -- Beware of exaggerated importance -- Beware of exaggerated certainty -- Who's behind the numbers?
On topics from genetic engineering and mad cow disease to vaccination and climate change, this Handbook draws on the insights of 57 leading scholars of the science of science communication, who explore what social scientists know about how citizens come to understand and act on what is known by science.
This work provides a thought-provoking account of how medical treatments can be tested with unbiased or 'fair' trials and explains how patients can work with doctors to achieve this vital goal. It runs the gamut of therapies, from mastectomy to thalidomide, and explores a vast range of case studies.
Focusing on the statistical methods most frequently used in the health care literature and featuring numerous charts, graphs, and up-to-date examples from the literature, this text provides a thorough foundation for the statistics portion of nursing and all health care research courses. All Fifth Edition chapters include new examples and new computer printouts using the latest software, SPSS for Windows, Version 12. New material on regression diagnostics has been added.