Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources.
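To make the idea concrete, here is a minimal sketch (our own illustration, not code from any of the books below) of the core empirical likelihood computation for a mean: weights p_i maximize the product of n*p_i subject to summing to one and to a mean constraint, and the resulting log-likelihood ratio is found by a one-dimensional search for the Lagrange multiplier. Function and variable names are our own.

```python
import math

def el_log_ratio(x, mu, tol=1e-10):
    """Empirical log-likelihood ratio statistic -2*log R(mu) for the mean.

    Maximizes prod(n * p_i) subject to sum(p_i) = 1 and
    sum(p_i * (x_i - mu)) = 0.  The optimal weights are
    p_i = 1 / (n * (1 + lam * (x_i - mu))), where the Lagrange
    multiplier lam solves sum((x_i - mu) / (1 + lam*(x_i - mu))) = 0.
    """
    z = [xi - mu for xi in x]
    if min(z) >= 0 or max(z) <= 0:
        return float("inf")          # mu outside the convex hull of the data
    lo = -1.0 / max(z) + 1e-9        # keep 1 + lam*z_i > 0 for every i
    hi = -1.0 / min(z) - 1e-9

    def g(lam):
        return sum(zi / (1.0 + lam * zi) for zi in z)

    while hi - lo > tol:             # g is strictly decreasing; bisect for its root
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log1p(lam * zi) for zi in z)
```

Under standard conditions this statistic is asymptotically chi-square with one degree of freedom, which is what makes the data-driven confidence regions possible.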
Empirical Likelihood Methods in Biomedicine and Health provides a compendium of nonparametric likelihood statistical techniques from the perspective of health research applications. It includes detailed descriptions of the theoretical underpinnings of recently developed empirical likelihood-based methods. The emphasis throughout is on the application of the methods to the health sciences, with worked examples using real data. Provides a systematic overview of novel empirical likelihood techniques. Presents a good balance of theory, methods, and applications. Features detailed worked examples to illustrate the application of the methods. Includes R code for implementation. The material is accessible to scientists who are new to the research area, and it should also attract statisticians interested in learning more about advanced nonparametric topics, including various modern empirical likelihood methods. The book can be used by graduate students majoring in biostatistics or a related field, particularly those interested in nonparametric methods with direct applications in biomedicine.
Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right censored survival data. The author uses R for calculating empirical likelihood and includes many worked out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empirical likelihood for censored data. It also covers semi-parametric accelerated failure time models, the optimality of confidence regions derived from empirical likelihood or plug-in empirical likelihood ratio tests, and several empirical likelihood confidence band results. While survival analysis is a classic area of statistical study, the empirical likelihood methodology has only recently been developed. Until now, just one book was available on empirical likelihood and most statistical software did not include empirical likelihood procedures. Addressing this shortfall, this book provides the functions to calculate the empirical likelihood ratio in survival analysis as well as functions related to the empirical likelihood analysis of the Cox regression model and other hazard regression models.
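A natural entry point to the censored-data setting the book treats is the Kaplan-Meier product-limit estimator, which is the nonparametric maximum likelihood estimator of the survival function under right censoring and thus the anchor for empirical likelihood in survival analysis. The short Python sketch below is our own illustration (the book itself works in R); names are our own choosing.

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival curve.

    events[i] = 1 for an observed failure, 0 for a right-censored
    observation.  Returns a list of (time, S(t)) pairs at each
    distinct failure time.  The KM curve maximizes the nonparametric
    likelihood for right-censored data, which is why it underlies
    empirical likelihood methods in survival analysis.
    """
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, s = [], 1.0
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = at_t = 0
        while i < len(pairs) and pairs[i][0] == t:   # group ties at time t
            at_t += 1
            deaths += pairs[i][1]
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk            # product-limit update
            surv.append((t, s))
        n_at_risk -= at_t                            # remove failures and censorings
    return surv
```

Empirical likelihood ratio tests for censored data then compare this unconstrained maximizer with the maximizer under a hypothesized constraint on, say, the hazard or the mean.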
Bayesian and likelihood approaches to inference have a number of points of close contact, especially from an asymptotic point of view. Both emphasize the construction of interval estimates of unknown parameters. In this volume, researchers present recent work on several aspects of Bayesian, likelihood, and empirical Bayes methods, presented at a workshop held in Montreal, Canada. The goal of the workshop was to explore the linkages among the methods and to suggest new directions for research in the theory of inference.
"This is truly an outstanding book. [It] brings together all of the latest research in clinical trials methodology and how it can be applied to drug development.... Chang et al. provide applications to industry-supported trials. This will allow statisticians in the industry community to take these methods seriously." Jay Herson, Johns Hopkins University The pharmaceutical industry's approach to drug discovery and development has rapidly transformed in the last decade from the more traditional Research and Development (R & D) approach to a more innovative approach in which strategies are employed to compress and optimize the clinical development plan and associated timelines. However, these strategies are generally being considered on an individual trial basis and not as part of a fully integrated overall development program. Such optimization at the trial level is somewhat near-sighted and does not ensure cost, time, or development efficiency of the overall program. This book seeks to address this imbalance by establishing a statistical framework for overall/global clinical development optimization and providing tactics and techniques to support such optimization, including clinical trial simulations. Provides a statistical framework for achieving global optimization in each phase of the drug development process. Describes specific techniques to support optimization, including adaptive designs, precision medicine, survival endpoints, dose finding, and multiple testing. Gives practical approaches to handling missing data in clinical trials using SAS. Looks at key controversial issues from both a clinical and statistical perspective. Presents a generous number of case studies from multiple therapeutic areas that help motivate and illustrate the statistical methods introduced in the book. Puts great emphasis on software implementation of the statistical methods with multiple examples of software code (both SAS and R).
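Clinical trial simulation of the kind the book advocates often starts with something as simple as Monte Carlo power estimation. The sketch below is a toy illustration under our own assumptions (a two-arm trial with a binary endpoint, tested by a pooled two-proportion z-test at the two-sided 5% level); it is not code from the book, which uses SAS and R.

```python
import random

def simulate_power(p_control, p_treat, n_per_arm, n_sims=2000, seed=1):
    """Monte Carlo power for a two-arm trial with a binary endpoint.

    Each simulated trial draws responses in both arms and applies a
    pooled two-proportion z-test (normal approximation, two-sided 5%).
    Illustrative only; a real program would use exact or adjusted tests.
    """
    random.seed(seed)
    z_crit = 1.959964                      # two-sided 5% critical value
    rejections = 0
    for _ in range(n_sims):
        x = sum(random.random() < p_treat for _ in range(n_per_arm))
        y = sum(random.random() < p_control for _ in range(n_per_arm))
        p_pool = (x + y) / (2 * n_per_arm)
        se = (2 * p_pool * (1 - p_pool) / n_per_arm) ** 0.5
        if se > 0 and abs(x - y) / n_per_arm / se > z_crit:
            rejections += 1
    return rejections / n_sims
```

Wrapping such a simulator in an optimizer over sample sizes, interim looks, or go/no-go rules is the step from trial-level to program-level planning that the book formalizes.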
It is important for statisticians to possess a deep knowledge of the drug development process beyond statistical considerations. For these reasons, this book incorporates both statistical and "clinical/medical" perspectives.
Kosorok’s brilliant text provides a self-contained introduction to empirical processes and semiparametric inference. These powerful research techniques are surprisingly useful for developing methods of statistical inference for complex models and for understanding the properties of such methods. This is an authoritative text that covers all the bases, and also a friendly and gradual introduction to the area. The book can be used as a research reference and as a textbook.
An up-to-date, comprehensive account of major issues in finite mixture modeling. This volume provides an up-to-date account of the theory and applications of modeling via finite mixture distributions. With an emphasis on the applications of mixture models in both mainstream analysis and other areas such as unsupervised pattern recognition, speech recognition, and medical imaging, the book describes the formulations of the finite mixture approach, details its methodology, discusses aspects of its implementation, and illustrates its application in many common statistical contexts. Major issues discussed in this book include identifiability problems, actual fitting of finite mixtures through use of the EM algorithm, properties of the maximum likelihood estimators so obtained, assessment of the number of components to be used in the mixture, and the applicability of asymptotic theory in providing a basis for the solutions to some of these problems. The author also considers how the EM algorithm can be scaled to handle the fitting of mixture models to very large databases, as in data mining applications. This comprehensive, practical guide: * Provides more than 800 references, 40% published since 1995 * Includes an appendix listing available mixture software * Links statistical literature with machine learning and pattern recognition literature * Contains more than 100 helpful graphs, charts, and tables. Finite Mixture Models is an important resource for both applied and theoretical statisticians as well as for researchers in the many areas in which finite mixture models can be used to analyze data.
This book constitutes the proceedings of the 5th International Conference on Geometric Science of Information, GSI 2021, held in Paris, France, in July 2021. The 98 papers presented in this volume were carefully reviewed and selected from 125 submissions. They cover all the main topics and highlights in the domain of geometric science of information, including information geometry, manifolds of structured data/information, and their advanced applications. The papers are organized in the following topics: probability and statistics on Riemannian manifolds; sub-Riemannian geometry and neuromathematics; shape spaces; geometry of quantum states; geometric and structure preserving discretizations; information geometry in physics; Lie group machine learning; geometric and symplectic methods for hydrodynamical models; harmonic analysis on Lie groups; statistical manifold and Hessian information geometry; geometric mechanics; deformed entropy, cross-entropy, and relative entropy; transformation information geometry; statistics, information and topology; geometric deep learning; topological and geometrical structures in neurosciences; computational information geometry; manifold and optimization; divergence statistics; optimal transport and learning; and geometric structures in thermodynamics and statistical physics.
Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates to complex studies that require generalized linear or semiparametric modelling. The emphasis is that the likelihood is not simply a device to produce an estimate, but an important tool for modelling. The book generally takes an informal approach, where most important results are established using heuristic arguments and motivated with realistic examples. With the currently available computing power, examples are not contrived to allow a closed analytical solution, and the book can concentrate on the statistical aspects of the data modelling. In addition to classical likelihood theory, the book covers many modern topics such as generalized linear models and mixed models, nonparametric smoothing, robustness, the EM algorithm, and empirical likelihood.
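The book's opening example, comparing two accident rates, can be sketched as a Poisson likelihood ratio test. The function below is our own minimal illustration of that classical computation, not code from the book; signatures and names are our own.

```python
import math

def poisson_rate_lrt(x, t1, y, t2):
    """Likelihood ratio test that two Poisson rates are equal.

    x accidents in exposure t1 versus y accidents in exposure t2.
    Returns the deviance 2*log LR, which is approximately
    chi-square with 1 degree of freedom under the null.
    """
    r1, r2 = x / t1, y / t2          # unrestricted MLE rates
    r0 = (x + y) / (t1 + t2)         # pooled rate under H0: r1 == r2
    dev = 0.0
    if x:
        dev += x * math.log(r1 / r0)
    if y:
        dev += y * math.log(r2 / r0)
    return 2.0 * dev
```

The same likelihood machinery, applied with progressively richer models, carries the reader from this two-rate comparison to the generalized linear and semiparametric settings covered later in the book.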