This book provides an overview of recent work on developing a theory of statistical inference based on measuring statistical evidence. It attempts to establish a gold standard for how a statistical analysis should proceed. The book illustrates relative belief theory using many examples and describes the strengths and weaknesses of the theory. The author also addresses fundamental statistical issues, including the meaning of probability, the role of subjectivity, the meaning of objectivity, and the role of infinity and continuity.
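For context, the central quantity in this theory is the relative belief ratio, which measures evidence by comparing posterior to prior belief. The formula below is the standard definition from this literature, not a quotation from the book:

```latex
% Relative belief ratio for a parameter value \theta given data x:
% RB > 1 is evidence in favor of \theta, RB < 1 is evidence against it.
RB(\theta \mid x) = \frac{\pi(\theta \mid x)}{\pi(\theta)}
```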
Large sample techniques are fundamental to all fields of statistics. Mixed effects models, including linear mixed models, generalized linear mixed models, nonlinear mixed effects models, and nonparametric mixed effects models, are complex, yet they are used extensively in practice. This monograph provides a comprehensive account of the asymptotic analysis of mixed effects models. The monograph is suitable for researchers and graduate students who wish to learn about asymptotic tools and research problems in mixed effects models. It may also be used as a reference book for a graduate-level course on mixed effects models or on asymptotic analysis.
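To fix ideas, a linear mixed model of the kind the monograph analyzes can be fitted in R with the lme4 package. The sketch below uses lme4's bundled sleepstudy data and standard lmer syntax; it is purely illustrative and is not code from the monograph.

```r
# Illustrative only: a linear mixed model with correlated random intercepts
# and slopes, fitted by REML using lme4 (not code from the monograph).
library(lme4)

# sleepstudy ships with lme4: reaction times over days of sleep deprivation.
fm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

summary(fm)   # fixed effects and variance components
fixef(fm)     # estimated fixed effects
VarCorr(fm)   # estimated random-effects covariance structure
```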
Hidden Markov Models for Time Series: An Introduction Using R, Second Edition illustrates the great flexibility of hidden Markov models (HMMs) as general-purpose models for time series data. The book provides a broad understanding of the models and their uses. After presenting the basic model formulation, the book covers estimation, forecasting, decoding, prediction, model selection, and Bayesian inference for HMMs. Through examples and applications, the authors describe how to extend and generalize the basic model so that it can be applied in a rich variety of situations. The book demonstrates how HMMs can be applied to a wide range of types of time series: continuous-valued, circular, multivariate, binary, bounded and unbounded counts, and categorical observations. It also discusses how to employ the freely available computing environment R to carry out the computations.

Features:
- Presents an accessible overview of HMMs
- Explores a variety of applications in ecology, finance, epidemiology, climatology, and sociology
- Includes numerous theoretical and programming exercises
- Provides most of the analysed data sets online

New to the second edition:
- A total of five chapters on extensions, including HMMs for longitudinal data, hidden semi-Markov models, and models with a continuous-valued state process
- New case studies on animal movement, rainfall occurrence, and capture-recapture data
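To make the central computation concrete, here is a minimal, self-contained sketch of the scaled forward algorithm for the log-likelihood of a two-state Poisson HMM in R. The parameter values and data are illustrative assumptions; this is not code from the book or its website.

```r
# Minimal sketch: log-likelihood of a 2-state Poisson HMM via the scaled
# forward algorithm. All parameter values are illustrative.
pois_hmm_loglik <- function(x, Gamma, lambda, delta) {
  n   <- length(x)
  phi <- delta * dpois(x[1], lambda)          # forward probabilities, time 1
  ll  <- log(sum(phi)); phi <- phi / sum(phi) # scale to avoid underflow
  for (t in 2:n) {
    phi <- (phi %*% Gamma) * dpois(x[t], lambda)  # propagate, then weight
    ll  <- ll + log(sum(phi)); phi <- phi / sum(phi)
  }
  ll
}

set.seed(1)
x      <- rpois(100, lambda = 3)                          # toy count series
Gamma  <- matrix(c(0.9, 0.1, 0.2, 0.8), 2, byrow = TRUE)  # transition matrix
lambda <- c(2, 6)                                         # state-dependent means
delta  <- c(0.5, 0.5)                                     # initial distribution
pois_hmm_loglik(x, Gamma, lambda, delta)
```

In practice this log-likelihood would be maximized numerically over the parameters, which is the estimation approach the book develops in detail.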
This is the second edition of a monograph on generalized linear models with random effects that extends the classic work of McCullagh and Nelder. It has been thoroughly updated, with around 80 pages added, including new material on the extended likelihood approach that strengthens the theoretical basis of the methodology, new developments in variable selection and multiple testing, and new examples and applications. An R package supplementing the book implements all of its methods and examples.
The first part of the book gives a general introduction to key concepts in algebraic statistics, focusing on methods that are helpful in the study of models with hidden variables. The author uses tensor geometry as a natural language to deal with multivariate probability distributions, develops new combinatorial tools to study models with hidden data, and describes the semialgebraic structure of statistical models. The second part illustrates important examples of tree models with hidden variables. The book discusses the underlying models and related combinatorial concepts of phylogenetic trees as well as the local and global geometry of latent tree models. It also extends previous results to Gaussian latent tree models. This book shows you how both combinatorics and algebraic geometry enable a better understanding of latent tree models. It contains many results on the geometry of the models, including a detailed analysis of identifiability and the defining polynomial constraints.
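As a simple instance of the hidden-variable models studied in this area (a standard example from the algebraic statistics literature, not taken from the book), a latent class model expresses the joint probability tensor of the observed variables as a mixture of product distributions, which constrains its nonnegative rank:

```latex
% Latent class (naive Bayes) model: observed variables X_1,\dots,X_n and a
% hidden variable H with k states; the joint distribution is the tensor
P(x_1,\dots,x_n) \;=\; \sum_{h=1}^{k} \pi(h) \prod_{i=1}^{n} P(x_i \mid h),
% i.e., a probability tensor of nonnegative rank at most k.
```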
"Scientific discoveries often build on - and are inspired by - previous discoveries. If the scientific enterprise were a tower of blocks, each piece representing a scientific finding, scientific progress might entail making the tower bigger and better block by block, discovery by discovery. Rather than strong wooden blocks, imagine the blocks, or scientific findings, can take on shape based on scientific accuracy. The most accurate pieces are the strongest and sturdiest, while the least accurate are soft and pliable. Building a tower of the scientific enterprise with a large number of inaccurate blocks will cause the tower to start to wobble, lean over, and potentially collapse, as more and more blocks are placed upon weak and faulty pieces"--
This book presents a systematic and unified approach to the modern nonparametric treatment of missing and modified data, via examples of density and hazard rate estimation, nonparametric regression, filtering of signals, and time series analysis. All basic types of missingness at random and not at random, biasing, truncation, censoring, and measurement error are discussed, and their treatment is explained. The ten chapters of the book cover the basic cases of direct data, biased data, nondestructive and destructive missing data, survival data modified by truncation and censoring, missing survival data, stationary and nonstationary time series and processes, and ill-posed modifications. The coverage is suitable for self-study or a one-semester course for graduate students, with a standard course in introductory probability as a prerequisite. Exercises of various levels of difficulty will be helpful for both instructors and self-study. The book focuses primarily on practically important small samples. It explains when consistent estimation is possible, why in some cases missing data may be ignored, and why in others it must be taken into account. If missingness or data modification makes consistent estimation impossible, the author explains what type of action is needed to restore the lost information. The book contains more than a hundred figures with simulated data that illustrate virtually every setting, claim, and development. The companion R software package allows the reader to verify, reproduce, and modify every simulation and every estimator used. This makes the material fully transparent and allows one to study it interactively. Sam Efromovich is the Endowed Professor of Mathematical Sciences and the Head of the Actuarial Program at the University of Texas at Dallas. He is well known for his work on the theory and application of nonparametric curve estimation and is the author of Nonparametric Curve Estimation: Methods, Theory, and Applications. Professor Efromovich is a Fellow of the Institute of Mathematical Statistics and of the American Statistical Association.
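As a toy illustration of the book's simplest message, that under some missingness mechanisms the missing data may safely be ignored, the sketch below estimates a density from complete cases under data missing completely at random (MCAR), using only base R. It does not use the book's companion package, and all values are simulated assumptions.

```r
# Toy illustration: kernel density estimation with data missing completely
# at random (MCAR). Under MCAR the complete cases remain a representative
# sample, so a standard estimator applied to them is still consistent.
set.seed(3)
x_full <- rnorm(300, mean = 1, sd = 2)      # underlying full sample
miss   <- rbinom(300, 1, 0.3) == 1          # 30% of values missing at random
x_obs  <- x_full[!miss]                     # available (complete) cases

est <- density(x_obs)                       # standard kernel estimate
plot(est, main = "Complete-case density estimate under MCAR")
curve(dnorm(x, 1, 2), add = TRUE, lty = 2)  # true density for comparison
```

Under missingness that is not at random, the same complete-case estimator would be biased, which is the kind of situation where the book explains what extra information is needed to restore consistency.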
Longitudinal studies often present several problems that challenge standard statistical methods for data analysis. These problems include non-ignorable missing data in longitudinal measurements of one or more response variables, informative observation times of longitudinal data, and survival analysis with intermittently measured time-dependent covariates that are subject to measurement error and/or substantial biological variation. Joint modeling of longitudinal and time-to-event data has emerged as a novel approach to handle these issues. Joint Modeling of Longitudinal and Time-to-Event Data provides a systematic introduction and review of state-of-the-art statistical methodology in this active research field. The methods are illustrated by real data examples from a wide range of clinical research topics. A collection of data sets and software for practical implementation of the joint modeling methodologies is available through the book website. The book serves as a reference for scientific investigators who need to analyze longitudinal and/or survival data, as well as for researchers developing methodology in this field. It may also be used as a textbook for a graduate-level course in biostatistics or statistics.
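For readers who want to see what fitting a joint model looks like in code, the sketch below uses the JM package for R, one of several publicly available implementations (not necessarily the software distributed on the book's website). It follows the usage pattern documented for JM's bundled aids data, combining a longitudinal lme fit with a coxph survival fit.

```r
# Illustrative joint model fit with the JM package (one possible tool;
# not necessarily the software from the book's website).
library(JM)   # also loads nlme and survival

# Longitudinal submodel: CD4 trajectories with random intercepts and slopes.
lmeFit <- lme(CD4 ~ obstime, random = ~ obstime | patient, data = aids)

# Survival submodel: coxph needs x = TRUE so JM can reuse the design matrix.
survFit <- coxph(Surv(Time, death) ~ drug, data = aids.id, x = TRUE)

# Joint model linking the event risk to the current value of CD4.
jointFit <- jointModel(lmeFit, survFit, timeVar = "obstime")
summary(jointFit)
```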
The state-space approach provides a formal framework in which any result or procedure developed for the basic model can be seamlessly applied to any standard formulation written in state-space form. Moreover, it can accommodate, with reasonable effort, nonstandard situations such as observation errors, aggregation constraints, or missing in-sample values. Exploring the advantages of this approach, State-Space Methods for Time Series Analysis: Theory, Applications and Software presents many computational procedures that can be applied to a previously specified linear model in state-space form. After discussing the formulation of the state-space model, the book illustrates the flexibility of the state-space representation and covers the main state estimation algorithms: filtering and smoothing. It then shows how to compute the Gaussian likelihood for unknown coefficients in the state-space matrices of a given model before introducing subspace methods and their applications. It also discusses signal extraction, describes two algorithms for obtaining the VARMAX matrices corresponding to any linear state-space model, and addresses several issues relating to the aggregation and disaggregation of time series. The book concludes with a cross-sectional extension of the classical state-space formulation to accommodate longitudinal or panel data. Missing data is a common occurrence in this setting, and the book explains the imputation procedures needed to treat missingness in both exogenous and endogenous variables.

Web Resource: The authors’ E4 MATLAB® toolbox offers all the computational procedures, administrative and analytical functions, and related materials for time series analysis. This flexible, powerful, and free software tool enables readers to replicate the practical examples in the text and apply the procedures to their own work.
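As a minimal illustration of the filtering algorithms such books cover, here is a Kalman filter for the scalar local level (random walk plus noise) model, written in R for brevity rather than with the E4 MATLAB toolbox; all parameter values are assumptions.

```r
# Minimal sketch: Kalman filter for the local level model
#   y[t]   = a[t] + eps[t],  eps ~ N(0, H)   (observation equation)
#   a[t+1] = a[t] + eta[t],  eta ~ N(0, Q)   (state equation)
# Parameter values are illustrative; this is not E4 toolbox code.
kalman_local_level <- function(y, H, Q, a1 = 0, P1 = 1e7) {
  n <- length(y)
  a <- numeric(n + 1); P <- numeric(n + 1)   # predicted state mean/variance
  a[1] <- a1; P[1] <- P1                     # diffuse-like initialization
  for (t in 1:n) {
    v  <- y[t] - a[t]              # one-step-ahead prediction error
    Fv <- P[t] + H                 # prediction error variance
    K  <- P[t] / Fv                # Kalman gain
    a[t + 1] <- a[t] + K * v       # prediction for the next period
    P[t + 1] <- P[t] * (1 - K) + Q # its variance
  }
  list(a = a[-1], P = P[-1])
}

set.seed(2)
y   <- cumsum(rnorm(50, sd = 0.5)) + rnorm(50)  # toy random-walk-plus-noise data
out <- kalman_local_level(y, H = 1, Q = 0.25)
head(out$a)
```

The same recursion, run through the data, also yields the prediction errors and variances needed to evaluate the Gaussian likelihood mentioned in the blurb.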