The primary aim of this book is to provide modern statistical techniques and theory for stochastic processes. The stochastic processes considered here are not restricted to the usual AR, MA, and ARMA processes. A wide variety of stochastic processes, including non-Gaussian linear processes, long-memory processes, nonlinear processes, non-ergodic processes, and diffusion processes, are described. The authors discuss estimation and testing theory, along with many other relevant statistical methods and techniques.
A comprehensive and timely edition on an emerging new trend in time series.

Linear Models and Time-Series Analysis: Regression, ANOVA, ARMA and GARCH sets a strong foundation, in terms of distribution theory, for the linear model (regression and ANOVA), univariate time series analysis (ARMAX and GARCH), and some multivariate models associated primarily with modeling financial asset returns (copula-based structures and the discrete mixed normal and Laplace). It builds on the author's previous book, Fundamental Statistical Inference: A Computational Approach, which introduced the major concepts of statistical inference. Attention is explicitly paid to application and numeric computation, with examples of Matlab code throughout. The code offers a framework for discussion and illustration of numerics, and shows the mapping from theory to computation. The topic of time series analysis is on firm footing, with numerous textbooks and research journals dedicated to it. With respect to the subject/technology, many chapters in Linear Models and Time-Series Analysis cover firmly entrenched topics (regression and ARMA). Several others are dedicated to very modern methods, as used in empirical finance, asset pricing, risk management, and portfolio optimization, in order to address the severe change in performance of many pension funds and changes in how fund managers work.

* Covers traditional time series analysis with new guidelines
* Provides access to cutting-edge topics at the forefront of financial econometrics and industry
* Includes the latest developments and topics, such as financial returns data, notably also in a multivariate context
* Written by a leading expert in time series analysis
* Extensively classroom tested
* Includes a tutorial on SAS
* Supplemented with a companion website containing numerous Matlab programs
* Solutions to most exercises are provided in the book

Linear Models and Time-Series Analysis: Regression, ANOVA, ARMA and GARCH is suitable for advanced master's students in statistics and quantitative finance, as well as doctoral students in economics and finance. It is also useful for quantitative financial practitioners in large financial institutions and smaller finance outlets.
This book compiles theoretical developments on statistical inference for time series and related models in honor of Masanobu Taniguchi's 70th birthday. It covers models such as long-range dependence models, nonlinear conditionally heteroscedastic time series, locally stationary processes, integer-valued time series, Lévy processes, complex-valued time series, categorical time series, exclusive topic models, and copula models. Many cutting-edge methods, such as empirical likelihood methods, quantile regression, portmanteau tests, rank-based inference, change-point detection, goodness-of-fit testing, higher-order asymptotic expansion, minimum contrast estimation, optimal transportation, and topological methods, are proposed, considered, or applied to complex data based on statistical inference for stochastic processes. The performance of these methods is illustrated by a variety of data analyses. This collection of original papers provides the reader with comprehensive and state-of-the-art theoretical works on time series and related models. It contains deep and profound treatments of the asymptotic theory of statistical inference. In addition, many specialized methodologies based on the asymptotic theory are presented in a simple way for a wide variety of statistical models. This Festschrift finds its core audience in statistics, signal processing, and econometrics.
This important book describes procedures for selecting a model from a large set of competing statistical models. It includes model selection techniques for univariate and multivariate regression models, univariate and multivariate autoregressive models, nonparametric (including wavelets) and semiparametric regression models, and quasi-likelihood and robust regression models. Information-based model selection criteria are discussed, and small sample and asymptotic properties are presented. The book also provides examples and large scale simulation studies comparing the performances of information-based model selection criteria, bootstrapping, and cross-validation selection methods over a wide range of models.
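By way of illustration only (this example is not taken from the book), the short Python sketch below performs the kind of information-based model selection described above for a univariate autoregressive model, comparing AIC and BIC across candidate lag orders on simulated data. The use of statsmodels' AutoReg and the simulated AR(2) process are assumptions of the sketch, not material from the text.

```python
# Minimal sketch: information-criterion order selection for an AR(p) model,
# assuming simulated data and the statsmodels AutoReg estimator.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)

# Simulate an AR(2) process: y_t = 0.6 y_{t-1} - 0.3 y_{t-2} + e_t
n = 500
y = np.zeros(n)
e = rng.normal(size=n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + e[t]

# Fit AR(p) for a range of candidate orders and record AIC/BIC.
results = {}
for p in range(1, 7):
    fit = AutoReg(y, lags=p).fit()
    results[p] = (fit.aic, fit.bic)

best_aic = min(results, key=lambda p: results[p][0])
best_bic = min(results, key=lambda p: results[p][1])
print("order chosen by AIC:", best_aic)
print("order chosen by BIC:", best_bic)
```

With enough data both criteria typically recover the true order here; BIC penalizes extra lags more heavily and so tends to choose the more parsimonious model when they disagree.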
'The editors of the new SAGE Handbook of Regression Analysis and Causal Inference have assembled a wide-ranging, high-quality, and timely collection of articles on topics of central importance to quantitative social research, many written by leaders in the field. Everyone engaged in statistical analysis of social-science data will find something of interest in this book.' - John Fox, Professor, Department of Sociology, McMaster University

'The authors do a great job in explaining the various statistical methods in a clear and simple way - focussing on fundamental understanding, interpretation of results, and practical application - yet being precise in their exposition.' - Ben Jann, Executive Director, Institute of Sociology, University of Bern

'Best and Wolf have put together a powerful collection, especially valuable in its separate discussions of uses for both cross-sectional and panel data analysis.' - Tom Smith, Senior Fellow, NORC, University of Chicago

Edited and written by a team of leading international social scientists, this Handbook provides a comprehensive introduction to multivariate methods. The Handbook focuses on regression analysis of cross-sectional and longitudinal data with an emphasis on causal analysis, thereby covering a large number of different techniques including selection models, complex samples, and regression discontinuities. Each Part starts with a non-mathematical introduction to the method covered in that section, giving readers a basic knowledge of the method's logic, scope and unique features. Next, the mathematical and statistical basis of each method is presented along with advanced aspects. Using real-world data from the European Social Survey (ESS) and the Socio-Economic Panel (GSOEP), the book provides a comprehensive discussion of each method's application, making this an ideal text for PhD students and researchers embarking on their own data analysis.
R is a language and environment for data analysis and graphics. It may be considered an implementation of S, an award-winning language initially developed at Bell Laboratories since the late 1970s. The R project was initiated by Robert Gentleman and Ross Ihaka at the University of Auckland, New Zealand, in the early 1990s, and has been developed by an international team since mid-1997. Historically, econometricians have favored other computing environments, some of which have fallen by the wayside, and also a variety of packages with canned routines. We believe that R has great potential in econometrics, both for research and for teaching. There are at least three reasons for this: (1) R is mostly platform independent and runs on Microsoft Windows, the Mac family of operating systems, and various flavors of Unix/Linux, and also on some more exotic platforms. (2) R is free software that can be downloaded and installed at no cost from a family of mirror sites around the globe, the Comprehensive R Archive Network (CRAN); hence students can easily install it on their own machines. (3) R is open-source software, so that the full source code is available and can be inspected to understand what it really does, learn from it, and modify and extend it. We also like to think that platform independence and the open-source philosophy make R an ideal environment for reproducible econometric research.
This concise, yet thorough, book is enhanced with simulations and graphs to build the intuition of readers.

Models for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping. Ideal as a textbook for a two-semester sequence on probability and statistical inference, its early chapters provide coverage of probability and include discussions of: discrete models and random variables; discrete distributions including binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses modes of convergence of sequences of random variables, with special attention to convergence in distribution. The second half of the book addresses statistical inference, beginning with a discussion of point estimation and followed by coverage of consistency and confidence intervals. Further areas of exploration include: distributions defined in terms of the multivariate normal, chi-square, t, and F (central and non-central); the one- and two-sample Wilcoxon tests, together with methods of estimation based on both; linear models with a linear space-projection approach; and logistic regression. Each section contains a set of problems ranging in difficulty from simple to more complex, and selected answers as well as proofs to almost all statements are provided. An abundance of figures, along with helpful simulations and graphs produced by the statistical package S-Plus®, is included to help build the intuition of readers.
A thorough review of the most current regression methods in time series analysis.

Regression methods have been an integral part of time series analysis for over a century. Recently, new developments have made major strides in such areas as non-continuous data, where a linear model is not appropriate. This book introduces the reader to newer developments and more diverse regression models and methods for time series analysis. Accessible to anyone who is familiar with the basic modern concepts of statistical inference, Regression Models for Time Series Analysis provides a much-needed examination of recent statistical developments. Primary among them is the important class of models known as generalized linear models (GLM), which provides, under some conditions, a unified regression theory suitable for continuous, categorical, and count data. The authors extend GLM methodology systematically to time series where the primary and covariate data are both random and stochastically dependent. They introduce readers to various regression models developed during the last thirty years or so and summarize classical and more recent results concerning state space models. To conclude, they present a Bayesian approach to prediction and interpolation in spatial data adapted to time series that may be short and/or observed irregularly. Real data applications and further results are presented throughout by means of chapter problems and complements. Notably, the book covers:

* Important recent developments in Kalman filtering, dynamic GLMs, and state-space modeling
* Associated computational issues such as Markov chain Monte Carlo and the EM algorithm
* Prediction and interpolation
* Stationary processes
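As a rough illustration of the kind of GLM-based regression for time series sketched above (not an example from the book), the following Python snippet fits a Poisson log-linear model to a simulated count series, using a transformed lagged count and an exogenous covariate as regressors. The statsmodels GLM interface and the simulated data-generating process are assumptions of this sketch, not part of the original description.

```python
# Minimal sketch: Poisson GLM for a count time series with a lagged-count
# regressor, assuming simulated data and the statsmodels GLM estimator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulate a covariate and a count series whose conditional mean depends
# on its own past through log(1 + y_{t-1}) and on the current covariate.
n = 400
x = rng.normal(size=n)
y = np.zeros(n, dtype=int)
for t in range(1, n):
    lam = np.exp(0.3 + 0.4 * np.log1p(y[t - 1]) + 0.5 * x[t])
    y[t] = rng.poisson(lam)

# Regression design: intercept, log(1 + lagged count), current covariate.
X = sm.add_constant(np.column_stack([np.log1p(y[:-1]), x[1:]]))
model = sm.GLM(y[1:], X, family=sm.families.Poisson())
fit = model.fit()
print(fit.params)
```

Conditioning on the past observations in this way is what lets a standard GLM fit serve as a regression model for stochastically dependent count data.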