This important collection of essays is a synthesis of foundational studies in Bayesian decision theory and statistics. An overarching topic of the collection is understanding how the norms for Bayesian decision making should apply in settings with more than one rational decision maker, and then tracing out some of the consequences of this turn for Bayesian statistics. There are four principal themes to the collection: cooperative, non-sequential decisions; the representation and measurement of 'partially ordered' preferences; non-cooperative, sequential decisions; and pooling rules and Bayesian dynamics for sets of probabilities. The volume will be particularly valuable to philosophers concerned with decision theory, probability, and statistics, as well as to statisticians, mathematicians, and economists.
Statistical Rethinking: A Bayesian Course with Examples in R and Stan builds readers’ knowledge of and confidence in statistical modeling. Reflecting the need for even minor programming in today’s model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. It covers everything from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling. The book is accompanied by a web resource: an R package (rethinking), available on the author’s website and GitHub, whose two core functions (map and map2stan) allow a variety of statistical models to be constructed from standard model formulas.
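To make the "step-by-step calculations" concrete: the book itself works in R and Stan, but the minimal Python sketch below (not taken from the book; the toy data, grid size, and interval width are arbitrary choices) shows the kind of by-hand Bayesian computation the description alludes to, namely grid approximation of the posterior for a binomial proportion.

```python
import numpy as np

# Illustrative sketch only (the book uses R/Stan): grid approximation of the
# posterior for a binomial proportion p, the kind of step-by-step calculation
# that a model-fitting function normally hides.

successes, trials = 6, 9                       # made-up toy data
p_grid = np.linspace(0, 1, 1000)               # candidate values of p
prior = np.ones_like(p_grid)                   # flat prior
likelihood = p_grid**successes * (1 - p_grid)**(trials - successes)
unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()  # normalize so the grid sums to 1

# Draw samples from the posterior to summarize it
samples = np.random.choice(p_grid, size=10_000, replace=True, p=posterior)
print("posterior mean:", samples.mean())
print("central 89% interval:", np.percentile(samples, [5.5, 94.5]))
```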
A classic analysis of the foundations of statistics and the development of personal probability, one of the greatest controversies in modern statistical thought. Revised edition. A background in calculus, probability, statistics, and Boolean algebra is recommended.
If you know how to program, you have the skills to turn data into knowledge using the tools of probability and statistics. This concise introduction shows you how to perform statistical analysis computationally, rather than mathematically, with programs written in Python. You'll work with a case study throughout the book to help you learn the entire data analysis process—from collecting data and generating statistics to identifying patterns and testing hypotheses. Along the way, you'll become familiar with distributions, the rules of probability, visualization, and many other tools and concepts. The book shows you how to develop your understanding of probability and statistics by writing and testing code; run experiments to test statistical behavior, such as generating samples from several distributions; use simulations to understand concepts that are hard to grasp mathematically; learn topics not usually covered in an introductory course, such as Bayesian estimation; import data from almost any source using Python, rather than being limited to data that has been cleaned and formatted for statistics tools; and use statistical inference to answer questions about real-world data.
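As a flavor of the computational style described, here is a minimal, self-contained Python sketch (illustrative only, not code from the book; the distributions, parameters, and the probability being estimated are arbitrary choices): draw samples from two distributions, compare summary statistics, and use simulation to estimate a probability that would be awkward to derive by hand.

```python
import random
import statistics

# Illustrative experiment in the spirit of "statistics by computation"
# (not material from the book; all numbers below are made up).
random.seed(1)

normal_sample = [random.gauss(mu=70, sigma=4) for _ in range(1000)]
expo_sample = [random.expovariate(1 / 70) for _ in range(1000)]

print("normal mean/stdev:", statistics.mean(normal_sample), statistics.stdev(normal_sample))
print("exponential mean/stdev:", statistics.mean(expo_sample), statistics.stdev(expo_sample))

# Simulation: estimate P(the largest of 3 exponential draws exceeds 150)
trials = 100_000
hits = sum(max(random.expovariate(1 / 70) for _ in range(3)) > 150 for _ in range(trials))
print("estimated probability:", hits / trials)
```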
Mounting failures of replication in social and biological sciences give a new urgency to critically appraising proposed reforms. This book pulls back the cover on disagreements between experts charged with restoring integrity to science. It denies two pervasive views of the role of probability in inference: to assign degrees of belief, and to control error rates in the long run. If statistical consumers are unaware of assumptions behind rival evidence reforms, they can't scrutinize the consequences that affect them (in personalized medicine, psychology, etc.). The book sets sail with a simple tool: if little has been done to rule out flaws in inferring a claim, then it has not passed a severe test. Many methods advocated by data experts do not stand up to severe scrutiny and are in tension with successful strategies for blocking or accounting for cherry picking and selective reporting. Through a series of excursions and exhibits, the philosophy and history of inductive inference come alive. Philosophical tools are put to work to solve problems about science and pseudoscience, induction and falsification.
This book is a fresh approach to a calculus-based first course in probability and statistics, using R throughout to give a central role to data and simulation. The book introduces probability with Monte Carlo simulation as an essential tool. Simulation makes challenging probability questions quickly accessible and easily understandable. Mathematical approaches are included, using calculus when appropriate, but are always connected to experimental computations. Using R and simulation gives a nuanced understanding of statistical inference. The impact of departure from assumptions in statistical tests is emphasized, quantified using simulations, and demonstrated with real data. The book compares parametric and non-parametric methods through simulation, allowing for a thorough investigation of testing error and power. The text builds R skills from the outset, allowing modern methods of resampling and cross validation to be introduced along with traditional statistical techniques. Fifty-two data sets are included in the complementary R package fosdata. Most of these data sets are from recently published papers, so that you are working with current, real data, which is often large and messy. Two central chapters use powerful tidyverse tools (dplyr, ggplot2, tidyr, stringr) to wrangle data and produce meaningful visualizations. Preliminary versions of the book have been used for five semesters at Saint Louis University, and the majority of the more than 400 exercises have been classroom-tested.
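The book's simulations are written in R; the Python sketch below is a loose analogue of the general idea rather than material from the book (the sample size, replication count, and the choice of an exponential model are arbitrary). It shows how simulation can quantify the impact of a violated assumption by estimating the type I error rate of a one-sample t-test applied to skewed, non-normal data.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: how badly does a t-test's nominal 5% error rate hold up
# when the normality assumption is violated? (Numbers below are arbitrary.)
rng = np.random.default_rng(0)
n, reps, alpha = 15, 10_000, 0.05
true_mean = 1.0

rejections = 0
for _ in range(reps):
    x = rng.exponential(scale=true_mean, size=n)         # null hypothesis is actually true
    _, p_value = stats.ttest_1samp(x, popmean=true_mean)
    rejections += p_value < alpha

print("estimated type I error:", rejections / reps)      # compare with the nominal 0.05
```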
The 2008 financial crisis, the rise of Trumpism, and the other populist movements that have followed in their wake have grown out of the frustrations of those hurt by the economic policies advocated by conventional economists for generations. Despite this, textbooks continue to praise conventional policies such as deregulation and hyperglobalization. This textbook demonstrates how misleading it can be to apply oversimplified models of perfect competition to the real world. The math works well on college blackboards but not so well on the Main Streets of America. This volume explores the realities of oligopolies, the real impact of the minimum wage, the double-edged sword of free trade, and other ways in which powerful institutions cause distortions in the mainstream models. Bringing together the work of key scholars, such as Kahneman, Minsky, and Schumpeter, this book demonstrates how we should take into account the inefficiencies that arise due to asymmetric information, mental biases, unequal distribution of wealth and power, and the manipulation of demand. This textbook offers students a valuable introductory text with insights into the workings of real markets, not just imaginary ones formulated by blackboard economists. A must-have for students studying the principles of economics as well as micro- and macroeconomics, this textbook redresses the existing imbalance in economic teaching. Instead of clinging to an ideology that only enriched the 1%, Komlos sketches the outline of a capitalism with a human face, an economy in which people live contented lives with dignity, rather than one focused on GNP.
Taken literally, the title "All of Statistics" is an exaggeration. But in spirit, the title is apt, as the book does cover a much broader range of topics than a typical introductory book on mathematical statistics. This book is for people who want to learn probability and statistics quickly. It is suitable for graduate or advanced undergraduate students in computer science, mathematics, statistics, and related disciplines. The book includes modern topics like non-parametric curve estimation, bootstrapping, and classification, topics that are usually relegated to follow-up courses. The reader is presumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. Statistics, data mining, and machine learning are all concerned with collecting and analysing data.
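As a taste of one of the "modern topics" mentioned, here is a brief, hedged Python sketch of the bootstrap (illustrative only, not from the book; the data are simulated and the resample count is arbitrary), estimating the standard error of a sample median by resampling with replacement.

```python
import numpy as np

# Illustrative bootstrap sketch (not code from the book):
# estimate the standard error of the sample median by resampling the data.
rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=1.0, size=50)   # made-up skewed sample

boot_medians = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(5000)
])
print("sample median:", np.median(data))
print("bootstrap estimate of its standard error:", boot_medians.std(ddof=1))
```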
The principles of freedom of expression have been developed over centuries. How are they preserved and passed on? How can large internet gatekeepers be required to respect freedom of expression and to contribute actively to a diverse and plural marketplace of ideas? These are key issues for media regulation, and will remain so for decades to come. The book starts with the foundations of freedom of expression and freedom of the press, and then goes on to explore the general issues concerning the regulation of the internet as a specific medium. It then turns to analysing the legal issues relating to the three most important gatekeepers whose operations directly affect freedom of expression: ISPs, search engines and social media platforms. Finally, it summarises the potential future regulatory and media policy directions. The book takes a comparative legal approach, focusing primarily on English and American regulations, case law and jurisprudential debates, but it also details the relevant international developments (Council of Europe, European Union) as well as the jurisprudence of the European Court of Human Rights.