The third edition of this book was very well received by researchers working in many different fields. Using that text also gave these researchers the opportunity to raise questions and to express additional needs for material on techniques poorly covered in the literature. For example, when designing an inter-rater reliability study, many researchers wanted to know how to determine the optimal number of raters and the optimal number of subjects that should participate in the experiment. Also, very little space in the literature has been devoted to the notion of intra-rater reliability, particularly for quantitative measurements. The fourth edition of this text addresses those needs, in addition to further refining the presentation of the material already covered in the third edition. Features of the Fourth Edition include: new material on sample size calculations for chance-corrected agreement coefficients as well as for intraclass correlation coefficients, enabling the researcher to determine the optimal number of raters, subjects, and trials per subject; an entirely rewritten chapter entitled “Benchmarking Inter-Rater Reliability Coefficients”; a substantially expanded introductory chapter that explores possible definitions of the notion of inter-rater reliability; and extensive revision of all chapters to improve their readability.
Agreement among raters is of great importance in many domains. For example, in medicine, diagnoses are often provided by more than one doctor to make sure the proposed treatment is optimal. In criminal trials, sentencing depends, among other things, on the complete agreement among the jurors. In observational studies, researchers increase reliability by examining discrepant ratings. This book is intended to help researchers statistically examine rater agreement by reviewing four different approaches to the task. The first approach introduces readers to calculating coefficients that summarize agreement in a single score. The second approach involves estimating log-linear models that allow one to test specific hypotheses about the structure of a cross-classification of two or more raters' judgments. The third approach explores cross-classifications of raters' judgments for indicators of agreement or disagreement, and for indicators of such characteristics as trends. The fourth approach compares the correlation or covariation structures of variables that raters use to describe objects, behaviors, or individuals; these structures can be compared for two or more raters. All of these methods operate at the level of observed variables. This book is intended as a reference for researchers and practitioners who describe and evaluate objects and behavior in a number of fields, including the social and behavioral sciences, statistics, medicine, business, and education. It also serves as a useful text for graduate-level methods or assessment classes found in departments of psychology, education, epidemiology, biostatistics, public health, communication, advertising and marketing, and sociology. Exposure to regression analysis and log-linear modeling is helpful.
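As a concrete illustration of the first approach, the sketch below computes Cohen's kappa, a chance-corrected coefficient that summarizes two raters' agreement in a single score. This is a minimal sketch assuming scikit-learn is available; the raters, subjects, and labels are invented for illustration and are not taken from the book.

```python
# Minimal sketch: summarizing two raters' agreement in a single chance-corrected score.
# Assumes scikit-learn is installed; the ratings below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

# Hypothetical diagnoses assigned by two raters to the same ten subjects.
rater_a = ["positive", "negative", "negative", "positive", "negative",
           "positive", "negative", "negative", "positive", "negative"]
rater_b = ["positive", "negative", "positive", "positive", "negative",
           "positive", "negative", "negative", "negative", "negative"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.3f}")  # 1.0 = perfect agreement, 0.0 = chance-level agreement
```

The same idea extends to other chance-corrected coefficients (e.g., weighted kappa for ordered categories), which differ mainly in how chance agreement is modeled.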
Multivariate statistics and mathematical models provide flexible and powerful tools essential in most disciplines. Nevertheless, many practicing researchers lack an adequate knowledge of these techniques, or once knew them but have not been able to keep abreast of new developments. The Handbook of Applied Multivariate Statistics and Mathematical Modeling explains the appropriate uses of multivariate procedures and mathematical modeling techniques, and prescribes practices that enable applied researchers to use these procedures effectively without needing to concern themselves with the mathematical basis. The Handbook emphasizes using models and statistics as tools. The objective of the book is to inform readers about which tool to use to accomplish which task. Each chapter begins with a discussion of what kinds of questions a particular technique can and cannot answer. Because multivariate statistics and modeling techniques are useful across disciplines, the examples include issues of concern in the biological and social sciences as well as the humanities.
The contributors to Best Practices in Quantitative Methods envision quantitative methods in the 21st century, identify the best practices, and, where possible, demonstrate the superiority of their recommendations empirically. Editor Jason W. Osborne designed this book with the goal of providing readers with the most effective, evidence-based, modern quantitative methods and quantitative data analysis across the social and behavioral sciences. The text is divided into five main sections covering select best practices in Measurement, Research Design, Basics of Data Analysis, Quantitative Methods, and Advanced Quantitative Methods. Each chapter contains a current and expansive review of the literature, a case for best practices in terms of method, outcomes, inferences, etc., and broad-ranging examples along with any empirical evidence to show why certain techniques are better. Key Features: Describes important implicit knowledge to readers: The chapters in this volume explain the important details of seemingly mundane aspects of quantitative research, making them accessible to readers and demonstrating why it is important to pay attention to these details. Compares and contrasts analytic techniques: The book examines instances where there are multiple options for doing things, and makes recommendations as to what is the "best" choice—or choices, as what is best often depends on the circumstances. Offers new procedures to update and explicate traditional techniques: The featured scholars present and explain new options for data analysis, discussing the advantages and disadvantages of the new procedures in depth, describing how to perform them, and demonstrating their use. Intended Audience: Representing the vanguard of research methods for the 21st century, this book is an invaluable resource for graduate students and researchers who want a comprehensive, authoritative resource for practical and sound advice from leading experts in quantitative methods.
This book presents strategies for analyzing qualitative and mixed methods data with MAXQDA software, and provides guidance on implementing a variety of research methods and approaches, e.g., grounded theory, discourse analysis, and qualitative content analysis, using the software. In addition, it explains specific topics, such as transcription, building a coding frame, visualization, analysis of videos, concept maps, group comparisons, and the creation of literature reviews. The book is intended for master's and PhD students as well as researchers and practitioners dealing with qualitative data in various disciplines, including the educational and social sciences, psychology, public health, business, and economics.
The internal validity of a study reflects the extent to which the design and conduct of the study have prevented bias. One of the key steps in a systematic review is the assessment of a study's internal validity, or potential for bias. This assessment serves to: (1) identify the strengths and limitations of the included studies; (2) investigate, and potentially explain, heterogeneity in findings across the different studies included in a systematic review; and (3) grade the strength of evidence for a given question. The risk of bias assessment directly informs one of four key domains considered when assessing the strength of evidence.

With the increase in the number of published systematic reviews and the development of systematic review methodology over the past 15 years, close attention has been paid to methods for assessing internal validity. Until recently this has been referred to as “quality assessment” or “assessment of methodological quality.” In this context, “quality” refers to “the confidence that the trial design, conduct, and analysis has minimized or avoided biases in its treatment comparisons.” To facilitate the assessment of methodological quality, a plethora of tools has emerged. Some of these tools were developed for specific study designs (e.g., randomized controlled trials (RCTs), cohort studies, case-control studies), while others were intended to be applied to a range of designs. The tools often incorporate characteristics that may be associated with bias; however, many tools also contain elements related to reporting (e.g., was the study population described?) and design (e.g., was a sample size calculation performed?) that are not related to bias. The Cochrane Collaboration recently developed a tool to assess the potential risk of bias in RCTs. This Risk of Bias (ROB) tool was developed to address some of the shortcomings of existing quality assessment instruments, including their over-reliance on reporting rather than methods.

Several systematic reviews have catalogued and critiqued the numerous tools available to assess the methodological quality, or risk of bias, of primary studies. In summary, few existing tools have undergone extensive inter-rater reliability or validity testing, and much of the tool development and testing that has been done has focused on criterion or face validity. It is therefore unknown whether, or to what extent, the summary assessments based on these tools differentiate between studies with biased and unbiased results (i.e., studies that may over- or underestimate treatment effects). There is a clear need for inter-rater reliability testing of different tools in order to enhance consistency in their application and interpretation across different systematic reviews. Further, validity testing is essential to ensure that the tools being used can identify studies with biased results. Finally, there is a need to determine inter-rater reliability and validity in order to support the uptake and use of individual tools that are recommended by the systematic review community, and specifically the ROB tool within the Evidence-based Practice Center (EPC) Program.

In this project we focused on two tools that are commonly used in systematic reviews. The Cochrane ROB tool was designed for RCTs and is the instrument recommended by The Cochrane Collaboration for use in systematic reviews of RCTs. The Newcastle-Ottawa Scale is commonly used for nonrandomized studies, specifically cohort and case-control studies.
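For readers unfamiliar with how the inter-rater reliability of such tools is typically quantified, the sketch below computes simple percent agreement and Cohen's kappa for two hypothetical reviewers applying an overall risk-of-bias judgment ("low", "unclear", "high") to a set of trials. The reviewers, judgments, and the choice of kappa as the agreement statistic are illustrative assumptions, not data or recommendations from this project.

```python
# Illustrative sketch: quantifying agreement between two reviewers' risk-of-bias judgments.
# The judgments below are invented; Cohen's kappa is one common choice of agreement statistic.
from collections import Counter

reviewer_1 = ["low", "low", "high", "unclear", "low", "high", "unclear", "low"]
reviewer_2 = ["low", "unclear", "high", "unclear", "low", "low", "unclear", "low"]

n = len(reviewer_1)

# Observed agreement: proportion of trials given identical judgments.
p_observed = sum(a == b for a, b in zip(reviewer_1, reviewer_2)) / n

# Chance agreement: product of the two reviewers' marginal proportions, summed over categories.
counts_1, counts_2 = Counter(reviewer_1), Counter(reviewer_2)
categories = set(reviewer_1) | set(reviewer_2)
p_chance = sum((counts_1[c] / n) * (counts_2[c] / n) for c in categories)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Percent agreement: {p_observed:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
```

In practice such statistics would be computed per domain of the tool and on a much larger set of studies; this sketch only shows the mechanics of correcting raw agreement for chance.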
Categorical Statistics for Communication Research presents scholars with a discipline-specific guide to categorical data analysis. The text blends necessary background information and formulas for statistical procedures with data analyses illustrating techniques such as log-linear modeling and logistic regression analysis. The book provides techniques for analyzing categorical data from a communication studies perspective; offers an accessible presentation of these techniques for communication scholars and other social scientists working at the advanced undergraduate and graduate teaching levels; is illustrated with examples from different types of communication research, such as health, political, and sports communication and entertainment; and includes exercises at the end of each chapter, along with a companion website containing exercise answers and chapter-by-chapter PowerPoint slides.
How can we make sense of the deluge of information in the digital age? The new science of Quantitative Ethnography dissolves the boundaries between quantitative and qualitative research to give researchers tools for studying the human side of big data: to understand not just what data says, but what it tells us about the people who created it. Thoughtful, literate, and humane, Quantitative Ethnography integrates data-mining, discourse analysis, psychology, statistics, and ethnography into a brand-new science for understanding what people do and why they do it. Packed with anecdotes, stories, and clear explanations of complex ideas, Quantitative Ethnography is an engaging introduction to research methods for students, an introduction to data science for qualitative researchers, and an introduction to the humanities for statisticians--but also a compelling philosophical and intellectual journey for anyone who wants to understand learning, culture, and behavior in the age of big data.
Statistical science’s first coordinated manual of methods for analyzing ordered categorical data, now fully revised and updated, continues to present applications and case studies in fields as diverse as sociology, public health, ecology, marketing, and pharmacy. Analysis of Ordinal Categorical Data, Second Edition provides an introduction to basic descriptive and inferential methods for categorical data, giving thorough coverage of new developments and recent methods. Special emphasis is placed on interpretation and application of methods including an integrated comparison of the available strategies for analyzing ordinal data. Practitioners of statistics in government, industry (particularly pharmaceutical), and academia will want this new edition.
Discourse on the Move is the first book-length exploration of how corpus-based methods can be used for discourse analysis, applied to the description of discourse organization. The primary goal is to bring these two analytical perspectives together: undertaking a detailed discourse analysis of each individual text, but doing so in terms that can be generalized across all texts of a corpus. The book explores two major approaches to this task: ‘top-down’ and ‘bottom-up’. In the ‘top-down’ approach, the functional components of a genre are determined first, and then all texts in a corpus are analyzed in terms of those components. In contrast, textual components emerge from the corpus analysis in the bottom-up approach, and the discourse organization of individual texts is then analyzed in terms of linguistically-defined textual categories. Both approaches are illustrated through case studies of discourse structure in particular genres: fund-raising letters, biology/biochemistry research articles, and university classroom teaching.