The analysis of discourse is one of the most complex problems in linguistics. It can be approached from many directions, using a wide variety of methods. This volume unites psycholinguistic studies, investigations of logical and computational models of discourse, corpus studies, and linguistic case studies of language-specific devices. This variety of approaches reflects the complexity of discourse production and understanding, as well as the need to understand the complex interplay of the diverse parameters that influence these processes. The growing importance of corpus-based and experimental approaches to discourse analysis is duly reflected in this volume: most of the chapters make use of them in one form or another. This collection of articles grew out of the third installment of the Constraints in Discourse conferences, and will be of interest to researchers from linguistics, artificial intelligence, and cognitive science.
It is a commonplace to say that the meaning of text is more than the conjunction of the meaning of its constituents. But what are the rules governing its interpretation, and what are the constraints that define well-formed discourse? Answers to these questions can be given from various perspectives. In this edited volume, leading scientists in the field investigate these questions from structural, cognitive, and computational perspectives. The last decades have seen the development of numerous formal frameworks in which the structure of discourse can be analysed, the most important of them being the Linguistic Discourse Model, Rhetorical Structure Theory and Segmented Discourse Representation Theory. This volume contains an introduction to these frameworks and the fundamental topics in research about discourse constraints. Thus it should be accessible to specialists in the field as well as advanced graduate students and researchers from neighbouring areas. The volume is of interest to discourse linguists, psycholinguists, cognitive scientists, and computational linguists.
This monograph is the first large-scale corpus analysis of French il y a clefts. While most research on clefts focusses on the English ‘prototypical’ it-cleft and its equivalents across languages, this study examines the lesser-known il y a clefts – of both presentational-eventive and specificational type – and provides an in-depth analysis of their syntactic, semantic and discourse-functional properties. In addition to an extensive literature review and a comparison with Italian c’è clefts and with French c’est clefts, the strength of the study lies in the critical approach it develops to the common definition of clefts. Several commonly used criteria for clefts are applied to the corpus data, revealing that these criteria often lead to ambiguous results. The reasons for this ambiguity are explored, leading to a better understanding of what constitutes a cleft. In this sense, the analysis will be of interest to specialists in Romance and non-Romance clefts alike.
In contrastive linguistics of English and German, there is a tradition of accounting for contrasts with respect to grammar and, to a lesser extent, lexis and phonetics. Moving on to discourse and text, there is a sizeable body of literature on cohesive patterns in English and German respectively, but very little by way of comparison. The latter, though, is of particular interest for language learners, translators and, of course, linguists and researchers in language technology. This book attempts to close this gap, drawing on a number of years of corpus-based study of variation and cohesion in the two languages. While the overall focus is on language contrasts, the book also investigates variation between different registers language-internally, and between written and spoken mode in particular. For each of the five major types of cohesion (co-reference, substitution, ellipsis, conjunctive relations and lexical cohesion), overviews are given of contrasts in the system and of contrastive frequencies in texts. The results and methods presented in this book are thus relevant for language teaching, translation, language technology and corpus-based work on English and German generally.
In this book leading scholars from every relevant field report on all aspects of compositionality, the notion that the meaning of an expression can be derived from its parts. Understanding how compositionality works is a central element of syntactic and semantic analysis and a challenge for models of cognition. It is a key concept in linguistics and philosophy and in the cognitive sciences more generally, and is without question one of the most exciting fields in the study of language and mind. The authors of this book report critically on lines of research in different disciplines, revealing the connections between them and highlighting current problems and opportunities. The force and justification of compositionality have long been contentious. First proposed by Frege as the notion that the meaning of an expression is generally determined by the meaning and syntax of its components, it has since been deployed as a constraint on the relation between theories of syntax and semantics, as a means of analysis, and more recently as underlying the structures of representational systems, such as computer programs and neural architectures. The Oxford Handbook of Compositionality explores these and many other dimensions of this challenging field. It will appeal to researchers and advanced students in linguistics and philosophy and to everyone concerned with the study of language and cognition including those working in neuroscience, computational science, and bio-informatics.
Human-Machine Shared Contexts considers the foundations, metrics, and applications of human-machine systems. Editors and authors debate whether machines, humans, and systems should speak only to each other, only to humans, or to both, and how. The book establishes the meaning and operation of "shared contexts" between humans and machines; it also explores how human-machine systems affect targeted audiences (researchers, machines, robots, users) and society, as well as future ecosystems composed of humans and machines. This book explores how user interventions may improve the context for autonomous machines operating in unfamiliar environments or when experiencing unanticipated events; how autonomous machines can be taught to explain contexts by reasoning, inferences, or causality, and decisions to humans relying on intuition; and, for mutual context, how these machines may interdependently affect human awareness, teams and society, and how these "machines" may be affected in turn. In short, can context be mutually constructed and shared between machines and humans? The editors are interested in whether shared context follows when machines begin to think or, like humans, develop subjective states that allow them to monitor and report on their interpretations of reality, forcing scientists to rethink the general model of human social behavior. If dependence on machine learning continues or grows, the public will also be interested in what happens to context shared by users, teams of humans and machines, or society when these machines malfunction. As scientists and engineers "think through this change in human terms," the ultimate goal is for AI to advance the performance of autonomous machines and teams of humans and machines for the betterment of society wherever these machines interact with humans or other machines.
This book will be essential reading for professional, industrial, and military computer scientists and engineers; machine learning (ML) and artificial intelligence (AI) scientists and engineers, especially those engaged in research on autonomy, computational context, and human-machine shared contexts; advanced robotics scientists and engineers; scientists working with or interested in data issues for autonomous systems, such as the use of scarce data for training and operations with and without user interventions; social psychologists, scientists and physical research scientists pursuing models of shared context; modelers of the internet of things (IoT); systems-of-systems scientists, engineers and economists; scientists and engineers working with agent-based models (ABMs); policy specialists concerned with the impact of AI and ML on society and civilization; network scientists and engineers; applied mathematicians (e.g., holon theory, information theory); computational linguists; and blockchain scientists and engineers.
- Discusses the foundations, metrics, and applications of human-machine systems
- Considers advances and challenges in the performance of autonomous machines and teams of humans
- Debates theoretical human-machine ecosystem models and what happens when machines malfunction
The volume is a collection of papers reporting the results of investigations into the interaction of discourse and sentence structure in the languages of Europe. The subjects discussed in the book include: morphosyntactic characteristics of spontaneous spoken texts; different patterns of word order in a pragmatic perspective; the coding of the pragmatic functions topic and focus in sentences with non-canonical word orders (e.g. dislocations, clefts); the range of functions of verb-subject order in declarative clauses and the notion of theticity; prosodic patterns of de-accenting of given information; deixis and anaphora; and the coding of definiteness and article systems. The book provides the empirical basis for a comparative survey of major pragmatically relevant phenomena found in the languages of Europe. Besides traditional areas of investigation at the interface between syntax and pragmatics, such as dislocations, new areas are explored, such as the prosody of given information. Data are considered within a functional-typological approach.
"The book guides the reader through an analysis of eight distinct performances at work in the discourse on customary international law. One of its key claims is that customary international law is not the surviving trace of an ancient law-making mechanism that used to be found in traditional societies. Indeed, as is shown throughout, customary international law is anything but ancient, and there is hardly any doctrine of international law that contains so many of the features of modern thinking. It is also argued that, contrary to mainstream opinion, customary international law is in fact shaped by texts, and originates from a textual environment"--Back cover.