Cognitive and Computational Strategies for Word Sense Disambiguation examines, in parallel, the cognitive strategies used by humans and the computational strategies used by machines for WSD. Focusing on a psychologically valid property of words and senses, namely their concreteness or abstractness, author Oi Yee Kwong draws on psycholinguistic data to examine the extent to which existing lexical resources resemble the mental lexicon with respect to the concreteness distinction. The text also investigates the contribution of different knowledge sources to WSD in relation to this intrinsic property of words and senses.
The book features recent attempts to construct corpora for specific purposes – e.g. the multifactorial Dutch corpus (parallel), the Geasy Easy Language Corpus (intralingual), and the HK LegCo interpreting corpus – and showcases sophisticated and innovative corpus analysis methods. It proposes new approaches to classical themes – i.e. translation pedagogy, translation norms and equivalence, and principles of translation – and brings interdisciplinary perspectives – e.g. contrastive linguistics, cognition and metaphor studies – that cast new light on them. It is a timely reference for researchers and postgraduate students interested in applying corpus technology to translation and interpreting problems.
The Routledge Encyclopedia of Translation Technology provides a state-of-the-art survey of the field of computer-assisted translation. It is the first definitive reference to provide a comprehensive overview of the general, regional and topical aspects of this increasingly significant area of study. The Encyclopedia is divided into three parts: Part One presents general issues in translation technology, such as its history and development, translator training and various aspects of machine translation, including a valuable case study of its teaching at a major university; Part Two discusses national and regional developments in translation technology, offering contributions covering the crucial territories of China, Canada, France, Hong Kong, Japan, South Africa, Taiwan, the Netherlands and Belgium, the United Kingdom and the United States; Part Three evaluates specific matters in translation technology, with entries focused on subjects such as alignment, bitext, computational lexicography, corpora, editing, online translation, subtitling technology and translation management systems. The Routledge Encyclopedia of Translation Technology draws on the expertise of over fifty contributors from around the world and an international panel of consultant editors to provide a selection of articles on the most pertinent topics in the discipline. All the articles are self-contained, extensively cross-referenced, and include useful and up-to-date references and information for further reading. It will be an invaluable reference work for anyone with a professional or academic interest in the subject.
This book is devoted to a systemic study of the socio-economic development risks arising in the Decade of Action, as well as the prospects for risk management in support of sustainable development. It aims to overcome the fragmentary treatment of risks in the existing literature by covering them comprehensively and establishing their interconnections from the perspective of sustainable development. The novelty of the book lies in this comprehensive accounting of socio-economic development risks in the Decade of Action and in its rethinking of these risks from a sustainable development perspective, which opens the way to more comprehensive and effective risk management in support of sustainable development. Its practical relevance stems from its detailed description and discussion of practical experience: the theoretical material is accompanied by numerous case studies and frameworks with extensive coverage of international best practices. The book is intended for scholars, for whom it provides a systemic scientific view of the risks of socio-economic development arising in the Decade of Action and the prospects for managing them in support of sustainable development. It is also of interest to practitioners, to whom it offers practical advice on risk management for sustainable development at all levels of the economy. Many examples from different countries make the book attractive to a wide international audience, and it is of particular interest to readers from Russia.
The process of learning words and languages may seem like an instinctual trait, inherent to nearly all humans from a young age. However, a vast body of research details the complexities of the word-learning process. Theoretical and Computational Models of Word Learning: Trends in Psychology and Artificial Intelligence strives to combine cross-disciplinary research into one comprehensive volume to help readers gain a fuller understanding of the developmental processes and influences that make up the progression of word learning. Blending developmental psychology and artificial intelligence, this publication is intended for researchers, practitioners, and educators who are interested in language learning and its development, as well as in the computational models arising from these areas of research.
With the advent and increasing popularity of Computer Supported Collaborative Learning (CSCL) and e-learning technologies, the need for automatic assessment and for teacher/tutor support for the two tightly intertwined activities of comprehending reading materials and collaborating with peers has grown significantly. In this context, a polyphonic model of discourse derived from Bakhtin's work is used as a paradigm for analyzing both general texts and CSCL conversations in a single framework focused on different facets of textual cohesion. A distinctive aspect of our analysis is that the individual learning perspective focuses on identifying reading strategies and providing a multi-dimensional model of textual complexity, whereas the collaborative learning dimension centers on evaluating participants' involvement and assessing collaboration. Our approach, based on advanced Natural Language Processing techniques, provides a qualitative estimation of the learning process and enhances understanding as a "mediator of learning" by providing automated feedback to both learners and teachers or tutors. Its main benefits are its flexibility, its extensibility and, at the same time, its specificity in covering multiple stages: reading classroom materials, discussing specific topics collaboratively, and closing the feedback loop by verbalizing metacognitive thoughts.
The most frequently used words in English are highly ambiguous; for example, Webster's Ninth New Collegiate Dictionary lists 94 meanings for the word "run" as a verb alone. Yet people rarely notice this ambiguity. Solving this puzzle has commanded the efforts of cognitive scientists for many years. The solution most often identified is "context": we use the context of an utterance to determine the proper meanings of words and sentences. The problem then becomes specifying the nature of context and how it interacts with the rest of an understanding system. The difficulty becomes especially apparent in the attempt to write a computer program to understand natural language. Lexical ambiguity resolution (LAR), then, is one of the central problems in natural language and computational semantics research. A collection of the best research on LAR available, this volume offers eighteen original papers by leading scientists. Part I, Computer Models, describes nine attempts to discover the processes necessary for disambiguation by implementing programs to do the job. Part II, Empirical Studies, goes into the laboratory setting to examine the nature of the human disambiguation mechanism and the structure of ambiguity itself. A primary goal of this volume is to propose a cognitive science perspective arising out of the conjunction of work and approaches from neuropsychology, psycholinguistics, and artificial intelligence, thereby encouraging closer cooperation and collaboration among these fields. Lexical Ambiguity Resolution is a valuable and accessible source book for students and cognitive scientists in AI, psycholinguistics, neuropsychology, and theoretical linguistics.