This book describes a novel, cross-linguistic approach to machine translation that solves certain classes of syntactic and lexical divergences by means of a lexical conceptual structure that can be composed and decomposed in language-specific ways. This approach allows the translator to operate uniformly across many languages, while still accounting for knowledge that is specific to each language.
This volume constitutes the proceedings of the Third International Workshop of the European Association for Machine Translation, held in Heidelberg, Germany, in April 1993. The EAMT Workshops traditionally aim to bring together researchers, developers, users, and others interested in the research, development, and use of machine and computer-assisted translation. The volume presents thoroughly revised versions of the 15 best workshop contributions together with an introductory survey by the volume editor. The presentations center primarily on questions of acquiring, sharing, and managing lexical data, but also address aspects of lexical description.
The goal of this book is to integrate the research being carried out in the field of lexical semantics in linguistics with the work on knowledge representation and lexicon design in computational linguistics. Rarely do these two camps meet and discuss the demands and concerns of each other's fields. This book is therefore valuable in that it provides a stimulating and unique exchange between the computational perspective on lexical meaning and the linguist's concern with the semantic description of lexical items in the context of syntactic descriptions. The book grew out of the papers presented at a workshop held at Brandeis University in April 1988, funded by the American Association for Artificial Intelligence. The entire workshop, including the discussion periods accompanying each talk, was recorded. Once complete copies of each paper were available, they were distributed to participants, who were asked to provide written comments on the texts for review purposes. There is currently a growing interest in the content of lexical entries from a theoretical perspective, as well as a growing need to understand the organization of the lexicon from a computational view. This volume attempts to define the directions that need to be taken in order to achieve the goal of a coherent theory of lexical organization.
This book presents a history of machine translation (MT) from the point of view of a major writer and innovator in the field. It details the deep differences between rival groups over how best to do MT, and offers a global perspective covering historical and contemporary systems in Europe, the US, and Japan. The author considers MT a fundamental part of Artificial Intelligence and the ultimate test bed for all of computational linguistics.
AMTA 2002: From Research to Real Users. Ever since the showdown between Empiricists and Rationalists a decade ago at TMI-92, MT researchers have hotly pursued promising paradigms for MT, including data-driven approaches (e.g., statistical, example-based) and hybrids that integrate these with more traditional rule-based components. During the same period, commercial MT systems with standard transfer architectures have evolved along a parallel and almost unrelated track, increasing their coverage (primarily through manual update of their lexicons, we assume) and achieving much broader acceptance and usage, principally through the medium of the Internet. Webpage translators have become commonplace; a number of online translation services have appeared, offering both raw and postedited MT; and large corporations have been turning increasingly to MT to address the exigencies of global communication. Still, the output of the transfer-based systems employed in this expansion represents but a small drop in the ever-growing translation marketplace bucket.
This comprehensive handbook, written by leading experts in the field, details the groundbreaking research conducted under GALE (Global Autonomous Language Exploitation), a program of the Defense Advanced Research Projects Agency (DARPA), while placing it in the context of previous research in natural language and signal processing, artificial intelligence, and machine translation. The most fundamental contrast between GALE and its predecessor programs was its holistic integration of previously separate or sequential processes. In earlier language research programs, each of the individual processes was performed separately and sequentially: speech recognition, language recognition, transcription, translation, and content summarization. The GALE program employed a distinctly new approach by executing these processes simultaneously, so that speech and language recognition algorithms now aid translation and transcription processes, and vice versa. This combination of previously distinct processes has produced significant research and performance breakthroughs and has fundamentally changed the fields of natural language processing and machine translation. The handbook provides an exhaustive exploration of these latest technologies in natural language, speech and signal processing, and machine translation, giving researchers, practitioners, and students an authoritative reference on the topic.
This remarkable new dictionary represents the first attempt in some four centuries to record the state of development of English as used across the entire Caribbean region.