Contrastive Linguistics (CL), Translation Studies (TS) and Machine Translation (MT) have common ground: they all work at the crossroads where two or more languages meet. Despite their inherent relatedness, methodological exchange between the three disciplines is rare. This special issue touches upon areas where the three fields converge. It results directly from a workshop at the 2011 German Association for Language Technology and Computational Linguistics (GSCL) conference in Hamburg, where researchers from the three fields presented and discussed their interdisciplinary work. While the studies contained in this volume draw on a wide variety of objectives and methods and address various areas of overlap between CL, TS and MT, the volume is by no means exhaustive with regard to this topic. Further cross-fertilisation is not only desirable but almost mandatory in order to tackle future tasks and endeavours.
Historically a dubbing country, Germany is not well known for subtitled productions. But while dubbing is still predominant in Germany, more and more German viewers prefer original and subtitled versions of their favourite shows and films. Conventional subtitling, however, can be seen as a strong intrusion into the original image that can not only disrupt but also destroy the director’s intended shot composition and focus points. Long eye movements between focus points and subtitles decrease the viewer’s information intake, and German audiences in particular, who are often not used to subtitles, seem to prefer to wait for the next subtitle instead of looking back up again. Furthermore, not only the placement but also the overall design of conventional subtitles can disturb the image composition, for instance through titles with weak contrast, an inappropriate typeface or an irritating colour scheme. Should it not be possible, despite the translation process, to preserve both image and sound as far as possible? This seems especially pertinent given today’s numerous artistic and technical possibilities and the huge amount of work that goes into the visual aspects of a film, including not only special effects but also typefaces, opening credits and text-image compositions. A further development of existing subtitling guidelines would express respect not only towards the original film version but also towards the translator’s work. The study presented here shows how integrated titles can increase information intake while maintaining the intended image composition and focus points as well as the aesthetics of the shot compositions. In a three-stage experiment, integrated titles created specifically for this purpose for the documentary “Joining the Dots” by director Pablo Romero-Fresco were analysed with the help of eye movement data from more than 45 participants. Titles were placed on the basis of the gaze behaviour of English native speakers and then rated by German viewers who relied on a German translation. The results show that reducing the distance between intended focus points and titles allows viewers more time to explore the image and connect the titles to the plot. The integrated titles were rated as more aesthetically pleasing, and reading durations were shorter than with conventional subtitles. Based on the analysis of graphic design and filmmaking rules as well as conventional subtitling standards, a first workflow and set of placement strategies for integrated titles were created to allow a more respectful handling of film material and the preservation of the original image composition and typographic film identity.
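To make the placement principle behind such integrated titles concrete, the following minimal Python sketch picks, from a set of candidate title regions, the one closest to the viewers' estimated focus point as derived from fixation data. All function names, coordinates and candidate regions are illustrative assumptions for this sketch, not the study's actual workflow.

# Hypothetical sketch: choosing an integrated-title position close to the
# viewers' main focus point, as inferred from eye-tracking fixations.
# Names and data are invented for illustration.

from statistics import mean

def focus_point(fixations):
    """Estimate the main focus point of a shot as the mean fixation position.

    fixations: list of (x, y) gaze coordinates in pixels.
    """
    xs, ys = zip(*fixations)
    return mean(xs), mean(ys)

def pick_title_position(fixations, candidate_positions):
    """Pick the candidate title position closest to the estimated focus point,
    keeping the eye movement between image and title short."""
    fx, fy = focus_point(fixations)
    return min(candidate_positions,
               key=lambda pos: ((pos[0] - fx) ** 2 + (pos[1] - fy) ** 2) ** 0.5)

# Toy example: fixations cluster in the upper-left of the frame,
# so a nearby candidate region is preferred over the bottom-centre default.
fixations = [(310, 240), (295, 260), (330, 255)]
candidates = [(960, 980), (350, 340), (1500, 300)]
print(pick_title_position(fixations, candidates))  # -> (350, 340)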
The present volume seeks to contribute studies to the subfield of Empirical Translation Studies and thus to extend its reach within the field of translation studies, making our discipline more rigorous and fostering a reproducible research culture. The Translation in Transition conference series, across its editions in Copenhagen (2013), Germersheim (2015) and Ghent (2017), has been a major meeting point for scholars working with these aims in mind, and the conference in Barcelona (2019) continued this tradition of expanding the subfield of empirical translation studies to other paradigms within translation studies. This book is a collection of selected papers presented at that fourth Translation in Transition conference, held at the Universitat Pompeu Fabra in Barcelona on 19–20 September 2019.
The contributions to this volume investigate relations of cohesion and coherence as well as instantiations of discourse phenomena and their interaction with information structure in multilingual contexts. Some contributions concentrate on procedures for analyzing cohesion and coherence from a corpus-linguistic perspective. Others focus in particular on textual cohesion in parallel corpora that include both originals and translated texts. In addition, the papers in the volume discuss the nature of cohesion and coherence with implications for human and machine translation. The contributors are experts on discourse phenomena and textuality who address these issues from an empirical perspective. The chapters in this volume are grounded in the latest research, making this book useful to experts in discourse studies and computational linguistics as well as to advanced students with an interest in these disciplines. We hope that this volume will serve as a catalyst for other researchers and will facilitate further advances in the development of cost-effective annotation procedures, the application of statistical techniques for the analysis of linguistic phenomena and the elaboration of new methods for data interpretation in multilingual corpus linguistics and machine translation.
Drawing on work from both eminent and emerging scholars in translation and interpreting studies, this collection offers a critical reflection on current methodological practices in these fields with a view to strengthening the theoretical and empirical ties between them. Methodological and technological advances have pushed these respective areas of study forward in the last few decades, but advanced tools, such as eye tracking and keystroke logging, and insights from their use have often remained in isolation rather than being shared across disciplines. This volume explores empirical and theoretical challenges across these areas, the methodologies implemented to address them, and how these methodologies might not only be applied across translation and interpreting studies but also be brought together toward a coherent empirical theory of translation and interpreting. Organized around three key themes – target-text orientedness, source-text orientedness, and translator/interpreter orientedness – the book takes stock of studies of both translation and interpreting corpora and processes in an effort to answer key questions such as: How do written translation and interpreting relate to each other? How do technological advances in these fields shape process and product? What would an empirical theory of translation and interpreting studies look like? Taken together, the collection showcases the possibilities of further dialogue around methodological practices in translation and interpreting studies and will be of interest to students and scholars in these fields.
Cognitive aspects of the translation process have become central to Translation and Interpreting Studies in recent years, further establishing the field of Cognitive Translatology. Empirical and interdisciplinary studies investigating translation and interpreting processes promise a hitherto unprecedented predictive and explanatory power. This collection contains studies that observe behaviour during translation and interpreting. The contributions cover a vast area, with a focus on the training of future professionals, on language processing more generally, on the role of technology in the practice of translation and interpreting, on the translation of multimodal media texts, on aspects of ergonomics and usability, on emotions, self-concept and psychological factors, and finally also on revision and post-editing. For the present publication, we selected a number of contributions presented at the Second International Congress on Translation, Interpreting and Cognition, hosted by the Tra&Co Lab at the Johannes Gutenberg University of Mainz.
Although the notion of meaning has always been at the core of translation, the invariance of meaning has, partly due to practical constraints, rarely been challenged in Corpus-based Translation Studies. In response, the aim of this book is to question the invariance of meaning in translated texts: if translation scholars agree that translated language differs from non-translated language with respect to a number of grammatical and lexical aspects, would it be possible to identify differences between translated and non-translated language on the semantic level too? More specifically, this book tries to formulate an answer to the following three questions: (i) how can semantic differences in translated vs non-translated language be investigated in a corpus-based study?, (ii) are there any differences on the semantic level between translated and non-translated language? and (iii) if there are differences on the semantic level, can we ascribe them to any of the (universal) tendencies of translation? In this book, I establish a way to visually explore semantic similarity on the basis of representations of translated and non-translated semantic fields. A technique for the comparison of semantic fields of translated and non-translated language, called SMM++ (based on Helge Dyvik’s Semantic Mirrors method), is developed, yielding statistics-based visualizations of semantic fields. The SMM++ is presented via the case of inchoativity in Dutch (beginnen [to begin]). By comparing the visualizations of the semantic fields on different levels (translated Dutch with French as a source language, with English as a source language, and non-translated Dutch), I further explore whether the differences between translated and non-translated fields of inchoativity in Dutch can be linked to any of the well-known universals of translation. The main results of this study are explained on the basis of two cognitively inspired frameworks: Halverson’s Gravitational Pull Hypothesis and Paradis’ neurolinguistic theory of bilingualism.
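As a rough illustration of the mirroring idea that Dyvik's Semantic Mirrors method (and hence the SMM++) builds on, the following toy Python sketch derives a candidate semantic field for beginnen from hypothetical Dutch-English word alignments. The alignment data and function names are invented for this sketch; the actual SMM++ additionally involves the statistics-based visualization of the resulting fields described above.

# Toy illustration of the "mirroring" step underlying Dyvik's Semantic
# Mirrors method. Data and names are invented for illustration only.

# Hypothetical word alignments from a Dutch-English parallel corpus:
# (source word, target word) pairs extracted from aligned sentences.
alignments = [
    ("beginnen", "begin"), ("beginnen", "start"), ("beginnen", "commence"),
    ("starten", "start"), ("starten", "launch"),
    ("aanvangen", "begin"), ("aanvangen", "commence"),
]

def t_image(word, pairs):
    """First t-image: all translations of a source word."""
    return {tgt for src, tgt in pairs if src == word}

def inverse_t_image(word, pairs):
    """Inverse t-image: all source words that share a translation with `word`."""
    return {src for tgt in t_image(word, pairs) for src, t in pairs if t == tgt}

# Mirroring 'beginnen' back and forth yields a candidate semantic field of
# inchoative verbs, which could then be compared across translated and
# non-translated corpora.
print(t_image("beginnen", alignments))          # {'begin', 'start', 'commence'}
print(inverse_t_image("beginnen", alignments))  # {'beginnen', 'starten', 'aanvangen'}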
Eyetracking has become a powerful tool in scientific research and has finally found its way into disciplines such as applied linguistics and translation studies, paving the way for new insights and challenges in these fields. The aim of the first International Conference on Eyetracking and Applied Linguistics (ICEAL) was to bring together researchers who use eyetracking to empirically answer their research questions. It was intended, on the one hand, to bridge the gaps between applied linguistics, translation studies, cognitive science and computational linguistics and, on the other, to further encourage innovative research methodologies and data triangulation. These challenges are also addressed in this proceedings volume: while the studies described in the volume deal with a wide range of topics, they all agree on eyetracking as an appropriate methodology in empirical research.
The purpose of this volume is to explore key issues, approaches and challenges to quality in institutional translation by juxtaposing academics’ and practitioners’ perspectives. What the reader will find in this book is an interplay of two approaches: academic contributions providing the conceptual and theoretical background for discussing quality on the one hand, and chapters exploring selected aspects of quality and case studies from both academics and practitioners on the other. Our aim is to present these two approaches as a breeding ground for testing one vis-à-vis the other. This book studies institutional translation mostly through the lens of the European Union (EU) reality, and, more specifically, of EU institutions and bodies, due to the unprecedented scale of their multilingual operations and the legal and political importance of translation. Thus, it is concerned with the supranational (international) level, deliberately leaving national and other contexts aside. Quality in supranational institutions is explored both in terms of translation processes and their products – the translated texts.
This volume of the series “Translation and Multilingual Natural Language Processing” includes most of the papers presented at the workshop “Language Technology for a Multilingual Europe”, held at the University of Hamburg on 27 September 2011 within the framework of the GSCL 2011 conference on “Multilingual Resources and Multilingual Applications”, along with several additional contributions. In addition to an overview article on Machine Translation and two contributions on the European initiatives META-NET and Multilingual Web, the volume includes six full research articles. Our intention with this workshop was to bring together various groups concerned with the umbrella topics of multilingualism and language technology, especially multilingual technologies. This encompassed, on the one hand, representatives from research and development in the field of language technologies and, on the other hand, users from diverse areas such as industry, administration and funding agencies. The workshop “Language Technology for a Multilingual Europe” was co-organised by the two GSCL working groups “Text Technology” and “Machine Translation” (http://gscl.info) as well as by META-NET (http://www.meta-net.eu).