Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, and computer algorithms and architecture. Research programs whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues as seen by linguists, psychologists, and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.
Semantic change — how the meanings of words change over time — has preoccupied scholars since well before modern linguistics emerged in the late 19th and early 20th centuries and ushered in a new methodological turn in the study of language change. Compared to change in sound and grammar, semantic change remains the least understood. Since then, the study of semantic change has progressed steadily, accumulating over more than a century a vast store of knowledge encompassing many languages and language families. Historical linguists also realized the potential of computers as research tools early on, with papers appearing at the very first international conferences on computational linguistics in the 1960s. Such computational studies still tended to be small-scale, method-oriented, and qualitative. Recent years, however, have witnessed a sea change in this regard. Big-data, quantitative empirical investigations are now coming to the forefront, enabled by enormous advances in storage capacity and processing power. Diachronic corpora have grown beyond what traditional manual qualitative methods can explore, and language technology has become increasingly data-driven and semantics-oriented. These developments present a golden opportunity for the empirical study of semantic change over both long and short time spans. A major challenge at present is to integrate the hard-earned knowledge and expertise of traditional historical linguistics with the cutting-edge methodology explored primarily in computational linguistics. The idea for the present volume arose as a concrete response to this challenge: the 1st International Workshop on Computational Approaches to Historical Language Change (LChange'19), held at ACL 2019, brought together scholars from both fields.
This volume offers a survey of this exciting new direction in the study of semantic change, a discussion of the many remaining challenges that we face in pursuing it, and considerably updated and extended versions of a selection of the contributions to the LChange'19 workshop, addressing both more theoretical problems — e.g., discovery of "laws of semantic change" — and practical applications, such as information retrieval in longitudinal text archives.
Semantic fields are lexically coherent – the words they contain co-occur in texts. In this book the authors introduce and define semantic domains, a computational model for lexical semantics inspired by the theory of semantic fields. Semantic domains allow us to exploit domain features for texts, terms and concepts, and they can significantly boost the performance of natural-language processing systems. Semantic domains can be derived from existing lexical resources or can be acquired from corpora in an unsupervised manner. They also have the property of interlinguality, and they can be used to relate terms in different languages in multilingual application scenarios. The authors give a comprehensive explanation of the computational model, with detailed chapters on semantic domains, domain models, and applications of the technique in text categorization, word sense disambiguation, and cross-language text categorization. This book is suitable for researchers and graduate students in computational linguistics.
The goal of this book is to integrate the research being carried out in the field of lexical semantics in linguistics with the work on knowledge representation and lexicon design in computational linguistics. Rarely do these two camps meet and discuss the demands and concerns of each other's fields. This book therefore provides a stimulating and unique discussion between the computational perspective on lexical meaning and the linguist's concern for the semantic description of lexical items in the context of syntactic descriptions. The book grew out of the papers presented at a workshop held at Brandeis University in April 1988, funded by the American Association for Artificial Intelligence. The entire workshop, as well as the discussion periods accompanying each talk, was recorded. Once complete copies of each paper were available, they were distributed to participants, who were asked to provide written comments on the texts for review purposes. As James Pustejovsky notes in his introduction, there is currently a growing interest in the content of lexical entries from a theoretical perspective as well as a growing need to understand the organization of the lexicon from a computational view. This volume attempts to define the directions that need to be taken in order to achieve the goal of a coherent theory of lexical organization.
Computational semantics is the art and science of computing meaning in natural language. The meaning of a sentence is derived from the meanings of the individual words in it, and this process can be made so precise that it can be implemented on a computer. Designed for students of linguistics, computer science, logic and philosophy, this comprehensive text shows how to compute meaning using the functional programming language Haskell. It deals with both denotational meaning (where meaning comes from knowing the conditions of truth in situations), and operational meaning (where meaning is an instruction for performing cognitive action). Including a discussion of recent developments in logic, it will be invaluable to linguistics students wanting to apply logic to their studies, logic students wishing to learn how their subject can be applied to linguistics, and functional programmers interested in natural language processing as a new application area.
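The compositional process this blurb describes, deriving a sentence's meaning from the meanings of its words, can be sketched in a few lines of Haskell, the implementation language the book itself uses. The three-entity model and the predicates below are invented for illustration; this is not code from the book.

```haskell
-- A toy model-theoretic semantics: sentence meanings are computed
-- compositionally from word meanings by function application.

data Entity = John | Mary | Fido deriving (Eq, Show, Enum, Bounded)

-- The domain of discourse.
domain :: [Entity]
domain = [minBound .. maxBound]

-- Intransitive verbs denote one-place predicates over entities.
run, bark :: Entity -> Bool
run  e = e `elem` [John, Mary]
bark e = e == Fido

-- Determiners denote generalized quantifiers relating two predicates.
every, some :: (Entity -> Bool) -> (Entity -> Bool) -> Bool
every p q = all (\e -> not (p e) || q e) domain
some  p q = any (\e -> p e && q e) domain

main :: IO ()
main = do
  print (run John)        -- True:  "John runs"
  print (every bark run)  -- False: "everything that barks runs"
  print (some run bark)   -- False: "something that runs barks"
```

In the denotational view the blurb mentions, each `True`/`False` result is a truth condition evaluated against this tiny model; richer fragments extend the same pattern with transitive verbs, intensional operators, and so on.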
This book constitutes the thoroughly refereed post-workshop proceedings of the 20th Chinese Lexical Semantics Workshop, CLSW 2019, held in Chiayi, Taiwan, in June 2019. The 39 full papers and 46 short papers included in this volume were carefully reviewed and selected from 254 submissions. They are organized in the following topical sections: lexical semantics; applications of natural language processing; lexical resources; corpus linguistics.
This book constitutes the thoroughly refereed post-workshop proceedings of the 21st Chinese Lexical Semantics Workshop, CLSW 2020, held in Hong Kong, China, in May 2020. Due to COVID-19, the conference was held virtually. The 76 full papers included in this volume were carefully reviewed and selected from 233 submissions. They are organized in the following topical sections: lexical semantics and general linguistics; AI, big data, and NLP; cognitive science and experimental studies.
Lexical Semantics is about the meaning of words. Although obviously a central concern of linguistics, the semantic behaviour of words has been unduly neglected in the current literature, which has tended to emphasize sentential semantics and its relation to formal systems of logic. In this textbook D. A. Cruse establishes in a principled and disciplined way the descriptive and generalizable facts about lexical relations that any formal theory of semantics will have to encompass. Among the topics covered in depth are idiomaticity, lexical ambiguity, synonymy, hierarchical relations such as hyponymy and meronymy, and various types of oppositeness. Syntagmatic relations are also treated in some detail. The discussions are richly illustrated by examples drawn almost entirely from English. Although a familiarity with traditional grammar is assumed, readers with no technical linguistic background will find the exposition always accessible. All readers with an interest in semantics will find in this original text not only essential background but a stimulating new perspective on the field.
This collection of articles sketches the complexity of lexical-semantic relations and addresses semantic, lexicographic, and computational issues concerning an array of meaning relations in different languages. It brings together a variety of linguistic studies on the contextualised construction of synonymy and antonymy in discourse, showing that research on language and cognition calls for empirical evidence from different sources. The volume demonstrates how the internet, corpus data, and psycholinguistic methods contribute profitably to insights into the nature of paradigmatic relations in actual language use. Furthermore, the volume is concerned with practical, application-oriented research on lexical databases, and it includes explorations of sense-related items in dictionaries from both a text-technological and a lexicographic perspective.
This volume of newly commissioned essays examines current theoretical and computational work on polysemy, the term used in semantic analysis to describe words with more than one meaning or function, sometimes perhaps related (as in plain) and sometimes perhaps not (as in bank). Such words present few difficulties in everyday language, but pose central problems for linguists and lexicographers, especially for those involved in lexical semantics and in computational modelling. The contributors to this book, leading researchers in theoretical and computational linguistics, consider the implications of these problems for grammatical theory and how they may be addressed by computational means. The theoretical essays in the book examine polysemy as an aspect of a broader theory of word meaning. Three theoretical approaches are presented: the Classical (or Aristotelian), the Prototypical, and the Relational. Their authors describe the nature of polysemy, the criteria for detecting it, and its manifestations across languages. They examine the issues arising from the regularity of polysemy and the theoretical principles proposed to account for the interaction of lexical meaning with the semantics and syntax of the context in which it occurs. Finally, they consider the formal representations of meaning in the lexicon, and their implications for dictionary construction. The computational essays are concerned with the challenge that polysemy poses for automatic sense disambiguation: how the intended meaning of a word occurrence can be identified. The approaches presented include the exploitation of lexical information in machine-readable dictionaries, machine learning based on patterns of word co-occurrence, and hybrid approaches that combine the two.
As a whole, the volume shows how on the one hand theoretical work provides the motivation and may suggest the basis for computational algorithms, while on the other computational results may validate, or reveal problems in, the principles set forth by theories.
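The dictionary-based disambiguation strategy mentioned above, exploiting lexical information from machine-readable dictionaries, can be illustrated with a simplified Lesk-style sketch: choose the sense whose gloss shares the most words with the surrounding context. The two-sense mini-dictionary for bank below is invented for the example.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- A toy machine-readable dictionary: each sense of "bank"
-- is paired with the words of its gloss.
senses :: [(String, [String])]
senses =
  [ ("bank/finance", words "an institution for deposits loans and money")
  , ("bank/river",   words "the sloping land beside a river or stream")
  ]

-- Count context words that also appear in a gloss.
overlap :: [String] -> [String] -> Int
overlap ctx gloss = length (filter (`elem` gloss) ctx)

-- Pick the sense whose gloss overlaps most with the context.
disambiguate :: [String] -> String
disambiguate ctx = fst (maximumBy (comparing (overlap ctx . snd)) senses)

main :: IO ()
main = putStrLn (disambiguate (words "she sat on the bank of the river"))
-- prints "bank/river": "river" overlaps with the second gloss
```

A co-occurrence-based learner would replace the hand-written glosses with context statistics gathered from sense-tagged corpora, and the hybrid approaches the volume describes combine both signals.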