This volume is a collection of original contributions on core knowledge in fundamental domains. It includes work on naive physics, such as formal specifications of intuitive theories of spatial relations, time, causality, substance, and physical objects, and on naive psychology.
Ontology began life in ancient times as a fundamental part of philosophical enquiry, concerned with the analysis and categorisation of what exists. In recent years the subject has taken a practical turn with the advent of complex computerised information systems, which rely on robust and coherent representations of their subject matter. The systematisation and elaboration of such representations and their associated reasoning techniques constitute the modern discipline of formal ontology, which is now being applied to domains as diverse as artificial intelligence, computational linguistics, bioinformatics, GIS, knowledge engineering, information retrieval and the Semantic Web. Researchers in all these areas are becoming increasingly aware that serious engagement with ontology, understood as a general theory of the types of entities and relations making up their respective domains of enquiry, is needed to provide a solid foundation for their work. The conference series Formal Ontology in Information Systems (FOIS) provides a meeting point for researchers from these and other disciplines with an interest in formal ontology, where both theoretical issues and concrete applications can be explored in a spirit of genuine interdisciplinarity. This volume contains the proceedings of the sixth FOIS conference, held in Toronto, Canada, on 11-14 May 2010. It includes invited talks by Francis Jeffry Pelletier, John Bateman, and Alan Rector, together with the 28 peer-reviewed submissions selected for presentation at the conference, ranging from foundational issues to more application-oriented topics.
How we can create artificial intelligence with broad, robust common sense rather than narrow, specialized expertise. It’s sometime in the not-so-distant future, and you send your fully autonomous self-driving car to the store to pick up your grocery order. The car is endowed with as much capability as an artificial intelligence agent can have, programmed to drive better than you do. But when the car encounters a traffic light stuck on red, it just sits there—indefinitely. Its obstacle-avoidance, lane-following, and route-calculation capacities are all irrelevant; it fails to act because it lacks the common sense of a human driver, who would quickly figure out what’s happening and find a workaround. In Machines like Us, Ron Brachman and Hector Levesque—both leading experts in AI—consider what it would take to create machines with common sense rather than just the specialized expertise of today’s AI systems. Using the stuck traffic light and other relatable examples, Brachman and Levesque offer an accessible account of how common sense might be built into a machine. They analyze common sense in humans, explain how AI over the years has focused mainly on expertise, and suggest ways to endow an AI system with both common sense and effective reasoning. Finally, they consider the critical issue of how we can trust an autonomous machine to make decisions, identifying two fundamental requirements for trustworthy autonomous AI systems: having reasons for doing what they do, and being able to accept advice. Both in the end are dependent on having common sense.
Lay theories - the informal, common-sense explanations people give for particular social behaviours - are often very different from formal 'scientific' explanations of what actually happens. While they have been studied in the past, this is the first attempt to review, in detail, the nature of these beliefs. More specifically, it is the first study to consider such fundamental questions as the structure, aetiology, stability and consequence of lay theories about a range of topics. Each chapter covers a different area, such as psychology, psychiatry, medicine, economics, statistics, law and education.
This groundbreaking work examines today's notions of folk psychology. Bringing together disciplines as varied as cognitive science and anthropology, the contributors analyze the consensus views on the subject. They all maintain that current understandings of folk psychology, and of the mechanisms that underlie it, need to be revised, supplemented, or dismissed altogether. This makes the book essential reading for those in the field.
Now in paperback for the first time since its original publication, the material gathered here is ideal for anyone who needs a detailed and accessible introduction to the most important semantic theories. Designed for a wide audience, it will be of great value to linguists, cognitive scientists, philosophers, and computer scientists working on natural language. The book covers theories of lexical semantics, cognitively oriented approaches to semantics, compositional theories of sentence semantics, and discourse semantics. This clear, elegant explanation of the key theories in semantics research is essential reading for anyone working in the area.