Interactive Information Seeking, Behaviour and Retrieval

Author: Ian Ruthven

Publisher: Facet Publishing

Published: 2011

Total Pages: 337

ISBN-10: 1856047075

Information retrieval (IR) is a complex human activity supported by sophisticated systems. Information science has contributed much to the design and evaluation of previous generations of IR systems and to our general understanding of how such systems should be designed. Yet, due to the increasing success and diversity of IR systems, many recent textbooks concentrate on the systems themselves and ignore the human side of searching for information. This book is the first text to provide an information science perspective on IR. Unique in its scope, it covers the whole spectrum of information retrieval, including:

- history and background
- information behaviour and seeking
- task-based information searching and retrieval
- approaches to investigating information interaction and behaviour
- information representation
- access models
- evaluation
- interfaces for IR
- interactive techniques
- web retrieval, ranking and personalization
- recommendation, collaboration and social search
- multimedia: interfaces and access.

Readership: senior undergraduates and master's-level students of all information and library studies courses, and practising LIS professionals who need to better appreciate how IR systems are designed, implemented and evaluated.


Interactive Information Retrieval in Digital Environments

Author: Iris Xie

Publisher: IGI Global

Published: 2008

Total Pages: 351

ISBN-13: 9781599042404

"This book includes the integration of existing frameworks on user-oriented information retrieval systems across multiple disciplines; the comprehensive review of empirical studies of interactive information retrieval systems for different types of users, tasks, and subtasks; and the discussion of how to evaluate interactive information retrieval systems"--Provided by publisher.


Introduction to Information Retrieval

Author: Christopher D. Manning

Publisher: Cambridge University Press

Published: 2008-07-07

Total Pages:

ISBN-10: 1139472100

Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering, from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it ideal for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured to make teaching more natural and effective. Slides and additional exercises (with solutions for lecturers) are available through the book's supporting website to help course instructors prepare their lectures.


Test Collection Based Evaluation of Information Retrieval Systems

Author: Mark Sanderson

Publisher: Now Publishers Inc

Published: 2010-06-03

Total Pages: 143

ISBN-10: 1601983603

The use of test collections and evaluation measures to assess the effectiveness of information retrieval systems has its origins in work dating back to the early 1950s. In the nearly 60 years since that work started, the use of test collections has become a de facto standard of evaluation. This monograph surveys the research conducted and explains the methods and measures devised for the evaluation of retrieval systems, including a detailed look at the use of statistical significance testing in retrieval experimentation. It also reviews more recent examinations of the validity of the test collection approach and of evaluation measures, and outlines trends in current research exploiting query logs and live labs. At its core, the modern-day test collection is little different from the structures that the pioneering researchers of the 1950s and 1960s conceived. This tutorial and review shows that, despite its age, this long-standing evaluation method is still a highly valued tool for retrieval research.
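To give a flavour of the kind of measures such test collections support (an illustrative sketch, not taken from the monograph itself; the document IDs and relevance judgments below are invented for the example), here is a minimal computation of precision, recall, and average precision for one query:

```python
# Toy test-collection-style evaluation: a system's ranked list of document
# IDs is scored against a set of judged-relevant documents (the "qrels").

def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for doc in ranking[:k] if doc in relevant) / k

def recall_at_k(ranking, relevant, k):
    """Fraction of all relevant documents found in the top k."""
    return sum(1 for doc in ranking[:k] if doc in relevant) / len(relevant)

def average_precision(ranking, relevant):
    """Mean of precision values at each rank where a relevant doc appears,
    divided by the total number of relevant documents."""
    hits, precisions = 0, []
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

if __name__ == "__main__":
    ranking = ["d3", "d1", "d7", "d2", "d5"]   # system output, best first
    relevant = {"d1", "d2", "d9"}              # judged relevant (qrels)
    print(precision_at_k(ranking, relevant, 5))   # 0.4
    print(recall_at_k(ranking, relevant, 5))      # 2/3 ~ 0.667
    print(average_precision(ranking, relevant))   # (1/2 + 2/4)/3 ~ 0.333
```

Averaging the last value over all queries in a collection gives mean average precision (MAP), one of the standard measures whose statistical comparison across systems the monograph discusses.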


Information Retrieval Evaluation

Author: Donna Harman

Publisher: Springer Nature

Published: 2022-05-31

Total Pages: 107

ISBN-10: 3031022769

Evaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture explains where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment of the search engine world today.

The lecture starts with a discussion of the early evaluation of information retrieval systems, beginning with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed.

The second chapter covers the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages), INEX (emphasis on semi-structured data), etc. Here again the focus is on the how and why, and in particular on the evolution of the older evaluation methodologies to handle new information access techniques. This includes how the test collection techniques were modified and how the metrics were changed to better reflect operational environments.

The final chapters look at evaluation issues in user studies -- the interactive part of information retrieval -- including a look at the search log studies mainly done by the commercial search engines. Here the goal is to show, via case studies, how the high-level issues of experimental design affect the final evaluations.
Table of Contents: Introduction and Early History / "Batch" Evaluation Since 1992 / Interactive Evaluation / Conclusion


Information Storage and Retrieval Systems

Author: Gerald J. Kowalski

Publisher: Springer Science & Business Media

Published: 2005-11-19

Total Pages: 323

ISBN-10: 0306470314

Chapter 1 places a total Information Storage and Retrieval System into perspective. This perspective introduces new challenges to the problems that need to be theoretically addressed and commercially implemented. Ten years ago, commercial implementation of the algorithms being developed was not realistic, allowing theoreticians to limit their focus to very specific areas. Bounding a problem is still essential in deriving theoretical results, but the commercialization and insertion of this technology into widely used systems such as the Internet changes the way problems are bounded. From a theoretical perspective, efficient scalability of algorithms to systems with gigabytes and terabytes of data, operation with minimal user search statement information, and maximum use of all functional aspects of an information system all need to be considered. Dissemination systems that use persistent indexes or mail files to modify ranking algorithms, and that combine the search of structured information fields and free text into a consolidated weighted output, are examples of potential new areas of investigation. The best way for the theoretician or the commercial developer to understand the importance of the problems to be solved is to place them in the context of a total vision of a complete system. Understanding the differences between digital libraries and information retrieval systems adds a further dimension to the potential future development of systems. The collaborative aspects of digital libraries can be viewed as a new source of information that could dynamically interact with information retrieval techniques.


Evaluating Information Retrieval and Access Tasks

Author: Tetsuya Sakai

Publisher: Springer Nature

Published: 2020

Total Pages: 225

ISBN-10: 9811555540

This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR), a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. In some chapters, for example, the reader sees the early seeds of what eventually grew to be the search engines that provide access to content on the World Wide Web, today's smartphones that can tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life.

Key to the success of the NTCIR endeavor was the early recognition that information access research is an empirical discipline and that evaluation therefore lay at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book. The chapters show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design.

The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants. This book is suitable for researchers, practitioners, and students -- anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.


Online Evaluation for Information Retrieval

Author: Katja Hofmann

Publisher:

Published: 2016-06-07

Total Pages: 134

ISBN-13: 9781680831634

This book provides a comprehensive overview of online evaluation for information retrieval. It shows how online evaluation is used for controlled experiments, segmenting them into experiment designs that allow absolute or relative quality assessments. It also includes an extensive discussion of recent work on data re-use and on experiment estimation based on historical data.