Images are not neutral conveyors of messages shipped around the globe to achieve globalized spectatorship. They are powerful forces that elicit very diverse responses and can resist new visual hegemonies of our global world. Bringing together case studies from the fields of media, art, politics, religion, anthropology and science, this volume breaks new ground by reflecting on the very power of images beyond their medial exploitation. The contributions by Hans Belting, Susan Buck-Morss, Georges Didi-Huberman, W.J.T. Mitchell, and Ticio Escobar among others testify that globalization does not necessarily equal homogenization, and that images can open up alternative ways of picturing what is to come.
High Dynamic Range Imaging, Second Edition, is an essential resource for anyone working with images, whether in computer graphics, film, video, photography, or lighting design. It describes HDRI technology in its entirety and covers a wide range of topics, from capture devices to tone reproduction and image-based lighting. The techniques described enable students to produce images with a dynamic range much closer to that found in the real world, leading to an unparalleled visual experience. This revised edition includes new chapters on High Dynamic Range Video Encoding, High Dynamic Range Image Encoding, and High Dynamic Range Display Devices, while all existing chapters have been updated to reflect the current state of the art. The book serves as both an introduction to the field and an authoritative technical reference.
- Written by the inventors and initial implementors of High Dynamic Range Imaging
- Covers the basic concepts (including just enough about human vision to explain why HDR images are necessary), image capture, image encoding, file formats, display techniques, tone mapping for lower dynamic range display, and the use of HDR images and calculations in 3D rendering
- The range and depth of coverage suits the knowledgeable researcher as well as those just starting to learn about High Dynamic Range Imaging
- The prior edition of this book included a DVD-ROM; its files can be accessed at: http://www.erikreinhard.com/hdr_2nd/index.html
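Since the blurb names tone mapping for lower dynamic range display only in passing, a minimal sketch may help make the idea concrete: scene luminances spanning several orders of magnitude are compressed into the displayable [0, 1] range with a simple global curve. The L/(1+L) compression and the "key" scaling follow the widely cited photographic operator associated with one of the book's authors, but the function name and the synthetic test image below are illustrative assumptions, not code from the book.

```python
# A minimal sketch of global tone mapping in the spirit of the
# photographic operator; the simplified L/(1+L) curve and parameter
# names are illustrative, not the book's exact formulation.
import numpy as np

def tone_map(hdr, key=0.18, eps=1e-6):
    """Compress a linear-light HDR image (float RGB) into [0, 1]."""
    # Luminance via the Rec. 709 weights.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Scale so the log-average luminance maps to the chosen key value.
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    scaled = key * lum / log_avg
    # Simple global compression curve: L / (1 + L).
    lum_display = scaled / (1.0 + scaled)
    # Rescale RGB by the ratio of display to scene luminance.
    ratio = (lum_display / (lum + eps))[..., None]
    return np.clip(hdr * ratio, 0.0, 1.0)

# Example: a synthetic HDR gradient spanning four orders of magnitude.
hdr = np.stack([np.logspace(-2, 2, 256)] * 3, axis=-1)[None, ...]
ldr = tone_map(hdr)
```

A global curve like this treats every pixel identically; local operators of the kind the book also discusses instead adapt the compression to neighborhood luminance.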
Engages with the impact of modern technology on experimental physicists. This study reveals how the increasing scale and complexity of apparatus have distanced physicists from the very science that drew them into experimenting, and have fragmented microphysics into distinct technical traditions.
This text is concerned with a probabilistic approach to image analysis as initiated by U. GRENANDER, D. and S. GEMAN, B.R. HUNT and many others, and developed and popularized by D. and S. GEMAN in their seminal 1984 paper. It formally adopts the Bayesian paradigm and therefore is referred to as 'Bayesian Image Analysis'. There has been considerable and still growing interest in prior models and, in particular, in discrete Markov random field methods. Whereas image analysis is replete with ad hoc techniques, Bayesian image analysis provides a general framework encompassing various problems from imaging. Among these are such 'classical' applications as restoration, edge detection, texture discrimination, motion analysis and tomographic reconstruction. The subject is rapidly developing and in the near future is likely to deal with high-level applications like object recognition. Fascinating experiments by Y. CHOW, U. GRENANDER and D.M. KEENAN (1987, 1990) strongly support this belief.
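To make the Bayesian paradigm concrete, here is a toy restoration example in the spirit of the framework the text describes: a binary image corrupted by Gaussian noise is cleaned by maximizing the posterior under an Ising (discrete Markov random field) prior. The sketch uses Besag's iterated conditional modes, a simple deterministic relative of the Gemans' Gibbs-sampling scheme; the test image, noise level, and coupling parameter are illustrative assumptions.

```python
# Toy sketch of Bayesian image restoration with a discrete Markov
# random field prior: a noisy binary (+1/-1) image is cleaned by
# greedy posterior maximization with iterated conditional modes (ICM).
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a binary image with a square in the middle.
truth = -np.ones((64, 64))
truth[16:48, 16:48] = 1.0
# Degraded observation: truth plus Gaussian noise (the likelihood model).
sigma = 0.9
obs = truth + sigma * rng.normal(size=truth.shape)

beta = 1.5             # strength of the Ising smoothness prior
x = np.sign(obs)       # initialize with the noisy pixel labels

for _ in range(10):    # ICM sweeps: greedy ascent on the posterior
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # Sum of the 4-neighborhood labels (prior term).
            nb = sum(x[a, b]
                     for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                     if 0 <= a < x.shape[0] and 0 <= b < x.shape[1])
            # Local negative log-posterior for a candidate label:
            # data fidelity (label - obs)^2 / (2 sigma^2) minus the
            # Ising reward beta * label * (sum of neighbor labels).
            def energy(label):
                return (label - obs[i, j])**2 / (2 * sigma**2) - beta * label * nb
            x[i, j] = 1.0 if energy(1.0) < energy(-1.0) else -1.0

print("fraction of pixels recovered:", np.mean(x == truth))
```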
The classic work on the evaluation of city form. What does the city's form actually mean to the people who live there? What can the city planner do to make the city's image more vivid and memorable to the city dweller? To answer these questions, Mr. Lynch, supported by studies of Los Angeles, Boston, and Jersey City, formulates a new criterion—imageability—and shows its potential value as a guide for the building and rebuilding of cities. The wide scope of this study leads to an original and vital method for the evaluation of city form. The architect, the planner, and certainly the city dweller will all want to read this book.
This research addresses the problem of acquiring a time series of magnetic resonance images with both high spatial and temporal resolution. Specifically, we systematically investigate the advantages and limitations of reduced-encoding imaging using a priori constraints. This study reveals that if the available a priori information is a reference image, direct use of this information to 'optimize' data acquisition using the existing wavelet transform or singular value decomposition schemes can undermine the capability to detect new image features. However, proper incorporation of the a priori information in the image reconstruction step can significantly reduce the resolution loss associated with reduced encoding. For Fourier-encoded data, we have shown that the Generalized-Series (GS) model is an effective mathematical framework for carrying out the constrained reconstruction step. Several techniques are proposed in this dissertation to improve the basis functions of the GS model by introducing dynamic information. The two-reference reduced-encoding imaging by generalized-series reconstruction (TRIGR) method suppresses background information through the use of a second high-resolution reference image. A second technique injects information from the dynamic data into the GS basis functions, as opposed to deriving them solely from the reference information. These techniques allow the GS basis functions to represent the areas of dynamic change more accurately. Finally, motion that occurs between the acquisition of the reference and dynamic data sets can render the reference information useless as a constraint for image reconstruction. A motion compensation method is proposed which uses a similarity norm to accurately detect the motion despite contrast changes and the low-resolution nature of the dynamic data.
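Since the abstract describes the generalized-series model only in words, here is a toy one-dimensional sketch of the core idea (not the TRIGR or motion-compensation methods): the reference image's magnitude modulates a few low-order Fourier harmonics, and the harmonic coefficients are fitted to the small set of measured central k-space samples of the dynamic image. The signal shapes, grid sizes, and the comparison against plain zero-filled reconstruction are illustrative assumptions.

```python
# Toy 1D sketch of reduced-encoding reconstruction with a
# generalized-series (GS) model fitted to central k-space data.
import numpy as np

N, M = 128, 16                          # full grid size, reduced encodings
x = np.arange(N)

# Reference image and a "dynamic" image with a new local feature.
ref = np.exp(-((x - 64) / 20.0) ** 2)
dyn = ref + 0.5 * np.exp(-((x - 80) / 4.0) ** 2)

# Acquire only the M central k-space samples of the dynamic image.
harmonics = np.arange(-M // 2, M // 2)  # low spatial frequencies
k_dyn = np.fft.fftshift(np.fft.fft(dyn))
center = N // 2 + harmonics             # indices of those frequencies
data = k_dyn[center]

# GS model: dyn(x) ~ |ref(x)| * sum_n c_n exp(2*pi*i*n*x/N).
# Its k-space samples are linear in c, so fit c by solving A c = data,
# where column n of A is the k-space of |ref| shifted by harmonic n.
basis = np.abs(ref)[:, None] * np.exp(2j * np.pi * np.outer(x, harmonics) / N)
A = np.fft.fftshift(np.fft.fft(basis, axis=0), axes=0)[center, :]
c = np.linalg.solve(A, data)
gs_recon = (basis @ c).real

# Baseline: zero-filled Fourier reconstruction from the same M samples.
k_zf = np.zeros(N, dtype=complex)
k_zf[center] = data
zf_recon = np.fft.ifft(np.fft.ifftshift(k_zf)).real

print("GS reconstruction error:", np.linalg.norm(gs_recon - dyn))
print("zero-fill reconstruction error:", np.linalg.norm(zf_recon - dyn))
```

Because the reference supplies the high spatial frequencies that the reduced encoding never measures, the GS fit typically recovers sharp structure that zero-filling blurs, which is the resolution-loss reduction the abstract refers to.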
How computer graphics transformed the computer from a calculating machine into an interactive medium, as seen through the histories of five technical objects. Most of us think of computer graphics as a relatively recent invention, enabling the spectacular visual effects and lifelike simulations we see in current films, television shows, and digital games. In fact, computer graphics have been around as long as the modern computer itself, and played a fundamental role in the development of our contemporary culture of computing. In Image Objects, Jacob Gaboury offers a prehistory of computer graphics through an examination of five technical objects--an algorithm, an interface, an object standard, a programming paradigm, and a hardware platform--arguing that computer graphics transformed the computer from a calculating machine into an interactive medium. Gaboury explores early efforts to produce an algorithmic solution for the calculation of object visibility; considers the history of the computer screen and the random-access memory that first made interactive images possible; examines the standardization of graphical objects through the Utah teapot, the most famous graphical model in the history of the field; reviews the graphical origins of the object-oriented programming paradigm; and, finally, considers the development of the graphics processing unit as the catalyst that enabled an explosion in graphical computing at the end of the twentieth century. The development of computer graphics, Gaboury argues, signals a change not only in the way we make images but also in the way we mediate our world through the computer--and how we have come to reimagine that world as computational.