Relating the story of the transatlantic struggle for subnuclear domination, The Quark Machines: How Europe Fought the Particle Physics War, Second Edition covers the history, the politics, and the personalities of particle physics. Extensively illustrated with original photographs of the key players in the field, the book sheds new light on the sovereignty issues of modern scientific research as well as the insights it has produced. Throughout the twentieth century, Europe and the United States vied for supremacy in subnuclear physics. Initially, the advent of World War II and an enforced exodus of scientific talent from Europe boosted American efforts. Then, buoyed by the need to develop the bomb and the ensuing distrust of the Cold War, the United States vaulted into a commanding role, a position it retained for almost fifty years. Throughout this period, each new particle accelerator was a major campaign, each new particle a battle won. With the end of the Cold War, U.S. preeminence evaporated and Europe retook the advantage. Now CERN, for four decades the spearhead of the European fightback, stands as the leading global particle physics center. Today, particle physics is at a turning point in its history, and how well Europe retains its advantage remains to be seen.
Widely regarded as a classic in its field, Constructing Quarks recounts the history of the post-war conceptual development of elementary-particle physics. Inviting a reappraisal of the status of scientific knowledge, Andrew Pickering suggests that scientists are not mere passive observers and reporters of nature. Rather, they are social beings as well as active constructors of natural phenomena who engage in both experimental and theoretical practice. "A prodigious piece of scholarship that I can heartily recommend."—Michael Riordan, New Scientist "An admirable history. . . . Detailed and so accurate."—Hugh N. Pendleton, Physics Today
A Tour of the Subatomic Zoo is a brief and ambitious expedition into the remarkably simple ingredients of all the wonders of nature. Tour guide Professor Cindy Schwarz clearly explains the language and substance of elementary particle physics for the 99% of us who are not physicists. With hardly a mathematical formula, views of matter from the atom to the quark are discussed in a form that an interested person with no physics background can easily understand. It is a look not only into some of the most profound insights of our time, but also at the answers we are still searching for. College and university courses can be developed around this book, and it can be used alone or in conjunction with other material. Even college physics majors would enjoy reading this book as an introduction to particle physics. High-school, and even middle-school, teachers could also use this book to introduce the material to their students. It will also be beneficial for high-school teachers who have not been formally exposed to high-energy physics, have forgotten what they once knew, or are no longer up to date with recent developments.
Summary: White is a colour not found in the rainbow; white is the sum of two complementary colours. One such complementary pair is green and magenta, which together make white light. Green lies in the middle of the rainbow, while magenta arises as the sum of two colours placed symmetrically around that middle: red and blue. In the rainbow we find, among other colours, red, green and blue, but we do not find magenta, because magenta has no wavelength of its own; it occurs only as the sum of two wavelengths, red and blue. Yellow has its own wavelength and is found in the rainbow, but we also get yellow by combining green and red. There are therefore two ways to make yellow: either as the wavelength we find in the rainbow, or as the sum of green and red, which lie symmetrically around yellow.

Green is in the middle of the rainbow and has its own wavelength, but if we take two colours symmetric around green (red and blue) then, by analogy with yellow, we should expect to get green; instead we get magenta. Green is never the sum of two wavelengths: that sum gives magenta, which is therefore not part of the rainbow because it has no wavelength of its own, and the expected "own wavelength" of magenta gives green instead. In this book I call this consciousness's own technology. By that I mean a technology that is not rooted in the material, in matter, but in consciousness itself. By containing a colour not found in the rainbow, magenta, our consciousness is not a mere shadow of the wavelengths of physics.

Why do two colours symmetric around green create not green but magenta? Green and magenta are complementary colours and together they become white light; or, to put it another way, red, green and blue together become white light, which is the core of the computer screen's RGB technology. If we turn off the green light cell on the computer screen, we get magenta, where, according to the model of how the colour yellow behaves, we should have got green.

Imagine a number of jars in front of us. We cannot single out only those jars whose colour is the sum of two wavelengths symmetric around yellow, because we would end up lifting all the yellow jars; but we can differentiate in the case of green, because that sum has its own name, magenta, and we can lift the magenta jars and leave the green ones untouched. Our subjectivity has physical consequences. This is something a computer can never learn by itself; we have to tell it! There is thus no motivating factor in the physical world that would dictate the existence of magenta. We can say that the rainbow is physically conditioned, but magenta is not, because magenta is consciousness's own technology; the colour is not found in the rainbow. This is how we break out of nature, how we break the laws of nature, how free will is made visible. Yes, this discussion about magenta should not have happened. Consciousness has its own technology, elevated above the material. We therefore respond outwardly with something that is not found in the laws of nature; something non-material is projected back onto the world… we lift the magenta jar!
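The additive RGB behaviour described in the summary can be checked with a minimal sketch; the colour values and the mix helper below are illustrative assumptions chosen for this note, not anything taken from the book.

    # Minimal sketch of additive RGB mixing, illustrating the colours discussed above.
    # Channel values range from 0 to 255; the names are illustrative only.

    def mix(*colors):
        """Additively mix RGB triples, clipping each channel at 255."""
        return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

    RED   = (255, 0, 0)
    GREEN = (0, 255, 0)
    BLUE  = (0, 0, 255)

    print(mix(RED, GREEN))        # (255, 255, 0)   -> yellow, which also exists as a single wavelength
    print(mix(RED, BLUE))         # (255, 0, 255)   -> magenta, which has no wavelength of its own
    print(mix(RED, GREEN, BLUE))  # (255, 255, 255) -> white, i.e. green plus magenta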
Every age has characteristic inventions that change the world. In the 19th century it was the steam engine and the train. For the 20th, electric and gasoline power, aircraft, nuclear weapons, even ventures into space. Today, the planet is awash with electronic business, chatter and virtual-reality entertainment so brilliant that the division between real and simulated is hard to discern. But one new idea from the 19th century has failed, so far, to enter reality—time travel, using machines to turn the time dimension into a two-way highway. Will it come true, as foreseen in science fiction? Might we expect visits to and from the future, sooner than from space? That is the Time Machine Hypothesis, examined here by futurist Damien Broderick, an award-winning writer and theorist of the genre of the future. Broderick homes in on the topic through the lens of science as well as fiction, exploring some fifty different time-travel scenarios and conundrums found in the science fiction literature and film.
Machine learning has been part of Artificial Intelligence since its beginning. Indeed, only a perfect being could show intelligent behavior without learning; all others, be they humans or machines, need to learn in order to enhance their capabilities. In the 1980s, learning from examples and modeling human learning strategies were investigated in concert. The formal statistical basis of many learning methods was put forward later and is still an integral part of machine learning. Neural networks have always been in the toolbox of methods. Integrating all the pre-processing, kernel-function, and transformation steps of a machine learning process into the architecture of a deep neural network increased the performance of this model type considerably.

Modern machine learning is challenged on the one hand by the amount of data and on the other hand by the demand for real-time inference. This leads to an interest in computing architectures and modern processors. For a long time, machine learning research could take the von Neumann architecture for granted: all algorithms were designed for the classical CPU, and issues of implementation on a particular architecture were ignored. This is no longer possible. The time for investigating machine learning and computational architecture independently is over.

Computing architecture has experienced a similarly rampant development, from mainframes and personal computers in the last century to very large compute clusters on the one hand and ubiquitous computing of embedded systems in the Internet of Things on the other. The sensors of cyber-physical systems produce huge amounts of streaming data that need to be stored and analyzed, and their actuators need to react in real time. This clearly establishes a close connection with machine learning. Cyber-physical systems and systems in the Internet of Things consist of diverse components, heterogeneous in both hardware and software. Modern multi-core systems, graphics processors, memory technologies, and hardware-software codesign offer opportunities for better implementations of machine learning models. Machine learning and embedded systems together now form a field of research that tackles leading-edge problems in machine learning, algorithm engineering, and embedded systems.

Machine learning today needs to make the resource demands of learning and inference meet the resource constraints of the computer architectures and platforms in use. A large variety of algorithms for the same learning method, and diverse implementations of an algorithm for particular computing architectures, optimize learning with respect to resource efficiency while keeping some guarantees of accuracy. The trade-off between decreased energy consumption and an increased error rate, to give just one example, needs to be shown theoretically both for training a model and for model inference. Pruning and quantization are ways of reducing resource requirements by either compressing or approximating the model. In addition to memory and energy consumption, timeliness is an important issue, since many embedded systems are integrated into larger products that interact with the physical world. If the results are delivered too late, they may have become useless. As a result, real-time guarantees are needed for such systems.
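As a rough sketch of the pruning and quantization just mentioned (an illustration in NumPy written for this note, not code from the book), the following compresses a weight matrix by magnitude pruning and uniform 8-bit quantization:

    import numpy as np

    # Illustrative sketch: magnitude pruning and uniform 8-bit quantization of a
    # weight matrix, two common ways to reduce the resource footprint of a model.

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(256, 256)).astype(np.float32)

    # Magnitude pruning: zero out the 90% of weights with the smallest absolute value.
    threshold = np.quantile(np.abs(weights), 0.9)
    pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

    # Uniform 8-bit quantization: map floats to int8 with a single scale factor.
    scale = np.abs(pruned).max() / 127.0
    quantized = np.round(pruned / scale).astype(np.int8)   # stored model (compressed)
    dequantized = quantized.astype(np.float32) * scale      # values used at inference time

    print("non-zero weights:", np.count_nonzero(pruned), "of", pruned.size)
    print("max quantization error:", np.abs(dequantized - pruned).max())

The accuracy lost through such approximations, traded against memory and energy savings, is exactly the kind of guarantee the series discusses.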
To efficiently utilize the available resources, e.g., processing power, memory, and accelerators, with respect to response time, energy consumption, and power dissipation, different scheduling algorithms and resource management strategies need to be developed. This book series addresses machine learning under resource constraints as well as the application of the described methods in various domains of science and engineering. Turning big data into smart data requires many steps of data analysis: methods for extracting and selecting features, filtering and cleaning the data, joining heterogeneous sources, aggregating the data, and learning predictions all need to scale up. The algorithms are challenged on the one hand by high-throughput data and gigantic data sets, as in astrophysics, and on the other hand by high dimensionality, as in genetic data. Resource constraints are given by the relation between the demands for processing the data and the capacity of the computing machinery; the resources are runtime, memory, communication, and energy. Novel machine learning algorithms are optimized with regard to minimal resource consumption. Moreover, learned predictions are applied to program executions in order to save resources.

The three books have the following subtopics:
Volume 1: Machine Learning under Resource Constraints - Fundamentals
Volume 2: Machine Learning and Physics under Resource Constraints - Discovery
Volume 3: Machine Learning under Resource Constraints - Applications

Volume 2 is about machine learning for knowledge discovery in particle and astroparticle physics. The instruments of these fields, e.g., particle accelerators and telescopes, gather petabytes of data. Here, machine learning is necessary not only to process the vast amounts of data and to detect the relevant examples efficiently, but also as part of the knowledge discovery process itself. The physical knowledge is encoded in simulations that are used to train the machine learning models. At the same time, the interpretation of the learned models serves to expand the physical knowledge. This results in a cycle of theory enhancement supported by machine learning.
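A scaled-down illustration of such a data analysis chain (cleaning, feature selection, and learning a prediction) might look as follows; the synthetic data and the scikit-learn estimators are assumptions chosen for brevity here, not the methods described in the book:

    # Illustrative sketch of a small analysis chain: clean, select features, learn.
    from sklearn.datasets import make_classification
    from sklearn.pipeline import Pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for detector or survey data: 64 features, 20 informative.
    X, y = make_classification(n_samples=5000, n_features=64, n_informative=20, random_state=0)

    pipeline = Pipeline([
        ("clean",  SimpleImputer(strategy="median")),   # filtering/cleaning step
        ("scale",  StandardScaler()),                   # normalisation
        ("select", SelectKBest(f_classif, k=20)),       # feature selection
        ("learn",  RandomForestClassifier(n_estimators=100, random_state=0)),
    ])

    print("cross-validated accuracy:", cross_val_score(pipeline, X, y, cv=3).mean())

In the setting of the book, the training labels would come from physics simulations rather than synthetic data, and every stage would additionally be constrained by runtime, memory, communication, and energy budgets.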
With his unique knack for making cutting-edge theoretical science effortlessly accessible, world-renowned physicist Paul Davies now tackles an issue that has boggled minds for centuries: Is time travel possible? The answer, insists Davies, is definitely yes—once you iron out a few kinks in the space-time continuum. With tongue placed firmly in cheek, Davies explains the theoretical physics that makes visiting the future and revisiting the past possible, then proceeds to lay out a four-stage process for assembling a time machine and making it work. Wildly inventive and theoretically sound, How to Build a Time Machine is creative science at its best—illuminating, entertaining, and thought-provoking.
This book provides a thorough introduction to the phenomenology of heavy flavour physics for those working on the B-factories, LHCb, BTeV, HERA, and the Tevatron. It explains how heavy quark theory can be implemented on the lattice and discusses the status of CP violation in the neutral kaon system.