To conclude our detailed look at the ATLAS experiment, this episode looks at the computing infrastructure. We start out with the trigger systems that decide, very quickly, whether the data from a particular collision is worth keeping. We then discuss the reconstruction of the event, the simulation needed to understand the background, as well as the LHC Grid used to distribute data and computation across the whole planet. Our guest is CERN’s Frank Berghaus.
After understanding the history and development of ATLAS (and covering the LHC and particle physics in general) in previous episodes, we are now at the point where we can try to understand how a scientist uses the data produced by one of these large detectors and makes sense of it. This is what we’ll do in this episode with physicist (and listener) Philipp Windischhofer. If you want to learn even more, you can check out these links provided by Philipp or read the last chapter of the book :-)
ATLAS is one of the two general-purpose experiments at the LHC. It has been conceived, designed, and built over decades by hundreds of scientists and engineers from dozens of countries and hundreds of organizations. My guest, Peter Jenni, has been the head of the ATLAS collaboration for most of this time. In this episode we talk about science and engineering, but mostly about organizational aspects and the “community management” necessary to get such a magnificent machine off the ground.
In May I visited ALICE, one of the four large experiments at the LHC, and talked with Despina Hatzifotiadou. We briefly discussed the science that ALICE is interested in, and then spent the majority of the time dissecting the detector to understand its components and how they detect the various products of particle collisions.