
Shining a light on dark matter


Author(s)

Mike Jackson

Posted on 19 December 2016

Estimated read time: 6 min

[Image: View of the 260 tonne water tank that will house the LZ experiment, located 1 mile underground in Davis Cavern of the Sanford Underground Research Facility, South Dakota. Credit: Carlos Faham, Berkeley Lab.]

By Mike Jackson, Software Sustainability Institute

Around 85% of the matter in the Universe is dark matter. Despite indirect evidence for its existence going back to the early 20th century, there has so far been no direct measurement of dark matter interacting with a detector here on Earth. Not yet, at least: the LUX-ZEPLIN (LZ) project is building the largest and most sensitive dark matter detector of its type ever constructed. I will be providing consultancy to LZ’s researchers at University College London on migrating LZ’s data storage and analysis software from Microsoft Excel to a database-centred solution.

The LUX-ZEPLIN project is a consortium of 230 scientists in 37 institutions in the US, UK, Portugal, Russia, and Korea, and is jointly funded by the US Department of Energy and the UK Science and Technology Facilities Council (STFC). LZ are building their dark matter detector a mile underground in the Sanford Underground Research Facility (SURF) in Lead, South Dakota, a short ride from the famous Old West town of Deadwood. When the detector goes live in 2020, 7 tonnes of liquefied xenon will be used to detect the faint and extremely rare interactions expected between galactic dark matter and regular matter, in this case the xenon itself.

Background radioactivity hampers our hunt for dark matter

Detecting dark matter signals is a significant challenge. The signals may be very faint: a typical collision by a dark matter particle would impart around one-billionth of the kinetic energy of a mosquito. The signals are also very rare: only a handful of interactions might be expected over the entire 3-year running time of the detector. Natural background radioactivity, present in the detector materials, the xenon itself, and the environment in which the detector operates, makes signal detection even more challenging.

To reduce the impact of background radioactivity, the detector is shielded from cosmic rays by over a mile of rock overburden and further protected by a 50,000-gallon tank of ultra-pure water, creating an ultra-low-background environment in which to run. In this environment the level of background radioactivity is so low that the dominant sources are the materials used to construct the detector itself and the intrinsic activity in the 7 tonnes of liquefied xenon.

Controlling and predicting this background radioactivity is critical to being able to discover dark matter, so there is an extensive screening campaign to source materials whose levels of radioactivity are low enough not to interfere with the rarest dark matter signals.

Predicting interference from detector materials

Many hundreds of potential detector materials are screened in facilities in the UK and US prior to their use in the experiment. The radioactivity of these materials is measured, and these measurements are used to build a complete model of the flux of radiation that the detector will see during its operation. Interpreting the impact of each material requires a detailed and computationally demanding Monte Carlo simulation of the experiment and of the propagation of radiation throughout a complex 3-dimensional geometry. Keeping track of the results of the screening campaign and the outputs of the simulation in a form that is reliable, reproducible, and accessible to other members of LZ is of utmost importance. This is the focus of LZ research at the Department of Physics and Astronomy at University College London. UCL is part of the UK contingent of LZ, a consortium of 7 universities and research laboratories, around 40 researchers in total, that together lead and make significant contributions to many areas of LZ.
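To give a flavour of the computation, here is a deliberately simplified Monte Carlo sketch in Python. The material, its activity and mass, and the single "hit probability" standing in for the full 3-dimensional radiation transport are all illustrative assumptions, not LZ's actual simulation or numbers.

    import random

    def expected_background_rate(activity_bq_per_kg, mass_kg, hit_probability,
                                 n_trials=100000):
        # activity_bq_per_kg: measured radioactivity from screening (decays/s/kg)
        # hit_probability: chance that radiation from a single decay reaches
        #                  the xenon (a stand-in for the full 3D transport)
        hits = sum(1 for _ in range(n_trials) if random.random() < hit_probability)
        return activity_bq_per_kg * mass_kg * (hits / float(n_trials))  # events/second

    # Hypothetical screening result: 2 mBq/kg of uranium-238 in 500 kg of steel,
    # with 1 decay in 10,000 reaching the xenon.
    rate = expected_background_rate(2e-3, 500.0, 1e-4)
    print("Expected background: {0:.4f} events/day".format(rate * 86400))

In the real simulation, the fixed hit probability is replaced by tracking each decay's radiation through the full detector geometry, which is what makes the computation so demanding.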

Along with their colleagues at the University of Coimbra, UCL maintain LZ’s backgrounds control software, which records and tracks the results from the materials screening campaign in order to build up an accurate model of the radiogenic backgrounds that the experiment will see. These results not only inform the design of the experiment and the procurement of expensive detector materials, but will also serve as a key input to the analysis and interpretation of data once the detector is running.

Storage, management, and analysis of the screening results is currently done via a large Microsoft Excel spreadsheet, which combines the results of the screening campaign (stored in a separate MongoDB database) with the results of the detailed simulations of the detector to build the model of radiogenic backgrounds. This spreadsheet is complemented by, and populated with, data produced by C, C++, Python, and bash code.
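As a sketch of the kind of join the spreadsheet currently performs, the Python below pulls screening measurements from MongoDB and combines them with simulation outputs using pandas. The collection, field, and file names are illustrative guesses, not LZ's actual schema.

    import pandas as pd
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder URI
    screening = pd.DataFrame(list(client["lz"]["screening_results"].find(
        {}, {"_id": 0, "material": 1, "isotope": 1, "activity_mbq_per_kg": 1})))

    # Simulated detector response per material/isotope (events seen per decay),
    # e.g. exported from the Monte Carlo as a CSV with columns:
    # material, isotope, events_per_decay.
    simulation = pd.read_csv("simulated_response.csv")

    # The merge below is, in essence, what the spreadsheet's lookups do.
    model = screening.merge(simulation, on=["material", "isotope"])
    model["rate"] = model["activity_mbq_per_kg"] * 1e-3 * model["events_per_decay"]
    print(model.sort_values("rate", ascending=False).head())

A database-centred version of this pipeline makes each step scriptable and testable, rather than buried in spreadsheet formulae.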

Moving on from Excel

Whilst fit for purpose in the early design and procurement stage, Excel is now reaching its limits: in sustainability; in its ability to interface with other software in the experiment (for example, the analysis software that interprets the dark matter data); and in the interface it offers other users in the collaboration, who need to view and interactively query both current and historical data (for example, to track changes over time). Jim Dobson of UCL applied to the Institute’s open call for projects to request consultancy on how LZ could move on from Excel to a database-centred solution.

Using LZ’s current software usage and development practices as a basis, I, with the help of my data science colleagues at EPCC, will provide advice on, and help plan, a transition from Excel to a database-centred solution: one that interfaces efficiently with the rest of the LZ software, via open data formats or interfaces where appropriate, and provides an interactive, feature-rich view onto the data for the collaborators who use it. We will also explore the best ways to archive and operate on more complex data types (beyond single scalar numbers), and to record where results originated and the state of the database at various points in its past. I look forward to reporting on progress.
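One option we will consider for history and provenance, sketched below under assumed table and column names (they are illustrative, not a settled design), is an append-only store: rows are never updated in place, and each carries its source and a timestamp, so the state of the database at any past moment can be reconstructed.

    import sqlite3

    conn = sqlite3.connect("backgrounds.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS screening_results (
            id          INTEGER PRIMARY KEY,
            material    TEXT NOT NULL,
            isotope     TEXT NOT NULL,
            activity    REAL NOT NULL,  -- mBq/kg
            source      TEXT NOT NULL,  -- which facility/run produced the value
            recorded_at TEXT NOT NULL   -- ISO 8601 timestamp
        )""")
    conn.execute(
        "INSERT INTO screening_results "
        "(material, isotope, activity, source, recorded_at) VALUES (?, ?, ?, ?, ?)",
        ("titanium", "U-238", 0.25, "screening run 12", "2016-12-01T10:00:00"))
    conn.commit()

    # Reconstruct the model as it stood on a given date: the most recent
    # measurement per material/isotope on or before that time.
    as_of = "2016-12-31T00:00:00"
    rows = conn.execute("""
        SELECT material, isotope, activity FROM screening_results s
        WHERE recorded_at = (SELECT MAX(recorded_at) FROM screening_results
                             WHERE material = s.material AND isotope = s.isotope
                               AND recorded_at <= ?)""", (as_of,)).fetchall()
    print(rows)

Whether the eventual solution is a relational database or a layer over the existing MongoDB, the same append-only idea gives "tracking changes over time" almost for free.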
