The Large Hadron Collider (LHC) near Geneva, Switzerland, became known worldwide in 2012 with the discovery of the Higgs boson, a decisive confirmation of the Standard Model of particle physics. One of the experiments running at the LHC is ATLAS, whose namesake detector is designed to study proton-proton collisions. The detector now awaits the high-luminosity upgrade of the collider, with operations scheduled to begin in 2027. In preparation, a team of physicists and computer scientists has developed a machine-learning algorithm that brings the current detector closer to handling the much larger volume of data expected after the upgrade. The study was published in the Journal of Instrumentation.

The largest machine ever built, the LHC accelerates two beams of protons in opposite directions around a 17-mile ring until they approach the speed of light, smashes them together, and analyzes the collision products with giant detectors such as ATLAS. The ATLAS detector is as tall as a six-story building and weighs about 7,000 tons. Today, the LHC continues to study the Higgs boson while also tackling fundamental questions about how and why matter in the universe is the way it is.

Most of the research questions at ATLAS come down to finding a needle in a giant haystack: scientists are interested in finding just one event among a billion others.

Walter Hopkins, assistant physicist in the High Energy Physics (HEP) division at Argonne National Laboratory

As part of the LHC upgrade, the collider's luminosity (the number of proton-proton interactions when the two proton beams cross) will be increased about fivefold. That will yield roughly 10 times more data per year than LHC experiments currently collect. How well the detectors will cope with this increased event rate remains to be seen, and answering that question requires running high-precision computer simulations of the detectors to accurately evaluate the processes produced in LHC collisions. Such large-scale simulation is costly and computationally demanding, even on the world's best and most powerful supercomputers.

The Argonne team has created a machine-learning algorithm that runs as a preliminary stage ahead of any full-scale simulation. Far faster and at much lower cost, it shows how the real detector will respond to the larger volume of data expected after the upgrade. This involves modeling the detector's response to a particle-collision experiment and reconstructing physics objects from the resulting processes.
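The article does not describe the algorithm's internals, but the general idea of such a learned "fast simulation" can be sketched: train a model on a modest number of expensive full-simulation examples, then use it to predict detector response cheaply. The sketch below is a loose, invented illustration using scikit-learn; the inputs (particle energy and pseudorapidity), the toy smearing that stands in for the full simulation, and all names are assumptions, not the team's actual method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy "truth-level" particle inputs: energy [GeV] and pseudorapidity.
# (Invented stand-ins; real ATLAS inputs are far richer.)
n = 5000
truth = np.column_stack([
    rng.uniform(10.0, 200.0, n),   # energy
    rng.uniform(-2.5, 2.5, n),     # eta
])

# Toy "full simulation" output: the measured energy, here just the
# true energy with 5% Gaussian smearing. In reality each of these
# labels would come from a slow, detailed detector simulation.
measured = truth[:, 0] * (1.0 + rng.normal(0.0, 0.05, n))

# Train a small neural network as a fast surrogate for the
# expensive simulation.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32),
                         max_iter=1000,
                         random_state=0)
surrogate.fit(truth, measured)

# The trained surrogate can now predict detector response for new
# particles almost instantly, without rerunning the full simulation.
pred = surrogate.predict(truth[:10])
```

Once trained, the surrogate's per-event cost is a single forward pass, which is why such models are attractive for estimating how a detector will behave under the much higher event rates of the upgraded collider.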

Discovering new physics at the LHC and elsewhere requires increasingly sophisticated methods for analyzing big data, and machine learning and other artificial-intelligence techniques are proving useful here.

The team's algorithm may prove invaluable not only for ATLAS but also for the other experimental detectors at the LHC, as well as for other particle-physics experiments now being carried out around the world.