Article

Abstract

The ROOT system is an Object Oriented framework for large scale data analysis. ROOT, written in C++, contains, among others, an efficient hierarchical OO database, a C++ interpreter, advanced statistical analysis tools (multi-dimensional histogramming, fitting, minimization, cluster finding algorithms) and visualization tools. The user interacts with ROOT via a graphical user interface, the command line or batch scripts. The command and scripting language is C++ (using the interpreter), and large scripts can be compiled and dynamically linked in. The OO database design has been optimized for parallel access (reading as well as writing) by multiple processes.
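The interactive C++ workflow described above can be illustrated with a minimal macro. The following sketch assumes only a standard ROOT installation (file and object names are arbitrary) and is not taken from the paper itself:

```cpp
// mini.C -- run with: root -l mini.C
// Minimal sketch of the interpreted-C++ workflow: fill a histogram,
// fit it, and persist it to a ROOT file.
#include "TH1F.h"
#include "TFile.h"
#include "TRandom3.h"

void mini() {
   TH1F h("h", "Gaussian sample;x;entries", 100, -5., 5.);
   TRandom3 rng(4357);
   for (int i = 0; i < 10000; ++i)
      h.Fill(rng.Gaus(0., 1.));        // 2D/3D histogramming also exists (TH2, TH3)

   h.Fit("gaus");                      // built-in Gaussian fit via the minimization layer

   TFile f("demo.root", "RECREATE");   // the hierarchical object store from the abstract
   h.Write();
   f.Close();
}
```

Such a macro can run interpreted, or be compiled and dynamically linked with ACLiC (`root -l mini.C+`), the mechanism the abstract alludes to for large scripts.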


... There is a wide range of serialization libraries. Chapter 6 compares the performance against ROOT IO [360], which according to Blomer [361] outperforms Protobuf [362], HDF5 [363], Parquet [364], and Avro [365]. MPI [366] also provides functionality to define derived data types, but targets use cases with regular patterns, for example, the last element of each row in a matrix. ...
... BioDynaMo uses ROOT [360] to integrate the backup and restore functionality transparently. ...
... BioDynaMo builds upon the rich features of CERN's primary data analysis framework ROOT [360], which provides an extensive mathematical, histogram, graphing, and fitting library. BioDynaMo complements this functionality by providing an easy mechanism to collect simulation ...
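The "transparent backup and restore" referred to in these excerpts builds on ROOT's generic object serialization. A minimal illustration of that mechanism, using std::vector<double> as a stand-in for simulation state (this is not BioDynaMo's actual API):

```cpp
// Checkpoint and restore an object via ROOT I/O (illustrative only).
#include "TFile.h"
#include <vector>

void backupRestore() {
   std::vector<double> state{1.0, 2.0, 3.0};    // stand-in for simulation state

   TFile out("backup.root", "RECREATE");
   out.WriteObject(&state, "state");            // stream the object to disk
   out.Close();

   TFile in("backup.root", "READ");
   auto *restored = in.Get<std::vector<double>>("state");
   (void)restored;  // holds the saved contents, even in a different process
}
```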
Preprint
Full-text available
Agent-based modeling is indispensable for studying complex systems across many domains. However, existing simulation platforms exhibit two major issues: performance and modularity. Low performance prevents simulations with a large number of agents, increases development time, limits parameter exploration, and raises computing costs. Inflexible software designs motivate modelers to create their own tools, diverting valuable resources. This dissertation introduces a novel simulation platform called BioDynaMo and its significant improvement, TeraAgent, to alleviate these challenges via three major works. First, we lay the platform's foundation by defining abstractions, establishing software infrastructure, and implementing a multitude of features for agent-based modeling. We demonstrate BioDynaMo's modularity through use cases in neuroscience, epidemiology, and oncology. We validate these models and show the simplicity of adding new functionality with few lines of code. Second, we perform a rigorous performance analysis and identify challenges for shared-memory parallelism. Provided solutions include an optimized grid for neighbor searching, mechanisms to reduce the memory access latency, and exploiting domain knowledge to omit unnecessary work. These improvements yield up to three orders of magnitude speedups, enabling simulations of 1.7 billion agents on a single server. Third, we present TeraAgent, a distributed simulation engine that allows scaling out the computation of one simulation to multiple servers. We identify and address server communication bottlenecks and implement solutions for serialization and delta encoding to accelerate and reduce data transfer. TeraAgent can simulate 500 billion agents and scales to 84096 CPU cores. BioDynaMo has been widely adopted, including a prize-winning radiotherapy simulation recognized as a top 10 breakthrough in physics in 2024.
... We use the parton distribution function (PDF) CTEQ66 from the library LHAPDF6 [25]. We carry out the event analysis with ROOT version 6.22 [26], including the Toolkit for Multivariate Analysis, TMVA [27]. We apply in our MG5 simulations the mild, flavor-blind cuts (5) to avoid infrared divergences in background processes. ...
... In order to estimate the total uncertainty in the asymmetry A, we take into account the background contributions to it, A B , the statistical uncertainties as described in "Appendix C," Eq. (26), and the systematic uncertainties related to the choice of PDF and its scale. The latter are provided by MG5 and the library LHAPDF6 for the cross sections, and propagated to the asymmetry as in "Appendix C." We obtain uncertainties of 4.5% (scale) and 3.34% (PDF) at the LHeC and 23.5% (scale) and 35.7% (PDF) at the FCC-he. ...
... where in the last step we used the approximate equality N_+ ≈ N_- due to the smallness of the asymmetry. Equation (26) gives the statistical uncertainty for A. ...
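For a counting asymmetry A = (N_+ − N_-)/(N_+ + N_-), the standard binomial error propagation gives, in the small-asymmetry limit invoked in the excerpt, the textbook relation below (a generic result; the paper's actual Eq. (26) is not reproduced here):

```latex
\sigma_A = \sqrt{\frac{1 - A^2}{N_+ + N_-}}
\;\xrightarrow{\;N_+ \simeq N_-\;}\;
\frac{1}{\sqrt{N_+ + N_-}}
```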
Article
Full-text available
We study the consequences for top-quark physics of having electron and positron beams available at the LHeC and FCC-he, as was the case in HERA. We show that the asymmetry between top production in pe⁺ collisions and antitop production in pe⁻ reactions is sensitive to |V_td|. By means of detailed parton-level Monte Carlo simulations of single t and t̄ production and its backgrounds, we parametrize the asymmetry dependence on |V_td| and estimate its uncertainties. Our analysis includes realistic phase-space cuts and machine-learning binary classifiers for background rejection. We thus obtain limits on |V_td| that are substantially stronger than current ones and also smaller than current projections for the HL-LHC. We have |V_td| < 1.6 × |V_td^PDG| at the LHeC, at 68% C.L. with L_int = 2/ab.
... The satellites are marked as asterisks and the size of the symbol is a proxy for the relative brightness. In many occasions Galileo also documented the fact that the satellites appeared to be not completely aligned (see for example observations 6,7,11,16,17,19,23,27,30,31). As we will see in the following when comparing with modern simulation also the recordings of these displacements from the ecliptic plane are remarkably accurate. ...
... The elongation datasets, (x_i, t_i), separated by satellite have been fitted with a function of the form x(t) = A sin(ω(t − t_0) + φ_0) using the ROOT [6] fitter libraries. We have set the uncertainty on the points to match the r.m.s. of the residuals, in practice setting them to the errors determined by the fit itself. ...
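In ROOT, the fit quoted in this excerpt maps onto a TGraphErrors and a TF1. The sketch below uses placeholder data and seed values (not Galileo's measurements) and absorbs the phase φ_0 into t_0:

```cpp
// Fit elongation data with x(t) = A*sin(omega*(t - t0)) using ROOT's fitter.
#include "TGraphErrors.h"
#include "TF1.h"

void fitElongation() {
   const int n = 5;
   double ts[n] = {0., 1., 2., 3., 4.};         // observation times (placeholder)
   double xs[n] = {2.1, 5.7, 3.0, -3.2, -5.9};  // elongations (placeholder)
   double es[n] = {0.5, 0.5, 0.5, 0.5, 0.5};    // common per-point uncertainty

   TGraphErrors g(n, ts, xs, nullptr, es);
   TF1 model("model", "[0]*sin([1]*(x - [2]))", 0., 5.);  // A, omega, t0
   model.SetParameters(6., 1.5, 0.);                      // seed values
   g.Fit(&model, "Q");                                    // Minuit-based least squares

   // As in the excerpt, the point errors can then be rescaled to the r.m.s. of
   // the fit residuals and the fit repeated, making the errors self-consistent.
}
```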
Preprint
Full-text available
We analyse the observations of the satellites of Jupiter from the Sidereus Nuncius (January 7 to March 1, 1610) and compare them to the predictions obtained using a modern sky simulator, verifying them one by one. A sinusoidal fit of the data obtained from the 64 available sketches allows measuring the relative major semi-axes of the satellites' orbits and their periods with a statistical precision of 2-4% and 0.1-0.3% respectively. The periods are basically unbiased, while the orbits tend to be underestimated for Callisto by about 12%. The posterior fit error indicates that the positions of the satellites are determined with a resolution of 0.4-0.6 Jupiter diameters in the notation of Galilei, corresponding to about 40-70 arcsec, i.e. similar to the true angular diameter of Jupiter in those days. We show that with this data one can infer in a convincing way the third law of Kepler for the Jupiter system. The 1:2 and 1:4 orbital resonances between the periods of Io and Europa/Ganymede can be determined with percent precision. In order to obtain these results it is important to separate the four datasets. This operation, which is nowadays simple using a sky simulator, and is fully reported in this work, was an extremely difficult task for Galilei, as the analysis will evidence. Nevertheless, we show how the four periods might have been extracted using the modern Lomb-Scargle technique without having to separate the four datasets, using just these early observations. We also perform a critical evaluation of the accuracy of the observations of the Pleiades and other clusters and the Moon.
... This was followed by designing various collimator geometries and simulating a 111 In source to evaluate their effectiveness. The outcomes of the GATE simulations were comprehensively analyzed using ROOT [24]. More details of the collimator design and optimization process are provided below. ...
Article
Full-text available
This study focused on designing and optimizing collimators for cascade gamma-ray imaging through Monte Carlo simulations. The trapezoidal-shaped collimator blocks, designed in the Geant4 application for emission tomography (GATE) environment, were attached to a simulated small animal GATE PET model. The collimators were optimized by simulating septa thicknesses from 0.2 mm to 1.2 mm, in 0.2 mm increments. A 1.0 MBq 111In source having a radius of 0.25 mm was used as the cascade gamma-ray emitter. Sixteen trapezoidal tungsten collimator blocks were designed, each with a 16.31 mm × 37.5 mm surface facing the detector crystals, and a 12.33 mm × 37.5 mm surface facing the scanned object. Each block featured 105 parallel rectangular holes arranged in a 7 × 15 array, with a length of 10.0 mm, resulting in a ring-like collimator with a 41.0 mm outer radius. The designed collimator, intended for small animal imaging, prioritizes resolution. Hence, a collimator with 1.0 mm septa and hole sizes of 1.5 mm × 0.7 mm, offering spatial resolutions of 7.6 mm and 4.1 mm in the axial and transaxial directions, respectively, was chosen. The collimators demonstrated energy resolutions of approximately 8.96% and 10.10% at 171.3 keV and 245.4 keV, respectively, within a 10% energy resolution threshold set during simulations. In addition, the reconstructed source positions ranged from 81.1% to 100% of the true simulated source positions within the field of view. The optimized collimator design presents a viable solution for imaging small animals' internal organs with sizes exceeding 7.6 mm.
... Most detailed detector simulations make use of the Geant4 package [16][17][18] complemented by custom simulations for the tracking of the Cherenkov light, fluorescence photons, and the simulation of the electronics. The Event serialization is implemented using the ROOT toolkit [19]. The internal ROOT-based format allows the user to save and restore the complete information in the Event data structure. ...
... The geometry of the setup has been implemented using a Geant4 [40][41][42] based software toolkit called ImpCRESST [43], which was initially developed within the CRESST [44] dark matter search and later also within COSINUS [36]. ImpCRESST utilizes Geant4 v10.2.3 and root v6-08-06 [45]. In the simulation, the geometry is modeled as described in section 2. A 30 cm dead layer is achieved with a reflective foil that was set to have 95% reflectivity. ...
Article
Full-text available
While neutrinos are often treated as a background for many dark matter experiments, these particles offer a new avenue for physics: the detection of core-collapse supernovae. Supernovae are extremely energetic, violent and complex events that mark the death of massive stars. During their collapse stars emit a large number of neutrinos in a short burst. These neutrinos carry 99% of the emitted energy which makes their detection fundamental in understanding supernovae. This paper illustrates how COSINUS (Cryogenic Observatory for SIgnatures seen in Next-generation Underground Searches), a sodium iodide (NaI) based dark matter search, will be sensitive to the next galactic core-collapse supernova. The experiment is composed of two separate detectors which will respond to far away and nearby supernovae. The inner core of the experiment will consist of NaI crystals operating as scintillating calorimeters. These crystals will mainly be sensitive to the Coherent Elastic Neutrino-Nucleus Scattering (CEνNS) against Na and I nuclei. The low mass of the cryogenic detectors enables the experiment to identify close supernovae within 1 kpc without pileup. The crystals will see up to hundreds of CEνNS events from a supernova happening at 200 pc. They reside at the center of a large cylindrical 230 T water tank, instrumented with 30 photomultiplier tubes. This tank acts simultaneously as a passive and active shield able to detect the Cherenkov radiation induced by impinging charged particles from ambient and cosmogenic radioactivity. A supernova near the Milky Way Center (10 kpc) will be easily detected inducing ∼60 measurable events, and the water tank will have a 3σ sensitivity to supernovae up to 22 kpc, seeing ∼10 events. This paper shows how, even without dedicated optimization, modern dark matter experiments will also be able to play their part in the multi-messenger effort to detect the next galactic core-collapse supernova.
... PandaRoot offers tools for event simulation, beginning with the production of Monte Carlo events and continuing with the propagation of particles through detector material, digitisation of signals, reconstruction and calibration, and physics analysis. PandaRoot, a detector-specific framework, is derived from the general-purpose Fair-Root framework [30], which in turn is based on the ROOT framework [31]. FairRoot constitutes a base for other detector-specific frameworks within the FAIR software ecosystem and provides a wide range of basic classes that facilitate the customising of each detector configuration. ...
Preprint
Full-text available
We present track reconstruction algorithms based on deep learning, tailored to overcome specific central challenges in the field of hadron physics. Two approaches are used: (i) a deep learning (DL) model known as fully-connected neural networks (FCNs), and (ii) a geometric deep learning (GDL) model known as graph neural networks (GNNs). The models have been implemented to reconstruct signals in a non-Euclidean detector geometry of the future antiproton experiment PANDA. In particular, the GDL model shows promising results for cases where other, more conventional track-finders fall short: (i) tracks from low-momentum particles that frequently occur in hadron physics experiments and (ii) tracks from long-lived particles such as hyperons, hence originating far from the beam-target interaction point. Benchmark studies using Monte Carlo simulated data from PANDA yield an average technical reconstruction efficiency of 92.6% for high-multiplicity muon events, and 97.1% for the Λ daughter particles in the reaction p̄p → Λ̄Λ → p̄π⁺pπ⁻. Furthermore, the technical tracking efficiency is found to be larger than 70% even for particles with transverse momenta p_T below 100 MeV/c. For the long-lived Λ hyperons, the track reconstruction efficiency is fairly independent of the distance between the beam-target interaction point and the Λ decay vertex. This underlines the potential of machine-learning-based tracking, also for experiments at low- and intermediate-beam energies.
... Specifically, we compare the remaining events after the invisible selection criteria described in Section 2 as well as for a selection of EUM events with charged hadrons. For this, we use ROOT, RooFit and ROOT's RDataFrame [29]. Then, having validated our simulation, we study three different VHCAL configurations and evaluate them in terms of their hermeticity and suppression for this particular background. ...
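An event selection of the kind described in this excerpt is the standard use case for ROOT's RDataFrame. Here is a generic sketch with invented tree, file, and branch names and thresholds (not NA64's actual criteria):

```cpp
// Count events passing a missing-energy-style selection with RDataFrame.
#include "ROOT/RDataFrame.hxx"
#include <iostream>

void countSelected() {
   ROOT::RDataFrame df("events", "sample.root");   // tree/file names are placeholders

   auto passed = df.Filter("ecalEnergy < 40.0")    // hypothetical branch and cut
                   .Filter("vhcalEnergy < 1.0");   // hypothetical veto requirement

   std::cout << "selected events: " << *passed.Count() << "\n";
}
```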
Preprint
Full-text available
NA64 is a fixed-target experiment at the CERN SPS designed to search for Light particle Dark Matter (LDM) candidates with masses in the sub-GeV range. During the 2016-2022 runs, the experiment obtained the world-leading constraints, leaving however part of the well-motivated region of parameter space suggested by benchmark LDM models still unexplored. To further improve sensitivity, as part of the upgrades to the setup of NA64 at the CERN SPS H4 beamline, a prototype veto hadron calorimeter (VHCAL) was installed in the downstream region of the experiment during the 2023 run. The VHCAL, made of Cu-Sc layers, was expected to be an efficient veto against upstream electroproduction of large-angle hadrons or photon-nuclear interactions, reducing the background from secondary particles escaping the detector acceptance. With the collected statistics of 4.4×10^11 electrons on target (EOT), we demonstrate the effectiveness of this approach by rejecting this background by more than an order of magnitude. This result provides an essential input for designing a full-scale optimized VHCAL to continue running background-free during LHC Run 4, when we expect to collect 10^13 EOT. Furthermore, this technique combined with improvements in the analysis enables us to decrease our missing energy threshold from 50 GeV to 40 GeV, thereby enhancing the signal sensitivity of NA64.
... To complement this, we have also prepared a standalone package in the Python Package Index (pip) that implements OmniFold. The pip version is more lightweight, since it does not rely on ROOT [39]. Best practices for the two approaches are discussed in more detail below. ...
Preprint
Full-text available
Machine learning has enabled differential cross section measurements that are not discretized. Going beyond the traditional histogram-based paradigm, these unbinned unfolding methods are rapidly being integrated into experimental workflows. In order to enable widespread adoption and standardization, we develop methods, benchmarks, and software for unbinned unfolding. For methodology, we demonstrate the utility of boosted decision trees for unfolding with a relatively small number of high-level features. This complements state-of-the-art deep learning models capable of unfolding the full phase space. To benchmark unbinned unfolding methods, we develop an extension of an existing dataset to include acceptance effects, a necessary challenge for real measurements. Additionally, we directly compare binned and unbinned methods using discretized inputs for the latter in order to control for the binning itself. Lastly, we have assembled two software packages for the OmniFold unbinned unfolding method that should serve as the starting point for any future analyses using this technique. One package is based on the widely-used RooUnfold framework and the other is a standalone package available through the Python Package Index (PyPI).
... This package does full event reconstruction for the SHMS used alone or in coincidence with other detectors. hcana is based on the modular Hall A analyzer [68], a ROOT [69] based C++ analysis framework. This framework provides for run-time user configuration of histograms, ROOT tree contents, cuts, parameters and detector layout. ...
Preprint
Full-text available
The Super High Momentum Spectrometer (SHMS) has been built for Hall C at the Thomas Jefferson National Accelerator Facility (Jefferson Lab). With a momentum capability reaching 11 GeV/c, the SHMS provides measurements of charged particles produced in electron-scattering experiments using the maximum available beam energy from the upgraded Jefferson Lab accelerator. The SHMS is an ion-optics magnetic spectrometer comprised of a series of new superconducting magnets which transport charged particles through an array of triggering, tracking, and particle-identification detectors that measure momentum, energy, angle and position in order to allow kinematic reconstruction of the events back to their origin at the scattering target. The detector system is protected from background radiation by a sophisticated shielding enclosure. The entire spectrometer is mounted on a rotating support structure which permits measurements to be taken with a large acceptance over laboratory scattering angles from 5.5 to 40 degrees, thus allowing a wide range of low cross-section experiments to be conducted. These experiments complement and extend the previous Hall C research program to higher energies.
... Of course exceptions exist, with software such as ROOT [23], GEANT4 [24], CORSIKA [25], etc. ubiquitous within our three communities. Other examples are middleware which provides key functionality to manage and access storage and computing resources, and the use of smaller toolkit components, e.g., AwkwardArray [26], that leverage the flexibility of the Python ecosystem in a very domain-independent way. ...
Preprint
Full-text available
The scientific communities of nuclear, particle, and astroparticle physics are continuing to advance and are facing unprecedented software challenges due to growing data volumes, complex computing needs, and environmental considerations. As new experiments emerge, software and computing needs must be recognised and integrated early in design phases. This document synthesises insights from ECFA, NuPECC and APPEC, representing particle physics, nuclear physics, and astroparticle physics, and presents collaborative strategies for improving software, computing frameworks, infrastructure, and career development within these fields.
... The readout software generates a binary file in a proprietary format and a ROOT file (Brun and Rademakers 1997) in parallel during data taking. In the off-line analysis, the ROOT files are used. ...
Article
Full-text available
Objective. Monolithic active pixel sensors are used for charged particle tracking in many applications, from medical physics to astrophysics. The Bergen pCT collaboration designed a sampling calorimeter for proton computed tomography, based entirely on the ALICE PIxel DEtector (ALPIDE). The same telescope can be used for in-situ range verification in particle therapy. An accurate charge diffusion model is required to convert the deposited energy from Monte Carlo simulations to a cluster of pixels, and to estimate the deposited energy, given an experimentally observed cluster. Approach. We optimize the parameters of different charge diffusion models to experimental data for both proton computed tomography and proton range verification, collected at the Danish Centre for Particle Therapy. We then evaluate the performance of downstream tasks to investigate the impact of charge diffusion modeling. Main results. We find that it is beneficial to optimize application-specific models, with a power law working best for proton computed tomography, and a model based on a 2D Cauchy-Lorentz distribution giving better agreement for range verification. We further highlight the importance of evaluating the downstream tasks with multiple approaches to obtain a range of expected performance metrics for the application. Significance. This work demonstrates the influence of the charge diffusion model on downstream tasks, and recommends a new model for proton range verification with an ALPIDE-based pixel telescope.
... These volumes register the entrance and exit coordinates of intersecting photon trajectories, adding them to the ROOT output [73]. As a consequence, given that the intersection coordinates of all detected photons are known for both grating positions, the grating geometry can be defined post-simulation. This leads to significantly reduced simulation times if many grating geometries are to be tested for the same phantom, at the cost of an increased output size. ...
Article
Full-text available
In recent years, the complementary nature of multi-contrast imaging has increased the popularity of x-ray phase contrast imaging, including edge illumination. However, edge illumination system optimization most often relies on phase and transmission contrast only, without considering dark field contrast. Computer simulations are a widespread approach to design and optimize imaging systems, including the benchmarking of simulation results, i.e., the comparison to a reference value. Providing such a reference is, however, particularly challenging for the dark field signal. In this work, we present a practical method to directly estimate transmission, refraction, and dark field contrast reference values from simulated x-ray trajectories in Monte Carlo simulations. This allows an immediate comparison of the retrieved simulated contrasts to their respective references. We show how the generated reference values can be used effectively for benchmarking simulation results and discuss other potential applications of the presented approach.
... The transport of Cherenkov photons simulated with GEANT4 is shown in Figures 5 and 6. The number of photons detected by each SiPM is recorded event-by-event in ROOT TTree format and analyzed using C++ code in the ROOT environment [23]. The simulated data represent the distribution of the number of photons detected by the SiPMs connected to both the upper radiator (x-axis) and lower radiator (y-axis) WLS fibers. ...
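Recording a per-event quantity in a TTree, as described here for the SiPM photon counts, follows a standard ROOT pattern. The branch and file names below are invented for illustration:

```cpp
// Store per-event SiPM photon counts in a ROOT TTree (illustrative names).
#include "TFile.h"
#include "TTree.h"

void writePhotonCounts() {
   TFile f("cherenkov.root", "RECREATE");
   TTree tree("hits", "photons detected per event");

   int nUpper = 0, nLower = 0;   // counts from upper/lower radiator WLS fibers
   tree.Branch("nUpper", &nUpper, "nUpper/I");
   tree.Branch("nLower", &nLower, "nLower/I");

   for (int ev = 0; ev < 1000; ++ev) {
      // ... obtain nUpper/nLower from the simulation for this event ...
      tree.Fill();               // one TTree entry per event
   }
   tree.Write();
}
```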
Preprint
Full-text available
Cherenkov detectors have been extensively developed and utilized in various scientific fields, including particle physics, astrophysics, and nuclear engineering. These detectors operate based on Cherenkov radiation, which is emitted when a charged particle traverses a dielectric medium at a velocity greater than the phase velocity of light in that medium. In this work, we present the development of a Cherenkov radiation detector designed for a muon tomography system with high spatial resolution, employing wavelength-shifting (WLS) fiber readout. The detector consists of two large-area Cherenkov radiators, each measuring 1 m × 1 m and read out by WLS fibers arranged orthogonally to determine the x and y coordinates of muon hit positions. The system is modeled using the GEANT4 simulation package, and the achieved position resolution is 1.8 ± 0.1 mm (FWHM). This design enables precise tracking of muon trajectories, making it suitable for high-resolution imaging applications in muon tomography.
... An example of a popup window when clustering is off is shown in Fig. 6, while an example with clustering on is shown in Fig. 7. All the graphical part is done using ROOT libraries [7]. ...
Preprint
Full-text available
DataPix4 (Data Acquisition for Timepix4 Applications) is a new C++ framework for the management of the Timepix4 ASIC, a multi-purpose hybrid pixel detector designed at CERN. Timepix4 consists of a matrix of 448 × 512 pixels that can be connected to several types of sensors to obtain a pixelated detector suitable for different applications. DataPix4 can be used both for the full configuration of Timepix4 and its control board, and for the data read-out via slow control or fast links. Furthermore, it has a flexible architecture that accommodates changes in the hardware, making it easy to adjust the framework to custom setups and exploit all classes with minimal modification.
... In several Higgs analyses, machine-learning methods such as Boosted Decision Trees (BDTs) [42] and Multi Layer Perceptrons (MLPs) are employed to enhance the separation between the signal and the physics backgrounds. The BDTs and MLPs have been trained and applied with the TMVA package [43] from the ROOT software suite [44]. Their configuration parameters are set to the default values as defined by TMVA. ...
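Training a BDT or MLP with TMVA's defaults, as this excerpt describes, follows a fixed recipe. The skeleton below uses placeholder tree and variable names, with the empty option string selecting TMVA's default method configuration:

```cpp
// Skeleton for booking and training a BDT with TMVA defaults.
#include "TFile.h"
#include "TTree.h"
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Types.h"

void trainBDT(TTree *sig, TTree *bkg) {   // signal/background trees supplied by caller
   TFile out("tmva.root", "RECREATE");
   TMVA::Factory factory("job", &out, "AnalysisType=Classification");
   TMVA::DataLoader loader("dataset");

   loader.AddVariable("var1");            // placeholder discriminating variables
   loader.AddVariable("var2");
   loader.AddSignalTree(sig);
   loader.AddBackgroundTree(bkg);
   loader.PrepareTrainingAndTestTree("", "SplitMode=Random");

   factory.BookMethod(&loader, TMVA::Types::kBDT, "BDT", "");  // "" -> TMVA defaults
   factory.TrainAllMethods();
   factory.TestAllMethods();
   factory.EvaluateAllMethods();
}
```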
Article
Full-text available
The Muon Collider is one of the most promising future collider facilities with the potential to reach multi-TeV center-of-mass energy and high luminosity. Due to the significant Higgs boson production cross section in muon-antimuon collisions at such high energies, the collider offers an excellent opportunity for in-depth exploration of Higgs boson properties. It holds the capability to significantly advance our understanding of the Higgs sector to a very high level of precision. However, the presence of beam-induced background resulting from the decay of the beam muons poses unique challenges for detector development and event reconstruction. In this paper, the prospects for measuring various Higgs boson properties at a center-of-mass energy of 3 TeV are presented, using a detailed detector simulation in a realistic environment. The study demonstrates the feasibility of achieving high precision results with the current state-of-the-art detector design. In addition, the paper discusses the detector requirements necessary to achieve this level of accuracy.
... The ROOT analysis software was integrated in the simulation since the output files of each simulation are made of ROOT trees [16]. In the output, the number of optical photons detected by each channel in each event is recorded. ...
Preprint
Full-text available
One consolidated technique for the treatment of cancer is Targeted Radionuclide Therapy (TRT). With this technique, radionuclides are attached to a specific drug that is able to bring them to the target tumor site. The ISOLPHARM project is currently developing a radiopharmaceutical for TRT based on Ag-111, an innovative radionuclide. Ag-111 has a half-life of 7.45 days and decays emitting both electrons and gamma-rays. The emission of gamma-rays, mainly with an energy of 342 keV, allows the Ag-111 nuclei to be visualized through the use of a gamma camera. In this contribution, we describe the Monte Carlo simulation built to optimize the parameters of this imaging device. The software used for this aim is the Geant4 toolkit, which is able to simulate the interaction between particles and matter.
... The shifters also examine all equipment ensuring its stable and correct performance, and typically restart a new run every week or, if needed, more often. The offline analysis is performed later with the ROOT software [31]. ...
Preprint
Full-text available
The nuGeN experiment searches for coherent elastic neutrino-nucleus scattering (CEvNS) at the Kalinin Nuclear Power Plant. A 1.41-kg high-purity low-threshold germanium detector surrounded by active and passive shielding is deployed at the minimal distance of 11.1 m from the center of the reactor core allowed by the lifting mechanism, utilizing one of the highest antineutrino fluxes among the competing experiments. The direct comparison of the count rates obtained during reactor-ON and reactor-OFF periods with an energy threshold of 0.29 keV_ee shows no statistically significant difference. New upper limits on the number of CEvNS events are evaluated on the basis of the residual ON-OFF count rate spectrum.
... In our work, for analytical and numerical evaluation, we have used [35] for the calculations of the cross-sections and [36] for the branching ratio and total decay width. GnuPlot [37] is used for the graphical plotting. ...
Article
Full-text available
This study explores the production of charged Higgs particles through photon-photon collisions within the context of the Two Higgs Doublet Model, including one-loop-level scattering amplitudes of electroweak and QED radiation.
... The interaction of the generated particles with the detector, and its response, are implemented using the GEANT4 toolkit [74,75] as described in Ref. [76]. The ROOT [77] and LHCb [78][79][80] software frameworks are used for the initial data preparation, while the analysis is written in the PYTHON language with standard scientific packages [81][82][83][84][85][86]. The total integrated luminosity of the used data sample is determined using empty-event counters calibrated by van der Meer scans and beam profile measurements [87] and is found to be L int = 4.41 ± 0.13 fb −1 . ...
Article
Full-text available
Measurements are presented of the cross-section for the central exclusive production of J/ψ → μ⁺μ⁻ and ψ(2S) → μ⁺μ⁻ processes in proton-proton collisions at √s = 13 TeV with 2016-2018 data. They are performed by requiring both muons to be in the LHCb acceptance (with pseudorapidity 2 < η_μ± < 4.5) and mesons in the rapidity range 2.0 < y < 4.5. The integrated cross-section results are σ(J/ψ → μ⁺μ⁻; 2.0 < y_J/ψ < 4.5, 2.0 < η_μ± < 4.5) = 400 ± 2 ± 5 ± 12 pb and σ(ψ(2S) → μ⁺μ⁻; 2.0 < y_ψ(2S) < 4.5, 2.0 < η_μ± < 4.5) = 9.40 ± 0.15 ± 0.13 ± 0.27 pb, where the uncertainties are statistical, systematic and due to the luminosity determination. In addition, a measurement of the ratio of ψ(2S) and J/ψ cross-sections, at an average photon-proton centre-of-mass energy of 1 TeV, is performed, giving R = 0.1763 ± 0.0029 ± 0.0008 ± 0.0039, where the first uncertainty is statistical, the second systematic and the third due to the knowledge of the involved branching fractions. For the first time, the dependence of the J/ψ and ψ(2S) cross-sections on the total transverse momentum transfer is determined in pp collisions and is found consistent with the behaviour observed in electron-proton collisions.
... The same applies to training materials; the HSF has developed the HSF Training Center with modules on cross-experiment topics which can be used to teach fundamental software and analysis skills. These are used effectively by some experiments and include, but are not limited to, Machine Learning, Git, Singularity/Docker, ROOT (Brun and Rademakers, 1997) and other programming languages. The available personpower is also a key consideration when deciding on synchronous or asynchronous training. ...
Article
Full-text available
In this article we document the current analysis software training and onboarding activities in several High Energy Physics (HEP) experiments: ATLAS, CMS, LHCb, Belle II and DUNE. Fast and efficient onboarding of new collaboration members is increasingly important for HEP experiments. With rapidly increasing data volumes and larger collaborations the analyses and consequently, the related software, become ever more complex. This necessitates structured onboarding and training. Recognizing this, a meeting series was held by the HEP Software Foundation (HSF) in 2022 for experiments to showcase their initiatives. Here we document and analyze these in an attempt to determine a set of key considerations for future HEP experiments.
... In this section, we discuss the cut-based analysis performed in the ROOT [61] framework with the detector-simulated signal and background datasets to get an optimized value of the signal-to-background ratio. For that purpose we have considered here benchmark values of m_χ = 1 GeV and Λ = 1 TeV although, in the next section, we will vary the DM mass to get the 3σ sensitivity limit on the cutoff scale Λ. ...
Article
Dark Matter, being electrically neutral, does not participate in electromagnetic interactions at leading order. However, we discuss here fermionic dark matter (DM) with permanent magnetic and electric dipole moments that interacts electromagnetically with the photon at loop level through a dimension-5 operator. We discuss the search prospects of the dark matter at the proposed International Linear Collider (ILC) and constrain the parameter space in the plane of the DM mass and the cutoff scale Λ. At the 500 GeV ILC with 4 ab⁻¹ of integrated luminosity we probed the mono-photon channel and, utilizing the advantages of beam polarization, we obtained an upper bound on the cutoff scale that reaches up to Λ = 3.72 TeV.
... [52]. The robust R statistical computing system is used to construct the analytical and visualization tools themselves [53], in C and Fortran programs and in Java applications [54]. The component that unifies the pieces and conceals the complexity of the analytical techniques from the user is Visual Basic for Applications. ...
Article
Full-text available
This study investigates the role of the Fibroblast Growth Factor (FGF) Pathway in breast cancers at the RNA expression level. Key cancer-related genes within the FGF pathway were analysed using datasets containing RNA and clinical information for breast cancer patients. The study involved 266, 289, and 2089 patient samples across different datasets. Various statistical tests, including Kaplan-Meier Test, Chi-Square Test, overall survival, and disease-free survival analysis, were conducted using tools such as BRB array and IBM SPSS Statistics. Associations between RNA expression and clinicopathological features were identified, such as FGFR1 being linked to early-stage grades and FGFR1OP associated with late-stage grades. Expression patterns of specific genes were also correlated with different cancer statuses. Surprisingly, survival analysis revealed contradictory findings, with FGFR1OP2 and FGF2 associated with poor overall survival, FGFR2 with good survival, and FGFR1OP2 linked to poor disease-free survival in breast cancer. These inconsistencies emphasize the necessity of additional study to better understand the dual roles of FGF pathway genes in cancer progression.
... This work was supported in part by the Italian Ministry of Foreign Affairs and International Cooperation, grant number ZA23GR03. We thank the authors of the following software tools and libraries that have been extensively used for data reduction, analysis, and visualization: caesar (Riggi et al. 2016, 2019), astropy (Astropy Collaboration et al. 2013), Root (Brun & Rademakers 1997), Topcat (Taylor 2005, 2011) ...
Article
Full-text available
We present a catalogue of extended radio sources from the SARAO MeerKAT Galactic Plane Survey (SMGPS). Compiled from 56 survey tiles and covering approximately 500 deg^2 across the first, third, and fourth Galactic quadrants, the catalogue includes 16534 extended and diffuse sources with areas larger than 5 synthesised beams. Of them, 3891 (24% of the total) are confidently associated with known Galactic radio-emitting objects in the literature, such as H II regions, supernova remnants, planetary nebulae, luminous blue variables, and Wolf-Rayet stars. A significant fraction of the remaining sources, 5462 (33%), are candidate extragalactic sources, while 7181 (43%) remain unclassified. Isolated radio filaments are excluded from the catalogue. The diversity of extended sources underscores MeerKAT's contribution to the completeness of censuses of Galactic radio emitters, and its potential for new scientific discoveries. For the catalogued sources, we derived basic positional and morphological parameters, as well as flux density estimates, using standard aperture photometry. This paper describes the methods followed to generate the catalogue from the original SMGPS tiles, detailing the source extraction, characterisation, and crossmatching procedures. Additionally, we analyse the statistical properties of the catalogued populations.
... Radiation of additional gluons is modeled by Pythia. The detector response is simulated with Delphes v3.5.0 [33] using the standard CMS card, extended to include an additional reconstruction of wide-cone jets, and root version 5.34.25 [34]. ...
Preprint
Collisions of particles at the energy frontier can reveal new particles and forces via localized excesses. However, the initial observation may be consistent with a large variety of theoretical models, especially in sectors with new top quark partners, which feature a rich set of possible underlying interactions. We explore the power of the LHC dataset to distinguish between models of the singly produced heavy top-like quark which interacts with the Standard Model through an electromagnetic form factor. We study the heavy top decay to a top quark and a virtual photon which produces a pair of fermions, propose a technique to disentangle the models, and calculate the expected statistical significance to distinguish between various hypotheses.
... The points falling outside the energy range are then used afterwards in the post-validation for a qualitative check. As a tool to find the minimum value of a multi-parameter function and analyze the shape of the function around the minimum, we used the Minuit2 [29] package of ROOT [30]. To compute the best-fit parameter values and uncertainties, including correlations between the parameters, a chi-square-like function is minimized, built from the integral of the flux function (Φ_fitfunc) calculated for each bin/data point i over the respective limits of energy (E_low and E_hi) and zenith angle (θ_low and θ_hi) provided by each dataset (see table 1). ...
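The definition garbled in this excerpt can plausibly be reconstructed as a comparison of the bin-integrated model flux with each measured point; the notation below is mine and may not match the paper's exact weighting:

```latex
\chi^2 = \sum_i \left( \frac{I_i - \Phi_i^{\mathrm{meas}}}{\sigma_i} \right)^{2},
\qquad
I_i = \int_{\theta_{\mathrm{low}}}^{\theta_{\mathrm{hi}}} \int_{E_{\mathrm{low}}}^{E_{\mathrm{hi}}}
      \Phi_{\mathrm{fitfunc}}(E,\theta)\,\mathrm{d}E\,\mathrm{d}\theta
```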
Article
Full-text available
Starting from the measurements conducted in 2004 with the ADAMO silicon spectrometer, we have performed a 2D fit procedure on diverse combinations of muon flux datasets documented in the literature. The fit employed three different formulas describing the energy and angular distribution of atmospheric muons at the Earth's surface. The analysis revealed that some measurements showed discrepancies, either with the expected results or between one another, with variations greater than a few sigmas. For example, vertical measurements span a wide range of flux values even for experiments with similar setups. However, we have identified a formula capable of furnishing comprehensive spectrum coverage utilizing a single set of parameters. Despite inherent limitations, we have achieved a satisfactory agreement with the Guan parameterization. Subsequently, we have integrated this optimized parameterization into the EcoMug library and a Geant4-based muon generator. The model's predictive capabilities have been validated by comparing it with experimental flux measurements and against other measurements in the literature. Through this process, we have shown and tested the reliability and accuracy of different muon flux modelling, facilitating advancements in domains reliant on the precise characterization of atmospheric muon phenomena.
... The position signals of the particles were obtained from the wire planes of the detectors with the help of the delay line readout technique with a resolution better than 1.5 mm, while the timing information was given by the central foil of the MWPCs. The times of flight and the position signals were recorded in list mode using the root [58] based data acquisition system NiasMARS. The details of the data acquisition setup and related electronics are discussed in Refs. ...
... The impact of the qg-discrimination variables is evaluated by comparing two BDTs. The first BDT is constructed with the variables listed in Table I, and the other one is constructed with the variables in both Table I and Table II. We use TMVA, which is a part of the ROOT framework, to implement the BDTs with AdaBoost [34][35][36]. ...
Preprint
Full-text available
Flavor-changing neutral currents (FCNCs) are forbidden at tree level in the Standard Model (SM), but they can be enhanced in physics Beyond the Standard Model (BSM) scenarios. In this paper, we investigate the effectiveness of deep learning techniques to enhance the sensitivity of current and future collider experiments to the production of a top quark and an associated parton through the tqg FCNC process, which originates from the tug and tcg vertices. The tqg FCNC events can be produced with a top quark and either an associated gluon or quark, while the SM only has events with a top quark and an associated quark. We apply machine learning techniques to distinguish the tqg FCNC events from the SM backgrounds, including qg-discrimination variables. We use the Boosted Decision Tree (BDT) method as a baseline classifier, assuming that the leading jet originates from the associated parton. We compare with a Transformer-based deep learning method known as the Self-Attention for Jet-parton Assignment (SAJA) network, which allows us to include information from all jets in the event, regardless of their number, eliminating the necessity to match the associated parton to the leading jet. The SAJA network with qg-discrimination variables has the best performance, giving expected upper limits on the branching ratios Br(t → qg) that are 25-35% lower than those from the BDT method.
... The results file containing the recorded variables occupies, on average, a few GB of disk space per configuration. These files were analyzed with the ROOT software [98], which is written in C++ and is used in a wide range of fields, particularly in particle physics. It is compatible with GEANT4, which lightens data storage and processing. ...
Thesis
Full-text available
Discipline: Physics. Doctoral thesis, Issa BRIKI: Study of the angular and energy dependence of the low-energy cosmic muon flux at ground level.
Article
A 20-kiloton liquid scintillator detector is under construction at the Jiangmen Underground Neutrino Observatory (JUNO) for several physics purposes. Detecting neutrinos released from nuclear reactors, the sun, supernova bursts, and Earth’s atmosphere across a wide energy range necessitates efficient reconstruction algorithms. In this study, we introduce a novel method for reconstructing event energy by counting 3-inch photomultiplier tubes (PMTs) with or without signals. The proposed algorithm demonstrated excellent performance in accurate energy reconstruction, validated with electron Monte Carlo samples covering kinetic energies ranging from 10 MeV to 1 GeV.
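A counting-based energy estimator of this kind typically exploits Poisson occupancy: with N PMTs and a mean photoelectron count μ per tube, the number of fired tubes fixes μ, and hence the visible energy, through a generic relation such as the one below (an illustration of the principle, not necessarily JUNO's exact estimator):

```latex
\langle N_{\mathrm{hit}} \rangle = N \left( 1 - e^{-\mu} \right)
\quad\Longrightarrow\quad
\hat{\mu} = -\ln\!\left( 1 - \frac{N_{\mathrm{hit}}}{N} \right),
\qquad E \propto \hat{\mu}
```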
Article
Photomultiplier tubes (PMTs) are extensively employed as photosensors in neutrino and dark matter detection. The precise charge and timing information extracted from the PMT waveform plays a crucial role in energy and vertex reconstruction. In this study, we investigate the deconvolution algorithm utilized for PMT waveform reconstruction, while enhancing the timing separation ability for pile-up hits by redesigning filters based on the time-frequency uncertainty principle. This filter design sacrifices signal-to-noise ratio (SNR) to achieve narrower pulse widths. Furthermore, we optimize the selection of signal pulses in the case of low SNR based on Short-Time Fourier Transform (STFT). Monte Carlo data confirms that our optimization yields enhanced reconstruction performance: improving timing separation ability for pile-up hits from 7∼10 ns to 3∼5 ns, while controlling the residual nonlinearity of charge reconstruction to about 1% in the range of 0 to 20 photoelectrons.
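Schematically, frequency-domain waveform deconvolution of this kind recovers the hit-time sequence by dividing out the single-photoelectron response and applying a band-limiting filter; the relation below is the generic form (notation mine), where the filter bandwidth embodies exactly the SNR-versus-pulse-width trade-off the abstract mentions:

```latex
\hat{S}(\omega) = \frac{Y(\omega)}{R(\omega)}\, H(\omega)
```

Here Y(ω) is the spectrum of the measured waveform, R(ω) the single-photoelectron response, and H(ω) the noise-suppressing filter whose width sets the timing separation.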
Article
Full-text available
Primary activity standardization of the radionuclide iodine-123 using 4π(X,e)(LS)-γ coincidence counting is presented in this paper. The activity concentration of an aqueous solution was determined by the efficiency extrapolation technique using neutral density filters. The measurements were realized using a digitizer-based data acquisition system. The coincidence analysis of the collected list-mode data was performed offline using a custom-developed computer code. The results were used to participate in an international comparison under the auspices of the International Bureau of Weights and Measures (BIPM) via the International Reference System (SIR) as well as the Transfer Instrument of the International Reference System (SIRTI).
Article
Full-text available
The α-decay fine structure of ¹⁷⁹Hg and ¹⁷⁷Au was studied by means of decay spectroscopy. Two experiments were performed at the Accelerator Laboratory of the University of Jyväskylä (JYFL), Finland, utilizing the recoil separator RITU and a digital data acquisition system. The heavy-ion induced fusion-evaporation reactions ⁸²₃₆Kr + ¹⁰⁰₄₄Ru and ⁸⁸₃₈Kr + ⁹²₄₂Mo were used to produce the ¹⁷⁹Hg and ¹⁷⁷Au nuclei, respectively. Studying the evaporation residues (ER, recoils)-α₁-α₂ correlations and α-γ coincidences, a new α decay with E_α = 6156(10) keV was observed from ¹⁷⁹Hg. This decay populates the (9/2⁻) excited state at an excitation energy of 131.3(5) keV in ¹⁷⁵Pt. The internal conversion coefficient for the 131.3(5) keV transition de-exciting this state was measured for the first time. Regarding the ¹⁷⁷Au nucleus, a new α decay with E_α = 5998(9) keV was observed to populate the 156.1(6) keV excited state in ¹⁷³Ir. Two de-excitation paths were observed from this excited state. Moreover, a new 215.7(13) keV transition was observed to depopulate the 424.4(13) keV excited state in ¹⁷³Ir. Properties of the ¹⁷⁹Hg and ¹⁷⁷Au α decays were examined in a framework of reduced widths and hindrance factors. For clarity and simplicity, the spin and parity assignments (e.g. J^π) are presented without brackets throughout the text.
Article
Full-text available
The ATLAS experiment has developed extensive software and distributed computing systems for Run 3 of the LHC. These systems are described in detail, including software infrastructure and workflows, distributed data and workload management, database infrastructure, and validation. The use of these systems to prepare the data for physics analysis and assess its quality are described, along with the software tools used for data analysis itself. An outlook for the development of these projects towards Run 4 is also provided.
Article
We have investigated the gas electron multiplier (GEM) signal and time resolution using a numerical analysis method. The Garfield++ simulation package with a known field solver, ANSYS, is used here. To examine the impacts of gas mixture and electron transport characteristics inside the detector, two other software packages, Magboltz and Heed, were utilised. By exploring the effects of detector geometry, electric fields, incoming particle energy and gas mixture characteristics, we tried improving GEM detectors for higher temporal resolution. A single GEM detector was investigated with two radiation sources, i.e., a 5.9 keV ⁵⁵Fe X-ray photon and cosmic muons with energies ranging from 1 MeV to 1 TeV. With an Ar:CO₂ gas mixture for a particular set-up, a minimum time resolution of up to around 4 ns was recorded. This number can be reduced even more by using various detector geometries and field settings. A significant result in lowering the temporal resolution was achieved by changing the drift field and the percentage of the ionisation component in the gas mixture. The admixture of O₂ and N₂ in the gas medium also improved the detector time performance. It was also observed that the initial particle energy has little effect on the timing accuracy of the detector.
Article
Full-text available
Tumor motion is a major challenge for scanned ion-beam therapy. In the case of lung tumors, strong under- and overdosage can be induced due to the high density gradients between the tumor- and bone tissues compared to lung tissues. This work proposes a non-invasive concept for 4D monitoring of high density gradients in carbon ion beam therapy, by detecting charged fragments. The method implements CMOS particle trackers that are used to reconstruct the fragment vertices, which define the emission points of nuclear interactions between the primary carbon ions and the patient tissues. A 3D treatment plan was optimized to deliver 2 Gy to a static spherical target volume. The goodness of the method was assessed by comparing reconstructed vertices measured in two static cases to the ones in a non-compensated moving case with an amplitude of 20 mm. The measurements, performed at the Marburg Ion-Beam Therapy Center (MIT), showed promising results to assess the conformity of the delivered dose. In particular to measure overshoots induced by high density gradients due to motion with 83.0 ± 1.5% and 92.0 ± 1.5% reliability based on the ground truth provided by the time-resolved motor position and depending on the considered volume and the iso-energy layers.
Article
Full-text available
Differential cross sections for top quark pair (tt̄) production are measured in proton-proton collisions at a center-of-mass energy of 13 TeV using a sample of events containing two oppositely charged leptons. The data were recorded with the CMS detector at the CERN Large Hadron Collider and correspond to an integrated luminosity of 138 fb⁻¹. The differential cross sections are measured as functions of kinematic observables of the tt̄ system, the top quark and antiquark and their decay products, as well as of the number of additional jets in the event. The results are presented as functions of up to three variables and are corrected to the parton and particle levels. When compared to standard model predictions based on quantum chromodynamics at different levels of accuracy, it is found that the calculations do not always describe the observed data. The deviations are found to be largest for the multi-differential cross sections.
Article
Fast-neutron energy measurements are essential for nuclear particle research and dosimetry. However, such measurements are challenging due to the low interaction probability of fast neutrons with matter, given their small cross-section for scattering and absorption compared to thermal neutrons, as well as their zero charge. Traditional neutron energy measurement methods have limitations related to the distance and detector size. Therefore, this study proposes a novel kinematic neutron energy compensation method that measures neutron scattering using the first detector and directly captures fast neutrons using the second detector. The scattering and post-scattering energies of neutrons are measured and used to reconstruct the neutron energy, enabling more accurate measurements of high-energy monoenergetic neutrons. This method is less sensitive to the energy and angle distribution of neutrons as they interact with the first detector and scatter toward the second detector. The energy deposited in the first detector is measured, while the scattered fast-neutron energy is determined through nuclear reactions within the second detector, enabling event-by-event compensation in energy reconstruction. Therefore, the system is not sensitive to the distance between detectors, or the solid angle determined by the detector size. The performance of the system is verified using EJ-309 liquid and 7Li-enriched Cs2LiYCl6:Ce3+ scintillators. Additionally, at the Korea Research Institute of Standards and Science, 14.8 MeV monoenergetic neutrons were used to characterize the proposed method, achieving an energy resolution of 2.8% full width at half maximum for measurements and energy reconstruction.
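The event-by-event compensation described above amounts to simple energy bookkeeping between the two detectors; schematically (notation mine):

```latex
E_n = E_{\mathrm{dep},1} + E'_{n}
```

where E_dep,1 is the recoil energy deposited in the scatter detector and E'_n is the post-scattering neutron energy measured via nuclear reactions in the second detector, which is why the result is insensitive to detector distance and solid angle.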
Article
High-energy proton implants are used in the manufacturing process of IGBT devices to improve performance by extending minority carrier lifetime and reducing switching speeds. The energy requirement for H+ implanted into these devices exceeds the nuclear fusion reaction threshold above 300 keV of beamline liners containing 12C, 13C, and 29Si within silicon wafers of the ion implant systems. The prompt γ-ray photon radiation generated from 12C(p, γ)13N and 29Si(p, γ)30P reactions has been characterized using a photon energy resolving Sodium Iodide (NaI) scintillating crystal detection system. Determination of the photon energy is instrumental for designing a precise shielding system that can prevent exposure to ionizing radiation. Dosimetry results are reported to characterize the radiation emitted from post-implant Si wafers given the 29Si(p, γ)30P reaction, radiation emitted from post-implant SiC wafers given the 12C(p,γ)13N reaction, and the radiation present from the graphite beamline liners compared to high-Z mitigative liners.
Article
Geant4 was created for the precise simulation of high-energy physics experiments exploring the origin of the universe. It provides various physical models of hadronic and electromagnetic interactions between particles and matter, in contrast to the statistical processing of other simulation codes. In addition, it is widely used not only in the field of high-energy physics but also in various fields such as cosmic radiation research, astrophysics, and medical physics due to its versatility and flexibility. This paper presents the current status and future plans of Geant4 applications for the RAON beam. There is a need to study the secondary particles in heavy-ion beam and fixed-target collision experiments because they have received relatively little attention. Using the experiment-optimized physics model, we study the physical properties of the primary and secondary heavy-ion beams of 20, 40, and 200 MeV/A in fixed-target experiments. These experiments involve 238U and 132Sn beams with a 9Be target, as well as a 40Ar beam with 9Be and 12C targets. We have studied the distribution and momentum of secondary particles. These results will help the RAON experiment to study secondary particles.
Conference Paper
Fabrik is an experimental interactive graphical programming environment designed to simplify the programming process by integrating the user interface, the programming language and its representation, and the environmental languages used to construct and debug programs. The programming language uses a functional, bidirectional data-flow model that trivializes syntax and eliminates the need for some traditional programming abstractions. Program synthesis is simplified by the use of aggregate and application-specific operations, modifiable examples, and the direct construction of graphical elements. The user interface includes several features designed to ease the construction and editing of the program graphs. Understanding of both individual functions and program operation is aided by immediate execution and feedback as the program is edited.
  • HEPDB Users Guide, CERN Program Library.
  • FATMEN Users Guide, CERN Program Library.
  • R. Brun, F. Rademakers, Nucl. Instr. and Meth. in Phys. Res. A 389 (1997) 81-86.
  • J. Gosling, The Java Language Environment, Sun Microsystems, 1995.