Article

Abstract

Linux now facilitates scientific research in the Atlantic Ocean and Antarctica

... The data analysis framework ROOT is extensively employed in high-energy particle physics and astroparticle physics [13]. In addition to the analysis and mathematical libraries (written in C++), ROOT contains a geometry library (libGeom) that provides a range of functionalities for building, browsing, tracking, and visualizing detector geometries in high-energy particle experiments [14]. ...
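As an illustration of the libGeom functionality referenced in the excerpt above, the following minimal sketch builds and draws a trivial geometry with TGeoManager. It is for orientation only: the volumes, materials, and names are placeholders and do not correspond to any detector discussed here.

    // build_geom.C -- minimal TGeoManager sketch (illustrative, not a real detector)
    #include "TGeoManager.h"
    #include "TGeoMaterial.h"
    #include "TGeoMedium.h"
    #include "TGeoVolume.h"

    void build_geom() {
       // Geometry manager and a simple vacuum medium.
       TGeoManager *geom = new TGeoManager("world", "Minimal libGeom example");
       TGeoMaterial *matVac = new TGeoMaterial("Vacuum", 0, 0, 0);
       TGeoMedium *vac = new TGeoMedium("Vacuum", 1, matVac);

       // Top (world) volume: a 100x100x100 cm box.
       TGeoVolume *top = geom->MakeBox("TOP", vac, 100., 100., 100.);
       geom->SetTopVolume(top);

       // One daughter volume, a tube standing in for a detector element.
       TGeoVolume *tube = geom->MakeTube("TUBE", vac, 0., 10., 50.);
       top->AddNode(tube, 1);

       // Close and visualize the geometry (requires a graphics-enabled session).
       geom->CloseGeometry();
       top->Draw();
    }
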
... In GCT development, ROBAST was used to accurately calculate the GCT effective area and to evaluate the mirror misalignment tolerance [27]. Panels (f) and (i) of Figure 8 show ROBAST simulations of the GCT optical system and its off-axis PSF, respectively. Shadows cast by the telescope masts are clearly visible in Figure 8(i). ...
... Shadows cast by the telescope masts are clearly visible in Figure 8(i). (Footnote 13: The ROBAST simulations were performed on an old telescope design that differs from Figure 8(c).) Figure 9: A simulation of a hexagonal Okumura cone with a cutoff angle of 30°. ...
Preprint
We have developed a non-sequential ray-tracing simulation library, ROOT-based simulator for ray tracing (ROBAST), which is aimed to be widely used in optical simulations of cosmic-ray (CR) and gamma-ray telescopes. The library is written in C++, and fully utilizes the geometry library of the ROOT framework. Despite the importance of optics simulations in CR experiments, no open-source software for ray-tracing simulations that can be widely used in the community has existed. To reduce the dispensable effort needed to develop multiple ray-tracing simulators by different research groups, we have successfully used ROBAST for many years to perform optics simulations for the Cherenkov Telescope Array (CTA). Among the six proposed telescope designs for CTA, ROBAST is currently used for three telescopes: a Schwarzschild-Couder (SC) medium-sized telescope, one of SC small-sized telescopes, and a large-sized telescope (LST). ROBAST is also used for the simulation and development of hexagonal light concentrators proposed for the LST focal plane. Making full use of the ROOT geometry library with additional ROBAST classes, we are able to build the complex optics geometries typically used in CR experiments and ground-based gamma-ray telescopes. We introduce ROBAST and its features developed for CR experiments, and show several successful applications for CTA.
... Therefore, instead of using equation (21) in macroscopic problems, we would like to utilize the effective theory whose linear version was defined in the preceding section. In order to find the non-linear format, we note that in a free-falling frame the field equations of the generally non-linear effective theory are linear and are given by equation (34). A simple frame transformation will give the field equations in an arbitrary frame. ...
... Now we can go further back and find the values at R − 2∆r and beyond by simply repeating the same procedure. We have programmed this algorithm in C++ and plotted using ROOT [34]. The results are shown in figure 2 where the solid line represents the numerical values and the points show the values of equation (59) at a few occasional locations. ...
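The excerpt above mentions implementing the algorithm in C++ and plotting the result with ROOT. A common way to do this is with a TGraph, as in the hedged sketch below; the plotted function is a placeholder, not the paper's numerical solution.

    // plot_solution.C -- illustrative TGraph plot of a numerical result
    #include "TGraph.h"
    #include "TCanvas.h"
    #include <cmath>

    void plot_solution() {
       const int n = 100;
       TGraph *g = new TGraph(n);
       for (int i = 0; i < n; ++i) {
          double r = 0.1 * (i + 1);          // placeholder radial coordinate
          g->SetPoint(i, r, std::exp(-r));   // placeholder values, not equation (59)
       }
       TCanvas *c = new TCanvas("c", "Numerical solution", 800, 600);
       g->SetLineWidth(2);
       g->Draw("AL");                        // draw axes and a solid line
       c->SaveAs("solution.png");
    }
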
Preprint
Following the ideas of effective field theories, we derive classically effective field equations of recently developed Lorentz gauge theory of gravity. It is shown that Newton's gravitational constant emerges as an effective coupling parameter if an extremely small length is integrated out of the underlying theory. The linear version of the effective theory is shown to be fully consistent with the Newtonian gravity. We also derive a numerical solution for the interior of a star and show that in the non-linear regions, the behavior of the effective theory deviates from the predictions of general relativity.
... The motivation is to help optimize the detector design, such as the photomultiplier (PMT) arrangement, to understand the simulation data, and to tune the reconstruction algorithm for complicated muons and other backgrounds with the JUNO-specific data model. The software is fully integrated into the JUNO offline framework [3] and is based on the ROOT [4] display package Event Visualization Environment (EVE) [5,6] to implement the functions for detector and event data visualization. It also provides the functions of 3D and 2D projection views, animations, data association and interactive display for users to better understand the simulation and real data events. ...
... The JUNO offline software uses SNiPER (Software for Non-collider Physics Experiments) [7] as its framework. SNiPER is a light-weight, flexible framework with dependencies on external software packages such as GEANT4 [8] and ROOT [4]. It is designed to host all offline tasks in the experiment. ...
Preprint
An event display software SERENA has been designed for the Jiangmen Underground Neutrino Observatory (JUNO). The software has been developed in the JUNO offline software system and is based on the ROOT display package EVE. It provides an essential tool to display detector and event data for better understanding of the processes in the detectors. The software has been widely used in JUNO detector optimization, simulation, reconstruction and physics study.
... In addition to kinematic fitting, tools are provided to construct a particle candidate from its daughter tracks. A functionality for running a kinematic fit for event selection on a ROOT [14] file in an automated way is included as well. The particle tracks are assumed to originate from a region free of magnetic fields, meaning they are propagated as straight tracks. ...
... The KinFit package is written in C++ and based on ROOT [14] (version 6) and uses CMake [17] (version 3.0 or newer) for the installation. It is available online at: ...
Article
Full-text available
A kinematic fitting package, KinFit, based on the Lagrange multiplier technique has been implemented for generic hadron physics experiments. It is particularly suitable for experiments where the interaction point is unknown, such as experiments with extended target volumes. The KinFit package includes vertex finding tools and fitting with kinematic constraints, such as mass hypothesis and four-momentum conservation, as well as combinations of these constraints. The new package is distributed as an open source software via GitHub. This paper presents a comprehensive description of the KinFit package and its features, as well as a benchmark study using Monte Carlo simulations of the pp → pK⁺Λ → pK⁺pπ⁻ reaction. The results show that KinFit improves the parameter resolution and provides an excellent basis for event selection.
... The statistical weights for these two values are given by the area ratio of the plane surface between the cones and the side surfaces of the cones. For the case of the half-spheres the random generator was implemented using the GetRandom method of the TH1 histogram of the ROOT toolkit [9]. The histogram was filled by sweeping the half-sphere area and extracting the angle between the normals. ...
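A minimal sketch of the sampling technique quoted above: random angles are drawn from a TH1 via GetRandom. The sin(θ)-weighted filling used here stands in for sweeping the half-sphere area and is an assumption made purely for illustration.

    // sample_angle.C -- sampling an angle distribution with TH1::GetRandom
    #include "TH1D.h"
    #include "TMath.h"
    #include <cstdio>

    void sample_angle() {
       // Angle between the photon direction and the microfacet normal.
       TH1D *h = new TH1D("hAngle", "Angle between normals;#theta [rad];entries",
                          180, 0., TMath::PiOver2());

       // Assumed filling: weight each angle by sin(theta), i.e. the
       // surface-area element of the half-sphere (illustrative choice).
       for (int i = 1; i <= h->GetNbinsX(); ++i) {
          double theta = h->GetBinCenter(i);
          h->SetBinContent(i, TMath::Sin(theta));
       }

       // Draw random angles according to the stored distribution.
       for (int k = 0; k < 5; ++k)
          printf("sampled theta = %.3f rad\n", h->GetRandom());
    }
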
Preprint
A modification of the optical model for rough surfaces, implemented in Geant4 as a part of the unified model, is suggested. The modified model takes into account the variation of the interaction probability of the photon with the microfacet based on the relative orientation of the photon and the sampled microfacet's normal. The implementation uses a rejection algorithm and assumes the interaction probability to be proportional to the projection of the microfacet area on the plane perpendicular to the photon direction. A comparison of the results obtained with the original and the modified models, as well as those obtained in direct Monte Carlo simulations, is presented for several test surfaces constructed using a pattern of elementary geometrical shapes.
... The list-mode data were processed using SoftKAM analysis software [10]. The code, which was custom developed by our research group at PTB, utilizes the tools contained in CERN's data analysis framework, ROOT [13]. ...
Article
Full-text available
Primary activity standardization of the radionuclide iodine-123 using 4π(X,e)(LS)-γ coincidence counting is presented in this paper. The activity concentration of an aqueous solution was determined by the efficiency extrapolation technique using neutral density filters. The measurements were realized using a digitizer-based data acquisition system. The coincidence analysis of the collected list-mode data was performed offline using a custom-developed computer code. The results were used to participate in an international comparison under the auspices of the International Bureau of Weights and Measures (BIPM) via the International Reference System (SIR) as well as the Transfer Instrument of the International Reference System (SIRTI).
... In order to assess the experimental prospects for DVCS at the future Electron-Ion Collider, generated MC samples were all processed through a full detector simulation constructed in the EICROOT [38] simulation framework, which combines ROOT TGeo [39] geometry definitions with GEANT4 [40] simulations. The detector geometry is based on the EIC project detector, ePIC [41], but the full simulation framework for the ePIC detector was not in a stable state for processing the full simulations at the time of the present study. ...
Preprint
Full-text available
This study presents the impact of future measurements of deeply virtual Compton scattering (DVCS) with the ePIC detector at the electron-ion collider (EIC), currently under construction at Brookhaven National Laboratory. The considered process is sensitive to generalized parton distributions (GPDs), the understanding of which is a cornerstone of the EIC physics programme. Our study marks a milestone in the preparation of DVCS measurements at EIC and provides a reference point for future analyses. In addition to presenting distributions of basic kinematic variables obtained with the latest ePIC design and simulation software, we examine the impact of future measurements on the understanding of nucleon tomography and DVCS Compton form factors, which are directly linked to GPDs. We also assess the impact of radiative corrections and background contribution arising from exclusive π⁰ production.
... Data fitting and correlation analysis were done using a Python module. LQ fitting was performed on the clonogenic survival data using an in-house tool based on the Minuit package available in ROOT [24], as in previous reports [25]. ...
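For orientation, the sketch below performs a linear-quadratic (LQ) fit of survival data with ROOT's Minuit-based fitting machinery through TF1 and TGraphErrors. The data points, starting values, and options are invented for the example and are not those of the cited study.

    // lq_fit.C -- illustrative LQ fit, S(D) = exp(-(alpha*D + beta*D^2))
    #include "TGraphErrors.h"
    #include "TF1.h"
    #include "TCanvas.h"

    void lq_fit() {
       // Dose [Gy], surviving fraction, and uncertainties (hypothetical values).
       const int n = 5;
       double dose[n] = {0., 1., 2., 4., 6.};
       double sf[n]   = {1.0, 0.75, 0.50, 0.20, 0.06};
       double esf[n]  = {0.02, 0.04, 0.04, 0.03, 0.01};
       TGraphErrors *g = new TGraphErrors(n, dose, sf, nullptr, esf);

       // Linear-quadratic survival model.
       TF1 *lq = new TF1("lq", "exp(-([0]*x + [1]*x*x))", 0., 8.);
       lq->SetParameters(0.2, 0.03);   // starting values for alpha and beta
       g->Fit(lq, "E");                // chi2 fit, Minuit-based, with error estimation

       TCanvas *c = new TCanvas("c", "LQ fit", 800, 600);
       g->Draw("AP");
       c->SaveAs("lq_fit.png");
    }
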
Preprint
Tumor cell networks formed by tumor microtubes (TMs) may play a key role in the development of therapy resistance in glioblastoma (GB). TM-mediated detoxification from radiation-induced reactive oxygen species (ROS) may infer radioresistance. We hypothesize that high linear energy transfer (LET) radiation, which describes the amount of energy deposited by radiation per unit length, interacts directly with the DNA backbone to induce complex lesions and thus might be less dependent on TM-mediated resistance mechanisms. Therefore, we sought to systematically investigate the impact of LET-induced complex DNA damage on TM formation and GB survival. To this end, the formation of TMs, radiation-induced nuclear DNA damage repair foci (RIF), and GB survival were correlated with a gradual increase in LET using a dose series of clinical protons (low), helium (intermediate), and carbon (high) ion beams. Consistent with conventional photon/X-rays, low-LET proton irradiation promoted TM formation in a dose-dependent manner. In contrast, an anti-correlation between LET and TM induction was found, i.e., a decreased network connectivity with gradual increase of LET and formation of complex DNA damage. Consequently, LET increase correlated with reduced cell survival, with the most pronounced cell killing observed after high-LET carbon irradiation. Moreover, the inverse correlation between LET and TM density was further confirmed for a broad range of LET modulated within the carbon ion irradiation spectrum. This is the first report on the relevance of LET as a novel mean to overcome TM network-mediated radioresistance in GB, with ramifications for the clinical translation of high-LET particle radiotherapy to further improve outcome in this still devastating disease.
... The parameter d is set to 3 GeV⁻¹. The signal and background formulas are implemented as PDFs using the ROOT/RooFit [13] software package (version 6.32.08). Each PDF is used to generate simulated events according to its distribution. ...
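A hedged RooFit sketch in the spirit of the excerpt above: a signal-plus-background PDF is defined and toy events are generated from it. The Gaussian and exponential shapes and all parameter values are placeholders, not the PDFs of the cited analysis.

    // toy_generation.C -- define a composite PDF and generate toy events with RooFit
    #include "RooRealVar.h"
    #include "RooGaussian.h"
    #include "RooExponential.h"
    #include "RooAddPdf.h"
    #include "RooDataSet.h"
    #include "RooArgList.h"
    #include "RooArgSet.h"

    void toy_generation() {
       RooRealVar mass("mass", "mass [GeV]", 60., 120.);

       // Signal: Gaussian; background: falling exponential (illustrative choices).
       RooRealVar mean("mean", "mean", 90., 80., 100.);
       RooRealVar sigma("sigma", "sigma", 3., 0.1, 10.);
       RooGaussian sig("sig", "signal", mass, mean, sigma);

       RooRealVar slope("slope", "slope", -0.05, -1., 0.);
       RooExponential bkg("bkg", "background", mass, slope);

       RooRealVar fsig("fsig", "signal fraction", 0.1, 0., 1.);
       RooAddPdf model("model", "sig+bkg", RooArgList(sig, bkg), RooArgList(fsig));

       // Generate a toy sample distributed according to the combined PDF.
       RooDataSet *toy = model.generate(RooArgSet(mass), 10000);
       toy->Print("v");
    }
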
Preprint
Full-text available
In particle physics, it is necessary to evaluate the possibility that excesses of events in mass spectra are due to statistical fluctuations, as quantified by the standards of local and global significances. Without prior knowledge of a particle's mass, it is especially critical to estimate its global significance. The usual approach is to count the number of times a significance limit is exceeded in a collection of simulated Monte Carlo (MC) 'toy experiments.' To demonstrate this conventional method for global significance, we performed simulation studies according to a recent Compact Muon Solenoid (CMS) result to show its effectiveness. However, this counting method is not practical for computing large global significances. To address this problem, we developed a new 'extrapolation' method to evaluate the global significance. We compared the global significance estimated by our new method with that of the conventional approach, and verified its feasibility and effectiveness. This method is also applicable to cases where only small toy MC samples are available. In this approach, the significance is calculated based on p-values, assuming symmetrical Gaussian distributions.
... For training the classifier, we have used the TMVA 4.3 toolkit [146] integrated into ROOT 6.24 [147]. The training and evaluation process uses 1.5M fatjets of both V and QCD types. ...
Preprint
Full-text available
We explore the collider phenomenology of the fat-brane realization of the Minimal Universal Extra Dimension (mUED) model, where Standard Model (SM) fields propagate in a small extra dimension while gravity accesses additional large extra dimensions. This configuration allows for gravity-mediated decay (GMD) of Kaluza-Klein (KK) particles, resulting in unique final states with hard photons, jets, massive SM bosons, and large missing transverse energy due to invisible KK gravitons. We derive updated constraints on the model's parameter space by recasting ATLAS mono-photon, di-photon, and multi-jet search results using 139 inverse femtobarns of integrated luminosity data. Recognizing that current LHC search strategies are tailored for supersymmetric scenarios and may not fully capture the distinct signatures, we propose optimized strategies using machine learning algorithms to tag boosted SM bosons and enhance signal discrimination against SM backgrounds. These methods improve sensitivity to fat-brane mUED signatures and offer promising prospects for probing this model in future LHC runs.
... In addition to the above variables, BDT_trck includes several other track-based features that characterize the composition of charged and neutral hadrons inside the sub-jets and the fat jet. For a consistent performance comparison, both BDTs have the same hyper-parameters and are trained using the TMVA 4.3 toolkit [120] integrated into the ROOT 6.24 [121] analysis framework. Table 1 summarizes these hyper-parameters. ...
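Since the excerpts above describe BDTs trained with the TMVA toolkit inside ROOT, the following sketch outlines a typical TMVA classification setup. The input file, tree names, variables, and hyper-parameters are hypothetical and are not taken from the cited papers.

    // train_bdt.C -- generic TMVA boosted-decision-tree training setup
    #include "TFile.h"
    #include "TTree.h"
    #include "TMVA/Factory.h"
    #include "TMVA/DataLoader.h"
    #include "TMVA/Types.h"

    void train_bdt() {
       TFile *input = TFile::Open("fatjets.root");      // hypothetical input file
       TTree *sigTree = (TTree*)input->Get("sig");      // hypothetical tree names
       TTree *bkgTree = (TTree*)input->Get("bkg");

       TFile *out = TFile::Open("tmva_out.root", "RECREATE");
       TMVA::Factory factory("TMVAClassification", out,
                             "!V:AnalysisType=Classification");
       TMVA::DataLoader loader("dataset");

       // Example discriminating variables (placeholders).
       loader.AddVariable("mass", 'F');
       loader.AddVariable("tau21", 'F');
       loader.AddVariable("ntracks", 'I');

       loader.AddSignalTree(sigTree, 1.0);
       loader.AddBackgroundTree(bkgTree, 1.0);
       loader.PrepareTrainingAndTestTree("", "SplitMode=Random:NormMode=NumEvents");

       // Gradient-boosted decision trees with illustrative hyper-parameters.
       factory.BookMethod(&loader, TMVA::Types::kBDT, "BDTG",
                          "NTrees=400:MaxDepth=3:BoostType=Grad:Shrinkage=0.1");

       factory.TrainAllMethods();
       factory.TestAllMethods();
       factory.EvaluateAllMethods();
       out->Close();
    }
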
Article
Full-text available
Machine learning algorithms have the capacity to discern intricate features directly from raw data. We demonstrated the performance of top taggers built upon three machine learning architectures: a BDT that uses jet-level variables (high-level features, HLF) as input, a CNN (a miniature version of ResNet) trained on the jet image, and a GNN (LorentzNet) trained on the particle cloud representation of a jet utilizing the 4-momentum (low-level features, LLF) of the jet constituents as input. We found significant performance enhancement for all three classes of classifiers when trained on combined data from calorimeter towers and tracker detectors. The high resolution of the tracking data not only improved the classifier performance in the high transverse momentum region, but the information about the distribution and composition of charged and neutral constituents of the fat jets and subjets helped identify the quark/gluon origin of sub-jets and hence enhances top tagging efficiency. The LLF-based classifiers, such as CNN and GNN, exhibit significantly better performance when compared to HLF-based classifiers like BDT, especially in the high transverse momentum region. Nevertheless, the LLF-based classifiers trained on constituents’ 4-momentum data exhibit substantial dependency on the jet modeling within Monte Carlo generators. The composite classifiers, formed by stacking a BDT on top of a GNN/CNN, not only enhance the performance of LLF-based classifiers but also mitigate the uncertainties stemming from the showering and hadronization model of the event generator. We have conducted a comprehensive study on the influence of the fat jet’s reconstruction and labeling procedure on the efficiency of the classifiers.
... CheckMATE requires Python 2.7.X where X>3 (note that at the current time, Python 3 is NOT supported), the data analysis package ROOT (v5.34.36 or later) [115] and the detector simulation Delphes (v3.3.3 or later) [1]. If any of these packages are already installed, the respective sections of the tutorial can be skipped. ...
Preprint
We present the latest developments to the CheckMATE program that allows models of new physics to be easily tested against the recent LHC data. To achieve this goal, the core of CheckMATE now contains over 60 LHC analyses of which 12 are from the 13 TeV run. The main new feature is that CheckMATE 2 now integrates the Monte Carlo event generation via Madgraph and Pythia 8. This allows users to go directly from a SLHA file or UFO model to the result of whether a model is allowed or not. In addition, the integration of the event generation leads to a significant increase in the speed of the program. Many other improvements have also been made, including the possibility to now combine signal regions to give a total likelihood for a model.
... The invariant mass spectra of the relevant fermion pairs are extracted by analyzing the event samples with ROOT [126]. In the case of the SM–L(6) interference terms, the histograms are further rescaled by |σ(C_i, int.)|/σ(SM) so that their bin content can be directly compared. The estimate of the interference term obtained with this procedure is more accurate and numerically stable than the estimate obtained e.g. ...
Preprint
We report codes for the Standard Model Effective Field Theory (SMEFT) in FeynRules -- the SMEFTsim package. The codes enable theoretical predictions for dimension six operator corrections to the Standard Model using numerical tools, where predictions can be made based on either the electroweak input parameter set {α̂_ew, m̂_Z, Ĝ_F} or {m̂_W, m̂_Z, Ĝ_F}. All of the baryon and lepton number conserving operators present in the SMEFT dimension six Lagrangian, defined in the Warsaw basis, are included. A flavour symmetric U(3)⁵ version with possible non-SM CP violating phases, a (linear) minimal flavour violating version neglecting such phases, and the fully general flavour case are each implemented. The SMEFTsim package allows global constraints to be determined on the full Wilson coefficient space of the SMEFT. As the number of parameters present is large, it is important to develop global analyses on reduced sets of parameters minimizing any UV assumptions and relying on IR kinematics of scattering events and symmetries. We simultaneously develop the theoretical framework of a "W-Higgs-Z pole parameter" physics program that can be pursued at the LHC using this approach and the SMEFTsim package. We illustrate this methodology with several numerical examples interfacing SMEFTsim with MadGraph5. The SMEFTsim package can be downloaded at https://feynrules.irmp.ucl.ac.be/wiki/SMEFT
... The various variables for event selection [17][18][19][20] are calculated for both FD and MC. The TObjectArray in the ROOT analysis framework [21] provides a flexible format in which it is possible to record the results of multiple algorithms in the Level-2 data, allowing easy comparisons between different algorithms. By taking advantage of such Level-2 data features, an efficient and detailed study of systematic uncertainty becomes feasible [16]. ...
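To illustrate the TObjArray-based bookkeeping mentioned above, the sketch below stores two placeholder algorithm results in a TObjArray and writes the array to a ROOT file as a single key. The object names and payloads are hypothetical.

    // level2_store.C -- store several algorithms' results in one TObjArray
    #include "TObjArray.h"
    #include "TNamed.h"
    #include "TFile.h"

    void level2_store() {
       TObjArray results;
       results.SetOwner(kTRUE);   // the array owns and deletes its objects

       // Each algorithm's result is stored as its own object (TNamed as a stand-in).
       results.Add(new TNamed("trackAlgoA", "result of algorithm A"));
       results.Add(new TNamed("trackAlgoB", "result of algorithm B"));

       TFile out("level2.root", "RECREATE");
       results.Write("level2Results", TObject::kSingleKey);  // one key for the array
       out.Close();
    }
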
Preprint
Full-text available
The CALorimetric Electron Telescope (CALET), launched for installation on the International Space Station (ISS) in August, 2015, has been accumulating scientific data since October, 2015. CALET is intended to perform long-duration observations of high-energy cosmic rays onboard the ISS. CALET directly measures the cosmic-ray electron spectrum in the energy range of 1 GeV to 20 TeV with a 2% energy resolution above 30 GeV. In addition, the instrument can measure the spectrum of gamma rays well into the TeV range, and the spectra of protons and nuclei up to a PeV. In order to operate the CALET onboard ISS, JAXA Ground Support Equipment (JAXA-GSE) and the Waseda CALET Operations Center (WCOC) have been established. Scientific operations using CALET are planned at WCOC, taking into account orbital variations of geomagnetic rigidity cutoff. Scheduled command sequences are used to control the CALET observation modes on orbit. Calibration data acquisition by, for example, recording pedestal and penetrating particle events, a low-energy electron trigger mode operating at high geomagnetic latitude, a low-energy gamma-ray trigger mode operating at low geomagnetic latitude, and an ultra heavy trigger mode, are scheduled around the ISS orbit while maintaining maximum exposure to high-energy electrons and other high-energy shower events by always having the high-energy trigger mode active. The WCOC also prepares and distributes CALET flight data to collaborators in Italy and the United States. As of August 31, 2017, the total observation time is 689 days with a live time fraction of the total time of approximately 84%. Nearly 450 million events are collected with a high-energy (E>10 GeV) trigger. By combining all operation modes with the excellent-quality on-orbit data collected thus far, it is expected that a five-year observation period will provide a wealth of new and interesting results.
... • ROOT ≥ 5 [21] and its dependencies. ...
Preprint
Containers are increasingly used as means to distribute and run Linux services and applications. In this paper we describe the architectural design and implementation of udocker, a tool which enables the user to execute Linux containers in user mode. We also present a few practical applications, using a range of scientific codes characterized by different requirements: from single core execution to MPI parallel execution and execution on GPGPUs.
... In order to perform the collider signature analysis our model is implemented in FeynRules [60], where the hadronic level cross section is calculated in MadGraph 5 [61] utilizing the NNPDF23 parton distribution function set [62]. Kinematic plots are then produced through the Pythia and Delphes interfaces [61,63,64] and analyzed in ROOT [65]. For the mono-jet search, a minimum jet pT of 100 GeV is used with a pseudo-rapidity cut of |η| < 5, where both the mono-g and mono-quark channels are included. ...
Preprint
The general strategy for dark matter (DM) searches at colliders currently relies on simplified models. In this paper, we propose a new t-channel UV-complete simplified model that improves the existing simplified DM models in two important respects: (i) we impose the full SM gauge symmetry including the fact that the left-handed and the right-handed fermions have two independent mediators with two independent couplings, and (ii) we include the renormalization group evolution when we derive the effective Lagrangian for DM-nucleon scattering from the underlying UV complete models by integrating out the t-channel mediators. The first improvement will introduce a few more new parameters compared with the existing simplified DM models. In this study we look at the effect this broader set of free parameters has on direct detection and the mono-X + MET (X=jet,W,Z) signatures at 13 TeV LHC while maintaining gauge invariance of the simplified model under the full SM gauge group. We find that the direct detection constraints require DM masses less than 10 GeV in order to produce phenomenologically interesting collider signatures. Additionally, for a fixed mono-W cross section it is possible to see very large differences in the mono-jet cross section when the usual simplified model assumptions are loosened and isospin violation between RH and LH DM-SM quark couplings are allowed.
... In addition to the text-based interface, a C++ interface is provided to define the input to the combination. This interface can either read basic C++ standard library data types or ROOT [10] histogram and graph classes, which are commonly used in high-energy-physics analyses. ...
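As a pointer to how a C++ interface can read ROOT histogram and graph objects as combination inputs, the sketch below opens a file and retrieves a TH1D and a TGraphErrors. The file name and object keys are assumptions made for the example.

    // read_inputs.C -- read a histogram and a graph from a ROOT file
    #include "TFile.h"
    #include "TH1D.h"
    #include "TGraphErrors.h"
    #include <cstdio>

    void read_inputs() {
       TFile *f = TFile::Open("measurement.root", "READ");  // hypothetical input file
       if (!f || f->IsZombie()) { printf("cannot open file\n"); return; }

       TH1D *h = nullptr;
       TGraphErrors *g = nullptr;
       f->GetObject("xsec_binned", h);   // binned measurement (hypothetical key)
       f->GetObject("xsec_points", g);   // point-wise measurement (hypothetical key)

       if (h) printf("histogram with %d bins read\n", h->GetNbinsX());
       if (g) printf("graph with %d points read\n", g->GetN());
       f->Close();
    }
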
Preprint
A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained by simultaneously fitting a set of nuisance parameters, representing sources of systematic uncertainties. As a result of beneficial constraints from the data, all such fitted parameters are correlated with each other. The best approach for a combination of these measurements would be the maximisation of a combined likelihood, for which the full fit model of each measurement and the original data are required. However, this information is publicly available only in rare cases. In the absence of this information, most commonly used combination methods are not able to account for these correlations between uncertainties, which can lead to severe biases as shown in this article. The method discussed here provides a solution for this problem. It relies only on the public result and its covariance or Hessian, and is validated against the combined-likelihood approach. A dedicated software package implementing this method is also presented. It provides a text-based user interface alongside a C++ interface. The latter also interfaces to ROOT classes for simple combination of binned measurements such as differential cross sections.
... We also limited the maximum inter-event duration to seven hours, once again to avoid spurious effects in the queue distribution. Our fits were done using the software ROOT [16] and compared with the procedure by Clauset et al. [17]. ...
Preprint
A number of human activities exhibit a bursty pattern, namely periods of very high activity that are followed by rest periods. Records of these processes generate time series of events whose inter-event times follow a probability distribution that displays a fat tail. The grounds for such phenomenon are not yet clearly understood. In the present work we use the freely available Wikipedia editing records to unravel some features of this phenomenon. We show that even though the probability to start editing is conditioned by the circadian 24 hour cycle, the conditional probability for the time interval between successive edits at a given time of the day is independent from the latter. We confirm our findings with the activity of posting on the social network Twitter. Our result suggests there is an intrinsic humankind scheduling pattern: after overcoming the encumbrance to start an activity, there is a robust distribution of new related actions, which does not depend on the time of day.
... These requirements could be met by implementing an alternative for the Objectivity/DB based BABAR data store as outlined above using the ROOT system [11,12] as a file based data store for the micro-DST only. In this way user analysis code only had to be relinked using the new input and output modules. ...
Preprint
A system based on ROOT for handling the micro-DST of the BaBar experiment is described. The purpose of the Kanga system is to have micro-DST data available in a format well suited for data distribution within a world-wide collaboration with many small sites. The design requirements, implementation and experience in practice after three years of data taking by the BaBar experiment are presented.
... A fake detector volume positioned at the exit of the spectrometer emulates the test detector in the simulation. For all particles entering this volume information about their type, position, momentum and time is stored to the ROOT [10] file. If required this information is used as an input to further simulation. ...
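The excerpt above describes writing the type, position, momentum, and time of particles entering a fake detector volume to a ROOT file. The sketch below shows a generic TTree layout for such records; the branch names and the single dummy entry are purely illustrative.

    // write_hits.C -- generic TTree for particle type, position, momentum, time
    #include "TFile.h"
    #include "TTree.h"

    void write_hits() {
       TFile out("fake_detector.root", "RECREATE");
       TTree tree("hits", "particles entering the fake detector volume");

       int    pdg;          // particle type (PDG code)
       double x, y, z;      // position [mm]
       double px, py, pz;   // momentum [MeV/c]
       double t;            // time [ns]
       tree.Branch("pdg", &pdg, "pdg/I");
       tree.Branch("x",  &x,  "x/D");
       tree.Branch("y",  &y,  "y/D");
       tree.Branch("z",  &z,  "z/D");
       tree.Branch("px", &px, "px/D");
       tree.Branch("py", &py, "py/D");
       tree.Branch("pz", &pz, "pz/D");
       tree.Branch("t",  &t,  "t/D");

       // One dummy entry; a real simulation would fill this per particle.
       pdg = 11; x = y = 0.; z = 10.; px = py = 0.; pz = 5.; t = 0.3;
       tree.Fill();

       tree.Write();
       out.Close();
    }
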
Preprint
A new versatile facility, LEETECH, for detector R&D, tests and calibration has been designed and constructed. It uses electrons produced by the photoinjector PHIL at LAL, Orsay, and provides a powerful tool for a wide range of R&D studies of different detector concepts, delivering "mono-chromatic" samples of low energy electrons with adjustable energy and intensity. Among other innovative instrumentation techniques, LEETECH will be used for testing various gaseous tracking detectors and studying the new Micromegas/InGrid concept, which has very promising spatial resolution and can be a good candidate for particle tracking and identification. In this paper the importance and expected characteristics of such a facility, based on detailed simulation studies, are addressed.
... The model of the tree described here has been implemented in C++ using ROOT libraries [34]. This object oriented framework has been used to analyze the results of the simulation too. ...
Preprint
Full-text available
We present a computational model to reconstruct trees of ancestors for animals with sexual reproduction. Through a recursive algorithm combined with a random number generator, it is possible to reproduce the number of ancestors for each generation and use it to constrain the maximum number of the following generation. This new model allows one to consider the reproductive preferences of particular species and to combine several trees to simulate the behavior of a population. It is also possible to obtain a description analytically, considering the simulation as a theoretical stochastic process. Such a process can be generalized so that an associated algorithm can be used to simulate other similar processes of a stochastic nature. The simulation is based on the theoretical model presented previously.
... To optimise the geometry and to assess the performance of the microlens implementation, a custom simulation based on a ray tracing Monte Carlo using the ROOT framework [15] has been developed. The calculation of the effectiveness of the microlens is evaluated for a system of nine pixels with one microlens placed on top of the central pixel. ...
Preprint
Full-text available
A novel concept to enhance the photo-detection efficiency (PDE) of silicon photomultipliers (SiPMs) has been applied and remarkable positive results can be reported. This concept uses arrays of microlenses to cover every second SiPM pixel in a checkerboard arrangement and aims to deflect the light from the dead region of the pixelised structure towards the active region in the center of the pixel. The PDE is improved up to 24%, external cross-talk is reduced by 40% compared to a flat epoxy layer, and single photon time resolution is improved. This detector development is conducted in the context of the next generation LHCb scintillating fibre tracker located in a high radiation environment with a total of 700'000 detector channels. The simulation and measurement results are in good agreement and will be discussed in this work.
... Next, the reconstructed hit pulses can be obtained after conversion back to the time domain with the inverse DFT. The DFT and inverse DFT operations in this paper are based on the Fast Fourier Transform package in ROOT [21]. The above calculation steps can be described by formula 3.1. ...
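As a pointer to the ROOT FFT interface referenced above, the sketch below performs a forward and an inverse discrete Fourier transform of a toy pulse with TVirtualFFT, ROOT's front end to FFTW. The waveform and the omitted frequency-domain filtering step are placeholders.

    // fft_demo.C -- forward and inverse DFT of a toy pulse with TVirtualFFT
    #include "TVirtualFFT.h"
    #include "TMath.h"
    #include <cstdio>

    void fft_demo() {
       const int n = 64;
       double in[n];
       for (int i = 0; i < n; ++i)                  // toy Gaussian-like pulse
          in[i] = TMath::Gaus(i, 20., 3.);

       // Forward real-to-complex transform.
       int npts = n;
       TVirtualFFT *fwd = TVirtualFFT::FFT(1, &npts, "R2C ES K");
       fwd->SetPoints(in);
       fwd->Transform();

       double re[n] = {0.}, im[n] = {0.};
       fwd->GetPointsComplex(re, im);

       // A frequency-domain filter would be applied here (omitted in this sketch).

       // Inverse complex-to-real transform; FFTW leaves the result unnormalised.
       TVirtualFFT *bwd = TVirtualFFT::FFT(1, &npts, "C2R ES K");
       bwd->SetPointsComplex(re, im);
       bwd->Transform();
       printf("first sample: in=%.4f, back-transformed=%.4f\n",
              in[0], bwd->GetPointReal(0) / n);
    }
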
Preprint
Photomultiplier tubes (PMTs) are extensively employed as photosensors in neutrino and dark matter detection. The precise charge and timing information extracted from the PMT waveform plays a crucial role in energy and vertex reconstruction. In this study, we investigate the deconvolution algorithm utilized for PMT waveform reconstruction, while enhancing the timing separation ability for pile-up hits by redesigning filters based on the time-frequency uncertainty principle. This filter design sacrifices signal-to-noise ratio (SNR) to achieve narrower pulse widths. Furthermore, we optimize the selection of signal pulses in the case of low SNR based on the Short-Time Fourier Transform (STFT). Monte Carlo data confirms that our optimization yields enhanced reconstruction performance: improving the timing separation ability for pile-up hits from ∼10 ns to 3–5 ns, while controlling the residual nonlinearity of the charge reconstruction within 1%.
... Both the signal and background samples are analyzed with the Toolkit for Multivariate Analysis (TMVA) for ROOT [33] using various multivariate classification algorithms. Events are selected by requiring the presence of the required number of objects in the final state. ...
Preprint
This paper reports on the theoretical investigation of charged Higgs bosons and their coupling to fermions within the Two Higgs Doublet Model (THDM). The study focuses on the discovery potential of charged Higgs bosons predicted in Types III and IV at the future Circular Hadron-Hadron Collider (FCC-hh) with a center-of-mass energy of √s = 100 TeV. By analyzing their decays, couplings to fermions, branching ratios, and production cross-section via pp → tH⁻, we investigate the signatures of charged Higgs bosons, including kinematical distributions based on the background processes of bb̄ quarks.
... Photons are emitted isotropically and those which reach the detector volume create banked electrons which are recorded in the PTRAC file post-processed by DRiFT. In Fig. 1(a) DRiFT post-processes the PTRAC file with no detector physics turned on, and the total energy deposition in each detector volume is plotted as a 2D histogram with ROOT (Brun and Rademakers, 1997). In the remaining plots shown in Fig. 1, the same MCNP output file is post-processed by DRiFT with varying detector physics capabilities enabled. ...
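For the 2D energy-deposition histogramming mentioned above, the sketch below books and fills a ROOT TH2D and draws it with the COLZ option. The axes, binning, and randomly generated deposits are illustrative only and are unrelated to the DRiFT/MCNP output.

    // edep_map.C -- book, fill, and draw a 2D energy-deposition histogram
    #include "TH2D.h"
    #include "TRandom3.h"
    #include "TCanvas.h"

    void edep_map() {
       // Energy deposition per detector volume (illustrative axes and binning).
       TH2D *h = new TH2D("hEdep", "Energy deposition;detector volume;E_{dep} [MeV]",
                          10, 0.5, 10.5, 100, 0., 5.);

       TRandom3 rng(1234);
       for (int i = 0; i < 100000; ++i) {
          int    vol  = rng.Integer(10) + 1;   // detector volume index 1..10
          double edep = rng.Exp(1.0);          // placeholder deposition [MeV]
          h->Fill(vol, edep);
       }

       TCanvas *c = new TCanvas("c", "Energy deposition", 800, 600);
       h->Draw("COLZ");
       c->SaveAs("edep_map.png");
    }
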
... The second layer is the so-called "Data Layer", which is used to describe the data inside the messages. Multiple backends are supported by the framework, including ROOT [5], Apache Arrow [6] for data analysis, or detector tailored formats optimised for zero copy and specially targeted for direct usage from the GPU. This allows us to map data directly in the GPU buffers for reconstruction, minimising copies and format changes. ...
Article
Full-text available
ALICE has upgraded many of its detectors for LHC Run 3 to operate in continuous readout mode recording Pb–Pb collisions at 50 kHz interaction rate without trigger. This results in the need to process data in real time at rates 100 times higher than during Run 2. In order to tackle such a challenge we introduced O², a new computing system and the associated infrastructure. Designed and implemented during the LHC long shutdown 2, O² is now in production taking care of all the data processing needs of the experiment. O² is designed around the message passing paradigm, enabling resilient, parallel data processing for both the synchronous (to LHC beam) and asynchronous data taking and processing phases. The main purpose of the synchronous online reconstruction is detector calibration and raw data compression. This synchronous processing is dominated by the TPC detector, which produces by far the largest data volume, and TPC reconstruction runs fully on GPUs. When there is no beam in the LHC, the powerful GPU-equipped online computing farm of ALICE is used for the asynchronous reconstruction, which creates the final reconstructed output for analysis from the compressed raw data. Since the majority of the compute performance of the online farm is in the GPUs, and since the asynchronous processing is not dominated by the TPC in the way the synchronous processing is, there is an ongoing effort to offload a significant amount of compute load from other detectors to the GPU as well.
... The processing of ATLAS detector data and simulation is a multi-step procedure: Information detected by the different ATLAS sub-detectors for LHC collision events and simulation is reconstructed and stored in ROOT files [2]. A data product containing every event and all analysis objects is called Analysis Object Data (AOD) or primary AOD. ...
Article
Full-text available
For HEP event processing, data is typically stored in column-wise synchronized containers, such as most prominently ROOT's TTree, which have been used for several decades to store by now over 1 exabyte. These containers can combine row-wise association capabilities needed by most HEP event processing frameworks (e.g. Athena for ATLAS) with column-wise storage, which typically results in better compression and more efficient support for many analysis use-cases. One disadvantage is that these containers, TTree in the HEP use-case, must contain the same attributes for each entry/row (representing events), which can make extending the list of attributes very costly in storage, even if those are only required for a small subsample of events. Since the initial design, the ATLAS software framework features powerful navigational infrastructure to allow storing custom data extensions for subsamples of events in separate, but synchronized containers. This allows adding event augmentations to ATLAS standard data products (such as DAOD-PHYS or PHYSLITE) avoiding duplication of those core data products, while limiting their size increase. For this functionality, the framework does not rely on any associations made by the I/O technology (i.e. ROOT); however, it supports TTree friends and builds the associated index to allow for analysis outside of the ATLAS framework. A prototype based on the Long-Lived Particle search is implemented and preliminary results with this prototype will be presented. At this point, augmented data are stored within the same file as the core data. Storing them in separate files will be investigated in the future, as this could provide more flexibility, e.g. certain sites may only want a subset of several augmentations or augmentations can be archived to tape once their analysis is complete.
... HistFactory is a mathematical framework for building statistical models of binned analyses across different channels, see Sec. 2.1. RooFit [1] is a framework that already allows for Bayesian inference for HistFactory models; its range of application, however, is limited by the lack of gradient implementations and of advanced diagnostics for Bayesian inference results, owing to the historical focus on frequentist inference in HEP. An example of a library that allows for advanced Bayesian inference for particle and astro-physics is BAT.jl [2], but tools to construct HistFactory models within Julia are not yet readily available. ...
Article
Full-text available
bayesian_pyhf is a Python package that allows for the parallel Bayesian and frequentist evaluation of multi-channel binned statistical models. The Python library pyhf is used to build such models according to the HistFactory framework and already includes many frequentist inference methodologies. The pyhf-built models are then used as the data-generating model for Bayesian inference and evaluated with the Python library PyMC. Based on Markov Chain Monte Carlo methods, PyMC allows for Bayesian modelling and, together with the arviz library, offers a wide range of Bayesian analysis tools.
... As the parameters of interest, 5-dimensional information (position in the x, y, and z directions, energy (E), and time (t)) and scattering angle are captured in addition to recording the particle types as the ground truth information. The collected information is initially analyzed using the ROOT framework [11], following which further analysis can be performed in Python. The primary goal is to differentiate the backscattered particles from those of forward-traveling and characterize the parametrical differences between the backscattered particles originating from the dry Lunar/Martian surface and those from the frozen lake scenario. ...
... An energy window of [350,750] keV was used in the simulation; the 350 keV lower threshold is the value used in clinical settings for the commercial dual-panel scanner Naviscan PEM Flex Solo II [24] (previously used also by our group for PEM experimental studies [25]) and for the MAMMI dedicated breast PET [26]. In all the simulations, hit and coincidence events were registered in ROOT files [27], followed by further off-line analysis with programs written in C++ and Matlab R2020a (The MathWorks Inc., Natick, Massachusetts, USA). ...
Article
Full-text available
Positron Emission Mammography (PEM) is a valuable molecular imaging technique for breast studies using pharmaceuticals labeled with positron emitters and dual-panel detectors. PEM scanners normally use large scintillation crystals coupled to sensitive photodetectors. Multiple interactions of the 511 keV annihilation photons in the crystals can result in event mispositioning leading to a negative impact in radiopharmaceutical uptake quantification. In this work, we report the study of crystal scatter effects of a large-area dual-panel PEM system designed with either monolithic or pixelated lutetium yttrium orthosilicate (LYSO) crystals using the Monte Carlo simulation platform GATE. The results show that only a relatively small fraction of coincidences (~20%) arise from events where both coincidence photons undergo single interactions (mostly through photoelectric absorption) in the crystals. Most of the coincidences are events where at least one of the annihilation photons undergoes a chain of Compton scatterings: approximately 79% end up in photoelectric absorption while the rest (<1%) escape the detector. Mean positioning errors, calculated as the distance between first hit and energy weighted (assigned) positions of interaction, were 1.70 mm and 1.92 mm for the monolithic and pixelated crystals, respectively. Reconstructed spatial resolution quantification with a miniDerenzo phantom and a list mode iterative reconstruction algorithm shows that, for both crystal types, 2 mm diameter hot rods were resolved, indicating a relatively small effect in spatial resolution. A drastic reduction in peak-to-valley ratios for the same hot-rod diameters was observed, up to a factor of 14 for the monolithic crystals and 7.5 for the pixelated ones.
... FunTuple utilises these functors to compute a diverse range of observables and writes a TTree in the ROOT N-tuple format. The N-tuple format is widely used in the High Energy Physics community to store flattened data in a tabular format [29]. Furthermore, the component's lightweight design ensures simplified maintenance and seamless knowledge transfer. ...
Article
Full-text available
The offline software framework of the LHCb experiment has undergone a significant overhaul to tackle the data processing challenges that will arise in the upcoming Run 3 and Run 4 of the Large Hadron Collider. This paper introduces FunTuple, a novel component developed for offline data processing within the LHCb experiment. This component enables the computation and storage of a diverse range of observables for both reconstructed and simulated events by leveraging on the tools initially developed for the trigger system. This feature is crucial for ensuring consistency between trigger-computed and offline-analysed observables. The component and its tool suite offer users flexibility to customise stored observables, and its reliability is validated through a full-coverage set of rigorous unit tests. This paper comprehensively explores FunTuple’s design, interface, interaction with other algorithms, and its role in facilitating offline data processing for the LHCb experiment for the next decade and beyond.
... It takes all of the incoming signals and saves for each one the timestamp, energy and PSD. The data from each measurement are saved in separate ROOT files on the computer it is connected to, to be analyzed later with ROOT [17]. Activation measurements are a way of measuring the yield of a reaction in a material by measuring the decay of a resulting isotope. ...
Thesis
Full-text available
(α,n) reactions are the main contribution to the neutron background in deep underground experiments, like those trying to detect WIMPs, dark matter candidates. In the context of the Measurement of Alpha Neutron Yields (MANY) collaboration, an effort to carry out measurements of (α,n) reactions, this work assesses the viability of using the HiSPANoS neutron line at the Centro Nacional de Aceleradores (CNA). To that end, we measure (α,n) thick target yields at 5.5, 7.0, 7.5, 8.25 and 8.5 MeV energies and obtain a large systematic error, a factor 1.90(9), that remains unexplained. We also measure the energy spectra of the neutrons produced by (α,n) reactions at 5.5, 7.0 and 8.25 MeV by time-of-flight, and find good agreement with data from the literature where it is available, if we apply a simple deconvolution algorithm.
... The simulated data were analyzed using ROOT (ROOT 6.22.06) [14]. As a first step, p̄'s generated by primary n̄'s are required to produce ionization signals in at least one of the ITS layers. ...
Article
Full-text available
Simulations to evaluate the feasibility of antineutron identification and kinematic characterization via the hadronic charge exchange (CEX) interaction n + n̄ → p + p̄ are reported. The target neutrons are those composing the silicon nuclei of the inner tracking devices present in the Large Hadron Collider experiments ALICE, ATLAS, and CMS. Simulations of pp collisions in PYTHIA were carried out at different energies to investigate n̄ production and energy spectra. These simulations produced decreasing power-law n̄ energy spectra. Then, two types of GEANT4 simulations were performed, placing an n̄ point source at the ALICE primary vertex as a working example. In the first simulation, the kinetic energy E_k was kept at an arbitrary fixed value (1 GeV) to develop an n̄ identification and kinematics reconstruction protocol. The second GEANT4 simulation used the n̄ energy spectra resulting from PYTHIA at √s_pp = 13 TeV. In both GEANT4 simulations, the occurrence of CEX interactions was identified by the unique outgoing p̄. The simplified simulation allowed a 0.11% CEX-interaction identification efficiency to be estimated at E_k = 1 GeV. The p CEX-partner identification is challenging because of the presence of silicon nucleus-fragmentation protons. Momentum correlations between the n̄ and all possible p̄p pairs showed that p CEX-partner identification and n̄ kinematics reconstruction correspond to minimal momentum-loss events. The use of inner tracking system dE/dx information is found to improve n̄ identification and kinematic characterization in both GEANT4 simulations. The final protocol applied to the realistic GEANT4 simulation resulted in an n̄ identification and kinematic reconstruction efficiency of 0.006%, based solely on p̄p pair observables. If applied to the ALICE minimum-bias Run 2 pp data sample at √s_pp = 13 TeV, this technique is found to have the potential to identify and reconstruct the kinematics of 4.3×10⁸ n̄'s, illustrating the feasibility of the method.
... The EDM objects are defined in a single yaml [8] file. The PODIO [9] package is used to generate the actual C++ classes and provides a serialization mechanism to store/retrieve data to/from ROOT [10] files. The EDM is not finalized yet, as the software development process often requires the introduction of new data types and the adjustment of existing ones. ...
Article
Full-text available
The Super Charm-Tau factory (a high-luminosity electron-positron collider with a 3–7 GeV center-of-mass energy range) experiment project is under development by a consortium of Russian scientific and educational organizations. The article describes the present status of the Super Charm-Tau detector fast simulation and the algorithms on which it is based, along with example usage and a demonstration of fast simulation results.
... The deposited energies of the gamma-rays interacting in the detector were recorded and resampled from a Gaussian distribution to account for the energy resolution of the detector, using the ROOT data analysis framework (v6.18.04) [17]. The resolution, defined by the FWHM, was calculated with the formula [18] ...
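The Gaussian resampling described above can be reproduced generically with ROOT's random number generator, as in the sketch below. The resolution model (FWHM proportional to sqrt(E), with sigma = FWHM/2.355) and the 4.44 MeV line are assumptions made for the example, not the formula cited as [18].

    // smear_energy.C -- resample deposited energies with a Gaussian resolution model
    #include "TH1D.h"
    #include "TRandom3.h"
    #include "TMath.h"

    void smear_energy() {
       TH1D *h = new TH1D("hSmeared", "Smeared energy;E [MeV];counts", 200, 0., 8.);
       TRandom3 rng(42);

       // Assumed resolution: FWHM = 0.05*sqrt(E), converted to sigma.
       auto sigma = [](double e) { return 0.05 * TMath::Sqrt(e) / 2.355; };

       // Toy deposited energies; in the real analysis these come from the simulation.
       for (int i = 0; i < 100000; ++i) {
          double edep = 4.44;                      // e.g. a prompt-gamma line [MeV]
          h->Fill(rng.Gaus(edep, sigma(edep)));    // resample with Gaussian smearing
       }
       h->Draw();
    }
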
Article
Full-text available
Prompt Gamma-ray Spectroscopy (PGS) in conjunction with the Monte Carlo Library Least Squares (MCLLS) approach was investigated for the purposes of range monitoring in proton therapy through Monte Carlo simulations. Prompt gamma-rays are produced during treatment and can be correlated to the range of the proton beam in the tissue. In contrast to established approaches, MCLLS does not rely on the identification of specific photopeaks. Instead it treats each individual constituent as a library spectrum and calculates coefficients for each spectrum, and therefore takes both the photopeaks and the Compton continuum into account. It can thus be applied to organic scintillators traditionally not used for energy spectroscopy due to their low Z number and density. Preliminary results demonstrate that the proposed approach returns a strong linear correlation between the range of the primary proton beam and the calculated library coefficients, depending on the composition of libraries. This can be exploited for range monitoring.
Preprint
Full-text available
ATLAS Open Data for Education delivers proton–proton collision data from the ATLAS experiment at CERN to the public along with open-access resources for education and outreach. To date ATLAS has released a substantial amount of data from 8 TeV and 13 TeV collisions in an easily accessible format and supported by dedicated documentation, software, and tutorials to ensure that everyone can access and exploit the data for different educational objectives. Along with datasets, ATLAS also provides data visualisation tools and interactive web based applications for studying the data, along with Jupyter Notebooks and downloadable code enabling users to further analyse data for known and unknown physics cases. The Open Data educational platform which hosts the data and tools is used by tens of thousands of students worldwide, and we present the project development, lessons learnt, impacts, and future goals.
Article
Full-text available
Tumor motion is a major challenge for scanned ion-beam therapy. In the case of lung tumors, strong under- and overdosage can be induced due to the high density gradients between the tumor- and bone tissues compared to lung tissues. This work proposes a non-invasive concept for 4D monitoring of high density gradients in carbon ion beam therapy, by detecting charged fragments. The method implements CMOS particle trackers that are used to reconstruct the fragment vertices, which define the emission points of nuclear interactions between the primary carbon ions and the patient tissues. A 3D treatment plan was optimized to deliver 2 Gy to a static spherical target volume. The goodness of the method was assessed by comparing reconstructed vertices measured in two static cases to the ones in a non-compensated moving case with an amplitude of 20 mm. The measurements, performed at the Marburg Ion-Beam Therapy Center (MIT), showed promising results to assess the conformity of the delivered dose. In particular, overshoots induced by high density gradients due to motion could be measured with 83.0 ± 1.5% and 92.0 ± 1.5% reliability, based on the ground truth provided by the time-resolved motor position and depending on the considered volume and the iso-energy layers.
Article
Full-text available
Abstract We develop the formalism for production of a fully heavy tetraquark and apply it to the calculation of pp → T4c + X cross-sections. We demonstrate that the production cross-section of a fully heavy tetraquark, even if it is a diquark-antidiquark cluster, can be obtained in the meson-like basis, for which the spin-color projection technique is well established. Prompted by the recent LHCb, ATLAS and CMS data, we perform a pQCD calculation of O(α_s⁵) short-distance factors in the dominant channel of gluon fusion, and match these to the four-body T4c wave functions in order to obtain the unpolarized T4c (0⁺⁺, 1⁺⁻, 2⁺⁺) cross-sections. The novelty in comparison with the recently published article [1] lies in the fact that we predict the absolute values as well as the dσ/dp_T spectra in the kinematic ranges accessible at the ongoing LHC experiments. From the comparison with the signal yield at LHCb we derive the constraints on the Φ · Br(J/ψ J/ψ) (reduced wave function times branching) product for the T4c candidates for X(6900) and observe that X(6900) is compatible with a 2⁺⁺(2S) state.
Technical Report
Full-text available
This report summarises the activities and main achievements of the CERN strategic R&D programme on technologies for future experiments during the year 2021.
Article
Full-text available
Abstract This study provides an analysis of atmospheric neutrino oscillations at the ESSnuSB far detector facility. The prospects of the two cylindrical Water Cherenkov detectors with a total fiducial mass of 540 kt are investigated over 10 years of data taking in the standard three-flavor oscillation scenario. We present the confidence intervals for the determination of mass ordering, the θ23 octant, as well as for the precisions on sin²θ23 and |Δm²31|. It is shown that mass ordering can be resolved by 3σ CL (5σ CL) after 4 years (10 years) regardless of the true neutrino mass ordering. Correspondingly, the wrong θ23 octant could be excluded by 3σ CL after 4 years (8 years) in the case where the true neutrino mass ordering is normal ordering (inverted ordering). The results presented in this work are complementary to the accelerator neutrino program in the ESSnuSB project.
Article
Full-text available
Modeling contact between deformable solids is a fundamental problem in computer animation, mechanical design, and robotics. Existing methods based on C⁰‐discretizations—piece‐wise linear or polynomial surfaces—suffer from discontinuities and irregularities in tangential contact forces, which can significantly affect simulation outcomes and even prevent convergence. In this work, we show that these limitations can be overcome with a smooth surface representation based on Implicit Moving Least Squares (IMLS). In particular, we propose a self collision detection scheme tailored to IMLS surfaces that enables robust and efficient handling of challenging self contacts. Through a series of test cases, we show that our approach offers advantages over existing methods in terms of accuracy and robustness for both forward and inverse problems.
Article
Full-text available
Background Gold nanoparticles (GNPs) accumulated within tumor cells have been shown to sensitize tumors to radiotherapy. From a physics point of view, the observed GNP‐mediated radiosensitization is due to various downstream effects of the secondary electron (SE) production from internalized GNPs such as GNP‐mediated dose enhancement. Over the years, numerous computational investigations on GNP‐mediated dose enhancement/radiosensitization have been conducted. However, such investigations have relied mostly on simple cellular geometry models and/or artificial GNP distributions. Thus, it is at least desirable, if not necessary, to conduct further investigations using cellular geometry models that properly reflect realistic cell morphology as well as internalized GNP distributions at the nanoscale. Purpose The primary aim of this study was to develop a nanometer‐resolution geometry model of a GNP‐laden tumor cell for computational investigations of GNP‐mediated dose enhancement/radiosensitization. The secondary aim was to demonstrate the utility of this model by quantifying GNP‐induced SE tracks/dose distribution at sub‐cellular levels for further validation of a nanoscopic dose point kernel (nDPK) method against full‐fledged Geant4 Monte Carlo (MC) simulation. Methods A transmission electron microscopy (TEM) image of a single cell showing cytoplasm, cellular nucleus, and internalized GNPs in the cellular endosome was segmented into sub‐cellular levels based on pixel value thresholding. A corresponding material density was allocated to each pixel, and, by adding a thickness, each pixel was transformed to a geometric voxel and imported as a Geant4‐acceptable input geometry file. In Geant4‐Penelope MC simulation, a clinical 6 MV photon beam was applied, vertically or horizontally to the cell surface, and energy deposition to the cellular nucleus and cytoplasm, due to SEs emitted by internalized GNPs, was scored. Next, nDPK calculations were performed by generating virtual electron tracks from each GNP voxel to all nucleus and cytoplasm voxels. Subsequently, another set of Geant4 simulation was performed with both Penelope and DNA physics models under the geometry closely mimicking in vitro cell irradiation with a clinical 6 MV photon beam, allowing for derivation of nDPK specific to this geometry and further comparison between Gean4 simulation and nDPK method. Results The Geant4‐calculated SE tracks and associated energy depositions showed significant dependence on photon incidence angle. For perpendicular incidence, nDPK results showed good agreement (average percentage pixel‐to‐pixel difference of 0.4% for cytoplasm and 0.5% for nucleus) with Geant4 results, while, for parallel incidence, the agreement became worse (–1.7%–0.7% for cytoplasm and –5.5%–0.8% for nucleus). Under the 6 MV cell irradiation geometry, nDPK results showed reasonable agreement (pixel‐to‐pixel Pearson's product moment correlation coefficient of 0.91 for cytoplasm and 0.98 for nucleus) with Geant4 results. Conclusions The currently developed TEM‐based model of a GNP‐laden cell offers unprecedented details of realistic intracellular GNP distributions for nanoscopic computational investigations of GNP‐mediated dose enhancement/radiosensitization. 
A benchmarking study performed with this model showed reasonable agreement between Geant4- and nDPK-calculated intracellular dose deposition by SEs emitted from internalized GNPs, especially under perpendicular incidence (a popular cell irradiation geometry) and when the Geant4-Penelope physics model was used.
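To illustrate the dose-point-kernel idea behind an nDPK-type calculation, the C++ sketch below accumulates contributions from every GNP-containing voxel into a set of target (nucleus or cytoplasm) voxels through a radial kernel. The kernel shape, the voxel pitch, and the names kernel and accumulateDose are placeholders for illustration; they are not the kernel data or code of the cited study.

// Toy dose-point-kernel accumulation on a pixel/voxel grid: each GNP voxel
// deposits into every target voxel according to a radial kernel k(r).
#include <cmath>
#include <cstdio>
#include <vector>

struct Voxel { int ix, iy; };   // 2-D pixel indices, as in a segmented TEM image

// Hypothetical radial kernel: relative dose at distance r (in micrometers).
double kernel(double r_um)
{
  return std::exp(-r_um) / (r_um * r_um + 1e-6);   // placeholder shape only
}

// Sum kernel contributions from all GNP voxels into each target voxel.
std::vector<double> accumulateDose(const std::vector<Voxel>& gnp,
                                   const std::vector<Voxel>& target,
                                   double pitch_um)
{
  std::vector<double> dose(target.size(), 0.0);
  for (std::size_t t = 0; t < target.size(); ++t)
    for (const Voxel& s : gnp) {
      const double dx = (target[t].ix - s.ix) * pitch_um;
      const double dy = (target[t].iy - s.iy) * pitch_um;
      dose[t] += kernel(std::sqrt(dx * dx + dy * dy));
    }
  return dose;
}

int main()
{
  // One GNP voxel at the origin, two target voxels, 50 nm pixel pitch.
  std::vector<Voxel> gnp = {{0, 0}};
  std::vector<Voxel> target = {{1, 0}, {3, 4}};
  std::vector<double> d = accumulateDose(gnp, target, 0.05);
  std::printf("relative dose: %g, %g\n", d[0], d[1]);
  return 0;
}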
Article
Full-text available
A chemistry module has been implemented in Geant4-DNA since Geant4 version 10.1 to simulate the radiolysis of water after irradiation. It has been used in a number of applications, including the calculation of G-values and early DNA damage, allowing comparison with experimental data. Since the first version, numerous modifications have been made to the module to improve computational efficiency and to extend the simulation to homogeneous kinetics in bulk solution. With these new developments, new applications have been proposed and released as Geant4 examples, showing how to use the chemical processes and models. This work reviews the implemented models and the application developments for modeling water radiolysis in Geant4-DNA, as reported in the ESA BioRad III Project.
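The G-values mentioned above are defined as the number of chemical species of a given type produced per 100 eV of deposited energy. The short C++ sketch below shows only this bookkeeping step; the species count and deposited energy would come from a Geant4-DNA chemistry run, which is not reproduced here, and the numbers used in main are hypothetical.

// G-value: number of species per 100 eV of deposited energy.
#include <cstdio>

double gValue(double nSpecies, double energyDeposited_eV)
{
  return nSpecies / (energyDeposited_eV / 100.0);
}

int main()
{
  // Hypothetical tally: 2.6e4 OH radicals for 1 MeV of deposited energy
  // corresponds to G(OH) = 2.6 per 100 eV.
  std::printf("G(OH) = %.2f per 100 eV\n", gValue(2.6e4, 1.0e6));
  return 0;
}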
Article
Full-text available
The Deep Underground Neutrino Experiment (DUNE) has so far represented data using a combination of custom data formats and formats based on ROOT I/O. Recently, DUNE has begun using the Hierarchical Data Format (HDF5) for some of its data storage applications. HDF5 provides high-performance, low-overhead I/O in DUNE's data acquisition (DAQ) environment. DUNE will use HDF5 to record raw data from the ProtoDUNE-II Horizontal Drift (HD) and Vertical Drift (VD) detectors and from a number of test stands. Dedicated I/O modules have been developed to read the HDF5 data from these detectors into the offline framework for reconstruction, both directly and via XRootD. HDF5 is also commonly used on high-performance computing (HPC) systems and is well suited for AI/ML applications. The DUNE software stack contains modules that export data from an offline job in HDF5 format so that the data can be processed by external AI/ML software. The collaboration is also developing strategies to incorporate HDF5 into the detector simulation chains.
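To give a flavor of this kind of raw-data storage, the C++ sketch below writes a block of ADC samples with the standard HDF5 C API. The file name, group and dataset paths, and array shape are illustrative assumptions; they do not reflect DUNE's actual record layout or DAQ software.

// Write a 2-D block of 16-bit ADC samples (channels x ticks) to an HDF5 file.
#include <hdf5.h>
#include <vector>

int main()
{
  const hsize_t nChannels = 4, nTicks = 1024;
  std::vector<short> adc(nChannels * nTicks, 0);            // placeholder waveform data

  hid_t file  = H5Fcreate("toy_raw.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
  hid_t group = H5Gcreate2(file, "/raw", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

  hsize_t dims[2] = {nChannels, nTicks};
  hid_t space = H5Screate_simple(2, dims, nullptr);
  hid_t dset  = H5Dcreate2(group, "adc", H5T_STD_I16LE, space,
                           H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

  H5Dwrite(dset, H5T_NATIVE_SHORT, H5S_ALL, H5S_ALL, H5P_DEFAULT, adc.data());

  H5Dclose(dset);
  H5Sclose(space);
  H5Gclose(group);
  H5Fclose(file);
  return 0;
}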
Article
Liquid argon time projection chambers play a crucial role in neutrino oscillation and dark matter experiments. Detecting scintillation light in these chambers is challenging due to the short wavelengths in the VUV range and the extremely low cryogenic temperatures (~87 K) at which the sensors operate. To take advantage of the higher photon detection efficiency (PDE) in the visible range, wavelength shifters (WLS) are widely used in the community. The Hamamatsu VUV4 S13370-6075CN silicon photomultipliers (SiPMs) are VUV-sensitive sensors that can detect VUV light directly, without a WLS, providing an improved PDE at these short wavelengths, while still being able to detect visible light from a WLS if needed. The manufacturer provides a complete characterization of these sensors at room temperature. In this work we present the experimental setups developed to measure the PDE of VUV4 SiPMs at cryogenic temperatures for wavelengths ranging from 127 nm to 570 nm.
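One common way to extract the mean number of detected photoelectrons per light pulse, from which a PDE estimate follows, is the Poisson zero-peak method, mu = -ln(P(0 p.e.)), with a dark-count correction. The C++ sketch below shows only this generic calculation; the zero-peak fractions and the incident photon number are hypothetical, and the setups described above may analyze their data differently.

// Estimate mean detected photoelectrons per pulse and a PDE from zero-peak
// fractions (Poisson statistics), with a dark-count correction.
#include <cmath>
#include <cstdio>

double meanPE(double zeroFracLight, double zeroFracDark)
{
  // mu = -ln P(0) with light, minus the dark-only contribution.
  return -std::log(zeroFracLight) + std::log(zeroFracDark);
}

int main()
{
  const double mu = meanPE(0.20, 0.98);   // hypothetical zero-peak fractions
  const double nIncident = 12.0;          // hypothetical photons per pulse on the SiPM
  std::printf("mu = %.2f p.e., PDE ~ %.1f%%\n", mu, 100.0 * mu / nIncident);
  return 0;
}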
Article
Full-text available
Background Simulating tomographic imaging systems with fan-beam geometry, estimating the scattered-beam profile with Monte Carlo techniques, and correcting for scatter using the estimated data remain ongoing challenges in medical imaging. The key requirement is to verify the simulation results and the accuracy of the scatter correction. This study aims to simulate a 128-slice computed tomography (CT) scanner using the Geant4 Application for Tomographic Emission (GATE) program, to assess the validity of this simulation, and to estimate the scatter profile. Finally, the results obtained after scatter correction are compared quantitatively. Methods A 128-slice CT scanner with fan-beam geometry, together with two phantoms, was simulated with the GATE program. Two validation methods were used to check the simulation results. The scatter estimate obtained from the simulation was used in a projection-based scatter correction technique, and the post-correction results were analyzed using four quantities: pixel intensity, CT number inaccuracy, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR). Results Both validation methods confirmed the accuracy of the simulation. In the quantitative comparison before and after scatter correction, the pixel intensity patterns were close to each other and the CT number inaccuracy was reduced to below 10%. Moreover, CNR and SNR increased by 30%–65% in all studied regions. Conclusion The comparison of the results before and after scatter correction shows improved CNR and SNR, a reduction of the cupping artifact according to the pixel intensity pattern, and enhanced CT number accuracy.
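For reference, the image-quality metrics quoted above can be computed from region-of-interest (ROI) statistics. The C++ sketch below uses one common convention, SNR = mean/sigma of the ROI and CNR = |mean_ROI - mean_background| / sigma_background; the exact definitions and the sample pixel values are assumptions, not those of the study.

// Compute SNR and CNR from ROI and background pixel samples.
#include <cmath>
#include <cstdio>
#include <vector>

static double mean(const std::vector<double>& v)
{
  double s = 0.0;
  for (double x : v) s += x;
  return s / v.size();
}

static double stddev(const std::vector<double>& v)
{
  const double m = mean(v);
  double s = 0.0;
  for (double x : v) s += (x - m) * (x - m);
  return std::sqrt(s / v.size());
}

int main()
{
  std::vector<double> roi = {102, 98, 101, 99, 100};   // hypothetical HU values
  std::vector<double> bkg = {2, -1, 0, 1, -2};

  const double snr = mean(roi) / stddev(roi);
  const double cnr = std::fabs(mean(roi) - mean(bkg)) / stddev(bkg);
  std::printf("SNR = %.1f, CNR = %.1f\n", snr, cnr);
  return 0;
}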