Comput Softw Big Sci (2025) 9:2
https://doi.org/10.1007/s41781-024-00128-x
Research
Total Cost of Ownership and Evaluation of Google Cloud
Resources for the ATLAS Experiment at the LHC
The ATLAS Collaboration
CERN, Geneva, Switzerland (e-mail: atlas.publications@cern.ch)
Received: 24 May 2024 / Accepted: 27 September 2024
© The Author(s) 2025
Abstract The ATLAS Google Project was established as
part of an ongoing evaluation of the use of commercial clouds
by the ATLAS Collaboration, in anticipation of the poten-
tial future adoption of such resources by WLCG grid sites
to fulfil or complement their computing pledges. Seamless
integration of Google cloud resources into the worldwide
ATLAS distributed computing infrastructure was achieved
at large scale and for an extended period of time, and hence
cloud resources are shown to be an effective mechanism to
provide additional, flexible computing capacity to ATLAS.
For the first time a total cost of ownership analysis has been
performed, to identify the dominant cost drivers and explore
effective mechanisms for cost control. Network usage sig-
nificantly impacts the costs of certain ATLAS workflows,
underscoring the importance of implementing such mech-
anisms. Resource bursting has been successfully demon-
strated, whilst exposing the true cost of this type of activity.
A follow-up to the project is underway to investigate meth-
ods for improving the integration of cloud resources in data-
intensive distributed computing environments and reducing
costs related to network connectivity, which represents the
primary expense when extensively utilising cloud resources.
Executive Summary
The ATLAS Google Project was established to continue an
evaluation of commercial clouds, in anticipation of the poten-
tial future adoption of such resources by WLCG grid sites
to fulfil or complement their computing pledges to ATLAS.
Cost estimates of commercial cloud resources have been
done within ATLAS before, but this is the first time a total
cost of ownership evaluation was performed for a long-term
15-month period. Whilst commercial cloud pricing struc-
tures are usually fine grained, like most clients ATLAS has
negotiated a flat rate subscription agreement with Google for
the duration of the project. Therefore, the method employed
here is to analyse the relative contributions of various com-
ponents within the cloud service to the total cost of owner-
ship, including compute, storage, and network, for different
ATLAS workflows and under different operating conditions,
and identify the dominant cost drivers.
A substantial amount of technical development work was
dedicated to the seamless integration of cloud resources into
the ATLAS distributed computing infrastructure, employ-
ing the same software stack, primarily through cloud-native
interfaces like Kubernetes and signed HTTP URLs follow-
ing a cloud signature standard like S3v4. Care was taken
to avoid vendor-specific choices and technology to mitigate
the risk of potential price volatility in scenarios where one
resource or service provider has a monopoly. The ATLAS
Google Project has been key to testing and validating these
cloud-native interfaces at scale. Once the interfaces were
established, the operation of an ATLAS
site in the cloud has proven to be very effective, unlocking
capabilities that would be either unfeasible or considerably
time-consuming in an on-premises setting. For example, it
has enabled rapid scaling of the number of processing jobs
within a few hours and the deployment of non-x86 architec-
ture resources on a large scale, tasks that could ordinarily
take months to accomplish.
From the technical perspective, the project was a success,
demonstrating that an ATLAS site can be deployed in a com-
mercial cloud at a very large scale and in a very effective
manner, requiring little additional operational effort. No sig-
nificant technical issue was discovered to prevent the exper-
iment routinely employing such resources in the future, and
the existing workflow and data management services and
software packages used by ATLAS are found to be adequate
for effectively managing the operation of a significant amount
of resources in the Google Cloud Platform.
Over the course of the project, running jobs on the ATLAS
Google site may be broken down into several phases. The ini-
tial phase, which is described in some detail, was a process of
familiarisation, learning how the site behaves, and applying
necessary changes to bring the respective costs of the differ-
ent services under control. This was followed by an extended
period of running individual ATLAS workflows. Bursting to
a much greater number of cores was then examined, fol-
lowed by the final phase, where for the last few months of
the project both the CPU and storage were expanded to the
size of a typical large ATLAS site.
This project has shown that commercial cloud computing is
an effective technical mechanism for ATLAS for providing
additional CPU resources at the level of a large WLCG site.
Resource bursting was successfully demonstrated, where
available CPU resources can be increased by 100,000 addi-
tional cores within an hour and with no additional opera-
tional overhead. The utilisation of cloud-based storage was
also demonstrated, and the impact of network costs evaluated.
Network egress costs can be very high and currently dominate
the overall cost depending on the workloads run at the cloud
site. This study has quantified this effect, whilst also con-
firming that the ATLAS data management software, Rucio,
includes features that allow network traffic and derived costs
to be effectively controlled. The ATLAS computing model
currently relies on plentiful network connectivity between
all sites, but it might evolve to reduce data traffic if the actual
network costs were exposed.
By leveraging the Google Cloud Subscription Agreement
pricing model, ATLAS has effectively harnessed between
three and four times the resources compared to what the same
investment would deliver for the list-price. This underscores
the vital importance of establishing such agreements with
cloud providers, which serve as essential tools for accessing
significant volume discounts and ensuring cost predictabil-
ity, while retaining the flexibility and scalability advantages
inherent to cloud services. In essence, these agreements
are not merely advantageous but rather a prerequisite for
enabling large-scale cloud deployments. As such, the list-
price should therefore be seen as the upper limit of the actual
price paid for large-scale cloud services.
Whilst many valuable insights into the use of commercial
clouds have been provided, it is still possible to further
develop the ATLAS workload and data management soft-
ware stack in order to integrate commercial clouds in the
most efficient way. Key areas for future work include evalu-
ating how the private cloud and LHC research network infras-
tructure can be interconnected and how the orchestration of
data and workflows can provide maximal gains in the perfor-
mance and flexibility of the computing model with minimal
additional cost. The full implementation of these develop-
ments is important to enable exploring a potential evolution
of the ATLAS computing model that tackles the issue of net-
work costs. An extension of the project would also enable the
current, wide ranging R&D programme to continue, which
exploits the elastic availability of special resources such as
GPUs and alternative architectures such as ARM, otherwise
not readily available to ATLAS, enabling fast validation and
benchmarking of new architectures without the need to make
upfront investment in hardware.
1 Introduction
The ATLAS experiment [1] at the LHC [2] employs distributed
computing resources of up to one million computing cores,
350 PB of disk and over 500 PB of tape storage.
These resources comprise the Tier-0 at CERN, the Tier-1 and
Tier-2 Worldwide LHC Computing Grid (WLCG) sites [3,4],
opportunistic resources at High Performance Computing
(HPC) sites and cloud computing providers, as well as volun-
teer computing. In recent years such non-WLCG resources,
particularly from HPC sites, have made increasingly sig-
nificant contributions to ATLAS computing, a trend that is
expected to continue in the future.
The use of commercial clouds has been investigated by all
LHC experiments [5–8], and was recently reviewed by the US
ATLAS and CMS communities as part of a blueprint publi-
cation [9]. ATLAS has examined the viability of both Google
and Amazon cloud resources [10], including the integration
of the two core components of the distributed computing
environment: the workload management system PanDA [11]
and the data management system Rucio [12]. These initial
studies showed that such resources can be adopted for both
the ATLAS production and user analysis workflows, albeit
at a limited capacity, as well as demonstrating the potential
for future R&D [10].
As the start of Run 4 and the HL-LHC [13] era approaches,
the need for additional CPU and especially disk resources
continues to grow, to satisfy the computing requirements
of the experiments [14,15]. At the same time, WLCG sites
are increasingly exploring methods to utilise new resources,
aiming to fulfil their pledges in the most cost-effective and
energy-efficient manner while also making emerging tech-
nologies accessible to users. With this in mind, the ATLAS
Google Project (AGP) was established to continue an evalu-
ation of commercial clouds as a resource for ATLAS.
Whilst basic cost estimates of commercial cloud resources
have been done before [7,10], one of the primary goals of
the AGP is to develop a Total Cost of Ownership (TCO)
model for commercial clouds, which is made possible by the
highly granular pricing information available from the cloud
provider. By comparison, other types of computing resources
employed by ATLAS, namely the grid, HPCs, clouds, and
volunteer computing, incur different effective costs to the
experiment, which are often difficult to evaluate and com-
pare since funding methodologies vary by resource type, by
country, and even between different funding agencies within
the same country. Additionally, there are local arrangements
at the sites or savings, for example where some sites, typi-
cally at universities, do not directly pay electricity or WAN
access costs. It is therefore important to note that this TCO
evaluation is not attempting to directly compare the cost of
running ATLAS jobs in the Google Cloud with the cost of
running an ATLAS grid site. A TCO comparison of data
centre colocation and commercial cloud solutions has been
previously performed in the context of expanding computing
resources at CERN [16], using input from the Helix Nebula
Science Cloud project [17].
The structure of this paper is as follows.
Commercial Cloud Cost Modelling (Sect. 2) delves into the
critical elements of commercial cloud cost models relevant
to this study, examining the subscription agreement model
employed by ATLAS and outlining the methodology and
scope of the TCO analysis. ATLAS Google Site Integration
(Sect. 3) briefly outlines the technical aspects of integrating
the Google site and Running the ATLAS Google Site (Sect.
4) offers a detailed account of the various tests conducted
at different phases and the resulting conclusions. Cloud and
Network Costs (Sect. 5) more closely examines the role of
network traffic in overall costs and underscores the rationale
for considering dedicated links. Feedback from Grid Site
Administrators (Sect. 6) presents feedback received from
a number of system administrators regarding the adoption
of commercial clouds. Finally, Further Investigations and
Future Work (Sect. 7) provides an outline of potential future
working directions and a summary is presented in Sum-
mary (Sect. 8).
2 Commercial Cloud Cost Modelling
In contrast to other computing resources employed by
ATLAS, including those deployed via the WLCG grid, com-
mercial cloud resources have well-defined pricing for com-
puting time and services used, for example at Amazon Web
Services (AWS) [18] or on the Google Cloud Platform
(GCP) [19]. While each cloud may differ in their specific
pricing structure, the cost of the individual services used by
the client is usually published in an itemised and highly gran-
ular way, making it easier to develop future cost models for
commercial clouds. In this way, a client can choose from a
menu of services, which are charged or billed directly.
While the list price-based cost model is suitable for ad hoc
or specialised short-term resource needs, it is likely to prove
more costly for consistent, long-term usage of resources
compared to grid-based, general-purpose offline computing.
However, most commercial cloud providers offer discounts
and credits for large-scale users, which has been previously
explored by ATLAS with varying degrees of success. Such
discounts and credits can significantly reduce the cost com-
pared to the list-price. Funding agencies can often negotiate
even better deals. The list price-based cost model therefore
provides a maximum ceiling for a TCO, but rarely reflects
the actual price paid for large-scale cloud services.
The primary obstacle to using commercial clouds is usu-
ally egress costs, which are network costs incurred when
data is exported from the cloud resource, and may be pro-
hibitively high for the distributed computing workflows used
by ATLAS. These workflows often exploit the high intercon-
nectivity of the grid sites and incorporate many data transfers,
as data hosted by one site is transferred to be used elsewhere.
This high network interconnectivity may incur significant
expense, which goes on top of the site budgets, as discussed
in Dedicated Networks (Sect. 5.1). Egress costs associated
with cloud resources were a concern expressed by several
Tier-1 site administrators when interviewed for this report
(see Feedback from Grid Site Administrators (Sect. 6)).
Whilst discounts, credits and subscription plans may increase
the complexity in determining future costs, they also pro-
vide mechanisms to reduce the total cost of using commer-
cial clouds for ATLAS. It should however still be noted that
even with deep discounts and credits, continuous monitoring
of expenditure must be done, and as automated as possible,
especially for large-scale use.
2.1 Subscription Model Employed by the ATLAS Google
Project
For this latest phase of the AGP, a Google Cloud Service
Agreement for Public Sector contract was negotiated with
Google to explore more solid ideas about employing Google
as an ATLAS production site. This contract, which uses the
latest pricing model, was negotiated via US national labo-
ratories and is based on an initial assumption of an average
of 7000 compute cores together with up to 7 PB of storage,
and an estimated egress of no more than 0.7 PB per month.
As is often the case with commercial cloud resources, there
is no charge for data ingress, that is for uploading ATLAS
data to the Google Cloud Platform. The contract ran for 15
months, from July 2022 to October 2023, at a flat rate cost of
$56,630.54 per month, which resulted in a subscription price
of around $1900 per day.
The subscription model does not put a limit on usage once
active, and ATLAS can use the available resources at any time
at any scale. For example, for 1 month the disk usage could
be 10 PB, and the next month 1 PB, or for 1 month ATLAS
may use 3000 cores, and the next month use 20,000 cores.
This resource elasticity may be very useful for short-duration
data-processing campaigns and is explored in dedicated tests.
The TCO for the subscription model is the total price of
the contract for 15 months, $849,458, although there is a
clear caveat to be considered, namely that the negotiation
for any subsequent contract will likely examine the actual
average usage during the previous contract. If a significantly
higher usage is observed than the initial estimate, it may be
the case that the monthly price may be higher in any subse-
quent contract. A key part of the TCO evaluation is there-
fore to use the detailed usage breakdown provided by the
GCP accounting, without applying the subscription model,
to understand which site configurations produce the most sig-
nificant increase to the total cost according to the list-price,
and which particular services drive this increase.
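As an illustration of the subscription arithmetic quoted above, the following minimal Python sketch reproduces the headline figures using only the contract values stated in the text; it is not derived from the billing data itself.

    # Minimal sketch of the subscription arithmetic quoted above. The only
    # inputs are the contract figures stated in the text.
    MONTHLY_FLAT_RATE_USD = 56_630.54   # flat-rate subscription price per month
    CONTRACT_MONTHS = 15                # July 2022 to October 2023

    total_contract_usd = MONTHLY_FLAT_RATE_USD * CONTRACT_MONTHS
    approx_daily_usd = MONTHLY_FLAT_RATE_USD * 12 / 365

    print(f"Total contract price (subscription TCO): ${total_contract_usd:,.0f}")
    print(f"Approximate subscription price per day:  ${approx_daily_usd:,.0f}")
    # -> about $849,458 in total and roughly $1,900 per day, as quoted above.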
2.2 Total Cost of Ownership Methodology
As previously mentioned, this TCO evaluation is not trying to
directly compare the cost of running an ATLAS grid site with
the costs associated with running ATLAS jobs in the Google
Cloud. This is primarily for two reasons, as described above.
Firstly, a significant cost variation exists already among dif-
ferent resources, and among different grid sites, which is at
least in part attributable to the variation in service costs, as
well as regional variation in labour costs for site administra-
tors. Secondly, relying solely on absolute numbers for Google
Cloud costs may not provide meaningful insights, due to the
substantial influence of individual agreements and contracts,
which include volume discounts that vary case by case and
over time.
The focus of this TCO is therefore rather on gaining a com-
prehensive understanding of how to make the best use of such
a resource, including how the configuration may differ from
a standard grid site. To accomplish this, the relative contri-
butions of various components within the cloud service to
the TCO are analysed, including compute, storage, and net-
work, across different operating models. By identifying the
dominant cost drivers and exploring effective cost control
mechanisms, it is possible to optimise resource allocation
and management to maximise cost efficiency.
In addition to hardware considerations, the invaluable contri-
bution of dedicated personnel at the grid sites should not be
overlooked. These individuals not only fulfil the role of sys-
tem administrators but also play a vital part in maintaining
the middleware and the distributed computing infrastructure
of the experiment, and in some cases contribute to federated
support services, including user-support, and areas such as
R&D, outreach and education. Their expertise and involve-
ment are crucial to sustain a high efficiency and effective-
ness in the operation of the ATLAS computing infrastructure,
more so if sites opt for using cloud computing resources at
scale.
3 ATLAS Google Site Integration
The ATLAS data are distributed worldwide across data cen-
tres or sites, organised into Tiers with varying capacities and
responsibilities under the umbrella of the WLCG [3,4]. The
single Tier-0 centre is CERN, and there are ten Tier-1 sites,
connected via dedicated National Research and Education
Networks (NRENs) typically with between 10 and 100 Gbps,
using the LHCOPN and LHCONE overlays [20]. The Tier-
1 sites provide both disk and tape storage and function as
the perpetual archive of the collision data. Around 50 Tier-2
sites, typically hosted by national universities and laborato-
ries, provide disk storage that is used for data processing and
user analysis.
When setting up an ATLAS site using GCP resources the
objective is to fully support all ATLAS production work-
flows. This includes not only processing the RAW collision
data into reconstructed data (Analysis Object Data, AOD),
but also running all components of the multi-step Monte
Carlo (MC) production workflow, such as Event Generation
(EVNT), Full or Fast Simulation which produces simulated
detector interaction data (HITS), simulated detector output
data (Raw Data Object, RDO) and reconstructed simulated
data (AOD), as well as creating data and MC derived for-
mats (DAOD) for input to analysis (a process referred to as
“Group Production”). Additional formats such as Derived
Event Summary Data (DESD), which are tailored to the spe-
cific needs of various subdetector and object reconstruction
and identification performance groups, may also be produced
in data-processing campaigns. The site is also expected to
handle user workloads, which can encompass a diverse range
of demands. Further details on ATLAS production workflows
and data formats can be found elsewhere [21].
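As a compact orientation aid, the chain of formats produced by the multi-step MC production workflow described above can be written down explicitly; the per-format comments in this illustrative sketch paraphrase the text and are not official definitions.

    # Illustrative summary of the multi-step MC production workflow described
    # above, as the ordered list of data formats it produces.
    MC_FORMAT_CHAIN = ["EVNT", "HITS", "RDO", "AOD", "DAOD"]
    # EVNT: Event Generation output
    # HITS: simulated detector interaction data (Full or Fast Simulation)
    # RDO:  simulated detector output data (Raw Data Object)
    # AOD:  reconstructed (simulated) data
    # DAOD: derived format used as input to analysis ("Group Production")

    print(" -> ".join(MC_FORMAT_CHAIN))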
In the ATLAS grid site setup, production tasks running
ATLAS jobs at various locations consolidate their outputs
at a designated site known as the nucleus. The nucleus is
chosen during task definition and can be either a Tier-1 site
or a substantial and reliable Tier-2 site. Considering its size
and expected performance, this responsibility could also be
anticipated from the ATLAS Google site and is also exam-
ined here.
The integration of compute at cloud-based sites with ATLAS
distributed computing relies on Kubernetes [22], where
the resource-facing component of PanDA, Harvester [23],
utilises the native job controller of the Kubernetes clusters
for submitting batch jobs to the PanDA queue associated with
the site [24,25]. ATLAS software is provided in the same way
as at grid sites via CVMFS [26].
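To make the compute integration concrete, the sketch below shows what a native Kubernetes batch-job submission looks like through the Kubernetes Python client, in the spirit of the Harvester plugin described above; the namespace, image, command and resource requests are placeholders and do not reflect the actual Harvester configuration.

    # Minimal sketch of a native Kubernetes batch-job submission, analogous
    # to what the Harvester Kubernetes plugin does when creating pilot jobs.
    # Namespace, image, command and resource requests are placeholders.
    from kubernetes import client, config

    def submit_pilot_job(name: str, namespace: str = "atlas") -> None:
        config.load_kube_config()  # or config.load_incluster_config() in-cluster
        container = client.V1Container(
            name=name,
            image="example.org/atlas-pilot:latest",       # placeholder pilot image
            command=["/usr/local/bin/pilot-wrapper.sh"],  # placeholder command
            resources=client.V1ResourceRequirements(
                requests={"cpu": "8", "memory": "16Gi"}   # e.g. an 8-core job slot
            ),
        )
        job = client.V1Job(
            api_version="batch/v1",
            kind="Job",
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(restart_policy="Never",
                                          containers=[container])
                ),
                backoff_limit=0,  # a preempted spot node simply ends the job
            ),
        )
        client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)

    # submit_pilot_job("atlas-pilot-00001")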
The access to Google storage was configured as a standard
Rucio Storage Element (RSE), as used by other WLCG stor-
age systems abstracted via standard HTTP/WebDAV using
authorisation tokens based on the S3v4 format [27]. At the
lowest layer in the stack, the Davix [28] library implements
HTTP/WebDAV access to all storage systems used in the
WLCG, and already supports chunked transport required by
object stores, which are used by ATLAS for cloud-based
sites [29]. Further significant development was then required
to make access to Google storage work, as described in the
following.
The Rucio server was extended to allow only specific
accounts to access cloud storage and to generate cloud stor-
age tokens ad hoc when listing replicas, a functionality that
is needed both by interactive command line interface users as
well as production and analysis jobs. It was also necessary to
extend the functionality of the Rucio clients to allow seamless
upload/download for object stores, as object stores prohibit
several useful functions from the HTTP/WebDAV standard
that Rucio uses to ensure safe uploads such as checksum ver-
ification. The CERN File Transfer Service (FTS) [30]was
also extended to dynamically generate authorisation tokens
based on the S3v4 format when enacting a transfer.
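For illustration, a short-lived S3v4 signed URL of the kind described above can be produced with the boto3 library against the S3-compatible endpoint of Google Cloud Storage; the endpoint, bucket, object key and HMAC credentials below are placeholders, and the actual Rucio and FTS implementations generate these tokens internally.

    # Illustrative generation of a short-lived S3v4 signed URL, the type of
    # ad hoc authorisation token described above. All identifiers and
    # credentials are placeholders.
    import boto3
    from botocore.config import Config

    s3 = boto3.client(
        "s3",
        endpoint_url="https://storage.googleapis.com",  # S3-compatible GCS endpoint
        aws_access_key_id="HMAC_KEY_ID",                # placeholder HMAC key pair
        aws_secret_access_key="HMAC_SECRET",
        config=Config(signature_version="s3v4"),
    )

    signed_url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "example-atlas-datadisk", "Key": "rucio/path/to/file.root"},
        ExpiresIn=3600,  # the URL is valid for one hour
    )
    print(signed_url)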
Making the Google storage available within the WLCG
infrastructure also required a creative authentication solution,
as Google is not part of the Interoperable Global Trust Feder-
ation (IGTF) [31]. To do this, a dedicated load-balancer was
set up on the Google Cloud side with a fake hostname in the
CERN DNS server such that a CERN-based X.509 host cer-
tificate could be issued and uploaded. This load-balancer was
configured based on path-based regular expressions, which
allowed two RSEs to be deployed: a DATADISK to store
production input and output data and a SCRATCHDISK to
temporarily store analysis job outputs. With this setup, the
Google storage could be globally integrated into the ATLAS
distributed computing infrastructure like any other WLCG
site, and could be used by any account with the appropriate
permissions.
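The effect of the path-based routing can be pictured with a small sketch in which requests are matched against path prefixes and dispatched to one of the two RSEs. The path patterns and RSE names below are hypothetical; the real mapping lives in the cloud load-balancer configuration rather than in application code.

    # Illustrative path-based routing between the two RSEs described above.
    # Path patterns and RSE names are hypothetical.
    import re

    RSE_ROUTES = [
        (re.compile(r"^/atlasdatadisk/"), "GOOGLE_DATADISK"),        # production data
        (re.compile(r"^/atlasscratchdisk/"), "GOOGLE_SCRATCHDISK"),  # analysis outputs
    ]

    def route(path: str) -> str:
        for pattern, rse in RSE_ROUTES:
            if pattern.match(path):
                return rse
        raise ValueError(f"no RSE route configured for path: {path}")

    # route("/atlasdatadisk/rucio/mc23_13p6TeV/AOD.root") -> "GOOGLE_DATADISK"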
Due to the proximity to CERN and the low carbon-intensive
energy usage in the region, the Google europe-west1 region
in Belgium was chosen to host the primary ATLAS Google
site, although during investigations into the network connec-
tivity a second site in the US Google us-east4 region in Vir-
ginia was also employed (see Cloud and Network Costs
(Sect. 5)). The number of running single-core slots (“job
slots”) at the ATLAS Google site can be manually config-
ured, and is typically set to either 5000 or 10,000, although
additional set-ups such as those used for the evaluation of
larger scale “bursts” were also deployed, as described in
Resource Bursting (Sect. 4.4). CPU cores are provided
as “spot instances” [32], so that allocated resources may be
preempted at any time depending on the current situation at the
cloud provider, but have the advantage of costing significantly less in
real terms. The storage limit at the ATLAS Google site is
initially configured to be between 2 and 5 PB.
Thanks to the development of support for cloud-native inter-
faces, the deployment of an ATLAS site in the Google Cloud
capable of scaling to tens of thousands of CPUs and several
petabytes of disk could be accomplished within a matter of
weeks. Moreover, the operation and maintenance of the site
has required only a fraction of a full-time equivalent (FTE)
staff with expertise in cloud administration. This achieve-
ment paves the way for exploring cost–benefit scenarios in
which the agility and scalability of cloud computing can be
harnessed to accelerate the ATLAS science programme.
4 Running the ATLAS Google Site
4.1 Initial Phase
The ATLAS Google site was initially configured with a
PanDA queue able to run up to 5000 CPU slots, together with
a single RSE where data files could be stored. The PanDA
queue could be configured to accept certain types of jobs and
the initial goal was to try to increase the number of different
workloads and eventually test all the job workflows at the
site. Figure 1 displays various metrics of the ATLAS Google
site during the first 6 months of running.
The number of running jobs is shown in Fig. 1a. Aside
from alternating the number of CPU slots between five
and ten thousand, the configuration of the site was essen-
tially unchanged during this period. The number of different
types of job running in the PanDA queue was progressively
increased by changing the brokerage decisions to adjust the
job mix of the PanDA queue. The thin spikes that can be
seen in the number of running jobs are due to short-term site
configuration changes.
In the 3 months from August 24th until November 24th 2022,
the overall job mix that was run corresponded to approxi-
mately 30% MC Event Generation, 30% MC Full Simula-
tion, 30% MC Reconstruction and 10% Group Production,
which is a typical job mix seen on a standard grid site. Analy-
sis workloads were not tested in this period as many changes
needed to be made to the ATLAS middleware.
As jobs continued to run at the site, the accumulated data
from the various production steps steadily increased at the
Google RSE at a rate of approximately 50 TB per day, until
it reached 6 PB on November 24th as can be seen in Fig. 1b.
At the same time, the availability of these data generated an
increasing number of accesses from other ATLAS sites. The
egress network traffic from the ATLAS Google site due to
production and analysis jobs elsewhere using data at Google
as input ramped up from an average of about 20 TB per day
in August to about 130 TB per day in October and November.
In November, there were periods with egress network traffic
over 200 TB per day for several days in a row, as can be seen in
Fig. 1c. It is worth noting that this traffic is above the average
seen at ATLAS sites, relative to the stored data volume. The
ATLAS Google site was generating egress traffic at the level
of 4 PB per month in November whilst hosting 5–6 PB data,
which is significantly more than the initial estimate of 0.7 PB.
By comparison, the MWT2 grid site in the US also generates
an average of 4 PB per month of egress traffic, whilst hosting
more than 15 PB of data.
Fig. 1d shows the breakdown of the Google list-price cost per
service, where the various contributions from compute, stor-
age and egress are presented. It can clearly be seen that the
costs of egress traffic and storage can quickly become dom-
inant in cloud resources. To contain these costs, on Novem-
Fig. 1 Monitoring plots for the first 6 months of running at the ATLAS
Google site, from July to December 2022. (a) The number of running
jobs at the Google site. (b) The accumulated data at the Google RSE
split into different formats, the main ones being AOD (green), RDO
(blue), HITS (purple) and DAOD (yellow). (c) The daily egress traffic
out of the ATLAS Google site, split into the various destination sites.
(d) The monthly list-price cost per service from the Google billing console,
where the six main components are shown in the legend
Fig. 2 Data stored (blue) and egressed for job inputs (red) at the ATLAS Google site per month from July 2022 to September 2023. The ratio is
also indicated by the black line
ber 25th the configuration of the ATLAS Google site RSE
was changed in Rucio by enabling greedy deletion, so that
data were deleted as soon as the corresponding replication
rules had expired [33]. This had the immediate effect that the
5.5 PB of accumulated cached data were quickly removed,
and from then on new temporary data would stay on the RSE
for only 1 or 2 weeks. This drastically reduced the egress
traffic, since before the change there was significant egress
as the cached data were being transferred multiple times to
other sites as job inputs.
After this change, the remaining egress was mainly due to job
output being sent to the task nucleus. Another change was
then applied to the Google RSE on December 8th that set the
distance [34] of the RSE to any other ATLAS site to a very
large value. This had the effect of completely eliminating the
remaining egress traffic due to job inputs then preferentially
being read from other sites. The number of CPU slots at
the ATLAS Google site was also reduced back to 5000. The
effect of these changes can be seen in the distributions in
Fig. 1.
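For orientation, the two changes described above correspond to a handful of calls in the Rucio client API. The sketch below is only an illustration under stated assumptions: the RSE name is invented, and the exact attribute key and distance parameter may differ between Rucio versions and from the configuration actually used in the project.

    # Illustrative sketch of the two Rucio configuration changes described
    # above. The RSE name, attribute key and distance parameter are
    # assumptions, not the project's exact settings.
    from rucio.client import Client

    rucio = Client()
    rse = "GOOGLE_EU"  # hypothetical RSE name

    # 1) Enable greedy deletion, so that replicas are deleted as soon as
    #    their replication rules expire (attribute key is an assumption).
    rucio.add_rse_attribute(rse=rse, key="greedyDeletion", value=True)

    # 2) Set a very large distance from this RSE to every other site, so
    #    that jobs elsewhere prefer replicas at other sites as input
    #    (the parameter name may differ between Rucio versions).
    for other in (r["rse"] for r in rucio.list_rses()):
        if other != rse:
            rucio.update_distance(rse, other, {"distance": 9999})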
With this new configuration, the stored data at the Google
RSE stayed around 300 TB and egress traffic fell below
5–10 TB per day. The solitary spike in transfer volume in
December visible in Fig. 1c is from the egress of the outputs
of a small data reprocessing running at the ATLAS Google
site, visible as the yellow contribution in Fig. 1a.
4.2 First Observations on Network Considerations
Cloud resources have well-defined price structures for each
resource type, not only compute and storage, but also net-
work traffic. The first two are the usual capacity metrics
that are closely accounted for in a distributed infrastructure
like the WLCG. The network capacity is certainly also con-
sidered within WLCG, but the planning and provisioning
cycle is usually done on a longer time scale. This is in part
because the administration domain for networks is typically
broader and often spans national or continental institutions
beyond the sites themselves. This has probably had an effect
on the experiments' computing models, whose workflows rely
to some extent on plentiful any-to-any connectivity.
The long-term large-scale test of the ATLAS Google site
has provided useful insights into the management of the
associated network traffic. Given that the site was newly
deployed, it was possible to monitor how the consumption
of different types of resources evolved. CPU usage was con-
stant as expected, as a fixed configuration parameter, whereas
storage usage increased at an essentially constant rate. The
steady increase in egress network traffic was correlated with
increased use of storage, albeit with a much larger variabil-
ity. An interesting outcome of this first period of the ATLAS
Google site was that it was possible to reduce the egress net-
work traffic by adjusting a few parameters in Rucio, allowing
the cost of cloud resources to be very effectively controlled.
A key question remains about how useful a grid site is with
limited egress network traffic. Whilst this question is beyond
the scope of this report, it is an important topic that deserves
dedicated studies in the future in the context of computing
model evolution. The approach taken here is to quantify the
tests that were done with the ATLAS Google site. As pre-
viously described, after the first 3–4 months of continuous
unattended operations, the site had accumulated 6 PB of data
on disk. The values for average stored data and total egress
of data for job inputs are depicted in Fig. 2. Remarkably,
between September and December 2022, between 75% and
100% of all the data at the site was egressed each month
for production or analysis job inputs. Comparing this with
other large ATLAS sites, typically having storage sizes rang-
ing from 4 to 30 PB, this metric falls to between 15% and
20%. Consequently, the significant amount of new data at the
Fig. 3 (a) The variation of workflows running at the ATLAS Google site
from January to April 2023, featuring several periods of running with a
single workflow. The contribution from user analysis jobs can be seen
from March. (b) The daily list-price cost per service from the Google
billing console for the period from January to April. The dominant
services are compute CPU/RAM (blue/red), local storage on the worker
nodes (orange), cloud storage (purple) and network egress (green and
turquoise)
ATLAS Google site generated data movement dynamics that
significantly deviated from the average. This is relevant when
evaluating a cloud resource, since abnormally high egress
traffic levels will have an impact on costs.
4.3 Understanding the Cost Impact of Different ATLAS
Workflows
Between January and April 2023 the ATLAS Google site
ran at an approximately constant CPU capacity of around
5000 job slots. The types of jobs allowed to run were con-
trolled through the fairshare policy parameter associated with
the PanDA queue. By adjusting this parameter, four periods
of running only one activity were carried out, as shown in
Fig. 3a: a 14-day period with only MC Simulation jobs in
January, a 5-day period with only MC Reconstruction jobs
in early February, a 9-day period with only Group Production
jobs in mid-February and a 6-day period of only MC Event
Generation jobs at the beginning of April.
The Google billing console provides daily list-price cost
information, shown in Fig. 3b, which can be used to infer
general trends. It was observed that 80–90% of the cost is
consistently dominated by three services: compute, storage
and network egress. The remainder comes from a combi-
nation of infrastructure overhead and costs associated with
orthogonal ATLAS R&D activities at Google. The compute
cost (which has three components associated with it: CPU,
RAM and local disk) remains essentially constant through-
out this period, as expected from the fact that the number of
slots in the ATLAS Google site was set to 5000 job slots and
not modified. This cost is consistent with the advertised pric-
ing for the spot n2-standard-8 instances of around $0.01 per
CPU-h. This situation can be compared to the first months
of the project, where egress costs dominated, as shown in
Fig. 1d.
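A rough consistency check of the quoted spot pricing can be made directly from the numbers above; the sketch below is order-of-magnitude arithmetic only and ignores the RAM and local-disk components of the compute cost.

    # Order-of-magnitude check of the compute cost: 5000 single-core job
    # slots at roughly $0.01 per CPU-hour for spot n2-standard-8 instances.
    JOB_SLOTS = 5000
    SPOT_PRICE_PER_CPU_HOUR = 0.01

    daily_cpu_cost = JOB_SLOTS * 24 * SPOT_PRICE_PER_CPU_HOUR
    print(f"Approximate CPU-only cost: ${daily_cpu_cost:,.0f} per day, "
          f"${daily_cpu_cost * 30:,.0f} per month")
    # -> about $1,200 per day of list-price CPU at this scale, before the
    #    RAM and local-disk components are added.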
Compute is the dominant cost for these four workflows. Stor-
age and network costs for a given day are partially but not
fully correlated with the activity that is running at that time.
If data are not purged from the storage, the storage costs will
reflect the accumulation of any past activity. On the other
hand, if there are old data stored at the site that are accessed
from outside for any reason, this will generate egress costs
that are not correlated with the activity running at that pre-
cise moment. Still, the single workflow testing periods were
of relatively short duration and the Rucio storage configu-
ration meant that no background egress was present, so any
correlation observed should be meaningful.
A final, significant single workflow test was done in July
2023, by running a fraction of the reprocessing of the 2022
data on 10,000 job slots at the ATLAS Google site. About
15% of the proton–proton collision data recorded in 2022
for physics analysis, comprising around 600M events, were
processed into multiple formats for analysis and further, sec-
ondary processing between July 11th and 18th. The data
carousel mechanism [35] was employed as done for the grid,
whereby input files, in this case RAW data, were recalled
from ATLAS tape resources and replicated to the Google
site, without observing any adverse effects.
Fig. 4a shows the data reprocessing jobs during the period
July 11th to July 18th, as well as a small number of associ-
ated output data merging jobs. Further data reprocessing jobs
can be seen in Fig. 4a after July 18th, which are part of the
wider campaign to reprocess the remaining 85% of the 2022
data on all ATLAS distributed computing sites, including the
ATLAS Google site. Figure 4b shows the daily data volume
transferred out of the ATLAS Google site to the various grid
sites, where it can be seen that during the data reprocessing
a total of about 100 TB of data were exported daily.
Fig. 4c also shows the daily data volume transferred, but now
broken down into the different activities. Production Output
(cyan) accounts for approximately half of the egress, which
is equivalent to the export to CERN shown as the purple
component in Fig. 4b. Most of the remaining egress during
this period is attributable to Data Consolidation, which is a
rebalancing procedure performed on the ATLAS distributed
computing system as a whole. It may be interpreted as other
data moved out of the Google storage to make room for the
output of the data reprocessing. The egress from the data
reprocessing jobs after July 18th is also visible in Fig. 4b and
c, albeit at a lower level.
The periods with different types of activities show differ-
ent trends on the relative cost of the three main services, as
shown in Table 1. The average values for the full duration
of the project are also displayed. For the first four columns,
a few trends are visible in the numbers in the table. In the
Group Production period, the egress network activity and its
associated costs show a clear increase, averaging 21% over
the 9-day testing period. In the 6-day MC Event Generation
period, the storage is seen to contribute a significant fraction
of the total cost at 30%; however this is from the replication
of almost 1 PB of DAOD data sets to the Google site done
during the same week. The most striking observation is that
the period of Data Reprocessing activity shows a very differ-
ent pattern compared to the others, with the network egress
cost clearly dominating and averaging 63% of the total cost.
The relative costs of different workflows are further exam-
ined in Cloud and Network Costs (Sect. 5) when discussing
networks.
User analysis workflows have also been running on the
ATLAS Google site since March 2023, as can be seen in
Fig. 3a. Whilst it would have been desirable to run only
user analysis at the ATLAS Google site for some period, this
proved difficult, primarily due to the unpredictable nature of
such workflows, compared to the rather standard production
workflows employed in the single job type periods described
in this section. Despite replicating several popular analysis
datasets to the site, it was difficult to get enough user jobs in
the queue, and ultimately it was not possible to run only anal-
ysis workflows on the site, although this was almost achieved
in the second half of March.
4.4 Resource Bursting
Cloud computing is intrinsically highly elastic in its nature,
and offers the opportunity to acquire a significant number of
additional resources, potentially at short notice. In the context
of Active Learning [36] this is particularly advantageous,
when it is essential to increase the speed of each iteration
of MC sample production. Previous studies [25] have shown
that it is possible to quickly ramp up many tens of thousands
of job slots at Google and process a small number of MC
events through all steps in the MC production chain to arrive
at the DAOD used for analysis.
Bursting a large amount of additional compute capacity
may also be useful if for example a particular MC sample
is urgently required, and this scenario was explored at the
ATLAS Google site in June 2023. In this case a 50 M event
standard top-quark pair production MC sample was chosen
to undergo Full Simulation as quickly as possible, by burst-
ing to 100,000 job slots. The task was configured as standard
2000 event 8-core jobs, which each take on average between
6–8 h, so that all 50 M events should be processed within 24 h.
The input EVNT data was replicated to the Google storage
and the site was drained of all other running jobs before start-
ing the test. Whilst this was not strictly required, it allowed
the burst of resources to be isolated, which was useful for
monitoring purposes.
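The expectation that the full sample completes within 24 hours follows from simple arithmetic on the task configuration; the sketch below spells it out, ignoring ramp-up, preemptions and failures.

    # Back-of-the-envelope estimate of the burst turnaround time from the
    # task configuration described above (ideal case, no ramp-up or losses).
    TOTAL_EVENTS = 50_000_000
    EVENTS_PER_JOB = 2_000
    CORES_PER_JOB = 8
    JOB_SLOTS = 100_000        # single-core slots during the burst
    WALLTIME_HOURS = 7         # each job takes roughly 6-8 hours

    n_jobs = TOTAL_EVENTS // EVENTS_PER_JOB        # 25,000 jobs
    concurrent = JOB_SLOTS // CORES_PER_JOB        # 12,500 jobs in parallel
    waves = -(-n_jobs // concurrent)               # ceiling division -> 2 waves
    print(f"{n_jobs} jobs in {waves} waves of ~{WALLTIME_HOURS} h "
          f"-> roughly {waves * WALLTIME_HOURS} h, comfortably within 24 h")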
The results of the burst test are shown in Fig. 5. The ramp up
to 100,000 running job slots was achieved in 1–2h without
issue, as can be seen in Fig. 5a. Wall-clock consumption is
shown in Fig. 5b, which revealed a considerable amount of
Table 1 Relative cost of each of the main services during periods of
time when only one workflow was running at the ATLAS Google Site:
MC Full Simulation, MC Reconstruction, Group Production, MC Event
Generation and Data Reprocessing. The final column shows the average
values for the full duration of the project, from July 2022 to September
2023

                  MC Full       MC               Group         MC Event      Data           Full
                  Simulation    Reconstruction   Production    Generation    Reprocessing   Project
                  06/01–19/01   05/02–09/02      11/02–19/02   07/04–12/04   12/07–16/07    07/22–09/23
  Compute         73%           65%              52%           47%           21%            28%
  Storage         10%           10%              9%            32%           11%            20%
  Network egress  7%            11%              26%           3%            63%            46%
  Other           10%           14%              14%           19%           4%             6%
lost wall-time in the ramp up phase. This was due to nodes
accepting jobs while CVMFS was still being initialised, and
hence the burst test was repeated with a slower ramp up pro-
file. In both cases, the overall lost wall-time was 11–13%,
somewhat more than observed on the grid, and coming from
the ramp up phase and the low and constant level of preemp-
tions throughout both tests. Nevertheless, in both cases all
50M events were processed within 24 h as expected.
The same MC Full Simulation sample has been processed
several times on the various resources currently employed
by ATLAS, without draining queues or using a dedicated
site or queue. Each of these tasks took between 8 and 10
days to process, even when a significant fraction of the work,
between 30% and 75%, was executed on a few powerful sites
such as the ATLAS High Level Trigger Farm [37,38] when
used in Sim@P1 [39,40] configuration or the Vega [41] and
NERSC-Perlmutter [42] HPCs. In this sense, the burst test
can be considered a success, whilst at the same time having
the advantage of exposing the cost of a well-defined data-
processing activity. The list-price cost of each burst run at
the ATLAS Google site can be seen in Fig. 5c, where the
sum of the compute-based components is around $23,000
each time.
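The quoted figure of around $23,000 per burst can be cross-checked at the order-of-magnitude level from the numbers given earlier; the sketch below covers the CPU component only, with RAM, local disk and the 11–13% of lost wall-time accounting for the remainder.

    # Order-of-magnitude cross-check of the ~$23,000 compute cost per burst,
    # using only the CPU component at the quoted spot price.
    N_JOBS = 25_000                  # 50M events in 2000-event jobs
    CORES_PER_JOB = 8
    WALLTIME_HOURS = 7               # average of the quoted 6-8 h per job
    SPOT_PRICE_PER_CORE_HOUR = 0.01

    core_hours = N_JOBS * CORES_PER_JOB * WALLTIME_HOURS
    print(f"{core_hours:,.0f} core-hours -> "
          f"~${core_hours * SPOT_PRICE_PER_CORE_HOUR:,.0f} for CPU alone")
    # -> about 1.4M core-hours, or ~$14,000 for CPU; RAM, local disk and
    #    lost wall-time bring the total towards the observed ~$23,000.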
4.5 Scaling Up the Site Again
For the final few weeks of the current project, the size of
the ATLAS Google site was scaled up again to between the
size of an ATLAS Tier-1 and Tier-2 grid site, with around
5 PB of storage and 10,000 running jobs slots, utilised by all
job types. This was done in July 2023, ahead of launching
the data reprocessing single workflow period described in
Understanding the Cost Impact of Different ATLAS Work-
flows (Sect. 4.3), and this configuration remained until the
end of the project in September. The 15 largest Tier-2 grid
sites and about half of the Tier-1 sites typically each pro-
vide at least this number of job slots to ATLAS computing.
Almost all of the Tier-1 sites and the ten largest Tier-2 sites
have at least 5 PB allocated to their DATADISKs, so at the
end of July the size of the ATLAS Google Site DATADISK
was increased from 2.5 PB to 5 PB. At this time, the site was
also reconfigured as a nucleus, so that not only unique data
could be stored there but the site could also act as the output
destination for a task running jobs at all other ATLAS grid
sites. Furthermore, the large distances to other RSEs set in
an earlier phase of the project were somewhat reduced, in
particular to the German Tier-1 and associated Tier-2 sites,
which are geographically close to the ATLAS Google site
in Belgium. Figure 6 shows the changes to the data stored
and the transfers out during the month following these site
reconfigurations.
Fig. 6a shows the data stored at the ATLAS Google site,
grouped into three different replica types: Persistent, which
has a Rucio rule with no lifetime (typically data placed at
the site); Temporary, which has a Rucio rule with a lifetime
(typically data replicated to the site via PanDA for produc-
tion jobs, with a lifetime of 2 weeks) and Cached, which is
data with no current Rucio rule and therefore may be deleted
at any time. The main consequence of the site changes was,
as expected, an increase in the Cached data as a result of
the increased space available for output data from produc-
tion tasks running at the site. The volume of Persistent and
Temporary data remains roughly constant.
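The three replica types reduce to a simple rule on the presence and lifetime of a Rucio rule; the sketch below encodes that classification with a hypothetical function signature, purely for illustration.

    # Illustrative classification of replicas into the three types described
    # above, based only on the presence and lifetime of a Rucio rule.
    from typing import Optional

    def replica_type(has_rule: bool, rule_lifetime_seconds: Optional[int]) -> str:
        if not has_rule:
            return "Cached"      # no current rule: may be deleted at any time
        if rule_lifetime_seconds is None:
            return "Persistent"  # rule with no lifetime: pinned data
        return "Temporary"       # rule with a lifetime, e.g. two weeks

    # replica_type(True, 14 * 24 * 3600) -> "Temporary"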
The variety of data types stored at the ATLAS Google site
during this period can be seen in Fig. 6b, where a marked
increase in RDO files is visible. These data are used as input to
MC Reconstruction tasks in combination with HITS, which
is the output of MC Simulation. It is likely that the nucleus
nature of the site, in combination with the reduced distances,
means that more HITS datasets remain on the RSE and are
available as favourable input for MC Reconstruction tasks,
both at the ATLAS Google site and elsewhere. A steady
increase in HITS and AOD (the output of MC Reconstruc-
tion) is also observed as well as the presence of some RAW
data files used as input to data reprocessing tasks.
Fig. 6c shows the daily transfers out of the ATLAS Google
site, which show a steady increase after the end of July. Most
of the egress is due to files replicated from the site to use as
production input elsewhere, for example AODs to be used
as input for the production of analysis-level data (DAOD)
Fig. 4 Distributions covering the data reprocessing campaign performed
on the Google ATLAS site. (a) Number of running jobs, where
the single job type period from July 11th to July 18th shows the data
jobs in yellow, together with a small number of associated merge jobs
in blue. (b) The data transfers out of the ATLAS Google site for the same
period to different grid sites, where the main contribution in dark purple
is the replication of the reprocessing output data to CERN. This is also
visible for the data reprocessing jobs after July 18th. (c) The different
types of transfers out of the ATLAS Google site for the same period
at another ATLAS grid site. The level of egress is however
significantly less than in 2022, which at its peak had a rate
of more than 1 PB per week (see Fig. 1c). This reduction is
likely due to two main reasons. Firstly, compared to 2022,
the short distances set in Rucio between RSEs were limited
to only the sites in the German cloud. Secondly, there was a
higher fraction of new data at the Google RSE when it was
first activated. At the beginning of the project, the RSE was
empty and all data were newly written there, whereas in July
2023 there was already around 2 PB stored there before the
site was scaled up again.
Decommissioning of the ATLAS Google site took place in
September 2023, so that all resources employed during the
project were effectively switched off from the holistic per-
Fig. 5 Distributions covering the resource burst tests done at the
ATLAS Google site in June 2023. (a) The running jobs of the two bursts
of MC Full Simulation. (b) The wall-clock consumption of the jobs running
on the Google site. (c) The daily list-price cost per service from the
Google billing console, where the compute contributions are seen to
dominate on the burst days
spective of ATLAS distributed computing. The PanDA queue
was disabled mid-September and all unique data moved to
one of the ATLAS Tier-1 sites. Any remaining user data from
R&D projects were removed and the site was fully decom-
missioned by September 21st. No significant difficulties were
encountered during this process, which was similar to the
usual decommissioning of a typical ATLAS grid site.
5 Cloud and Network Costs
Deploying grid resources in commercial clouds creates a
demand for networking services that can have an impact on
performance and potentially also on cost. Egress traffic is a
particularly expensive resource in the cloud, due in part to the
commercial strategy of providers to incentivise clients to continue
using their resources rather than migrating to other cloud providers.
At the time of writing, the list-price for storing data in the
Fig. 6 Data stored at the ATLAS Google site: (a) grouped into different replica types and (b) grouped into different ATLAS data formats. (c) The
different types of daily transfers out of the ATLAS Google site
Google europe-west1 region is $20 per TB per month [43],
while the price for egressing data is between $45 and $85 per
TB, depending on the volume [44].
As previously described, probably the most important feature
of the cloud resources is that the costs involved are heavily
dependent on which services are used. This dependence can
be seen in the cost breakdown of running the ATLAS Google
site, which varied significantly month to month, or day to day,
depending on the activity of the site. Figure 7a shows the
monthly list-price cost profile for the full 15-month duration
of the project. The cost of compute is basically stable, only
showing small variations at the times when the number of
running job slots at the sites was changed (for example when
the site was increased from five to ten thousand job slots
from August to November 2022, and the CPU burst test in
Fig. 7 Cost breakdown for the ATLAS Google site for the period July 2022 to September 2023. (a) The monthly list-price cost per service from the
Google billing console. (b) The percentage contribution to the monthly cost from each of the services grouped as indicated in the legend
June 2023). However, the cost of storage and network egress
varies considerably, depending on the dominant activity.
Fig. 7b shows the relative fraction of each service to the total
monthly cost. For the first months of the project, until Novem-
ber 2022, the egress cost increased as new data accumulated
at the site and jobs running at other grid sites accessed these
data. By November 2022, the costs associated with egress
reached 54% of the monthly total. Other patterns can be
observed in 2023, such as in April and May when the analysis
input data was replicated to the ATLAS Google site, corre-
spondingly increasing the fraction of the total cost spent on
storage. The increase in egress due to the data reprocessing
performed at Google is also visible in July.
5.1 Dedicated Networks
Egress network traffic is the resource that can have the largest
impact on the running costs over short timescales. It is an
expensive resource, and its use can increase very rapidly
if some data at the site become popular and are suddenly
accessed by thousands of jobs at other sites. Moreover, the
data going from the ATLAS Google site to other ATLAS grid
sites do so through the general purpose internet, as opposed
to using the LHCONE/LHCOPN private networks that link
most of the ATLAS sites. This has the effect of generat-
ing potentially very large traffic over the internet link into
the destination sites, which can have cost implications at the
destination, since some sites have higher costs or lower avail-
able bandwidth associated with their general purpose inter-
net links compared to the dedicated LHCONE links. The
utilisation of such links can lead to operational disruptions,
potentially impacting a site’s availability to its users. This is
often due to lower provisioning of general internet bandwidth
by the sites, primarily stemming from the associated higher
costs, resulting in rapid saturation. It is therefore important
to investigate mechanisms that could control and potentially
reduce the egress costs. One such mechanism is dedicated
network links with the cloud providers.
The price for sending traffic through a dedicated link [45]
has two components: a fixed one of the order of $2.4 per
hour for a 10 Gbps circuit for instance, plus a variable one
that depends on traffic, priced at $20 per TB. Additional
costs might also arise, depending on the specific service
provider and route of the intermediate connection. Accord-
ing to this, routing the traffic exiting the ATLAS Google
site through a dedicated link could potentially reduce the
egress costs to less than half if traffic above 3 PB per month
is generated. A recent study within the IceCube Collabora-
tion [46] came to a similar conclusion, where by employing
dedicated network links the egress costs for data-intensive
applications could be reduced by between 50% and 75%. It
is however worth emphasising that the scope of the study
undertaken by IceCube significantly differs from that of
ATLAS. IceCube utilised dedicated links connecting the
cloud to a specific site at UW-Madison, whereas ATLAS
is currently investigating the feasibility of deploying dedi-
cated links to establish connections between cloud resources
and the LHCONE overlay network, facilitating data trans-
fers to numerous sites worldwide. It is therefore important
to recognise that the complexity and potentially the cost
associated with these two approaches may differ consider-
ably.
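As an order-of-magnitude illustration of this potential saving, the sketch below compares the quoted list prices at the 3 PB per month volume mentioned above; real quotes depend on the provider, the route and any intermediate connection costs.

    # Illustrative break-even estimate for a dedicated link, using only the
    # list prices quoted above. Note that a single 10 Gbps circuit tops out
    # at roughly 3.2 PB per month when fully saturated.
    FIXED_USD_PER_HOUR = 2.4       # 10 Gbps circuit
    HOURS_PER_MONTH = 730
    DEDICATED_USD_PER_TB = 20.0    # variable component of the dedicated link
    STANDARD_USD_PER_TB = 45.0     # lower end of the $45-85 per TB egress price

    egress_tb = 3000               # 3 PB per month, the volume quoted above
    dedicated = FIXED_USD_PER_HOUR * HOURS_PER_MONTH + DEDICATED_USD_PER_TB * egress_tb
    standard = STANDARD_USD_PER_TB * egress_tb
    print(f"Dedicated link: ${dedicated:,.0f}/month vs standard egress: "
          f"${standard:,.0f}/month ({dedicated / standard:.0%})")
    # -> roughly $62k vs $135k per month, i.e. less than half, consistent
    #    with the estimate above.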
To explore this option, an engagement has been started with
ESnet [47] to provision a dedicated 10 Gbps link to the
Google us-east4 cloud through their ESnet Cloud Connect
service. The aim of this exercise is to test two things: first,
to measure and confirm a reduction of the egress costs for
large data transfers, and second, to try to route the egress
traffic from the ATLAS Google site into LHCONE to avoid
downstream costs associated to high-volume traffic through
the regular internet links at receiving sites.
The expectation is that dedicated network links will provide
a way to lower the networking costs, but at the same time they
will also add complexity to the deployment and operations
of the cloud resources. Moreover, besides the technical com-
plexity of provisioning the dedicated network links, there is
also an organisational complexity that arises from the fact
that the implementation of this setup will vary depending on
the combination of cloud provider (and even region inside
a provider) and NREN that provides the peering. In differ-
ent countries, NRENs will have different capabilities and
conditions for peering with cloud providers. Furthermore,
each cloud provider likely offers their own specific tools to
deploy and manage the dedicated links, each with very differ-
ent provisioning procedures. Another consideration is whether, if multiple experiments, for example ATLAS and CMS, were to provision resources from the same cloud provider, they could both use the same dedicated network link. There is
clearly a significant programme of work in this area, beyond
the time frame of the current AGP.
6 Feedback from Grid Site Administrators
As part of this study, feedback was collected from several
system administrators representing various ATLAS Tier-1
grid sites. In particular, their experiences with commercial
clouds were discussed, to gain some insights into the primary
advantages and drawbacks associated with these services.
6.1 Concerns Related to Cloud Computing and
Comparisons to this Project
It was a common view among site administrators that cloud
is more expensive than on-premises solutions. In particular,
concerns were raised about high egress costs, which can sig-
nificantly impact the overall expenses. Some of these comparisons may have been made against dedicated compute instances, which cost more than the spot instances used in this project.
Past experiences [7] with spot instances revealed eviction
rates of up to 15%, which administrators considered unac-
ceptably high. This issue becomes particularly critical since
many sites support multiple users beyond LHC activities,
and these users might be less tolerant to evictions than the
LHC experiments, affecting the overall service reliability.
During this project preemption was observed to be signifi-
cantly lower, where only around 20% of all failed jobs were
due to evictions. With an overall rate of 5% lost wall-clock
from failed jobs over the duration of the project, the result-
ing eviction rate of between 1–2% compares favourably to
the previous result described above. However, it is important
to note that eviction rates for spot instances can vary based
on several factors, including the time of the year, the cloud
provider, the geographical region, the volume, and the types
of resources utilised.
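For reference, the eviction loss quoted above follows directly from combining the two stated rates:

\[ \text{wall-clock lost to evictions} \;\approx\; 0.20 \times 5\% \;=\; 1\%, \]

consistent with the 1–2% range given.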
Worries were also expressed about unpredictable perfor-
mance variations over time in cloud environments. Adminis-
trators were concerned that cloud providers might change the
underlying hardware behind a specific instance type, leading
to potential performance fluctuations that could affect the
quality of services. The PanDA queue associated with the
ATLAS Google site was very stable, experiencing negligible
downtime throughout the duration of the project. While some
cloud providers do not specify the exact provided CPU mod-
els, within CPU families it is possible to define a preference
for the newer generations. The CPU information collected by the workload management system (the published clock frequency and cache size, with the exact model masked) was homogeneous throughout the duration of the project, indicating stable performance. The
adoption of the new, more flexible HEPScore [48] bench-
marking model should provide further information about this
topic in the near future.
Site administrators expressed a general concern about the
risks associated with vendor lock-in. They emphasised the
importance of maintaining flexibility and the ability to
migrate between different cloud providers to avoid being
tied to a single vendor and potentially facing challenges
with cost escalation, integration or data portability. The solu-
tions implemented by this project to interface Google Cloud
resources are essentially cloud-agnostic, not only avoiding
vendor lock-in, but also potentially enabling access to other
commercial cloud resources, as demonstrated at a lower scale
at AWS [18].
Data ownership and control were significant worries, in particular having critical data stored solely in the cloud without direct control over the physical infrastructure where the data is hosted. Ensuring digital sovereignty
is seen as crucial when handling data from unique scien-
tific experiments. The Google Cloud data access policy is
clearly and strictly defined [49], and user data is fully pro-
tected and accessible only to the customer cloud adminis-
trators, designated users, and contract managers. Ultimately,
ATLAS retains data (and algorithm) ownership when running
in the Google Cloud. In addition, an assurance on privacy is
made that customer data will not be used for any commercial
use or for training purposes. A further, related point of con-
cern is that as cloud resources are essentially leased, there
is no opportunity to ensure that the hardware is responsi-
bly and sustainably deployed to its maximum capability and
longevity.
According to the feedback gathered from one site adminis-
trator, the cost breakdown of operating a grid site generally
comprises approximately one-third for personnel, one-third
for operational expenses, and one-third for hardware invest-
ments. The technical personnel effort required to operate one
of the ATLAS Tier-1 centres is estimated to be around 10
FTEs. Some site administrators hold the view that even with
a substantial migration of resources to the cloud, the essen-
tial operational effort needed to run a site would not expe-
rience a significant reduction. This is because only a lim-
ited number of hardware-oriented technical positions may
no longer be required. However, some issues arise that make
this comparison inherently challenging. The experience of
the Google site primarily focuses on the operational effort
required to provide CPU and storage at scale for a single
experiment. In contrast, the roles and responsibilities of tech-
nical personnel at on-premises sites may be much broader in
scope.
Finally, site administrators emphasised that funding agen-
cies are making substantial investments in building new,
energy-efficient data centres, which does not indicate a trend
or incentive to increase the utilisation of cloud resources.
In this context, Google cloud resources operate with net-zero operational greenhouse gas emissions, neutralising any remainder by investing in carbon offsets. Google data centres have better than average Power Usage Effectiveness [50] and provide dynamic information about the car-
bon intensity of their regions, enabling users to steer
their load to minimise emissions. In particular, the site
utilised by this project, europe-west1, has one of the low-
est grid carbon intensities [51] among Google data cen-
tres.
6.2 Purchasing Procedures
Public institutions, including those operating grid sites, often
encounter the requirement to procure services through public
tendering processes. However, for large-scale purchasing of cloud services, the administrative demands involved can be substantial. In response to this challenge,
the OCRE [52] project was initiated in 2019 with the aim of
streamlining the procurement process for cloud services in
Europe.
For organisations contracting cloud services within Europe
today, utilising a framework like OCRE becomes a viable
option to do their purchasing. OCRE facilitates this purchas-
ing process through NRENs, which in turn maintain lists
of country-specific authorised resellers that offer cloud ser-
vices covered under the OCRE framework agreements. One
of the key benefits provided by OCRE is the provision-
ing of a standardised contract. While this contract serves
as a baseline, further negotiations at the country or site
level are possible. These negotiations could lead to larger
volume discounts, ultimately benefiting the organisations
involved.
Whilst OCRE is available for institutions within Europe,
it is not applicable in countries other than the 40 mem-
bers of the framework. The process for contracting cloud
services in the USA, for example via HEPCloud [53] or
CloudBank [54], or in Asia may be completely different
and may result in different pricing conditions compared to
those in Europe. It is crucial to recognise that the landscape
for cloud service procurement can vary between regions,
necessitating tailored approaches based on the specific reg-
ulatory and contractual requirements of each region. If
multiple sites were to buy into the same cloud provider,
some of these differences could be overcome, although
remaining administrative hurdles would need to be clari-
fied.
For international organisations like ATLAS seeking to imple-
ment a coherent strategy for utilising cloud services, the vary-
ing regulations and procurement processes across regions add
complexity to the management and planning efforts. Flexi-
bility and adaptability become essential for navigating the
diverse cloud landscapes and effectively leveraging cloud
resources across different geographical areas.
7 Further Investigations and Future Work
There are several areas that would benefit from an extension
of the ATLAS Google Project, to allow further exploration
of employing commercial cloud resources for ATLAS.
Firstly, this could involve further investigation of the ATLAS
Google Site, evaluating the typical grid site configuration
but at a lower level of resources than deployed at the end
of this project cycle. As discussed in Scaling Up the Site
Again (Sect. 4.5), whilst the egress is more under control
it is nevertheless still significant, so a more detailed evalua-
tion of which workflows are suitable for commercial cloud
should be done, including any necessary, additional config-
uration changes. Understanding how the subscription agree-
ment structure works and what a reasonable discount looks
like is also an important factor when considering workflow
restrictions, site structure adjustments or long-term contract
costs. The impact of these studies could also be expanded to
potentially evolve the ATLAS workflow and data manage-
ment systems to take into account the cost of the network.
Secondly, peering with commercial cloud networks would
be the main focus for further investigation, as this multi-faceted, time-intensive activity is only just beginning. The network costs incurred by transfers via the internet are a critical roadblock to widespread adoption of commercial cloud storage for scientific computing; in the case of the LHC experiments, transfers need to be routed through the LHCONE overlay to eliminate these costs for the WLCG sites. At the same time, data schedul-
ing should be introduced to reduce the data volumes that
incur transfer and storage costs. Widespread peering is nec-
essary to reduce the need to route via complicated paths.
There are several NRENs in the US and EU, which should
be approached to discuss the technical details of network
peering, for example ESnet [55], Internet2 [56], or GÉANT
Network [57].
There are various options in the Google Network stack to
support this activity, but at the same time the necessary
Google documentation does not seem to be publicly avail-
able. Detailed discussions would be needed between WLCG
and CERN IT network experts together with the correspond-
ing network experts from Google. If necessary, short-term
Google premium support could be financed once the peering
options with the NRENs have been explored.
There is the potential need to develop new features in the
Rucio [12], GFAL [58], Davix [28], and FTS [30] stack
to support this R&D and ATLAS would continue to work
with the respective teams to discuss the objectives and mile-
stones, and to follow the implementation, deployment, and
operations. There are two distinct analyses and respective
evolutions of the ATLAS computing model that could be
explored. One idea is to improve the necessary workflow and
data management policies when using cloud storage. This
would involve for example only storing data in the cloud that
is currently in use, which is rather different to the current
grid storage model. In this way, the complexity of a hetero-
geneous infrastructure setup reduces the egress volume, but
still allows bursting to cloud compute with large data inputs.
Alternatively, the necessary capabilities to exploit cloud stor-
age and network features such as bucket-level copy could
be developed, to facilitate internal transfers between differ-
ent cloud regions. This would remove the need to egress
the data via FTS, which incurs the usual associated transfer
costs.
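As an illustration of the second option, the sketch below (Python, using the google-cloud-storage client) performs a server-side, bucket-to-bucket copy with the rewrite API; the bucket and object names are hypothetical placeholders and this is not the project's implementation. Note that copies between regions may still incur inter-region transfer charges, although these are typically lower than internet egress.

from google.cloud import storage

client = storage.Client()
src_bucket = client.bucket("atlas-data-europe-west1")  # hypothetical source bucket
dst_bucket = client.bucket("atlas-data-us-east4")      # hypothetical destination bucket

src_blob = src_bucket.blob("mc23/DAOD_PHYS/sample.root")  # hypothetical object name
dst_blob = dst_bucket.blob(src_blob.name)

# rewrite() copies the object inside the storage service itself, so the data
# are never downloaded and re-uploaded by the client and no FTS-mediated
# egress path is involved. Large objects are copied over several calls,
# driven by the rewrite token.
token = None
while True:
    token, bytes_rewritten, total_bytes = dst_blob.rewrite(src_blob, token=token)
    if token is None:
        break
print(f"copied {total_bytes} bytes server-side")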
Thirdly, beyond the ATLAS Google Site configuration and
network considerations, many areas of R&D were performed
as part of the AGP, using multiple services offered within the
Google Cloud [59]. Most of these projects have taken advan-
tage of the elastic availability of special types of resources
to ramp up and down ephemeral compute clusters using
for example GPUs, large amounts of memory, or ARM
CPUs depending on the current need. This was done either
through the ATLAS PanDA workflow management system
or using interactive compute with Jupyter [60] notebooks and
Dask [61] task scheduling.
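As a minimal sketch of this interactive mode, the Python snippet below uses Dask's distributed scheduler, with a LocalCluster standing in for an ephemeral, cloud-provisioned cluster; in the project's setup the workers would instead run on elastically provisioned cloud nodes reached from a Jupyter notebook.

from dask.distributed import Client, LocalCluster
import dask.array as da

cluster = LocalCluster(n_workers=2)   # stand-in for an ephemeral cloud cluster
client = Client(cluster)

# a toy columnar-style computation distributed over the workers
x = da.random.random((20_000, 1_000), chunks=(2_000, 1_000))
print(x.std(axis=0).mean().compute())

cluster.scale(8)   # ramp the cluster up for a burst of work ...
cluster.scale(2)   # ... and back down when idle, releasing the resources

client.close()
cluster.close()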
The usage of such non-standard resources that are not eas-
ily available at standard WLCG grid sites has proven to be
extremely valuable and effective, helping to develop and
expedite new ATLAS data analysis techniques using machine
learning and the migration of the ATLAS software to ARM
CPUs. Compact data formats with columnar data access have
been investigated using Google resources.
ATLAS plans to continue to take advantage of non-standard
resources like GPUs and ARM CPUs to work on innova-
tive and novel analysis and software techniques, accelerating the path to discovery. Another larger focus area could
be the development of high-energy physics algorithms using
tools not available before, such as Large Language Models
and Generative Artificial Intelligence. A continuation of the
AGP will facilitate these initiatives, providing proven access
to such resources. Like the other activities proposed in this
section, they are not expected to be resource or cost intensive.
There are several other topics of interest that might be inves-
tigated during a continuation of the AGP. Running analysis
jobs at the ATLAS Google Site was only briefly examined
and this workflow, which is unique in its highly variable
nature, may warrant further scrutiny. In particular, actions
to replicate the most highly requested analysis-level data
samples were not measurably successful, requiring further
understanding of data popularity and data placement. Tak-
ing running analysis on the Google Cloud further, the idea
of extending the resources of the ATLAS Google site with
user-specific Google credits has also been raised. Authen-
tication issues could also be interesting to look into, when
considering using the site not only for ATLAS data, but also
for hosting Open Data, available to all. On the other hand,
it may be worth investing some time in understanding the
privacy implications of the data stored in commercial cloud
resources, which may contain protected information, at least
according to GDPR [62], and whether this must be taken into
consideration.
The ATLAS Google Project has provided valuable insights
into the use of commercial clouds, and there remain many
avenues of investigation that could be pursued. This next
phase could feature a significant reduction of resources
required, and hence financial expense, as the focus shifts
to network connectivity and continued R&D. The ATLAS
Google site could nevertheless still continue, albeit at a lower
level, although if the network R&D is successful in reducing
the egress cost, it could once again be ramped up to verify the
savings arising from the implementation of dedicated peering
solutions.
8 Summary
While traditional, site-based resources have always formed
the backbone of ATLAS computing, commercial clouds may
in some cases provide a viable and attractive alternative or
addition. Much experience was gained in the integration of
the ATLAS Google site into ATLAS distributed computing
and no significant technical issue was discovered to prevent
the experiment employing such resources in the future. Fur-
thermore, the current workflow and data management tools
employed by ATLAS are shown to be adequate for apply-
ing changes to the site configuration. The technical solutions
implemented are essentially cloud-agnostic, not only avoid-
ing vendor lock-in, but also potentially enabling access to
other commercial cloud resources. The subscription pricing
model applied in this project has proven to be beneficial to
ATLAS, although questions remain as to how this may change going forward.
The project has shown that commercial cloud sites are an
effective mechanism for providing additional, on-demand
CPU resources. At the level employed by ATLAS, typically
five or ten thousand cores, preemption of the allocated job
slots is barely an issue, even when using the spot instance
model as is done here. Whilst higher eviction rates were
occasionally noticeable, for example during the data repro-
cessing single workflow period, the overall failure rate was
not significantly higher than that observed on the grid. The
ATLAS Google site was also shown to be extremely effec-
tive as a bursting resource, quickly providing up to one hun-
dred thousand additional job slots, resulting in a significantly
faster production turnaround than is possible on the ATLAS
grid sites. In addition, the project has enabled parallel R&D
efforts to flourish by providing different types of resources,
for example GPU or ARM, on an elastic basis, demonstrating
rapid integration [59].
Whilst it was also shown that it is possible to integrate, adjust
and expand associated storage at the cloud site, this is less
trivial than CPU as the intrinsic network costs must be taken
into consideration. Storage and in particular network costs
are known to dominate the TCO of commercial clouds, so much so that this often dissuades sites from taking an active interest in employing such resources. The studies performed during
the AGP and outlined in this TCO analysis have shown that
commercial cloud is a technically viable option for ATLAS
distributed computing, albeit with additional costs not neces-
sarily considered when employing traditional grid resources.
Within the WLCG model the cost of the network is sometimes hidden: although in reality it is probably rather high, and means to reduce it are worth investigating, it can also be considered irreducible and somewhat independent of ATLAS. Conversely, commercial cloud data transfers over
standard internet networks incur significant costs for the data
centres.
The TCO evaluation has shown that without the subscription
model, the cost of commercial cloud resources is signifi-
cantly increased. The Google Cloud resources used during
this project cost a total of $3.162M at list-price compared
to the $849,458 paid via the subscription agreement, repre-
senting a discount of 73%. Viewed differently, ATLAS used 3.72 times the Google Cloud resources that the same expenditure would have purchased at list-price, meaning the resources used during this project would have cost 272% more at list-price. This is most obvious in the costs associated with
the bursting test shown in Fig. 5c, which depicts daily expen-
diture considerably in excess of the $1900 per day rate of the
subscription agreement.
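As a consistency check, the quoted figures follow directly from the two totals:

\[ 1 - \frac{\$849{,}458}{\$3{,}162{,}000} \approx 73\%, \qquad \frac{\$3{,}162{,}000}{\$849{,}458} \approx 3.72, \qquad (3.72 - 1) \times 100\% \approx 272\%. \]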
As shown in Table 1, almost half of the total list-price costs
are due to egress. With this in mind, the ATLAS Collaboration
is investigating ways to reduce this cost via dedicated net-
work solutions, as outlined in Dedicated Networks (Sect.
5.1). It was also shown that egress costs are workflow depen-
dent, which may be a consideration when employing such
resources in the future. In particular, the substantial egress
associated with data reprocessing means that for now this
workflow is best avoided until further improvements in net-
work connectivity are deployed, and that until then the site
cannot be seen as universally suitable for ATLAS. If work-
flows such as Fast or Full Chain [63] can be employed, where
the egress of intermediate MC formats is avoided, this will
also help to reduce these costs.
Establishing a viable and cost-effective subscription agree-
ment between experiment and commercial cloud provider is
clearly a critical consideration of the TCO, given the large
discrepancy between the list-prices and the agreement associ-
ated with this project. A collaborative approach may be nec-
essary to obtain the best deal with a large volume discount.
One option could be for CERN to make a significant purchase of cloud capacity and offer others the option to pay to be part of it,
which has been done before [64]. This offer could extend
not only to multiple sites, but also multiple experiments, and
could even be done cooperatively with other international
organisations such as EMBL [65]. Another important con-
sideration going forward, especially if commercial clouds are
employed by a significant number of sites, is to understand
how such resources fit into the WLCG pledge structure.
The initial concerns about a significant loss of on-site person-
nel when outsourcing computing to the cloud, and the poten-
tial wider implications for ATLAS due to additional parallel support roles, appear to be less pronounced than anticipated. This is because,
for the most part, these individuals at the sites would still be
needed to contribute to ATLAS distributed computing. As
such, at least for larger sites, employing commercial clouds
and off-premises resources may not immediately result in
significant cost savings.
In summary, commercial cloud computing is an effective
technical solution for ATLAS for providing additional CPU
resources, and whilst the seamless integration of cloud-based
storage was also achieved, network costs may be significant,
based on the list-price. Some ATLAS workflows are found to be better suited than others with respect to egress. Resource burst-
ing was shown to be very effective, albeit at significant cost.
Establishing a favourable subscription agreement model that
makes sense to both the cloud provider and the client is
an advantage. By leveraging the Google Cloud Subscription
Agreement pricing model, ATLAS has effectively harnessed between three and four times the resources that the same investment would deliver at list-price. It is yet
to be seen how much this project influences the structure and
cost of any potential follow-up deal to be brokered. There
is much interest within ATLAS in continuing this project, with network connectivity as the main focus.
Acknowledgements We thank CERN for the very successful operation
of the LHC and its injectors, as well as the support staff at CERN and at
our institutions worldwide without whom ATLAS could not be operated
efficiently.
The crucial computing support from all WLCG partners is acknowl-
edged gratefully, in particular from CERN, the ATLAS Tier-1 facilities
at TRIUMF/SFU (Canada), NDGF (Denmark, Norway, Sweden), CC-
IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1
(Netherlands), PIC (Spain), RAL (UK) and BNL (USA), the Tier-2
facilities worldwide and large non-WLCG resource providers. Major
contributors of computing resources are listed in Ref. [66].
We gratefully acknowledge the support of ANPCyT, Argentina; YerPhI,
Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azer-
baijan; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada;
CERN; ANID, Chile; CAS, MOST and NSFC, China; Minciencias,
Colombia; MEYS CR, Czech Republic; DNRF and DNSRC, Den-
mark; IN2P3-CNRS and CEA-DRF/IRFU, France; SRNSFG, Georgia;
BMBF, HGF and MPG, Germany; GSRI, Greece; RGC and Hong Kong
SAR, China; ISF and Benoziyo Center, Israel; INFN, Italy; MEXT and
JSPS, Japan; CNRST, Morocco; NWO, Netherlands; RCN, Norway;
MNiSW, Poland; FCT, Portugal; MNE/IFA, Romania; MESTD, Ser-
bia; MSSR, Slovakia; ARRS and MIZŠ, Slovenia; DSI/NRF, South
Africa; MICINN, Spain; SRC and Wallenberg Foundation, Sweden;
SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST,
Taipei; TENMAK, Türkiye; STFC, United Kingdom; DOE and NSF,
USA.
Individual groups and members have received support from BCKDF,
CANARIE, CRC and DRAC, Canada; CERN-CZ, FORTE and PRIMUS,
Czech Republic; COST, ERC, ERDF, Horizon 2020, ICSC-Next-
GenerationEU and Marie Skłodowska-Curie Actions, European Union;
Investissements d’Avenir Labex, Investissements d’Avenir Idex and
ANR, France; DFG and AvH Foundation, Germany; Herakleitos, Thales
and Aristeia programmes co-financed by EU-ESF and the Greek NSRF,
Greece; BSF-NSF and MINERVA, Israel; NCN and NAWA, Poland;
La Caixa Banking Foundation, CERCA Programme Generalitat de
Catalunya and PROMETEO and GenT Programmes Generalitat Valen-
ciana, Spain; Göran Gustafssons Stiftelse, Sweden; The Royal Society
and Leverhulme Trust, United Kingdom.
In addition, individual members wish to acknowledge support from
Armenia: Yerevan Physics Institute (FAPERJ); CERN: European
Organization for Nuclear Research (CERN PJAS); Chile: Agen-
cia Nacional de Investigación y Desarrollo (FONDECYT 1230812,
FONDECYT 1230987, FONDECYT 1240864); China: Chinese Min-
istry of Science and Technology (MOST-2023YFA1605700), National
Natural Science Foundation of China (NSFC - 12175119, NSFC
12275265, NSFC-12075060); Czech Republic: Czech Science Foun-
dation (GACR - 24-11373S), Ministry of Education Youth and Sports
(FORTE CZ.02.01.01/00/22_008/0004632), PRIMUS Research Pro-
gramme (PRIMUS/21/SCI/017); EU: H2020 European Research Coun-
cil (ERC - 101002463); European Union: European Research Coun-
cil (ERC - 948254, ERC 101089007), Horizon 2020 Framework
Programme (MUCCA - CHIST-ERA-19-XAI-00), European Union,
Future Artificial Intelligence Research (FAIR-NextGenerationEU
PE00000013), Italian Center for High Performance Computing, Big
Data and Quantum Computing (ICSC, NextGenerationEU); France:
Agence Nationale de la Recherche (ANR-20-CE31-0013, ANR-21-
CE31-0013, ANR-21-CE31-0022), Investissements d’Avenir Labex
(ANR-11-LABX-0012); Germany: Baden-Württemberg Stiftung (BW
Stiftung-Postdoc Eliteprogramme), Deutsche Forschungsgemeinschaft
(DFG - 469666862, DFG - CR 312/5-2); Italy: Istituto Nazionale di
Fisica Nucleare (ICSC, NextGenerationEU); Japan: Japan Society for
the Promotion of Science (JSPS KAKENHI JP22H01227, JSPS KAK-
ENHI JP22H04944, JSPS KAKENHI JP22KK0227, JSPS KAKENHI
JP23KK0245); Netherlands: Netherlands Organisation for Scientific
Research (NWO Veni 2020 - VI.Veni.202.179); Norway: Research
Council of Norway (RCN-314472); Poland: Polish National Agency
for Academic Exchange (PPN/PPO/2020/1/00002/U/00001), Polish
National Science Centre (NCN 2021/42/E/ST2/00350, NCN OPUS
nr 2022/47/B/ST2/03059, NCN UMO-2019/34/E/ST2/00393, UMO-
2020/37/B/ST2/01043, UMO-2021/40/C/ST2/00187, UMO-2022/
47/O/ST2/00148, UMO-2023/49/B/ST2/04085); Slovenia: Slovenian
Research Agency (ARIS grant J1-3010); Spain: Generalitat Valen-
ciana (Artemisa, FEDER, IDIFEDER/2018/048), Ministry of Science
and Innovation (MCIN & NextGenEU PCI2022-135018-2, MICIN &
FEDER PID2021-125273NB, RYC2019-028510-I, RYC2020-030254-
I, RYC2021-031273-I, RYC2022-038164-I), PROMETEO and GenT
Programmes Generalitat Valenciana (CIDEGENT/2019/027); Swe-
den: Swedish Research Council (Swedish Research Council 2023-
04654, VR 2018-00482, VR 2022-03845, VR 2022-04683, VR 2023-
03403, VR grant 2021-03651), Knut and Alice Wallenberg Foun-
dation (KAW 2018.0157, KAW 2018.0458, KAW 2019.0447, KAW
2022.0358); Switzerland: Swiss National Science Foundation (SNSF
- PCEFP2_194658); United Kingdom: Leverhulme Trust (Leverhulme
Trust RPG-2020-004), Royal Society (NIF-R1-231091); United States
of America: U.S. Department of Energy (ECA DE-AC02-76SF00515),
Neubauer Family Foundation.
Author contributions All authors have contributed to the publication,
being variously involved in the design and the construction of the detec-
tors, in writing software, calibrating subsystems, operating the detectors
and acquiring data, and finally analysing the processed data. The ATLAS
Collaboration members discussed and approved the scientific results.
The manuscript was prepared by a subgroup of authors appointed by
the collaboration and subject to an internal collaboration-wide review
process. All authors reviewed and approved the final version of the
manuscript.
Funding Open access funding provided by CERN (European Organi-
zation for Nuclear Research).
Data Availability No datasets were generated or analysed during the
current study.
Declarations
Competing interests The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attri-
bution 4.0 International License, which permits use, sharing, adaptation,
distribution and reproduction in any medium or format, as long as you
give appropriate credit to the original author(s) and the source, pro-
vide a link to the Creative Commons licence, and indicate if changes
were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indi-
cated otherwise in a credit line to the material. If material is not
included in the article’s Creative Commons licence and your intended
use is not permitted by statutory regulation or exceeds the permit-
ted use, you will need to obtain permission directly from the copy-
right holder. To view a copy of this licence, visit http://creativecomm
ons.org/licenses/by/4.0/.
References
1. ATLAS Collaboration (2008) The ATLAS experiment at the CERN
Large Hadron Collider. JINST 3:S08003. https://doi.org/10.1088/
1748-0221/3/08/S08003
2. Evans L, Bryant P (2008) LHC machine. JINST 3:S08001. https://
doi.org/10.1088/1748-0221/3/08/S08001
3. Bird I et al (2005) LHC computing grid: technical design report,
CERN-LHCC-2005-024. https://cds.cern.ch/record/840543
4. Bird I et al (2014) Update of the computing models of the WLCG
and the LHC experiments, CERN-LHCC-2014-014. https://cds.
cern.ch/record/1695401
5. Úbeda García M et al (2014) Integration of cloud resources in the
LHCb distributed computing. J Phys Conf Ser 513:032099. https://
doi.org/10.1088/1742-6596/513/3/032099
6. Panitkin S (2015) Look to the clouds and beyond. Nat Phys 11:373.
https://doi.org/10.1038/nphys3319
7. Holzman B et al (2017) HEPCloud, a new paradigm for HEP facil-
ities: CMS Amazon web services investigation. Comput Softw Big
Sci. https://doi.org/10.1007/s41781-017-0001-9
8. Lončar P (2023) Scalable data processing model of the ALICE experiment in the cloud. PhD thesis, University of Split (Sveučilište u Splitu, Fakultet elektrotehnike, strojarstva i brodogradnje, Zavod za elektroniku i računarstvo). https://cds.cern.ch/record/2874778
9. Barreiro Megino F, Bryant L, Hufnagel D, Hurtado Anampa K (2023) US ATLAS and US CMS HPC and cloud blueprint. arXiv:2304.07376 [physics.comp-ph]
10. Barreiro Megino F et al (2021) Seamless integration of commercial clouds with ATLAS distributed computing. EPJ Web Conf 251:02005. https://doi.org/10.1051/epjconf/202125102005
11. Barreiro Megino F et al (2017) PanDA for ATLAS distributed computing in the next decade. J Phys Conf Ser 898:052002. https://doi.org/10.1088/1742-6596/898/5/052002
12. Barisits M et al (2019) Rucio: scientific data management. Comput
Softw Big Sci. https://doi.org/10.1007/s41781-019-0026-3
13. Apollinari G et al (2015) High–luminosity Large Hadron Collider
(HL–LHC): preliminary design report, CERN-2015-005. https://
cds.cern.ch/record/2116337
14. ATLAS Collaboration (2020) ATLAS HL–LHC computing con-
ceptual design report, CERN-LHCC-2020-015, LHCC-G-178.
https://cds.cern.ch/record/2729668
15. ATLAS Collaboration (2022) ATLAS software and comput-
ing HL–LHC roadmap, CERN-LHCC-2022-005, LHCC-G-182.
https://cds.cern.ch/record/2802918
16. Devouassoux M (2018) Method to calculate the total cost of own-
ership of infrastructure as a service, version 2. https://doi.org/10.
5281/zenodo.2161088
17. Helix Nebula the science cloud. https://doi.org/10.3030/687614
18. Amazon Web Services pricing. https://aws.amazon.com/pricing
19. Google Cloud Platform pricing. https://cloud.google.com/
products/calculator
20. Martelli E (2024) Evolving the LHCOPN and LHCONE networks to support HL-LHC computing requirements. EPJ Web Conf 295:07016. https://doi.org/10.1051/epjconf/202429507016
21. ATLAS Collaboration (2024) Software and computing for Run 3
of the ATLAS experiment at the LHC. arXiv:2404.06335 [hep-ex]
22. Kubernetes. https://kubernetes.io/docs/home/
23. Maeno T et al (2019) Harvester: an edge service harvesting hetero-
geneous resources for ATLAS. EPJ Web Conf 214:03030. https://
doi.org/10.1051/epjconf/201921403030
24. Barreiro Megino F et al (2020) Using Kubernetes as an ATLAS computing site. EPJ Web Conf 245:07025. https://doi.org/10.1051/epjconf/202024507025
25. Barreiro Megino F et al (2024) Accelerating science: The usage of commercial clouds in ATLAS Distributed Computing. EPJ Web Conf 295:07002. https://doi.org/10.1051/epjconf/202429507002
26. Blomer J et al (2020) The CernVM file system, version 2.7.5.
https://doi.org/10.5281/zenodo.4114078
27. Authenticating requests (AWS Signature, version 4).
https://docs.aws.amazon.com/AmazonS3/latest/API/
sig-v4-authenticating-requests.html
28. Davix. https://davix.web.cern.ch/davix/docs/devel/
29. Barisits M et al (2024) Extending Rucio with modern cloud stor-
age support. EPJ Web Conf 295:01030. https://doi.org/10.1051/
epjconf/202429501030
30. CERN File Transfer Service. https://fts.web.cern.ch/fts/
31. Interoperable Global Trust Federation. https://www.igtf.net/
32. Google spot VMs. https://cloud.google.com/compute/docs/
instances/spot
33. Rucio replica management. https://rucio.cern.ch/documentation/
started/concepts/replica_management/
34. Rucio RSE configuration. https://rucio.cern.ch/documentation/
started/concepts/rucio_storage_element/
35. Borodin M et al (2021) The ATLAS data carousel project sta-
tus. EPJ Web Conf 251:02006. https://doi.org/10.1051/epjconf/
202125102006
36. ATLAS Collaboration (2022) Active learning reinterpretation of
an ATLAS dark matter search constraining a model of a dark
Higgs boson decaying to two (b)-quarks, ATL-PHYS-PUB-2022-
045, 2022. https://cds.cern.ch/record/2839789
37. ATLAS Collaboration (2017) Performance of the ATLAS trigger
system in 2015. Eur Phys J C 77:317. https://doi.org/10.1140/epjc/
s10052-017-4852-3
38. ATLAS Collaboration (2024) The ATLAS trigger system for LHC
run 3 and trigger performance in 2022. arXiv:2401.06630 [hep-ex]
39. Berghaus F et al (2020) ATLAS Sim@P1 upgrades during long
shutdown two. EPJ Web Conf 245:07044. https://doi.org/10.1051/
epjconf/202024507044
40. Glushkov I, Lee C, Di Girolamo A, Walker R, Gottardo CA (2024)
Optimization of opportunistic utilization of the ATLAS high-level
trigger farm for LHC Run 3. EPJ Web Conf 295:07035. https://doi.
org/10.1051/epjconf/202429507035
41. HPC Vega. https://www.izum.si/en/vega-en/
42. HPC Perlmutter. https://docs.nersc.gov/systems/perlmutter/
43. Google storage pricing. https://cloud.google.com/storage/pricing#
europe
44. Google network service tiers pricing. https://cloud.google.com/
network-tiers/pricing
45. Google Cloud interconnect pricing. https://cloud.google.com/
network-connectivity/docs/interconnect/pricing
46. Sfiligoi I et al (2021) Managing cloud networking costs for data–
intensive applications by provisioning dedicated network links.
arXiv:2104.06913 [cs.NI]
47. Energy Sciences Network. https://www.es.net
48. Giordano D et al (2023) HEPScore: a new CPU benchmark for the
WLCG. arXiv:2306.08118 [hep-ex]
49. Google Cloud Platform terms of service. https://cloud.google.com/
terms
50. Google data centers: efficiency. https://www.google.com/about/
datacenters/efficiency/
51. Carbon free energy for Google Cloud regions. https://cloud.google.
com/sustainability/region-carbon
52. The OCRE project. https://www.ocre-project.eu/
53. HEPCloud. https://computing.fnal.gov/hep-cloud/
54. CloudBank. https://www.cloudbank.org/
55. Energy Sciences Network peering. https://www.es.net/
engineering-services/the-network/peering-connections/
56. Internet2 cloud access. https://internet2.edu/cloud/cloud-access/
57. GÉANT Network. https://network.geant.org/
58. Grid File Access Library, version 2. https://dmc-docs.web.cern.ch/
dmc-docs/gfal2/gfal2.html
59. Barreiro Megino F et al (2024) Operational experience and R&D results using the Google cloud for high energy physics in the ATLAS experiment. arXiv:2403.15873 [hep-ex]
60. JupyterHub. https://jupyter.org/hub
61. Dask. https://www.dask.org
62. General Data Protection Regulation. https://gdpr-info.eu
63. Javurkova M et al (2021) The fast simulation chain in the ATLAS
experiment. EPJ Web Conf 251:03012. https://doi.org/10.1051/
epjconf/202125103012
64. CloudBank EU NGI. https://ngiatlantic.eu/funded-experiments/
cloudbank-eu-ngi
65. The European Molecular Biology Laboratory. https://www.embl.
org
66. ATLAS Collaboration, ATLAS Computing Acknowledgements,
ATL-SOFT-PUB-2023-001, 2023. https://cds.cern.ch/record/
2869272
Publisher’s Note Springer Nature remains neutral with regard to juris-
dictional claims in published maps and institutional affiliations.
The ATLAS Collaboration
G. Aad104 , E. Aakvaag17 , B. Abbott123 , S. Abdelhameed119a , K. Abeling56 , N. J. Abicht50 , S.H.Abidi
30 ,
M. Aboelela45 , A. Aboulhorma36e , H. Abramowicz154 , H. Abreu153 , Y. Abulaiti120 , B. S. Acharya70a,70b,m,
A. Ackermann64a , C. Adam Bourdarios4, L. Adamczyk87a , S. V. Addepalli27 , M.J.Addison
103 ,
J. Adelman118 , A. Adiguzel22c , T. Adye137 , A. A. Affolder139 ,Y.Ak
40 ,M.N.Agaras
13 ,J.Agarwala
74a,74b ,
A. Aggarwal102 , C. Agheorghiesei28c , F. Ahmadov39,y, W. S. Ahmed106 , S. Ahuja97 ,X.Ai
63e ,
G. Aielli77a,77b ,A.Aikot
166 , M. Ait Tamlihat36e , B. Aitbenchikh36a , M. Akbiyik102 , T.P.Akesson
100 ,
A. V. Akimov38 , D. Akiyama171 , N. N. Akolkar25 , S. Aktas22a , K. Al Khoury42 , G. L. Alberghi24b ,
J. Albert168 , P. Albicocco54 , G. L. Albouy61 , S. Alderweireldt53 , Z. L. Alegria124 , M. Aleksa37 ,
I. N. Aleksandrov39 ,C.Alexa
28b , T. Alexopoulos10 , F. Alfonsi24b ,M.Algren
57 , M. Alhroob170 ,
B. Ali135 , H.M.J.Ali
93 ,S.Ali
32 , S. W. Alibocus94 , M. Aliev34c , G. Alimonti72a , W. Alkakhi56 ,
C. Allaire67 ,B.M.M.Allbrooke
149 , J. F. Allen53 , C. A. Allendes Flores140f , P. P. Allport21 , A. Aloisio73a,73b ,
F. Alonso92 , C. Alpigiani141 , Z.M.K.Alsolami
93 , M. Alvarez Estevez101 , A. Alvarez Fernandez102 ,
M. Alves Cardoso57 , M.G.Alviggi
73a,73b ,M.Aly
103 , Y. Amaral Coutinho84b , A. Ambler106 , C. Amelung37,
M. Amerl103 , C. G. Ames111 , D. Amidei108 , B. Amini55 , K. J. Amirie158 , S. P. Amor Dos Santos133a ,
K. R. Amos166 , D. Amperiadou155 ,S.An
85, V. Ananiev128 , C. Anastopoulos142 , T. Andeen11 ,
J. K. Anders37 , A. C. Anderson60 , S. Y. Andrean48a,48b , A. Andreazza72a ,72b , S. Angelidakis9,
A. Angerami42 , A. V. Anisenkov38 , A. Annovi75a , C. Antel57 , E. Antipov148 , M. Antonelli54 ,
F. Anulli76a , M. Aoki85 , T. Aoki156 , M.A.Aparo
149 , L. Aperio Bella49 , C. Appelt19 , A. Apyan27 ,
S. J. Arbiol Val88 , C. Arcangeletti54 , A.T.H.Arce
52 , J-F. Arguin110 , S. Argyropoulos55 , J.-H. Arling49 ,
O. Arnaez4, H. Arnold148 , G. Artoni76a,76b , H. Asada113 ,K.Asai
121 ,S.Asai
156 , N. A. Asbah37 ,
R. A. Ashby Pickering170 , K. Assamagan30 , R. Astalos29a , K.S.V.Astrand
100 ,S.Atashi
162 , R.J.Atkin
34a ,
M. Atkinson165, H. Atmani36f, P. A. Atmasiddha131 , K. Augsten135 , S. Auricchio73a,73b ,A.D.Auriol
21 ,
V. A. Austrup103 , G. Avolio37 , K. Axiotis57 , G. Azuelos110 ,ad , D. Babal29b , H. Bachacou138 , K. Bachas155,q,
A. Bachiu35 , F. Backman48a,48b , A. Badea40 , T. M. Baer108 , P. Bagnaia76a,76b , M. Bahmani19 ,
D. Bahner55 ,K.Bai
126 , J. T. Baines137 , L. Baines96 , O. K. Baker175 , E. Bakos16 , D. Bakshi Gupta8,
L. E. Balabram Filho84b , V. Balakrishnan123 , R. Balasubramanian117 , E.M.Baldin
38 , P. Balek87a ,
E. Ballabene24b,24a , F. Balli138 , L.M.Baltes
64a , W. K. Balunas33 ,J.Balz
102 , I. Bamwidhi119b ,
E. Banas88 , M. Bandieramonte132 , A. Bandyopadhyay25 , S. Bansal25 , L. Barak154 , M. Barakat49 ,
E. L. Barberio107 , D. Barberis58b,58a , M. Barbero104 , M. Z. Barel117 , T. Barillari112 , M-S. Barisits37 ,
T. Ba r k low146 , P. Baron125 , D. A. Baron Moreno103 , A. Baroncelli63a ,A.J.Barr
129 , J.D.Barr
98 ,
F. Barreiro101 , J. Barreiro Guimarães da Costa14 ,U.Barron
154 , M.G.BarrosTeixeira
133a ,S.Barsov
38 ,
F. Bartels64a , R. Bartoldus146 ,A.E.Barton
93 ,P.Bartos
29a , A. Basan102 , M. Baselga50 , A. Bassalat67,b,
M. J. Basso159a , S. Bataju45 ,R.Bate
167 , R. L. Bates60 , S. Batlamous101, B. Batool144 , M. Battaglia139 ,
D. Battulga19 , M. Bauce76a,76b , M. Bauer80 , P. Bauer25 , L. T. Bazzano Hurrell31 , J. B. Beacham52 ,
T. Beau130 , J. Y. Beaucamp92 , P. H. Beauchemin161 , P. Bechtle25 , H. P. Beck20,p, K. Becker170 ,
A. J. Beddall83 , V. A. Bednyakov39 ,C.P.Bee
148 , L. J. Beemster16 , T. A. Beermann37 , M. Begalli84d ,
M. Begel30 , A. Behera148 , J.K.Behr
49 ,J.F.Beirer
37 , F. Beisiegel25 ,M.Belfkir
119b , G. Bella154 ,
L. Bellagamba24b , A. Bellerive35 , P. Bellos21 , K. Beloborodov38 , D. Benchekroun36a , F. Bendebba36a ,
Y. Benhammou154 , K. C. Benkendorfer62 , L. Beresford49 , M. Beretta54 , E. Bergeaas Kuutmann164 , N. Berger4,
B. Bergmann135 , J. Beringer18a , G. Bernardi5, C. Bernius146 , F. U. Bernlochner25 , F. Bernon37,104 ,
A. Berrocal Guardia13 ,T.Berry
97 ,P.Berta
136 , A. Berthold51 , S. Bethke112 , A. Betti76a,76b , A.J.Bevan
96 ,
N. K. Bhalla55 , S. Bhatta148 , D. S. Bhattacharya169 , P. Bhattarai146 , K.D.Bhide
55 , V. S. Bhopatkar124 ,
R. M. Bianchi132 , G. Bianco24b,24a , O. Biebel111 , R. Bielski126 , M. Biglietti78a , C. S. Billingsley45,
Y. Bimgdi36f , M. Bindi56 , A. Bingul22b ,C.Bini
76a,76b , G.A.Bird
33 ,M.Birman
172 ,M.Biros
136 ,
S. Biryukov149 , T. Bisanz50 , E. Bisceglie44b,44a ,J.P.Biswal
137 ,D.Biswas
144 , I. Bloch49 ,A.Blue
60 ,
U. Blumenschein96 , J. Blumenthal102 , V. S. Bobrovnikov38 , M. Boehler55 , B. Boehm169 , D. Bogavac37 ,
A. G. Bogdanchikov38 , C. Bohm48a , V. Boisvert97 , P. Bokan37 ,T.Bold
87a , M. Bomben5, M. Bona96 ,
M. Boonekamp138 , C. D. Booth97 , A. G. Borbély60 , I. S. Bordulev38 , G. Borissov93 , D. Bortoletto129 ,
D. Boscherini24b ,M.Bosman
13 , J.D.BossioSola
37 , K. Bouaouda36a , N. Bouchhar166 , L. Boudet4,
J. Boudreau132 , E. V. Bouhova-Thacker93 , D. Boumediene41 , R. Bouquet58b,58a , A. Boveia122 ,J.Boyd
37 ,
D. Boye30 , I. R. Boyko39 , L. Bozianu57 , J. Bracinik21 , N. Brahimi4, G. Brandt174 , O. Brandt33 ,
F. Braren49 ,B.Brau
105 ,J.E.Brau
126 , R. Brener172 , L. Brenner117 , R. Brenner164 , S. Bressler172 ,
G. Brianti79a,79b , D. Britton60 , D. Britzger112 , I. Brock25 , G. Brooijmans42 , E. M. Brooks159b ,
E. Brost30 ,L.M.Brown
168 , L. E. Bruce62 , T. L. Bruckler129 , P. A. Bruckman de Renstrom88 , B. Brüers49 ,
A. Bruni24b , G. Bruni24b , M. Bruschi24b ,N.Bruscino
76a,76b , T. Buanes17 , Q. Buat141 , D. Buchin112 ,
A. G. Buckley60 , O. Bulekov38 , B. A. Bullard146 , S. Burdin94 ,C.D.Burgard
50 , A. M. Burger37 ,
B. Burghgrave8, O. Burlayenko55 ,J.Burleson
165 , J.T.P.Burr
33 , J. C. Burzynski145 , E. L. Busch42 ,
V. Büscher102 , P.J.Bussey
60 , J.M.Butler
26 , C. M. Buttar60 , J. M. Butterworth98 , W. Buttinger137 ,
C. J. Buxo Vazquez109 , A. R. Buzykaev38 , S. Cabrera Urbán166 , L. Cadamuro67 , D. Caforio59 ,H.Cai
132 ,
Y. Ca i14 ,114c ,Y.Cai
114a , V.M.M.Cairo
37 , O. Cakir3a , N. Calace37 , P. Calafiura18a , G. Calderini130 ,
P. Calfayan69 , G. Callea60 , L. P. Caloba84b ,D.Calvet
41 ,S.Calvet
41 , M. Calvetti75a,75b , R. Camacho Toro130 ,
S. Camarda37 , D. Camarero Munoz27 , P. Camarri77a,77b , M. T. Camerlingo73a,73b , D. Cameron37 ,
C. Camincher168 , M. Campanelli98 , A. Camplani43 , V. Canale73a,73b , A. C. Canbay3a , E. Canonero97 ,
J. Cantero166 ,Y.Cao
165 , F. Capocasa27 , M. Capua44b,44a , A. Carbone72a,72b , R. Cardarelli77a ,
J. C. J. Cardenas8, G. Carducci44b,44a ,T.Carli
37 , G. Carlino73a , J. I. Carlotto13 , B. T. Carlson132,r,
E. M. Carlson168,159a , J. Carmignani94 , L. Carminati72a ,72b , A. Carnelli138 , M. Carnesale76a,76b , S. Caron116 ,
E. Carquin140f ,S.Carrá
72a , G. Carratta24b,24a ,A.M.Carroll
126 , T.M.Carter
53 , M. P. Casado13,j,
M. Caspar49 , F. L. Castillo4, L. Castillo Garcia13 , V. Castillo Gimenez166 ,N.F.Castro
133a,133e ,
A. Catinaccio37 ,J.R.Catmore
128 , T. Cavaliere4, V. Cavaliere30 , N. Cavalli24b,24a , L. J. Caviedes Betancourt23b,
Y. C. Cekmecelioglu49 , E. Celebi83 , S. Cella37 , F. Celli129 , M. S. Centonze71a,71b , V. Cepaitis57 ,
K. Cerny125 , A. S. Cerqueira84a , A. Cerri149 , L. Cerrito77a,77b , F. Cerutti18a ,B.Cervato
144 , A. Cervelli24b ,
G. Cesarini54 , S. A. Cetin83 , D. Chakraborty118 , J. Chan18a , W.Y.Chan
156 , J. D. Chapman33 ,
E. Chapon138 , B. Chargeishvili152b , D.G.Charlton
21 , M. Chatterjee20 , C. Chauhan136 ,Y.Che
114a ,
S. Chekanov6, S. V. Chekulaev159a , G.A.Chelkov
39,a, A. Chen108 , B. Chen154 , B. Chen168 , H. Chen114a ,
H. Chen30 , J. Chen63c , J. Chen145 , M. Chen129 , S. Chen156 , S. J. Chen114a , X. Chen63c , X. Chen15 ,ac ,
Y. Chen63a , C. L. Cheng173 , H. C. Cheng65a , S. Cheong146 , A. Cheplakov39 , E. Cheremushkina49 ,
E. Cherepanova117 , R. Cherkaoui El Moursli36e , E. Cheu7, K. Cheung66 , L. Chevalier138 , V. Chiarella54 ,
G. Chiarelli75a , N. Chiedde104 , G. Chiodini71a , A. S. Chisholm21 , A. Chitan28b , M. Chitishvili166 ,
M. V. Chizhov39 , K. Choi11 , Y. Chou141 , E.Y.S.Chow
116 ,K.L.Chu
172 , M.C.Chu
65a ,X.Chu
14,114c ,
Z. Chubinidze54 , J. Chudoba134 , J. J. Chwastowski88 ,D.Cieri
112 , K.M.Ciesla
87a , V. Cindro95 ,
A. Ciocio18a , F. Cirotto73a,73b , Z.H.Citron
172 , M. Citterio72a , D. A. Ciubotaru28b,A.Clark
57 ,P.J.Clark
53 ,
N. Clarke Hall98 ,C.Clarry
158 , J. M. Clavijo Columbie49 ,S.E.Clawson
49 , C. Clement48a,48b , Y. Coadou104 ,
M. Cobal70a,70c , A. Coccaro58b , R. F. Coelho Barrue133a , R. Coelho Lopes De Sa105 , S. Coelli72a ,B.Cole
42 ,
J. Collot61 , P. Conde Muiño133a,133g , M. P. Connell34c , S. H. Connell34c , E. I. Conroy129 , F. Conventi73a,ae ,
H. G. Cooke21 , A. M. Cooper-Sarkar129 , F. A. Corchia24b,24a , A. Cordeiro Oudot Choi130 , L. D. Corpe41 ,
M. Corradi76a,76b , F. Corriveau106,x, A. Cortes-Gonzalez19 , M.J.Costa
166 , F. Costanza4, D. Costanzo142 ,
B. M. Cote122 , J. Couthures4,G.Cowan
97 , K. Cranmer173 , D. Cremonini24b,24a , S. Crépé-Renaudin61 ,
F. Crescioli130 , M. Cristinziani144 , M. Cristoforetti79a,79b , V. Croft117 ,J.E.Crosby
124 , G. Crosetti44b,44a ,
A. Cueto101 ,H.Cui
98 ,Z.Cui
7, W. R. Cunningham60 , F. Curcio166 , J. R. Curran53 , P. Czodrowski37 ,
M. J. Da Cunha Sargedas De Sousa58b,58a , J. V. Da Fonseca Pinto84b ,C.DaVia
103 , W. Dabrowski87a ,
T. Dado37 , S. Dahbi151 ,T.Dai
108 , D. Dal Santo20 , C. Dallapiccola105 ,M.Dam
43 ,G.Damen
30 ,
V. D’Amico111 ,J.Damp
102 , J. R. Dandoy35 , D. Dannheim37 , M. Danninger145 ,V.Dao
148 , G. Darbo58b ,
S. J. Das30,af , F. Dattola49 ,S.DAuria
72a,72b , A. D’avanzo73a,73b ,C.David
34a , T. Davidek136 ,
I. Dawson96 , H. A. Day-hall135 ,K.De
8, R. De Asmundis73a ,N.DeBiase
49 ,S.DeCastro
24b,24a ,
N. De Groot116 , P. de Jong117 , H. De la Torre118 ,A.DeMaria
114a , A. De Salvo76a , U. De Sanctis77a,77b ,
F. De Santis71a ,71b , A. De Santo149 ,J.B.DeVivieDeRegie
61 , D. V. Dedovich39 , J. Degens94 , A. M. Deiana45 ,
F. Del Corso24b,24a , J. Del Peso101 ,F.DelRio
64a , L. Delagrange130 , F. Deliot138 , C. M. Delitzsch50 ,
M. Della Pietra73a,73b , D. Della Volpe57 , A. Dell’Acqua37 , L. Dell’Asta72a,72b , M. Delmastro4,
P. A. Delsart61 , S. Demers175 , M. Demichev39 , S.P.Denisov
38 , L. D’Eramo41 , D. Derendarz88 ,
F. Derue130 ,P.Dervan
94 , K. Desch25 , C. Deutsch25 , F.A.DiBello
58b,58a , A. Di Ciaccio77a,77b ,
L. Di Ciaccio4, A. Di Domenico76a,76b , C. Di Donato73a,73b , A. Di Girolamo37 , G. Di Gregorio37 ,
A. Di Luca79a,79b , B. Di Micco78a,78b , R. Di Nardo78a,78b , K. F. Di Petrillo40 , M. Diamantopoulou35 ,
F. A. Dias117 ,T.DiasDoVale
145 ,M.A.Diaz
140a,140b , F. G. Diaz Capriles25 , A. R. Didenko39, M. Didenko166 ,
E. B. Diehl108 , S. Díez Cornell49 , C. Diez Pardos144 , C. Dimitriadi164 , A. Dimitrievska21 , J. Dingfelder25 ,
T. Dingley129 , I-M. Dinu28b , S. J. Dittmeier64b , F. Dittus37 ,M.Divisek
136 ,F.Djama
104 , T. Djobava152b ,
C. Doglioni103,100 , A. Dohnalova29a , J. Dolejsi136 , Z. Dolezal136 , K. Domijan87a , K.M.Dona
40 ,
M. Donadelli84d , B. Dong109 , J. Donini41 , A. D’Onofrio73a,73b , M. D’Onofrio94 , J. Dopke137 ,
A. Doria73a , N. Dos Santos Fernandes133a , P. Dougan103 ,M.T.Dova
92 ,A.T.Doyle
60 , M. A. Draguet129 ,
E. Dreyer172 , I. Drivas-koulouris10 ,M.Drnevich
120 , M. Drozdova57 ,D.Du
63a ,T.A.duPree
117 ,
F. Dubinin38 , M. Dubovsky29a , E. Duchovni172 , G. Duckeck111 , O. A. Ducu28b , D. Duda53 , A. Dudarev37 ,
E. R. Duden27 , M. D’uffizi103 , L. Duflot67 , M. Dührssen37 , I. Duminica28g , A. E. Dumitriu28b ,
M. Dunford64a , S. Dungs50 , K. Dunne48a,48b , A. Duperrin104 , H. Duran Yildiz3a , M. Düren59 ,
A. Durglishvili152b , B. L. Dwyer118 , G. I. Dyckes18a , M. Dyndal87a , B. S. Dziedzic37 , Z. O. Earnshaw149 ,
G. H. Eberwein129 , B. Eckerova29a , S. Eggebrecht56 , E. Egidio Purcino De Souza84e , L.F.Ehrke
57 ,
G. Eigen17 , K. Einsweiler18a ,T.Ekelof
164 , P. A. Ekman100 , S. El Farkh36b , Y. El Ghazali63a , H. El Jarrari37 ,
A. El Moussaouy36a , V. Ellajosyula164 , M. Ellert164 , F. Ellinghaus174 , N. Ellis37 , J. Elmsheuser30 ,
M. Elsawy119a , M. Elsing37 , D. Emeliyanov137 , Y. Enari85 ,I.Ene
18a , S. Epari13 , P. A. Erland88 ,
D. Ernani Martins Neto88 , M. Errenst174 , M. Escalier67 , C. Escobar166 , E. Etzion154 , G. Evans133a ,
H. Evans69 , L. S. Evans97 , A. Ezhilov38 , S. Ezzarqtouni36a , F. Fabbri24b ,24a , L. Fabbri24b,24a , G. Facini98 ,
V. Fadeyev139 , R. M. Fakhrutdinov38 , D. Fakoudis102 , S. Falciano76a , L. F. Falda Ulhoa Coelho37 ,
F. Fallavollita112 , G. Falsetti44b,44a , J. Faltova136 ,C.Fan
165 ,Y.Fan
14 , Y. Fang14,114c , M. Fanti72a ,72b ,
M. Faraj70a,70b , Z. Farazpay99 , A. Farbin8, A. Farilla78a , T. Farooque109 , S. M. Farrington53 , F. Fassi36e ,
D. Fassouliotis9, M. Faucci Giannelli77a,77b , W.J.Fawcett
33 , L. Fayard67 , P. Federic136 , P. Federicova134 ,
O. L. Fedin38,a, M. Feickert173 , L. Feligioni104 , D.E.Fellers
126 , C. Feng63b , Z. Feng117 , M. J. Fenton162 ,
L. Ferencz49 , R. A. M. Ferguson93 , S. I. Fernandez Luengo140f , P. Fernandez Martinez13 , M. J. V. Fernoux104 ,
J. Ferrando93 , A. Ferrari164 , P. Ferrari117,116 , R. Ferrari74a , D. Ferrere57 , C. Ferretti108 , D. Fiacco76a ,76b ,
F. Fiedler102 , P. Fiedler135 , A. Filipˇciˇc95 , E. K. Filmer1, F. Filthaut116 , M.C.N.Fiolhais
133a,133c,c,
L. Fiorini166 , W. C. Fisher109 , T. Fitschen103 , P. M. Fitzhugh138, I. Fleck144 , P. Fleischmann108 , T. Flick174 ,
M. Flores34d,aa , L. R. Flores Castillo65a , L. Flores Sanz De Acedo37 , F. M. Follega79a,79b ,N.Fomin
33 ,
J. H. Foo158 , A. Formica138 ,A.C.Forti
103 , E. Fortin37 , A. W. Fortman18a , M.G.Foti
18a , L. Fountas9,k,
D. Fournier67 ,H.Fox
93 , P. Francavilla75a,75b , S. Francescato62 , S. Franchellucci57 , M. Franchini24b,24a ,
S. Franchino64a , D. Francis37, L. Franco116 , V. Franco Lima37 , L. Franconi49 , M. Franklin62 , G. Frattari27 ,
Y. Y. Fr i d 154 , J. Friend60 , N. Fritzsche37 , A. Froch55 , D. Froidevaux37 , J.A.Frost
129 ,Y.Fu
63a ,
S. Fuenzalida Garrido140f , M. Fujimoto104 , K. Y. Fung65a , E. Furtado De Simas Filho84e , M. Furukawa156 ,
J. Fuster166 ,A.Gaa
56 , A. Gabrielli24b,24a , A. Gabrielli158 , P. Gadow37 , G. Gagliardi58b,58a , L. G. Gagnon18a ,
S. Gaid163 , S. Galantzan154 , E.J.Gallas
129 , B. J. Gallop137 ,K.K.Gan
122 , S. Ganguly156 ,Y.Gao
53 ,
F. M. Garay Walls140a,140b , B. Garcia30 , C. García166 , A. Garcia Alonso117 , A. G. Garcia Caffaro175 ,
J. E. García Navarro166 , M. Garcia-Sciveres18a , G. L. Gardner131 , R. W. Gardner40 , N. Garelli161 ,D.Garg
81 ,
R. B. Garg146 , J.M.Gargan
53, C. A. Garner158, C.M.Garvey
34a , V.K.Gassmann
161, G. Gaudio74a , V. Gautam13 ,
P. Gauzzi76a,76b , J. Gavranovic95 , I.L.Gavrilenko
38 , A. Gavrilyuk38 ,C.Gay
167 , G. Gaycken126 ,
E. N. Gazis10 , A. A. Geanta28b ,C.M.Gee
139 ,A.Gekow
122, C. Gemme58b , M. H. Genest61 ,A.D.Gentry
115 ,
S. George97 , W. F. George21 , T. Geralis47 , P. Gessinger-Befurt37 , M.E.Geyik
174 , M. Ghani170 ,
K. Ghorbanian96 , A. Ghosal144 , A. Ghosh162 , A. Ghosh7, B. Giacobbe24b , S. Giagu76a,76b , T. Giani117 ,
A. Giannini63a ,S.M.Gibson
97 , M. Gignac139 ,D.T.Gil
87b , A. K. Gilbert87a , B. J. Gilbert42 , D. Gillberg35 ,
G. Gilles117 , L. Ginabat130 , D. M. Gingrich2,ad , M. P. Giordani70a,70c , P.F.Giraud
138 , G. Giugliarelli70a ,70c ,
D. Giugni72a , F. Giuli37 , I. Gkialas9,k, L. K. Gladilin38 ,C.Glasman
101 , G. R. Gledhill126 ,G.Glemža
49 ,
M. Glisic126, I. Gnesi44b,f,Y.Go
30 , M. Goblirsch-Kolb37 , B. Gocke50 , D. Godin110 , B. Gokturk22a ,
S. Goldfarb107 , T. Golling57 , M.G.D.Gololo
34g , D. Golubkov38 , J. P. Gombas109 , A. Gomes133a ,133b ,
G. Gomes Da Silva144 , A. J. Gomez Delegido166 , R. Gonçalo133a , L. Gonella21 , A. Gongadze152c ,
F. Gonnella21 , J. L. Gonski146 , R. Y. González Andana53 , S. González de la Hoz166 , R. Gonzalez Lopez94 ,
C. Gonzalez Renteria18a , M. V. Gonzalez Rodrigues49 , R. Gonzalez Suarez164 , S. Gonzalez-Sevilla57 ,
L. Goossens37 ,B.Gorini
37 ,E.Gorini
71a,71b , A. Gorišek95 , T. C. Gosart131 , A. T. Goshaw52 ,
M. I. Gostkin39 ,S.Goswami
124 , C. A. Gottardo37 ,S.A.Gotz
111 , M. Gouighri36b , V. Goumarre49 ,
A. G. Goussiou141 , N. Govender34c , R. P. Grabarczyk129 , I. Grabowska-Bold87a , K. Graham35 ,
E. Gramstad128 , S. Grancagnolo71a,71b , C.M.Grant
1,138, P. M. Gravila28f , F. G. Gravili71a ,71b ,H.M.Gray
18a ,
M. Greco71a,71b , M. J. Green1, C. Grefe25 , A. S. Grefsrud17 , I. M. Gregor49 ,K.T.Greif
162 , P. Grenier146 ,
S. G. Grewe112, A. A. Grillo139 ,K.Grimm
32 , S. Grinstein13,t,J.-F.Grivaz
67 , E. Gross172 , J. Grosse-Knetter56 ,
J. C. Grundy129 , L. Guan108 , J.G.R.GuerreroRojas
166 , G. Guerrieri37 , R. Gugel102 , J. A. M. Guhit108 ,
A. Guida19 , E. Guilloton170 , S. Guindon37 ,F.Guo
14,114c ,J.Guo
63c ,L.Guo
49 ,Y.Guo
108 , R. Gupta132 ,
S. Gurbuz25 , S. S. Gurdasani55 , G. Gustavino76a,76b , P. Gutierrez123 , L. F. Gutierrez Zagazeta131 ,
M. Gutsche51 , C. Gutschow98 , C. Gwenlan129 , C. B. Gwilliam94 , E. S. Haaland128 , A. Haas120 ,
M. Habedank49 , C. Haber18a , H. K. Hadavand8, A. Hadef51 , S. Hadzic112 , A.I.Hagan
93 , J.J.Hahn
144 ,
E. H. Haines98 , M. Haleem169 ,J.Haley
124 ,J.J.Hall
142 , G. D. Hallewell104 ,L.Halser
20 , K. Hamano168 ,
M. Hamer25 , G. N. Hamity53 , E. J. Hampshire97 ,J.Han
63b ,K.Han
63a ,L.Han
114a ,L.Han
63a ,
S. Han18a ,Y.F.Han
158 , K. Hanagaki85 , M. Hance139 , D. A. Hangal42 , H. Hanif145 , M. D. Hank131 ,
J. B. Hansen43 , P. H. Hansen43 , D. Harada57 , T. Harenberg174 , S. Harkusha38 , M.L.Harris
105 ,
Y. T. Harris129 , J. Harrison13 , N. M. Harrison122 , P. F. Harrison170, N.M.Hartman
112 , N. M. Hartmann111 ,
R. Z. Hasan97,137 , Y. Hasegawa143 , F. Haslbeck129 , S. Hassan17 , R. Hauser109 , C.M.Hawkes
21 ,
R. J. Hawkings37 , Y. Hayashi156 , D. Hayden109 , C. Hayes108 , R. L. Hayes117 , C. P. Hays129 ,J.M.Hays
96 ,
H. S. Hayward94 ,F.He
63a ,M.He
14,114c ,Y.He
49 ,Y.He
98 , N. B. Heatley96 , V. Hedberg100 ,
A. L. Heggelund128 , N. D. Hehir96,*, C. Heidegger55 , K. K. Heidegger55 , J. Heilman35 ,S.Heim
49 ,
T. Heim18a , J. G. Heinlein131 ,J.J.Heinrich
126 , L. Heinrich112 ,ab , J. Hejbal134 ,A.Held
173 , S. Hellesund17 ,
C. M. Helling167 , S. Hellman48a,48b , R. C. W. Henderson93, L. Henkelmann33 , A. M. Henriques Correia37,
H. Herde100 , Y. Hernández Jiménez148 , L. M. Herrmann25 , T. Herrmann51 ,G.Herten
55 , R. Hertenberger111 ,
L. Hervas37 , M. E. Hesping102 , N. P. Hessey159a , M. Hidaoui36b , N. Hidic136 , E. Hill158 , S. J. Hillier21 ,
J. R. Hinds109 , F. Hinterkeuser25 ,M.Hirose
127 ,S.Hirose
160 , D. Hirschbuehl174 , T. G. Hitchings103 ,
B. Hiti95 , J. Hobbs148 , R. Hobincu28e ,N.Hod
172 , M. C. Hodgkinson142 , B. H. Hodkinson129 , A. Hoecker37 ,
D. D. Hofer108 , J. Hofer49 ,T.Holm
25 , M. Holzbock37 ,L.B.A.H.Hommels
33 , B. P. Honan103 , J. J. Hong69 ,
J. Hong63c , T. M. Hong132 , B. H. Hooberman165 , W. H. Hopkins6, M. C. Hoppesch165 ,Y.Horii
113 ,
S. Hou151 ,A.S.Howard
95 ,J.Howarth
60 ,J.Hoya
6, M. Hrabovsky125 , A. Hrynevich49 , T. Hryn’ova4,
P. J. Hsu66 ,S.-C.Hsu
141 ,T.Hsu
67 ,M.Hu
18a ,Q.Hu
63a , S. Huang65b , X. Huang14,114c , Y. Huang142 ,
Y. Huang102 , Y. Huang14 , Z. Huang103 , Z. Hubacek135 , M. Huebner25 , F. Huegging25 , T. B. Huffman129 ,
C. A. Hugli49 , M. Huhtinen37 , S. K. Huiberts17 , R. Hulsken106 , N. Huseynov12,h,J.Huston
109 ,J.Huth
62 ,
R. Hyneman146 , G. Iacobucci57 , G. Iakovidis30 , L. Iconomidou-Fayard67 , J. P. Iddon37 , P. Iengo73a,73b ,
R. Iguchi156 , Y. Iiyama156 ,T.Iizawa
129 ,Y.Ikegami
85 , N. Ilic158 ,H.Imam
84c , M. Ince Lezki57 ,
T. Ingebretsen Carlson48a,48b , J.M.Inglis
96 , G. Introzzi74a,74b , M. Iodice78a , V. Ippolito76a,76b ,
R. K. Irwin94 ,M.Ishino
156 ,W.Islam
173 , C. Issever19,49 , S. Istin22a,ah ,H.Ito
171 , R. Iuppa79a ,79b ,
A. Ivina172 , J. M. Izen46 , V. Izzo73a , P. Jacka134 , P. Jackson1, C. S. Jagfeld111 ,G.Jain
159a ,P.Jain
49 ,
K. Jakobs55 , T. Jakoubek172 ,J.Jamieson
60 , W. Jang156 , M. Javurkova105 , P. Jawahar103 , L. Jeanty126 ,
J. Jejelava152a,z, P. Jenni55,g, C. E. Jessiman35 ,C.Jia
63b,J.Jia
148 ,X.Jia
14,114c ,Z.Jia
114a , C. Jiang53 ,
S. Jiggins49 , J. Jimenez Pena13 ,S.Jin
114a , A. Jinaru28b , O. Jinnouchi157 , P. Johansson142 , K. A. Johns7,
J. W. Johnson139 , F.A.Jolly
49 , D. M. Jones149 , E. Jones49 , K. S. Jones8, P. Jones33 , R. W. L. Jones93 ,
T. J. Jones94 , H. L. Joos56 ,37 , R. Joshi122 , J. Jovicevic16 ,X.Ju
18a , J. J. Junggeburth105 , T. Junkermann64a ,
A. Juste Rozas13,t, M. K. Juzek88 , S. Kabana140e , A. Kaczmarska88 , M. Kado112 , H. Kagan122 ,
M. Kagan146 , A. Kahn131 , C. Kahra102 ,T.Kaji
156 , E. Kajomovitz153 , N. Kakati172 , I. Kalaitzidou55 ,
C. W. Kalderon30 , N.J.Kang
139 ,D.Kar
34g ,K.Karava
129 , M. J. Kareem159b , E. Karentzos55 ,
O. Karkout117 , S. N. Karpov39 , Z.M.Karpova
39 , V. Kartvelishvili93 , A. N. Karyukhin38 ,E.Kasimi
155 ,
J. Katzy49 , S. Kaur35 , K. Kawade143 ,M.P.Kawale
123 , C. Kawamoto89 , T. Kawamoto63a , E.F.Kay
37 ,
F. I. Kaya161 , S. Kazakos109 , V. F. Kazanin38 ,Y.Ke
148 , J. M. Keaveney34a , R. Keeler168 , G. V. Kehris62 ,
J. S. Keller35 , A. S. Kelly98, J. J. Kempster149 , P. D. Kennedy102 , O. Kepka134 , B. P. Kerridge137 ,
S. Kersten174 ,B.P.Kerševan
95 , L. Keszeghova29a , S. Ketabchi Haghighat158 , R. A. Khan132 , A. Khanov124 ,
A. G. Kharlamov38 , T. Kharlamova38 , E. E. Khoda141 , M. Kholodenko133a , T.J.Khoo
19 , G. Khoriauli169 ,
J. Khubua152b,*,Y.A.R.Khwaira
130 , B. Kibirige34g ,D.Kim
6,D.W.Kim
48a,48b ,Y.K.Kim
40 ,N.Kimura
98 ,
M. K. Kingston56 , A. Kirchhoff56 ,C.Kirfel
25 , F. Kirfel25 ,J.Kirk
137 , A. E. Kiryunin112 , C. Kitsaki10 ,
O. Kivernyk25 , M. Klassen161 , C. Klein35 , L. Klein169 , M.H.Klein
45 , S. B. Klein57 , U. Klein94 ,
P. Klimek37 , A. Klimentov30 , T. Klioutchnikova37 , P. Kluit117 , S. Kluth112 , E. Kneringer80 , T. M. Knight158 ,
A. Knue50 , D. Kobylianskii172 , S. F. Koch129 , M. Kocian146 , P. Kodyš136 , D. M. Koeck126 , P. T. Koenig25 ,
T. Koffas35 , O. Kolay51 , I. Koletsou4, T. Komarek88 , K. Köneke55 , A.X.Y.Kong
1, T. Kono121 ,
N. Konstantinidis98 , P. Kontaxakis57 , B. Konya100 , R. Kopeliansky42 , S. Koperny87a ,K.Korcyl
88 ,
K. Kordas155,d,A.Korn
98 ,S.Korn
56 , I. Korolkov13 , N. Korotkova38 , B. Kortman117 , O. Kortner112 ,
S. Kortner112 , W. H. Kostecka118 , V. V. Kostyukhin144 , A. Kotsokechagia37 ,A.Kotwal
52 , A. Koulouris37 ,
A. Kourkoumeli-Charalampidi74a,74b , C. Kourkoumelis9, E. Kourlitis112,ab , O. Kovanda126 , R. Kowalewski168 ,
W. Kozanecki138 , A. S. Kozhin38 , V. A. Kramarenko38 , G. Kramberger95 ,P.Kramer
102 , M.W.Krasny
130 ,
A. Krasznahorkay37 , A.C.Kraus
118 , J.W.Kraus
174 , J.A.Kremer
49 , T. Kresse51 , L. Kretschmann174 ,
J. Kretzschmar94 , K. Kreul19 , P. Krieger158 ,M.Krivos
136 , K. Krizka21 , K. Kroeninger50 , H. Kroha112 ,
J. Kroll134 ,J.Kroll
131 , K.S.Krowpman
109 , U. Kruchonak39 , H. Krüger25 , N. Krumnack82, M. C. Kruse52 ,
O. Kuchinskaia38 , S. Kuday3a , S. Kuehn37 , R. Kuesters55 , T. Kuhl49 , V. Kukhtin39 , Y. Kulchitsky38,a,
S. Kuleshov140d,140b , M. Kumar34g , N. Kumari49 , P. Kumari159b , A. Kupco134 , T. Kupfer50, A. Kupich38 ,
O. Kuprash55 , H. Kurashige86 , L. L. Kurchaninov159a , O. Kurdysh67 , Y. A. Kurochkin38 ,A.Kurova
38 ,
M. Kuze157 ,A.K.Kvam
105 , J. Kvita125 ,T.Kwan
106 , N. G. Kyriacou108 ,L.A.O.Laatu
104 , C. Lacasta166 ,
F. Lacava76a,76b , H. Lacker19 , D. Lacour130 ,N.N.Lad
98 , E. Ladygin39 , A. Lafarge41 , B. Laforge130 ,
T. Lagouri175 , F. Z. Lahbabi36a ,S.Lai
56 , J. E. Lambert168 , S. Lammers69 , W. Lampl7, C. Lampoudis155,d,
G. Lamprinoudis102, A. N. Lancaster118 , E. Lançon30 , U. Landgraf55 , M. P. J. Landon96 , V. S. Lang55 ,
O. K. B. Langrekken128 , A. J. Lankford162 , F. Lanni37 , K. Lantzsch25 , A. Lanza74a , J. F. Laporte138 ,
T. Lari72a , F. Lasagni Manghi24b , M. Lassnig37 , V. Latonova134 , A. Laurier153 ,S.D.Lawlor
142 ,
Z. Lawrence103 , R. Lazaridou170, M. Lazzaroni72a,72b ,B.Le
103, E. M. Le Boulicaut52 , L. T. Le Pottier18a ,
B. Leban24b,24a , A. Lebedev82 , M. LeBlanc103 , F. Ledroit-Guillon61 ,S.C.Lee
151 ,S.Lee
48a,48b ,T.F.Lee
94 ,
L. L. Leeuw34c , H. P. Lefebvre97 , M. Lefebvre168 , C. Leggett18a , G. Lehmann Miotto37 , M. Leigh57 ,
W. A. Leight105 , W. Leinonen116 ,A.Leisos
155,s, M.A.L.Leite
84c , C. E. Leitgeb19 , R. Leitner136 ,
K. J. C. Leney45 , T. Lenz25 , S. Leone75a , C. Leonidopoulos53 , A. Leopold147 ,R.Les
109 ,C.G.Lester
33 ,
M. Levchenko38 , J. Levêque4, L.J.Levinson
172 ,G.Levrini
24b,24a , M. P. Lewicki88 ,C.Lewis
141 ,
D. J. Lewis4,A.Li
5,B.Li
63b ,C.Li
63a,C-Q.Li
112 ,H.Li
63a ,H.Li
63b ,H.Li
114a ,H.Li
15 ,H.Li
63b ,
J. Li63c ,K.Li
141 ,L.Li
63c ,M.Li
14,114c ,S.Li
14,114c ,S.Li
63d,63c ,T.Li
5,X.Li
106 ,Z.Li
129 ,
Z. Li156 ,Z.Li
14,114c ,Z.Li
63a , S. Liang14,114c , Z. Liang14 , M. Liberatore138 , B. Liberti77a ,K.Lie
65c ,
J. Lieber Marin84e ,H.Lien
69 ,H.Lin
108 ,K.Lin
109 , R. E. Lindley7, J. H. Lindon2,J.Ling
62 , E. Lipeles131 ,
A. Lipniacka17 , A. Lister167 , J. D. Little69 ,B.Liu
14 ,B.X.Liu
114b ,D.Liu
63d,63c , E.H.L.Liu
21 ,
J. B. Liu63a ,J.K.K.Liu
33 ,K.Liu
63d ,K.Liu
63d,63c ,M.Liu
63a ,M.Y.Liu
63a ,P.Liu
14 ,Q.Liu
63d,141,63c ,
X. Liu63a ,X.Liu
63b ,Y.Liu
114b,114c ,Y.L.Liu
63b ,Y.W.Liu
63a ,S.L.Lloyd
96 , E. M. Lobodzinska49 ,
P. Loch7, T. Lohse19 , K. Lohwasser142 , E. Loiacono49 , M. Lokajicek134,*, J.D.Lomas
21 , J. D. Long165 ,
I. Longarini162 , R. Longo165 , I. Lopez Paz68 , A. Lopez Solis49 , N. A. Lopez-canelas7, N. Lorenzo Martinez4,
A. M. Lory111 , M. Losada119a , G. Löschcke Centeno149 , O. Loseva38 ,X.Lou
48a,48b ,X.Lou
14,114c ,
A. Lounis67 , P. A. Love93 ,G.Lu
14,114c ,M.Lu
67 ,S.Lu
131 ,Y.J.Lu
66 , H. J. Lubatti141 ,
C. Luci76a,76b , F. L. Lucio Alves114a , F. Luehring69 ,I.Luise
148 , O. Lukianchuk67 , O. Lundberg147 ,
B. Lund-Jensen147,*, N. A. Luongo6,M.S.Lutz
37 ,A.B.Lux
26 , D. Lynn30 , R. Lysak134 ,E.Lytken
100 ,
V. Lyubushkin39 , T. Lyubushkina39 , M.M.Lyukova
148 , M.Firdaus M. Soberi53 ,H.Ma
30 ,K.Ma
63a ,
L. L. Ma63b ,W.Ma
63a ,Y.Ma
124 , J. C. MacDonald102 , P. C. Machado De Abreu Farias84e , R. Madar41 ,
T. Madula98 , J. Maeda86 , T. Maeno30 , H. Maguire142 , V. Maiboroda138 ,A.Maio
133a,133b,133d ,
K. Maj87a , O. Majersky49 ,S.Majewski
126 , N. Makovec67 , V. Maksimovic16 , B. Malaescu130 ,
Pa. Malecki88 , V.P.Maleev
38 , F. Malek61,o,M.Mali
95 , D. Malito97 , U. Mallik81 , S. Maltezos10,
S. Malyukov39, J. Mamuzic13 , G. Mancini54 , M. N. Mancini27 , G. Manco74a,74b , J. P. Mandalia96 ,
S. S. Mandarry149 , I. Mandić95 , L. Manhaes de Andrade Filho84a , I. M. Maniatis172 , J. Manjarres Ramos91 ,
D. C. Mankad172 , A. Mann111 , S. Manzoni37 ,L.Mao
63c , X. Mapekula34c , A. Marantis155,s, G. Marchiori5,
M. Marcisovsky134 , C. Marcon72a , M. Marinescu21 ,S.Marium
49 , M. Marjanovic123 , A. Markhoos55 ,
M. Markovitch67 , E. J. Marshall93 , Z. Marshall18a , S. Marti-Garcia166 , J. Martin98 , T.A.Martin
137 ,
V. J. Martin53 , B. Martin dit Latour17 , L. Martinelli76a,76b , M. Martinez13,t, P. Martinez Agullo166 ,
V. I. Martinez Outschoorn105 , P. Martinez Suarez13 , S. Martin-Haugh137 , G. Martinovicova136 , V. S. Martoiu28b ,
A. C. Martyniuk98 , A. Marzin37 , D. Mascione79a,79b , L. Masetti102 ,J.Masik
103 , A. L. Maslennikov38 ,
P. Massarotti73a,73b , P. Mastrandrea75a ,75b , A. Mastroberardino44b ,44a , T. Masubuchi127 , T. Mathisen164 ,
J. Matousek136 , J. Maurer28b , A.J.Maury
67 , B. Maček95 , D. A. Maximov38 , A.E.May
103 ,
R. Mazini151 , I. Maznas118 , M. Mazza109 , S. M. Mazza139 , E. Mazzeo72a,72b ,C.McGinn
30 ,
J. P. Mc Gowan168 , S.P.McKee
108 , C. C. McCracken167 , E. F. McDonald107 , A. E. McDougall117 ,
J. A. Mcfayden149 , R. P. McGovern131 , R. P. Mckenzie34g , T. C. Mclachlan49 , D. J. Mclaughlin98 ,
S. J. McMahon137 , C. M. Mcpartland94 , R. A. McPherson168,x, S. Mehlhase111 , A. Mehta94 , D. Melini166 ,
B. R. Mellado Garcia34g , A.H.Melo
56 , F. Meloni49 , A. M. Mendes Jacques Da Costa103 , H. Y. Meng158 ,
L. Meng93 , S. Menke112 , M. Mentink37 , E. Meoni44b,44a , G. Mercado118 , S. Merianos155 ,
G. Merino Arevaloe, C. Merlassino70a ,70c , L. Merola73a,73b , C. Meroni72a,72b , J. Metcalfe6,A.S.Mete
6,
E. Meuser102 , C. Meyer69 , J-P. Meyer138 , R. P. Middleton137 , L. Mijović53 , G. Mikenberg172 ,
M. Mikestikova134 , M. Mikuž95 , H. Mildner102 , A. Milic37 , D. W. Miller40 , E. H. Miller146 ,
L. S. Miller35 , A. Milov172 , D. A. Milstead48a,48b,T.Min
114a, A. A. Minaenko38 , I. A. Minashvili152b ,
L. Mince60 , A. I. Mincer120 , B. Mindur87a , M. Mineev39 ,Y.Mino
89 ,L.M.Mir
13 , M. Miralles Lopez60 ,
M. Mironova18a , M.C.Missio
116 , A. Mitra170 , V. A. Mitsou166 , Y. Mitsumori113 ,O.Miu
158 ,
P. S. Miyagawa96 , T. Mkrtchyan64a , M. Mlinarevic98 , T. Mlinarevic98 , M. Mlynarikova37 , S. Mobius20 ,
P. Mogg111 , M. H. Mohamed Farook115 , A.F.Mohammed
14,114c , S. Mohapatra42 , G. Mokgatitswane34g ,
L. Moleri172 , B. Mondal144 , S. Mondal135 , K. Mönig49 , E. Monnier104 , L. Monsonis Romero166,
J. Montejo Berlingen13 , A. Montella48a,48b , M. Montella122 , F. Montereali78a,78b , F. Monticelli92 ,
S. Monzani70a,70c , A. Morancho Tarda43 , N. Morange67 , A. L. Moreira De Carvalho49 , M. Moreno Llácer166 ,
C. Moreno Martinez57 , P. Morettini58b , S. Morgenstern37 ,M.Morii
62 , M. Morinaga156 , F. Morodei76a,76b ,
L. Morvaj37 , P. Moschovakos37 , B. Moser129 , M. Mosidze152b , T. Moskalets45 , P. Moskvitina116 ,
J. Moss32,l, P. Moszkowicz87a , A. Moussa36d , E.J.W.Moyse
105 , O. Mtintsilana34g , S. Muanza104 ,
J. Mueller132 , D. Muenstermann93 , R. Müller37 , G. A. Mullier164 , A. J. Mullin33, J. J. Mullin131, D. P. Mungo158 ,
D. Munoz Perez166 , F. J. Munoz Sanchez103 ,M.Murin
103 ,W.J.Murray
170,137 , M. Muškinja95 ,C.Mwewa
30 ,
A. G. Myagkov38,a, A.J.Myers
8, G. Myers108 , M. Myska135 , B. P. Nachman18a , O. Nackenhorst50 ,
K. Nagai129 , K. Nagano85 , J.L.Nagle
30,af , E. Nagy104 , A.M.Nairz
37 , Y. Nakahama85 , K. Nakamura85 ,
K. Nakkalil5, H. Nanjo127 , E. A. Narayanan115 , I. Naryshkin38 , L. Nasella72a,72b , M. Naseri35 , S. Nasri119b ,
C. Nass25 ,G.Navarro
23a , J. Navarro-Gonzalez166 , R. Nayak154 , A. Nayaz19 , P. Y. Nechaeva38 ,
S. Nechaeva24b,24a , F. Nechansky49 , L. Nedic129 ,T.J.Neep
21 ,A.Negri
74a,74b ,M.Negrini
24b , C. Nellist117 ,
C. Nelson106 ,K.Nelson
108 , S. Nemecek134 , M. Nessi37,i, M. S. Neubauer165 , F. Neuhaus102 , J. Neundorf49 ,
P. R. Newman21 ,C.W.Ng
132 , Y.W.Y.Ng
49 ,B.Ngair
119a , H. D. N. Nguyen110 ,R.B.Nickerson
129 ,
R. Nicolaidou138 , J. Nielsen139 , M. Niemeyer56 , J. Niermann56 , N. Nikiforou37 , V. Nikolaenko38,a,
I. Nikolic-Audit130 , K. Nikolopoulos21 , P. Nilsson30 , I. Ninca49 , G. Ninio154 ,A.Nisati
76a ,N.Nishu
2,
R. Nisius112 , J-E. Nitschke51 , E. K. Nkadimeng34g , T. Nobe156 , T. Nommensen150 , M. B. Norfolk142 ,
B. J. Norman35 , M. Noury36a ,J.Novak
95 ,T.Novak
95 , L. Novotny135 , R. Novotny115 , L. Nozka125 ,
K. Ntekas162 , N. M. J. Nunes De Moura Junior84b , J. Ocariz130 , A. Ochi86 , I. Ochoa133a , S. Oerdek49,u,
J. T. Offermann40 , A. Ogrodnik136 ,A.Oh
103 , C.C.Ohm
147 ,H.Oide
85 ,R.Oishi
156 , M. L. Ojeda49 ,
Y. Okumura156 , L. F. Oleiro Seabra133a , I. Oleksiyuk57 , S. A. Olivares Pino140d , G. Oliveira Correa13 ,
D. Oliveira Damazio30 ,J.L.Oliver
162 , Ö. O. Öncel55 , A. P. O’Neill20 , A. Onofre133a,133e , P.U.E.Onyisi
11 ,
M. J. Oreglia40 , G. E. Orellana92 , D. Orestano78a,78b , N. Orlando13 ,R.S.Orr
158 , L. M. Osojnak131 ,
R. Ospanov63a , G. Otero y Garzon31 , H. Otono90 ,P.S.Ott
64a , G. J. Ottino18a , M. Ouchrif36d ,
F. Ould-Saada128 , T. Ovsiannikova141 , M. Owen60 , R. E. Owen137 , V. E. Ozcan22a , F. Ozturk88 , N. Ozturk8,
S. Ozturk83 , H. A. Pacey129 , A. Pacheco Pages13 , C. Padilla Aranda13 , G. Padovano76a ,76b , S. Pagan Griso18a ,
G. Palacino69 , A. Palazzo71a,71b , J. Pampel25 ,J.Pan
175 ,T.Pan
65a , D. K. Panchal11 , C. E. Pandini117 ,
J. G. Panduro Vazquez137 , H. D. Pandya1, H. Pang15 , P. Pani49 , G. Panizzo70a,70c , L. Panwar130 ,
L. Paolozzi57 , S. Parajuli165 , A. Paramonov6, C. Paraskevopoulos54 , D. Paredes Hernandez65b , A. Pareti74a,74b ,
K. R. Park42 ,T.H.Park
158 ,M.A.Parker
33 , F. Parodi58b,58a , E.W.Parrish
118 , V. A. Parrish53 ,
J. A. Parsons42 , U. Parzefall55 , B. Pascual Dias110 , L. Pascual Dominguez101 , E. Pasqualucci76a ,
S. Passaggio58b ,F.Pastore
97 , P. Patel88 , U.M.Patel
52 , J.R.Pater
103 , T. Pauly37 , C. I. Pazos161 ,
J. Pearkes146 , M. Pedersen128 , R. Pedro133a , S. V. Peleganchuk38 , O. Penc37 , E. A. Pender53 , S. Peng15 ,
G. D. Penn175 , K. E. Penski111 , M. Penzin38 , B. S. Peralva84d , A. P. Pereira Peixoto141 , L. Pereira Sanchez146 ,
D. V. Perepelitsa30,af , G. Perera105 , E. Perez Codina159a , M. Perganti10 , H. Pernegger37 , S. Perrella76a ,76b ,
O. Perrin41 , K. Peters49 , R. F. Y. Peters103 , B. A. Petersen37 , T. C. Petersen43 , E. Petit104 , V. Petousis135 ,
C. Petridou155,d, T. Petru136 , A. Petrukhin144 , M. Pettee18a , A. Petukhov38 , K. Petukhova37 , R. Pezoa140f ,
L. Pezzotti37 , G. Pezzullo175 , T. M. Pham173 , T. Pham107 , P. W. Phillips137 , G. Piacquadio148 ,
E. Pianori18a , F. Piazza126 , R. Piegaia31 , D. Pietreanu28b , A. D. Pilkington103 , M. Pinamonti70a,70c ,
J. L. Pinfold2, B. C. Pinheiro Pereira133a , A. E. Pinto Pinoargote138,138 , L. Pintucci70a,70c , K. M. Piper149 ,
A. Pirttikoski57 ,D.A.Pizzi
35 , L. Pizzimento65b , A. Pizzini117 , M.-A. Pleier30 ,V.Pleskot
136 , E. Plotnikova39,
G. Poddar96 , R. Poettgen100 , L. Poggioli130 , I. Pokharel56 , S. Polacek136 , G. Polesello74a , A. Poley145,159a ,
A. Polini24b , C. S. Pollard170 , Z. B. Pollock122 , E. Pompa Pacchi76a,76b , N. I. Pond98 , D. Ponomarenko116 ,
L. Pontecorvo37 , S. Popa28a , G. A. Popeneciu28d , A. Poreba37 , D. M. Portillo Quintero159a , S. Pospisil135 ,
M. A. Postill142 , P. Postolache28c , K. Potamianos170 , P. A. Potepa87a , I.N.Potrap
39 , C. J. Potter33 ,
H. Potti150 , J. Poveda166 , M. E. Pozo Astigarraga37 , A. Prades Ibanez77a,77b ,J.Pretel
168 ,D.Price
103 ,
M. Primavera71a , L. Primomo70a,70c , M. A. Principe Martin101 ,R.Privara
125 , T. Procter60 ,M.L.Proftt
141 ,
N. Proklova131 , K. Prokofiev65c ,G.Proto
112 , J. Proudfoot6, M. Przybycien87a , W. W. Przygoda87b ,
A. Psallidas47 , J. E. Puddefoot142 , D. Pudzha55 , D. Pyatiizbyantseva38 ,J.Qian
108 , D. Qichen103 ,Y.Qin
13 ,
T. Qiu53 , A. Quadt56 , M. Queitsch-Maitland103 , G. Quetant57 , R. P. Quinn167 , G. Rabanal Bolanos62 ,
D. Rafanoharana55 , F. Raffaeli77a,77b , F. Ragusa72a,72b , J. L. Rainbolt40 , J. A. Raine57 , S. Rajagopalan30 ,
E. Ramakoti38 , L. Rambelli58b ,58a , I. A. Ramirez-Berend35 ,K.Ran
49,114c , D. S. Rankin131 , N. P. Rapheeha34g ,
H. Rasheed28b , V. Raskina130 , D. F. Rassloff64a , A. Rastogi18a ,S.Rave
102 ,S.Ravera
58b,58a ,
B. Ravina56 , I. Ravinovich172 , M. Raymond37 , A. L. Read128 , N. P. Readioff142 , D. M. Rebuzzi74a,74b ,
G. Redlinger30 , A. S. Reed112 ,K.Reeves
27 , J. A. Reidelsturz174 , D. Reikher126 ,A.Rej
50 , C. Rembser37 ,
M. Renda28b , F. Renner49 , A. G. Rennie162 , A. L. Rescia49 , S. Resconi72a , M. Ressegotti58b,58a ,
S. Rettie37 , J. G. Reyes Rivera109 , E. Reynolds18a , O. L. Rezanova38 , P. Reznicek136 , H. Riani36d ,
N. Ribaric93 , E. Ricci79a,79b , R. Richter112 , S. Richter48a ,48b , E. Richter-Was87b , M. Ridel130 ,
S. Ridouani36d , P. Rieck120 , P. Riedler37 , E.M.Riefel
48a,48b , J. O. Rieger117 , M. Rijssenbeek148 ,
M. Rimoldi37 , L. Rinaldi24b,24a , P. Rincke56,164 ,T.T.Rinn
30 , M. P. Rinnagel111 , G. Ripellino164 ,
I. Riu13 , J.C.RiveraVergara
168 , F. Rizatdinova124 , E. Rizvi96 , B. R. Roberts18a , S. S. Roberts139 ,
S. H. Robertson106,x, D. Robinson33 , M. Robles Manzano102 , A. Robson60 , A. Rocchi77a ,77b , C. Roda75a,75b ,
S. Rodriguez Bosca37 , Y. Rodriguez Garcia23a , A. Rodriguez Rodriguez55 , A. M. Rodríguez Vera118 ,S.Roe
37,
J. T. Roemer37 , A. R. Roepe-Gier139 , O. Røhne128 , R. A. Rojas105 , C. P. A. Roland130 , J. Roloff30 ,
A. Romaniouk38 , E. Romano74a,74b , M. Romano24b , A. C. Romero Hernandez165 , N. Rompotis94 ,
L. Roos130 , S. Rosati76a , B.J.Rosser
40 , E. Rossi129 , E. Rossi73a ,73b , L. P. Rossi62 , L. Rossini55 ,
R. Rosten122 , M. Rotaru28b , B. Rottler55 , C. Rougier91 , D. Rousseau67 , D. Rousso49 ,A.Roy
165 ,
S. Roy-Garand158 , A. Rozanov104 , Z. M. A. Rozario60 , Y. Rozen153 , A. Rubio Jimenez166 , A. J. Ruby94 ,
V. H. Ruelas Rivera19 , T. A. Ruggeri1, A. Ruggiero129 , A. Ruiz-Martinez166 , A. Rummler37 ,Z.Rurikova
55 ,
N. A. Rusakovich39 , H. L. Russell168 , G. Russo76a ,76b , J. P. Rutherfoord7, S. Rutherford Colmenares33 ,
M. Rybar136 ,E.B.Rye
128 , A. Ryzhov45 , J. A. Sabater Iglesias57 , H.F-W. Sadrozinski139 , F. Safai Tehrani76a ,
B. Safarzadeh Samani137 , S. Saha1, M. Sahinsoy83 , A. Saibel166 , M. Saimpert138 , M. Saito156 , T. Saito156 ,
A. Sala72a,72b , D. Salamani37 , A. Salnikov146 , J. Salt166 , A. Salvador Salas154 , D. Salvatore44b,44a ,
F. Salvatore149 , A. Salzburger37 , D. Sammel55 , E. Sampson93 , D. Sampsonidis155,d, D. Sampsonidou126 ,
J. Sánchez166 , V. Sanchez Sebastian166 , H. Sandaker128 , C. O. Sander49 , J. A. Sandesara105 , M. Sandhoff174 ,
C. Sandoval23b , L. Sanfilippo64a , D. P. C. Sankey137 , T. Sano89 , A. Sansoni54 , L. Santi37 ,76b , C. Santoni41 ,
H. Santos133a,133b , A. Santra172 , E. Sanzani24b,24a , K. A. Saoucha163 , J. G. Saraiva133a,133d , J. Sardain7,
O. Sasaki85 , K. Sato160 , C. Sauer64b, E. Sauvan4,P.Savard
158,ad , R. Sawada156 , C. Sawyer137 ,
L. Sawyer99 , C. Sbarra24b , A. Sbrizzi24b,24a , T. Scanlon98 , J. Schaarschmidt141 , U. Schäfer102 ,
A. C. Schaffer67,45 , D. Schaile111 , R. D. Schamberger148 , C. Scharf19 , M. M. Schefer20 , V. A. Schegelsky38 ,
D. Scheirich136 , M. Schernau162 , C. Scheulen56 , C. Schiavi58b,58a , M. Schioppa44b,44a , B. Schlag146,n,
K. E. Schleicher55 , S. Schlenker37 , J. Schmeing174 , M. A. Schmidt174 , K. Schmieden102 , C. Schmitt102 ,
N. Schmitt102 , S. Schmitt49 , L. Schoeffel138 , A. Schoening64b , P. G. Scholer35 , E. Schopf129 ,
M. Schott25 , J. Schovancova37 , S. Schramm57 , T. Schroer57 , H-C. Schultz-Coulon64a , M. Schumacher55 ,
B. A. Schumm139 , Ph. Schune138 , A. J. Schuy141 , H. R. Schwartz139 , A. Schwartzman146 , T. A. Schwarz108 ,
Ph. Schwemling138 , R. Schwienhorst109 , F. G. Sciacca20 , A. Sciandra30 , G. Sciolla27 , F. Scuri75a ,
C. D. Sebastiani94 , K. Sedlaczek118 , S. C. Seidel115 , A. Seiden139 , B. D. Seidlitz42 , C. Seitz49 ,
J. M. Seixas84b , G. Sekhniaidze73a , L. Selem61 , N. Semprini-Cesari24b,24a , D. Sengupta57 , V. Senthilkumar166 ,
L. Serin67 , M. Sessa77a,77b ,H.Severini
123 ,F.Sforza
58b,58a ,A.Sfyrla
57 , Q. Sha14 , E. Shabalina56 ,
A. H. Shah33 , R. Shaheen147 , J. D. Shahinian131 , D. Shaked Renous172 , L. Y. Shan14 , M. Shapiro18a ,
A. Sharma37 , A. S. Sharma167 , P. Sharma81 , P. B. Shatalov38 , K. Shaw149 , S. M. Shaw103 , Q. Shen63c ,
D. J. Sheppard145 , P. Sherwood98 , L. Shi98 , X. Shi14 , S. Shimizu85 , C. O. Shimmin175 , J. D. Shinner97 ,
I. P. J. Shipsey129 , S. Shirabe90 , M. Shiyakova39,v, M. J. Shochet40 , D. R. Shope128 , B. Shrestha123 ,
S. Shrestha122,ag , M. J. Shroff168 , P. Sicho134 , A. M. Sickles165 , E. Sideras Haddad34g , A. C. Sidley117 ,
A. Sidoti24b , F. Siegert51 , Dj. Sijacki16 , F. Sili92 , J.M.Silva
53 , I. Silva Ferreira84b , M. V. Silva Oliveira30 ,
S. B. Silverstein48a , S. Simion67, R. Simoniello37 , E. L. Simpson103 , H. Simpson149 , L. R. Simpson108 ,
N. D. Simpson100,S.Simsek
83 , S. Sindhu56 , P. Sinervo158 , S. Singh158 , S. Sinha49 , S. Sinha103 ,
M. Sioli24b,24a ,I.Siral
37 , E. Sitnikova49 , J. Sjölin48a ,48b , A. Skaf56 ,E.Skorda
21 , P. Skubic123 ,
M. Slawinska88 , V. Smakhtin172, B.H.Smart
137 , S.Yu. Smirnov38 ,Y.Smirnov
38 ,L.N.Smirnova
38,a,
O. Smirnova100 , A.C.Smith
42 , D.R.Smith
162, E. A. Smith40 , H. A. Smith129 , J. L. Smith103 ,
R. Smith146, M. Smizanska93 ,K.Smolek
135 , A. A. Snesarev38 , S. R. Snider158 , H. L. Snoek117 ,
S. Snyder30 , R. Sobie168,x, A. Soffer154 , C. A. Solans Sanchez37 , E.Yu. Soldatov38 , U. Soldevila166 ,
A. A. Solodkov38 , S. Solomon27 , A. Soloshenko39 , K. Solovieva55 , O. V. Solovyanov41 , P. Sommer51 ,
A. Sonay13 , W. Y. Song159b , A. Sopczak135 , A. L. Sopio98 , F. Sopkova29b , J. D. Sorenson115 ,
I. R. Sotarriva Alvarez157 , V. Sothilingam64a, O. J. Soto Sandoval140c,140b , S. Sottocornola69 , R. Soualah163 ,
Z. Soumaimi36e , D. South49 , N. Soybelman172 , S. Spagnolo71a,71b , M. Spalla112 , D. Sperlich55 ,
G. Spigo37 , B. Spisso73a,73b , D. P. Spiteri60 , M. Spousta136 , E. J. Staats35 ,R.Stamen
64a , A. Stampekis21 ,
M. Standke25 , E. Stanecka88 , W. Stanek-Maslouska49 , M. V. Stange51 , B. Stanislaus18a , M. M. Stanitzki49 ,
B. Stapf49 , E. A. Starchenko38 ,G.H.Stark
139 ,J.Stark
91 , P. Staroba134 , P. Starovoitov64a ,S.Stärz
106 ,
R. Staszewski88 , G. Stavropoulos47 , P. Steinberg30 , B. Stelzer145,159a ,H.J.Stelzer
132 , O. Stelzer-Chilton159a ,
H. Stenzel59 , T. J. Stevenson149 , G.A.Stewart
37 ,J.R.Stewart
124 , M. C. Stockton37 , G. Stoicea28b ,
M. Stolarski133a , S. Stonjek112 , A. Straessner51 , J. Strandberg147 , S. Strandberg48a,48b , M. Stratmann174 ,
M. Strauss123 , T. Strebler104 , P. Strizenec29b , R. Ströhmer169 , D.M.Strom
126 , R. Stroynowski45 ,
A. Strubig48a,48b , S. A. Stucci30 , B. Stugu17 , J. Stupak123 , N.A.Styles
49 ,D.Su
146 ,S.Su
63a ,
W. Su63d ,X.Su
63a , D. Suchy29a , K. Sugizaki156 , V. V. Sulin38 , M.J.Sullivan
94 , D.M.S.Sultan
129 ,
L. Sultanaliyeva38 , S. Sultansoy3b , T. Sumida89 , S. Sun173 , O. Sunneborn Gudnadottir164 , N. Sur104 ,
M. R. Sutton149 , H. Suzuki160 ,M.Svatos
134 , M. Swiatlowski159a ,T.Swirski
169 , I. Sykora29a , M. Sykora136 ,
T. Sykora136 ,D.Ta
102 , K. Tackmann49,u,A.Taffard
162 , R. Tafirout159a , J. S. Tafoya Vargas67 , Y. Takubo85 ,
M. Talby104 , A. A. Talyshev38 ,K.C.Tam
65b ,N.M.Tamir
154, A. Tanaka156 , J. Tanaka156 , R. Tanaka67 ,
M. Tanasini148 ,Z.Tao
167 , S. Tapia Araya140f , S. Tapprogge102 , A. Tarek Abouelfadl Mohamed109 ,
S. Tarem153 ,K.Tariq
14 , G. Tarna28b , G. F. Tartarelli72a ,M.J.Tartarin
91 ,P.Tas
136 ,M.Tasevsky
134 ,
E. Tassi44b ,44a ,A.C.Tate
165 , G. Tateno156 , Y. Tayalati36e,w, G. N. Taylor107 , W. Taylor159b ,
R. Teixeira De Lima146 , P. Teixeira-Dias97 , J.J.Teoh
158 , K. Terashi156 ,J.Terron
101 , S. Terzo13 ,
M. Testa54 , R. J. Teuscher158,x, A. Thaler80 , O. Theiner57 , N. Themistokleous53 , T. Theveneaux-Pelzer104 ,
O. Thielmann174 , D. W. Thomas97 , J. P. Thomas21 , E. A. Thompson18a , P. D. Thompson21 , E. Thomson131 ,
R. E. Thornberry45 ,C.Tian
63a ,Y.Tian
56 , V. Tikhomirov38,a, Yu.A. Tikhonov38 , S. Timoshenko38 ,
D. Timoshyn136 , E.X.L.Ting
1, P. Tipton175 , A. Tishelman-Charny30 , S.H.Tlou
34g , K. Todome157 ,
S. Todorova-Nova136 , S. Todt51, L. Toffolin70a,70c ,M.Togawa
85 ,J.Tojo
90 , S. Tokár29a , K. Tokushuku85 ,
O. Toldaiev69 , M. Tomoto85,113 , L. Tompkins146 ,n, K. W. Topolnicki87b , E. Torrence126 , H. Torres91 ,
E. Torró Pastor166 , M. Toscani31 , C. Tosciri40 ,M.Tost
11 , D. R. Tovey142 , I. S. Trandafir28b , T. Trefzger169 ,
A. Tricoli30 , I. M. Trigger159a , S. Trincaz-Duvoid130 , D. A. Trischuk27 , B. Trocmé61 , A. Tropina39,
L. Truong34c , M. Trzebinski88 , A. Trzupek88 ,F.Tsai
148 ,M.Tsai
108 ,A.Tsiamis
155,d, P. V. Tsiareshka38 ,
S. Tsigaridas159a , A. Tsirigotis155 ,s, V. Tsiskaridze158 , E. G. Tskhadadze152a , M. Tsopoulou155 ,
Y. Tsujikawa89 ,I.I.Tsukerman
38 , V. Tsulaia18a , S. Tsuno85 ,K.Tsuri
121 , D. Tsybychev148 ,Y.Tu
65b ,
A. Tudorache28b , V. Tudorache28b , A. N. Tuna62 , S. Turchikhin58b,58a , I. Turk Cakir3a ,R.Turra
72a ,
T. Turtuvshin39 ,P.M.Tuts
42 , S. Tzamarias155 ,d,E.Tzovara
102 ,F.Ukegawa
160 , P. A. Ulloa Poblete140c,140b ,
E. N. Umaka30 , G. Unal37 , A. Undrus30 , G. Unel162 , J. Urban29b , P. Urrejola140a ,G. Usai
8, R. Ushioda157 ,
M. Usman110 , Z. Uysal83 , V. Vacek135 , B. Vachon106 , T. Vafeiadis37 , A. Vaitkus98 , C. Valderanis111 ,
E. Valdes Santurio48a,48b , M. Valente159a , S. Valentinetti24b,24a , A. Valero166 , E. Valiente Moreno166 ,
A. Vallier91 , J.A.VallsFerrer
166 , D. R. Van Arneman117 , T. R. Van Daalen141 , A. Van Der Graaf50 ,
P. Van Gemmeren6, M. Van Rijnbach37 , S. Van Stroud98 , I. Van Vulpen117 , P. Vana136 , M. Vanadia77a,77b ,
W. Vandelli37 , E. R. Vandewall124 , D. Vannicola154 , L. Vannoli54 ,R.Vari
76a , E. W. Varnes7, C. Varni18b ,
T. Varol151 , D. Varouchas67 , L. Varriale166 ,K.E.Varvell
150 , M. E. Vasile28b , L. Vaslin85, G. A. Vasquez168 ,
A. Vasyukov39 , L. M. Vaughan124 , R. Vavricka102, T. Vazquez Schroeder37 , J. Veatch32 , V. Vecchio103 ,
M. J. Veen105 , I. Veliscek30 , L. M. Veloce158 , F. Veloso133a,133c , S. Veneziano76a , A. Ventura71a,71b ,
S. Ventura Gonzalez138 , A. Verbytskyi112 , M. Verducci75a,75b , C. Vergis96 , M. Verissimo De Araujo84b ,
W. Verkerke117 , J. C. Vermeulen117 , C. Vernieri146 , M. Vessella105 , M. C. Vetterli145,ad , A. Vgenopoulos102 ,
N. Viaux Maira140f ,T.Vickey
142 , O.E.VickeyBoeriu
142 , G. H. A. Viehhauser129 , L. Vigani64b ,
M. Vigl112 , M. Villa24b,24a , M. Villaplana Perez166 , E. M. Villhauer53, E. Vilucchi54 , M. G. Vincter35 ,
A. Visibile117, C. Vittori37 , I. Vivarelli24b ,24a , E. Voevodina112 , F. Vogel111 , J.C.Voigt
51 , P. Vokac135 ,
Yu. Volkotrub87b , J. Von Ahnen49 , E. Von Toerne25 , B. Vormwald37 , V. Vorobel136 , K. Vorobev38 ,M. Vos
166 ,
K. Voss144 , M. Vozak117 , L. Vozdecky123 , N. Vranjes16 , M. Vranjes Milosavljevic16 , M. Vreeswijk117 ,
N. K. Vu63d,63c , R. Vuillermet37 , O. Vujinovic102 , I. Vukotic40 , S. Wada160 , C. Wagner105, J. M. Wagner18a ,
W. Wagner174 , S. Wahdan174 , H. Wahlberg92 , J. Walder137 ,R.Walker
111 , W. Walkowiak144 ,A.Wall
131 ,
E. J. Wallin100 , T. Wamorkar6, A. Z. Wang139 , C. Wang102 , C. Wang11 , H. Wang18a , J. Wang65c ,
P. Wang98 , R. Wang62 , R. Wang6, S. M. Wang151 , S. Wang63b , S. Wang14 , T. Wang63a , W.T.Wang
81 ,
W. Wang14 , X. Wang114a , X. Wang165 , X. Wang63c , Y. Wang63d , Y. Wang114a , Y. Wang63a , Z. Wang108 ,
Z. Wang63d ,52,63c , Z. Wang108 , A. Warburton106 ,R.J.Ward
21 , N. Warrack60 , S. Waterhouse97 ,
A. T. Watson21 ,H.Watson
60 ,M.F.Watson
21 , E. Watton60,137 , G. Watts141 , B. M. Waugh98 ,J.M.Webb
55 ,
C. Weber30 , H. A. Weber19 , M. S. Weber20 , S. M. Weber64a ,C.Wei
63a ,Y.Wei
55 , A. R. Weidberg129 ,
E. J. Weik120 , J. Weingarten50 ,C.Weiser
55 , C.J.Wells
49 , T. Wenaus30 , B. Wendland50 , T. Wengler37 ,
N. S. Wenke112 ,N.Wermes
25 , M. Wessels64a , A. M. Wharton93 , A. S. White62 , A. White8, M. J. White1,
D. Whiteson162 , L. Wickremasinghe127 , W. Wiedenmann173 , M. Wielers137 , C. Wiglesworth43 , D. J. Wilbern123,
H. G. Wilkens37 , J. J. H. Wilkinson33 , D. M. Williams42 , H. H. Williams131, S. Williams33 , S. Willocq105 ,
B. J. Wilson103 , P. J. Windischhofer40 ,F.I.Winkel
31 , F. Winklmeier126 , B. T. Winter55 ,J.K.Winter
103 ,
M. Wittgen146, M. Wobisch99 , T. Wojtkowski61,Z.Wolffs
117 , J. Wollrath162,M.W.Wolter
88 , H. Wolters133a,133c ,
M. C. Wong139 , E. L. Woodward42 ,S.D.Worm
49 , B.K.Wosiek
88 , K. W. Woźniak88 , S. Wozniewski56 ,
K. Wraight60 ,C.Wu
21 ,M.Wu
114b ,M.Wu
116 ,S.L.Wu
173 ,X.Wu
57 ,Y.Wu
63a ,Z.Wu
4,
J. Wuerzinger112,ab , T.R.Wyatt
103 , B. M. Wynne53 , S. Xella43 ,L.Xia
114a ,M.Xia
15 ,M.Xie
63a ,
S. Xin14,114c , A. Xiong126 , J. Xiong18a ,D.Xu
14 ,H.Xu
63a ,L.Xu
63a ,R.Xu
131 ,T.Xu
108 ,Y.Xu
15 ,
Z. Xu53 ,Z.Xu
114a, B. Yabsley150 , S. Yacoob34a , Y. Yamaguchi85 , E. Yamashita156 , H. Yamauchi160 ,
T. Yamazaki18a , Y. Yamazaki86 ,J.Yan
63c,S.Yan
60 ,Z.Yan
105 ,H.J.Yang
63c,63d , H. T. Yang63a , S. Yang63a ,
T. Yang65c , X. Yang37 , X. Yang14 , Y. Yang45 , Y. Yang63a, Z. Yang63a ,W-M.Yao
18a ,H.Ye
114a ,
H. Ye56 ,J.Ye
14 ,S.Ye
30 ,X.Ye
63a ,Y.Yeh
98 , I. Yeletskikh39 ,B.K.Yeo
18b , M.R.Yexley
98 ,
T. P. Yildirim129 ,P.Yin
42 , K. Yorita171 , S. Younas28b , C. J. S. Young37 , C. Young146 ,C.Yu
14,114c ,
Y. Yu 63a , J. Yuan14,114c , M. Yuan108 , R. Yuan63d,63c ,L.Yue
98 , M. Zaazoua63a , B. Zabinski88 ,E.Zaid
53,
Z. K. Zak88 , T. Zakareishvili166 , S. Zambito57 , J. A. Zamora Saa140d ,140b , J. Zang156 , D. Zanzi55 ,
O. Zaplatilek135 , C. Zeitnitz174 , H. Zeng14 , J.C.Zeng
165 , D. T. Zenger Jr27 , O. Zenin38 , T. Ženiš29a ,
S. Zenz96 , S. Zerradi36a ,D.Zerwas
67 , M. Zhai14,114c , D. F. Zhang142 , J. Zhang63b , J. Zhang6,
K. Zhang14,114c , L. Zhang63a , L. Zhang114a , P. Zhang14,114c , R. Zhang173 , S. Zhang108 , S. Zhang91 ,
T. Zhang156 , X. Zhang63c , X. Zhang63b , Y. Zhang63c , Y. Zhang98 , Y. Zhang114a , Z. Zhang18a , Z. Zhang63b ,
Z. Zhang67 , H. Zhao141 , T. Zhao63b , Y. Zhao139 , Z. Zhao63a , Z. Zhao63a , A. Zhemchugov39 , J. Zheng114a ,
K. Zheng165 , X. Zheng63a , Z. Zheng146 , D. Zhong165 , B. Zhou108 , H. Zhou7, N. Zhou63c , Y. Zhou15,
Y. Zhou114a , Y. Zhou7,C.G.Zhu
63b ,J.Zhu
108 ,X.Zhu
63d,Y.Zhu
63c ,Y.Zhu
63a , X. Zhuang14 , K. Zhukov69 ,
N. I. Zimine39 , J. Zinsser64b , M. Ziolkowski144 , L. Živković16 , A. Zoccoli24b,24a , K. Zoch62 , T. G. Zorbas142 ,
O. Zormpa47 ,W.Zou
42 , L. Zwalinski37
1Department of Physics, University of Adelaide, Adelaide, Australia
2Department of Physics, University of Alberta, Edmonton, AB, Canada
3(a)Department of Physics, Ankara University, Ankara, Türkiye; (b)Division of Physics, TOBB University of Economics
and Technology, Ankara, Türkiye
4LAPP, Université Savoie Mont Blanc, CNRS/IN2P3, Annecy, France
5APC, Université Paris Cité, CNRS/IN2P3, Paris, France
6High Energy Physics Division, Argonne National Laboratory, Argonne, IL, USA
7Department of Physics, University of Arizona, Tucson, AZ, USA
8Department of Physics, University of Texas at Arlington, Arlington, TX, USA
9Physics Department, National and Kapodistrian University of Athens, Athens, Greece
10 Physics Department, National Technical University of Athens, Zografou, Greece
11 Department of Physics, University of Texas at Austin, Austin, TX, USA
12 Institute of Physics, Azerbaijan Academy of Sciences, Baku, Azerbaijan
13 Institut de Física d’Altes Energies (IFAE), Barcelona Institute of Science and Technology, Barcelona, Spain
14 Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China
15 Physics Department, Tsinghua University, Beijing, China
16 Institute of Physics, University of Belgrade, Belgrade, Serbia
17 Department for Physics and Technology, University of Bergen, Bergen, Norway
18 (a)Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA; (b)University of California, Berkeley,
CA, USA
19 Institut für Physik, Humboldt Universität zu Berlin, Berlin, Germany
20 Albert Einstein Center for Fundamental Physics and Laboratory for High Energy Physics, University of Bern, Bern,
Switzerland
21 School of Physics and Astronomy, University of Birmingham, Birmingham, UK
22 (a)Department of Physics, Bogazici University, Istanbul, Türkiye; (b)Department of Physics Engineering, Gaziantep
University, Gaziantep, Türkiye; (c)Department of Physics, Istanbul University, Istanbul, Türkiye
23 (a)Facultad de Ciencias y Centro de Investigaciónes, Universidad Antonio Nariño, Bogotá, Colombia; (b)Departamento
de Física, Universidad Nacional de Colombia, Bogotá, Colombia
24 (a)Dipartimento di Fisica e Astronomia A. Righi, Università di Bologna, Bologna, Italy; (b)INFN Sezione di Bologna,
Bologna, Italy
25 Physikalisches Institut, Universität Bonn, Bonn, Germany
26 Department of Physics, Boston University, Boston, MA, USA
27 Department of Physics, Brandeis University, Waltham, MA, USA
28 (a)Transilvania University of Brasov, Brasov, Romania; (b)Horia Hulubei National Institute of Physics and Nuclear
Engineering, Bucharest, Romania; (c)Department of Physics, Alexandru Ioan Cuza University of Iasi, Iasi,
Romania; (d)National Institute for Research and Development of Isotopic and Molecular Technologies, Physics
Department, Cluj-Napoca, Romania; (e)National University of Science and Technology Politechnica, Bucharest,
Romania; (f)West University in Timisoara, Timisoara, Romania; (g)Faculty of Physics, University of Bucharest,
Bucharest, Romania
29 (a)Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava, Slovakia; (b)Department of
Subnuclear Physics, Institute of Experimental Physics of the Slovak Academy of Sciences, Kosice, Slovak Republic
30 Physics Department, Brookhaven National Laboratory, Upton, NY, USA
31 Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Física, y CONICET, Instituto
de Física de Buenos Aires (IFIBA), Buenos Aires, Argentina
32 California State University, Los Angeles, CA, USA
33 Cavendish Laboratory, University of Cambridge, Cambridge, UK
34 (a)Department of Physics, University of Cape Town, Cape Town, South Africa; (b)iThemba Labs, Western Cape, South
Africa; (c)Department of Mechanical Engineering Science, University of Johannesburg, Johannesburg,
South Africa; (d)National Institute of Physics, University of the Philippines Diliman (Philippines), Quezon City,
Philippines; (e)University of South Africa, Department of Physics, Pretoria, South Africa; (f)University of Zululand,
KwaDlangezwa, South Africa; (g)School of Physics, University of the Witwatersrand, Johannesburg, South Africa
35 Department of Physics, Carleton University, Ottawa, ON, Canada
36 (a)Faculté des Sciences Ain Chock, Réseau Universitaire de Physique des Hautes Energies-Université Hassan II,
Casablanca, Morocco; (b)Faculté des Sciences, Université Ibn-Tofail, Kénitra, Morocco; (c)Faculté des Sciences
Semlalia, Université Cadi Ayyad, LPHEA-Marrakech, Marrakech, Morocco; (d)LPMR, Faculté des Sciences, Université
Mohamed Premier, Oujda, Morocco; (e)Faculté des sciences, Université Mohammed V, Rabat, Morocco; (f)Institute of
Applied Physics, Mohammed VI Polytechnic University, Ben Guerir, Morocco
37 CERN, Geneva, Switzerland
38 Affiliated with an institute covered by a cooperation agreement with CERN, Geneva, Switzerland
39 Affiliated with an international laboratory covered by a cooperation agreement with CERN, Geneva, Switzerland
40 Enrico Fermi Institute, University of Chicago, Chicago, IL, USA
41 LPC, Université Clermont Auvergne, CNRS/IN2P3, Clermont-Ferrand, France
42 Nevis Laboratory, Columbia University, Irvington, NY, USA
43 Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark
44 (a)Dipartimento di Fisica, Università della Calabria, Rende, Italy; (b)INFN Gruppo Collegato di Cosenza, Laboratori
Nazionali di Frascati, Frascati, Italy
45 Physics Department, Southern Methodist University, Dallas, TX, USA
46 Physics Department, University of Texas at Dallas, Richardson, TX, USA
47 National Centre for Scientific Research “Demokritos”, Agia Paraskevi, Greece
48 (a)Department of Physics, Stockholm University, Stockholm, Sweden; (b)Oskar Klein Centre, Stockholm, Sweden
49 Deutsches Elektronen-Synchrotron DESY, Hamburg and Zeuthen, Hamburg, Germany
50 Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
51 Institut für Kern- und Teilchenphysik, Technische Universität Dresden, Dresden, Germany
52 Department of Physics, Duke University, Durham, NC, USA
53 SUPA-School of Physics and Astronomy, University of Edinburgh, Edinburgh, UK
54 INFN e Laboratori Nazionali di Frascati, Frascati, Italy
55 Physikalisches Institut, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
56 II. Physikalisches Institut, Georg-August-Universität Göttingen, Göttingen, Germany
57 Département de Physique Nucléaire et Corpusculaire, Université de Genève, Genève, Switzerland
58 (a)Dipartimento di Fisica, Università di Genova, Genova, Italy; (b)INFN Sezione di Genova, Genova, Italy
59 II. Physikalisches Institut, Justus-Liebig-Universität Giessen, Giessen, Germany
60 SUPA-School of Physics and Astronomy, University of Glasgow, Glasgow, UK
61 LPSC, Université Grenoble Alpes, CNRS/IN2P3, Grenoble INP, Grenoble, France
62 Laboratory for Particle Physics and Cosmology, Harvard University, Cambridge, MA, USA
63 (a)Department of Modern Physics and State Key Laboratory of Particle Detection and Electronics, University of Science
and Technology of China, Hefei, China; (b)Institute of Frontier and Interdisciplinary Science and Key Laboratory of
Particle Physics and Particle Irradiation (MOE), Shandong University, Qingdao, China; (c)School of Physics and
Astronomy, Shanghai Jiao Tong University, Key Laboratory for Particle Astrophysics and Cosmology (MOE), SKLPPC,
Shanghai, China; (d)Tsung-Dao Lee Institute, Shanghai, China; (e)School of Physics and Microelectronics, Zhengzhou
University, China
64 (a)Kirchhoff-Institut für Physik, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany; (b)Physikalisches Institut,
Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
65 (a)Department of Physics, Chinese University of Hong Kong, Shatin, N.T., Hong Kong; (b)Department of Physics,
University of Hong Kong, Pok Fu Lam, Hong Kong, China; (c)Department of Physics and Institute for Advanced Study,
Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
66 Department of Physics, National Tsing Hua University, Hsinchu, Taiwan
67 IJCLab, Université Paris-Saclay, CNRS/IN2P3, 91405 Orsay, France
68 Centro Nacional de Microelectrónica (IMB-CNM-CSIC), Barcelona, Spain
69 Department of Physics, Indiana University, Bloomington, IN, USA
70 (a)INFN Gruppo Collegato di Udine, Sezione di Trieste, Udine, Italy; (b)ICTP, Trieste, Italy; (c)Dipartimento Politecnico
di Ingegneria e Architettura, Università di Udine, Udine, Italy
71 (a)INFN Sezione di Lecce, Lecce, Italy; (b)Dipartimento di Matematica e Fisica, Università del Salento, Lecce, Italy
72 (a)INFN Sezione di Milano, Milano, Italy; (b)Dipartimento di Fisica, Università di Milano, Milano, Italy
73 (a)INFN Sezione di Napoli, Napoli, Italy; (b)Dipartimento di Fisica, Università di Napoli, Napoli, Italy
74 (a)INFN Sezione di Pavia, Pavia, Italy; (b)Dipartimento di Fisica, Università di Pavia, Pavia, Italy
75 (a)INFN Sezione di Pisa, Pisa, Italy; (b)Dipartimento di Fisica E. Fermi, Università di Pisa, Pisa, Italy
76 (a)INFN Sezione di Roma, Roma, Italy; (b)Dipartimento di Fisica, Sapienza Università di Roma, Roma, Italy
77 (a)INFN Sezione di Roma Tor Vergata, Roma, Italy; (b)Dipartimento di Fisica, Università di Roma Tor Vergata, Roma,
Italy
78 (a)INFN Sezione di Roma Tre, Roma, Italy; (b)Dipartimento di Matematica e Fisica, Università Roma Tre, Roma, Italy
79 (a)INFN-TIFPA, Povo, Italy; (b)Università degli Studi di Trento, Trento, Italy
80 Department of Astro and Particle Physics, Universität Innsbruck, Innsbruck, Austria
81 University of Iowa, Iowa City, IA, USA
82 Department of Physics and Astronomy, Iowa State University, Ames, IA, USA
83 Istinye University, Sariyer, Istanbul, Türkiye
84 (a)Departamento de Engenharia Elétrica, Universidade Federal de Juiz de Fora (UFJF), Juiz de Fora,
Brazil; (b)Universidade Federal do Rio De Janeiro COPPE/EE/IF, Rio de Janeiro, Brazil; (c)Instituto de Física,
Universidade de São Paulo, São Paulo, Brazil; (d)Rio de Janeiro State University, Rio de Janeiro, Brazil; (e)Federal
University of Bahia, Bahia, Brazil
85 KEK, High Energy Accelerator Research Organization, Tsukuba, Japan
86 Graduate School of Science, Kobe University, Kobe, Japan
87 (a)Faculty of Physics and Applied Computer Science, AGH University of Krakow, Krakow, Poland; (b)Marian
Smoluchowski Institute of Physics, Jagiellonian University, Krakow, Poland
88 Institute of Nuclear Physics Polish Academy of Sciences, Krakow, Poland
89 Faculty of Science, Kyoto University, Kyoto, Japan
90 Research Center for Advanced Particle Physics and Department of Physics, Kyushu University, Fukuoka, Japan
91 L2IT, Université de Toulouse, CNRS/IN2P3, UPS, Toulouse, France
92 Instituto de Física La Plata, Universidad Nacional de La Plata and CONICET, La Plata, Argentina
93 Physics Department, Lancaster University, Lancaster, UK
94 Oliver Lodge Laboratory, University of Liverpool, Liverpool, UK
95 Department of Experimental Particle Physics, Jožef Stefan Institute and Department of Physics, University of Ljubljana,
Ljubljana, Slovenia
96 School of Physics and Astronomy, Queen Mary University of London, London, UK
97 Department of Physics, Royal Holloway University of London, Egham, UK
98 Department of Physics and Astronomy, University College London, London, UK
99 Louisiana Tech University, Ruston, LA, USA
100 Fysiska institutionen, Lunds universitet, Lund, Sweden
101 Departamento de Física Teorica C-15 and CIAFF, Universidad Autónoma de Madrid, Madrid, Spain
102 Institut für Physik, Universität Mainz, Mainz, Germany
103 School of Physics and Astronomy, University of Manchester, Manchester, UK
104 CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
105 Department of Physics, University of Massachusetts, Amherst, MA, USA
106 Department of Physics, McGill University, Montreal, QC, Canada
107 School of Physics, University of Melbourne, Victoria, Australia
108 Department of Physics, University of Michigan, Ann Arbor, MI, USA
109 Department of Physics and Astronomy, Michigan State University, East Lansing, MI, USA
110 Group of Particle Physics, University of Montreal, Montreal, QC, Canada
111 Fakultät für Physik, Ludwig-Maximilians-Universität München, München, Germany
112 Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), München, Germany
113 Graduate School of Science and Kobayashi-Maskawa Institute, Nagoya University, Nagoya, Japan
114 (a)Department of Physics, Nanjing University, Nanjing, China; (b)School of Science, Shenzhen Campus of Sun Yat-sen
University, Shenzhen, China; (c)University of Chinese Academy of Science (UCAS), Beijing, China
115 Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM, USA
116 Institute for Mathematics, Astrophysics and Particle Physics, Radboud University/Nikhef, Nijmegen, Netherlands
117 Nikhef National Institute for Subatomic Physics and University of Amsterdam, Amsterdam, Netherlands
118 Department of Physics, Northern Illinois University, DeKalb, IL, USA
119 (a)New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; (b)United Arab Emirates University, Al Ain,
United Arab Emirates
120 Department of Physics, New York University, New York, NY, USA
121 Ochanomizu University, Otsuka, Bunkyo-ku, Tokyo, Japan
122 Ohio State University, Columbus, OH, USA
123 Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK, USA
124 Department of Physics, Oklahoma State University, Stillwater, OK, USA
125 Palacký University, Joint Laboratory of Optics, Olomouc, Czech Republic
126 Institute for Fundamental Science, University of Oregon, Eugene, OR, USA
127 Graduate School of Science, Osaka University, Osaka, Japan
128 Department of Physics, University of Oslo, Oslo, Norway
129 Department of Physics, Oxford University, Oxford, UK
130 LPNHE, Sorbonne Université, Université Paris Cité, CNRS/IN2P3, Paris, France
131 Department of Physics, University of Pennsylvania, Philadelphia, PA, USA
132 Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA, USA
133 (a)Laboratório de Instrumentação e Física Experimental de Partículas - LIP, Lisbon, Portugal; (b)Departamento de Física,
Faculdade de Ciências, Universidade de Lisboa, Lisbon, Portugal; (c)Departamento de Física, Universidade de Coimbra,
Coimbra, Portugal; (d)Centro de Física Nuclear da Universidade de Lisboa, Lisbon, Portugal; (e)Departamento de Física,
Universidade do Minho, Braga, Portugal; (f)Departamento de Física Teórica y del Cosmos, Universidad de Granada,
Granada, Spain; (g)Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
134 Institute of Physics of the Czech Academy of Sciences, Prague, Czech Republic
135 Czech Technical University in Prague, Prague, Czech Republic
136 Charles University, Faculty of Mathematics and Physics, Prague, Czech Republic
137 Particle Physics Department, Rutherford Appleton Laboratory, Didcot, UK
138 IRFU, CEA, Université Paris-Saclay, Gif-sur-Yvette, France
139 Santa Cruz Institute for Particle Physics, University of California Santa Cruz, Santa Cruz, CA, USA
140 (a)Departamento de Física, Pontificia Universidad Católica de Chile, Santiago, Chile; (b)Millennium Institute for
Subatomic physics at high energy frontier (SAPHIR), Santiago, Chile; (c)Instituto de Investigación Multidisciplinario en
Ciencia y Tecnología, y Departamento de Física, Universidad de La Serena, La Serena, Chile; (d)Universidad Andres
Bello, Department of Physics, Santiago, Chile; (e)Instituto de Alta Investigación, Universidad de Tarapacá, Arica,
Chile; (f)Departamento de Física, Universidad Técnica Federico Santa María, Valparaíso, Chile
141 Department of Physics, University of Washington, Seattle, WA, USA
142 Department of Physics and Astronomy, University of Sheffield, Sheffield, UK
143 Department of Physics, Shinshu University, Nagano, Japan
144 Department Physik, Universität Siegen, Siegen, Germany
145 Department of Physics, Simon Fraser University, Burnaby, BC, Canada
146 SLAC National Accelerator Laboratory, Stanford, CA, USA
147 Department of Physics, Royal Institute of Technology, Stockholm, Sweden
148 Departments of Physics and Astronomy, Stony Brook University, Stony Brook, NY, USA
149 Department of Physics and Astronomy, University of Sussex, Brighton, UK
150 School of Physics, University of Sydney, Sydney, Australia
151 Institute of Physics, Academia Sinica, Taipei, Taiwan
152 (a)E. Andronikashvili Institute of Physics, Iv. Javakhishvili Tbilisi State University, Tbilisi, Georgia; (b)High Energy
Physics Institute, Tbilisi State University, Tbilisi, Georgia; (c)University of Georgia, Tbilisi, Georgia
153 Department of Physics, Technion, Israel Institute of Technology, Haifa, Israel
154 Raymond and Beverly Sackler School of Physics and Astronomy, Tel Aviv University, Tel Aviv, Israel
155 Department of Physics, Aristotle University of Thessaloniki, Thessaloniki, Greece
156 International Center for Elementary Particle Physics and Department of Physics, University of Tokyo, Tokyo, Japan
157 Department of Physics, Tokyo Institute of Technology, Tokyo, Japan
158 Department of Physics, University of Toronto, Toronto, ON, Canada
159 (a)TRIUMF, Vancouver, BC, Canada; (b)Department of Physics and Astronomy, York University, Toronto, ON, Canada
160 Division of Physics and Tomonaga Center for the History of the Universe, Faculty of Pure and Applied Sciences,
University of Tsukuba, Tsukuba, Japan
161 Department of Physics and Astronomy, Tufts University, Medford, MA, USA
162 Department of Physics and Astronomy, University of California Irvine, Irvine, CA, USA
163 University of Sharjah, Sharjah, United Arab Emirates
164 Department of Physics and Astronomy, University of Uppsala, Uppsala, Sweden
165 Department of Physics, University of Illinois, Urbana, IL, USA
166 Instituto de Física Corpuscular (IFIC), Centro Mixto Universidad de Valencia-CSIC, Valencia, Spain
167 Department of Physics, University of British Columbia, Vancouver, BC, Canada
168 Department of Physics and Astronomy, University of Victoria, Victoria, BC, Canada
169 Fakultät für Physik und Astronomie, Julius-Maximilians-Universität Würzburg, Würzburg, Germany
170 Department of Physics, University of Warwick, Coventry, UK
171 Waseda University, Tokyo, Japan
172 Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot, Israel
173 Department of Physics, University of Wisconsin, Madison, WI, USA
174 Fakultät für Mathematik und Naturwissenschaften, Fachgruppe Physik, Bergische Universität Wuppertal, Wuppertal,
Germany
175 Department of Physics, Yale University, New Haven, CT, USA
aAlso Affiliated with an institute covered by a cooperation agreement with CERN, Geneva, Switzerland
bAlso at An-Najah National University, Nablus, Palestine
cAlso at Borough of Manhattan Community College, City University of New York, New York, NY, USA
dAlso at Center for Interdisciplinary Research and Innovation (CIRI-AUTH), Thessaloniki, Greece
eAssociated at Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, Spain
fAlso at Centro Studi e Ricerche Enrico Fermi, Roma, Italy
gAlso at CERN, Geneva, Switzerland
hAlso at CMD-AC UNEC Research Center, Azerbaijan State University of Economics (UNEC), Baku, Azerbaijan
iAlso at Département de Physique Nucléaire et Corpusculaire, Université de Genève, Genève, Switzerland
jAlso at Departament de Fisica de la Universitat Autonoma de Barcelona, Barcelona, Spain
kAlso at Department of Financial and Management Engineering, University of the Aegean, Chios, Greece
lAlso at Department of Physics, California State University, Sacramento, USA
mAlso at Department of Physics, King’s College London, London, UK
nAlso at Department of Physics, Stanford University, Stanford, CA, USA
oAlso at Department of Physics, Stellenbosch University, Stellenbosch, South Africa
pAlso at Department of Physics, University of Fribourg, Fribourg, Switzerland
qAlso at Department of Physics, University of Thessaly, Volos, Greece
rAlso at Department of Physics, Westmont College, Santa Barbara, USA
sAlso at Hellenic Open University, Patras, Greece
tAlso at Institucio Catalana de Recerca i Estudis Avancats, ICREA, Barcelona, Spain
uAlso at Institut für Experimentalphysik, Universität Hamburg, Hamburg, Germany
vAlso at Institute for Nuclear Research and Nuclear Energy (INRNE) of the Bulgarian Academy of Sciences, Sofia,
Bulgaria
wAlso at Institute of Applied Physics, Mohammed VI Polytechnic University, Ben Guerir, Morocco
xAlso at Institute of Particle Physics (IPP), Toronto, Canada
yAlso at Institute of Physics, Azerbaijan Academy of Sciences, Baku, Azerbaijan
zAlso at Institute of Theoretical Physics, Ilia State University, Tbilisi, Georgia
aa Also at National Institute of Physics, University of the Philippines Diliman (Philippines), Quezon City, Philippines
ab Also at Technical University of Munich, Munich, Germany
ac Also at The Collaborative Innovation Center of Quantum Matter (CICQM), Beijing, China
ad Also at TRIUMF, Vancouver, BC, Canada
ae Also at Università di Napoli Parthenope, Napoli, Italy
af Also at University of Colorado Boulder, Department of Physics, Colorado, USA
ag Also at Washington College, Chestertown, MD, USA
ah Also at Yeditepe University, Physics Department, Istanbul, Türkiye
*Deceased