I. Soloviev

Petersburg Nuclear Physics Institute, Gatchina, Leningrad Oblast, Russia


Publications (54) · 36.51 Total Impact

  • ABSTRACT: The ATLAS experiment at the LHC in Geneva uses a complex and highly distributed Trigger and Data Acquisition system, involving a very large number of computing nodes and custom modules. The configuration of the system is specified by schema and data in more than 1000 XML files, with various experts responsible for updating the files associated with their components. Maintaining an error-free and consistent set of XML files proved a major challenge. Therefore a special service was implemented to validate any modifications, to check the authorization of anyone trying to modify a file, to record who had made changes, when, and why, and to provide tools to compare different versions of files and to revert to earlier versions if required. This paper provides details of the implementation and operational experience, which may be of interest for other applications using many human-readable files maintained by different people, where consistency of the files and traceability of modifications are key requirements.
    Journal of Physics Conference Series 12/2012; 396(1):012047. DOI:10.1088/1742-6596/396/1/012047
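The gatekeeping workflow described above (validation, authorization check, and change journaling for shared human-readable files) can be sketched as follows. This is an illustrative stand-in, not the ATLAS service; the class name and policy are hypothetical:

```python
import time
import xml.etree.ElementTree as ET

class XmlFileGuard:
    """Accept an XML update only if the author is authorized and the new
    content parses, and record who changed which file, when, and why."""

    def __init__(self, authorized_users):
        self.authorized_users = set(authorized_users)
        self.journal = []  # entries of (timestamp, user, filename, reason)

    def commit(self, user, filename, xml_text, reason):
        if user not in self.authorized_users:
            raise PermissionError(f"{user} is not allowed to modify {filename}")
        ET.fromstring(xml_text)  # raises ParseError on malformed XML
        self.journal.append((time.time(), user, filename, reason))
        return True
```

A real service would additionally validate content against the schema files and retain full previous versions, so that any change can be compared against or reverted to an earlier state.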
  • Source
    ABSTRACT: This paper describes P-BEAST, a highly scalable, highly available and durable system for archiving monitoring information of the trigger and data acquisition (TDAQ) system of the ATLAS experiment at CERN. This currently consists of 20,000 applications running on 2,400 interconnected computers, but it is foreseen to grow further in the near future. P-BEAST stores considerable amounts of monitoring information which would otherwise be lost. Making this data accessible facilitates long-term analysis and faster debugging. The novelty of this research consists in using a modern key-value storage technology (Cassandra) to satisfy the massive time-series data rates, flexibility and scalability requirements entailed by the project. The loose schema allows the stored data to evolve seamlessly with the information flowing within the Information Service. An architectural overview of P-BEAST is presented alongside a discussion of the technologies considered as candidates for storing the data. The arguments which ultimately led to choosing Cassandra are explained. Measurements taken during operation in the production environment illustrate the data volume absorbed by the system and techniques for reducing the Cassandra storage space overhead.
    Journal of Physics Conference Series 06/2012; 368(1). DOI:10.1088/1742-6596/368/1/012002
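The key idea of a loose-schema time-series archive can be illustrated with a minimal in-memory sketch. This is not the P-BEAST implementation or the Cassandra API; the class and attribute names are hypothetical. Each (application, attribute) pair maps to a time-ordered series, and new attributes can appear without any schema change:

```python
import bisect
from collections import defaultdict

class PBeastLikeStore:
    """Illustrative loose-schema time-series store: samples are kept sorted
    by timestamp per (application, attribute) key, so range queries over a
    time window are binary searches rather than full scans."""

    def __init__(self):
        # (app, attribute) -> parallel lists of timestamps and values
        self._times = defaultdict(list)
        self._values = defaultdict(list)

    def insert(self, app, attribute, timestamp, value):
        key = (app, attribute)
        # Insertion point keeps the series sorted even for late-arriving samples.
        pos = bisect.bisect_right(self._times[key], timestamp)
        self._times[key].insert(pos, timestamp)
        self._values[key].insert(pos, value)

    def query(self, app, attribute, t_start, t_end):
        """Return all (timestamp, value) samples with t_start <= ts <= t_end."""
        key = (app, attribute)
        lo = bisect.bisect_left(self._times[key], t_start)
        hi = bisect.bisect_right(self._times[key], t_end)
        return list(zip(self._times[key][lo:hi], self._values[key][lo:hi]))
```

In a distributed store such as Cassandra the same model maps naturally onto wide rows keyed by series, which is what makes the approach scale to the data rates quoted above.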
  • Source
    ABSTRACT: The reconstruction of photons in the ATLAS detector is studied with data taken during the 2004 Combined Test Beam, where a full slice of the ATLAS detector was exposed to beams of particles of known energy at the CERN SPS. The results presented show significant differences in the longitudinal development of the electromagnetic shower between converted and unconverted photons as well as in the total measured energy. The potential to use the reconstructed converted photons as a means to precisely map the material of the tracker in front of the electromagnetic calorimeter is also considered. All results obtained are compared with a detailed Monte-Carlo simulation of the test-beam setup which is based on the same simulation and reconstruction tools as those used for the ATLAS detector itself.
    Journal of Instrumentation 03/2011; 6(04):P04001. DOI:10.1088/1748-0221/6/04/P04001 · 1.53 Impact Factor
  • Source
    ABSTRACT: This paper presents a software environment to automatically configure and run online triggering and dataflow farms for the ATLAS experiment at the Large Hadron Collider (LHC). It provides support for a broad set of users with distinct knowledge of the online triggering system, ranging from casual testers to final system deployers. This level of automation improves the overall ATLAS TDAQ workflow for software and hardware tests and speeds up system modifications and deployment.
    Computer Physics Communications 03/2011; 182(3):555-563. DOI:10.1016/j.cpc.2010.10.003 · 2.41 Impact Factor
  • Source
    ABSTRACT: A new method for calibrating the hadron response of a segmented calorimeter is developed and successfully applied to beam test data. It is based on a principal component analysis of energy deposits in the calorimeter layers, exploiting longitudinal shower development information to improve the measured energy resolution. Corrections for invisible hadronic energy and energy lost in dead material in front of and between the calorimeters of the ATLAS experiment were calculated with simulated Geant4 Monte Carlo events and used to reconstruct the energy of pions impinging on the calorimeters during the 2004 Barrel Combined Beam Test at the CERN H8 area. For pion beams with energies between 20 GeV and 180 GeV, the particle energy is reconstructed within 3% and the energy resolution is improved by between 11% and 25% compared to the resolution at the electromagnetic scale.
    Journal of Instrumentation 12/2010; 6(06). DOI:10.1088/1748-0221/6/06/P06001 · 1.53 Impact Factor
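The principal-component step of such a layer-wise calibration can be illustrated in two dimensions, where the leading eigenvector of the covariance matrix has a closed form. The data below are synthetic and the two-layer model is hypothetical, not ATLAS test-beam data:

```python
import math
import random

# Synthetic two-layer shower data: energy in an early and a late calorimeter
# layer, correlated through a latent longitudinal shower-depth variable.
random.seed(1)
showers = []
for _ in range(1000):
    depth = random.gauss(0.0, 1.0)
    early = 20.0 - 4.0 * depth + random.gauss(0.0, 0.5)
    late = 5.0 + 2.0 * depth + random.gauss(0.0, 0.5)
    showers.append((early, late))

# Sample means and covariance matrix entries.
n = len(showers)
mx = sum(e for e, _ in showers) / n
my = sum(l for _, l in showers) / n
cxx = sum((e - mx) ** 2 for e, _ in showers) / n
cyy = sum((l - my) ** 2 for _, l in showers) / n
cxy = sum((e - mx) * (l - my) for e, l in showers) / n

# Closed-form principal axis of a 2x2 covariance matrix [[cxx, cxy], [cxy, cyy]]:
# the first component points along the direction of maximal variance, i.e. the
# dominant longitudinal-development pattern.
theta = 0.5 * math.atan2(2.0 * cxy, cxx - cyy)
axis = (math.cos(theta), math.sin(theta))

# Projection onto the leading component; in a method of this kind, such
# projections index the depth-dependent energy corrections applied per event.
projections = [(e - mx) * axis[0] + (l - my) * axis[1] for e, l in showers]
```

The variance along the leading axis is at least as large as the variance of either layer alone, which is precisely why the projection is a better handle on shower development than any single-layer energy.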
  • Source
    ABSTRACT: In 2004, at the ATLAS (A Toroidal LHC ApparatuS) combined test beam, one slice of the ATLAS barrel detector (including an Inner Detector set-up and the Liquid Argon calorimeter) was exposed to particles from the H8 SPS beam line at CERN. It was the first occasion to test the combined electron performance of ATLAS. This paper presents results obtained for the momentum measurement p with the Inner Detector and for the performance of the electron measurement with the LAr calorimeter (energy E linearity and resolution) in the presence of a magnetic field in the Inner Detector, for momenta ranging from 20 GeV/c to 100 GeV/c. Furthermore, the particle identification capabilities of the Transition Radiation Tracker, bremsstrahlung-recovery algorithms relying on the LAr calorimeter, results obtained for the E/p ratio and a method to extract scale parameters are discussed.
    Journal of Instrumentation 11/2010; 5(11):P11006. DOI:10.1088/1748-0221/5/11/P11006 · 1.53 Impact Factor
  • Source
    ABSTRACT: ATLAS is the biggest of the experiments aimed at studying high-energy particle interactions at the Large Hadron Collider (LHC). This paper describes the evolution of the Controls and Configuration system of the ATLAS Trigger and Data Acquisition (TDAQ) from the Technical Design Report (TDR) in 2003 to the first events taken at CERN with circulating beams in autumn 2008. The present functionality and performance and the lessons learned during the development are outlined. Finally, some of the challenges which still have to be met by 2010, when the full scale of the trigger farm will be deployed, are highlighted.
    Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 11/2010; 623(1). DOI:10.1016/j.nima.2010.03.066 · 1.32 Impact Factor
  • Source
    ABSTRACT: A fully instrumented slice of the ATLAS detector was exposed to test beams from the SPS (Super Proton Synchrotron) at CERN in 2004. In this paper, the results of the measurements of the response of the barrel calorimeter to hadrons with energies in the range 20–350 GeV and beam impact points and angles corresponding to pseudo-rapidity values in the range 0.2–0.65 are reported. The results are compared to the predictions of a simulation program using the Geant 4 toolkit.
    Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 09/2010; A621:134-150. DOI:10.1016/j.nima.2010.04.054 · 1.32 Impact Factor
  • ICSOFT 2010 - Proceedings of the Fifth International Conference on Software and Data Technologies, Volume 2, Athens, Greece, July 22-24, 2010; 01/2010
  • ABSTRACT: A fully instrumented slice of the ATLAS central detector was exposed to test beams from the SPS (Super Proton Synchrotron) at CERN in 2004. In this paper, the response of the central calorimeters to pions with energies in the range between 3 and 9 GeV is presented. The linearity and the resolution of the combined calorimetry (electromagnetic and hadronic calorimeters) were measured and compared to the predictions of a detector simulation program using the Geant 4 toolkit.
    Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 08/2009; A607(2):372-386. DOI:10.1016/j.nima.2009.05.158 · 1.32 Impact Factor
  • Source
    Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 07/2009; 606(3):362–394. · 1.32 Impact Factor
  • Source
    ABSTRACT: We report test beam studies of 11% of the production ATLAS Tile Calorimeter modules. The modules were equipped with production front-end electronics and all the calibration systems planned for the final detector. The studies used muon, electron and hadron beams ranging in energy from 3 to 350 GeV. Two independent studies showed that the light yield of the calorimeter was ∼70 pe/GeV, exceeding the design goal by 40%. Electron beams provided a calibration of the modules at the electromagnetic energy scale. Over 200 calorimeter cells the variation of the response was 2.4%. The linearity with energy was also measured. Muon beams provided an intercalibration of the response of all calorimeter cells. The response to muons entering in the ATLAS projective geometry showed an RMS variation of 2.5% for 91 measurements over a range of rapidities and modules. The mean response to hadrons of fixed energy had an RMS variation of 1.4% for the modules and projective angles studied. The response to hadrons normalized to incident beam energy showed an 8% increase between 10 and 350 GeV, fully consistent with expectations for a non-compensating calorimeter. The measured energy resolution for hadrons of σ/E = 52.9%/√E ⊕ 5.7% was also consistent with expectations. Other auxiliary studies were made of saturation recovery of the readout system, the time resolution of the calorimeter and the performance of the trigger signals from the calorimeter.
  • Source
    ABSTRACT: The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.
    Journal of Instrumentation 08/2008; 3:S08003. DOI:10.1088/1748-0221/3/08/S08003 · 1.53 Impact Factor
  • Source
    ABSTRACT: The ATLAS experiment under construction at CERN is due to begin operation at the end of 2007. The detector will record the results of proton-proton collisions at a center-of-mass energy of 14 TeV. The trigger is a three-tier system designed to identify in real time potentially interesting events that are then saved for detailed offline analysis. The trigger system will select approximately 200 Hz of potentially interesting events out of the 40 MHz bunch-crossing rate (with 10⁹ interactions per second at the nominal luminosity). Algorithms used in the trigger system to identify different event features of interest are described, as well as their expected performance in terms of selection efficiency, background rejection and computation time per event. The paper concentrates on recent improvements and on performance studies, using a very detailed simulation of the ATLAS detector and electronics chain that emulates the raw data as it will appear at the input to the trigger system.
    Journal of Physics Conference Series 07/2008; 119(2):022022. DOI:10.1088/1742-6596/119/2/022022
  • ABSTRACT: The ATLAS conditions databases will be used to manage information of quite diverse nature and level of complexity. The usage of a relational database manager like Oracle, together with the object managers POOL and OKS developed in-house, poses special difficulties in browsing the available data while understanding its structure in a general way. This is particularly relevant for the database browser projects, where it is difficult to link with the class-defining libraries generated by general frameworks such as Athena. A modular approach to tackle these problems is presented here. The database infrastructure is under development using the LCG COOL infrastructure and provides a powerful information-sharing gateway across many different systems. The nature of the stored information ranges from temporal series of simple values up to very complex objects describing the configuration of systems like the ATLAS TDAQ infrastructure, including also associations to large objects managed outside of the database infrastructure. An important example of this architecture is the Online Objects Extended Database BrowsEr (NODE), which is designed to access and display all data available in the ATLAS Monitoring Data Archive (MDA), including histograms and data tables. To deal with the special nature of the monitoring objects, a plugin from the MDA framework to the Time managed science Instrument Databases (TIDB2) is used. The database browser is extended, in particular, to include operations on histograms such as display, overlap and comparison, as well as commenting and local storage.
    Journal of Physics Conference Series 07/2008; 119(4):042026. DOI:10.1088/1742-6596/119/4/042026
  • ABSTRACT: During 2006 and the first half of 2007, the installation, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area have progressed. There has been a series of technical runs using the final components of the system already installed in the experimental area. Various tests have been run, including ones where Level-1 preselected simulated proton-proton events have been processed in a loop mode through the trigger and dataflow chains. The system included the readout buffers containing the events, event building, and the Level-2 and event filter trigger algorithms. The scalability of the system with respect to the number of event building nodes used has been studied, and quantities critical for the final system, such as trigger rates and event processing times, have been measured using different trigger algorithms as well as different TDAQ components. This paper presents the TDAQ architecture and the current status of the installation and commissioning, and highlights the main test results that validate the system.
    Journal of Physics Conference Series 07/2008; 119(2):022001. DOI:10.1088/1742-6596/119/2/022001
  • Source
    ABSTRACT: This paper describes the challenging requirements on the configuration service for the ATLAS experiment at CERN. It presents the status of the implementation and testing, one year before the start of data taking, providing details of: 1. the capabilities of the underlying OKS object manager to store and to archive configuration descriptions, and its user and programming interfaces; 2. the organization of configuration descriptions for different types of data-taking runs and combinations of participating sub-detectors; 3. the scalable architecture to support simultaneous access to the service by thousands of processes during the online configuration stage of ATLAS; 4. the experience with the usage of the configuration service during large-scale tests, test beam, commissioning and technical runs. The paper also presents the pros and cons of the chosen object-oriented implementation compared with solutions based on pure relational database technologies, and explains why after several years of usage we continue with our approach.
    Real-Time Conference, 2007 15th IEEE-NPSS; 07/2008
  • ABSTRACT: The access of the ATLAS Trigger and Data Acquisition (TDAQ) system to the ATLAS Conditions Databases sets strong reliability and performance requirements on the database storage and access infrastructures. Several applications were developed to support the integration of Conditions database access with the online services in TDAQ, including the interface to the Information Service (IS) and to the TDAQ Configuration Databases. The information storage requirements were the motivation for the ONline ASynchronous Interface to COOL (ONASIC) from the Information Service (IS) to LCG/COOL databases. ONASIC avoids possible backpressure from online database servers by managing a local cache. In parallel, OKS2COOL was developed to store Configuration Databases into an offline database with a history record. The DBStressor application was developed to test and stress the access to the Conditions database using the LCG/COOL interface while operating in an integrated way as a TDAQ application. The performance scaling of simultaneous Conditions database read accesses was studied in the context of the ATLAS High Level Trigger large computing farms. A large set of tests was performed involving up to 1000 computing nodes that simultaneously accessed the LCG central database server infrastructure at CERN.
    Journal of Physics Conference Series 07/2008; 119(2). DOI:10.1088/1742-6596/119/2/022005
  • Source
    ABSTRACT: During 2006 and spring 2007, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area progressed. Much of the work focused on a final prototype setup consisting of around eighty computers representing a subset of the full TDAQ system. There has been a series of technical runs using this setup. Various tests have been run, including those where around 6000 Level-1 preselected simulated proton-proton events have been processed in a loop mode through the trigger and dataflow chains. The system included the readout buffers containing the events, event building, and second-level and third-level trigger processors. Aspects critical for the final system, such as event processing times, have been studied using different trigger algorithms as well as the different dataflow components.
    IEEE Transactions on Nuclear Science 03/2008; 55(1):106-112. DOI:10.1109/TNS.2007.914030 · 1.46 Impact Factor
  • ABSTRACT: The access of the ATLAS Trigger and DAQ systems to the conditions databases involves specific requirements on reliability, performance and integration with the online services. We describe the applications that were developed to interface the online information services and the configuration setup to the conditions databases, and also to test direct access from the online computing farms. The ONline ASynchronous Interface to COOL (ONASIC) interfaces the Information Service (IS) with LCG/COOL and avoids backpressure from offline database servers. OKS2COOL is an API developed both to handle schema migration from the online configurations database into the offline conditions database and to archive TDAQ configurations. To study the performance of simultaneous conditions database read accesses in the context of the ATLAS high level trigger system, the DBStressor application was developed and deployed.
    Real-Time Conference, 2007 15th IEEE-NPSS; 01/2007

Publication Stats

700 Citations
36.51 Total Impact Points

Institutions

  • 2008–2011
    • Petersburg Nuclear Physics Institute
Gatchina, Leningrad Oblast, Russia
    • University of Barcelona
Barcelona, Catalonia, Spain
  • 2005–2006
    • National Institute for Subatomic Physics
Amsterdam, North Holland, Netherlands
    • University of Wisconsin–Madison
      • Department of Physics
      Madison, Wisconsin, United States
  • 1998–2004
    • CERN
      • Physics Department (PH)
      Genève, Geneva, Switzerland