Conference Paper

Benchmarks and implementation of the ALICE high level trigger

Kirchhoff Institute for Physics, Heidelberg University, Germany
DOI: 10.1109/RTC.2005.1547465 Conference: 14th IEEE-NPSS Real Time Conference, 2005
Source: IEEE Xplore

ABSTRACT: The ALICE high level trigger (HLT) combines and processes the full information from all major detectors in a large computer cluster. Data rate reduction is achieved in two ways: the event rate is reduced by selecting interesting events (software trigger), and the event size is reduced by selecting sub-events and by advanced data compression. Reconstruction chains for the barrel detectors and the forward muon spectrometer have been benchmarked. The HLT receives a replica of the raw data via the standard ALICE DDL link into a custom PCI receiver card (HLT-RORC). These boards also provide an FPGA co-processor for data-intensive pattern-recognition tasks. Some of the pattern recognition algorithms (cluster finder, Hough transformation) have been re-designed in VHDL for execution in the Virtex-4 FPGA on the HLT-RORC. HLT prototypes were operated during the beam tests of the TPC and TRD detectors. The input and output interfaces to DAQ and the data flow inside the HLT were successfully tested. A full-scale prototype of the dimuon-HLT achieved the expected data flow performance. This system was finally embedded in a GRID-like system of several distributed clusters, demonstrating the scalability and fault-tolerance of the HLT.
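
To illustrate the kind of pattern recognition named in the abstract, the following is a minimal, software-only sketch of a straight-line Hough transform over TPC-like space points. It is an assumption-laden illustration: the actual HLT algorithms use a track parameterisation suited to the ALICE TPC and, as stated above, were re-designed in VHDL for the FPGA co-processor on the HLT-RORC. All names here (Cluster, houghLines, the binning parameters) are hypothetical and not taken from the HLT code.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // One space point (cluster) in the bending plane, in cm.
    struct Cluster { float x, y; };

    // A peak in the accumulator, i.e. a straight-line track candidate.
    struct LineCandidate { float theta, rho; int votes; };

    // Straight-line Hough transform: every cluster votes for all (theta, rho)
    // lines passing through it; peaks in the accumulator are track candidates.
    std::vector<LineCandidate> houghLines(const std::vector<Cluster>& clusters,
                                          int nTheta, int nRho, float rhoMax,
                                          int minVotes)
    {
        constexpr float kPi = 3.14159265f;
        std::vector<int> acc(static_cast<std::size_t>(nTheta) * nRho, 0);
        const float dTheta = kPi / nTheta;
        const float dRho   = 2.0f * rhoMax / nRho;

        // Voting stage: fill the (theta, rho) accumulator.
        for (const Cluster& c : clusters) {
            for (int it = 0; it < nTheta; ++it) {
                const float theta = it * dTheta;
                const float rho   = c.x * std::cos(theta) + c.y * std::sin(theta);
                const int   ir    = static_cast<int>((rho + rhoMax) / dRho);
                if (ir >= 0 && ir < nRho)
                    ++acc[static_cast<std::size_t>(it) * nRho + ir];
            }
        }

        // Peak finding: keep bins with enough votes, strongest first.
        std::vector<LineCandidate> peaks;
        for (int it = 0; it < nTheta; ++it)
            for (int ir = 0; ir < nRho; ++ir) {
                const int v = acc[static_cast<std::size_t>(it) * nRho + ir];
                if (v >= minVotes)
                    peaks.push_back({it * dTheta, ir * dRho - rhoMax, v});
            }
        std::sort(peaks.begin(), peaks.end(),
                  [](const LineCandidate& a, const LineCandidate& b) { return a.votes > b.votes; });
        return peaks;
    }

The per-cluster votes are independent of one another, which is what makes this class of algorithm attractive for an FPGA co-processor: the accumulator updates can be pipelined and computed in parallel in hardware.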

  • Source
    ABSTRACT: ALICE (A Large Ion Collider Experiment) is a general-purpose, heavy-ion detector at the CERN LHC which focuses on QCD, the strong-interaction sector of the Standard Model. It is designed to address the physics of strongly interacting matter and the quark-gluon plasma at extreme values of energy density and temperature in nucleus-nucleus collisions. Besides running with Pb ions, the physics programme includes collisions with lighter ions, lower-energy running and dedicated proton-nucleus runs. ALICE will also take data with proton beams at the top LHC energy to collect reference data for the heavy-ion programme and to address several QCD topics for which ALICE is complementary to the other LHC detectors. The ALICE detector has been built by a collaboration currently including over 1000 physicists and engineers from 105 institutes in 30 countries. Its overall dimensions are 16 × 16 × 26 m³ with a total weight of approximately 10 000 t. The experiment consists of 18 different detector systems, each with its own specific technology choice and design constraints, driven both by the physics requirements and the experimental conditions expected at the LHC. The most stringent design constraint is to cope with the extreme particle multiplicity anticipated in central Pb-Pb collisions. The different subsystems were optimized to provide high-momentum resolution as well as excellent Particle Identification (PID) over a broad range in momentum, up to the highest multiplicities predicted for the LHC. This will allow for comprehensive studies of hadrons, electrons, muons, and photons produced in the collision of heavy nuclei. Most detector systems are scheduled to be installed and ready for data taking by mid-2008, when the LHC is scheduled to start operation, with the exception of parts of the Photon Spectrometer (PHOS), Transition Radiation Detector (TRD) and Electromagnetic Calorimeter (EMCal). These detectors will be completed for the high-luminosity ion run expected in 2010. This paper describes in detail the detector components as installed for the first data taking in the summer of 2008.
    Journal of Instrumentation. 07/2008; 3:S08002.
  • Source
    ABSTRACT: Fault tolerance is the ability of a system to continue functioning despite the presence of faults in its architecture. For a dynamic system such as the cloud, fault tolerance is required to ensure business continuity. This paper proposes a high-availability middleware that ensures fault tolerance for cloud-based applications. Effective Descriptive Set Theory is used to determine the model of fault detection for real-life applications running on an open-source cloud. A deterministic algorithm for the middleware is provided that automatically allocates backup nodes to the system based on the detected faults. After detection of faults, the middleware directs the system to add new nodes as replicas of the failed nodes, ensuring continuity of the cloud applications. Next, a case study covering seven real-life applications, such as a PostgreSQL database, is described, and fault tolerance is ensured through the proposed middleware. An empirical performance analysis of the algorithm is carried out and the results are compared to traditional systems. Results show that, in the presence of faults induced during experimentation, the middleware can be used effectively to introduce replicas and ensure fault tolerance of bottleneck resources when executing 700 to 1000 processes per unit time. (A minimal illustrative sketch of this backup-node allocation step appears after this list.)
    ICCIT, Khulna; 12/2013
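
As an illustration of the failover step described in the abstract above, and not of the paper's actual middleware (whose detection model is built on Effective Descriptive Set Theory), here is a minimal C++ sketch: when the fault detector reports a failed node, a spare node is deterministically promoted as a replica so the hosted application keeps running. All class, application, and node names are hypothetical.

    #include <iostream>
    #include <map>
    #include <queue>
    #include <string>
    #include <utility>
    #include <vector>

    // Minimal sketch of the backup-allocation idea: when a monitored node is
    // reported faulty, deterministically hand its workload to the next spare
    // node, so the hosted application keeps running.
    class FailoverManager {
    public:
        void addSpare(const std::string& node) { spares_.push(node); }
        void assign(const std::string& app, const std::string& node) { placement_[app] = node; }

        // Called by the fault detector when `node` stops responding.
        // Returns the replacement node chosen for each affected application.
        std::vector<std::pair<std::string, std::string>> onNodeFailure(const std::string& node) {
            std::vector<std::pair<std::string, std::string>> moved;
            for (auto& [app, host] : placement_) {
                if (host != node || spares_.empty()) continue;
                host = spares_.front();   // deterministic choice: first spare in FIFO order
                spares_.pop();
                moved.emplace_back(app, host);
            }
            return moved;
        }

    private:
        std::map<std::string, std::string> placement_;  // application -> current node
        std::queue<std::string> spares_;                // idle backup nodes
    };

    int main() {
        FailoverManager mgr;
        mgr.addSpare("spare-1");
        mgr.assign("postgres", "node-3");

        for (const auto& [app, host] : mgr.onNodeFailure("node-3"))
            std::cout << app << " restarted on " << host << '\n';  // prints: postgres restarted on spare-1
    }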
