Article
To read the full-text of this research, you can request a copy directly from the authors.

Abstract

In this paper we present the FPGA-based framework IFDAQ, which is used for the development of data acquisition systems for detectors in high energy physics. The framework supports Xilinx FPGAs and provides a collection of IP cores written in VHDL which share a common interconnect interface. The IP core library offers the functionality required for the development of the full DAQ chain. The library consists of SERDES-based TDC channels, an interface to a multi-channel 80 MS/s 10-bit ADC, a data transmission and synchronization protocol between FPGAs, an event builder, and slow control. The functionality is distributed among FPGA modules built in the AMC form factor: front-end and data concentrator. This modular design also helps to scale and adapt the data acquisition system to the needs of a particular experiment. The first application of the IFDAQ framework is the upgrade of the read-out electronics for the drift chambers and the electromagnetic calorimeters of the COMPASS experiment at CERN. The framework will be presented and discussed in the context of this upgrade.
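As a rough illustration of how a SERDES-based TDC channel of this kind obtains fine time, the sketch below scans deserialized sample words for a rising edge; the deserialization factor and data layout are assumptions for illustration, not the IFDAQ implementation.

```python
# Hedged sketch: a SERDES-based TDC derives fine time from the position
# of an edge inside the deserialized sample word. An 8-bit SERDES word
# per system-clock cycle is assumed here.

SERDES_FACTOR = 8  # samples per system-clock cycle (assumption)

def find_edges(words):
    """Scan a stream of SERDES words (lists of 0/1 samples, oldest
    sample first) and return timestamps of rising edges, in units of
    one sample period: coarse_count * SERDES_FACTOR + fine_position."""
    edges = []
    prev = 0
    for coarse, word in enumerate(words):
        for fine, bit in enumerate(word):
            if bit == 1 and prev == 0:
                edges.append(coarse * SERDES_FACTOR + fine)
            prev = bit
    return edges

# A pulse arriving 3 sample periods into the second word:
stream = [[0] * 8, [0, 0, 0, 1, 1, 1, 1, 1], [1] * 8]
print(find_edges(stream))  # [11]
```

The coarse counter runs at the system clock while the SERDES provides the sub-clock interpolation, which is why such TDCs need almost no fabric resources.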


... The resource usage was only 109 LUTs and 238 registers, in addition to the two SerDes, implemented in an Artix-7 FPGA. Bai et al. [73], by using two SerDes and one IODELAY primitive, have achieved a resolution of 803 ps and a precision of 229 ps using only 30 LUTs and 42 FFs. The DNL was also very low (0.05 LSBs). ...
Article
Full-text available
A fundamental aspect in the evolution of Time-to-Digital Converters (TDCs) implemented within Field-Programmable Gate Arrays (FPGAs), given the increasing demand for detection channels, is the optimization of resource utilization. This study reviews the principal methodologies employed for implementing low-resource TDCs in FPGAs. It outlines the foundational architectures and interpolation techniques utilized to bolster TDC performances without unduly burdening resource consumption. Low-resource Tapped Delay Line, Vernier Ring Oscillator, and Multi-Phase Shift Counter TDCs, including the use of SerDes, are reviewed. Additionally, novel low-resource architectures are scrutinized, including Counter Gray Oscillator TDCs and interpolation expansions using Process–Voltage–Temperature stable IODELAYs. Furthermore, the advantages and limitations of each approach are critically assessed, with particular emphasis on resolution, precision, non-linearities, and especially resource utilization. A comprehensive summary table encapsulating existing works on low-resource TDCs is provided, offering a comprehensive overview of the advancements in the field.
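The DNL figures quoted in reviews like this one are typically obtained with a code-density test: the TDC is fed hits uncorrelated with its clock, and each bin's hit count is compared with the ideal uniform count. A minimal sketch of that computation (the histogram values are made up):

```python
# Code-density (statistical) test for TDC differential non-linearity,
# assuming hits uniformly distributed over the measurement range.
def dnl_from_histogram(hist):
    """DNL per bin in LSB: (measured bin width / ideal bin width) - 1."""
    total = sum(hist)
    ideal = total / len(hist)            # expected hits per bin
    return [h / ideal - 1.0 for h in hist]

hist = [100, 110, 90, 100]               # hit counts per TDC bin
dnl = dnl_from_histogram(hist)
print([round(d, 2) for d in dnl])        # [0.0, 0.1, -0.1, 0.0]
```

The same histogram also yields a bin-by-bin calibration table, which is how the nonlinearities surveyed above are usually corrected.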
... Calibration Tools: Helps with sensor calibration and verifying hardware measurement accuracy. In summary, it integrates hardware control, configuration, data collection/analysis, and reporting capabilities into an easy-to-use software package [8]. Figure 14 below shows the signal acquisition module and software flow chart. ...
Article
Full-text available
Frequency measurement is one of the key techniques in the electronic measurement field, and the digital frequency meter is an indispensable tool for measurement engineers. Digital frequency meters are now indispensable components of everyday appliances such as televisions, refrigerators, washing machines, and other smart home devices. As a result, the measurement range of digital frequency meters is becoming wider and their design structures increasingly complex, and they play a crucial role in the development of electronic products as a whole. The VHSIC (Very High Speed Integrated Circuit) Hardware Description Language (VHDL) within Electronic Design Automation (EDA) technology is used to simulate and verify the digital frequency meter. Through simulation analysis, the measurement circuit of the digital frequency meter is designed and its accuracy verified. The meter measures sine signals, unit pulse signals, and the frequency range of various signals by counting events within a unit of time.
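The gated-counter method described above (counting events within a unit of time) can be sketched in a few lines; the microsecond timestamps and the 1 kHz test signal are illustrative:

```python
# Direct (gated-counter) frequency measurement: count input edges that
# fall inside a known gate time. Resolution is 1/gate_time in Hz.
def measure_frequency(edge_times_us, gate_time_us):
    """Edge timestamps and gate time in microseconds; returns Hz."""
    count = sum(1 for t in edge_times_us if 0 <= t < gate_time_us)
    return count / (gate_time_us * 1e-6)

# A 1 kHz signal (one edge every 1000 µs) measured with a 1 s gate:
edges = [i * 1000 for i in range(2000)]
print(round(measure_frequency(edges, 1_000_000)))  # 1000
```

In an FPGA the same structure is a gate-enable flip-flop and a binary counter; a longer gate trades measurement time for resolution.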
... The resource usage was only 109 LUTs and 238 registers, in addition to the two SerDes, implemented in an Artix-7 FPGA. Bai et al. [69], by using two SerDes and one IODELAY primitive, have achieved a resolution of 803 ps and a precision of 229 ps using only 30 LUTs and 42 FFs. The DNL was also very low (0.05 LSBs). ...
Preprint
Full-text available
A fundamental aspect in the evolution of Time-to-Digital Converters (TDCs) implemented within Field-Programmable Gate Arrays (FPGAs), given the increasing demand for detection channels, is the optimization of resource utilization. This study reviews the principal methodologies employed for implementing low-resource TDCs in FPGAs. It outlines the foundational architectures and interpolation techniques utilized to bolster TDC performances without unduly burdening resource consumption. Low-resource Tapped Delay Line, Vernier Ring Oscillator, and Multi-Phase Shift Counter TDCs, including the use of SerDes, are reviewed. Additionally, novel low-resource architectures are scrutinized, including Counter Gray Oscillator TDCs and interpolation expansions using Voltage-Temperature-Consumption stable IODELAYS. Furthermore, the advantages and limitations of each approach are critically assessed, with particular emphasis on resolution, precision, non-linearities, and, especially, resource utilization. A comprehensive summary table encapsulating existing works on low-resource TDCs is provided, offering a comprehensive overview of the advancements in the field.
... As for large spin ensembles, photodetectors (PD) are suitable for the detection of its strong fluorescence, which transfer the fluorescence into analog signals. For further handling of the information, data acquisition (DAQ) systems with signal sampling, pre-processing, and transmission functions are required in applications, which are widely used in scientific research areas [20][21][22][23][24][25]. ...
Preprint
We report a mixed-signal data acquisition (DAQ) system for optically detected magnetic resonance (ODMR) of solid-state spins. This system is designed and implemented based on a Field-Programmable Gate Array (FPGA) chip assisted by high-speed peripherals. ODMR experiments often require high-speed mixed-signal data acquisition and processing for general and specific tasks. To this end, we realized a mixed-signal DAQ system which can acquire both analog and digital signals with precise hardware synchronization. The system consists of 4 analog channels (2 inputs and 2 outputs) and 16 optional digital channels and works at up to a 125 MHz clock rate. With this system, we performed general-purpose ODMR and advanced lock-in detection experiments on nitrogen-vacancy (NV) centers, and the reported DAQ system shows excellent performance in both single and ensemble spin cases. This work provides a uniform DAQ solution for NV center quantum control systems and could be easily extended to other spin-based systems.
... Additionally, perfect synchronization of data streams received in different channels is required [7]. The programmable devices-Field Programmable Gate Arrays (FPGA) are usually used to provide those functionalities [8][9][10][11][12][13][14]. Their big advantage is high flexibility, enabling significant changes to the communication protocols or data processing algorithms without modifying the underlying hardware. ...
Article
Full-text available
FPGA-based data acquisition and processing systems play an important role in modern high-speed, multichannel measurement systems, especially in High-Energy and Plasma Physics. Such FPGA-based systems require an extended control and diagnostics part corresponding to the complexity of the controlled system. Managing the complex structure of registers while keeping the tight coupling between hardware and software is a tedious and potentially error-prone process. Various existing solutions aimed at helping that task do not perfectly match all specific requirements of that application area. The paper presents a new solution based on the XML system description, facilitating the automated generation of the control system’s HDL code and software components and enabling easy integration with the control software. The emphasis is put on reusability, ease of maintenance in the case of system modification, easy detection of mistakes, and the possibility of use in modern FPGAs. The presented system has been successfully used in data acquisition and preprocessing projects in high-energy physics experiments. It enables easy creation and modification of the control system definition and convenient access to the control and diagnostic blocks. The presented system is an open-source solution and may be adopted by the user for particular needs.
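To illustrate the general idea of generating software access code from an XML system description (the element and attribute names below are invented for illustration, not the schema of the paper's tool):

```python
# Hedged sketch: one XML description of a register block yields the
# address map used by control software; the same source could emit an
# HDL register file, keeping hardware and software in sync.
import xml.etree.ElementTree as ET

XML = """
<block name="daq_ctrl" base="0x1000">
  <reg name="status"  offset="0x0" access="ro"/>
  <reg name="trigger" offset="0x4" access="rw"/>
</block>
"""

def parse_regmap(xml_text):
    """Return {register name: absolute address} from the description."""
    block = ET.fromstring(xml_text)
    base = int(block.get("base"), 16)
    return {r.get("name"): base + int(r.get("offset"), 16)
            for r in block.findall("reg")}

addr = parse_regmap(XML)
print(hex(addr["trigger"]))  # 0x1004
```

Because both code paths derive from the one XML file, a register added or moved in the description cannot silently diverge between firmware and software, which is the error class the paper targets.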
... Additionally, perfect synchronization of data streams received in different channels is required [7]. The programmable devices - Field Programmable Gate Arrays (FPGA) - are usually used to provide those functionalities [8][9][10][11][12][13][14]. Their big advantage is high flexibility, enabling significant changes to the communication protocols or data processing algorithms without modifying the underlying hardware. ...
Preprint
Full-text available
FPGA-based data acquisition and processing systems play an important role in modern high-speed, multichannel measurement systems, especially in High-Energy and Plasma Physics. Such FPGA-based systems require an extended control and diagnostics part corresponding to the complexity of the controlled system. Managing the complex structure of registers while keeping the tight coupling between hardware and software is a tedious and potentially error-prone process. Various existing solutions aimed at helping that task do not perfectly match all specific requirements of that application area. The paper presents a new solution based on the XML system description, facilitating the automated generation of the control system’s HDL code and software components and enabling easy integration with the control software. The emphasis is put on reusability, ease of maintenance in case of system modification, easy detection of mistakes, and the possibility of use in modern FPGAs. The presented system has been successfully used in data acquisition and preprocessing projects in High-Energy Physics experiments. It enables easy creation and modification of the control system definition and convenient access to the control and diagnostic blocks. The presented system is an open-source solution and may be adopted by the user for particular needs.
... The cumbersome task here is to obtain the interface with specialized I/O hardware, which can be different for each application. To solve this, the IRIO-OpenCL BSP implements an interface based on the JEDEC Standard JESD204B [18,19] (to stream data from/to very high-rate ADCs/DACs) and an SPI interface to configure the data converters and clock sources around them. ...
Article
The development of high-performance data acquisition (DAQ) and processing systems is crucial for the next-generation diagnostics used in big science experiments. In the ITER experiment, the instrumentation, control hardware, and software architecture selected for this type of application is called a fast controller. The core element of a fast controller is a chassis based on the use of the PCIe eXtension for Instrumentation (PXIe) or Micro Telecommunication Computing Architecture (MTCA). This paper presents a software framework named IRIO-OpenCL that was developed using the ITER CODAC Core System (CCS) Linux-based distribution, oriented toward the development of field-programmable gate array (FPGA)-based DAQ systems using OpenCL. State-of-the-art DAQ-FPGA systems are developed using hardware description languages (HDLs). The approach used in IRIO-OpenCL simplifies DAQ to enable the user to write C-like processing algorithms with OpenCL, minimizing the use of HDLs. The software has been implemented in C++ following ITER’s Nominal Device Support v3 (NDSv3) model that abstracts and generalizes the development of software device drivers and simplifies the interface with the Experimental Physics and Industrial Control System (EPICS). The framework has been validated in an ITER fast controller including an MTCA.4 chassis with an advanced mezzanine card (AMC) module using an Arria 10 FPGA from Intel FPGA and an FPGA mezzanine card (FMC) DAQ module from Analog Devices. The developed application solves the DAQ and processing problems associated with the neutron flux measurement and achieves a sampling rate of 1 GS/s using approximately 40 % of the FPGA resources. The methodology proposed in this paper reduces the development time of these systems while maintaining high performance.
... Priority encoder is a crucial component in a wide variety of applications, such as content-addressable memory [1] , pattern matching [2] , data acquisition [3] , and bitmap-index-based analytics system [4] . A priority encoder (PE) detects the highest priority match and outputs a matching position, or address, from which corresponding data can be retrieved effectively. ...
Article
Although priority encoders (PEs) play a vital role in many applications, they perform poorly with large input widths. Our previous work tackled this problem by introducing a one-dimensional-array to two-dimensional-array (1D-to-2D) conversion method, which can effectively handle up to a 4,096-bit PE. Building on that result, this paper further explores the benefit of a 1D-to-2D-based multi-match priority encoder (MPE). Specifically, compared with other designs implemented in ASIC and FPGA, the proposed PE and MPE outperformed them by factors of 4.4 and 1.7, respectively. A 256-bit MPE chip fabricated in a 65-nm silicon-on-thin-buried-oxide (SOTB) CMOS process is also reported. The measurements showed that at a 0.9-V supply voltage, the chip was fully operational at 50 MHz and consumed approximately 140.3 µW. More significantly, by exploiting clock-gating and reverse back-gate biasing techniques, the standby leakage current at 0.6 V dropped to 1.12 nA, equivalent to 0.67 nW of standby leakage power.
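The 1D-to-2D conversion idea can be illustrated in software: the wide input is split into rows, a row-level encoder finds the first non-empty row, and a column-level encoder resolves the position within it. A behavioral Python sketch (the hardware is of course parallel logic, not a loop):

```python
# Behavioral model of a 1D-to-2D priority encoder: bit 0 is the
# highest priority; the row length is an illustrative choice.
def priority_encode_2d(bits, row_len=4):
    """Return the index of the highest-priority set bit, or None."""
    rows = [bits[i:i + row_len] for i in range(0, len(bits), row_len)]
    for r, row in enumerate(rows):      # row-level priority encoder
        if any(row):                    # "row has at least one match"
            c = row.index(1)            # column-level priority encoder
            return r * row_len + c
    return None                         # no bit set

print(priority_encode_2d([0, 0, 0, 0, 0, 1, 0, 1]))  # 5
```

Splitting an N-bit encoder into two ~sqrt(N)-bit encoders is what keeps the delay and area growth manageable for large inputs.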
Article
Recently, advancements have been made in the design, implementation, and application of time-to-digital converters (TDCs) based on field-programmable gate array (FPGA) technology. The progress can be attributed to the low cost, short development cycle, and easy integration offered by FPGA platforms, which continuously provide TDCs with state-of-the-art high-performance hardware featuring low latency, low jitter, and multiple interfaces. Moreover, the progress has been driven by the demand for high-performance TDCs tailored to specific applications. The demand has driven the improvement, upgrading, and iterative optimization of various modules within TDC systems. Therefore, we present a comprehensive review and summarize the research reports on FPGA-based TDCs. The aim of the review is to clarify the ongoing efforts to improve measurement resolution and precision, reduce nonlinearity, and enhance the reliability of TDCs in specific applications. This review summarizes and analyzes the development of FPGA-based TDCs from both technical and application perspectives to consolidate past research findings and outline the directions for future research. This paper reviews the latest literature on the implementation architecture and performance of FPGA-based TDCs, refining the classification methods for implementation architectures and summarizing the performance evaluation metrics. Through comparisons and analysis, this review identifies the limitations faced by FPGA-based TDC research, highlights pressing issues that need to be addressed, and explores potential challenges that may arise.
Article
A 2.56 Gbps CMOS CML transceiver is presented. The key feature of the design is its ability to drive a specific inductive load and transmit data to a remote room about 1 meter away from the experimental area. Some of the radiation-tolerance techniques used are shown. The testing methodology is briefly described, and concepts for the test board realization are provided. The transceiver was designed as the interface part of a data concentrator ASIC intended for the front-end electronics of the time-projection chamber of the MPD experiment at the NICA nuclotron.
Article
We report a mixed-signal data acquisition (DAQ) system for optically detected magnetic resonance (ODMR) of solid-state spins. This system is designed and implemented based on a field-programmable-gate-array chip assisted with high-speed peripherals. The ODMR experiments often require high-speed mixed-signal data acquisition and processing for general and specific tasks. To this end, we realized a mixed-signal DAQ system that can acquire both analog and digital signals with precise hardware synchronization. The system consisting of four analog channels (two inputs and two outputs) and 16 optional digital channels works at up to 125 MHz clock rate. With this system, we performed general-purpose ODMR and advanced lock-in detection experiments of nitrogen-vacancy (NV) centers, and the reported DAQ system shows excellent performance in both single and ensemble spin cases. This work provides a uniform DAQ solution for the NV center quantum control system and could be easily extended to other spin-based systems.
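The lock-in detection mentioned above recovers the amplitude of a signal component at a known reference frequency by demodulating with quadrature references and averaging. A minimal numerical sketch (signal parameters are illustrative, not the NV-center setup):

```python
# Digital lock-in detection: multiply the sampled signal by reference
# cos/sin at the modulation frequency and low-pass filter (here a
# plain average over an integer number of reference periods).
import math

def lock_in(samples, f_ref, f_s):
    """Return the amplitude of the component at f_ref; assumes an
    integer number of reference periods in the record."""
    n = len(samples)
    i = sum(s * math.cos(2 * math.pi * f_ref * k / f_s)
            for k, s in enumerate(samples)) / n
    q = sum(s * math.sin(2 * math.pi * f_ref * k / f_s)
            for k, s in enumerate(samples)) / n
    return 2 * math.hypot(i, q)

fs, f = 1000.0, 50.0                    # 50 Hz tone sampled at 1 kS/s
sig = [0.7 * math.sin(2 * math.pi * f * k / fs) for k in range(1000)]
print(round(lock_in(sig, f, fs), 3))    # 0.7
```

Because the demodulation rejects everything not coherent with the reference, the same structure in FPGA fabric recovers weak fluorescence modulation buried in broadband photodetector noise.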
Article
This paper presents the design of a calendar clock as a case study for Field-Programmable Gate Array (FPGA)-based digital electronics education, developed in the context of a distance-learning course run jointly with Delft University, where lectures were shared between classrooms via video link using projectors, cameras, desk microphones, and monitor control panels. The project is a Project-Based Learning (PBL) case study aimed at electronics and computer engineering students, new digital circuit designers, and instructors of digital design. The calendar clock is the second in a series of content-rich, engaging examples designed to exercise counter, multiplexer, comparator, and decoder design skills in a single project. A further option is to use the FPGA device as a hardware accelerator; the application of FPGA hardware accelerators to data mining is discussed for three kinds of algorithms: classification and regression trees, support vector machines, and k-means clustering.
Article
Full-text available
High-resolution analog-to-digital conversion (ADC) is a key instrument for converting analog signals to digital signals, deployed in data acquisition systems to match the high-resolution analog signals from seismometer systems. To achieve high resolution, Σ-Δ oversampling and pipeline ADC architectures suffer from the following disadvantages: high power consumption, low modulator linearity, and complex structure. This work presents a novel model architecture, whose design principle is validated by mathematical formulations, combining the advantages of both the pipeline and the Σ-Δ oversampling ADC architectures. By theoretically discussing the adverse effects of external noise on the whole ADC architecture, an amended theoretical model is proposed according to the assessment results of a noise simulation algorithm. The simulation results show that the overall performance of the combined architecture is determined by the noise level of the integrator and subtractor. Using these two components with a noise index of no more than 10^-7 V/√Hz, the resolution of the prototype can reach 144.5 dB.
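For orientation, the noise-shaping principle behind the Σ-Δ part of such an architecture can be sketched with a first-order modulator, whose output bit density tracks the input level ((x+1)/2 for a ±1 feedback DAC). This is a generic textbook model, not the paper's combined architecture:

```python
# First-order sigma-delta modulator: integrate the error between the
# input and the 1-bit feedback DAC, then quantize the integrator sign.
def sigma_delta(samples):
    """Input samples in [-1, 1]; returns the 1-bit output stream."""
    integ, fb, bits = 0.0, 0.0, []
    for s in samples:
        integ += s - fb                 # accumulate quantization error
        bit = 1 if integ >= 0 else 0    # 1-bit quantizer
        fb = 1.0 if bit else -1.0       # feedback DAC levels +/-1
        bits.append(bit)
    return bits

bits = sigma_delta([0.5] * 1000)        # constant input x = 0.5
print(round(sum(bits) / len(bits), 2))  # 0.75  (= (x + 1) / 2)
```

Averaging (decimating) the bit stream recovers the input with resolution that grows with the oversampling ratio, which is the trade the abstract's combined architecture tries to keep while avoiding the pure Σ-Δ drawbacks.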
Article
Full-text available
COMPASS is a fixed-target experiment at the SPS at CERN dedicated to the study of hadron structure and spectroscopy. Since 2014, a hardware event builder consisting of nine custom designed FPGA-cards replaced the previous online computers increasing compactness and scalability of the DAQ. By buffering data, the system exploits the spill structure of the SPS and averages the maximum on-spill data rate over the whole SPS cycle. From 2016, a crosspoint switch connecting all involved high-speed links shall provide a fully programmable system topology and thus simplifies the compensation for hardware failure and improves load balancing.
Article
Full-text available
The ATCA and μTCA standards include industry-standard data pathway technologies such as Gigabit Ethernet which can be used for control communication, but no specific hardware control protocol is defined. The IPbus suite of software and firmware implements a reliable high-performance control link for particle physics electronics, and has successfully replaced VME control in several large projects. In this paper, we outline the IPbus control system architecture, and describe recent developments in the reliability, scalability and performance of IPbus systems, carried out in preparation for deployment of μTCA-based CMS upgrades before the LHC 2015 run. We also discuss plans for future development of the IPbus suite.
Article
Full-text available
The upgrades of the Belle experiment and the KEKB accelerator aim to increase the data set of the experiment by a factor of 50. This will be achieved by increasing the luminosity of the accelerator, which requires a significant upgrade of the detector. A new pixel detector based on DEPFET technology will be installed to handle the increased reaction rate and provide better vertex resolution. One of the features of the DEPFET detector is a long integration time of 20 µs, which increases the detector occupancy up to 3%. The detector will generate about 2 GB/s of data. An FPGA-based two-level read-out system, the Data Handling Hybrid, was developed for the Belle 2 pixel detector. The system consists of 40 read-out and 8 controller modules. All modules are built in the µTCA form factor using Xilinx Virtex-6 FPGAs and can utilize up to 4 GB of DDR3 RAM. The system was successfully tested in the beam test at DESY in January 2014. The functionality and the architecture of the Belle 2 Data Handling Hybrid system as well as the performance of the system during the beam test are presented in the paper.
Article
Full-text available
The readout chain of the GEM and the silicon detectors of the COMPASS experiment at CERN is based on the APV25 front-end chip. The system utilizes optical fibers for data transmission and is designed to sustain high event rates. Using the multi-readout mode of the APV25, which provides three samples per event, a very good time resolution of the detectors can be achieved. The high trigger rates require an efficient zero-suppression algorithm. The data sparsification, performed in hardware, features an advanced common-mode noise correction utilizing a combination of averaging and histogramming.
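The sparsification scheme described above can be sketched behaviorally: subtract per-channel pedestals, estimate the per-event common mode (a plain median here, standing in for the averaging/histogramming combination), and keep only channels above threshold. All values below are illustrative:

```python
# Behavioral model of hardware zero suppression with common-mode
# correction, as applied to multi-channel front-end ASIC data.
import statistics

def zero_suppress(samples, pedestals, threshold):
    """Return (channel, amplitude) pairs for channels above threshold
    after pedestal and common-mode subtraction."""
    residuals = [s - p for s, p in zip(samples, pedestals)]
    common_mode = statistics.median(residuals)   # per-event baseline shift
    return [(ch, r - common_mode)
            for ch, r in enumerate(residuals)
            if r - common_mode > threshold]

samples = [102, 101, 150, 103, 100]   # raw ADC values for one event
peds = [100, 100, 100, 100, 100]      # per-channel pedestals
print(zero_suppress(samples, peds, 10.0))  # [(2, 48)]
```

A robust common-mode estimator matters because a real hit (channel 2 here) would otherwise bias a simple mean and suppress small signals on neighboring channels.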
Article
Full-text available
With present-day detectors in high energy physics one often faces fast analog pulses of a few nanoseconds length which cover large dynamic ranges. In many experiments both amplitude and timing information have to be measured with high accuracy. Additionally, the data rate per readout channel can reach several MHz, which leads to high demands on the separation of pile-up pulses. For an upgrade of the COMPASS experiment at CERN we have designed the GANDALF transient recorder with a resolution of 12 bit@1 GS/s and an analog bandwidth of 500 MHz. Signals are digitized with high precision and processed by fast algorithms to extract pulse arrival times and amplitudes in real-time and to generate trigger signals for the experiment. With up to 16 analog channels, deep memories and a high data rate interface, this 6U-VME64x/VXS module is not only a dead-time free digitization unit but also has huge numerical capabilities provided by the implementation of a Virtex5-SXT FPGA. Fast algorithms implemented in the FPGA may be used to disentangle possible pile-up pulses and determine timing information from sampled pulse shapes with a time resolution better than 50 ps.
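Extracting a pulse arrival time from sampled data, as the GANDALF firmware does in real time, reduces in its simplest form to interpolating the threshold crossing between two samples. A behavioral sketch (a real implementation would use a digital constant-fraction discriminator and per-channel calibration):

```python
# Sub-sample pulse timing from ADC samples: find the first threshold
# crossing and linearly interpolate between the bracketing samples.
def arrival_time(samples, threshold, dt_ns=1.0):
    """Return the interpolated crossing time in ns (dt_ns = sample
    spacing, 1 ns for a 1 GS/s digitizer), or None if never crossed."""
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < threshold <= b:
            frac = (threshold - a) / (b - a)   # linear interpolation
            return (i - 1 + frac) * dt_ns
    return None

wave = [0, 0, 10, 40, 80, 100, 90, 60]   # 1 GS/s samples of a pulse
print(arrival_time(wave, 50.0))  # crossing between samples 3 and 4 -> 3.25
```

Interpolating on the fast leading edge is what pushes the timing precision well below the 1 ns sampling period, toward the sub-50 ps figure quoted above.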
Conference Paper
Full-text available
With present-day detectors in high energy physics one is often faced with short analog pulses of a few nanoseconds length which may cover large dynamic ranges. In many experiments both amplitude and timing information have to be measured with high accuracy. Additionally, the data rate per readout channel can reach several MHz, which makes high demands on the separation of pile-up pulses. For such applications we have built the GANDALF transient recorder with a resolution of 12bit@1GS/s and an analog bandwidth of 500 MHz. Signals are digitized and processed by fast algorithms to extract pulse arrival times and amplitudes in real-time and to generate experiment trigger signals. With up to 16 analog channels, deep memories and a high data rate interface, this 6U-VME64x/VXS module is not only a dead-time free digitization unit but also has huge numerical capabilities provided by the implementation of a Virtex5-SXT FPGA. Fast algorithms implemented in the FPGA may be used to disentangle possible pile-up pulses and determine timing information from sampled pulse shapes with a time resolution in the picosecond range. Recently the application spectrum has been extended by designing a digital input mezzanine card with 64 differential inputs. This allows for the implementation of TDCs, scalers, mean-timers and logic functions in the GANDALF module.
Article
The aim of PENeLOPE (Precision Experiment on Neutron Lifetime Operating with Proton Extraction) at the Forschungsreaktor München II is a high-precision measurement of the neutron lifetime and thereby an improvement of the parameter's precision by one order of magnitude. In order to achieve a higher accuracy, modern experiments naturally require state-of-the-art readout electronics as well as high-performance data acquisition systems. This paper presents the self-triggering readout system designed for PENeLOPE, which features continuous pedestal tracking, configurable signal detection logic, a floating ground of up to 30 kV, operation in a cryogenic environment, and the novel Switched Enabling Protocol (SEP). The SEP is a time-division multiplexing transport-level protocol developed for a star network topology.
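Continuous pedestal tracking of the kind mentioned above is commonly realized as a gated running average: the baseline estimate follows slow drifts but is frozen while a pulse is present. A generic sketch (the update constant and gate are illustrative, not PENeLOPE's parameters):

```python
# Gated exponential-moving-average pedestal tracker: a generic model of
# continuous baseline estimation in a self-triggering readout.
def track_pedestal(samples, alpha=0.01, gate=20.0):
    """Return the final pedestal estimate; updates are frozen for
    samples deviating from the estimate by more than `gate` counts."""
    ped = float(samples[0])
    for s in samples:
        if abs(s - ped) < gate:        # freeze the update during pulses
            ped += alpha * (s - ped)   # EMA update (a shift-add in HDL)
    return ped

# Baseline at 100 ADC counts with a short pulse of 300 counts in between:
data = [100] * 50 + [300] * 5 + [100] * 50
print(round(track_pedestal(data)))  # 100
```

Choosing alpha as a power of two makes the update a shift and add, which is why this structure maps cheaply onto FPGA fabric.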
Article
PENeLOPE is a neutron lifetime measurement developed at the Technische Universität München and located at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), aiming to achieve a precision of 0.1 seconds. The detector for PENeLOPE consists of about 1250 Avalanche Photodiodes (APDs) with a total active area of 1225 cm². The decay proton detector and electronics will be operated at a high electrostatic potential of −30 kV and in a magnetic field of 0.6 T. This includes shaper, preamplifier, ADC and FPGA cards. In addition, the APDs will be cooled to 77 K. The 1250 APDs are divided into 14 groups of 96 channels, including spares. A 12-bit ADC digitizes the detector signals at 1 MSps. A firmware was developed for the detector featuring a self-triggering readout with continuous pedestal calculation and configurable signal detection. Data transmission and configuration are done via the Switched Enabling Protocol (SEP). It is a time-division multiplexing low-layer protocol which provides deterministic latency for time-critical messages, IPbus, and JTAG interfaces. The network has an n:1 topology, reducing the number of optical links.
Article
At the future Belle II experiment the DEPFET (DEPleted Field Effect Transistor) pixel detector will consist of about 8 million channels and is placed as the innermost detector. Because of its small distance to the interaction region and the high luminosity in Belle II, a data rate of about 22 GB/s is expected for a trigger rate of about 30 kHz with an estimated occupancy of about 3%. Due to the high data rate, a data reduction factor higher than 30 is needed in order to stay inside the specifications of the event builder. The main hardware to reduce the data rate is an xTCA-based Compute Node (CN) developed in cooperation between IHEP Beijing and the University of Giessen. Each node has as its main component a Xilinx Virtex-5 FX70T FPGA and is equipped with 2 x 2 GB RAM, GBit Ethernet and 4 x 6.25 Gb/s optical links. An ATCA carrier board is able to hold up to four CNs and supplies high-bandwidth connections between the four CNs and to the ATCA backplane. To achieve the required data reduction on the CNs, regions of interest (ROI) are used. These regions are calculated in two independent systems by projecting tracks back to the pixel detector. One is the High Level Trigger (HLT), which uses data from the Silicon Vertex Detector (SVD), a silicon strip detector, and outer detectors. The other is the Data Concentrator (DATCON), which calculates ROIs based on SVD data only, in order to catch low-momentum tracks. With this information, only PXD data inside these ROIs will be forwarded to the event builder, while data outside of these regions will be discarded. First results of the test beam time in January 2014 at DESY with a Belle II vertex detector prototype and full DAQ chain will be presented.
Article
The upcoming Belle II experiment is designed to work at a luminosity of 8×10³⁵ cm⁻²s⁻¹, 40 times higher than its predecessor. The pixel detector of Belle II with its ~ 8 million channels will deliver ten times more data than all other sub-detectors together. A data rate of 22 Gbytes/s is expected for a trigger rate of 30 kHz and an estimated pixel detector occupancy of 3%, which is by far exceeding the specifications of the Belle II event builder system. Therefore a realtime data reduction of a factor > 30 is needed. A hardware platform capable of processing this amount of data is the ATCA based Compute Node (CN). Each CN consists of an xTCA carrier board and four AMC/xTCA daughter boards. The carrier board supplies the high bandwidth connectivity to the other CNs via Rocket-IO links. In the current prototype design, each AMC board is equipped with a Xilinx Virtex-5 FX70T FPGA, 4 GB of memory, Gbit Ethernet and two bi-directional optical links allowing for a bandwidth of up to 12.5 Gbits/s. IPMI control of mother and daughter board is foreseen. One ATCA shelf containing 10 mother boards/40 daughter boards is sufficient to process the data from the 40 front end boards. The data reduction on the CN is done in two steps. First, the event data delivered by the front end electronics via optical links is stored in memory until the high level trigger (HLT) decision has been made. The HLT rejects >2/3 of these events. In a second step, pixel data of positively triggered events is reduced with the help of regions of interest (ROI), calculated by the HLT from projecting trajectories back to the pixel detector plane. The design allows additional ROI inputs computed from hit cluster properties or tracklets from the surrounding silicon strip detector. The final data reduction is achieved by sending only data within these ROIs to the main event builder.
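The final ROI-based reduction step can be sketched behaviorally: only pixel hits falling inside at least one region of interest are forwarded to the event builder. The coordinates and box format below are illustrative:

```python
# Behavioral model of ROI-based pixel data reduction: keep a hit only
# if it lies inside some region of interest supplied by the trigger.
def reduce_hits(hits, rois):
    """hits: (row, col) tuples; rois: (r0, c0, r1, c1) inclusive boxes."""
    return [(r, c) for r, c in hits
            if any(r0 <= r <= r1 and c0 <= c <= c1
                   for r0, c0, r1, c1 in rois)]

hits = [(10, 10), (200, 5), (42, 300)]
rois = [(0, 0, 50, 50), (40, 290, 60, 310)]
print(reduce_hits(hits, rois))  # [(10, 10), (42, 300)]
```

In hardware the comparisons against all ROI boundaries run in parallel per hit, so the filter sustains the full input rate while discarding the bulk of the background.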
Conference Paper
The COMPASS experiment (COmpact Muon Proton Apparatus for Structure and Spectroscopy) is a fixed-target experiment located at the CERN Super Proton Synchrotron. The physics program is focused on the study of hadron structure and hadron spectroscopy with high-intensity muon and hadron beams of up to 160 GeV/c for muons and 190 GeV/c for hadrons, respectively. To allow the tracking of charged particles with both very low and very high momenta, COMPASS comprises two magnetic spectrometer stages extending over a total length of 60 m. From the data acquisition point of view, about 200000 analog detector channels have to be read out along the complete experiment. Depending on the detector signal characteristics and the number of channels, this task is realized by front-end electronics using either dedicated ASICs and/or sampling analog-to-digital (ADC) or time-to-digital (TDC) components. The sampling-ADC-based readout system of the COMPASS experiment today comprises over 127k channels equipped with the APV25 front-end ASIC and 5728 direct sampling channels. An important feature from the beginning was the combination of data transfer, clock and trigger distribution, and configuration access within a standardized serial interface between the different ADC modules and the first stage of data concentrator modules. By choosing between a copper or fiber realization of this interface, either a low-cost interconnect or a link with galvanic decoupling can be realized. The ongoing development of the sampling ADC electronics is focused on the migration toward the Advanced Telecom Computing Architecture (ATCA) crate standard, to overcome the backplane bandwidth limitations of VME systems. In addition, the ATCA standard provides better cooling and monitoring capabilities. To simplify the transition to ATCA, the MSADC module was already realized as a mezzanine card, which can be mounted on an ATCA-based carrier card as well. In addition, the MSADC card also fits the MicroTCA form factor, which provides a handy building block for laboratory-based data acquisition systems.
Conference Paper
The architecture, features and performance of the TCS (Trigger Control System) designed and built for COMPASS, a fixed-target experiment at CERN's SPS, are discussed. The TCS is a two-channel optical distribution system which is able to broadcast information from a single source to about 1000 destinations using a passive optical fiber network. The TCS distributes control information and triggers together with an event identifier and provides reference timing in all front-end cards with a precision of 50 ps RMS. Furthermore, it generates random triggers for monitoring the detector performance during data taking.
Article
A new TDC chip is under development for the COMPASS experiment at CERN. The ASIC, which exploits a 0.6 µm CMOS sea-of-gates technology, will allow high-resolution time measurements with a digitization of 75 ps, and an unprecedented degree of flexibility accompanied by high rate capability and low power consumption. Preliminary specifications of this new TDC chip are presented. Furthermore, an FPGA-based readout driver and buffer module serving as an interface between the front-end of the COMPASS detector systems and an optical S-LINK is in development. The same module also serves as a remote fan-out for the COMPASS trigger distribution and time synchronization system. This readout driver monitors the trigger and data flow to and from the front-ends. In addition, a specific data buffer structure and sophisticated data flow control are used to pursue local pre-event building. At start-up the module controls all necessary front-end initializations.