Conference Paper

Event Data Stream Compression Based on Point Cloud Representation

... Recent advancements in lossless coding methods for event data have leveraged existing point cloud compression techniques. Martini et al. [4] and Huang et al. [5] independently proposed similar approaches that represent event sequences as 3D points in a space-time volume. In both methods, each point's coordinates correspond to pixel location (x, y) and timestamp t_s, with polarity as an attribute. ...
... Martini et al. [4] split the event sequence into two distinct 3D point clouds according to polarity, i.e., a positive event point cloud and a negative event point cloud, which are compressed separately. Huang et al. [5] first split the event sequence into multiple segments with a fixed number of events each, and then create multiple sets of two 3D point clouds, one 3D point cloud for each polarity. Huang et al. [5] also studied the case where the polarity is regarded as an extra attribute. Both methods apply Geometry-based Point Cloud Compression (G-PCC) to encode the geometric information (x, y, t_s) of each polarity-based point cloud; Huang et al. [5] apply a transformation to the points' coordinates before G-PCC coding to obtain a more appropriate range of coordinate values. ...
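For illustration, the sketch below shows one way to build such polarity-separated point clouds from a raw event list. It is not the authors' code: the function name, segment size, and time scaling factor are assumptions for the example, and the resulting integer geometry would then be handed to a point cloud codec such as G-PCC.

```python
# Illustrative sketch (not code from [4] or [5]): turning an event list
# (x, y, t_s, polarity) into polarity-separated 3D point clouds.
import numpy as np

def events_to_point_clouds(events, events_per_segment=500_000, t_scale=1e6):
    """events: NumPy array of shape (N, 4) with columns x, y, t_s (seconds), polarity (0/1).
    Returns a list of (positive_pc, negative_pc) pairs, one pair per segment,
    with geometry (x, y, t) quantized to integers for a geometry codec such as G-PCC."""
    clouds = []
    for start in range(0, len(events), events_per_segment):
        seg = events[start:start + events_per_segment]
        # Map timestamps to integer coordinates so they span a codec-friendly range.
        t = np.round((seg[:, 2] - seg[0, 2]) * t_scale).astype(np.int64)
        geom = np.stack([seg[:, 0].astype(np.int64),
                         seg[:, 1].astype(np.int64), t], axis=1)
        pos = geom[seg[:, 3] == 1]   # positive-polarity point cloud
        neg = geom[seg[:, 3] == 0]   # negative-polarity point cloud
        clouds.append((pos, neg))
    return clouds
```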
Preprint
Event cameras are a cutting-edge type of visual sensor that captures data by detecting brightness changes at the pixel level asynchronously. These cameras offer numerous benefits over conventional cameras, including high temporal resolution, wide dynamic range, low latency, and lower power consumption. However, the substantial data rates they produce require efficient compression techniques, while also fulfilling other typical application requirements, such as the ability to respond to visual changes in real-time or near real-time. Additionally, many event-based applications demand high accuracy, making lossless coding desirable, as it retains the full detail of the sensor data. Learning-based methods show great potential due to their ability to model the unique characteristics of event data, thus allowing high compression rates to be achieved. This paper proposes a low-complexity lossless coding solution based on the quadtree representation that outperforms traditional compression algorithms in efficiency and speed, ensuring low computational complexity and minimal delay for real-time applications. Experimental results show that the proposed method delivers better compression ratios, i.e., with fewer bits per event, and lower computational complexity compared to current lossless data compression methods.
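As a generic illustration of the kind of quadtree occupancy representation mentioned in this abstract (not the specific codec proposed there), the following sketch recursively signals empty and non-empty quadrants of a binary event frame, so that large empty regions cost a single bit before entropy coding.

```python
# Minimal quadtree occupancy sketch for a binary event frame; a generic
# illustration, not the preprint's actual coding scheme.
import numpy as np

def quadtree_bits(block):
    """Emit 1 for non-empty quadrants and 0 for empty ones;
    leaves of size 1 emit the pixel value itself."""
    if block.size == 1:
        return [int(block[0, 0])]
    if not block.any():
        return [0]            # empty block: a single bit prunes the subtree
    bits = [1]                # non-empty: descend into the four quadrants
    h, w = block.shape
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            bits += quadtree_bits(block[rows, cols])
    return bits

frame = np.zeros((8, 8), dtype=np.uint8)
frame[2, 3] = frame[5, 6] = 1          # two events in the frame
print(len(quadtree_bits(frame)), "bits before entropy coding")
```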
... Existing solutions often code event data as two PCs (one for each polarity) to facilitate processing and separately leverage the relationship between Positive (POS) and Negative (NEG) polarity events. These solutions employ either lossless [7], [8] or lossy [9], [10] coding techniques. In lossless solutions, [7] generates PCs from event data using time slices of various durations (1s, 5s, 10s, 20s, 30s, and 60s), and uses G-PCC lossless coding. ...
... Larger time slices improve compression efficiency but also introduce larger delays and complexity. In [8], different spatial and temporal scaling factors (1×10³ and 1×10⁶, respectively) are applied, along with two aggregation strategies: fixed event count and fixed time interval. The fixed event count strategy yields better compression by enhancing the spatiotemporal redundancy within the PC. ...
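The two aggregation strategies mentioned in this excerpt can be summarized with the short sketch below; the function and parameter names are illustrative and not taken from [8], and the event array is assumed to be sorted by time.

```python
# Sketch of the two aggregation strategies: fixed event count vs. fixed time
# interval, used to group events before point cloud coding.
import numpy as np

def split_by_count(events, n_per_cloud=100_000):
    """Fixed event count: every point cloud contains the same number of events."""
    return [events[i:i + n_per_cloud] for i in range(0, len(events), n_per_cloud)]

def split_by_time(events, interval_s=0.01):
    """Fixed time interval: every point cloud covers the same time span.
    events: (N, 4) NumPy array sorted by time, columns x, y, t (s), polarity."""
    t = events[:, 2]
    edges = np.arange(t[0], t[-1] + interval_s, interval_s)
    return [events[(t >= lo) & (t < hi)] for lo, hi in zip(edges[:-1], edges[1:])]
```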
Preprint
Full-text available
Neuromorphic vision sensors, commonly referred to as event cameras, have recently gained relevance for applications requiring high-speed, high dynamic range and low-latency data acquisition. Unlike traditional frame-based cameras that capture 2D images, event cameras generate a massive number of pixel-level events, composed of spatiotemporal and polarity information, with very high temporal resolution, thus demanding highly efficient coding solutions. Existing solutions focus on lossless coding of event data, assuming that no distortion is acceptable for the target use cases, mostly including computer vision tasks. One promising coding approach exploits the similarity between event data and point clouds, thus allowing current point cloud coding solutions to be used to code event data, typically adopting a two-point-cloud representation, one for each event polarity. This paper proposes a novel lossy Deep Learning-based Joint Event data Coding (DL-JEC) solution adopting a single-point-cloud representation, thus enabling the correlation between the spatiotemporal and polarity event information to be exploited. DL-JEC can achieve significant compression performance gains when compared with relevant conventional and DL-based state-of-the-art event data coding solutions. Moreover, it is shown that it is possible to use lossy event data coding, with its reduced rate relative to lossless coding, without compromising the target computer vision task performance, notably for event classification. The use of novel adaptive voxel binarization strategies, adapted to the target task, further enables DL-JEC to reach a superior performance.
... Experimental results also show that ELC-ARES provides average event encoding speed reductions of 21.41%, 91.10%, and 54.74% compared to Bzip2, LZMA and ZLib, respectively, and the LLC-ARES benchmark solution achieves an average event encoding speed 46% lower than the proposed ELC-ARES solution; the average event encoding speed metric measures the (average) time needed to encode one event (see TABLE IV). 14) EVENT DATA STREAM COMPRESSION BASED ON POINT CLOUD REPRESENTATION [36] In 2023, Huang and Ebrahimi proposed a lossless coding solution based on a point cloud representation to code event data generated from a DAVIS event camera (consisting of location, polarity, and timestamp) [36]. ...
Article
Full-text available
In recent years, visual sensors have been quickly improving towards mimicking the visual information acquisition process of the human brain by responding to illumination changes as they occur in time rather than at fixed time intervals. In this context, the so-called neuromorphic vision sensors depart from the conventional frame-based image sensors by adopting a paradigm shift in the way visual information is acquired. This new way of visual information acquisition enables faster and asynchronous per-pixel responses/recordings driven by the scene dynamics with a very high dynamic range and low power consumption. However, the huge amount of data outputted by the emerging neuromorphic vision sensors critically demands highly efficient coding solutions so that applications may take full advantage of these new, attractive sensors' capabilities. For this reason, considerable research efforts have been invested in recent years towards developing increasingly efficient neuromorphic vision data coding (NVDC) solutions. In this context, the main objective of this paper is to provide a comprehensive overview of NVDC solutions in the literature, guided by a novel classification taxonomy, which allows better organizing this emerging field. In this way, more solid conclusions can be drawn about the current NVDC status quo, thus helping to better drive future research and standardization developments in this emerging technical area.
... While we considered three different point cloud sizes (i.e., splitting a large point cloud into smaller point clouds in the time domain), we did not consider time aggregation and focused on fully lossless compression. The same approach was very recently used in [32], where a different point cloud codec (Draco) was also tested in addition to G-PCC, and the results were compared with the benchmark results we provided in [31]. ...
Article
Full-text available
Neuromorphic Vision Sensors (NVSs) are emerging sensors that acquire visual information asynchronously when changes occur in the scene. Their advantages versus synchronous capturing (frame-based video) include a low power consumption, a high dynamic range, an extremely high temporal resolution, and lower data rates. Although the acquisition strategy already results in much lower data rates than conventional video, NVS data can be further compressed. For this purpose, we recently proposed Time Aggregation-based Lossless Video Encoding for Neuromorphic Vision Sensor Data (TALVEN), consisting of the time aggregation of NVS events in the form of pixel-based event histograms, arrangement of the data in a specific format, and lossless compression inspired by video encoding. In this paper, we still leverage time aggregation but, rather than performing encoding inspired by frame-based video coding, we encode an appropriate representation of the time-aggregated data via point-cloud compression (similar to another one of our previous works, where time aggregation was not used). The proposed strategy, Time-Aggregated Lossless Encoding of Events based on Point-Cloud Compression (TALEN-PCC), outperforms the originally proposed TALVEN encoding strategy for the content in the considered dataset. The gain in terms of the compression ratio is the highest for low-event rate and low-complexity scenes, whereas the improvement is minimal for high-complexity and high-event rate scenes. According to experiments on outdoor and indoor spike event data, TALEN-PCC achieves higher compression gains for time aggregation intervals of more than 5 ms. However, the compression gains are lower when compared to state-of-the-art approaches for time aggregation intervals of less than 5 ms.
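A rough sketch of the time-aggregation idea described in this abstract is given below: events are accumulated into per-pixel histograms over a fixed interval, and the non-zero bins are treated as points (x, y, bin) with a count attribute that a point cloud codec could then compress. The parameter names and interval value are illustrative, not taken from the paper.

```python
# Illustrative time-aggregation sketch: events -> per-pixel histograms -> point cloud.
import numpy as np

def aggregate_to_point_cloud(events, width, height, interval_s=0.005):
    """events: (N, 4) NumPy array with columns x, y, t (seconds), polarity."""
    bins = ((events[:, 2] - events[0, 2]) // interval_s).astype(np.int64)
    hist = np.zeros((bins.max() + 1, height, width), dtype=np.uint16)
    np.add.at(hist, (bins, events[:, 1].astype(int), events[:, 0].astype(int)), 1)
    b, y, x = np.nonzero(hist)           # geometry of the aggregated point cloud
    counts = hist[b, y, x]               # attribute: number of events per bin
    return np.stack([x, y, b], axis=1), counts
```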
Article
The letter proposes an efficient context-adaptive lossless compression method for encoding event frame sequences. A first contribution proposes the use of a deep-ternary-tree of the current pixel position context as the context-tree model selector. The arithmetic codec encodes each trinary symbol using the probability distribution of the associated context-tree-leaf model. Another contribution proposes a novel context design based on several frames, where the context order controls the codec's complexity. Another contribution proposes a model search procedure to replace the context-tree prune-and-encode strategy by searching for the closest “mature” context model between lower-order context-tree models. The experimental evaluation shows that the proposed method provides an improved coding performance of 34.34% and a smaller runtime of up to 5.18× compared with the state-of-the-art lossless image codec FLIF and, respectively, 6.95% and 14.42× compared with our prior work.
Preprint
Emerging event cameras acquire visual information by detecting time domain brightness changes asynchronously at the pixel level and, unlike conventional cameras, are able to provide high temporal resolution, very high dynamic range, low latency, and low power consumption. Considering the huge amount of data involved, efficient compression solutions are very much needed. In this context, this paper presents a novel deep-learning-based lossless event data compression scheme based on octree partitioning and a learned hyperprior model. The proposed method arranges the event stream as a 3D volume and employs an octree structure for adaptive partitioning. A deep neural network-based entropy model, using a hyperprior, is then applied. Experimental results demonstrate that the proposed method outperforms traditional lossless data compression techniques in terms of compression ratio and bits per event.
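As context for the octree partitioning step mentioned in this abstract, the sketch below derives octree occupancy symbols from a voxelized event volume. It is a generic octree illustration under the assumption of a cubic, power-of-two volume; the learned hyperprior entropy model of the paper is not reproduced here, and a real codec would entropy-code these symbols with such a model.

```python
# Generic octree occupancy sketch for a binary (t, y, x) event volume.
import numpy as np

def octree_symbols(vol):
    """vol: cubic binary array with power-of-two side length.
    Returns one 8-bit occupancy symbol per internal node, depth-first."""
    if vol.shape[0] == 1 or not vol.any():
        return []
    half = vol.shape[0] // 2
    children = [vol[a:a + half, b:b + half, c:c + half]
                for a in (0, half) for b in (0, half) for c in (0, half)]
    symbol = sum((1 << i) for i, ch in enumerate(children) if ch.any())
    out = [symbol]
    for ch in children:
        if ch.any():
            out += octree_symbols(ch)
    return out
```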
Article
As the use of neuromorphic, event-based vision sensors expands, the need for compression of their output streams has increased. While their operational principle ensures event streams are spatially sparse, the high temporal resolution of the sensors can result in high data rates from the sensor depending on scene dynamics. For systems operating in communication-bandwidth-constrained and power-constrained environments, it is essential to compress these streams before transmitting them to a remote receiver. Therefore, we introduce a flow-based method for the real-time asynchronous compression of event streams as they are generated. This method leverages real-time optical flow estimates to predict future events without needing to transmit them, therefore, drastically reducing the amount of data transmitted. The flow-based compression introduced is evaluated using a variety of methods including spatiotemporal distance between event streams. The introduced method itself is shown to achieve an average compression ratio of 2.70 on a variety of event-camera datasets with the evaluation configuration used. That compression is achieved with a median temporal error of 0.31 ms and an average spatiotemporal event-stream distance of 4.72. When combined with LZMA compression for non-real-time applications, our method can achieve state-of-the-art average compression ratios ranging from 9.29 to 13.01. Additionally, we demonstrate that the proposed prediction algorithm is capable of performing real-time, low-latency event prediction.
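The prediction idea behind this flow-based scheme can be illustrated with a toy sketch: given per-pixel optical flow estimates, an event observed at (x, y, t) is expected to reappear displaced along the flow after dt seconds, so predicted events need not be transmitted and only deviations from the prediction would be. This is a simplified reading of the abstract, not the paper's algorithm; names and shapes are assumptions.

```python
# Toy flow-based event prediction sketch.
import numpy as np

def predict_events(events, flow, dt):
    """events: (N, 3) array of (x, y, t); flow: (H, W, 2) array of (u, v) in px/s.
    Assumes event coordinates fall inside the flow map."""
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    u = flow[y, x, 0]
    v = flow[y, x, 1]
    pred = np.stack([events[:, 0] + u * dt,
                     events[:, 1] + v * dt,
                     events[:, 2] + dt], axis=1)
    return pred
```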
Article
Full-text available
Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (in the order of μs), very high dynamic range (140dB vs. 60dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as low-latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
Article
Full-text available
This article presents an overview of the recent standardization activities for point cloud compression (PCC). A point cloud is a 3D data representation used in diverse applications associated with immersive media including virtual/augmented reality, immersive telepresence, autonomous driving and cultural heritage archival. The international standard body for media compression, also known as the Moving Picture Experts Group (MPEG), is planning to release in 2020 two PCC standard specifications: video-based PCC (V-PCC) and geometry-based PCC (G-PCC). V-PCC and G-PCC will be part of the ISO/IEC 23090 series on the coded representation of immersive media content. In this paper, we provide a detailed description of both codec algorithms and their coding performances. Moreover, we will also discuss certain unique aspects of point cloud compression.
Article
Full-text available
New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called events) and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows comparing the pose accuracy of egomotion estimation algorithms quantitatively. All the data are released both as standard text files and binary files (i.e., rosbag). This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data.
Article
Full-text available
This paper describes a 128 × 128 pixel CMOS vision sensor. Each pixel independently and in continuous time quantizes local relative intensity changes to generate spike events. These events appear at the output of the sensor as an asynchronous stream of digital pixel addresses. These address-events signify scene reflectance change and have sub-millisecond timing precision. The output data rate depends on the dynamic content of the scene and is typically orders of magnitude lower than those of conventional frame-based imagers. By combining an active continuous-time front-end logarithmic photoreceptor with a self-timed switched-capacitor differencing circuit, the sensor achieves an array mismatch of 2.1% in relative intensity event threshold and a pixel bandwidth of 3 kHz under 1 klux scene illumination. Dynamic range is > 120 dB and chip power consumption is 23 mW. Event latency shows weak light dependency with a minimum of 15 μs at > 1 klux pixel illumination. The sensor is built in a 0.35 μm 4M2P process. It has 40 × 40 μm² pixels with 9.4% fill factor. By providing high pixel bandwidth, wide dynamic range, and precisely timed sparse digital output, this silicon retina provides an attractive combination of characteristics for low-latency dynamic vision under uncontrolled illumination with low post-processing requirements.
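The pixel behaviour described in this abstract can be approximated with a simple software model: a pixel emits an ON/OFF address-event whenever its log intensity moves away from the last reference level by more than a contrast threshold. The sketch below is only a didactic, frame-sampled approximation (a real DVS operates asynchronously in continuous time), and the threshold value is an assumption.

```python
# Simplified DVS event-generation model driven by sampled intensity frames.
import numpy as np

def dvs_events_from_frames(frames, timestamps, threshold=0.15):
    """frames: list of (H, W) intensity arrays; returns (x, y, t, polarity) tuples."""
    ref = np.log(frames[0] + 1e-6)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        logI = np.log(frame + 1e-6)
        diff = logI - ref
        for pol, mask in ((+1, diff > threshold), (-1, diff < -threshold)):
            ys, xs = np.nonzero(mask)
            events += [(x, y, t, pol) for x, y in zip(xs, ys)]
            ref[mask] = logI[mask]       # update the reference level after firing
    return events
```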
Article
The letter introduces an efficient context-based lossless image codec for encoding event camera frames. The asynchronous events acquired by an event camera in a spatio-temporal neighborhood are collected by an Event Frame (EF). A more efficient way to store the event data is introduced, where up to eight EFs are represented as a pair of an event map image (EMI), containing the spatial information, and a vector, containing the polarity information. The proposed codec encodes the EMI using: (i) a binary map, which signals the positions where at least one event occurs in the EFs; (ii) the number of events for each signalled position; and (iii) their EF index. Template context modelling and adaptive Markov modelling are employed to encode these three types of data. The experimental evaluation for EFs generated at different time intervals demonstrates that the proposed method achieves an improved performance compared with both video coding standards, HEVC and VVC, and state-of-the-art lossless image coding methods, CALIC and FLIF. When all events are collected by EFs, a relative compression of up to 5.8 is achieved compared with the raw event encapsulation provided by the event camera.
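One plausible reading of the data split described in this letter is sketched below: up to eight event frames (pixels in {-1, 0, +1}) are summarized by a binary occupancy map, per-position event counts, and the frame indices of the events, plus a separate polarity vector. The exact EMI construction and bitstream layout of the letter differ; this only illustrates the general decomposition, and all names are assumptions.

```python
# Sketch of splitting a group of event frames into spatial and polarity parts.
import numpy as np

def pack_event_frames(efs):
    """efs: array of shape (K<=8, H, W) with values in {-1, 0, +1}."""
    occupied = (efs != 0)
    binary_map = occupied.any(axis=0)                # (i) spatial occupancy map
    counts = occupied.sum(axis=0)[binary_map]        # (ii) events per occupied pixel
    frame_idx, ys, xs = np.nonzero(occupied)         # (iii) which frame fired where
    polarity = (efs[frame_idx, ys, xs] > 0)          # polarity vector (True = positive)
    return binary_map, counts, frame_idx, polarity
```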
Article
This paper presents a novel end-to-end Learned Point Cloud Geometry Compression (a.k.a., Learned-PCGC) system, leveraging stacked Deep Neural Networks (DNN) based Variational AutoEncoder (VAE) to efficiently compress the Point Cloud Geometry (PCG). In this systematic exploration, PCG is first voxelized, and partitioned into non-overlapped 3D cubes, which are then fed into stacked 3D convolutions for compact latent feature and hyperprior generation. Hyperpriors are used to improve the conditional probability modeling of entropy-coded latent features. A Weighted Binary Cross-Entropy (WBCE) loss is applied in training while an adaptive thresholding is used in inference to remove false voxels and reduce the distortion. Objectively, our method exceeds the Geometry-based Point Cloud Compression (G-PCC) algorithm standardized by the Moving Picture Experts Group (MPEG) with a significant performance margin, e.g., at least 60% BD-Rate (Bjöntegaard Delta Rate) savings, using common test datasets, and other public datasets. Subjectively, our method has presented better visual quality with smoother surface reconstruction and appealing details, in comparison to all existing MPEG standard compliant PCC methods. Our method requires about 2.5 MB parameters in total, which is a fairly small size for practical implementation, even on embedded platform. Additional ablation studies analyze a variety of aspects (e.g., thresholding, kernels, etc) to examine the generalization, and application capacity of our Learned-PCGC. We would like to make all materials publicly accessible at https://njuvision.github.io/PCGCv1/ for reproducible research.
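The weighted binary cross-entropy mentioned in this abstract can be written in a few lines: occupied voxels (the rare class) receive a larger weight than empty ones so the network does not collapse to predicting everything empty. The weights and the NumPy-only formulation below are illustrative assumptions; the actual system trains stacked 3D convolutional VAEs.

```python
# Minimal weighted binary cross-entropy sketch for voxel occupancy prediction.
import numpy as np

def weighted_bce(pred, target, w_occupied=3.0, w_empty=1.0, eps=1e-7):
    """pred: predicted occupancy probabilities in (0, 1); target: 0/1 voxels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    loss = -(w_occupied * target * np.log(pred)
             + w_empty * (1.0 - target) * np.log(1.0 - pred))
    return loss.mean()
```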
Article
Neuromorphic vision sensors such as the Dynamic and Active-pixel Vision Sensor (DAVIS) using silicon retina are inspired by biological vision; they generate streams of asynchronous events to indicate local log-intensity brightness changes. Their properties of high temporal resolution, low bandwidth, lightweight computation, and low latency make them a good fit for many applications of motion perception in the intelligent vehicle. However, as a younger and smaller research field compared to classical computer vision, neuromorphic vision is rarely connected with the intelligent vehicle. For this purpose, we present three novel datasets recorded with DAVIS sensors and a depth sensor for distracted driving research, focusing on driver drowsiness detection, driver gaze-zone recognition, and driver hand-gesture recognition. To facilitate the comparison with classical computer vision, we record the RGB, depth and infrared data with a depth sensor simultaneously. The total volume of this dataset has 27360 samples. To unlock the potential of neuromorphic vision on the intelligent vehicle, we utilize three popular event-encoding methods to convert asynchronous event slices to event-frames and adapt state-of-the-art convolutional architectures to extensively evaluate their performances on this dataset. Together with qualitative and quantitative results, this work provides a new database and baseline evaluations named NeuroIV in cross-cutting areas of neuromorphic vision and intelligent vehicle.
Article
Dynamic vision sensor (DVS) as a bio-inspired camera, has shown great advantages in wide dynamic range and high temporal resolution imaging in contrast to conventional frame-based cameras. Its ability to capture high speed moving objects enables fast and accurate detection which plays a significant role in the emerging intelligent driving applications. The pixels in DVS independently respond to the luminance changes with output spikes. Thus, the spike stream conveying the x-, y-addresses, the firing time, and the polarity (ON/OFF), is quite different from conventional video frames. How to compress this kind of new data for efficient transmission and storage remains a big challenge, especially for on-board detection, monitoring and recording. To address this challenge, this paper first analyzes the spike firing mechanism and the spatiotemporal characteristics of the spike data, then introduces a cube-based spike coding framework for DVS. In the framework, an octree-based structure is proposed to adaptively partition the spike stream into coding cubes in both spatial and temporal dimensions, then several prediction modes are designed to exploit the spatial and temporal characteristics of spikes for compression, including address-prior mode and time-prior mode. To explore more flexibility, the intercube prediction is discussed extensively involving motion estimation and motion compensation. Finally, the experimental results demonstrate that our approach achieves an impressive coding performance with the average compression ratio of 2.6536 against the raw spike data, which is much higher than the results of conventional lossless coding algorithms.
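The address-prior and time-prior modes mentioned above suggest two complementary ways of organizing the spikes inside a coding cube, sketched below: grouping by firing time (addresses per time step) versus grouping by pixel address (firing times per pixel). The actual mode decision, prediction, and entropy coding of the framework are not reproduced here; these helper names are illustrative.

```python
# Two intra-cube spike groupings: address-prior vs. time-prior (illustrative).
from collections import defaultdict

def address_prior(spikes):
    """spikes: iterable of (x, y, t, polarity); returns {t: [(x, y, polarity), ...]}."""
    groups = defaultdict(list)
    for x, y, t, p in spikes:
        groups[t].append((x, y, p))
    return groups

def time_prior(spikes):
    """Returns {(x, y): [(t, polarity), ...]} for the same cube of spikes."""
    groups = defaultdict(list)
    for x, y, t, p in spikes:
        groups[(x, y)].append((t, p))
    return groups
```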
Article
The development of real-time 3D sensing devices and algorithms (e.g., multiview capturing systems, Time-of-Flight depth cameras, LIDAR sensors), as well as the widespread adoption of enhanced user applications processing 3D data, have motivated the investigation of innovative and effective coding strategies for 3D point clouds. Several compression algorithms, as well as some standardization efforts, have been proposed in order to achieve high compression ratios and flexibility at a reasonable computational cost. This paper presents a transform-based coding strategy for dynamic point clouds that combines a non-linear transform for geometric data with a linear transform for color data; both operations are region-adaptive in order to fit the characteristics of the input 3D data. Temporal redundancy is exploited both in the adaptation of the designed transform and in predicting the attributes at the current instant from the previous ones. Experimental results showed that the proposed solution obtained a significant bit rate reduction in lossless geometry coding and an improved rate-distortion performance in the lossy coding of color components with respect to state-of-the-art strategies.
Article
Dynamic Vision Sensors (DVS) are emerging neuromorphic visual capturing devices, with great advantages in terms of low power consumption, wide dynamic range, and high temporal resolution in diverse applications such as autonomous driving, robotics, tactile sensing and drones. The capturing method results in lower data rates than conventional video. Still, such data can be further compressed. Recent research has shown great benefits of temporal data aggregation on event-based vision data utilization. According to recent results, time aggregation of DVS data not only reduces the data rate but improves classification and object detection accuracy. In this work, we propose a compression strategy, Time Aggregation based Lossless Video Encoding for Neuromorphic Vision Sensor Data (TALVEN), which utilizes temporal data aggregation, arrangement of the data in a specific format and lossless video encoding techniques to achieve high compression ratios. The detailed experimental analysis on outdoor and indoor datasets shows that our proposed strategy achieves superior compression ratios than the best state-of-the-art strategies.
Article
Event cameras are bio-inspired sensors that work radically different from traditional cameras. Instead of capturing images at a fixed rate, they measure per-pixel brightness changes asynchronously. This results in a stream of events, which encode the time, location and sign of the brightness changes. Event cameras possess outstanding properties compared to traditional cameras: very high dynamic range (140 dB vs. 60 dB), high temporal resolution (in the order of microseconds), low power consumption, and do not suffer from motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as high speed and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
Preprint
Thanks to the rapid proliferation of connected devices, sensor-generated time series constitute a large and growing portion of the world's data. Often, this data is collected from distributed, resource-constrained devices and centralized at one or more servers. A key challenge in this setup is reducing the size of the transmitted data without sacrificing its quality. Lower quality reduces the data's utility, but smaller size enables both reduced network and storage costs at the servers and reduced power consumption in sensing devices. A natural solution is to compress the data at the sensing devices. Unfortunately, existing compression algorithms either violate the memory and latency constraints common for these devices or, as we show experimentally, perform poorly on sensor-generated time series. We introduce a time series compression algorithm that achieves state-of-the-art compression ratios while requiring less than 1KB of memory and adding virtually no latency. This method is suitable not only for low-power devices collecting data, but also for servers storing and querying data; in the latter context, it can decompress at over 3GB/s in a single thread, even faster than many algorithms with much lower compression ratios. A key component of our method is a high-speed forecasting algorithm that can be trained online and significantly outperforms alternatives such as delta coding. Extensive experiments on datasets from many domains show that these results hold not only for sensor data but also across a wide array of other time series.
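To illustrate why the forecasting component described in this abstract helps, the sketch below uses the simplest predictor (delta coding, explicitly named above as a baseline): residuals after prediction are concentrated near zero and compress better than the raw samples, while decoding remains exactly lossless. The trained online forecaster of the paper is not reproduced here.

```python
# Delta coding as a minimal forecasting baseline for integer time series.
import numpy as np

def delta_encode(samples):
    return np.diff(samples, prepend=0)      # residual[i] = x[i] - x[i-1], x[-1] = 0

def delta_decode(residuals):
    return np.cumsum(residuals)             # exact, lossless inverse

x = np.array([1000, 1002, 1003, 1003, 1005, 1008], dtype=np.int64)
r = delta_encode(x)
assert np.array_equal(delta_decode(r), x)
print(r)   # [1000 2 1 0 2 3]: small values instead of large raw readings
```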
Article
A VLSI architecture is proposed for the realization of real-time two-dimensional (2-D) image filtering in an address-event-representation (AER) vision system. The architecture is capable of implementing any convolutional kernel F(x, y) as long as it is decomposable into x-axis and y-axis components, i.e., F(x, y) = H(x)V(y), for some rotated coordinate system {x, y}, and if this product can be approximated safely by a signed minimum operation. The proposed architecture is intended to be used in a complete vision system, known as the boundary contour system and feature contour system (BCS-FCS) vision model, proposed by Grossberg and collaborators. The present paper proposes the architecture, provides a circuit implementation using MOS transistors operated in weak inversion, and shows behavioral simulation results at the system level operation and some electrical simulations. Index Terms---Analog integrated circuits, communication systems, convolution circuits, Gabor filters, image anal...
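The signed-minimum approximation named in this abstract replaces the separable product H(x)V(y) with sign(H(x)V(y)) · min(|H(x)|, |V(y)|), which is cheaper to realize in analog AER hardware. The short numerical check below only illustrates the operation on sample vectors; the vectors themselves are arbitrary example values.

```python
# Numerical illustration of the signed-minimum approximation of a separable kernel.
import numpy as np

def signed_min_kernel(h, v):
    """Approximate the outer product h[:, None] * v[None, :] with a signed minimum."""
    H, V = np.meshgrid(h, v, indexing="ij")
    return np.sign(H * V) * np.minimum(np.abs(H), np.abs(V))

h = np.array([-0.2, 0.8, -0.2])       # example x-axis component
v = np.array([0.1, 0.5, 0.1])         # example y-axis component
exact = np.outer(h, v)
approx = signed_min_kernel(h, v)
print(np.max(np.abs(exact - approx)))  # approximation error on this example
```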
A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor
  • Patrick Lichtsteiner
  • Christoph Posch
  • Tobi Delbruck
Learning-based lossless compression of 3d point cloud geometry
  • Dat Thanh Nguyen
  • Maurice Quach
  • Giuseppe Valenzise
  • Pierre Duhamel