CIE 1931 chromaticity diagram (y axis shown) with comparison between ACES (black), professional digital motion picture cameras (red) and reference cinema/display (blue) color gamuts.

Source publication
Article
Full-text available
The Academy of Motion Picture Arts and Sciences has been pivotal in the inception, design and later adoption of a vendor-agnostic and open framework for color management, the Academy Color Encoding System (ACES), targeting theatrical, TV and animation features, but also still-photography and image preservation at large. For this reason, the Academy...

Contexts in source publication

Context 1
... leads to a plethora of new color-spaces whose optical/digital definitions are, at best, only partially disclosed by camera manufacturers, e.g., ARRI LogC Wide Gamut, RED DRAGONcolorₘ, REDLogFilmₙ/Gammaₙ, Sony S-Logₘ, S-Gamutₙ (m, n = 1, 2, 3 according to the camera manufacturers' color-science versions or generation of the sensors), Panasonic V-Log, GoPro CineForm/ProTune™, etc., cfr. Figure 2 and [17]. The native plates for most of the cinematic content are thus born digitally in these partially undefined spaces, but have one thing in common: they use a scene-referred colorimetry [3,18]. ...
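As a rough illustration of what these partially disclosed log encodings do, the sketch below maps scene-linear light through a generic logarithmic curve. The constants follow ARRI's published LogC (v3, EI 800) log segment, but the linear toe and every other vendor's parameters differ, so treat it as illustrative only, not any camera's actual transfer function.

```python
import numpy as np

def log_encode(x, a=5.555556, b=0.052272, c=0.247190, d=0.385537):
    """Generic log-style encoding of scene-linear light.
    Constants follow the published ARRI LogC (v3, EI 800) log segment,
    ignoring the linear toe; other vendors' curves differ."""
    return c * np.log10(a * x + b) + d

# Scene-linear 18% mid-grey lands near ~0.39 of the code range
print(log_encode(np.array([0.0, 0.18, 1.0])))
```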
Context 2
... and monitors (not only those in the grading theatre/room) are all color-calibrated according to their own reference color standards, dynamic ranges and viewing environments, cfr. Table 1 (and Figure 2). For multiple-delivery shows (e.g., theatrical, web, then SVoD and TV, both HDR and SDR), several color-grading sessions are needed, because there is not yet a universally-accepted HDR-to-SDR gamut-mapping algorithm [19] that is satisfactory from the creative standpoint, and that automatically scales to different shows. ...
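For context, a deliberately naive HDR-to-SDR mapping might look like the sketch below: a global Reinhard-style tone curve followed by a hard clip. It is exactly the kind of one-size-fits-all operator that, per the passage above, fails to satisfy creative intent across shows; the parameter names and values are illustrative.

```python
import numpy as np

def naive_hdr_to_sdr(rgb_linear, peak_nits=1000.0, sdr_white=100.0):
    """Naive HDR-to-SDR: global Reinhard-style tone curve plus hard clip.
    A toy baseline only, not a creatively satisfactory mapping."""
    x = rgb_linear * (peak_nits / sdr_white)  # rescale to SDR-relative exposure
    tone_mapped = x / (1.0 + x)               # Reinhard global operator
    return np.clip(tone_mapped, 0.0, 1.0)     # hard-clip residual out-of-range values
```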
Context 3
... AP1 gamut (Figure 8) was introduced in ACES 1.0 to overcome the first two drawbacks above; it is a subset of AP0 and does not cover the whole standard observer's gamut, yet it is "HDR wide gamut", like ARRI WideGamut, DCI P3 and Rec.2020 (Table 2). In fact, AP1 contains the P3 and Rec.2020 gamuts used in D-Cinema and HDR reference displays (Figure 2). It represents colors that are viewable on displays and projectors, including HDR, without noticeable color skew. ...
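The containment claim is easy to verify numerically from the published CIE xy chromaticities of the primaries: a point-in-triangle test shows that every Rec.2020 primary lies inside the AP1 triangle. A minimal sketch (Rec.2020 coordinates per ITU-R BT.2020, AP1 per the ACES documentation):

```python
import numpy as np

# Published CIE xy chromaticities of the primaries
AP1     = np.array([[0.713, 0.293], [0.165, 0.830], [0.128, 0.044]])
REC2020 = np.array([[0.708, 0.292], [0.170, 0.797], [0.131, 0.046]])

def inside(tri, p):
    """Barycentric point-in-triangle test in the xy chromaticity plane."""
    a, b, c = tri
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, p - a)
    return u >= 0 and v >= 0 and u + v <= 1

# Every Rec.2020 primary lies inside the AP1 triangle
print(all(inside(AP1, p) for p in REC2020))  # -> True
```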
Context 4
... is, therefore, up to camera vendors to provide conversion formulae from their sensors' native colorimetries into ACES, cfr. Figure 2 and Section 2.1. Any color transformation that takes a non-ACES color-space into ACES2065-1 is called an (ACES) Input Transform (also abbreviated as IDT, an acronym that comes from its deprecated, pre-release name: input device transform). ...
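Conceptually, an IDT is a linearization followed by a 3x3 primary conversion. The sketch below shows only that shape; the matrix values are placeholders, not any vendor's actual IDT.

```python
import numpy as np

# Hypothetical IDT: vendors publish (1) a de-log curve and (2) a 3x3 matrix
# from camera-native RGB to ACES2065-1 (AP0) primaries.
M_NATIVE_TO_AP0 = np.array([   # placeholder values, NOT a real vendor matrix
    [0.68, 0.24, 0.08],
    [0.07, 0.88, 0.05],
    [0.02, 0.10, 0.88],
])

def idt(code_values, delog):
    """Input Transform sketch: linearize, then rotate primaries into AP0."""
    scene_linear = delog(code_values)        # undo the camera log curve
    return scene_linear @ M_NATIVE_TO_AP0.T  # per-pixel 3x3 matrix multiply
```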
Context 5
... Output Transform is the composition of a fixed conversion from scene- to output-referred colorimetry, by means of a Reference Rendering Transform (RRT), whose formula closely resembles that of a tone curve, plus an output device transform (ODT) which maps the output-referred CVs out of the RRT into specific colorimetry standards, cfr. Section 2.3 and Figure 2. ...
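The composition can be sketched as below; the S-shaped curve and the gamma stand in for the actual RRT spline and a real ODT (here loosely Rec.709-flavored), both of which are considerably more involved.

```python
import numpy as np

def rrt_like(aces):
    """Placeholder tone-curve-shaped RRT stand-in, not the real RRT spline."""
    return aces / (aces + 0.6)

def odt_rec709_like(odt_in):
    """Simplified ODT stand-in: clamp plus an approximate display gamma."""
    return np.clip(odt_in, 0.0, 1.0) ** (1.0 / 2.4)

def output_transform(aces):
    """Output Transform = RRT (scene- to output-referred) followed by an ODT."""
    return odt_rec709_like(rrt_like(aces))
```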

Citations

... Finally, the averaging step of ray energies takes place in the linear high-dynamic-range (HDR) color space. To obtain a color in the range [0, 1] in the standard-dynamic-range (SDR) space, a tone mapping [36] step should be applied to the averaged color. ...
Article
Full-text available
    Ray tracing algorithms are a category of rendering algorithms that calculate pixel colors by simulating light rays in parallel. Quantum supersampling has achieved a quadratic speedup over the classical Monte Carlo method, but its output image contains many detached, abnormal noisy dots. In this paper, we improve quantum supersampling by replacing the QFT-based phase estimation in quantum supersampling with a robust quantum counting scheme. We perform simulation experiments to show that quantum ray tracing with improved quantum supersampling performs better than the classical path tracing algorithm as well as the original form of quantum supersampling.
    ... And in quantum ray tracing, the time cost is evaluated by the number of queries. Finally, the averaging step of ray energies takes place in the linear high-dynamic-range (HDR) color space. To obtain a color in the range [0, 1] in the standard-dynamic-range (SDR) space, a tone mapping [Arr17] step should be applied to the averaged color. In summary, the framework of quantum ray tracing is illustrated in Fig. 6. ...
    Preprint
    Ray tracing algorithms are a category of rendering algorithms that calculate the color of pixels by simulating the physical movements of a huge number of rays and calculating their energies, which can be implemented in parallel. Meanwhile, the superposition and entanglement properties make quantum computing a natural fit for parallel tasks. Here comes an interesting question: is the inherently parallel quantum computing able to speed up the inherently parallel ray tracing algorithm? The ray tracing problem can be regarded as a high-dimensional numerical integration problem. Suppose $N$ queries are used: classical Monte Carlo approaches have an error convergence of $O(1/\sqrt{N})$, while the quantum supersampling algorithm can achieve an error convergence of approximately $O(1/N)$. However, the output of the original form of quantum supersampling obeys a probability distribution that has a long tail, which shows up as many detached, abnormal noisy dots on images. In this paper, we improve quantum supersampling by replacing the QFT-based phase estimation in quantum supersampling with a robust quantum counting scheme, the QFT-based adaptive Bayesian phase estimation. We quantitatively study and compare the performances of different quantum counting schemes. Finally, we perform simulation experiments to show that quantum ray tracing with improved quantum supersampling performs better than the classical path tracing algorithm as well as the original form of quantum supersampling.
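The classical half of the quoted convergence comparison is easy to check empirically: the toy integration below exhibits the $O(1/\sqrt{N})$ error decay of plain Monte Carlo (the integrand and sample counts are arbitrary choices for illustration).

```python
import numpy as np

# Empirical check of the O(1/sqrt(N)) Monte Carlo error decay quoted above
rng = np.random.default_rng(1)
f = lambda x: np.sin(np.pi * x)   # toy integrand over [0, 1]; true integral = 2/pi
true = 2.0 / np.pi
for n in (100, 10_000, 1_000_000):
    est = f(rng.random(n)).mean()
    print(n, abs(est - true))     # error shrinks roughly 10x per 100x samples
```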
    ... Doing this also bounds the per-pixel error $e = T(\hat{I}) - T(I)$ between the tone-mapped estimate and reference, suppressing outliers and making the optimization more robust in scenes with high dynamic range. We illustrate this improvement in Fig. 4c, where an ACES [Arrighetti 2017] tone-mapping operator is applied to the optimized image. Optimizing w.r.t. the original perceptual error (4) yields a noisy and overly dark image compared to the tone-mapped ground truth. ...
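The bounded-error idea reduces to computing the residual in tone-mapped space. A minimal sketch, with a simple Reinhard curve standing in for the ACES tone-mapping operator used in the paper:

```python
import numpy as np

def tonemapped_error(estimate, reference, T=lambda x: x / (1.0 + x)):
    """Per-pixel error in tone-mapped space, e = T(estimate) - T(reference).
    Because T is compressive and bounded, HDR outliers (fireflies) cannot
    dominate the error, unlike a residual taken in linear radiance."""
    return T(np.asarray(estimate)) - T(np.asarray(reference))
```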
    Preprint
    Full-text available
    Realistic image synthesis involves computing high-dimensional light transport integrals which in practice are numerically estimated using Monte Carlo integration. The error of this estimation manifests itself in the image as visually displeasing aliasing or noise. To ameliorate this, we develop a theoretical framework for optimizing screen-space error distribution. Our model is flexible and works for arbitrary target error power spectra. We focus on perceptual error optimization by leveraging models of the human visual system's (HVS) point spread function (PSF) from halftoning literature. This results in a specific optimization problem whose solution distributes the error as visually pleasing blue noise in image space. We develop a set of algorithms that provide a trade-off between quality and speed, showing substantial improvements over prior state of the art. We perform evaluations using both quantitative and perceptual error metrics to support our analysis, and provide extensive supplemental material to help evaluate the perceptual improvements achieved by our methods.
    ... For example, if two cameras made by different manufacturers were placed side-by-side and recorded the same scene, there would be a visible difference in color between the two shots. All cameras have their formats/colorimetries "designed to account for camera sensors' characteristics" (Arrighetti, 2017). ...
    ... Although the camera provides an initial color space, for television broadcasts color grading does not occur in-camera but in an entirely separate process during post-production. Color grading is typically "set in a TV grading room, dimly lit and equipped with one or more monitors… that are all color-calibrated according to their own reference color standards, dynamic ranges, and viewing environments" (Arrighetti, 2017). The footage has a raw format or log color space, which was embedded from the camera. ...
    ... The footage has a raw format or log color space, which was embedded from the camera. The footage was read from this proprietary format and color corrected from the widest-gamut color space possible (Arrighetti, 2017). ...
    ... For example, CIELAB is a color system defined by the Commission Internationale de l'Eclairage (CIE, 2019); it expresses color as three values: L* for lightness, ranging from black (0) to white (100); a* (denoted by A), ranging from green (−) to red (+); and b* (denoted by B), ranging from blue (−) to yellow (+). Linear extensions of RGB are noted in the present dataset as "Lin.sRGBx". The Academy Color Encoding System (ACES) is a color system advocated by the still and motion picture industry (Arrighetti, 2017; Academy of Motion Picture Arts and Sciences, 2019). It features six color spaces (AP0 Red, AP0 Green, AP0 Blue, AP1 Red, AP1 Green, AP1 Blue); the AP0 represents the smallest set of primaries that includes the entire CIE 1931 standard-observer spectral locus, while the AP1 is conceived with primaries "bent" to be closer to those of display-referred color spaces. ...
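For the "Lin.sRGBx" channels mentioned, linearization presumably follows the standard IEC 61966-2-1 piecewise sRGB curve; a minimal sketch:

```python
import numpy as np

def srgb_to_linear(v):
    """Linearize sRGB code values via the IEC 61966-2-1 piecewise curve
    (presumably how the dataset's 'Lin.sRGBx' channels were obtained)."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

print(srgb_to_linear([0.0, 0.5, 1.0]))  # -> [0., 0.214, 1.]
```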
    Article
    Full-text available
    Coal is an important natural resource for global energy production. However, certain types of coal (e.g., lignite) often contain abundant sulfur (S) which can lead to gaseous sulfur dioxide (SO2) emissions when burned. Such emissions subsequently create sulfuric acid (H2SO4), thus causing highly acidic rain which can alter the pH of soil and surface waters. Traditional laboratory analysis (e.g., dry combustion) is commonly used to characterize the S content of lignite, but such approaches are laborious and expensive. By comparison, proximal sensing techniques such as portable X-ray fluorescence (PXRF) spectrometry, visible near infrared (VisNIR) spectroscopy, and optical sensors (e.g., NixPro) can acquire voluminous data which has been successfully used to elucidate fundamental chemistry in a wide variety of matrices. In this study, four active lignite mines were sampled in North Dakota, USA. A total of 249 samples were dried, powdered, then subjected to laboratory-based dry combustion analysis and scanned with the NixPro, VisNIR, and PXRF sensors. 75% of samples (n=186) were used for model calibration, while 25% (n=63) were used for validation. A strong relationship was observed between dry combustion and PXRF S content (r=0.90). Portable X-ray fluorescence S and Fe as well as various NixPro color data were the most important variables for predicting S content. When using PXRF data in isolation, random forest regression produced a validation R2 of 0.80 in predicting total S content. Combining PXRF + NixPro improved R2 to 0.85. Dry combustion S + PXRF S and Fe correctly identified the source mine of the lignite at 55.42% via discriminant analysis. Adding the NixPro color data to the PXRF and dry combustion data, the location classification accuracy increased to 63.45%. Even with VisNIR reflectance values of 10-20%, spectral absorbance associated with water at 1,940 nm was still observed. Principal component analysis was unable to resolve the mine source of the coal in PCA space, but several NixPro vectors were closely clustered. In sum, the combination of the NixPro optical sensor with PXRF data successfully augmented the predictive capability of S determination in lignite ex-situ. Future studies should extend the approach developed herein to in-situ application with special consideration of moisture and matrix efflorescence effects.
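The modeling pipeline described (75/25 calibration/validation split, random forest regression on PXRF plus color features) has roughly the following shape. The data below is synthetic and the feature layout is illustrative, not the study's actual variables or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical feature table: PXRF elements plus NixPro color channels
# (columns are illustrative stand-ins: [S_pxrf, Fe_pxrf, L*, a*, b*]).
rng = np.random.default_rng(42)
X = rng.random((249, 5))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 249)  # synthetic total S

# Mirror the paper's 75% calibration / 25% validation split
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print(r2_score(y_va, model.predict(X_va)))  # validation R^2
```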
    Article
    Rendering is a computer graphics process that generates images from a three-dimensional scene including models, lights and cameras. Ray tracing is a category of rendering algorithms that calculate pixel colors by simulating the physical movements of light rays such as reflections, refractions and scatterings. The core problem of ray tracing is a high-dimensional numerical integration, and classical Monte Carlo ray tracing requires a large number of rays to reduce error. By leveraging the inherent parallelism of quantum computing, we propose a framework for a quantum ray tracing algorithm, and perform simulation experiments to show that it has a quadratically faster error convergence than classical Monte Carlo ray tracing.
    Article
    Synthesizing realistic images involves computing high-dimensional light-transport integrals. In practice, these integrals are numerically estimated via Monte Carlo integration. The error of this estimation manifests itself as conspicuous aliasing or noise. To ameliorate such artifacts and improve image fidelity, we propose a perception-oriented framework to optimize the error of Monte Carlo rendering. We leverage models based on human perception from the halftoning literature. The result is an optimization problem whose solution distributes the error as visually pleasing blue noise in image space. To find solutions, we present a set of algorithms that provide varying trade-offs between quality and speed, showing substantial improvements over prior state of the art. We perform evaluations using quantitative and perceptual error metrics and provide extensive supplemental material to demonstrate the perceptual improvements achieved by our methods.
    Article
    Full-text available
    The perception of color in sports teams and branding alike is a highly valued asset in broadcasting and marketing that is worth protecting. The appearance of color is impacted by many factors including capture source, ambient light, and the limitations of screen technology. The current industry practice involves manual adjustment by a technician, who corrects the footage from a dozen or more camera feeds in real time. This method adjusts the entire video frame based on the producer's instructions. This paper presents a patent-pending, machine-learning approach titled "ColorNet" that targets brand color areas of a frame to output a color-corrected feed in real time. Initial tests of ColorNet demonstrate the ability to reproduce manually created correction masks with high accuracy, while also being computationally efficient. Discussion includes system training and development and plans for a beta test where the system will be applied during a live broadcast of a sporting event in partnership with Clemson University Athletics.
    Conference Paper
    Sports fans know their team's colors and recognize inconsistencies in brand color display when their beloved team's colors show up incorrectly during a broadcast. Repetition and consistency build brand recognition, and that equity is valuable and worth protecting. Accurate display of specified colors on-screen is impacted by many factors including the capture source, the display screen technology, and the ambient light, which can range broadly throughout the event due to changes in time and weather (for outdoor fields) or mixed artificial lighting (for indoor facilities). Changes to any of these factors demand adjustments of the output in order to maintain visual consistency. According to industry standard practice, color management is handled by a technician who manually corrects footage from up to two dozen camera feeds in real time, adjusting for consistency across all the camera feeds. In contrast, the AI-powered ColorNet system ingests live video, adjusts each video frame with a trained machine-learning model, and outputs a color-corrected feed in real time. This system is demonstrated using Clemson University's brand specification for orange, Pantone 165. The machine-learning model was trained using a dataset of raw and color-corrected videos of Clemson football games. The footage was manually color-corrected using Adobe Premiere Pro. The resulting AI model targets specific color values and adjusts only the targeted brand colors pixel-by-pixel without shifting any other surrounding colors in the frame. This approach generates localized corrections while adjusting automatically to changes in lighting and weather. The ColorNet model reproduces the manually created correction masks with high accuracy while being highly computationally efficient. This approach has the ability to circumvent human error when color-correcting the display of brand colors, while constantly adjusting for negative impacts caused by lighting or weather changes. Progress is currently being made to beta test this model in partnership with Clemson University Athletics during a real-time broadcast of a live sporting event displayed on large-format screens.
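A localized correction of the kind described can be caricatured as a soft color-distance mask: only pixels near the brand color are pulled toward the target, leaving the rest of the frame untouched. The sketch below is a hand-written stand-in for ColorNet's trained model, and the RGB value used for Clemson orange (Pantone 165) is an assumption.

```python
import numpy as np

CLEMSON_ORANGE = np.array([245.0, 102.0, 0.0])  # assumed RGB for Pantone 165

def correct_brand_color(frame, target=CLEMSON_ORANGE, radius=60.0, gain=0.5):
    """Toy stand-in for a learned correction mask: pull only pixels within
    `radius` of the brand color toward the target, pixel by pixel, without
    shifting other colors in the (H, W, 3) float frame."""
    dist = np.linalg.norm(frame - target, axis=-1, keepdims=True)
    mask = np.clip(1.0 - dist / radius, 0.0, 1.0)  # soft per-pixel mask
    return frame + gain * mask * (target - frame)  # localized correction
```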