Adobe Inc.
  • San Jose, United States
Recent publications
High-fidelity 3D assets with materials composed of fibers (including hair), complex layered material shaders, or fine scattering geometry are critical in high-end realistic rendering applications. Rendering such models is computationally expensive due to heavy shaders and long scattering paths. Moreover, implementing the shading and scattering models is non-trivial and has to be done not only in the 3D content authoring software (which is necessarily complex), but also in all downstream rendering solutions. For example, web and mobile viewers for complex 3D assets are desirable, but frequently cannot support the full shading complexity allowed by the authoring application. Our goal is to design a neural representation for 3D assets with complex shading that supports full relightability and full integration into existing renderers. We provide an end-to-end shading solution at the first intersection of a ray with the underlying geometry. All shading and scattering is precomputed and included in the neural asset; no multiple scattering paths need to be traced, and no complex shading models need to be implemented to render our assets, beyond a single neural architecture. We combine an MLP decoder with a feature grid. Shading consists of querying a feature vector, followed by an MLP evaluation producing the final reflectance value. Our method provides high-fidelity shading, close to the ground-truth Monte Carlo estimate even at close-up views. We believe our neural assets could be used in practical renderers, providing significant speed-ups and simplifying renderer implementations.
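The shading procedure described above — query a feature vector from a grid, then decode it with an MLP into a reflectance value — can be illustrated with a minimal NumPy sketch. All names, layer sizes, and the 2D surface parameterization here are illustrative assumptions, not the paper's actual architecture (the real feature grid and decoder are learned and likely higher-dimensional):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "learned" parameters: a 2D feature grid over a surface
# parameterization, and a tiny MLP decoder. Sizes are illustrative only.
GRID_RES, FEAT_DIM, HIDDEN = 32, 8, 16
feature_grid = rng.standard_normal((GRID_RES, GRID_RES, FEAT_DIM))
W1 = rng.standard_normal((FEAT_DIM + 3, HIDDEN)) * 0.1  # +3 for the view direction
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 3)) * 0.1             # 3 outputs: RGB reflectance
b2 = np.zeros(3)

def query_feature(u, v):
    """Bilinearly interpolate the feature grid at surface coords (u, v) in [0, 1]."""
    x, y = u * (GRID_RES - 1), v * (GRID_RES - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, GRID_RES - 1), min(y0 + 1, GRID_RES - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * feature_grid[y0, x0] + fx * feature_grid[y0, x1]
    bot = (1 - fx) * feature_grid[y1, x0] + fx * feature_grid[y1, x1]
    return (1 - fy) * top + fy * bot

def shade(u, v, view_dir):
    """One shading query: feature lookup, then MLP decode to RGB reflectance."""
    h = np.concatenate([query_feature(u, v), view_dir])
    h = np.maximum(W1.T @ h + b1, 0.0)               # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2.T @ h + b2)))    # sigmoid keeps RGB in [0, 1]

rgb = shade(0.5, 0.5, np.array([0.0, 0.0, 1.0]))
```

The key property the abstract highlights survives even in this toy: shading at the first ray intersection costs one grid interpolation plus one small MLP evaluation, with no scattering paths traced at render time.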
We introduce a high-fidelity portrait shadow removal model that can effectively enhance a portrait image by predicting its appearance free of disturbing shadows and highlights. Portrait shadow removal is a highly ill-posed problem where multiple plausible solutions exist for a single image. For example, disentangling complex environmental lighting from original skin color is a non-trivial problem. While existing works have approached this problem by predicting appearance residuals that propagate the local shadow distribution, such methods are often incomplete and lead to unnatural predictions, especially for portraits with hard shadows. We overcome the limitations of existing local propagation methods by formulating the removal problem as a generation task, in which a diffusion model learns to globally rebuild the human appearance from scratch, conditioned on an input portrait image. For robust and natural shadow removal, we propose to train the diffusion model with a compositional repurposing framework: a pre-trained text-guided image generation model is first fine-tuned to harmonize the lighting and color of the foreground with a background scene using a background harmonization dataset, and then further fine-tuned to generate a shadow-free portrait image via a shadow-paired dataset. To overcome the loss of fine details in the latent diffusion model, we propose a guided-upsampling network to restore the original high-frequency details (e.g., wrinkles and dots) from the input image. To enable our compositional training framework, we construct a high-fidelity and large-scale dataset using a lightstage capture system and synthetic graphics simulation. Our generative framework effectively removes shadows caused by both self and external occlusions while maintaining the original lighting distribution and high-frequency details. Our method also demonstrates robustness to diverse subjects captured in real environments.
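The detail-restoration idea — recover high-frequency content that the latent diffusion model loses — has a simple non-learned analogue: transfer the input's high-frequency residual onto the generated image. This NumPy sketch is only that analogue under assumed grayscale images in [0, 1]; the paper's guided-upsampling network is a learned module, and `box_blur`/`restore_details` are hypothetical names:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur with edge padding (grayscale image)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def restore_details(generated, original, k=3):
    """Add the original's high-frequency residual (original minus its blur)
    onto the generated image, clipped back to the valid range."""
    residual = original - box_blur(original, k)
    return np.clip(generated + residual, 0.0, 1.0)

original = np.zeros((8, 8))
original[:, 4:] = 1.0                 # sharp vertical edge (the "fine detail")
generated = box_blur(original)        # stand-in for a detail-losing generator
restored = restore_details(generated, original)
```

In the toy, the restored image recovers the sharp edge that the blurred "generated" image lost; the learned network additionally has to decide which details are lighting-independent and safe to transfer.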
We introduce an algorithm for sketch vectorization that achieves state-of-the-art accuracy and handles complex sketches. We approach sketch vectorization as a surface extraction task from an unsigned distance field, implemented using a two-stage neural network and a post-processing algorithm in the dual contouring domain. The first stage extracts an unsigned distance field from the input raster image. The second stage is an improved neural dual contouring network that is more robust to noisy input and more sensitive to line geometry. To address the under-sampling inherent in grid-based surface extraction approaches, we explicitly predict undersampling and keypoint maps. These are used in our post-processing algorithm to resolve sharp features and multi-way junctions. The keypoint and undersampling maps are naturally controllable, which we demonstrate in an interactive topology refinement interface. Our approach produces far more accurate vectorizations of complex input than previous methods, while maintaining efficient running time.
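The unsigned distance field at the core of this pipeline can be computed in closed form when the stroke geometry is known; the first-stage network instead has to predict this field from raster pixels alone. A minimal NumPy sketch of the quantity itself, for a polyline stroke sampled on a regular grid (function names and the unit-square domain are illustrative assumptions):

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab (all arrays of shape (2,))."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def unsigned_distance_field(polyline, res):
    """UDF of a polyline, sampled on a res x res grid over [0, 1]^2."""
    field = np.full((res, res), np.inf)
    for i in range(res):            # row index -> y coordinate
        for j in range(res):        # column index -> x coordinate
            p = np.array([j / (res - 1), i / (res - 1)])
            for a, b in zip(polyline[:-1], polyline[1:]):
                field[i, j] = min(field[i, j], point_segment_distance(p, a, b))
    return field

stroke = np.array([[0.1, 0.5], [0.9, 0.5]])  # a single horizontal stroke
udf = unsigned_distance_field(stroke, 17)
```

Grid cells where the field crosses near zero are where a contouring stage would place line geometry; the paper's undersampling and keypoint maps then repair the junctions and sharp corners that such a grid inevitably blurs.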
Introduction
Cytological analysis of effusion specimens provides critical information for the diagnosis and staging of malignancies, guiding their treatment and subsequent monitoring. In view of the challenges encountered in morphological interpretation, we explored convolutional neural networks (CNNs) as a tool for the cytological diagnosis of malignant effusions.
Materials and Methods
A retrospective review of patients at our institute over 3.5 years yielded a dataset of 342 effusion samples and 518 images with known diagnoses. Cytological examination and cell block preparation were performed to establish correlation with the gold standard, histopathology. We developed a deep learning model using PyTorch, fine-tuned it on a labelled dataset, and evaluated its diagnostic performance on test samples.
Results
The model showed encouraging results in distinguishing benign from malignant effusions, with an area under the curve (AUC) of 0.8674 and an F-measure (F1 score, the harmonic mean of precision and recall) of 0.8678, demonstrating the accuracy of our CNN model.
Conclusion
The study highlights the promising potential of transfer learning in enhancing clinical pathology laboratory efficiency when dealing with malignant effusions.
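The two reported metrics are standard and easy to state precisely. The sketch below computes both from scratch on toy labels and scores (the data is invented for illustration, not from the study): F1 as the harmonic mean of precision and recall, and AUC via its rank interpretation, the probability that a randomly chosen positive outscores a randomly chosen negative, with ties counted as one half:

```python
def f1_score(y_true, y_pred):
    """F1: harmonic mean of precision and recall for binary 0/1 labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def auc(y_true, scores):
    """AUC as the pairwise rank statistic: P(positive score > negative score),
    counting ties as one half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 1, 0, 0]               # toy ground truth (1 = malignant)
scores = [0.9, 0.8, 0.7, 0.6, 0.2, 0.1]   # toy model probabilities
preds = [int(s >= 0.5) for s in scores]   # threshold at 0.5 for F1
```

Note the two metrics answer different questions: AUC is threshold-free and measures ranking quality, while F1 depends on the chosen decision threshold.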
We have witnessed significant progress in deep learning based 3D vision, ranging from neural radiance field (NeRF) based 3D representation learning to applications in novel view synthesis (NVS). However, existing scene-level datasets for deep learning-based 3D vision, limited to either synthetic environments or a narrow selection of real-world scenes, remain insufficient. This insufficiency not only hinders a comprehensive benchmark of existing methods but also caps what could be explored in deep learning-based 3D analysis. To address this critical gap, we present DL3DV-10K, a large-scale scene dataset featuring 51.2 million frames from 10,510 videos captured from 65 types of point-of-interest (POI) locations, covering both bounded and unbounded scenes with different levels of reflection, transparency, and lighting. We conducted a comprehensive benchmark of recent NVS methods on DL3DV-10K, which revealed valuable insights for future research in NVS. In addition, we have obtained encouraging results in a pilot study to learn a generalizable NeRF from DL3DV-10K, which demonstrates the necessity of a large-scale scene-level dataset to forge a path toward a foundation model for learning 3D representation. Our DL3DV-10K dataset, benchmark results, and models will be publicly accessible.
An interdisciplinary approach to Artificial Intelligence (AI) and Machine Learning (ML) is necessary to address issues arising from the overlap between Reinforcement Learning (RL), ethics, and the law. Some types of RL, owing to their use of evaluative feedback combined with function approximation, give rise to new problem-solving strategies that are not easily foreseen or anticipated, embodying the monkey's paw problem: RL grants what one asked for, not what one should have asked for or what was actually intended. Sometimes these new strategies can be characterized as promoting a social good, but they could also give rise to outcomes that are not aligned with social goods. Control applications in the form of supervised learning (SL)-based solutions may be used to control for unaligned new strategies. These control applications, however, may introduce bias, such that ethical and legal regimes may need to be put in place to correct for such biases. These ethical and legal regimes may be based upon generally agreed-upon social conventions, as traditional ethical regimes in the form of utilitarianism and deontological ethics may provide an incomplete solution. Further, these social conventions may need to be implemented by people, and ultimately by the corporations instructing these people on how to perform their jobs.
476 members
Zhe Lin
  • Adobe Research
Nathan A. Carr
  • Adobe Research
Gavin S. P. Miller
  • Adobe Research
Information
Address
San Jose, United States