
Magic Decorator: Automatic Material Suggestion for Indoor Digital Scenes

Abstract

Assigning textures and materials within 3D scenes is a tedious and labor-intensive task. In this paper, we present Magic Decorator, a system that automatically generates material suggestions for 3D indoor scenes. To achieve this goal, we introduce local material rules, which describe typical material patterns for a small group of objects or parts, and global aesthetic rules, which account for the harmony among the entire set of colors in a specific scene. Both rules are obtained from collections of indoor scene images. We cast the problem of material suggestion as a combinatorial optimization considering both local material and global aesthetic rules. We have tested our system on various complex indoor scenes. A user study indicates that our system can automatically and efficiently produce a series of visually plausible material suggestions which are comparable to those produced by artists.
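The abstract frames material suggestion as a combinatorial optimization over local material rules and global aesthetic rules. As a rough illustration of how such a discrete search could be set up (not the paper's actual formulation), here is a minimal simulated-annealing sketch in Python; `pair_score`, `harmony_score`, and the other names are hypothetical placeholders for learned rule terms.

```python
import math
import random

def suggest_materials(objects, candidates, neighbor_pairs,
                      pair_score, harmony_score,
                      iters=5000, t0=1.0, cooling=0.999):
    """Simulated annealing over discrete material assignments (illustrative sketch).

    objects:        list of object/part identifiers
    candidates:     dict mapping each object to its list of allowed materials
    neighbor_pairs: list of (object_a, object_b) pairs treated as "local" groups
    pair_score:     penalty for a material pair under the local rules (hypothetical)
    harmony_score:  penalty for the scene-wide material set under the global rules (hypothetical)
    """
    def energy(assign):
        local = sum(pair_score(assign[a], assign[b]) for a, b in neighbor_pairs)
        global_term = harmony_score([assign[o] for o in objects])
        return local + global_term

    # Start from a random assignment and anneal toward low-energy suggestions.
    assign = {o: random.choice(candidates[o]) for o in objects}
    cost = energy(assign)
    temp = t0
    for _ in range(iters):
        obj = random.choice(objects)                  # perturb one object's material
        proposal = dict(assign)
        proposal[obj] = random.choice(candidates[obj])
        new_cost = energy(proposal)
        # Accept improvements always; accept worse assignments with a
        # temperature-dependent probability to escape local minima.
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / temp):
            assign, cost = proposal, new_cost
        temp *= cooling
    return assign
```

Running the sampler several times with different seeds would yield a series of distinct low-energy assignments, matching the idea of offering multiple plausible suggestions.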
... Their method builds upon part-based shape representations and infers part colors by leveraging shape similarity and color compatibility between adjacent parts. (Chen et al. 2015) optimize color suggestions for 3D scenes that satisfy user constraints and maximize an aesthetic score. ...
Article
Automatic generation of 3D visual content is a fundamental problem that sits at the intersection of visual computing and artificial intelligence. So far, most existing works have focused on geometry synthesis. In contrast, advances in automatic synthesis of color information, which conveys rich semantic information of 3D geometry, remain rather limited. In this paper, we propose to learn a generative model that maps a latent color parameter space to a space of colorizations across a shape collection. The colorizations are diverse on each shape and consistent across the shape collection. We introduce an unsupervised approach for training this generative model and demonstrate its effectiveness across a wide range of categories. The key feature of our approach is that it only requires one colorization per shape in the training data, and utilizes a neural network to propagate the color information of other shapes to train the generative model for each particular shape. This characteristic makes our approach applicable to standard internet shape repositories.
Article
Conventionally, interior lighting design is technically complex and challenging, requiring professional knowledge and the aesthetic discipline of designers. This paper presents a new digital lighting design framework for virtual interior scenes, which allows novice users to automatically obtain lighting layouts and interior rendering images with visually pleasing lighting effects. The proposed framework utilizes neural networks to retrieve and learn the underlying design guidelines and principles behind existing lighting designs, drawing on a newly constructed dataset of 6K 3D interior scenes from professional designers with dense annotations of lights. With a 3D furniture-populated indoor scene as the input, the framework performs lighting design in two stages: 1) lights are iteratively placed in the room; 2) the colors and intensities of the lights are optimized by an adversarial scheme, resulting in lighting designs with aesthetically pleasing effects. Quantitative and qualitative experiments show that the proposed framework effectively learns the guidelines and principles and generates lighting designs that are preferred over a rule-based baseline and comparable to those of professional human designers.
Article
Full-text available
Color–material furnishing pairing is known as a “black-box” for interior designers. The overall atmosphere of a space can be changed by modifying furnishing combinations, for example, to express modern or classic styles. Designers carefully choose pairings of colors and materials that fit their intended interior design styles based on experience and knowledge. However, no specific principles or rules have yet been established. Therefore, this study aims to derive a furnishing pairing principle based on a novel framework comprising object detection, color extraction, material recognition, and network analysis. We used the proposed framework to analyze large-scale interior design image data (N = 24,194) collected from an online interior design platform. We also used the authenticity algorithm to analyze the relative influence of styles. By using the data-driven method from large-scale data in each of the eight interior styles, we derived authentic color, material, and furnishing combinations. Our study results revealed that images with high authenticity values in each style matched existing style descriptions. Additionally, the proposed framework allows interior style image retrieval based on a specific color, material, and furnishing combination. Our findings have implications for research on the development of style-aware furniture retrieval systems and automatic interior design generation methods.
Article
Full-text available
Virtual reality technology has grown in popularity alongside rapid economic and social change, and it is now used in a wide range of applications. Its interactivity, immersion, and real-time capabilities open broad possibilities for deployment in the arts, and its advantages, especially in interior design, are unmatched by other technologies. Virtual reality interior design helps designers and consumers experience the effect of "what you see is what you get," while also letting people perceive the visual concept directly in 3D content. This paper studies interior decoration using immersive virtual reality technology, taking public housing developments as the research object, and constructs a conceptual framework for research and innovation in interior design assisted by virtual reality (VR) technology. Reliability analysis is applied to verify the questionnaire, and descriptive statistics are used to summarize the data.
Chapter
This paper describes a method to generate 3D meeting rooms for virtual reality (VR) applications using greedy cost minimization. Our algorithm can create unique meeting rooms at runtime efficiently enough that it is suitable for commonly used stand-alone consumer VR headsets. First, it extracts information about the room, such as its volume and shape. Then it iteratively generates a layout by altering the furniture and subsequently evaluating it. Changes that lead to inferior layouts are reversed, and those that improve the layout are kept. The algorithm takes the functionality of furniture into consideration as well as design guidelines. In contrast to previous research, the algorithm focuses on non-rectangular rooms. For this purpose, we propose improved cost terms. Additionally, hard constraints are enforced at the end of the algorithm to uphold functional and aesthetic standards. To test our generated rooms we conducted a user study comparing our proposed algorithm with previous work. Results of this study show that our algorithm generates rooms that were consistently preferred by users.

Keywords: Immersive environments, Furniture arrangement, Content generation, Optimization algorithm, Greedy cost minimization
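The described pipeline is essentially a propose-evaluate-keep loop: each iteration perturbs the furniture arrangement, re-scores it with the cost terms, and reverts changes that make the layout worse. A minimal sketch of that greedy loop, with a hypothetical `layout_cost` and perturbation functions standing in for the paper's cost terms and furniture operations:

```python
import copy
import random

def greedy_layout(layout, perturbations, layout_cost, iters=2000):
    """Greedy cost minimization: keep only perturbations that lower the cost.

    layout:        any mutable scene description (hypothetical representation)
    perturbations: list of functions that move/rotate/swap furniture in place
    layout_cost:   scores functionality and design-guideline terms (hypothetical)
    """
    best_cost = layout_cost(layout)
    for _ in range(iters):
        candidate = copy.deepcopy(layout)
        random.choice(perturbations)(candidate)   # alter the furniture
        cost = layout_cost(candidate)
        if cost < best_cost:                      # keep improvements, revert the rest
            layout, best_cost = candidate, cost
    return layout
```

Hard constraints (e.g., minimum walkway widths) would then be checked or enforced on the returned layout as a final pass, in the spirit of the abstract.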
Article
Interior scene colorization is in high demand in areas such as personalized architectural design. Existing works either require manual effort to colorize individual objects, or conform to fixed color patterns automatically learned from prior knowledge while neglecting user preference. Quantitatively identifying user preference is challenging, particularly at the early stage of the design process. The 3D setup also presents new challenges, as the inhabitant can observe the scene from any possible viewpoint. We propose a representative view selection method based on visual attention, and a progressive preference inference model. We particularly focus on the progressive integration of eye-tracked user preference, which enables assistance in creativity support and allows the possibility of convergent thinking. A series of user studies has been conducted to validate the effectiveness of the proposed view selection method, preference inference model and creativity support.
Article
One of the challenging tasks in virtual scene design for Virtual Reality (VR) is causing it to invoke a particular mood in viewers. The subjective nature of moods brings uncertainty to the purpose. We propose a novel approach to automatic adjustment of the colors of textures for objects in a virtual indoor scene, enabling it to match a target mood. A dataset of 25,000 images, including building/home interiors, was used to train a classifier with the features extracted via deep learning. It contributes to an optimization process that colorizes virtual scenes automatically according to the target mood. Our approach was tested on four different indoor scenes, and we conducted a user study demonstrating its efficacy through statistical analysis with the focus on the impact of the scenes experienced with a VR headset.
Article
We introduce TM-NET, a novel deep generative model for synthesizing textured meshes in a part-aware manner. Once trained, the network can generate novel textured meshes from scratch or predict textures for a given 3D mesh, without image guidance. Plausible and diverse textures can be generated for the same mesh part, while texture compatibility between parts in the same shape is achieved via conditional generation. Specifically, our method produces texture maps for individual shape parts, each as a deformable box, leading to a natural UV map with limited distortion. The network separately embeds part geometry (via a PartVAE) and part texture (via a TextureVAE) into their respective latent spaces, so as to facilitate learning texture probability distributions conditioned on geometry. We introduce a conditional autoregressive model for texture generation, which can be conditioned on both part geometry and textures already generated for other parts to achieve texture compatibility. To produce high-frequency texture details, our TextureVAE operates in a high-dimensional latent space via dictionary-based vector quantization. We also exploit transparencies in the texture as an effective means to model complex shape structures including topological details. Extensive experiments demonstrate the plausibility, quality, and diversity of the textures and geometries generated by our network, while avoiding inconsistency issues that are common to novel view synthesis methods.
Conference Paper
Full-text available
Colorization of a grayscale photograph often requires considerable effort from the user, either by placing numerous color scribbles over the image to initialize a color propagation algorithm, or by looking for a suitable reference image from which color information can be transferred. Even with this user supplied data, colorized images may appear unnatural as a result of limited user skill or inaccurate transfer of colors. To address these problems, we propose a colorization system that leverages the rich image content on the internet. As input, the user needs only to provide a semantic text label and segmentation cues for major foreground objects in the scene. With this information, images are downloaded from photo sharing websites and filtered to obtain suitable reference images that are reliable for color transfer to the given grayscale photo. Different image colorizations are generated from the various reference images, and a graphical user interface is provided to easily select the desired result. Our experiments and user study demonstrate the greater effectiveness of this system in comparison to previous techniques.
Article
Full-text available
This paper presents SymmSketch—a system for creating symmetric 3D free-form shapes from 2D sketches. The reconstruction task usually separates a 3D symmetric shape into two types of shape components, that is, the self-symmetric shape component and the mutual-symmetric shape components. Each type can be created in an intuitive manner. Using a uniform symmetry plane, the user first draws 2D sketch lines for each shape component on a sketching plane. The z-depth information of the hand-drawn input sketches can be calculated using their property of mirror symmetry to generate 3D construction curves. In order to provide more freedom for controlling the local geometric features of the reconstructed free-form shapes (e.g., non-circular cross-sections), our modeling system creates each shape component from four construction curves. Using one pair of symmetric curves and one pair of general curves, an improved cross-sectional surface blending scheme is applied to generate a parametric surface for each component. The final symmetric free-form shape is progressively created and is represented by a 3D triangular mesh. Experimental results illustrate that our system can generate complex symmetric free-form shapes effectively and conveniently.
Article
Full-text available
We present an approach to automatic 3D reconstruction of objects depicted in Web images. The approach reconstructs objects from single views. The key idea is to jointly analyze a collection of images of different objects along with a smaller collection of existing 3D models. The images are analyzed and reconstructed together. Joint analysis regularizes the formulated optimization problems, stabilizes correspondence estimation, and leads to reasonable reproduction of object appearance without traditional multi-view cues.
Article
We propose a new method for estimation in linear models. The ‘lasso’ minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree‐based models are briefly described.
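In its standard form, the lasso estimate described here solves a residual-sum-of-squares problem under an L1 constraint:

$$\hat{\beta}^{\text{lasso}} = \arg\min_{\beta}\; \sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^{2} \quad \text{subject to} \quad \sum_{j=1}^{p}\lvert\beta_j\rvert \le t,$$

or equivalently, in penalized (Lagrangian) form, $\arg\min_{\beta}\,\lVert y - X\beta\rVert_2^2 + \lambda \sum_{j}\lvert\beta_j\rvert$. The non-smooth corner of the L1 ball at zero is what drives some coefficients exactly to zero, giving the interpretable sparse models mentioned in the abstract.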
Preprint
It is often important for designers and photographers to convey or enhance desired color themes in their work. A color theme is typically defined as a template of colors and an associated verbal description. This paper presents a data-driven method for enhancing a desired color theme in an image. We formulate our goal as a unified optimization that simultaneously considers a desired color theme, texture-color relationships as well as automatic or user-specified color constraints. Quantifying the difference between an image and a color theme is made possible by color mood spaces and a generalization of an additivity relationship for two-color combinations. We incorporate prior knowledge, such as texture-color relationships, extracted from a database of photographs to maintain a natural look of the edited images. Experiments and a user study have confirmed the effectiveness of our method.
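At a schematic level, a unified optimization of this kind can be read as one objective balancing a theme term, a naturalness prior, and user constraints. The decomposition below is an illustrative paraphrase under assumed notation, not the paper's exact energy:

$$\min_{c}\;\; \lambda_{\text{theme}}\, D\!\big(\Theta(c),\, \Theta^{*}\big) \;+\; \lambda_{\text{tex}} \sum_{i} \psi\!\left(c_i \mid \text{texture}_i\right) \;+\; \lambda_{\text{user}} \sum_{k \in \mathcal{K}} \lVert c_k - \hat{c}_k \rVert^{2},$$

where $c$ collects the edited region colors, $D$ measures the distance between the image's color theme $\Theta(c)$ and the desired theme $\Theta^{*}$, $\psi$ penalizes colors that are unlikely for a region's texture under the photo-database prior, and the last term keeps user-constrained regions $\mathcal{K}$ close to their specified colors $\hat{c}_k$.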
Article
We present a novel solution to automatic semantic modeling of indoor scenes from a sparse set of low-quality RGB-D images. Such data presents challenges due to noise, low resolution, occlusion and missing depth information. We exploit the knowledge in a scene database containing hundreds of indoor scenes with over 10,000 manually segmented and labeled mesh models of objects. In seconds, we output a visually plausible 3D scene, adapting these models and their parts to fit the input scans. Contextual relationships learned from the database are used to constrain reconstruction, ensuring semantic compatibility between both object models and parts. Small objects and objects with incomplete depth information which are difficult to recover reliably are processed with a two-stage approach. Major objects are recognized first, providing a known scene structure. 2D contour-based model retrieval is then used to recover smaller objects. Evaluations using our own data and two public datasets show that our approach can model typical real-world indoor scenes efficiently and robustly.
Article
Increasingly, companies are creating product advertisements and catalog images using computer renderings of 3D scenes. A common goal for these companies is to create aesthetically appealing compositions that highlight objects of interest within the context of a scene. Unfortunately, this goal is challenging, not only due to the need to balance the trade-off among aesthetic principles and design constraints, but also because of the huge search space induced by possible camera parameters, object placement, material choices, etc. Previous methods have investigated only optimization of camera parameters. In this paper, we develop a tool that starts from an initial scene description and a set of high-level constraints provided by a stylist and then automatically generates an optimized scene whose 2D composition is improved. It does so by locally adjusting the 3D object transformations, surface materials, and camera parameters. The value of this tool is demonstrated in a variety of applications motivated by product catalogs, including rough layout refinement, detail image creation, home planning, cultural customization, and text inlay placement. Results of a perceptual study indicate that our system produces images preferable for product advertisement compared to a more traditional camera-only optimization.
Article
We present an intuitive and efficient method for editing the appearance of complex spatially-varying datasets, such as images and measured materials. In our framework, users specify rough adjustments that are refined interactively by enforcing the policy that similar edits are applied to spatially-close regions of similar appearance. Rather than proposing a specific user interface, our method allows artists to quickly and imprecisely specify the initial edits with any method or workflow they feel most comfortable with. An energy optimization formulation is used to propagate the initial rough adjustments to the final refined ones by enforcing the editing policy over all pairs of points in the dataset. We show that this formulation is equivalent to solving a large linear system defined by a dense matrix. We derive an approximate algorithm to compute such a solution interactively by taking advantage of the inherent structure of the matrix. We demonstrate our approach by editing images, HDR radiance maps, and measured materials. Finally, we show that our framework generalizes prior methods while providing significant improvements in generality, robustness and efficiency.
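A generic form of such an edit-propagation energy (an illustrative sketch under assumed notation, not necessarily the paper's exact terms) penalizes differences between the final edits $e$ at points with similar appearance and position, while keeping edits close to the user's rough adjustments $g$:

$$E(e) \;=\; \sum_{i,j} a_{ij}\,(e_i - e_j)^{2} \;+\; \lambda \sum_{i} m_i\,(e_i - g_i)^{2}, \qquad a_{ij} = \exp\!\Big(-\tfrac{\lVert f_i - f_j\rVert^{2}}{\sigma_f^{2}} - \tfrac{\lVert p_i - p_j\rVert^{2}}{\sigma_p^{2}}\Big),$$

where $f_i$ and $p_i$ are the appearance feature and position of point $i$, and $m_i$ marks points the user actually touched. Setting $\partial E/\partial e = 0$ gives a large linear system defined by the dense all-pairs affinity matrix $A = [a_{ij}]$, whose structure is what an approximate interactive solver of the kind described above can exploit.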