Article

HeXA: Haptic-enhanced eXtended reality framework for material-informed Architectural design

Article
Full-text available
We present Strokes2Surface, an offline geometry reconstruction pipeline that recovers well‐connected curve networks from imprecise 4D sketches to bridge concept design and digital modeling stages in architectural design. The input to our pipeline consists of 3D strokes' polyline vertices and their timestamps as the 4th dimension, along with additional metadata recorded throughout sketching. Inspired by architectural sketching practices, our pipeline combines a classifier and two clustering models to achieve its goal. First, with a set of extracted hand‐engineered features from the sketch, the classifier recognizes the type of individual strokes between those depicting boundaries (Shape strokes) and those depicting enclosed areas (Scribble strokes). Next, the two clustering models parse strokes of each type into distinct groups, each representing an individual edge or face of the intended architectural object. Curve networks are then formed through topology recovery of consolidated Shape clusters and surfaced using Scribble clusters guiding the cycle discovery. Our evaluation is threefold: We confirm the usability of the Strokes2Surface pipeline in architectural design use cases via a user study, we validate our choice of features via statistical analysis and ablation studies on our collected dataset, and we compare our outputs against a range of reconstructions computed using alternative methods.
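The classify-then-cluster structure described above can be sketched as follows. The features, the random-forest classifier, and the DBSCAN clustering are illustrative stand-ins chosen for this sketch, not the pipeline's actual feature set or models.

```python
# Illustrative sketch of a classify-then-cluster stroke-parsing pipeline in the
# spirit of Strokes2Surface; features and models are assumptions, not the
# authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import DBSCAN

def stroke_features(stroke):
    """Hand-engineered per-stroke features (illustrative only):
    length, bounding-box diagonal, drawing speed, and straightness."""
    pts, t = stroke["points"], stroke["timestamps"]      # (N, 3) and (N,)
    seg = np.diff(pts, axis=0)
    length = np.linalg.norm(seg, axis=1).sum()
    bbox_diag = np.linalg.norm(pts.max(axis=0) - pts.min(axis=0))
    speed = length / max(t[-1] - t[0], 1e-6)
    straightness = np.linalg.norm(pts[-1] - pts[0]) / max(length, 1e-6)
    return [length, bbox_diag, speed, straightness]

def parse_sketch(strokes, train_strokes, train_labels, eps=0.05):
    """Classify strokes as 'Shape' or 'Scribble', then cluster each type into
    groups standing in for intended edges (Shape) or faces (Scribble)."""
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit([stroke_features(s) for s in train_strokes], train_labels)
    kinds = clf.predict([stroke_features(s) for s in strokes])
    groups = {}
    for kind in ("Shape", "Scribble"):
        subset = [s for s, k in zip(strokes, kinds) if k == kind]
        if subset:
            centroids = np.array([s["points"].mean(axis=0) for s in subset])
            groups[kind] = DBSCAN(eps=eps, min_samples=1).fit_predict(centroids)
    return kinds, groups
```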
Conference Paper
Full-text available
The conceptual design stage lacks suitable design methods for the preliminary development of highly detailed BIM models. Initial design solutions cannot be shared among collaborative and consecutive design parties without relying on error-prone briefings and data exchanges. As advances in Artificial Intelligence are increasingly being used for BIM’s big data challenges, this paper presents a workflow based on a small scope of IFC definitions for the generation of conceptual BIM models from 4D semantic sketches, using a machine learning based pipeline. The workflow offers a user-friendly architectural design perspective, while preserving information more reliably throughout the entire life cycle of a BIM model.
Article
Full-text available
State-of-the-art workflows within Architecture, Engineering, and Construction (AEC) are still caught in sequential planning processes. Digital design tools in this domain often lack proper communication between different stages of design and relevant domain knowledge. Furthermore, decisions made in the early stages of design, where sketching is used to initiate, develop, and communicate ideas, heavily impact later stages, resulting in the need for rapid feedback to the architectural designer so they can proceed with adequate knowledge about design implications. Accordingly, this paper presents research on a novel integrative design framework based on a recently developed 4D sketching interface, targeted for architectural design as a form-finding tool coupled with three modules: (1) a Geometric Modelling module, which utilises Points2Surf as a machine learning model for automatic surface mesh reconstruction from the point clouds produced by sketches, (2) a Material Modelling module, which predicts the mechanical properties of biocomposites based on multiscale micromechanics homogenisation techniques, and (3) a Structural Analysis module, which assesses the mechanical performance of the meshed structure on the basis of the predicted material properties using finite element simulations. The proposed framework is a step towards using material-informed design already in the early stages of design.
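A minimal sketch of the three-module data flow (geometry, material, structure) is given below. All function names and signatures are hypothetical placeholders rather than the framework's API, and the material stand-in uses a crude Voigt rule of mixtures in place of the multiscale homogenisation described in the paper.

```python
# Hypothetical wiring of the three modules: geometry -> material -> structure.
from dataclasses import dataclass
import numpy as np

@dataclass
class Material:
    youngs_modulus_gpa: float
    poisson_ratio: float

def reconstruct_surface(point_cloud: np.ndarray):
    """Geometric Modelling module: surface mesh from the sketch point cloud
    (e.g., a learned reconstruction such as Points2Surf); stub for illustration."""
    raise NotImplementedError("plug in a surface-reconstruction backend here")

def predict_biocomposite(fibre_fraction: float,
                         e_fibre_gpa: float = 40.0,
                         e_matrix_gpa: float = 3.0) -> Material:
    """Material Modelling module stand-in: Voigt rule of mixtures for the
    longitudinal Young's modulus (illustrative numbers, not measured data)."""
    e = fibre_fraction * e_fibre_gpa + (1.0 - fibre_fraction) * e_matrix_gpa
    return Material(youngs_modulus_gpa=e, poisson_ratio=0.3)

def analyse_structure(mesh, material: Material, loads: np.ndarray) -> np.ndarray:
    """Structural Analysis module: FE solve returning nodal displacements;
    stub for illustration."""
    raise NotImplementedError("plug in an FE solver here")

def design_feedback(point_cloud, fibre_fraction, loads):
    """Chain the three modules to return rapid structural feedback for a sketch."""
    mesh = reconstruct_surface(point_cloud)
    material = predict_biocomposite(fibre_fraction)
    return analyse_structure(mesh, material, loads)
```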
Article
Full-text available
Pseudo-Haptic techniques, or visuo-haptic illusions, leverage users' visual dominance over haptics to alter their perception. As they create a discrepancy between virtual and physical interactions, these illusions are limited to a perceptual threshold. Many haptic properties have been studied using pseudo-haptic techniques, such as weight, shape or size. In this paper, we focus on estimating the perceptual thresholds for pseudo-stiffness in a virtual reality grasping task. We conducted a user study (n = 15) where we estimated whether compliance can be induced on a non-compressible tangible object and to what extent. Our results show that (1) compliance can be induced in a rigid tangible object and that (2) pseudo-haptics can simulate stiffness beyond 24 N/cm (k ≥ 24 N/cm, between a gummy bear and a raisin, up to rigid objects). Pseudo-stiffness efficiency is (3) enhanced by the objects' scales, but mostly (4) correlated to the user input force. Taken altogether, our results offer novel opportunities to simplify the design of future haptic interfaces, and extend the haptic properties of passive props in VR.
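The basic pseudo-stiffness idea behind such illusions can be sketched as follows: the tangible prop stays rigid, while the virtual object is shown compressed by the measured grasp force divided by the target virtual stiffness. The function below is an illustrative sketch with assumed names and clamping, not the study's implementation.

```python
# Minimal pseudo-stiffness sketch: render visual compression d = F / k while
# the physical prop stays rigid. Names and the clamp are assumptions.
def pseudo_stiffness_displacement(force_n: float,
                                  k_virtual_n_per_cm: float,
                                  max_visual_cm: float = 3.0) -> float:
    """Visual compression (cm) to render for a measured grasp force (N)."""
    if k_virtual_n_per_cm <= 0:
        raise ValueError("virtual stiffness must be positive")
    displacement = force_n / k_virtual_n_per_cm   # Hooke's law, d = F / k
    return min(displacement, max_visual_cm)       # keep the illusion bounded

# Example: a 12 N grasp on a prop rendered with k = 24 N/cm compresses the
# virtual object by 0.5 cm while the tangible object stays rigid.
print(pseudo_stiffness_displacement(12.0, 24.0))  # -> 0.5
```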
Article
Full-text available
The mechanical properties of natural fibers, as used to produce sustainable biocomposites, vary significantly, both among different plant species and also within a single species. All plants, however, share a common microstructural fingerprint. They are built up by only a handful of constituents, most importantly cellulose. Through continuum micromechanics multiscale modeling, the mechanical behavior of cellulose nanofibrils is herein upscaled to the technical fiber level, considering 26 different commonly used plants. Model-predicted stiffness and elastic limit bounds, respectively, frame published experimental ones. This validates the model and corroborates that plant-specific physicochemical properties, such as microfibril angle and cellulose content, govern the mechanical fiber performance.
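As an illustration of the kind of upscaling invoked here, a standard matrix–inclusion (Mori–Tanaka-type) stiffness estimate from continuum micromechanics is shown below, where $f_r$ and $\mathbb{C}_r$ are the volume fraction and stiffness of phase $r$, $\mathbb{C}^{0}$ is the stiffness of the matrix phase, and $\mathbb{P}_r^{0}$ is the Hill tensor accounting for the shape and orientation of phase $r$ embedded in that matrix. This is a generic textbook form, not necessarily the exact multistep scheme used in the paper.

```latex
\mathbb{C}^{\mathrm{hom}} = \sum_{r} f_r\,\mathbb{C}_r : \mathbb{A}_r ,
\qquad
\mathbb{A}_r =
\left[\mathbb{I} + \mathbb{P}_r^{0} : \left(\mathbb{C}_r - \mathbb{C}^{0}\right)\right]^{-1}
: \left\{ \sum_{s} f_s
\left[\mathbb{I} + \mathbb{P}_s^{0} : \left(\mathbb{C}_s - \mathbb{C}^{0}\right)\right]^{-1}
\right\}^{-1}
```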
Article
Full-text available
The intuitive and effective haptic feedback interaction is essential for human-machine interfaces. However, most current commercial haptic feedback solutions are rigid, simplex, and insufficient to fully immerse users in virtual and teleoperated environments. Here, we report the design, fabrication and performance of a low-cost and lightweight self-sensing actuator (SenAct) that provides accurate force and vibration feedback to recreate tactile perception. The SenAct shows good performance such as fast response (10 ms), high robustness (>10,000 cycles), and large output force (up to 1.55 N) with high controllable resolution (up to 0.02 N) based on real-time closed-loop control. It is capable of providing the corresponding haptic feedback to the wearer through external force or vibration signal input. With this prototype, we present a haptic feedback glove that supports precise operation by enhancing immersion and comprehensive sensation. Furthermore, we demonstrate that it can shape appropriate interaction in different scenarios, including touching or grasping objects in virtual reality and teleoperation, illustrating its potential applications in human-in-the-loop interaction systems and the metaverse.
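The real-time closed-loop force control mentioned above can be illustrated with a minimal discrete PI loop that drives the actuator toward a target force using its self-sensed output; the gains and the interface are assumptions for illustration, not the SenAct firmware.

```python
# Minimal discrete PI force-control sketch of the kind a self-sensing actuator
# enables; gains, units, and the interface are hypothetical.
class PIForceController:
    def __init__(self, kp: float = 0.8, ki: float = 2.0, dt: float = 0.001):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, target_force_n: float, sensed_force_n: float) -> float:
        """Return the next drive command (arbitrary units) for one control tick."""
        error = target_force_n - sensed_force_n
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Usage sketch: ctrl = PIForceController()
# command = ctrl.step(target_force_n=0.50, sensed_force_n=0.47)
```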
Article
Full-text available
In the SFB Advanced Computational Design, design tools and processes are developed through multi- and interdisciplinary fundamental research. The goal is to enable higher design quality and more efficient processes in architecture and construction through a new generation of computational design methods. The research is carried out in three areas: design methodology (A1), visual and haptic design interaction (A2), and form finding (A3). Area A1 provides the conceptual foundation for novel digital design methods based on learning approaches; in addition, the computational design tools and methods developed in areas A2 and A3 are linked in a common platform. Area A2 opens up novel feedback channels for designers in digital tools through light simulation and haptic feedback. Area A3 investigates the boundary conditions of the form-finding process with respect to geometry, material, mechanics, and statics. The focus lies on the subdivision of complex surfaces into panels, the analysis of form-active structures, and the modelling and experimental analysis of novel composite materials. The exploration of designs in interaction with intelligent material models and algorithms for evaluating the load-bearing structure serves, on the one hand, to develop a methodological approach for discovering new possibilities in form finding and, on the other hand, enables experimental validation.
Article
Full-text available
We propose a method to transform unstructured 3D sketches into piecewise smooth surfaces that preserve sketched geometric features. Immersive 3D drawing and sketch-based 3D modeling applications increasingly produce imperfect and unstructured collections of 3D strokes as design output. These 3D sketches are readily perceived as piecewise smooth surfaces by viewers, but are poorly handled by existing 3D surface techniques tailored to well-connected curve networks or sparse point sets. Our algorithm is aligned with human tendency to imagine the strokes as a small set of simple smooth surfaces joined along stroke boundaries. Starting with an initial proxy surface, we iteratively segment the surface into smooth patches joined sharply along some strokes, and optimize these patches to fit surrounding strokes. Our evaluation is fourfold: we demonstrate the impact of various algorithmic parameters, we evaluate our method on synthetic sketches with known ground truth surfaces, we compare to prior art, and we show compelling results on more than 50 designs from a diverse set of 3D sketch sources.
Conference Paper
Full-text available
State-of-the-art workflows within the Architecture, Engineering, and Construction (AEC) industry are still caught in sequential planning processes. Digital design tools in this domain often lack proper communication between different stages of design and relevant domain knowledge. Furthermore, decisions made in the early stages of design, where sketching is used to initiate, develop, and communicate ideas, heavily impact later stages, creating the need for fast feedback so the architectural designer can proceed with adequate knowledge regarding design implications. Accordingly, this paper presents research on a novel AEC workflow based on a 4D sketching application targeted for architectural design as a form-finding tool coupled with two modules: (1) a Shape Inference module, aided by machine learning to enable automatic surface mesh modelling from sketches, and (2) a Structural Analysis module, which provides fast feedback with respect to the mechanical performance of the model. The proposed workflow is a step towards a platform integrating implicit and explicit criteria in the early stages of design, allowing a more informed design process that leads to increased design quality.
Article
Full-text available
Given the popularity of fired clay bricks in increasingly taller buildings, as well as the large variety of raw materials, additives, tempers, and production technology, microstructure-based modeling of the brick strength is essential. This paper aims at linking the microstructural features of bricks, i.e. the volume, shape, and size of mineral phases, pores, and the glassy binding matrix in between, to the multiaxial failure behavior of bricks. Therefore, a continuum micromechanics multiscale model, developed originally for stiffness and thermal conductivity upscaling, is adopted and complemented with a Mohr–Coulomb failure criterion at the microscale. By micromechanics-based downscaling of uniaxial brick strength tests, quantitative insights into the strength of the binding matrix are obtained for the first time. After successful nanoindentation-based validation of the identified micro-strength, the model is used for predicting the macroscopic multiaxial brick strength, which in turn is successfully validated against independent bi- and triaxial compressive strength test results.
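For reference, the Mohr–Coulomb criterion invoked at the microscale can be written in its standard form, with principal stresses ordered $\sigma_I \geq \sigma_{II} \geq \sigma_{III}$ (tension positive), cohesion $c$, and friction angle $\varphi$; where and on which phase it is evaluated is specific to the paper's micromechanical model.

```latex
f(\boldsymbol{\sigma}) \;=\; (1 + \sin\varphi)\,\sigma_I \;-\; (1 - \sin\varphi)\,\sigma_{III} \;-\; 2c\cos\varphi \;\leq\; 0
```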
Technical Report
Full-text available
TetGen® is a C++ program for generating good-quality tetrahedral meshes aimed at supporting numerical methods and scientific computing [Hang Si]. The problem of quality tetrahedral mesh generation is challenged by many theoretical and practical issues. TetGen uses Delaunay-based algorithms which have theoretical guarantees of correctness. It can robustly handle arbitrarily complex 3D geometries and is fast in practice. The source code of TetGen is freely available.
Article
Full-text available
Quantification of elastic stiffness and thermal conductivity of fired clay bricks is still often limited to empirical rules and laboratory testing, which becomes progressively more challenging given the large variety of raw materials used to optimize the properties of modern brick products. Applying a continuum micromechanics multiscale approach, we herein aim at upscaling of microstructural features to quantify the bricks’ macroscopic properties. Microstructural features such as assemblage and morphometry of mineral phases (quartz, feldspar, and micas), of pores, and of the binding matrix phase, respectively, as well as thermoelastic phase properties are provided by recently published results from extensive microscopic testing including electron microscopy imaging, mercury intrusion porosimetry, nanoindentation, and scanning thermal microscopy. These results are incorporated into the micromechanics model by introducing spheroidal phases with characteristic orientation distribution at two observation scales. The homogenized macroscopic stiffness and conductivity agree very well with independent results from novel macroscopic tests for all seven studied brick compositions. This corroborates the microstructure-informed multiscale model approach and its assumptions: the linear increase of the binding matrix properties with the material’s carbonate content, and the inability of large quartz with interface cracks to take over any mechanical loads.
Article
Full-text available
The Architecture, Engineering, Construction, and Operations (AECO) sector has always been recognised as competitive and complex. In recent years, increased demands on construction projects in different domains such as safety, energy, time and cost management have pushed the industry towards new tools and methods, including more efficient use of digital technologies. Among the multiple available digital solutions, Building Information Modelling (BIM) is quickly positioning itself as a fundamental methodology, with its practices and tools being increasingly deployed. However, the remaining challenges to BIM adoption provide an opportunity for the development of supporting tools. The present article analyses Augmented Reality (AR) as such a tool. A systematic review was conducted to examine previous studies in the field of BIM-based AR, shedding light on the integration of this technology in the AECO sector. A list of questions was presented and answered to understand this technology’s role in multiple aspects, from its main stage and field of application to the targeted stakeholders. The applied review methodology is based on PRISMA-P. The databases exploited to search for eligible articles were Scopus, Academic Search Ultimate, Current Content, Web of Science, and ScienceDirect. From an initial cohort of 671 articles, 24 were selected for analysis. Additionally, a comparison between AR and Virtual Reality (VR) applications is presented. This review shows that AR implementation is far from its desired state, showing several limitations such as connection and localisation problems, lack of non-geometric information, and other challenges in using AR techniques on the construction site (e.g., marker-based AR).
Conference Paper
Full-text available
We present CASSIE, a conceptual modeling system in VR that leverages freehand mid-air sketching, and a novel 3D optimization framework to create connected curve network armatures, predictively surfaced using patches with C⁰ continuity. Our system provides a judicious balance of interactivity and automation, providing a homogeneous 3D drawing interface for a mix of freehand curves, curve networks, and surface patches. Our system encourages and aids users in drawing consistent networks of curves, easing the transition from freehand ideation to concept modeling. A comprehensive user study with professional designers as well as amateurs (N=12), and a diverse gallery of 3D models, show our armature and patch functionality to offer a user experience and expressivity on par with freehand ideation, while creating sophisticated concept models for downstream applications.
Conference Paper
Full-text available
We present CoVR, a novel robotic interface providing strong kinesthetic feedback (100 N) in a room-scale VR arena. It consists of a physical column mounted on a 2D Cartesian ceiling robot (XY displacements) with the capacity of (1) resisting body-scaled users' actions such as pushing or leaning; (2) acting on the users by pulling or transporting them; as well as (3) carrying multiple potentially heavy objects (up to 80 kg) that users can freely manipulate or make interact with each other. We describe its implementation and define a trajectory generation algorithm based on a novel user intention model to support non-deterministic scenarios, where the users are free to interact with any virtual object of interest with no regard to the scenario's progress. A technical evaluation and a user study demonstrate the feasibility and usability of CoVR, as well as the relevance of whole-body interactions involving strong forces, such as being pulled through or transported.
Article
Full-text available
Sketching is one of the most natural ways of representing any object pictorially. It is, however, challenging to convert sketches to 3D content that is suitable for various applications like movies, games and computer-aided design. With the advent of more accessible Virtual Reality (VR) and Augmented Reality (AR) technologies, sketching can potentially become a more powerful yet easy-to-use modality for content creation. In this state-of-the-art report, we aim to present a comprehensive overview of techniques related to sketch-based content creation, both on the desktop and in VR/AR. We discuss various basic concepts related to static and dynamic content creation using sketches. We provide a structured review of various aspects of content creation including model generation, coloring and texturing, and finally animation. We try to motivate the advantages that VR/AR-based sketching techniques and systems offer in making sketch-based content creation a more accessible and powerful mode of expression. We also discuss and highlight various unsolved challenges that current sketch-based techniques face, with the goal of encouraging future research in the domain.
Article
Full-text available
Tangible objects are used in Virtual Reality (VR) and Augmented Reality (AR) to enhance haptic information on the general shape of virtual objects. However, they are often passive or unable to simulate rich varying mechanical properties. This paper studies the effect of combining simple passive tangible objects and wearable haptics for improving the display of varying stiffness, friction, and shape sensations in these environments. By providing timely cutaneous stimuli through a wearable finger device, we can make an object feel softer or more slippery than it really is, and we can also create the illusion of encountering virtual bumps and holes. We evaluate the proposed approach carrying out three experiments with human subjects. Results confirm that we can increase the compliance of a tangible object by varying the pressure applied through a wearable device. We are also able to simulate the presence of bumps and holes by providing timely pressure and skin stretch sensations. Finally, we show the potential of our techniques in an immersive medical palpation use case in VR. These results pave the way for novel and promising haptic interactions in VR, better exploiting the multiple ways of providing simple, unobtrusive, and inexpensive haptic displays.
Article
Full-text available
Building Information Modeling (BIM) related promises are numerous: reduction of the architecture, engineering and construction (AEC) industry's fragmentation, construction cost, and delivery time, as well as lifecycle optimization, have been advocated in both literature and practice. But so are the challenges of BIM adoption: establishment and standardization of BIM data structures or ensuring the necessary skills and competencies of planning process participants. In this paper we present ongoing research on the integration of BIM in education through student experiments, based on a BIM-supported integrated design studio (IDS). Thereby the various features of BIM technology adopted in the multidisciplinary conceptual design stage are explored and evaluated. Quantitative and qualitative research, in the form of questionnaires and focus group discussions, addresses the people- and process-related challenges in such collaborative BIM-supported building projects. The analysis of three cycles of such IDSs has shown that the participants appreciate the collaborative approach and benefit from working with other disciplines by sharing knowledge; however, BIM technology has not significantly contributed to the improvement of design quality.
Preprint
Full-text available
Popular Virtual Reality (VR) tools allow users to draw varying-width, ribbon-like 3D brush strokes by moving a hand-held controller in 3D space. Artists frequently use dense collections of such strokes to draw virtual 3D shapes. We propose SurfaceBrush, a surfacing method that converts such VR drawings into user-intended manifold free-form 3D surfaces, providing a novel approach for modeling 3D shapes. The inputs to our method consist of dense collections of artist-drawn stroke ribbons described by the positions and normals of their central polylines, and ribbon widths. These inputs are highly distinct from those handled by existing surfacing frameworks and exhibit different sparsity and error patterns, necessitating a novel surfacing approach. We surface the input stroke drawings by identifying and leveraging local coherence between nearby artist strokes. In particular, we observe that strokes intended to be adjacent on the artist imagined surface often have similar tangent directions along their respective polylines. We leverage this local stroke direction consistency by casting the computation of the user-intended manifold surface as a constrained matching problem on stroke polyline vertices and edges. We first detect and smoothly connect adjacent similarly-directed sequences of stroke edges producing one or more manifold partial surfaces. We then complete the surfacing process by identifying and connecting adjacent similarly directed edges along the borders of these partial surfaces. We confirm the usability of the SurfaceBrush interface and the validity of our drawing analysis via an observational study. We validate our stroke surfacing algorithm by demonstrating an array of manifold surfaces computed by our framework starting from a range of inputs of varying complexity, and by comparing our outputs to reconstructions computed using alternative means.
Conference Paper
Full-text available
Quadcopters have been used as hovering encountered-type haptic devices in virtual reality. We suggest that quadcopters can facilitate rich haptic interactions beyond force feedback by appropriating physical objects and the environment. We present HoverHaptics, an autonomous safe-to-touch quadcopter and its integration with a virtual shopping experience. HoverHaptics highlights three affordances of quadcopters that enable these rich haptic interactions: (1) dynamic positioning of passive haptics, (2) texture mapping, and (3) animating passive props. We identify inherent challenges of hovering encountered-type haptic devices, such as their limited speed, inadequate control accuracy, and safety concerns. We then detail our approach for tackling these challenges, including the use of display techniques, visuo-haptic illusions, and collision avoidance. We conclude by describing a preliminary study (n = 9) to better understand the subjective user experience when interacting with a quadcopter in virtual reality using these techniques.
Article
Full-text available
Introduction: A successful project delivery based on building information modeling (BIM) methods depends on efficient collaboration. This relies mainly on the visualization of a BIM model, which can appear on different mediums. Visualization on mediums such as computer screens lacks a degree of immersion, which may prevent the full utilization of the model. Another problem with conventional collaboration methods, such as the BIM Big Room, is the need for the physical presence of participants in a room. Virtual Reality, as the most immersive medium for visualizing a model, has the promise of becoming a regular part of the construction industry. The virtual presence of collaborators in a VR environment eliminates the need for their physical presence. Simulation of on-site tasks can address a number of issues during construction, such as the feasibility of operations. Although consumer VR tools have recently become available on the market, little research has been done on their actual employment in architecture, engineering and construction (AEC) practices.
Case description: This paper investigates the application of a VR-based workflow in a real project. The authors collaborated with a software company to evaluate some of their advanced VR software features, such as the simulation of an on-site task. A case study of a VR-integrated collaboration workflow serves as an example of how firms can overcome the challenge of benefiting from this new technology. A group of AEC professionals involved in a project were invited to take part in the experiment, utilizing their actual project BIM models.
Discussion and evaluation: The feedback from the experiment confirmed the expected benefits of a VR collaboration method. Although the participants of the study came from a wide range of disciplines, they could find benefits of the technology for their practice. The results also showed that an experimental method of clash detection via simulation could actually be practical.
Conclusion: The simulation of on-site tasks and the perception of architectural spaces at a 1:1 scale are assets unique to VR application in AEC practices. Nevertheless, the study shows that the investment in new hardware and software, and resistance to the adoption of new technologies, are the main obstacles to its wide adoption. Further work in the computer industry is required to make these technologies more affordable.
Conference Paper
Full-text available
The presence of a third dimension makes accurate drawing in virtual reality (VR) more challenging than 2D drawing. These challenges include higher demands on spatial cognition and motor skills, as well as the potential for mistakes caused by depth perception errors. We present Multiplanes, a VR drawing system that supports both the flexibility of freehand drawing and the ability to draw accurate shapes in 3D by affording both planar and beautified drawing. The system was designed to address the above-mentioned challenges. Multiplanes generates snapping planes and beautification trigger points based on previous and current strokes and the current controller pose. Based on geometrical relationships to previous strokes, beautification trigger points serve to guide the user to reach specific positions in space. The system also beautifies user’s strokes based on the most probable intended shape while the user is drawing them. With Multiplanes, in contrast to other systems, users do not need to manually activate such guides, allowing them to focus on the creative process.
Article
Full-text available
We propose a novel tetrahedral meshing technique that is unconditionally robust, requires no user interaction, and can directly convert a triangle soup into an analysis-ready volumetric mesh. The approach is based on several core principles: (1) initial mesh construction based on a fully robust, yet efficient, filtered exact computation; (2) explicit (automatic or user-defined) tolerancing of the mesh relative to the surface input; and (3) iterative mesh improvement with guarantees, at every step, of the output validity. The quality of the resulting mesh is a direct function of the target mesh size and allowed tolerance: increasing the allowed deviation from the initial mesh and decreasing the target edge length both lead to higher mesh quality. Our approach enables "black-box" analysis, i.e. it makes it possible to automatically solve partial differential equations on geometrical models available in the wild, offering a robustness and reliability comparable to, e.g., image processing algorithms, and opening the door to automatic, large-scale processing of real-world geometric data.
Article
Full-text available
Recycled concrete, i.e., concrete which contains aggregates that are obtained from crushing waste concrete, typically exhibits a smaller strength than conventional concretes. We herein decipher the origin and quantify the extent of the strength reduction by means of multiscale micromechanics-based modeling. Therefore, the microstructure of recycled concrete is represented across four observation scales, spanning from the micrometer-sized scale of cement hydration products to the centimeter-sized scale of concrete. Recycled aggregates are divided into three classes with distinct morphological features: plain aggregates which are clean of old cement paste, mortar aggregates, and aggregates covered by old cement paste. Macroscopic loading is concentrated via interfacial transition zones (ITZs)—which occur mutually between aggregates, old, and new cement paste—to the micrometer-sized hydrates resolved at the smallest observation scale. Hydrate failure within the most unfavorably loaded ITZ is considered to trigger concrete failure. Modeling results show that failure in either of the ITZs might be critical, and that the failure mode is governed by the mutual stiffness contrast between aggregates, old, and new paste, which depend, in turn, on the concrete composition and on the material’s maturity. The model predicts that the strength difference between recycled concrete and conventional concrete is less pronounced (i) at an early age compared to mature ages, (ii) when the old cement paste content is small, and (iii) when recycling a high-quality parent concrete.
Article
We present a novel shape-shifting haptic device, Shiftly, which renders plausible haptic feedback when touching virtual objects in Virtual Reality (VR). By changing its shape, different geometries of virtual objects can be approximated to provide haptic feedback for the user's hand. The device employs only three actuators and three curved origamis that can be programmatically folded and unfolded to create a variety of touch surfaces ranging from flat to curved. In this paper, we present the design of Shiftly, including its kinematic model and integration into VR setups for haptics. We also assessed Shiftly using two user studies. The first study evaluated how well Shiftly can approximate different shapes without visual representation. The second study investigated the realism of the haptic feedback with Shiftly for a user when touching a rendered virtual object. The results showed that our device can provide realistic haptic feedback for flat surfaces, convex shapes of different curvatures, and edge-shaped geometries. Shiftly can less realistically render concave surfaces and objects with small details.
Article
Efficient design, production, and optimization of new safe-and-sustainable-by-design materials for various industrial sectors is an ongoing challenge for our society, poised to escalate in the future. Wood-based composite materials offer an attractive sustainable alternative to high-impact materials such as glass and polymers and have been the focus of experimental research and development for years. Computational and AI-based materials design can significantly speed up the development of these materials compared to traditional methods. However, reliable numerical models are essential for achieving this goal. The AI-TranspWood project, recently funded by the European Commission, has the ambition to develop such computational and AI-based tools in the context of transparent wood (TW), a promising composite with potential applications in various industrial fields. In this project, we advance this development specifically by using an Artificial Intelligence (AI)-driven multiscale methodology.
Article
In the process of product design and digital fabrication, the structural analysis of a designed prototype is a fundamental and essential step. However, such a step is usually invisible or inaccessible to designers at the early sketching phase. This limits the user's ability to consider a shape's physical properties and structural soundness. To bridge this gap, we introduce a novel approach, Sketch2Stress, that allows users to perform structural analysis of desired objects at the sketching stage. This method takes as input a 2D freehand sketch and one or multiple locations of user-assigned external forces. With a specially designed two-branch generative-adversarial framework, it automatically predicts a normal map and a corresponding structural stress map distributed over the user-sketched underlying object. In this way, our method empowers designers to easily examine the stress sustained everywhere and identify potential problematic regions of their sketched object. Furthermore, combined with the predicted normal map, users are able to conduct a region-wise structural analysis efficiently by aggregating the stress effects of multiple forces in the same direction. Finally, we demonstrate the effectiveness and practicality of our system with extensive experiments and user studies.
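A rough sketch of the shared-encoder, two-branch mapping described above (sketch plus force-location map in, normal map and stress map out) is given below in PyTorch; the layer sizes are arbitrary and the adversarial training components are omitted, so this is an assumption-laden illustration rather than the paper's network.

```python
# Hypothetical shared-encoder, two-branch generator: sketch + force map ->
# (normal map, stress map). Architecture details are assumptions.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class TwoBranchGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # input: 1 sketch channel + 1 force-location channel
        self.encoder = nn.Sequential(conv_block(2, 32), nn.MaxPool2d(2),
                                     conv_block(32, 64), nn.MaxPool2d(2),
                                     conv_block(64, 128))
        def decoder(out_channels):
            return nn.Sequential(nn.Upsample(scale_factor=2), conv_block(128, 64),
                                 nn.Upsample(scale_factor=2), conv_block(64, 32),
                                 nn.Conv2d(32, out_channels, 1))
        self.normal_head = decoder(3)   # predicted normal map (nx, ny, nz)
        self.stress_head = decoder(1)   # predicted scalar stress map

    def forward(self, sketch, force_map):
        z = self.encoder(torch.cat([sketch, force_map], dim=1))
        return self.normal_head(z), self.stress_head(z)

# Usage sketch:
# x = torch.rand(1, 1, 256, 256); f = torch.zeros_like(x)
# normals, stress = TwoBranchGenerator()(x, f)
```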
Chapter
Structural engineering knowledge can be of significant importance to the architectural design team during the early design phase. However, architects and engineers do not typically work together during the conceptual phase; in fact, structural engineers are often called late into the process. As a result, updates in the design are more difficult and time-consuming to complete. At the same time, there is a lost opportunity for better design exploration guided by structural feedback. In general, the earlier in the design process the iteration happens, the greater the benefits in cost efficiency and informed design exploration, which can lead to higher quality creative results. In order to facilitate an informed exploration in the early design stage, we suggest the automation of fundamental structural engineering tasks and introduce ApproxiFramer, a Machine Learning-based system for the automatic generation of structural layouts from building plan sketches in real-time. The system aims to assist architects by presenting them with feasible structural solutions during the conceptual phase so that they proceed with their design with adequate knowledge of its structural implications. In this paper, we describe the system and evaluate the performance of a proof-of-concept implementation in the domain of orthogonal, metal, rigid structures. We trained a Convolutional Neural Net to iteratively generate structural design solutions for sketch-level building plans using a synthetic dataset and achieved an average error of 2.2% in the predicted positions of the columns.
Article
Encountered-Type Haptic Displays (ETHDs) provide haptic feedback by positioning a tangible surface for the user to encounter. This permits users to freely elicit haptic feedback from a surface during a virtual simulation. ETHDs differ from most current haptic devices, which rely on an actuator that is always in contact with the user. This survey paper describes and analyzes the different research efforts carried out in this field. In addition, this review analyzes the ETHD literature concerning definitions, history, hardware, the haptic perception processes involved, interactions, and applications. The paper proposes a formal definition of ETHDs, a taxonomy for classifying hardware types, and an analysis of the haptic feedback used in the literature. Taken together, this overview intends to encourage future work in the ETHD field.
Chapter
A key step in any scanning-based asset creation workflow is to convert unordered point clouds to a surface. Classical methods (e.g., Poisson reconstruction) start to degrade in the presence of noisy and partial scans. Hence, deep learning based methods have recently been proposed to produce complete surfaces, even from partial scans. However, such data-driven methods struggle to generalize to new shapes with large geometric and topological variations. We present Points2Surf, a novel patch-based learning framework that produces accurate surfaces directly from raw scans without normals. Learning a prior over a combination of detailed local patches and coarse global information improves generalization performance and reconstruction accuracy. Our extensive comparison on both synthetic and real data demonstrates a clear advantage of our method over state-of-the-art alternatives on previously unseen classes (on average, Points2Surf brings down reconstruction error by 30% over SPR and by 270%+ over deep learning based SotA methods) at the cost of longer computation times and a slight increase in small-scale topological noise in some cases. Our source code, pre-trained model, and dataset are available at: https://github.com/ErlerPhilipp/points2surf.
Article
We propose a new method for reconstructing an implicit surface from an un-oriented point set. While existing methods often involve non-trivial heuristics and require additional constraints, such as normals or labelled points, we introduce a direct definition of the function from the points as the solution to a constrained quadratic optimization problem. The definition has a number of appealing features: it uses a single parameter (parameter-free for exact interpolation), applies to any dimensions, commutes with similarity transformations, and can be easily implemented without discretizing the space. More importantly, the use of a global smoothness energy allows our definition to be much more resilient to sampling imperfections than existing methods, making it particularly suited for sparse and non-uniform inputs.
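The class of formulation described above can be sketched generically as a smoothness energy minimised subject to interpolation constraints at the sample points; the particular energy and the normalisation constraint excluding the trivial solution f ≡ 0 shown below are assumptions for illustration, not necessarily the paper's exact functional.

```latex
% Generic sketch: smooth implicit function interpolating unoriented samples p_i.
\min_{f}\; \int_{\Omega} \left\lVert \nabla\nabla f(\mathbf{x}) \right\rVert^{2} \, \mathrm{d}\mathbf{x}
\quad \text{subject to} \quad
f(\mathbf{p}_i) = 0,\; i = 1,\dots,n,
\qquad
\sum_{i=1}^{n} \left\lVert \nabla f(\mathbf{p}_i) \right\rVert^{2} = n .
```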
Conference Paper
Resistive force (e.g., due to object elasticity) and impact (e.g., due to recoil) are common effects in our daily life. However, resistive force continuously changes due to the user's movements, while impact occurs instantly when an event triggers it. Such feedback is still not realistically provided by current VR haptic methods. In this paper, a wearable device, ElasticVR, which consists of an elastic band, servo motors and mechanical brakes, is proposed to provide continuously changing resistive force and instantly occurring impact upon the user's hand to enhance VR realism. By changing two physical properties of the elastic band, length and extension distance, ElasticVR provides multilevel resistive force with no delay and impact with little delay, respectively, for realistic and versatile VR applications. A force perception study was performed to observe users' ability to distinguish levels of resistive force and impact, and the prototype was built based on its results. A VR experience study further shows that the resistive force and impact from ElasticVR both outperform those from current approaches in realism. Applications using ElasticVR are also demonstrated.
Article
Eco-friendly materials based on well-preserved and nanostructured wood cellulose fibers are investigated for the purpose of load-bearing applications, where optical transmittance may be advantageous. Wood fibers are subjected to mild delignification, flow orientation, and hot-pressing to form an oriented material of low porosity. Biopolymer composition of the fibers is determined. Morphology is studied by SEM, cellulose orientation is quantified by x-ray diffraction, and effect of beating is investigated. Hot-pressed networks are impregnated by MMA monomer and polymerized to form thermoplastic wood fiber/PMMA biocomposites. Tensile tests are performed, as well as optical transmittance measurements. Structure-property relationships are discussed. High-density molded fibers from holocellulose have mechanical properties comparable with nanocellulose materials, and are recyclable. The thermoplastic matrix biocomposites showed superior mechanical properties (Young’s modulus of 20 GPa and ultimate strength of 310 MPa) at a fiber volume fraction of 52%, with high optical transmittance of 90%. The study presents a scalable approach for strong, stiff and transparent molded fibers/biocomposites.
Conference Paper
Besides sketching in mid-air, Augmented Reality (AR) lets users sketch 3D designs directly attached to existing physical objects. These objects provide natural haptic feedback whenever the pen touches them, and, unlike in VR, there is no need to digitize the physical object first. Especially in Personal Fabrication, this lets non-professional designers quickly create simple 3D models that fit existing physical objects, such as a lampshade for a lamp socket. We categorize guidance types of real objects into flat, concave, and convex surfaces, edges, and surface markings. We studied how accurately these guides let users draw 3D shapes attached to physical vs. virtual objects in AR. Results show that tracing physical objects is 48% more accurate, and can be performed in a similar time compared to virtual objects. Guides on physical objects further improve accuracy especially in the vertical direction. Our findings provide initial metrics when designing AR sketching systems.