Article

A Multisensory Interaction Framework for Human-Cyber-Physical System Based on Graph Convolutional Networks


Abstract

Human-Cyber-Physical Systems (HCPS), as an emerging human-centered paradigm, provide a promising direction for the advancement of various domains, such as intelligent manufacturing and aerospace. In contrast to Cyber-Physical Systems (CPS), the development of HCPS emphasizes the expansion of human capabilities: humans no longer function solely as operators or agents working in collaboration with computers and machines but extend their roles to include system design and innovation management. This paper proposes a Multisensory Interaction Framework for HCPS (MS-HCPS) that leverages the human senses to facilitate system creation and management. Additionally, the introduced Multisensory Graph Convolutional Network (MS-GCN) model calculates recommendation values for multiple senses, elucidating their relevance to system development. The effectiveness of the proposed framework and model is validated in three practical engineering scenarios. This study examines multisensory interaction in HCPS from a human sensory perspective, aiming to facilitate the progress and development of HCPS across various domains.
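The abstract does not describe the internals of MS-GCN. As a rough, hedged illustration of how a graph convolutional network could produce one recommendation value per sense node, the sketch below runs two standard GCN layers over a hypothetical sense-co-occurrence graph with random, untrained weights; the adjacency matrix, feature sizes, and readout are all assumptions, not the paper's actual model:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

senses = ["visual", "auditory", "olfactory", "gustatory", "haptic"]
# Hypothetical adjacency: 1 where two senses co-occur in an interaction scenario.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [1, 1, 0, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))                      # initial sense-node features
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 1))

scores = gcn_layer(A, gcn_layer(A, H, W1), W2).ravel()  # one value per sense
ranking = [senses[i] for i in np.argsort(-scores)]
print(ranking)                                   # senses ordered by score
```

In a trained model the weights would be learned from scenario data and the scores interpreted as the recommendation values the abstract mentions; here they only demonstrate the message-passing mechanics.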


... Specifically, studies on advanced human-cybernetic interface design and development aim to improve user experience by enhancing the design of human-cybernetic interfaces. These studies focus on creating more intuitive and responsive interfaces that can adapt to the user's needs in real-time, such as multisensory interaction [1], head-up displays [2], and mixed reality [3]. Studies on psychophysiological data analytics aim to fully understand, monitor, and even predict human states and performance. ...
... The framework is the first and one of the most important elements in designing and developing a new system. Two studies proposed frameworks for enhancing HCI [1,3]. Paper [1] summarizes the evolution process of Human-Cyber-Physical Systems (HCPS) and thoroughly investigates five types of interactions from the perspectives of human senses: visual interaction, auditory interaction, olfactory interaction, gustatory interaction, and haptic interaction. ...
... Two studies proposed frameworks for enhancing HCI [1,3]. Paper [1] summarizes the evolution process of Human-Cyber-Physical Systems (HCPS) and thoroughly investigates five types of interactions from the perspectives of human senses: visual interaction, auditory interaction, olfactory interaction, gustatory interaction, and haptic interaction. Based on this analysis, the paper proposes a Multisensory Interaction Framework for HCPS (MS-HCPS). ...
... However, the model takes more time to converge. Qi et al. [77] present a multisensory interaction framework for human-cyber-physical systems (HCPS) to improve system design, effective management, and operation. The proposed framework is anchored in human sensory perception to facilitate advances in aerospace development and intelligent manufacturing. ...
Article
The advent of advanced technologies in power and energy systems is fortifying the grid's resilience and enhancing the availability of power supply through a network of electrical and communication apparatus. The notable technologies include cyber-physical power systems (CPPS) and transactive energy systems (TES). The CPPS, a derivative of the cyber-physical system (CPS), is aimed at operational enhancement and boosting performance. TES is an energy solution that uses economic and control techniques to enhance the dynamic balance between supplied energy and energy demand across the electrical infrastructure. The integration of intelligent systems and information and communication technologies has brought new challenges and threats to CPPS and TES, where adversaries capitalize on vulnerabilities in cyber systems to manipulate the system deceitfully. Furthermore, the susceptibility of CPPS to information attacks inherently has the potential to cause cascading failures. Researchers have focused extensively on applications of advanced technologies within CPPS, while largely overlooking the impact of cascading failures on CPPS efficiency. This work critically assesses intelligence-based techniques used for cyber-threat detection and mitigation. It offers insights on how to guard against some of the approaches adopted by cyber-attackers, identifies corresponding gaps, and presents future research directions. Also presented is the conceptualization of applying CPS models for the cyber-security enhancement of TES solutions. The articles selected for this review were evaluated based on recency and the application of intelligent approaches for intrusion and cyberattack detection in CPPS. The review uncovered that topological models are often used to describe cyberattack processes in CPPS, and that researchers commonly base their investigations on False-Data Injection Attacks and IEEE 118-bus systems for validation. It was also discovered that the deep Reinforcement Learning-based Graph Convolutional Network is a promising solution for intrusion and cyberattack detection in TES owing to its security, detection accuracy, reliability, and scalability.
Article
This study compares the development and impact of smart cities and cyber-physical systems (CPS) integration in Western societies and West African urban centers, highlighting the challenges, opportunities, and disparities that shape these efforts. With rapid urbanization and unique socio-economic contexts in West Africa, the study aims to examine how smart city initiatives can address urban challenges such as inadequate infrastructure and environmental degradation. Using a scoping review methodology, the research explores the integration of technologies like IoT, AI, and data analytics, and how these are applied in both regions to enhance urban services and sustainability. The findings reveal that while Western cities have advanced smart city projects with robust technological infrastructure and governance frameworks, West African cities face infrastructure constraints and limited resources, hindering the full implementation of CPS technologies. However, there are emerging opportunities for West African cities, such as leapfrogging technology and adopting community-driven solutions. This study identifies key challenges in both regions, including data privacy concerns and governance complexities, and offers recommendations for policymakers and urban planners. The research emphasizes the need for localized approaches, capacity building, and cross-sector collaboration to promote sustainable and inclusive smart city development in West Africa, drawing on lessons from Western cities to inform future urban planning strategies.
Article
Manufacturing enterprises face the challenge of utilising industrial knowledge and continuously accumulating massive amounts of unlabelled data to achieve human-cyber-physical collaborative and autonomous intelligence. Recently, artificial intelligence-generated content has achieved great performance in several domains and scenarios. A new concept of an industrial generative pre-trained Transformer (Industrial-GPT) for intelligent manufacturing systems is introduced to solve various scenario tasks. It involves pre-training with industrial datasets, fine-tuning on industrial scenarios, and reinforcement learning with domain knowledge. To enable Industrial-GPT to better empower the manufacturing industry, Model as a Service is introduced as a new cloud-computing service mode, which provides a more efficient and flexible service approach by directly invoking the general model of the upper layer and customising it for specific businesses. Then, the operation mechanism of the Industrial-GPT-driven intelligent manufacturing system is described. Finally, the challenges and prospects of applying Industrial-GPT in the manufacturing industry are discussed.
Article
Deep learning-based fault diagnosis models achieve great success with sufficient balanced data, but the imbalanced datasets found in real industrial scenarios seriously affect the performance of various popular deep learning models. Data generation-based strategies provide a solution by expanding the number of minority samples. However, many data-generation methods cannot generate high-quality samples when the imbalance ratio is high. To address these problems, a dual-attention feature fusion network (DAFFN) with two-stream hybrid-generated data is proposed. First, a two-stream hybrid generator, comprising a generative model and an oversampling technique, is adopted to generate minority fault data. Then, a convolutional neural network is used to extract features from the hybrid-generated data. In particular, a feature fusion network with a dual-attention mechanism, i.e., a channel attention mechanism and a layer attention mechanism, is designed to learn channel-level and layer-level weights of the features. Extensive results on two bearing datasets indicate that the proposed framework achieves outstanding performance in cases with high imbalance ratios.
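The abstract does not give DAFFN's exact attention design. As a minimal sketch of what the channel-attention half could look like, the snippet below applies a squeeze-and-excitation-style reweighting to a (channels x length) feature map; the shapes and the random stand-ins for the learned bottleneck weights are assumptions, not the paper's implementation (layer attention would apply the same idea across feature maps from different layers):

```python
import numpy as np

def channel_attention(feats, reduction=2):
    """Squeeze-and-excitation-style channel attention over (C, L) features."""
    C = feats.shape[0]
    squeeze = feats.mean(axis=1)                    # global average pool -> (C,)
    rng = np.random.default_rng(1)                  # stand-in for learned weights
    W1 = rng.normal(size=(C // reduction, C))
    W2 = rng.normal(size=(C, C // reduction))
    hidden = np.maximum(W1 @ squeeze, 0.0)          # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))  # sigmoid channel weights
    return feats * weights[:, None]                 # reweight each channel

feats = np.random.default_rng(2).normal(size=(8, 32))  # 8 channels, 32 steps
out = channel_attention(feats)
print(out.shape)
```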
Article
In this paper, we explore the current technical possibilities of eating in virtual reality (VR) and show how this could be used to influence eating behaviors. Cue-based exposure therapy is a well-known method used to treat eating disorders. There are several benefits to using VR in combination with cue-based therapy. However, before VR-based cue exposure can be used for therapeutic purposes, the ability of the VR environment to elicit craving responses in participants must be assessed. This was the objective of the first part of the study, where we assessed whether our VR environment elicited food craving responses in participants. Results showed that it did: Salivation Magnitude, Food Craving State, and Urge to Eat were significantly different from the neutral baseline. In addition, food cravings measured through salivation magnitude in response to the virtual condition were not significantly different from the real condition, showing that VR had a comparable effect in producing food cravings. The second part of the study was conducted to determine whether the addition of olfactory and interaction cues in VR increased the development of food cravings. The results of this part showed that adding synthetic olfactory cues, paired with visual cues, provided a significant further increase in food cravings. Our results demonstrate that the use of food cues in VR can increase the development of food cravings and that it is possible to provide a simple yet convincing eating experience in VR. Nevertheless, food interaction in VR remains underexplored territory, and further research is needed to improve its utility and application in disciplines related to food and eating.
Article
Occupational safety and health (OSH) should be regarded as a crucial challenge that affects the public worldwide. Work-related accidents and occupational illness contribute to considerable mortality and morbidity. As technology advances, mixed reality (MR) has gained popularity. To minimize occupational accidents in the workplace and reduce human training time, an MR-based platform for OSH training combined with CPS and IoT technology is proposed in this paper. Multi-criteria decision-making (MCDM) and the fuzzy analytic hierarchy process (FAHP) were applied to evaluate and select suitable gloves. Only when MR wearable devices are improved can a more powerful MR-based OSH training program be established. A more immersive level of OSH training offers trainees a more realistic experience and a better understanding of the possible risks in their future work, resulting in a lower occupational accident rate in the workplace.
Article
Nowadays, manufacturing enterprises are struggling to satisfy personalized and dynamic user requirements. The Smart Product-Service System (Smart PSS), as a promising and sustainable business model, can respond to personalized and dynamic demands through effective configuration/reconfiguration processes. Mass personalization (MP), which emphasizes diversified values that improve user experience by considering both functional and affective elements, can contribute significantly to Smart PSS. However, the Smart PSS field still lacks a unified knowledge representation framework and an effective configuration method in the MP context. To fill this gap, this research proposes a novel Smart PSS configuration method oriented to MP that meets the personalized and dynamic requirements of users, in particular satisfying affective and functional requirements simultaneously throughout the reconfiguration life-cycle. Given the extreme complexity of heterogeneous data from Smart PSS and MP, the Knowledge Graph (KG) is introduced as a powerful tool for managing intricate system knowledge. The common schemas of three KGs are constructed as the basis of configuration, comprehensively integrating design knowledge, related field knowledge, and user information. KG-based question answering is combined with similarity calculation as a hybrid method in the configuration process to provide satisfactory and complete solutions. Moreover, the configuration process is placed within the reconfiguration life-cycle of Smart PSS to deal with dynamic changes through the latest knowledge in continuously updated KGs. Furthermore, an illustrative case study of an intelligent management system for family medication is demonstrated. Through comparison with other approaches, the proposed method is shown to be valuable for the implementation of Smart PSS configuration for MP.
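The abstract does not specify the similarity calculation used in the hybrid configuration method. As a minimal, hedged sketch of how similarity-based matching of a user requirement against candidate configurations could work, the snippet below uses cosine similarity over hypothetical embedding dimensions; the dimension names, vectors, and configuration labels are illustrative assumptions only:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two requirement/solution vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings over [functional_fit, affective_fit, cost, usability].
user_req = np.array([0.9, 0.8, 0.3, 0.7])
candidates = {
    "config_A": np.array([0.8, 0.9, 0.4, 0.6]),
    "config_B": np.array([0.2, 0.1, 0.9, 0.3]),
}
# Pick the configuration most similar to the stated requirements.
best = max(candidates, key=lambda k: cosine(user_req, candidates[k]))
print(best)
```

In the paper's setting these vectors would presumably be derived from KG entities rather than hand-written, with KG-based question answering narrowing the candidate set first.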
Article
Mechanical fault diagnosis is crucial to ensure the safe operation of equipment in intelligent manufacturing systems. Deep learning-based methods have recently been developed for fault diagnosis due to their advantages in feature representation. However, most of these methods fail to learn relations between samples and thus perform poorly without sufficient labeled data. In this paper, we propose a new few-shot learning method named Dual Graph Neural Network (DGNNet), with residual blocks, to address fault diagnosis problems with limited data. First, the residual module learns the features of samples from image data transformed from the original signals. Second, two complete graphs built on the sample features are used to extract the instance-level and distribution-level relations between samples. In particular, an alternate update policy between the instance and distribution graphs integrates the multilevel relations to propagate the label information of a few labeled samples to unlabeled samples. This technique leverages labeled and unlabeled samples to identify unseen faults, making DGNNet competent in fault diagnosis tasks with very few labeled samples. Extensive results on various datasets show that DGNNet achieves excellent performance in supervised fault diagnosis tasks and outperforms baselines by a large margin in semi-supervised cases.
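DGNNet's learned graph updates are not detailed in the abstract. As a simplified classical stand-in for the underlying idea of propagating label information from a few labeled samples over a similarity graph, the sketch below runs iterative label propagation with a Gaussian-kernel graph on toy two-cluster data; the kernel, data, and iteration scheme are assumptions, not the paper's architecture:

```python
import numpy as np

def propagate_labels(X, y_known, mask, n_iter=20, sigma=1.0):
    """Spread one-hot labels from labeled to unlabeled samples over a
    Gaussian similarity graph (a simplified stand-in for learned GNN updates)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)  # row-normalised transition matrix
    Y = y_known.copy()
    for _ in range(n_iter):
        Y = P @ Y
        Y[mask] = y_known[mask]           # clamp the labeled samples
    return Y.argmax(axis=1)

# Two well-separated fault classes, one labeled sample per class.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))])
y = np.zeros((10, 2)); y[0, 0] = 1; y[5, 1] = 1
mask = np.zeros(10, bool); mask[[0, 5]] = True
pred = propagate_labels(X, y, mask)
print(pred)  # first five samples labeled 0, last five labeled 1
```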
Article
Taste perception is influenced by sensory information not only about the food itself but also about the external environment where the food is tasted. Prior studies have shown that both visual attributes of the environment (e.g., light colour, location) and the shape associated with food (e.g., plates, cutlery) can influence people's taste perception and expectations. However, previous studies are typically based on non-edible shapes, usually shown as 2D images or presented as 3D tangible objects intended to be perceived by the subjects' hands. Therefore, the effect of the mouthfeel of differently shaped foods on taste perception remains unclear. Capitalising on the advantages of virtual reality (VR) for manipulating multisensory features, we explore the effects of coloured (red, blue, neutral) virtual environments on the taste (sweet, neutral) perception of differently shaped taste samples (rounded/spiky shapes according to the Kiki-Bouba paradigm). Overall, our results showed increased ratings of sweetness when participants tasted Bouba-shaped samples (rounded) relative to Kiki-shaped samples (spiky), suggesting that tactile attributes perceived inside the mouth can influence sweetness perception. Furthermore, we concluded that lighting colour in a virtual setting might dampen experiences of sweetness; however, this effect may only be present when there is a cross-modal correspondence with taste. Based on our findings, we conclude by describing considerations for designing eating experiences in VR.
Article
As a typical application of human–machine fusion intelligence, the exoskeleton is an indispensable intelligent interaction device for virtual reality. At present, there is an increasing number of studies on virtual reality systems using exoskeleton technology, especially in the field of medical rehabilitation. In this paper, virtual reality systems that apply exoskeleton technology are, for the first time, taken as the research object. We consider three key human–machine interaction processes: recognition, perception, and feedback, and accordingly divide such systems into positioning technology, multisensory interaction, and feedback technology. This study first surveys the literature and then summarizes the technical characteristics, system architecture, and research status of each of these key areas. Finally, the three research aspects of virtual reality systems applying exoskeleton technology are summarized, reflected upon, and their prospects discussed.
Article
Olfaction has not been explored in virtual reality environments to the same extent as the visual and auditory senses. Much less research has been done with olfactory devices, and very few of them can be easily integrated into virtual reality applications. Including odor in virtual reality simulations using a chemical device involves challenges such as possible diffusion into undesired areas, slow dissipation, the definition of various parameters (e.g., concentration, frequency, and duration), and an appropriate software solution for controlling the diffusion of the odor. This paper presents a non-intrusive, mobile, low-cost, and wearable olfactory display, together with a software service that allows developers to easily create applications that include olfactory stimuli integrated with virtual reality headsets. We also present a case study conducted with 32 people to evaluate their satisfaction when using the olfactory display. Our findings indicate that our solution works as expected, producing odor properly and being easy to integrate into applications.
Article
The industrial landscape is undergoing a series of fundamental changes because of advances in cutting-edge digital technologies. Under the framework of Industry 4.0, engineers have focused their efforts on the development of new frameworks integrating digital technologies such as Big Data Analytics, Digital Twins, Extended Reality, and Artificial Intelligence to upscale modern manufacturing systems, reduce uncertainties, and cope with increased market volatility. However, in the upcoming industrial revolution, i.e., Industry 5.0, the research focus will be directed towards the new generation of human operators, the Operator 5.0. The purpose of this paper is to investigate the key technologies that will drive the realization of the Operator 5.0 and to highlight the key challenges. An additional contribution is the proposal of a framework for the training and support of shopfloor technicians in manufacturing processes based on the utilization of Mixed Reality.
Article
In this paper, we propose an HDT-driven HCPS to address human needs and repurpose the roles of humans and machines, shifting from a technology-driven approach to a human-centric approach in which humans and physical systems share their own capabilities and intelligence. It offers a new perspective on interactions between humans and physical systems, as well as on the related allocation of functionalities and responsibilities of each part within the system. Representative enabling technologies are reviewed in terms of sensing, computing and analysis, and control. Furthermore, applications of the HDT-driven HCPS as a promising solution in pandemic preventive control and explosive ordnance disposal are discussed.
Article
In recent years, social media has become a ubiquitous and integral part of social discourse. Homophily is a fundamental topic in network science and can provide insights into the flow of information and behaviours within society. Homophily refers mainly to the tendency of similar-minded people to interact with one another in social groups rather than with dissimilar-minded people. The study of homophily has been very useful in analyzing the formation of online communities. In this paper, we review and survey the effects of homophily in social networks and summarize the state-of-the-art methods proposed in recent years to identify and measure those effects in multiple types of social networks. We conclude with a critical discussion of open challenges and directions for future research.
Article
Mixed reality, as an emerging technology, can improve users' experience. Using this technology, people can interact between virtual objects and the real world. Mixed reality has enormous potential for enhancing the human cyber-physical system for different manufacturing functions: planning, designing, production, monitoring, quality control, training, and maintenance. This study aims to understand the existing development of mixed reality technology in manufacturing by analysing patent publications from the InnovationQ-Plus database. Evaluations of trends in this developing technology have focused on qualitative literature reviews and insufficiently on patent technology analytics. Patents connected to mixed reality are mapped to give technology experts a better grasp of present progress and insights for future technological development in the industry. Thus, 709 patent publications are systematically identified and analysed to discover the technological trends. In addition, we map existing patent publications to manufacturing functions and illustrate this with a technology function matrix. Finally, we identify future research in human cyber-physical system development that enhances different human senses to give users more sensations while interacting with virtual objects. We provide insight into this human cyber-physical system development in three industries: automotive, food and beverage, and textiles.
Conference Paper
This paper presents a survey informing a user-first approach to designing calming affective haptic stimuli by eliciting user preferences in different social scenarios. Prior affective haptics research presented users with stimuli and recorded emotional responses. By contrast, this work focuses on the sensations users wish to experience and how these can be simulated using haptics. The survey (n=81) investigated users' preferences in four social situations for reducing social anxiety. Using thematic analysis of the responses, we created a coding scheme of stimuli, derived from real-world experiences, to emulate with affective haptics. By cross-referencing these categories with affective haptics research, we provide recommendations to designers about which calming stimuli users wish to experience socially and how they can be implemented.
Article
In recent years, the advent of latest-generation technologies and methods has made it possible to survey, digitise and represent complex scenarios such as archaeological sites and historic buildings. Thanks to computer languages based on Visual Programming Language (VPL) and advanced real-time 3D creation platforms, this study shows the results obtained in eXtended Reality (XR) oriented to archaeological sites and heritage buildings. In particular, the scan-to-BIM process and digital photogrammetry (terrestrial and aerial) were oriented towards a digitisation process able to tell and share tangible and intangible values through the latest-generation techniques, methods and devices. The paradigm of the geometric complexity of the built heritage and new levels of interactivity between users and digital worlds were investigated and developed to favour the transmissibility of information at different levels of virtual experience and digital sharing, with the aim of archiving, telling and implementing the historical and cultural heritage that over the years risks being lost and not passed on to future generations.
Article
Multisensory integration research has allowed us to better understand how humans integrate sensory information to produce a unitary experience of the external world. However, this field is often challenged by the limited ability to deliver and control sensory stimuli, especially when going beyond audio–visual events and outside laboratory settings. In this review, we examine the scope and challenges of new technology in the study of multisensory integration in a world that is increasingly characterized as a fusion of physical and digital/virtual events. We discuss multisensory integration research through the lens of novel multisensory technologies and, thus, bring research in human–computer interaction, experimental psychology, and neuroscience closer together. Today, for instance, displays have become volumetric so that visual content is no longer limited to 2D screens, new haptic devices enable tactile stimulation without physical contact, olfactory interfaces provide users with smells precisely synchronized with events in virtual environments, and novel gustatory interfaces enable taste perception through levitating stimuli. These technological advances offer new ways to control and deliver sensory stimulation for multisensory integration research beyond traditional laboratory settings and open up new experimental possibilities in naturally occurring events in everyday life. Our review then summarizes these multisensory technologies and discusses initial insights to introduce a bridge between the disciplines in order to advance the study of multisensory integration.
Article
Sensory cues are often encountered sequentially (rather than simultaneously) in retailing, food packaging, and other consumption contexts. While prior studies on effects of sensory cues have examined scenarios where the sensory cues are encountered simultaneously, this research takes the novel approach of examining order effects of different sensory cues encountered sequentially. Specifically, four experiments examine the effects of sequentially encountered visual and olfactory sensory cues on food taste perception. We theorize and find empirical evidence that an olfactory cue benefits from first encountering a visual cue, but not vice versa. More specifically, encountering a visual cue before (vs. after) an olfactory cue (i.e., V‐O vs. O‐V sequence) results in more positive outcomes (higher taste perception, volume consumed, product recommendation, and choice). Moreover, ease of processing the olfactory cue mediates the effect of sensory cue sequence on taste perception. These findings highlight the sensory cross‐modal effects of sequential visual and olfactory cues on gustatory perceptions and have implications for consumer well‐being as well as for food/beverage packaging and for designing retail outlets and restaurants.
Article
Online retailers are increasingly using augmented reality (AR) and virtual reality (VR) technologies to solve mental and physical intangibility issues in a product evaluation. Moreover, the technologies are easily available and accessible to consumers via their smartphones. The authors conducted three experiments to examine consumer responses to technology interfaces (AR/VR and mobile apps) for hedonic and utilitarian products. The results show that AR is easier to use (vs. app), and users find AR more responsive when buying a hedonic (vs. utilitarian) product. Touch interface users are likely to have a more satisfying experience and greater recommendation intentions, as compared with AR, for buying utilitarian products. In contrast, a multisensory environment (AR) results in a better user experience for purchasing a hedonic product. Moreover, multisensory technologies lead to higher visual appeal, emotional appeal, and purchase intentions. The research contributes to the literature on computer‐mediated interactions in a multisensory environment and proposes actionable recommendations to online marketers.
Article
3D models of objects and scenes are critical to many academic disciplines and industrial applications. Of particular interest is the emerging opportunity for 3D graphics to serve artificial intelligence: computer vision systems can benefit from synthetically‐generated training data rendered from virtual 3D scenes, and robots can be trained to navigate in and interact with real‐world environments by first acquiring skills in simulated ones. One of the most promising ways to achieve this is by learning and applying generative models of 3D content: computer programs that can synthesize new 3D shapes and scenes. To allow users to edit and manipulate the synthesized 3D content to achieve their goals, the generative model should also be structure‐aware: it should express 3D shapes and scenes using abstractions that allow manipulation of their high‐level structure. This state‐of‐the‐art report surveys historical work and recent progress on learning structure‐aware generative models of 3D shapes and scenes. We present fundamental representations of 3D shape and scene geometry and structures, describe prominent methodologies including probabilistic models, deep generative models, program synthesis, and neural networks for structured data, and cover many recent methods for structure‐aware synthesis of 3D shapes and indoor scenes.
Article
In our study, we tested a combination of virtual reality (VR) and robotics in an original adjuvant method of post-stroke lower-limb gait restoration in the acute phase, using a simulation with visual and tactile biofeedback based on VR immersion and physical stimulation of the soles of patients' feet. The adjuvant therapy consisted of 10 daily sessions of 15 min each. The study showed the following significant rehabilitation progress in the Control (N = 27) vs. Experimental (N = 35) groups, respectively: 1.56 ± 0.29 (mean ± SD) vs. 2.51 ± 0.31 points on the Rivermead Mobility Index (p = 0.0286); 2.15 ± 0.84 vs. 6.29 ± 1.20 points on the Fugl-Meyer Assessment Lower Extremities scale (p = 0.0127); and 6.19 ± 1.36 vs. 13.49 ± 2.26 points on the Berg Balance scale (p = 0.0163). P-values were obtained with the Mann–Whitney U test. The simple and intuitive mechanism of rehabilitation, including the use of sensory and semantic components, allows therapy for patients with diaschisis and with afferent and motor aphasia. Its safety allows the proposed therapy to be applied at the earliest stage of a stroke. We consider the main finding of this study to be that rehabilitation with implicit interaction with the VR environment produced by the robotic action has a measurable, significant influence on the restoration of the affected motor function of the lower limbs compared with standard rehabilitation therapy.
Article
Full-text available
This paper first discusses development of new-generation intelligent manufacturing from the perspectives of the problems and challenges of the manufacturing industry, major opportunities brought by new-generation artificial intelligence, and core technologies of the new round of industrial revolution. By analyzing the evolution of intelligent manufacturing, this paper points out that the process of developing from traditional manufacturing to intelligent manufacturing is also a process of developing from the original human-physical systems (HPS) to human-cyber-physical systems (HCPS). An HCPS reveals the basic principles of intelligent manufacturing development and is the theoretical basis for supporting the development of new-generation intelligent manufacturing. Based on system integration of HCPS and intelligent manufacturing, the prospect of new-generation intelligent manufacturing is described from the perspective of the revolutionary changes brought to the manufacturing sector and to human society.
Article
Full-text available
An intelligent manufacturing system is a composite intelligent system comprising humans, cyber systems, and physical systems with the aim of achieving specific manufacturing goals at an optimized level. This kind of intelligent system is called a human–cyber–physical system (HCPS). In terms of technology, HCPSs can both reveal technological principles and form the technological architecture for intelligent manufacturing. It can be concluded that the essence of intelligent manufacturing is to design, construct, and apply HCPSs in various cases and at different levels. With advances in information technology, intelligent manufacturing has passed through the stages of digital manufacturing and digital-networked manufacturing, and is evolving toward new-generation intelligent manufacturing (NGIM). NGIM is characterized by the in-depth integration of new-generation artificial intelligence (AI) technology (i.e., enabling technology) with advanced manufacturing technology (i.e., root technology); it is the core driving force of the new industrial revolution. In this study, the evolutionary footprint of intelligent manufacturing is reviewed from the perspective of HCPSs, and the implications, characteristics, technical frame, and key technologies of HCPSs for NGIM are then discussed in depth. Finally, an outlook of the major challenges of HCPSs for NGIM is proposed.
Conference Paper
Full-text available
The current position paper discusses vital challenges related to user experience design in unsupervised, highly automated cars. These challenges are: (1) how to avoid motion sickness, (2) how to ensure users' trust in the automation, (3) how to ensure usability and support the formation of accurate mental models of the automation system, and (4) how to provide a pleasant and enjoyable experience. We argue that auditory displays have the potential to help solve these issues. While auditory displays in modern vehicles typically make use of discrete and salient cues, we argue that less intrusive continuous sonic interaction could be more beneficial for the user experience.
Article
Full-text available
With the advent of the Internet of Things and Industry 4.0 concepts, cyber-physical systems in civil engineering experience an increasing impact on structural health monitoring (SHM) and control applications. Designing, optimizing, and documenting cyber-physical system on a formal basis require platform-independent and technology-independent metamodels. This study, with emphasis on communication in cyber-physical systems, presents a metamodel for describing cyber-physical systems. First, metamodeling concepts commonly used in computing in civil engineering are reviewed and possibilities and limitations of describing communication-related information are discussed. Next, communication-related properties and behavior of distributed cyber-physical systems applied for SHM and control are explained, and system components relevant to communication are specified. Then, the metamodel to formally describe cyber-physical systems is proposed and mapped into the Industry Foundation Classes (IFC), an open international standard for building information modeling (BIM). Finally, the IFC-based approach is verified using software of the official IFC certification program, and it is validated by BIM-based example modeling of a prototype cyber-physical system, which is physically implemented in the laboratory. As a result, cyber-physical systems applied for SHM and control are described and the information is stored, documented, and exchanged on the formal basis of IFC, facilitating design, optimization, and documentation of cyber-physical systems.
Chapter
Advances in the Internet, communication technologies, and computation power have accelerated the cycle of new product development as well as supply chain efficiency in an unprecedented manner. Digital technology provides not only an important means for the optimization of production efficiency through simulations prior to the start of actual operations but also facilitates manufacturing process automation through efficient and effective automatic tracking of production data from the flow of materials, finished goods, and people to the movement of equipment and assets in the value chain. There are two major applications of digital technology in manufacturing. The first deals with the modeling, simulation, and visualization of manufacturing systems, and the second deals with the automatic "acquisition, retrieval, and processing of manufacturing data used in the supply chain." This chapter summarizes the state of the art of digital manufacturing, which is based on virtual manufacturing (VM) systems, smart manufacturing (SM) systems, and the industrial Internet of Things (IIoT). The associated technologies, their key techniques, and current research work are highlighted. In addition, the social and technological obstacles in the development of VM systems, SM systems, and IIoT-based manufacturing process automation systems, along with some practical application case studies of digital manufacturing, are also discussed. Keywords: Digital manufacturing; Smart manufacturing; Virtual manufacturing; Industrial Internet of Things (IIoT) automation; Radio frequency identification (RFID); Industry 4.0
Article
The sense of smell, olfaction, is seldom engaged in digital interactive systems, but, supported by the proper technology, olfaction might open up new interaction domains. Human olfactory experience involves active exploration, directed sniffing and nuanced judgements about odour identity, concentrations, and blends, yet to date most compact olfactory displays do not directly support these experiences. We describe the development and validation of a compact, low-cost olfactory display fitted to the hand controller of the HTC Vive Virtual Reality (VR) system that employs stepless valves to enable control of scent magnitude and blending (Fig. 1). Our olfactory display allows for concealed (i.e., unknown to the user) combinations of odours with virtual objects and contexts, making it well suited to applications involving interactions with odorous objects in virtual space for recreational, educational, scientific, or therapeutic functions. Through a user study and gas sensor analysis, we have been able to demonstrate that our device presents clear and consistent scent output, is intuitive from a user perspective, and supports gameplay interactions. We present results from a smell training game in a virtual wine tasting cellar in which the initial task of identifying wine aroma components is followed by evaluating more complex blends, allowing the player to “level up” as they proceed to higher degrees of connoisseurship. Novice users were able to quickly adapt to the display, and we found that the device affords sniffing and other gestures that add verisimilitude to olfactory experience in virtual environments. Test-retest reliability was high when participants performed the task two times with the same odours. In sum, the results suggest our olfactory display may facilitate use in game settings and other olfactory interactions.
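The magnitude and blending control enabled by the stepless valves can be sketched as a simple mapping from a desired scent blend to per-channel valve openings. The function, the proportional scheme, and the scent names below are illustrative assumptions, not the paper's calibration procedure:

```python
def blend_to_valve_openings(blend, max_opening=1.0):
    """Map a desired scent blend (component -> relative intensity)
    to per-channel valve openings in [0, max_opening].

    The strongest component is driven at max_opening and the others
    are scaled proportionally: a simple magnitude-control scheme for
    stepless valves (an assumption for illustration).
    """
    peak = max(blend.values())
    if peak <= 0:
        return {scent: 0.0 for scent in blend}
    return {scent: max_opening * level / peak for scent, level in blend.items()}

# A hypothetical "wine aroma" blend: cherry-dominant with oak and vanilla notes.
openings = blend_to_valve_openings({"cherry": 0.8, "oak": 0.4, "vanilla": 0.2})
print(openings)  # {'cherry': 1.0, 'oak': 0.5, 'vanilla': 0.25}
```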
Article
In this study, we propose a graph convolution network (GCN)-based patent-link prediction to predict technology convergence. We address the limitations of previous works, which neglect both the global information of a convergence network and the node features. We employ three features: GCN node features to represent global information, node features to characterize what information the nodes carry and how similar they are, and edge similarity to represent how frequently two nodes are connected. Considering these three categories of information, we conduct link prediction using machine learning (ML) to identify potential opportunities. To identify areas of technology convergence, we also support firm-level decision making using portfolio analysis. This study consists of two main stages: opportunity discovery, which employs both GCN-based link prediction and ML, and opportunity validation, which evaluates whether the identified technology opportunities are suitable from the firm's perspective. A case study is conducted for the mobile payment industry. A total of 17,540 patent documents with 36,871 positive links are used for GCN link prediction and ML. As a result of firm-level opportunity validation, 395 Cooperative Patent Classification (CPC) codes were predicted to be potential links to the 32 current CPC codes of the target firm. The contributions come from two main aspects. From a theoretical perspective, this study employs GCN and node features to reflect the global graph structure for technology convergence. From a practical perspective, this study suggests how to validate the identified opportunities for firm-level applications.
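The GCN propagation and link-scoring idea behind such patent-link prediction can be sketched with NumPy. The toy co-occurrence graph, node features, and (untrained, random) weights below are illustrative assumptions, not the paper's model or data:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

def link_score(emb, i, j):
    """Score a candidate link by the sigmoid of the embedding dot product."""
    return 1.0 / (1.0 + np.exp(-emb[i] @ emb[j]))

rng = np.random.default_rng(0)
# Toy co-occurrence graph over 5 hypothetical CPC codes.
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(5, 8))                       # per-code node features
emb = gcn_layer(adj, feats, rng.normal(size=(8, 4)))  # untrained weights
print(link_score(emb, 0, 1))                          # probability-like score in (0, 1)
```

In the paper's pipeline, such embeddings feed a downstream ML classifier; here the score is read directly from the dot product for brevity.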
Article
The emerging digital technologies such as virtual reality (VR) provide an alternative platform for construction safety training. In order to explore how digital-driven technologies affect the effectiveness of safety training, there is a need to empirically test the differences in performance between digital 3D/VR safety training and traditional 2D/paper approach. This research conducted a performance evaluation that emphasises both the training process and learning outcomes of trainees based on researchers’ self-developed immersive construction safety training platform. Data related to physiological indicators such as skin resistance were collected to measure safety performance before and after the training. The detailed measurement indicators included nine categories (e.g., immersion, inspiration) to form a holistic list of evaluation dimensions. The findings revealed that VR-driven immersive safety training outperformed the traditional way for trainees in terms of both process and outcome-based indicators. Results confirmed that safety training was no longer constrained by understanding or memorizing 2D information (texts and images). Instead, trainees experienced a stronger sense of embodied cognition through the immersive experience and multi-sensory engagement by interacting with the VR-driven system. By engaging the theory of embodied cognition, this research provides both the empirical evidence and in-depth analysis of how immersive virtual safety training outperforms traditional training in terms of both training process and outcomes.
Article
Advances in human-centric smart manufacturing (HSM) reflect a trend towards the integration of human-in-the-loop with technologies, to address challenges of human-machine relationships. In this context, the human-cyber-physical systems (HCPS), as an emerging human-centric system paradigm, can bring insights to the development and implementation of HSM. This study presents a systematic review of HCPS theories and technologies for HSM, with a focus on the human aspect. First, the concepts, key components, and taxonomy of HCPS are discussed. The HCPS system framework and subsystems are analyzed. Enabling technologies (e.g., domain technologies, unit-level technologies, and system-level technologies) and core features (e.g., connectivity, integration, intelligence, adaptation, and socialization) of HCPS are presented. Applications of HCPS in smart manufacturing are illustrated from the perspectives of the human in design, production, and service. This research offers key knowledge and a reference model for the human-centric design, evaluation, and implementation of HCPS-based HSM.
Article
The maturity of Industry 4.0 technologies (smart wearable sensors, the Internet of Things [IoT], cloud computing, etc.) has facilitated the iteration and digitization of rehabilitation assistive devices (RADs) and the innovative development of intelligent manufacturing systems for RADs, expanding the value-added component of smart healthcare services. The intelligent manufacturing service mode, based on the product life cycle concept, completes multi-source data analysis of the production process and the optimization of manufacturing, operation, and maintenance through the intelligent industrial Internet of Things and other means, and improves the product life cycle management and operation mechanism. The smart product-service system (PSS) realizes product value-adding by providing users with personalized products and value-added services, improves service efficiency and sustainable development, and gradually forms an Internet-product-service ecosystem. However, research on PSSs for RADs serving special populations is relatively limited. Thus, this paper provides an overview of an IoT-based production model for RADs and a smart-PSS-based development method for multimodal healthcare value-added services for special populations. Taking hand rehabilitation training devices for autistic children as a case, this paper verifies the effectiveness and availability of the proposed method. Compared with the traditional framework, the method used in this paper primarily helps evaluate rehabilitation efficacy, personalizes schemes for patients, provides auxiliary intelligent manufacturing service data and digital rehabilitation data for RAD manufacturers, and optimizes the product iteration development procedures by combining user-centered product interaction, multimodal evaluation, and value-added design. This study incorporates the iterative design of RADs into the smart PSS process to provide guidance to RAD design manufacturers.
Article
An assembly workstation converges various resources, typically including humans, to carry out parts-combination activities, with performance determined by the interactions among the resources. In the Industry 4.0 (I4.0) era, the penetration of emerging technologies leads to the intelligent networking of hyper objects with hyper-automation and hyper-connectivity, which fundamentally changes the organisation of an assembly workstation and brings it into the era of 'Assembly Workstation 4.0 (AW4.0)'. However, volatile market demands and the autonomy of hyper objects with human integration bring new challenges in reducing the uncertainty and complexity of AW4.0 systems. In order to achieve an effective and efficient orchestration among hyper objects by fully harnessing the enabling technologies of I4.0, a human-cyber-physical system (HCPS) framework for AW4.0 systems is proposed to support the intelligent networking of hyper objects and to leverage the strengths and compensate for the limitations of humans. Based on this, a spatio-temporal synchronisation (ST-Sync) strategy is introduced to achieve coordinated decision-making with consideration of customer requirements and the spatio-temporal constraints of hyper objects, with enhanced flexibility and responsiveness. Finally, a full-scale prototype is developed, and a real-life case is used to validate the potential benefits of AW4.0 systems in overall performance improvement.
Article
This paper reviews state-of-the-art smart building research through a bibliometric analysis, a content analysis, and a qualitative review. The bibliometric analysis of 364 academic papers shows that smart building is a burgeoning, interdisciplinary field with a relatively high level of international collaboration. Keyword clustering identified two major themes: (1) IoT, WSN, and cloud computing for automation control and (2) the balance between energy efficiency and human comfort based on continuous monitoring and machine learning. The content analysis statistically detected a transition from the cyber-physical system (CPS) to the human-cyber-physical system (HCPS) in smart building research. We therefore proposed an HCPS framework with three dimensions (cyber-physical scale, human needs, and human roles) to summarize current research and discover potential gaps. Under this framework, five future HCPS research directions for occupant-centered smart buildings were proposed: adaptive building envelope, integrated building management system, enhanced building energy management, adaptive thermal comfort, and microgrid adoption.
Article
The development of intelligent, data-driven product quality control systems is emerging as a key engineering technology for industrial manufacturing processes, and many studies have investigated the application of quality control to the industrial valve manufacturing process in cyber-physical systems (CPS). The purpose of this article is to provide a quality control and management system using modern electronics, information, and network technology. First, we propose an intelligent, data-driven framework model of product quality based on digital twin (DT) technology and simulation methods for CPS. Second, we emphasize that a manufacturing enterprise should accumulate data, and we offer useful advice on how to build a successful quality analysis system for the industrial valve manufacturing process in CPS. Then, as a case study, an intelligent method based on a BP neural network is constructed from numerous quality characteristics (QCs) of the mechanical and electrical components of industrial valves, and the BP network is trained on many quality failures from the manufacturing process. Finally, the results of a practical example show that the new quality control system has good accuracy and practicability.
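The BP (backpropagation) neural-network idea can be sketched on toy data. The two "quality characteristics" and the defect rule below are invented for illustration and are not the paper's dataset or network:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for valve quality data: two quality characteristics,
# label 1 = "defective" when their sum is large (an invented rule).
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, trained with plain backpropagation (BP).
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 1.0
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)          # forward pass
    p = sigmoid(h @ W2 + b2)
    grad_out = (p - y) / len(X)       # gradient of mean cross-entropy
    grad_h = grad_out @ W2.T * h * (1 - h)
    W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(axis=0)

accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

A production system would of course hold out a test set and tune the architecture; this sketch only shows the training mechanics.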
Article
Smart manufacturing offers a high level of adaptability and autonomy to meet the ever-increasing demands of product mass customization. Although digitalization has been used on the shop floor of modern factory for decades, some manufacturing operations remain manual and humans can perform these better than machines. Under such circumstances, a feasible solution is to have human operators collaborate with computational intelligence (CI) in real time through augmented reality (AR). This study conducts a systematic review of the recent literature on AR applications developed for smart manufacturing. A classification framework consisting of four facets, namely interaction device, manufacturing operation, functional approach, and intelligence source, is proposed to analyze the related studies. The analysis shows how AR has been used to facilitate various manufacturing operations with intelligence. Important findings are derived from a viewpoint different from that of the previous reviews on this subject. The perspective here is on how AR can work as a collaboration interface between human and CI. The outcome of this work is expected to provide guidelines for implementing AR assisted functions with practical applications in smart manufacturing in the near future.
Article
The human sense of smell is powerful. However, the way we use smell as an interaction modality in human–computer interaction (HCI) is limited. We lack a common reference point to guide designers’ choices when using smell. Here, we map out an olfactory design space to provide designers with such guidance. We identified four key design features: (i) chemical, (ii) emotional, (iii) spatial, and (iv) temporal. Each feature defines a building block for smell-based interaction design and is grounded in a review of the relevant scientific literature. We then demonstrate the design opportunities in three application cases. Each application (i.e., one desktop, two virtual reality implementations) highlights the design choices alongside the implementation and evaluation possibilities in using smell. We conclude by discussing how identifying those design features facilitates a healthy growth of this research domain and contributes to an intermediate-level knowledge space. Finally, we discuss further challenges the HCI community needs to tackle.
Article
Graph convolutional networks (GCNs) have attracted increasing attention in recent years. Many important tasks in graph analysis involve graph classification which aims to map a graph to a certain category. However, as the number of convolutional layers increases, most existing GCNs suffer from the problem of over-smoothing, which makes it difficult to extract the hierarchical information and global patterns of graphs when learning its representations. In this paper, we propose a multi-level coarsening based GCN (MLC-GCN) for graph classification. Specifically, from the perspective of graph analysis, we develop new insights into the convolutional architecture of image classification. Inspired by this, the two-stage MLC-GCN architecture is presented. In the architecture, we first introduce an adaptive structural coarsening module to produce a series of coarsened graphs and then construct the convolutional network based on these graphs. In contrast to existing GCNs, MLC-GCN has the advantages of learning graph representations at multiple levels while preserving the local and global information of graphs. Experimental results on multiple benchmark datasets demonstrate that the proposed MLC-GCN method is competitive with the state-of-the-art graph classification methods.
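The coarsening step at the heart of such an architecture can be illustrated with a hard, hand-fixed cluster assignment; note that the paper's coarsening module is adaptive (learned), so the assignment matrix below is purely an assumption for illustration:

```python
import numpy as np

def coarsen(adj, feats, assign):
    """Coarsen a graph with a hard cluster assignment matrix S
    (n_nodes x n_clusters): A_c = S^T A S,  X_c = S^T X.
    """
    a_c = assign.T @ adj @ assign
    np.fill_diagonal(a_c, 0)          # drop intra-cluster self-links
    return a_c, assign.T @ feats

# 4-node path graph 0-1-2-3, coarsened into clusters {0,1} and {2,3}.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.arange(8, dtype=float).reshape(4, 2)
assign = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
a_c, x_c = coarsen(adj, feats, assign)
print(a_c)  # [[0. 1.] [1. 0.]]: one inter-cluster edge between the super-nodes
print(x_c)  # [[2. 4.] [10. 12.]]: summed features per cluster
```

Convolving on the original graph and then on each coarsened graph, as in the two-stage architecture described above, yields representations at multiple levels.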
Conference Paper
Graph neural networks, which generalize deep neural network models to graph-structured data, have attracted increasing attention in recent years. They usually learn node representations by transforming, propagating, and aggregating node features, and have been proven to improve the performance of many graph-related tasks such as node classification and link prediction. To apply graph neural networks to the graph classification task, approaches that generate the graph representation from node representations are required. A common way is to globally combine the node representations; however, rich structural information is then overlooked. Thus a hierarchical pooling procedure is desired to preserve the graph structure during graph representation learning. There are some recent works on hierarchically learning graph representations, analogous to the pooling step in conventional convolutional neural networks (CNNs). However, the local structural information is still largely neglected during the pooling process. In this paper, we introduce a pooling operator based on the graph Fourier transform, which can utilize the node features and local structures during the pooling process. We then design pooling layers based on this operator, which are further combined with traditional GCN convolutional layers to form a graph neural network framework for graph classification. Theoretical analysis is provided to understand the pooling operator from both local and global perspectives. Experimental results of the graph classification task on 6 commonly used benchmarks demonstrate the effectiveness of the proposed framework.
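The graph Fourier transform underlying such a pooling operator projects node signals onto the eigenbasis of the graph Laplacian L = D - A; low-frequency components capture global structure and high-frequency ones local variation. A minimal NumPy sketch on a path graph (a toy example, not the paper's pooling layer):

```python
import numpy as np

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)     # 4-node path graph
laplacian = np.diag(adj.sum(axis=1)) - adj      # L = D - A
eigvals, eigvecs = np.linalg.eigh(laplacian)    # eigh: ascending eigenvalues

signal = np.array([1.0, 2.0, 3.0, 4.0])         # a toy one-channel node feature
spectrum = eigvecs.T @ signal                   # graph Fourier coefficients
recovered = eigvecs @ spectrum                  # inverse transform

print(np.round(eigvals, 3))                     # smallest eigenvalue is ~0 (connected graph)
print(np.allclose(recovered, signal))           # the transform is orthonormal and invertible
```

A pooling layer built on this transform can keep selected spectral components per subgraph, retaining both feature content and local structure rather than flattening everything with a global sum.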
Article
Even though full autonomy in Cyber-Physical Systems (CPSs) is a challenge that has been confronted in different application domains and industrial sectors, the current scenario still requires human intervention in these autonomous systems in order to accomplish tasks that are better performed with a human in the loop. Humans, machines, and software systems are required to interact and understand each other in order to work together in an effective and robust way. This human integration introduces a significant number of challenges and problems to be solved in order to achieve seamless and solid participation. To manage this complexity, appropriate techniques and methods must be used to help CPS developers analyze and design this kind of human-in-the-loop integration. The goal of this paper is to identify the technological challenges and limitations of integrating humans into the CPS autonomy loop and to break new ground for design solutions in order to develop what we call HiL-ACPS systems. This work defines a conceptual framework to characterize the cooperation between humans and autonomous CPSs and provides techniques for applying the framework in order to design proper human integration. The emergent autonomous car domain is considered as a running example, covering some of the current limitations of involving drivers in autonomous functionalities. Finally, to validate the proposal, an autonomous car prototype was built applying the conceptual framework. This prototype was evaluated to check whether the implemented human integration behaves as defined in its specification.
Chapter
In this paper, we propose an alarm sound recommendation system based on music generation. The recommendation system will be integrated with an application named iSmile, which is a sleep analysis and depression detection application built by the authors in previous work. We use a music generation algorithm based on GAN (Generative Adversarial Nets) as the core of the recommendation system. To the best of our knowledge, it is the first application to recommend music generated in real time rather than existing music. In the remainder of the paper, we detail the algorithm, the experiment we conducted, and the result analysis. The results show that the recommendation system can effectively generate and recommend appropriate alarm sounds according to the emotion prediction.
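As a deliberately simplified stand-in for such a GAN-based pipeline, a generate-and-rank sketch conveys the recommendation idea: produce candidate sounds, score each against a predicted emotional target, and recommend the best match. All function names, the (tempo, brightness) parameterization, and the scoring rule below are hypothetical, not the paper's model:

```python
import random

random.seed(7)

def generate_alarm_candidates(n):
    """Stand-in for the GAN generator: each candidate is a (tempo, brightness)
    pair. A real system would synthesize audio clips instead."""
    return [(random.uniform(60, 180), random.uniform(0.0, 1.0)) for _ in range(n)]

def emotion_match(candidate, target_valence):
    """Hypothetical scoring model: prefer calm (slow, dark) alarms for low
    target valence and energetic (fast, bright) ones for high valence."""
    tempo, brightness = candidate
    energy = (tempo - 60) / 120 * 0.5 + brightness * 0.5
    return 1.0 - abs(energy - target_valence)

def recommend(target_valence, n_candidates=50):
    """Generate candidates and recommend the best emotional match."""
    candidates = generate_alarm_candidates(n_candidates)
    return max(candidates, key=lambda c: emotion_match(c, target_valence))

calm_alarm = recommend(target_valence=0.2)
print(calm_alarm)  # a slow and/or dark candidate
```

In the paper's system, the generator is a trained GAN and the target comes from the emotion prediction of the iSmile application; the ranking logic stays conceptually the same.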