Article

Abstract

In order to develop novel solutions for complex systems and in increasingly competitive markets, it may be advantageous to generate large numbers of design concepts and then identify the most novel and valuable ideas. However, it can be difficult to process, review and assess thousands of design concepts. Based on this need, we develop and demonstrate an automated method for design concept assessment. In the method, machine learning technologies are first applied to extract ontological data from design concepts. Then a filtering strategy and quantitative metrics are introduced that enable creativity rating based on the ontological data. This method is tested empirically. Design concepts are crowd-generated for a variety of actual industry design problems and opportunities. Over 4,000 design concepts were generated by humans for assessment. Empirical evaluation assesses: (1) the correspondence of the automated ratings with human creativity ratings; (2) whether concepts selected using the method are highly scored by another set of crowd raters; and (3) whether high-scoring designs correlate positively with industrial technology development. The method provides a possible avenue to rate design concepts deterministically, which could harmonize design studies across different organizations. Another highlight is that a subset of designs selected automatically out of a large set of candidates was scored higher than a subset selected by humans when evaluated by a set of third-party raters.
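As a rough illustration of the kind of pipeline the abstract outlines (entity extraction followed by a quantitative creativity metric), the sketch below extracts candidate terms from a concept description with spaCy and scores the concept by the average semantic distance of its terms from the design prompt. The library choice, the en_core_web_md model, and the cosine-distance heuristic are assumptions for illustration only, not the paper's actual toolchain or metrics.

```python
# Minimal sketch: extract candidate ontological terms from a concept
# description, then score the concept by how semantically distant its
# terms are from the design prompt. Illustrative only; assumes the
# spaCy en_core_web_md model (which ships with word vectors) is installed.
import spacy

nlp = spacy.load("en_core_web_md")

def extract_terms(text: str) -> list:
    """Return noun-phrase head lemmas as a rough stand-in for ontological entities."""
    doc = nlp(text)
    return [chunk.root.lemma_.lower() for chunk in doc.noun_chunks]

def novelty_score(concept: str, prompt: str) -> float:
    """Average semantic distance between concept terms and the design prompt."""
    prompt_doc = nlp(prompt)
    terms = extract_terms(concept)
    if not terms:
        return 0.0
    distances = [1.0 - nlp(term).similarity(prompt_doc) for term in terms]
    return sum(distances) / len(distances)

if __name__ == "__main__":
    prompt = "a device to collect household food waste"
    concept = "a countertop bin that uses black soldier fly larvae to digest scraps"
    print(f"novelty ~ {novelty_score(concept, prompt):.3f}")
```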

... The task of DCE is to obtain the optimal design concept among design solution candidates after the concept generation step, and experienced designers are usually able to evaluate a few design concepts through manual comparison. However, with the diversified development of market demand and the fast advancement of technology, more and more new design concepts with various functional properties are generated to satisfy differentiated requirements, which leads to DCE becoming a complex multi-criteria decision-making (MCDM) problem with a number of inherent difficulties [3]. Thus, designers have begun to use MCDM techniques to facilitate solving complex DCE situations, and effective MCDM-based DCE is a popular research direction in conceptual design study. ...
... As mentioned before, to eliminate the vagueness and subjectivity existing in design evaluation, this paper adopts rough numbers to represent the quantitative crisp attribute values and subjective preferences in matrices V and X, and uses semantic terms (red, green, aluminium, etc.) to describe the values of qualitative attributes. Unlike the preference values (1, 3, 5, 7), each quantitative attribute has its own dimension, e.g., 0.15 MPa, 3.8 s, 25 m³/h, so the initial crisp attribute values in matrix V need to be normalized to facilitate comparison in the following way. ...
... Taking attribute values V = {red, red, green, green, blue, yellow} and corresponding rough preference values X = {[1, 3], [1, 3], [3, 5], [3, 5], [3, 7], [3, 5]} as an example, the selections of z⁺ and z⁻ are as follows: ...
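The two excerpts above describe normalizing dimensioned crisp attribute values and selecting ideal elements z⁺/z⁻ from rough preference intervals. The sketch below illustrates the general idea under simplifying assumptions: min-max normalization for benefit- and cost-like attributes, and an interval-midpoint rule for picking z⁺/z⁻. The midpoint rule is a stand-in; the cited paper defines its own four I-ISD rules.

```python
# Illustrative sketch only: min-max normalization of dimensioned crisp
# attribute values, plus a simple rule for picking ideal elements z+ / z-
# from rough preference intervals. The interval-midpoint rule is an
# assumption, not the paper's I-ISD definition.

def normalize(values, benefit=True):
    """Min-max normalize crisp attribute values so different units become comparable."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    if benefit:
        return [(v - lo) / (hi - lo) for v in values]   # benefit-like: larger is better
    return [(hi - v) / (hi - lo) for v in values]        # cost-like: smaller is better

def ideal_elements(attribute_values, rough_preferences):
    """Pick z+ / z- as the values whose rough interval midpoint is largest / smallest."""
    midpoints = [(lo + hi) / 2 for lo, hi in rough_preferences]
    z_plus = attribute_values[midpoints.index(max(midpoints))]
    z_minus = attribute_values[midpoints.index(min(midpoints))]
    return z_plus, z_minus

flow = normalize([25.0, 18.0, 32.0], benefit=True)          # e.g. flow rates in m^3/h
colors = ["red", "red", "green", "green", "blue", "yellow"]
prefs = [(1, 3), (1, 3), (3, 5), (3, 5), (3, 7), (3, 5)]
print(flow, ideal_elements(colors, prefs))
```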
Article
Design Concept Evaluation (DCE) is a crucial step in new product development. In a complex DCE task, the designer, as a decision-maker, has to make a comprehensive choice by considering both the design concept's inherent objective design factors and the external subjective preference factors from evaluators (designers, experts or customers). However, most DCE methods are limited to only one of these two factors, so they evaluate the alternatives unilaterally and may miss the optimal one. To find a more reasonable design concept, this study attempts to better reconcile objective design and subjective preference factors in the evaluation process, and proposes a new DCE method using a new integrated ideal solution definition (I-ISD) approach in a modified VIKOR model based on rough numbers, named R-VIKOR(I). Specifically, this study puts forward four definition rules to select the positive and negative ideal solution elements for benefit-like quantitative attributes, cost-like quantitative attributes, important qualitative attributes and less important qualitative attributes, respectively, by utilizing the information originating from design and preference data, and calculates the deviation between each alternative and the redefined ideal solution through rough VIKOR to obtain the best one. Three comparative experiments have been carried out to validate the performance of R-VIKOR(I) by analyzing its robustness (experiment I), comparing it with other classical DCE methods based on rough TOPSIS, rough WASPAS and rough COPRAS (experiment II) and exploring the applicability of the proposed I-ISD approach (experiment III). Experimental results verify that R-VIKOR(I) can better balance the objective design attribute values and the subjective evaluator preference values to provide a more reasonable evaluation result; in particular, the method has an obvious advantage when evaluators have different preferences for design attribute values, a common case in modern personalized product development.
... This distance is calculated as text similarity using the SAPPhIRE model. Camburn et al. (2020) evaluated the novelty of a large quantity of crowdsourced design ideas according to the semantic distance among the terms in the design idea description texts, with semantic distance derived from Freebase. Han et al. (2020) employed ConceptNet to assess the novelty of new design ideas based on the semantic distance between elemental concepts. ...
... is an inverse indicator of the originality of the new concepts that appeared for the first time in a year. It follows the spirit of a few recent studies that similarly used measures of semantic distance between words or terms as novelty indicators of design ideas (Camburn et al., 2020; Goucher-Lambert & Cagan, 2019). This metric will further allow us to detect the longitudinal change in the originality of the new concepts appearing for the first time each year. ...
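A hedged sketch of this style of novelty measure: embed the terms of an idea description and take the mean pairwise cosine distance. The GloVe vectors loaded via gensim and the simple averaging are illustrative choices, not the Freebase- or ConceptNet-based measures used in the cited studies.

```python
# Illustrative "semantic distance among terms" novelty proxy: mean pairwise
# cosine distance between embeddings of the terms in an idea description.
# GloVe vectors via gensim's downloader are an assumption for this sketch.
import itertools
import gensim.downloader as api
from scipy.spatial.distance import cosine

vectors = api.load("glove-wiki-gigaword-100")  # small pretrained embedding set

def mean_pairwise_distance(terms):
    terms = [t for t in terms if t in vectors]
    pairs = list(itertools.combinations(terms, 2))
    if not pairs:
        return 0.0
    return sum(cosine(vectors[a], vectors[b]) for a, b in pairs) / len(pairs)

idea_terms = ["drone", "umbrella", "solar", "charging"]
print(f"idea term spread ~ {mean_pairwise_distance(idea_terms):.3f}")
```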
Preprint
Full-text available
The creation of new technological concepts through design reuses, recombines, and synthesizes prior concepts to create new ones, which may lead to exponential growth of the concept space over time. However, our statistical analysis of a large-scale technology semantic network consisting of over four million concepts from patent texts found evidence of a persistent deceleration in the pace of concept creation and a decline in the originality of newly created concepts. These trends may be attributed to the limitations of human intelligence in innovating beyond an expanding space of prior art. To sustain innovation, we recommend the development and implementation of creative artificial intelligence that can augment various aspects of the innovation process, including learning, creation, and evaluation.
... Therefore, data-driven novelty evaluation metrics have been explored in recent years [98,105,106]. In particular, the semantic distance or relevancy between terms has been measured based on word embeddings to represent novelty [107,108]. Word embeddings are vector representations learned from a large corpus of textual data to encode the meaning of words [109]. ...
... Therefore, baseline verification is absent, i.e., how high a WMD score or how low a TechNet relevancy score represents good novelty. There also exist other metrics to automatically evaluate novelty and additional dimensions of new design concepts (e.g., Ref. [107]). More extensive experiments on varied subjects and alternative evaluation metrics with human evaluation are needed to discover more insightful patterns or guidelines for using generative transformers in design concept generation. ...
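For concreteness, the sketch below computes a Word Mover's Distance (WMD) novelty proxy of the kind discussed in the excerpt: the distance from a candidate concept to its nearest reference text. The embedding model and the nearest-reference aggregation are assumptions, and gensim's wmdistance additionally requires an optimal-transport backend such as POT to be installed.

```python
# Hedged sketch of a WMD-based novelty proxy: compare a generated concept
# against reference design texts and use the distance to the closest one.
# Model choice and aggregation are assumptions; wmdistance needs POT installed.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

def wmd_novelty(candidate: str, references: list) -> float:
    cand_tokens = candidate.lower().split()
    distances = [vectors.wmdistance(cand_tokens, ref.lower().split())
                 for ref in references]
    return min(distances)  # distance to the closest known design

references = ["a robotic vacuum cleaner that maps rooms with lidar"]
candidate = "a window-cleaning drone with electrostatic dust pads"
print(f"WMD to nearest reference ~ {wmd_novelty(candidate, references):.3f}")
```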
Article
Generating novel and useful concepts is essential during the early design stage to explore a large variety of design opportunities, which usually requires advanced design thinking ability and a wide range of knowledge from designers. A growing body of work on computer-aided tools has explored the retrieval of knowledge and heuristics from design data. However, such tools only provide stimuli to inspire designers from limited aspects. This study explores recent advances in natural language generation (NLG) techniques in the artificial intelligence (AI) field to automate early-stage design concept generation. Specifically, a novel approach utilizing the generative pre-trained transformer (GPT) is proposed to leverage the knowledge and reasoning from textual data and transform them into new concepts in understandable language. Three concept generation tasks are defined to leverage different knowledge and reasoning: domain knowledge synthesis, problem-driven synthesis, and analogy-driven synthesis. Experiments with both human and data-driven evaluation show good performance in generating novel and useful concepts.
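A toy illustration of problem-driven synthesis with a generative language model, in the spirit of the abstract above. GPT-2 via the Hugging Face pipeline is a stand-in; the cited work uses its own fine-tuned GPT setup, prompts, and task formats.

```python
# Toy, hedged illustration of problem-driven concept synthesis with a
# generative language model. GPT-2 is a stand-in for the fine-tuned
# models used in the cited study.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)

prompt = "Problem: reduce plastic waste in food delivery. Concept:"
outputs = generator(prompt, max_length=60, num_return_sequences=3)
for i, out in enumerate(outputs, 1):
    print(f"[{i}] {out['generated_text']}")
```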
... AI-concept benchmarking systems, capable of analysing many design proposals, evaluating them according to the parameters of novelty and level of detail, and ranking them accordingly from best to worst. (Camburn et al., 2020) (Schmitt & Weiß, 2018); (2) sketching assistant (Fan et al., 2019); (3) model generator and modifier (Oh et al., 2019); (4) facilitator (Adobe Sensei); (5) concept evaluator (Camburn et al., 2020). ...
... By taking a data-driven approach, innovators can automatically evaluate and validate a very large quantity of diverse design concepts to accelerate the design evaluation, validation, and selection process. For instance, innovators can automatically evaluate and filter many new design concepts with a pretrained commonsense knowledge base [29]. One can also train deep neural networks with the data of prior experiments or successful/failed designs in the same context for automatically and predictively evaluating the performances and value of next design concepts (e.g., Atomwise [11], [16]). ...
... The expanded design space may contain more novel and more useful candidate designs to choose and implement and, thus, benefit creativity. Meanwhile, data-trained models can automate the evaluation of many opportunities [23] and many design concepts [29] from the enlarged opportunity and design spaces, accelerate the convergent search and ensure the identification of the best innovation opportunity to design for and the best design for implementation. ...
Article
The future of innovation processes is anticipated to be more data-driven and empowered by ubiquitous digitalization, increasing data accessibility and rapid advances in machine learning, artificial intelligence, and computing technologies. While the data-driven innovation (DDI) paradigm is emerging, it has not yet been formally defined and theorized and is often confused with several other data-related phenomena. This article defines and crystalizes “DDI” as a formal innovation process paradigm, dissects its value creation, and distinguishes it from data-driven optimization, data-based innovation, and traditional innovation processes that rely purely on human intelligence. With real-world examples and theoretical framing, I elucidate what DDI entails and how it addresses uncertainty and enhances creativity in the innovation process, and present a process-based taxonomy of different DDI approaches. On this basis, I recommend strategies and actions for innovators, companies, R&D organizations, and governments to enact DDI.
... By taking a data-driven approach, innovators can automatically evaluate and validate a very large quantity of diverse design concepts to accelerate the design evaluation, validation, and selection process. For instance, innovators can automatically evaluate and filter many new design concepts with a pre-trained common-sense knowledge base [29]. One can also train deep neural networks with the data of prior experiments or successful/failed designs in the same context for automatically and predictively evaluating the performances and value of next design concepts (e.g., Atomwise [11]; [16]). ...
... The expanded design space may contain more novel and more useful candidate designs to choose and implement and thus benefit creativity. Meanwhile, data-trained models can automate the evaluation of many opportunities [23] and many design concepts [29] from the enlarged opportunity and design spaces, accelerate the convergent search and ensure the identification of the best innovation opportunity to design for and the best design for implementation. Therefore, both the divergence-oriented actions (opportunity discovery and design generation) and the convergence-oriented actions (opportunity evaluation and design evaluation) can be augmented by different suitable data-driven approaches (See the taxonomy in Table 1) to achieve greater creativity of the innovation process. ...
Preprint
Full-text available
The future of innovation processes is anticipated to be more data-driven and empowered by ubiquitous digitalization, increasing data accessibility and rapid advances in machine learning, artificial intelligence, and computing technologies. While the data-driven innovation (DDI) paradigm is emerging, it has not yet been formally defined and theorized and is often confused with several other data-related phenomena. This paper defines and crystalizes "data-driven innovation" as a formal innovation process paradigm, dissects its value creation, and distinguishes it from data-driven optimization (DDO), data-based innovation (DBI), and traditional innovation processes that rely purely on human intelligence. With real-world examples and theoretical framing, I elucidate what DDI entails and how it addresses uncertainty and enhances creativity in the innovation process, and present a process-based taxonomy of different data-driven innovation approaches. On this basis, I recommend strategies and actions for innovators, companies, R&D organizations, and governments to enact data-driven innovation.
... Han et al. [14,33] utilize ConceptNet relationships to obtain analogies and combinations for a search entity. To evaluate crowdsourced design ideas and extract entities from these, Camburn et al. [16] use the TextRazor platform, which is built using models trained on DBPedia, Freebase, etc. Chen and Krishnamurthy [15] facilitate human-AI collaboration in completing problem-formulation mind maps with the help of ConceptNet and its underlying relationships. These common-sense knowledge bases utilized by the scholars, however, were not built for engineering purposes. ...
... We report a comparison (Sec. 4.1) of a small portion (30 facts) of our knowledge graph against a similar portion of triples obtained from TechNet and ConceptNet, both of which are publicly accessible via their APIs. We also report the size and coverage of our knowledge graph against some well-known benchmarks (Sec. ...
Article
We propose a large, scalable engineering knowledge base as an integrated knowledge graph, comprising sets of (entity, relationship, entity) triples that are real-world engineering ‘facts’ found in the patent database. We apply a set of rules based on the syntactic and lexical properties of claims in a patent document to extract entities and their associated relationships that are supposedly meaningful from an engineering design perspective. Such a knowledge base is expected to support inferencing, reasoning, and recall in various engineering design tasks. The knowledge base has greater size and coverage in comparison with the knowledge bases previously used in the engineering design literature.
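A rough sketch of rule-based (entity, relationship, entity) triple extraction from a patent-style sentence using a dependency parse. The two toy rules below are assumptions for illustration; the cited knowledge base applies a much richer set of syntactic and lexical rules to patent claims.

```python
# Toy rule-based triple extraction from a patent-style sentence using a
# dependency parse: (subject, verb lemma, object). Illustrative rules only.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(sentence: str):
    doc = nlp(sentence)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, token.lemma_, o.text))
    return triples

print(extract_triples("The rotor transmits torque to the gearbox through a flexible coupling."))
```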
... Engineers have used reactive AI assistance tools in both product design (Koch and Paris-Saclay, 2017) and concurrent-engineering design (Jin and Levit, 1996). In addition, AI assistance has been used at the concept generation (Camburn, Arlitt, et al., 2020), concept evaluation (Camburn, He, et al., 2020), prototyping (Dering et al., 2018), and manufacturing (Williams et al., 2019) stages. Work has studied the impacts of AI assistance on aspects of engineering design, including decision-making, optimization, and computational tasks (Rao et al., 1999), and its effects on mental workload, effort, and frustration (Maier et al., 2020, 2021). ...
Article
Full-text available
The evolution of Artificial Intelligence (AI) and Machine Learning (ML) enables new ways to envision how computer tools will aid, work with, and even guide human teams. This paper explores this new paradigm of design by considering emerging variations of AI-Human collaboration: AI used as a design tool versus AI employed as a guide to human problem solvers, and AI agents which only react to their human counterparts versus AI agents which proactively identify and address needs. The different combinations can be mapped onto a 2×2 AI-Human Teaming Matrix which isolates and highlights these different AI capabilities in teaming. The paper introduces the matrix and its quadrants, illustrating these different AI agents and their application and impact, and then provides a road map to researching and developing effective AI team collaborators.
... Human-social approaches such as design thinking [1] and ideation techniques such as TRIZ [2] and design heuristics [3] are often used to support and guide such design activities. In the era of data-driven innovation [4], machine learning and data-driven approaches have been increasingly adopted to discover and evaluate design opportunities and to generate and evaluate design concepts by drawing information, knowledge, and inspiration from data [5][6][7]. Recent contributions have also shown the capability of cutting-edge data-driven artificial intelligence (AI), such as generative pre-trained transformers (GPT), for automatic design concept generation [8,9]. ...
Conference Paper
Full-text available
In the early stages of the design process, designers explore opportunities by discovering unmet needs and developing innovative concepts as potential solutions. From a human-centered design perspective, designers must develop empathy with people to truly understand their needs. However, developing empathy is a complex and subjective process that relies heavily on the designer's empathetic capability. Therefore, the development of empathetic understanding is intuitive, and the discovery of underlying needs is often serendipitous. This paper aims to provide insights from artificial intelligence research to indicate the future direction of AI-driven human-centered design, taking into account the essential role of empathy. Specifically, we conduct an interdisciplinary investigation of research areas such as data-driven user studies, empathetic understanding development, and artificial empathy. Based on this foundation, we discuss the role that artificial empathy can play in human-centered design and propose an artificial empathy framework for human-centered design. Building on the mechanisms behind empathy and insights from empathetic design research, the framework aims to break down the rather complex and subjective concept of empathy into components and modules that can potentially be modeled computationally. Furthermore, we discuss the expected benefits of developing such systems and identify current research gaps to encourage future research efforts.
... By comparison, the dynamic definition of the threshold is more flexible in fitting different evaluation situations. Thirdly, more MCDM models, and even neural network [53], machine learning [54] and knowledge-based [55] models, can be applied to Z-CDCE-α to satisfy the requirements of more complex design evaluation tasks with more evaluation criteria and more alternatives to be evaluated. ...
Preprint
Full-text available
The aim of customer-oriented design concept evaluation (CDCE) is to select the best product design solution from the perspective of the customer. Traditionally, most CDCE methods focus on the customer preference judgement but ignore the confidence attitude of the customer, namely the reliability of the preference. However, a customer's uncertain attitude means they are unsure about their decision and could probably change their mind. With the help of Z-numbers, more complete customer preference information is recorded (Z-preference). The main contribution of this paper is to propose a new Z-preference-based multi-criteria decision-making (MCDM) method for CDCE that retains the confidence coefficient α in the evaluation value (Z-CDCE-α) to highlight the role of the confidence attitude in CDCE, rather than simply translating the Z-preference into a regular fuzzy preference value. By integrating multiple information sources such as the preference value, the confidence coefficient α and the importance rating of each design attribute, a novel ideal solution definition (ISD) strategy is put forward. For the redefined ideal solutions, the distances of each alternative to the ideal solutions are derived to obtain the priority degree δ used to sort the alternatives. According to the proposed ISD strategy of Z-CDCE-α, the best concept is the one whose important attribute values are preferred by customers with higher certainty or least preferred by customers with lower certainty, and whose less important attribute values draw the opposite preferences and confidence attitudes. A case study and two comparison experiments are carried out to validate the reasonability and feasibility of Z-CDCE-α for CDCE by comparing it with different evaluation values, ISD rules and MCDM models.
... For example, doctors are advised by an AI to interpret medical images [3,4]; computer users employ AI prediction for the next word or phrase they want to type [5,6]. Various AIs are also applied in multiple phases of the engineering design process to solve specific design tasks alone [7][8][9]. Research results demonstrate that a well-trained AI can perform a specified design task as well as, or sometimes even better than, human designers [10,11]. However, when an AI advises human designers to solve a design problem, the results from a recent cognitive study show that the AI only improves the initial performance of low-performing teams but always hurts the performance of high-performing teams [12]. ...
Article
Full-text available
Advances in artificial intelligence (AI) offer new opportunities for human-AI cooperation in engineering design. Human trust in AI is a crucial factor in ensuring an effective human-AI cooperation, and several approaches to enhance human trust in AI have been explored in prior studies. However, it remains an open question in engineering design whether human designers have more trust in an AI and achieve better joint performance when they are deceived into thinking they are working with another human designer. This research assesses the impact of design facilitator identity ("human" vs. AI) on human designers through a human subjects study, where participants work with the same AI design facilitator and they can adopt their AI facilitator's design anytime during the study. Half of participants are told that they work with an AI, and the other half of participants are told that they work with another human participant but in fact they work with the AI design facilitator. The results demonstrate that, for this study, human designers adopt their facilitator's design less often on average when they are deceived about the identity of the AI design facilitator as another human designer. However, design facilitator identity does not have a significant impact on human designers' average performance, perceived workload, and perceived competency and helpfulness of their design facilitator in the study. These results caution against deceiving human designers about the identity of an AI design facilitator in engineering design.
... However, manual creation of the knowledge base (created by examining the products obtained from popular e-commerce sites) prolongs the process. Camburn et al. (2020) studied text classification from crowdsourcing. Large databases of ideas obtained were scanned with an unsupervised neural network, and success levels were evaluated. ...
Article
Having passed the primitive phases and starting to revolutionize many different fields in some way, artificial intelligence is on its way to becoming a disruptive technology. It is also foreseen to totally change human-centred traditional engineering design approaches. Although still in the early phases, AI-powered engineering applications enable engineers to work with ambiguous design parameters and solve complex engineering problems not otherwise possible with traditional design methods. This work attempts to shine a light on current progress and future research trends in AI applications in design and engineering design concepts, covering the last 15 years, which is the ramp-up period for AI. Methods such as machine learning, genetic algorithms, and fuzzy logic have been carefully examined from an engineering design perspective. AI-powered design studies have been categorized and critically reviewed for various design stages such as inspiration, idea and concept generation, evaluation, optimization, decision-making, and modelling. As an overall result of this review, we can confidently say that interest in data-based design methods and Explainable Artificial Intelligence (XAI) has increased in recent years. Furthermore, the use of AI methods in engineering design applications helps to obtain efficient, fast, accurate, and comprehensive results. Especially with deep learning methods and their combinations, situations where human capacity is insufficient can be addressed efficiently. However, choosing the right AI method for the design problem under consideration is significantly important for such successful results. Hence, we have given an outline perspective on choosing the right AI method for design problems based on the literature outcomes.
... Considering UGC is a typical type of unstructured data that contains much noise, the identification efficiency is expected to be low. Therefore, NER application programming interfaces (e.g., Textrazor [44]) pre-trained by generic KGs are selected as they outperform other methods for entity recognition across general human knowledge domains [45,46]. Based on word types, there are two types of potential PUC entities. ...
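As a hedged illustration of this step, the snippet below runs an off-the-shelf NER model over a piece of user-generated content and lists candidate context terms. spaCy stands in for the commercial TextRazor API mentioned in the excerpt, and the choice of candidate terms is illustrative.

```python
# Hedged sketch: recognize candidate usage-context entities in user-generated
# content with an off-the-shelf NER model (spaCy stands in for TextRazor here).
import spacy

nlp = spacy.load("en_core_web_sm")

review = "The robot vacuum keeps getting stuck on the shag carpet in my Boston apartment."
doc = nlp(review)
for ent in doc.ents:
    print("named entity:", ent.text, ent.label_)       # e.g. Boston -> GPE
for chunk in doc.noun_chunks:
    print("candidate context term:", chunk.text)
```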
Article
Full-text available
User-driven customization is a particular design paradigm where customers act as co-designers to configure products based on their own needs. However, due to insufficient product usage experience, customers may design a product incompatible with their environment and needs. Such incompatibility can negatively affect the performance of some customized features or even cause product failure. As a result, customers may hesitate to customize products because additional complexities and uncertainties are perceived. Product usage context (PUC), as all the environment and application factors that affect customer needs and product performance, can be used to facilitate customer co-design in user-driven customization. Identifying individual customers' PUC can help customers foresee potential design failures, make more holistic decisions, and be confident with their designs. Against this background, this paper proposes a PUC knowledge graph (PUCKG) construction method using user-generated content (UGC). The proposed method can convert crowdsourced corner cases into structured PUCKG to support personal PUC prediction, summarization, and reasoning. A case study of robot vacuum cleaners is conducted to validate the efficacy of the proposed method.
... Artificial intelligence (AI) assistance methods have proven to be efficient in this area, supporting engineering teams in completing such challenging tasks rapidly and effectively. Engineers have used AI assistance tools to design products and explore the solution space more rapidly [19] and at different stages of the design process, including concept generation [20], concept evaluation [21], and prototyping [22]. However, human-AI collaboration can also restrict team performance. ...
Conference Paper
Full-text available
Managing the design process of teams has been shown to considerably improve problem-solving behaviors and resulting final outcomes. Automating this activity presents significant opportunities in delivering interventions that dynamically adapt to the state of a team to reap the most impact. In this work, an Artificial Intelligence (AI) agent is created to manage the design process of engineering teams in real time, tracking features of teams’ actions and communications during a complex design and path-planning task with multidisciplinary team members. Teams are also placed under the guidance of human process managers for comparison. Regarding outcomes, teams perform equally as well under both types of management, with trends towards even superior performance from the AI-managed teams. The managers’ intervention strategies and team perceptions of those strategies are also explored, illuminating some intriguing similarities. Both the AI and human process managers focus largely on communication-based interventions, though differences start to emerge in the distribution of interventions across team roles. Furthermore, team members perceive the interventions from both the AI and the human manager as equally relevant and helpful, and believe the AI agent to be just as sensitive to the needs of the team. Thus, the overall results show that the AI manager agent introduced in this work matches the capabilities of humans, showing potential in automating the management of a complex design process.
... For example, doctors are advised by an AI to interpret medical images [3,4]; computer users employ AI prediction for the next word or phrase they want to type [5,6]. Various AIs are also applied in multiple phases of the engineering design process to solve specific design tasks alone [7][8][9]. Research results demonstrate that a well-trained AI can perform a specified design task as well as, or sometimes even better than, human designers [10,11]. However, when an AI advises human designers to solve a design problem, the results from a recent cognitive study show that the AI only improves the initial performance of low-performing teams but always hurts the performance of high-performing teams [12]. ...
Conference Paper
Full-text available
Advances in artificial intelligence (AI) offer new opportunities for human-AI collaboration in engineering design. Human trust in AI is a crucial factor in ensuring an effective human-AI collaboration, and several approaches to enhance human trust in AI have been suggested in prior studies. However, it remains an open question in engineering design whether a strategy of deception about the identity of an AI teammate can effectively calibrate human trust in AI and improve human-AI joint performance. This research assesses the impact of the strategy of deception on human designers through a human subjects study where half of participants are told that they work with an AI teammate (i.e., without deception), and the other half of participants are told that they work with another human participant but in fact they work with an AI teammate (i.e., with deception). The results demonstrate that, for this study, the strategy of deception improves high proficiency human designers' perceived competency of their teammate. However, the strategy of deception does not raise the average number of team collaborations and does not improve the average performance of high proficiency human designers. For low proficiency human designers, the strategy of deception does not change their perceived competency and helpfulness of their teammate, and further reduces the average number of team collaborations while hurting their average performance at the beginning of the study. The potential reasons behind these results are discussed with an argument against using the strategy of deception in engineering design.
... Here, shape constraints need to match mechanical performance as well as manufacturing and material constraints, and AI algorithms can intervene to generate the best forms to satisfy these needs (Tool 34, Autodesk 2021). In the evaluation activity, AI can provide a way to assess the mechanical/functional performance of several alternative instances of a product (Tool 36, Camburn et al., 2020); moreover, we find here the second leg of some of the aforementioned data-driven design tools (05), as well as a tool that spans both the ideation and evaluation activities (Tool 10, Kwonsang et al., 2021). This indicates how the data prowess of AI technologies can provide a way to anticipate the evaluation phase, by incorporating it with earlier phases of the design process. ...
Chapter
We are witnessing a growing trend in the development of AI-enabled design tools. Some of these are already focussing on improving and replacing design activities. This field is so recent and fermenting that it lacks a state of the art. Thus, we created a preliminary overview by searching and systematizing current AI-enabled design tools. To do so, we collected and mapped the distribution of existing/under-development design tools on the design process. It emerged that only a few AI applications have taken hold in design so far, and many others only exist as research or concepts. Our study highlights how current AI-enabled design tools cover mostly the ideation and development phases, uncovering areas where AI can be leveraged to augment the design process. Finally, it shows what types of AI applications are currently being adopted in design-related activities, paving the way for the investigation of unexplored opportunities. Keywords: Artificial Intelligence, Design process, Design tools, AI tools
... Liu et al. (2020, p. 6) summarise 1757 scientific articles (solutions to a transmission problem) by building Word2Vec-based semantic networks around the central keywords - {transmission, line, location, measurement, sensor and wave}. Camburn et al. (2020a, 2020b) utilise HDBSCAN for clustering crowdsourced concepts and TextRazor for extracting entities and topics from these. ...
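A minimal sketch of clustering crowdsourced concept texts with HDBSCAN, in the spirit of the pipeline cited above. The sentence-embedding model and the tiny toy corpus are assumptions; the original work clusters its own concept representations.

```python
# Hedged sketch: embed crowdsourced concept descriptions and cluster them
# with HDBSCAN. The embedding model choice is an assumption for illustration.
import hdbscan
from sentence_transformers import SentenceTransformer

concepts = [
    "a foldable drone that delivers medicine to rural clinics",
    "a fixed-wing drone for rural medical deliveries",
    "a smart fridge that tracks expiry dates",
    "a refrigerator camera that warns about spoiled food",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(concepts)

clusterer = hdbscan.HDBSCAN(min_cluster_size=2, metric="euclidean")
labels = clusterer.fit_predict(embeddings)
for concept, label in zip(concepts, labels):
    print(label, concept)
```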
Article
Full-text available
We review the scholarly contributions that utilise natural language processing (NLP) techniques to support the design process. Using a heuristic approach, we gathered 223 articles that are published in 32 journals within the period 1991–present. We present state-of-the-art NLP in-and-for design research by reviewing these articles according to the type of natural language text sources: internal reports, design concepts, discourse transcripts, technical publications, consumer opinions and others. Upon summarising and identifying the gaps in these contributions, we utilise an existing design innovation framework to identify the applications that are currently being supported by NLP. We then propose a few methodological and theoretical directions for future NLP in-and-for design research.
... With the advance of artificial intelligence (AI) systems, AI has increasingly been proving its usefulness in engineering design, including areas such as customer preference identification (Chen et al., 2013), concept evaluation (Camburn et al., 2020), and manufacturing (Williams et al., 2019). As of now, however, human designers remain in the loop as their creativity and agility are yet to be reproduced by an AI and are still crucial in the design process (Song et al., 2020). ...
Article
Full-text available
For successful human-artificial intelligence (AI) collaboration in design, human designers must properly use AI input. Some factors affecting that use are designers’ self-confidence and competence and those variables' impact on reliance on AI. This work studies how designers’ self-confidence before and during teamwork and overall competence are associated with their performance as teammates, measured by AI reliance and overall team score. Results show that designers’ self-confidence and competence have very different impacts on their collaborative performance depending on the accuracy of AI.
... One reason might be the possible overlapping of the areas, e.g. engineers are increasingly adopting data science for design automation applications (Camburn et al., 2020;Jiang et al., 2022). About a quarter are using digital engineering methods in their daily business and nearly half of the companies are in a pilot or concept phase of integrating digital engineering methods. ...
Article
Full-text available
Digital Engineering is an emerging trend and aims to support engineering design by integrating computational technologies like design automation, data science, digital twins, and product lifecycle management. To enable alignment of industrial practice with state of the art, an industrial survey is conducted to capture the status and identify obstacles that hinder implementation in the industry. The results show companies struggle with missing know-how and available experts. Future work should elaborate on methods that facilitate the integration of Digital Engineering in design practice.
... Research has shown that AI-assistive technologies can significantly improve problem-solving and learning outcomes. These benefits have been instrumental across a variety of domains and applications, such as instructional agents in educational tutoring (Roll et al., 2014; Hu and Taylor, 2016), design problem-solving and exploring complex design spaces (Camburn et al., 2020; Koch, 2017; Schimpf et al., 2019), cognitive assistants (Graesser et al., 2001; Costa et al., 2018), and the facilitation of collaboration (Dellermann et al., 2019; Gunning et al., 2019). Ginni Rometty, the Chief Executive Officer of IBM, argued at the 2017 World Economic Forum (Lewkowicz, 2020) that instead of fully replacing humans, AI should augment humans, thus setting the stage for human-AI hybrid teaming (Sadiku and Musa, 2021). ...
Article
Full-text available
This work studies the perception of the impacts of AI and human process managers during a complex design task. Although performance and perceptions by teams that are AI- versus human-managed are similar, we show that how team members discern the identity of their process manager (human/AI) impacts their perceptions. They perceive the interventions as significantly more helpful, and the manager as more sensitive to the needs of the team, if they believe they are being managed by a human. Further results provide deeper insights into automating real-time process management and the efficacy of AI to fill that role.
... Although there are optimization algorithms that are commonly used to aid the design process (namely size, shape and topology optimization), the process of designing a structure or part is still mostly manual and iterative. There is, however, some research in the field, such as the use of generative adversarial networks (in the works of Oh et al. [4] and Shu et al. [5]), machine learning (Sharpe et al. [6] and Camburn et al. [7]), and generative design (Oh et al. [8]), among others. Therefore, the field would highly benefit from a method that could streamline the structural design process from its inception. ...
Conference Paper
Using beams as a modeling and design tool in structural design has long been displaced by more recent numerical methods, such as finite element analysis and structural optimization, and those concepts have become more restricted to the design of trusses and shafts. But is there still room for them to be applied in the contemporary design of continuum structures? This research investigates some possible applications of beam theory and beam sizing concepts when used along with contemporary technologies such as topology optimization, additive manufacturing and numerical methods, and how they could impact the structural design process.
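As a small, self-contained example of the kind of beam-sizing reasoning the abstract refers to, the snippet below checks the section modulus and second moment of area required for a simply supported beam under a midspan load. All numbers are illustrative placeholders, not values from the paper.

```python
# Quick beam-theory sizing check of the kind that can still guide a modern
# structural design workflow. All inputs are illustrative placeholders.
F = 5_000.0          # N, midspan point load
L = 1.2              # m, span of a simply supported beam
E = 210e9            # Pa, steel Young's modulus
sigma_allow = 150e6  # Pa, allowable bending stress
delta_max = 0.002    # m, allowable midspan deflection

M_max = F * L / 4.0                       # maximum bending moment at midspan
S_req = M_max / sigma_allow               # required section modulus (strength)
I_req = F * L**3 / (48 * E * delta_max)   # required second moment (stiffness)

print(f"required section modulus: {S_req * 1e6:.1f} cm^3")
print(f"required second moment:   {I_req * 1e8:.1f} cm^4")
```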
... AI assistance for design allows human designers to work faster with increased effectiveness and efficiency, thereby improving the company's competitiveness in today's fast-evolving market. For example, designers have used AI tools to design products and explore the solution space more rapidly [3], and different AI approaches have been used to support different stages of the engineering design process, including concept generation [4], concept evaluation [5], prototyping [6], and manufacturing [7]. Moreover, research involving 1500 companies where humans and AI worked together found a significant improvement in their overall performance [8,9]. ...
Conference Paper
Full-text available
As Artificial Intelligence (AI) assistance tools become more ubiquitous in engineering design, it becomes increasingly necessary to understand the influence of AI assistance on the design process and design effectiveness. Previous work has shown the advantages of incorporating AI design agents to assist human designers. However, the influence of AI assistance on the behavior of designers during the design process is still unknown. This study examines the differences in participants’ design process and effectiveness with and without AI assistance during a complex drone design task using the HyForm design research platform. Data collected from this study is analyzed to assess the design process and effectiveness using quantitative methods, such as Hidden Markov Models and network analysis. The results indicate that AI assistance is most beneficial when addressing moderately complex objectives but exhibits a reduced advantage in addressing highly complex objectives. During the design process, the individual designers working with AI assistance employ a relatively explorative search strategy, while the individual designers working without AI assistance devote more effort to parameter design.
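A hedged sketch of the Hidden-Markov-Model style of process analysis mentioned in the abstract: infer latent design-process states from per-interval activity counts. The feature encoding, the three-state choice, and the Gaussian emission model are assumptions for illustration; the study's actual encoding of HyForm actions is not reproduced here.

```python
# Hedged sketch: fit an HMM over per-interval design-activity counts to infer
# latent process states. Feature encoding and state count are assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

# rows = time windows, columns = counts of (parameter edits, new components, chat messages)
X = np.array([
    [1, 4, 0], [0, 5, 1], [2, 3, 0],   # exploration-like activity
    [6, 1, 2], [7, 0, 3], [5, 1, 4],   # refinement/coordination-like activity
    [3, 2, 5], [2, 1, 6], [1, 0, 7],
], dtype=float)

model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100, random_state=0)
model.fit(X)
states = model.predict(X)
print("inferred latent states:", states)
```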
... In the engineering design literature, network maps of technology domains [40,41] and semantic networks [54,55] have been created and used to guide the retrieval of analogical design stimuli based on the quantified knowledge distance between domains [15] or the semantic distance between concepts [70]. Meanwhile, the most frequently used knowledge bases are large pre-trained commonsense knowledge graphs such as WordNet, ConceptNet, and FreeBase [120,121]. ...
Conference Paper
Full-text available
Design-by-Analogy (DbA) is a design methodology that draws inspiration from a source domain to a target domain to generate new solutions to problems or designs, which can help designers mitigate design fixation and improve design ideation outcomes. Recently, increasingly available design databases and rapidly advancing data science and artificial intelligence technologies have presented new opportunities for developing data-driven methods and tools for DbA support. Herein, we survey prior data-driven DbA studies and categorize and analyze each study according to its data, methods and applications in four categories: analogy encoding, retrieval, mapping, and evaluation. Based on this structured literature analysis, this paper elucidates the state of the art of data-driven DbA research to date and benchmarks it against the frontier of data science and AI research to identify promising research opportunities and directions for the field.
... The parameters included in the questionnaire were as follows: 1) relevance, 2) uniqueness, 3) clarity, 4) choice of colours, 5) sketching ability, 6) language processing, and 7) narration (Camburn et al., 2020; Demirkan and Afacan, 2012; Chaudhuri et al., 2020; Chaudhuri et al., 2021; Takai et al., 2015; Schumann et al., 1996; Berbague et al., 2021). Firstly, relevance verifies whether a solution is appropriate for a question. ...
Article
An inherent criterion of evaluation in Design education is novelty. Novelty is a measure of newness in solutions, evaluated by relative comparison with a frame of reference. Evaluating novelty is subjective and generally depends on experts' referential metrics based on their knowledge and persuasion. Pedagogues compare and contrast solutions for cohorts of students in mass examinations for admission to Design schools. A large number of students participate in such mass examinations, and in situations like this, examiners are confronted with multiple challenges in subjective evaluation, such as: 1) errors encountered in evaluation due to stipulated timelines, 2) errors encountered due to prolonged working hours, and 3) errors encountered due to the stress of performing a repeated task on a large scale. Pedagogues remain ever-inquisitive and vigilant about the evaluation process being consistent and accurate, given the monotony of the repeated task. To mitigate these challenges, a computational model is proposed for automating the evaluation of novelty in image-based solutions. This model is developed through mixed-method research, where features for evaluating novelty are investigated by conducting a survey study. Further, these features were utilized to evaluate novelty and generate scores for image-based solutions using Computer Vision (CV) and Deep Learning (DL) techniques. The performance metrics of the model reveal a negligible difference between the scores of experts and the scores of the proposed model. This comparative analysis of the proposed model against human experts confirms the competence of the devised model and would go a long way towards establishing the trust of pedagogues by ensuring reduced error and stress during the evaluation process.
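One possible way to operationalize relative novelty scoring for image-based solutions, sketched under assumptions: embed each sketch with a pretrained CNN and treat distance from the rest of the cohort as novelty. ResNet-18 features and cosine similarity are illustrative stand-ins for the paper's own CV/DL feature set tied to the surveyed criteria.

```python
# Illustrative sketch: score relative novelty of image-based solutions by
# embedding each image with a pretrained CNN and measuring how far it sits
# from the rest of the cohort. Dummy solid-color images stand in for sketches.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
backbone = torch.nn.Sequential(*list(models.resnet18(weights=weights).children())[:-1]).eval()
preprocess = weights.transforms()

def embed(img: Image.Image) -> torch.Tensor:
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0)).flatten()

cohort = [Image.new("RGB", (224, 224), c) for c in ("white", "lightgray", "red")]
features = [embed(img) for img in cohort]

for i, f in enumerate(features):
    sims = [torch.nn.functional.cosine_similarity(f, g, dim=0)
            for j, g in enumerate(features) if j != i]
    novelty = 1.0 - torch.stack(sims).mean().item()
    print(f"solution {i}: novelty ~ {novelty:.3f}")
```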
... Artificial intelligence (AI) assistance methods have proven to be efficient in this area, supporting engineering teams in completing such challenging tasks rapidly and effectively. Engineers have used AI assistance tools to design products and explore the solution space more rapidly [186] and at different stages of the design process, including concept generation [187], concept evaluation [188], prototyping [189], manufacturing [190], and concurrent-engineering design [191]. However, human-AI collaboration can also restrict team performance. ...
Thesis
Teams are a major facet of engineering and are commonly thought to be necessary when solving dynamic and complex problems, such as engineering design tasks. Even though teams collectively bring a diversity of knowledge and perspectives to problem solving, previous work has demonstrated that in certain scenarios, such as in language-based and configuration design problems, the production by a team is inferior to that of a similar number of individuals solving independently (i.e., nominal teams). Aid in the form of design stimuli catalyzes group creativity and helps designers overcome impasses. However, methods for applying stimuli in the engineering design literature are largely static; they do not adapt to the dynamics of either the designer or the design process, both of which evolve throughout the problem-solving process. Thus, the overarching goal of this dissertation is to explore, better understand, and facilitate problem solving computationally, via adaptive process management. This dissertation first compares individual versus group problem solving within the domain of engineering design. Through a behavioral study, our results corroborate previous findings, exhibiting that individuals outperform teams in the overall quality of their design solutions, even in this more free-flowing and explorative setting of conceptual design. Exploiting this result, we consider and explore whether a human process manager can lessen this underperformance of design teams compared to nominal teams, and help teams overcome potential deterrents that may be contributing to their inferior performance. The managerial interactions with the design teams are investigated and post-study interviews with the human process managers are conducted, in an attempt to uncover some of the cognitive rationale and strategies that may be beneficial throughout problem solving. Motivated by these post-study interviews, a topic-modeling approach then analyzes team cognition and the impact of these process manager interventions. The results from this approach show that the impacts of these interventions can be computationally detected through team discourse. Overall, these studies provide a conceptual basis for the detection and facilitation of design interventions based on real-time discourse data. Next, two novel frameworks are studied, both of which take steps towards tracking features of design teams and utilizing that information to intervene. The first study analyzes the impact of modulating the distance of design stimuli from a designer's current state, in this case, their current design solution, within a broader design space. Utilizing semantic comparisons between their current solution and a broad database of related example solutions, designers receive computationally selected inspirational stimuli midway through a problem-solving session. Through a regression analysis, the results exhibit increased performance when capturing their design state and providing increased stimulus quality. The second framework creates an artificial intelligence process manager agent to manage the design process of engineering teams in real time, tracking features of teams' actions and communications during a complex design and path-planning task with multidisciplinary team members. Teams are also placed under the guidance of human process managers for comparison.
Across several dimensions, the overall results show that the AI manager agent introduced matches the capabilities of the human managers, showing potential for automating the management of a complex design process. Before-and-after analyses of the interventions indicate mixed adherence to the different types of interventions, as reflected in the intended process changes in the teams, and regression analyses show the impact of different interventions. Overall, this dissertation lays the groundwork for the computational development and deployment of adaptive process management, with the goal of making engineering design as efficient as possible.
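The dissertation above selects inspirational stimuli by comparing a designer's current solution semantically against a database of example solutions. The sketch below illustrates that general idea only; it uses TF-IDF vectors as a stand-in for whatever semantic model the dissertation actually employs, and the solution texts and target distance are made-up examples.

```python
# Hypothetical sketch: pick a stimulus at a chosen semantic distance from the
# designer's current solution. TF-IDF is an assumed stand-in representation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

current_solution = "a drone that drops seed pods over burned forest areas"
stimulus_database = [
    "crop duster aircraft spraying fields",
    "bird dispersing seeds through droppings",
    "3d printer extruding material layer by layer",
    "submarine ballast tanks controlling depth",
]

vectors = TfidfVectorizer().fit_transform([current_solution] + stimulus_database)
similarities = cosine_similarity(vectors[0], vectors[1:]).ravel()
distances = 1.0 - similarities

# Choose the stimulus closest to a target "moderately far" distance of 0.7.
target_distance = 0.7
best_idx = min(range(len(distances)), key=lambda i: abs(distances[i] - target_distance))
print(stimulus_database[best_idx], round(distances[best_idx], 2))
```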
... Artificial intelligence (AI) assistance methods have proven to be efficient in this area, supporting engineering teams in completing such challenging tasks rapidly and effectively. Engineers have used AI-assistance tools to design products and explore the solution space more rapidly [19] and at different stages of the design process, including concept generation [20], concept evaluation [21], prototyping [22], manufacturing [23], and concurrent-engineering design [24]. However, human-AI collaboration can also restrict team performance. ...
Article
Full-text available
Managing the design process of teams has been shown to considerably improve problem-solving behaviors and resulting final outcomes. Automating this activity presents significant opportunities in delivering interventions that dynamically adapt to the state of a team in order to reap the most impact. In this work, an artificially intelligent (AI) agent is created to manage the design process of engineering teams in real time, tracking features of teams' actions and communications during a complex design and path-planning task with multidisciplinary team members. Teams are also placed under the guidance of human process managers for comparison. Regarding outcomes, teams perform equally as well under both types of management, with trends towards even superior performance from the AI-managed teams. The managers' intervention strategies and team perceptions of those strategies are also explored, illuminating some intriguing similarities. Both the AI and human process managers focus largely on communication-based interventions, though differences start to emerge in the distribution of interventions across team roles. Furthermore, team members perceive the interventions from both the AI and human managers as equally relevant and helpful, and believe the AI agent to be just as sensitive to the needs of the team. Thus, the overall results show that the AI manager agent introduced in this work is able to match the capabilities of humans, showing potential in automating the management of a complex design process.
... Furthermore, the knowledge contained in academic papers and patents is usually not up-to-the-minute, as it is time-consuming to publish papers and file patents. In recent years, there has been emerging interest in applying crowdsourcing approaches to create databases for supporting engineering design activities. For example, Goucher-Lambert and Cagan [49] and He et al. [34] used crowdsourced idea descriptions as sources of design stimulation for supporting idea generation; Forbes et al. [95] introduced a crowdsourcing approach to construct a knowledge base for product innovation; and Camburn et al. [96] employed crowdsourcing to gather actual industry design concepts. Crowdsourcing produces massive, diverse and up-to-the-minute knowledge in a cost-effective manner, which makes it a promising choice for constructing semantic networks for engineering design. ...
Article
Full-text available
In the past two decades, there has been increasing use of semantic networks in engineering design for supporting various activities, such as knowledge extraction, prior art search, idea generation and evaluation. Leveraging large-scale pre-trained graph knowledge databases to support engineering design-related natural language processing (NLP) tasks has attracted a growing interest in the engineering design research community. Therefore, this paper aims to provide a survey of the state-of-the-art semantic networks for engineering design and propositions of future research to build and utilize large-scale semantic networks as knowledge bases to support engineering design research and practice. The survey shows that WordNet, ConceptNet and other semantic networks, which contain common-sense knowledge or are trained on non-engineering data sources, are primarily used by engineering design researchers to develop methods and tools. Meanwhile, there are emerging efforts in constructing engineering and technical-contextualized semantic network databases, such as B-Link and TechNet, through retrieving data from technical data sources and employing unsupervised machine learning approaches. On this basis, we recommend six strategic future research directions to advance the development and uses of large-scale semantic networks for artificial intelligence applications in engineering design.
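Since the survey above highlights ConceptNet as one of the most frequently used semantic networks in design research, the following sketch shows one way to query its public web API for the relations around a term. The endpoint and JSON fields follow ConceptNet's published documentation, but availability and rate limits may vary, and the term used here is only an example.

```python
# Minimal sketch: retrieve a few ConceptNet edges for an engineering term via
# the public API at api.conceptnet.io (documented endpoint, assumed reachable).
import requests

resp = requests.get("http://api.conceptnet.io/c/en/gear", timeout=10)
data = resp.json()

for edge in data.get("edges", [])[:5]:
    rel = edge["rel"]["label"]       # e.g., "IsA", "UsedFor", "RelatedTo"
    start = edge["start"]["label"]
    end = edge["end"]["label"]
    print(f"{start} --{rel}--> {end}")
```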
... AI assistance for design allows human designers to work faster with increased effectiveness and efficiency, thereby improving a company's competitiveness in today's fast-evolving market. For example, designers have used AI tools to design products and explore the solution space more rapidly [3], and different AI approaches have been used to support different stages of the engineering design process, including concept generation [4], concept evaluation [5], prototyping [6], and manufacturing [7]. Moreover, research involving 1500 companies where humans and AI worked together found a significant improvement in their overall performance [8,9]. ...
Article
Full-text available
As Artificial Intelligence (AI) assistance tools become more ubiquitous in engineering design, it becomes increasingly necessary to understand the influence of AI assistance on the design process and design effectiveness. Previous work has shown the advantages of incorporating AI design agents to assist human designers. However, the influence of AI assistance on the behavior of designers during the design process is still unknown. This study examines the differences in participants' design process and effectiveness with and without AI assistance during a complex drone design task using the HyForm design research platform. Data collected from this study is analyzed to assess the design process and effectiveness using quantitative methods, such as Hidden Markov Models and network analysis. The results indicate that AI assistance is most beneficial when addressing moderately complex objectives but exhibits a reduced advantage in addressing highly complex objectives. During the design process, the individual designers working with AI assistance employ a relatively explorative search strategy, while the individual designers working without AI assistance devote more effort to parameter design.
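The study above analyzes design processes with Hidden Markov Models over sequences of designer actions. The sketch below is an illustrative stand-in for that kind of pipeline, not the study's actual code: it assumes hmmlearn (version 0.3 or later, where discrete observations use CategoricalHMM) and a made-up action coding scheme.

```python
# Illustrative sketch: fit an HMM to coded design-action sequences.
import numpy as np
from hmmlearn.hmm import CategoricalHMM

# Hypothetical action codes: 0 = parameter tweak, 1 = topology change,
# 2 = simulation run, 3 = communication event.
sequences = [
    [0, 0, 2, 1, 2, 3, 0, 2],
    [1, 1, 2, 0, 3, 3, 2, 0, 0],
]
X = np.concatenate(sequences).reshape(-1, 1)
lengths = [len(s) for s in sequences]

model = CategoricalHMM(n_components=3, n_iter=100, random_state=0)
model.fit(X, lengths)

# Decode the most likely hidden "design-state" sequence for the first designer.
states = model.predict(np.array(sequences[0]).reshape(-1, 1))
print(states)
```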
... In the engineering design literature, the network map of technology domains [40,41] and semantic networks [54,55] have been created and used to guide the retrieval of analogical design stimuli based on the quantified knowledge distance between domains [15] or the semantic distance between concepts [70]. Meanwhile, the most frequently used knowledge bases are the large pretrained commonsense knowledge graphs such as WordNet, ConceptNet and FreeBase [120,121]. ...
Article
Full-text available
Design-by-Analogy (DbA) is a design methodology wherein new solutions, opportunities or designs are generated in a target domain based on inspiration drawn from a source domain; it can benefit designers in mitigating design fixation and improving design ideation outcomes. Recently, the increasingly available design databases and rapidly advancing data science and artificial intelligence technologies have presented new opportunities for developing data-driven methods and tools for DbA support. In this study, we survey existing data-driven DbA studies and categorize individual studies according to the data, methods, and applications in four categories, namely, analogy encoding, retrieval, mapping, and evaluation. Based on both nuanced organic review and structured analysis, this paper elucidates the state of the art of data-driven DbA research to date and benchmarks it with the frontier of data science and AI research to identify promising research opportunities and directions for the field. Finally, we propose a future conceptual data-driven DbA system that integrates all propositions.
... Engineering design researchers utilize AI-based algorithmic methods, especially machine learning, for rapid design data learning and processing [17]-[19] and have achieved successful results in their research contributions. Such contributions include evaluating design concepts [20], decision making for design support systems [21], design for additive manufacturing [22], predicting strain fields in microstructure designs [23], predicting the performance of a design based on its shape and vice versa [24], material selection for sustainable product design [25], etc. Certain applications of AI that have proven efficient in analyzing computer-aided design (CAD) data include predicting the function of a CAD model from its form [26], suitable feature removal in CAD models for simulations [27], and CAD design shape matching [28]. Certain studies [29]-[31] potentially offer common ground between human designers and AI to provide opportunities for hybrid human-agent design. ...
Article
Recent advances in artificial intelligence (AI) have shed light on the potential uses and applications of AI tools in engineering design. However, the aspiration of a fully automated engineering design process still seems out of reach of AI's current capabilities, and therefore the need for human expertise and cognitive skills persists. Nonetheless, a collaborative design process that emphasizes and uses the strengths of both AI and human engineers is an appealing direction for AI in design. To uncover the current applications of AI, the authors review literature pertaining to AI applications in design research and engineering practice. This highlights the importance of integrating AI education into engineering design curricula in post-secondary institutions. Next, a pilot study assessing undergraduate mechanical engineering course descriptions at the University of Waterloo and the University of Toronto reveals that only one out of a total of 153 courses provides both AI and design-related knowledge together in a course. This result identifies possible gaps in Canadian engineering curricula and potential deficiencies in the skills of graduating Canadian engineers.
... Han et al. [9], [32] utilise ConceptNet relationships to obtain analogies and combinations for a search entity. To evaluate crowdsourced design ideas and extract entities from these, Camburn et al. [12] use the TextRazor platform, which is built using models trained on DBPedia, Freebase, etc. Chen and Krishnamurthy [10] facilitate human-AI collaboration in completing problem formulation mind maps with the help of ConceptNet and its underlying relationships. These common-sense knowledge bases utilised by the scholars, however, were not built for engineering purposes. ...
Preprint
Full-text available
We propose a large, scalable engineering knowledge graph, comprising sets of (entity, relationship, entity) triples that are real-world engineering facts found in the patent database. We apply a set of rules based on the syntactic and lexical properties of claims in a patent document to extract facts. We aggregate these facts within each patent document and integrate the aggregated sets of facts across the patent database to obtain the engineering knowledge graph. Such a knowledge graph is expected to support inference, reasoning, and recalling in various engineering tasks. The knowledge graph has a greater size and coverage in comparison with the previously used knowledge graphs and semantic networks in the engineering literature.
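The preprint above represents engineering facts as (entity, relationship, entity) triples. As a minimal illustration of how such triples can be stored and queried, the sketch below uses networkx; the triples themselves are invented examples, not entries from the actual patent-derived graph.

```python
# Minimal sketch: store (entity, relationship, entity) facts in a directed
# multigraph and recall the one-hop neighborhood of an entity.
import networkx as nx

triples = [
    ("impeller", "coupled_to", "drive shaft"),
    ("drive shaft", "rotated_by", "electric motor"),
    ("impeller", "accelerates", "fluid"),
]

kg = nx.MultiDiGraph()
for head, relation, tail in triples:
    kg.add_edge(head, tail, relation=relation)

# Recall facts about "impeller": outgoing edges with their relation labels.
for _, tail, attrs in kg.out_edges("impeller", data=True):
    print(f"impeller --{attrs['relation']}--> {tail}")
```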
... In the engineering design literature, the network map of technology domains [40,41] and semantic networks [54,55] have been created and used to guide the retrieval of analogical design stimuli based on the quantified knowledge distance between domains [15] or the semantic distance between concepts [70]. Meanwhile, the most frequently used knowledge bases are the large pre-trained commonsense knowledge graphs such as WordNet, ConceptNet and FreeBase [120,121]. ...
Preprint
Full-text available
Design-by-Analogy (DbA) is a design methodology wherein new solutions, opportunities or designs are generated in a target domain based on inspiration drawn from a source domain; it can benefit designers in mitigating design fixation and improving design ideation outcomes. Recently, the increasingly available design databases and rapidly advancing data science and artificial intelligence technologies have presented new opportunities for developing data-driven methods and tools for DbA support. In this study, we survey existing data-driven DbA studies and categorize individual studies according to the data, methods, and applications in four categories, namely, analogy encoding, retrieval, mapping, and evaluation. Based on both nuanced organic review and structured analysis, this paper elucidates the state of the art of data-driven DbA research to date and benchmarks it with the frontier of data science and AI research to identify promising research opportunities and directions for the field. Finally, we propose a future conceptual data-driven DbA system that integrates all propositions.
Conference Paper
Full-text available
In this paper, we describe a conceptual product design approach based on variational 3D models, parametric optimization, and rapid prototyping for facilitating the exploration of the solution space for a design problem. We demonstrate the proposed strategy through a case study on the design of an ophthalmic instrument. A template 3D model was manually created and fed to a parametric optimization tool to automatically generate design alternatives based on a set of criteria, which were then exported for 3D printing and testing in both dry and wet lab environments. Our results show that the method facilitates parallel prototyping and enables the exploration of a wider range of solutions more quickly and efficiently, particularly in highly constrained scenarios, but it requires designers to think of the initial models as families of solutions.
Article
Full-text available
The aim of customer-oriented design concept evaluation (CDCE) is to select the best product design solution from the perspective of the customer. Traditionally, most CDCE methods mainly focus on the customer preference judgement but ignore the confidence attitude of the customer, namely, the reliability of the preference. However, the customer’s uncertain attitude means he is unsure about his decision and may change his mind. With the help of Z-numbers, more complete customer preference information is recorded (Z-preference). The main contribution of this paper is to propose a new Z-preference-based multi-criteria decision-making (MCDM) method for CDCE that retains the confidence coefficient α in the evaluation value (Z-CDCE-α) to highlight the role of the confidence attitude in CDCE, rather than simply translating the Z-preference into a regular fuzzy preference value. By integrating multiple information such as the preference value, the confidence coefficient α and the importance rating of the design attribute, a novel ideal solution definition (ISD) strategy is put forward. For the redefined ideal solutions, the distances of alternatives to the ideal solutions are deduced to get the priority degree δ used to sort the alternatives. According to the proposed ISD strategy of Z-CDCE-α, the best concept is one whose important attribute values are preferred by customers with higher certainty or least preferred by customers with lower certainty, and whose less important attribute values attract the opposite pattern of customer preference and confidence. A case study and two comparison experiments are carried out to validate the reasonability and feasibility of Z-CDCE-α for CDCE by comparing with different evaluation values, ISD rules and MCDM models.
Article
Recent advancements in artificial intelligence (AI) offer the opportunity for human designers and AI to collaborate in new, hybrid modes throughout various stages of the product design process. Computational design tools for topology optimization and generative design facilitate the creation of higher performing and more complex products. This paper explores how the use of these computational tools may impact the design process, designer behavior, and overall outcomes. Six in-depth interviews were conducted with practicing and student designers from different disciplines who use commercial topology optimization and generative design tools, detailing the design processes they followed in this hybrid intelligent mode. From a grounded theory-based analysis of the interviews, a provisional process diagram for hybrid intelligence and its uses in the early-stage design process is proposed. The early stages of defining tool inputs bring about a constraint-driven process in which designers focus on the abstraction of the design problem. Designers will iterate through the inputs to improve both performance and non-performance metrics. The learning-through-iteration allows designers to gain a thorough understanding of the design problem and solution space. This can bring about creative applications of computational tools in early-stage design to provide guidance for traditionally designed products.
Chapter
Engineering design relies on the human ability to make complex decisions, but design activities are increasingly supported by computation. Although computation can help humans make decisions, over- or under-reliance on imperfect models can prevent successful outcomes. To investigate the effects of assistance from a computational agent on decision making, a behavioral experiment was conducted (N = 33). Participants chose between pairs of aircraft brackets while optimizing the design across competing objectives (mass and displacement). Participants received suggestions from a simulated model which suggested correct (i.e., better) and incorrect (i.e., worse) designs based on the global design space. In an uncertain case, both options were approximately equivalent but differed along the objectives. The results indicate that designers do not follow suggestions when the relative design performances are notably different, often underutilizing them to their detriment. However, they follow the suggestions more than expected when the better design choice is less clear.
Chapter
This paper investigates team psychological safety (N = 34 teams) in a synchronous online engineering design class spanning 4 weeks. While work in this field has suggested that psychological safety in virtual teams can facilitate knowledge-sharing, trust among teams, and overall performance, there have been limited investigations of the longitudinal trajectory of psychological safety, when the construct stabilizes in a virtual environment, and what factors impact the building of psychological safety in virtual teams.
Chapter
The successful adoption of artificial intelligence (AI)-enabled tools in engineering design requires an understanding of designers’ mental models of such tools. This work explores how professional and student engineering designers (1) develop mental models of a novel AI-driven engineering design tool and (2) speculate AI-enabled functionalities that can aid them. Student (N = 7) and professional (N = 8) designers completed a task using an AI-enabled tool, and were interviewed to uncover their mental model of the tool and speculations on future AI-enabled functionalities. Both professional and student designers developed accurate mental models of the AI tool, and speculated functionalities that were similarly “near” and “far” in terms of analogical distance from the AI tool’s functionality. These findings suggest that mental models and cross-application of AI tool functionality are readily accessible to designers, offering several implications for widespread adoption of AI-enabled design tools.
Article
The customer-oriented design concept evaluation (CDCE) enables companies to select the best design concept from the perspective of the customer to win the customer-centered market. However, previous CDCE studies only focus on the customer’s preference value (PV) but neglect the customer’s confidence attitude on this preference, i.e., the preference reliability (PR), and some design specifications, e.g., the design attribute’s importance (DAI). To address such drawbacks, we propose a new CDCE approach using an improved Z-number-based multi-criteria decision-making (IZ-MCDM) method to better express and utilize the customer’s uncertain opinion. In IZ-MCDM, the Z-number is used to express the customer’s opinion (Z-opinion), which includes the PV and its affiliated PR information. The Z-opinion is translated into an interval Z-number to form a new type of evaluation value and decision matrix. Based on this evaluation value, a new ideal solution selection (ISS) strategy integrating PV, PR and DAI information is employed in IZ-MCDM. By comparing with the redefined ideal solution, the alternative that attracts certain high preferences for its important attribute values and uncertain low preferences for its less important attribute values is more likely to be recommended as the best one. Hence, IZ-MCDM can identify a more reasonable design concept than classical PV-only CDCE methods. Two empirical experiments from existing CDCE examples have been carried out in this study, and the comparison results further validate the significance of IZ-MCDM, showing that 1) besides the PV factor, the PR and DAI factors can also significantly impact the evaluation result; 2) these two factors should act together to select the ideal solution; and 3) IZ-MCDM is flexible, as it supports different MCDM models with different deviation measurement metrics to evaluate the alternatives.
Conference Paper
Full-text available
Novel concepts are essential for design innovation and can be generated with the aid of data stimuli and computers. However, current generative design algorithms focus on diagrammatic or spatial concepts that are either too abstract to understand or too detailed for early phase design exploration. This paper explores the uses of generative pre-trained transformers (GPT) for natural language design concept generation. Our experiments involve the use of GPT-2 and GPT-3 for different creative reasonings in design tasks. Both show reasonably good performance for verbal design concept generation.
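The conference paper above uses GPT-2 and GPT-3 for natural-language design concept generation. A rough sketch of what an off-the-shelf GPT-2 generation call looks like with the Hugging Face transformers library is shown below; the prompt and decoding settings are illustrative assumptions, not the settings used in the paper.

```python
# Rough sketch: generate verbal design concepts with a pretrained GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "A novel device to collect ocean plastic could work by"
outputs = generator(prompt, max_length=60, num_return_sequences=3, do_sample=True)

for out in outputs:
    print(out["generated_text"], "\n---")
```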
Article
Increased adoption of wind-energy technology helps address climate change, but also requires disposition of retired wind-turbine blades that are not easily recycled. This pressing environmental problem is used as the prompt in a creativity study, where participants are asked to identify potential reuses in a Wind-turbine-blade Repurposing Task (WRT). In past iterations of this study, participants consistently struggled with correctly incorporating the large physical size of wind-turbine blades in their reuse concepts. The Alternate Uses Task (AUT) is an established measure of creativity and asks participants to identify uses for much smaller objects like bricks and paper clips. The current work explored whether an AUT can be adapted as an intervention to help overcome the scale challenge in the WRT. Students in a fourth-year undergraduate engineering design course (N=28) completed both of two conditions before the WRT: a scaled-AUT intervention and a control, typical AUT. AUT fluency and flexibility (number and categories of ideas) were significantly lower in the scaled AUT than the typical AUT. This result supports the idea that object scale, more than unfamiliarity, is the main WRT challenge, since the AUT objects were relatively common. Notably, correctly scaled WRT concepts significantly increased after the scaled AUT, supporting the intervention's effectiveness. Finally, the WRT is proposed as a standard design-study task whose solutions help address a real-world problem.
Article
Function drives many early design considerations in product development, highlighting the importance of finding functionally similar examples if searching for sources of inspiration or evaluating designs against existing technology. However, it is difficult to capture what people consider functionally similar and, therefore, whether measures that quantify and compare function using the products themselves are meaningful. In this work, human evaluations of similarity are compared to computationally determined values, shedding light on how quantitative measures align with human perceptions of functional similarity. Human perception of functional similarity is considered at two levels of abstraction: (1) the high-level purpose of a product and (2) how the product works. These human similarity evaluations are quantified by crowdsourcing 1360 triplet ratings at each functional abstraction and creating low-dimensional embeddings from the triplets. The triplets and embeddings are then compared to similarities that are computed between functional models using six representative measures, including both matching measures (e.g., cosine similarity) and network-based measures (e.g., spectral distance). The outcomes demonstrate how levels of abstraction and the fuzzy line between “highly similar” and “somewhat similar” products may impact human functional similarity representations and their subsequent alignment with computed similarity. The results inform how functional similarity can be leveraged by designers, with applications in creativity support tools, such as those used for design-by-analogy, or other computational methods in design that incorporate product function.
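As a toy illustration of one matching-type measure named in the abstract above, the sketch below computes cosine similarity between bag-of-functions vectors of two products. The function vocabulary and vectors are made up for illustration and are not taken from the study.

```python
# Toy sketch: cosine similarity between binary bag-of-functions vectors.
import numpy as np

functions = ["convert energy", "transmit torque", "store fluid", "regulate flow"]

# 1 if the product's functional model contains the function, else 0.
product_a = np.array([1, 1, 0, 1])
product_b = np.array([1, 0, 0, 1])

cosine = product_a @ product_b / (np.linalg.norm(product_a) * np.linalg.norm(product_b))
print(f"cosine similarity = {cosine:.2f}")
```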
Preprint
Full-text available
We review the scholarly contributions that utilise Natural Language Processing (NLP) methods to support the design process. Using a heuristic approach, we collected 223 articles published in 32 journals from 1991 to the present. We present state-of-the-art NLP in-and-for design research by reviewing these articles according to the type of natural language text source: internal reports, design concepts, discourse transcripts, technical publications, consumer opinions, and others. Upon summarizing and identifying the gaps in these contributions, we utilise an existing design innovation framework to identify the applications that are currently being supported by NLP. We then propose a few methodological and theoretical directions for future NLP in-and-for design research.
Article
Customer-involved design concept evaluation (CDCE) is a key issue in developing new products welcomed by customers, but few studies have considered the integrated utilization of objective design values (DVs) and customers’ subjective preference values (PVs). Our previous study attempted to fuse DVs and PVs in CDCE, but it was limited to a few situations with benefit-like and cost-like evaluation criteria. For better CDCE, this study further fuses DVs and PVs in more complex situations and puts forward an improved version of rough distance to redefined ideal solution (RD-RIS II) in the multi-criteria decision-making (MCDM) scope to select the optimal concept. Different from the old RD-RIS in the previous study, RD-RIS II not only supports the ideal solution definition (ISD) processes for both quantitative and qualitative evaluation criteria but, more importantly, utilizes more useful information (value, feature, number and impact) from DVs and PVs to redefine the new positive ideal solution (PIS) and negative ideal solution (NIS). Through the rough distance calculation, the alternative which is close to the PIS and far away from the NIS is selected as the best one. Besides, the feasibility of RD-RIS II is validated via application to a real design evaluation example, and three empirical comparisons confirm that RD-RIS II makes more comprehensive decisions than other MCDM-based evaluation methods, especially when the choices of customers and designers conflict; therefore, it can provide more reasonable evaluation results with better credibility and stability than others.
Conference Paper
Full-text available
Engineers often need to discover and learn designs from unfamiliar domains for inspiration or other particular uses. However, the complexity of technical design descriptions and unfamiliarity with the domain make it hard for engineers to comprehend the function, behavior, and structure of a design. To help engineers quickly understand a complex technical design description that is new to them, one approach is to represent it as a network graph of the design-related entities and their relations, serving as an abstract summary of the design. While graph and network visualizations are widely adopted in the engineering design literature, the challenge remains in retrieving the design entities and deriving their relations. In this paper, we propose a network mapping method that is powered by the Technology Semantic Network (TechNet). Through a case study, we showcase how TechNet’s unique characteristic of being trained on a large technology-related data source gives it an advantage over common-sense knowledge bases, such as WordNet and ConceptNet, for design knowledge representation.
Article
To assist designers in making comprehensive decisions for objective design values (DVs) and subjective preference values (PVs) during the design solution evaluation stage, this study builds an information-intensive design solution evaluator (IIDSE) that combines multi-information from DVs and PVs. In the IIDSE, the importance degrees of the DVs and PVs are analysed based on their differences. Then, according to the importance classifications, values, characteristics, and numbers of DVs and PVs, a multi-information fusion (MIF)-based ideal solution definition strategy, which covers quantitative criteria with i) benefit characteristics, ii) cost characteristics, and iii) qualitative criteria, is proposed. A rough multi-criteria decision-making (R-MCDM) model is used to evaluate an alternative by computing its deviation from the defined ideal solution. The effectiveness of the IIDSE was validated via empirical comparisons. Experiment I showed that the MIF-based strategy is compatible with different R-MCDM models for selecting the preferred and best-performing solution. In experiment II, among the R-MCDM models, R-COPRAS plus the MIF-based strategy is the best combination for constructing the IIDSE. Experiments III and IV demonstrated that the IIDSE can obtain more reasonable solutions compared with classical evaluators, especially in cases where conflicts between the objective DVs and subjective PVs exist.
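The evaluators described in the abstracts above all rank alternatives by their deviation from a defined ideal solution. The sketch below shows only the crisp, classical form of that idea (a TOPSIS-style closeness ranking); the rough-number arithmetic and the fusion of design and preference values used in these papers are deliberately omitted, and the scores and weights are invented.

```python
# Highly simplified illustration: rank alternatives by closeness to a positive
# ideal solution and distance from a negative ideal solution (TOPSIS-style).
import numpy as np

# Rows = design alternatives, columns = normalized benefit-type criterion scores.
scores = np.array([
    [0.8, 0.6, 0.7],
    [0.5, 0.9, 0.6],
    [0.7, 0.7, 0.9],
])
weights = np.array([0.5, 0.3, 0.2])

weighted = scores * weights
pis = weighted.max(axis=0)   # positive ideal solution
nis = weighted.min(axis=0)   # negative ideal solution

d_plus = np.linalg.norm(weighted - pis, axis=1)
d_minus = np.linalg.norm(weighted - nis, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("ranking (best first):", np.argsort(-closeness))
```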
Article
Full-text available
Textual idea data from online crowdsourcing contains rich information of the concepts that underlie the original ideas and can be recombined to generate new ideas. But representing such information in a way that can stimulate new ideas is not a trivial task, because crowdsourced data are often vast and in unstructured natural languages. This paper introduces a method that uses natural language processing to summarize a massive number of idea descriptions and represents the underlying concept space as word clouds with a core-periphery structure to inspire recombinations of such concepts into new ideas. We report the use of this method in a real public-sector-sponsored project to explore ideas for future transportation system design. Word clouds that represent the concept space underlying original crowdsourced ideas are used as ideation aids and stimulate many new ideas with varied novelty, usefulness and feasibility. The new ideas suggest that the proposed method helps expand the idea space. Our analysis of these ideas and a survey with the designers who generated them shed light on how people perceive and use the word clouds as ideation aids and suggest future research directions.
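The study above summarizes crowdsourced idea descriptions as word clouds. The sketch below shows only the basic frequency-weighted word-cloud step using the `wordcloud` package; the idea texts are placeholders, and the core-periphery structuring described in the paper is not reproduced here.

```python
# Sketch: build a frequency-weighted word cloud from idea descriptions.
from collections import Counter
from wordcloud import WordCloud

ideas = [
    "autonomous shuttle pods on demand",
    "shared electric cargo bikes for last mile delivery",
    "underground freight tunnels for city delivery",
]

tokens = " ".join(ideas).lower().split()
frequencies = Counter(tokens)

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(frequencies)
cloud.to_file("idea_concepts.png")
```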
Article
Full-text available
Economic use of early stage prototyping is of paramount importance to companies engaged in the development of innovative products, services and systems because it directly impacts their bottom line. There is likewise a need to understand the dimensions and lenses that make up an economic profile of prototypes. Yet, there is little reliable understanding of how resources expended and views of dimensionality across prototyping translate into value. To help practitioners, designers, and researchers leverage prototyping most economically, we seek to understand the tradeoff between design information gained through prototyping and the resources expended prototyping. We investigate this topic by conducting an inductive study on industry projects across disciplines and knowledge domains while collecting and analyzing empirical data on their prototype creation and test processes. Our research explores ways of quantifying prototyping value and reinforcing the asymptotic relationship between value and fidelity. Most intriguingly, the research reveals insightful heuristics that practitioners can exploit to generate high value from low and high fidelity prototypes alike.
Article
Full-text available
Traditionally, design opportunities and directions are conceived based on expertise, intuition, or time-consuming user studies and marketing research at the fuzzy front end of the design process. Herein, we propose the use of the total technology space map (TSM) as a visual ideation aid for rapidly conceiving high-level design opportunities. The map is comprised of various technology domains positioned according to knowledge proximity, which is measured based on a large quantity of patent data. It provides a systematic picture of the total technology space to enable stimulated ideation beyond the designer's knowledge. Designers can browse the map and navigate various technologies to conceive new design opportunities that relate different technologies across the space. We demonstrate the process of using TSM as a rapid ideation aid and then analyze its applications in two experiments to show its effectiveness and limitations. Furthermore, we have developed a cloud-based system for computer-aided ideation, that is, InnoGPS, to integrate interactive map browsing for conceiving high-level design opportunities with domain-specific patent retrieval for stimulating concrete technical concepts, and to potentially embed machine-learning and artificial intelligence in the map-aided ideation process.
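The map described above positions technology domains by knowledge proximity measured from a large quantity of patent data. The sketch below shows one generic proxy for such a proximity measure, cosine similarity between domains' co-classification count vectors; this is an assumed illustration, not necessarily the exact measure behind TSM or InnoGPS, and the counts are invented.

```python
# Hedged sketch: knowledge proximity between technology domains as cosine
# similarity of their patent co-classification vectors (toy numbers).
import numpy as np

cooccurrence = np.array([
    [120, 30, 5, 0],    # domain A: counts against N reference classes
    [100, 25, 10, 2],   # domain B
    [3, 2, 80, 95],     # domain C
], dtype=float)

def proximity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print("A-B:", round(proximity(cooccurrence[0], cooccurrence[1]), 3))
print("A-C:", round(proximity(cooccurrence[0], cooccurrence[2]), 3))
```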
Conference Paper
Full-text available
Recently, the term knowledge graph has been used frequently in research and business, usually in close association with Semantic Web technologies, linked data, large-scale data analytics and cloud computing. Its popularity is clearly influenced by the introduction of Google's Knowledge Graph in 2012, and since then the term has been widely used without a definition. A large variety of interpretations has hampered the evolution of a common understanding of knowledge graphs. Numerous research papers refer to Google's Knowledge Graph, although no official documentation about the used methods exists. The prerequisite for widespread academic and commercial adoption of a concept or technology is a common understanding, based ideally on a definition that is free from ambiguity. We tackle this issue by discussing and defining the term knowledge graph, considering its history and diversity in interpretations and use. Our goal is to propose a definition of knowledge graphs that serves as basis for discussions on this topic and contributes to a common vision.
Article
Full-text available
Invention arises from novel combinations of prior technologies. However, prior studies of creativity have suggested that overly novel combinations may be harmful to invention. Apart from the factors of expertise, market, etc., there may be such a thing as ‘too much’ or ‘too little’ novelty that will determine an invention’s future value, but little empirical evidence exists in the literature. Using technical patents as the proxy of inventions, our analysis of 3.9 million patents identifies a clear ‘sweet spot’ in which the mix of novel combinations of prior technologies favors an invention’s eventual success. Specifically, we found that the invention categories with the highest mean values and hit rates have moderate novelty in the center of their combination space and high novelty in the extreme of their combination space. Too much or too little central novelty suppresses the positive contribution of extreme novelty in the invention. Furthermore, the combination of scientific and broader knowledge beyond patentable technologies creates additional value for invention and enlarges the advantage of the novelty sweet spot. These findings may further enable data-driven methods both for assessing invention novelty and for profiling inventors, and may inspire a new strand of data-driven design research and practice.
Conference Paper
Full-text available
Design is a ubiquitous human activity. Design is valued by individuals, teams, organizations, and cultures. There are patterns and recurrent phenomena across the diverse set of approaches to design, as well as variances. Designers can benefit from leveraging conceptual tools like process models, methods, and design principles to amplify design phenomena. There are many variant process models, methods, and principles for design. Likewise, usage of these conceptual tools differs across industrial contexts. We present an integrated process model, with exemplar methods and design principles, synthesized from a review of several case studies in client-based industrial design projects for product, service, and system development, professional education courses, and a literature review. Concepts from several branches of design practice are integrated: (1) design thinking, (2) business design, (3) systems engineering, and (4) design engineering. A design process model, method set, and set of abstracted design principles are proposed.
Article
Full-text available
Data-driven engineering designers often search for design precedents in patent databases to learn about relevant prior arts, seek design inspiration, or assess the novelty of their own new inventions. However, patent retrieval relevant to the design of a specific product or technology is often unstructured and unguided, and the resultant patents do not sufficiently or accurately capture the prior design knowledge base. This paper proposes an iterative and heuristic methodology to comprehensively search for patents as precedents of the design of a specific technology or product for data-driven design. The patent retrieval methodology integrates the mining of patent texts, citation relationships, and inventor information to identify relevant patents; particularly, the search keyword set, citation network, and inventor set are expanded through the designer's heuristic learning from the patents identified in prior iterations. The method relaxes the requirement for initial search keywords while improving patent retrieval completeness and accuracy. We apply the method to identify self-propelled spherical rolling robot (SPSRR) patents. Furthermore, we present two approaches to further integrate, systemize, visualize, and make sense of the design information in the retrieved patent data for exploring new design opportunities. Our research contributes to patent data-driven design.
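The retrieval methodology above expands the search iteratively based on what was found in prior rounds. The sketch below schematizes only that expansion loop; `search_patents` and `extract_frequent_terms` are hypothetical placeholders for a patent-database query and a term-extraction routine, and in the actual methodology the expansion is vetted by the designer rather than fully automatic.

```python
# Schematic sketch: iterative keyword expansion until retrieval converges.
def iterative_patent_retrieval(seed_keywords, search_patents, extract_frequent_terms,
                               max_iterations=5):
    keywords = set(seed_keywords)
    retrieved = set()
    for _ in range(max_iterations):
        hits = set(search_patents(keywords)) - retrieved
        if not hits:
            break                                   # converged: no new patents found
        retrieved |= hits
        keywords |= set(extract_frequent_terms(hits))  # designer-vetted in practice
    return retrieved
```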
Article
Full-text available
This paper examines Parametric Design (PD) in contemporary architectural practice. It considers three case studies: The Future of Us pavilion, the Louvre Abu Dhabi and the Morpheus Hotel. The case studies illustrate how, compared to non-parametrically and older parametrically designed projects, PD is employed to generate, document and fabricate designs with a greater level of detail and differentiation, often at the level of individual building components. We argue that such differentiation cannot be achieved with conventional Building Information Modelling and without customizing existing software. We compare the case studies' PD approaches (object-oriented programming, functional programming, visual programming and distributed visual programming) and decomposition, algorithms and data structures as crucial factors for the practical viability of complex parametric models and as key aspects of PD thinking.
Article
Full-text available
Design is a ubiquitous human activity. Design is valued by individuals, teams, organizations, and cultures. There are patterns and recurrent phenomena across the diverse set of approaches to design, as well as variances. Designers can benefit from leveraging conceptual tools like process models, methods, and design principles to amplify design phenomena. There are many variant process models, methods, and principles for design. Likewise, usage of these conceptual tools differs across industrial contexts. We present an integrated process model, with exemplar methods and design principles, synthesized from a review of several case studies in client-based industrial design projects for product, service, and system development, professional education courses, and a literature review. Concepts from several branches of design practice are integrated: (1) design thinking, (2) business design, (3) systems engineering, and (4) design engineering. A design process model, method set, and set of abstracted design principles are proposed. OPENING: There are patterns and consistent styles in the approach that humans take in design. Patterns of design activity often emerge quite differently according to context and the styles of the designer [4]. In this paper we explore the integration of several extant conceptual tools that support design innovation [5]. Specifically, aspects of design thinking, business design, systems engineering, and design engineering are explored. The distinction and interrelationships between design process models, methods, and principles are also discussed. The paper is organized according to the following three objectives: (1) explore professional education workshops; (2) explore industrial case studies; (3) develop an integrated design innovation process model, method set, and principle set. Numerous design studies have explored process models, design methods, and design principles. Yet there is an ongoing need to distinguish these concepts and to build an integrated toolset. These conceptual tools are distinct, yet support each other. Process models guide overall activity flows and trends, methods support shorter-term activities and help in planning work tasks, while principles guide designers mentally. Figure 1 provides an abstract illustration of the interwoven relationship between these conceptual tools and designers.
Article
Full-text available
Everybody experiences every day the need to manage a huge amount of heterogeneous shared resources, causing information overload and fragmentation problems. Collaborative annotation tools are the most common way to address these issues, but collaboratively tagging resources is usually perceived as a boring and time-consuming activity and a possible source of conflicts. To face this challenge, collaborative systems should effectively support users in the resource annotation activity and in the definition of a shared view. The main contribution of this paper is the presentation and the evaluation of a set of mechanisms (personal annotations over shared resources and tag suggestions) that provide users with the mentioned support. The goal of the evaluation was to (1) assess the improvement with respect to the situation without support; (2) evaluate the satisfaction of the users, with respect to both the final choice of annotations and possible conflicts; (3) evaluate the usefulness of the support mechanisms in terms of actual usage and user perception. The experiment consisted in a simulated collaborative work scenario, where small groups of users annotated a few resources and then answered a questionnaire. The evaluation results demonstrate that the proposed support mechanisms can reduce both overload and possible disagreement.
Conference Paper
Full-text available
Empirical work in design science has highlighted that the process of ideation can significantly affect design outcome. Exploring the design space with both breadth and depth increases the likelihood of achieving better design outcomes. Furthermore, iteratively attempting to solve challenging design problems in large groups over a short time period may be more effective than protracted exploration by an isolated set of individuals. There remains a substantial opportunity to explore the structure of various design concept sets. In addition, many empirical studies cap analysis at sample sizes of less than one hundred individuals. This has provided substantial, though partial, models of the ideation space. This work explores one new territory in large-scale ideation. Two conditions are evaluated. In the first condition, an ideation session was run with 2400 practicing designers and engineers from one organization. In the second condition, 1000 individuals ideated on the same problem in a completely distributed environment and without awareness of each other. We compare properties of solution sets produced by each of these groups and activities. Analytical tools from network modeling theory are applied as well as traditional ideation metrics such as concept binning with saturation analysis. Structural network modeling is applied to evaluate the interconnectivity of design concepts. This is a strictly quantitative, and at the same time graphically expressive, means to evaluate the diversity of a design solution set. Observations indicate that the group condition approached saturation of distinct categories more rapidly than the individual, distributed condition. The total number of solution categories developed in the group condition was also higher. Additionally, individuals generally provided concepts across a greater number of solution categories in the group condition. The indication for design practice is that groups of just under forty individuals would provide category saturation within group ideation for a system-level design, while distributed individuals may provide additional concept differentiation. This evidence can support development of more systematic ideation strategies. Furthermore, we provide an algorithmic approach for quantitative evaluation of variety in design solution sets using network analysis techniques. These methods can be used in complex or wicked problems, and system development where the design space is vast.
Article
Full-text available
Climate change, resource depletion, and worldwide urbanization feed the demand for more energy and resource-efficient buildings. Increasingly, architectural designers and consultants analyze building designs with easy-to-use simulation tools. To identify design alternatives with good performance, designers often turn to optimization methods. Randomized, metaheuristic methods such as genetic algorithms are popular in the architectural design field. However, are metaheuristics the best approach for architectural design problems that often are complex and ill defined? Metaheuristics may find solutions for well-defined problems, but they do not contribute to a better understanding of a complex design problem. This paper proposes surrogate-based optimization as a method that promotes understanding of the design problem. The surrogate method interpolates a mathematical model from data that relate design parameters to performance criteria. Designers can interact with this model to explore the approximate impact of changing design variables. We apply the radial basis function method, a specific type of surrogate model, to two architectural daylight optimization problems. These case studies, along with results from computational experiments, serve to discuss several advantages of surrogate models. First, surrogate models not only propose good solutions but also allow designers to address issues outside of the formulation of the optimization problem. Instead of accepting a solution presented by the optimization process, designers can improve their understanding of the design problem by interacting with the model. Second, a related advantage is that designers can quickly construct surrogate models from existing simulation results and other knowledge they might possess about the design problem. Designers can thus explore the impact of different evaluation criteria by constructing several models from the same set of data. They also can create models from approximate data and later refine them with more precise simulations. Third, surrogate-based methods typically find global optima orders of magnitude faster than genetic algorithms, especially when the evaluation of design variants requires time-intensive simulations.
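The article above advocates surrogate models built with radial basis functions. The sketch below illustrates that workflow in miniature with SciPy's RBFInterpolator (available in SciPy 1.7 and later); the expensive daylighting simulation is replaced by a cheap analytic stand-in, so the numbers are purely illustrative.

```python
# Small sketch: fit an RBF surrogate to sampled "simulation" results, then
# query and optimize over the surrogate instead of the expensive model.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_simulation(x):           # stand-in for a daylight simulation
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, size=(30, 2))            # sampled design variables
values = np.array([expensive_simulation(s) for s in samples])

surrogate = RBFInterpolator(samples, values)          # fit the surrogate model

# Designers can query the surrogate cheaply, or optimize over it directly.
result = minimize(lambda x: surrogate(x.reshape(1, -1))[0],
                  x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)])
print(result.x, result.fun)
```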
Article
Full-text available
This work lends insight into the meaning and impact of "near" and "far" analogies. A cognitive engineering design study is presented that examines the effect of the distance of analogical design stimuli on design solution generation, and places those findings in context of results from the literature. The work ultimately sheds new light on the impact of analogies in the design process and the significance of their distance from a design problem. In this work, the design repository from which analogical stimuli are chosen is the U.S. patent database, a natural choice, as it is one of the largest and easily accessed catalogued databases of inventions. The "near" and "far" analogical stimuli for this study were chosen based on a structure of patents, created using a combination of latent semantic analysis and a Bayesian based algorithm for discovering structural form, resulting in clusters of patents connected by their relative similarity. The findings of this engineering design study are juxtaposed with the findings of a previous study by the authors in design by analogy, which appear to be contradictory when viewed independently. However, by mapping the analogical stimuli used in the earlier work into similar structures along with the patents used in the current study, a relationship between all of the stimuli and their relative distance from the design problem is discovered. The results confirm that "near" and "far" are relative terms, and depend on the characteristics of the potential stimuli. Further, although the literature has shown that "far" analogical stimuli are more likely to lead to the generation of innovative solutions with novel characteristics, there is such a thing as too far. That is, if the stimuli are too distant, they then can become harmful to the design process. Importantly, as well, the data mapping approach to identify analogies works, and is able to impact the effectiveness of the design process. This work has implications not only in the area of finding inspirational designs to use for design by analogy processes in practice, but also for synthesis, or perhaps even unification, of future studies in the field of design by analogy. [DOI: 10.1115/1.4023158]
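The study above organizes patents by similarity using latent semantic analysis. As an illustrative stand-in for that step (not the authors' pipeline, which also uses a Bayesian structure-discovery algorithm), the sketch below applies TF-IDF followed by truncated SVD and then ranks placeholder patent texts by cosine similarity to a design problem description.

```python
# Illustrative sketch: LSA-style similarity ranking of patents to a problem.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

problem = "device to remove stubborn lids from glass jars"
patents = [
    "gripping wrench with adjustable rubber strap for cylindrical objects",
    "hydraulic press for forming sheet metal panels",
    "handheld opener using a cam lever to break vacuum seals",
]

tfidf = TfidfVectorizer().fit_transform([problem] + patents)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

similarities = cosine_similarity(lsa[:1], lsa[1:]).ravel()
for text, sim in sorted(zip(patents, similarities), key=lambda t: -t[1]):
    print(f"{sim:+.2f}  {text}")   # higher = "nearer" analogical stimulus
```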
Article
Full-text available
The goal of many blue-sky idea generation techniques is to generate a large quantity of ideas with the hope of obtaining a few outstanding, creative ideas that are worth pursuing. As such, a rapid means of screening the resulting sketches to select a manageable set of promising ideas is needed. This study explores a metric for evaluating large quantities of early-stage product sketches and tests the metric through an online service called Mechanical Turk. Reviewers’ subjective ratings of idea creativity had a strong correlation with ratings of idea novelty (r = 0.80), but negligible correlation with idea usefulness (r = 0.16). The clarity of the sketch positively influenced ratings of idea creativity. Additionally, the quantity of ideas generated by an individual participant had a strong correlation with that participant's overall creativity scores (r = 0.82). The authors suggest a metric of three attributes to be used as a first pass in narrowing a large pool of product ideas to the most innovative: novel, useful (or valuable), and feasible (as determined by experts).
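For readers unfamiliar with the correlation coefficients reported above, the sketch below shows how such a rating correlation is computed with scipy; the rater scores are fabricated examples, not data from the study.

```python
# Quick sketch: Pearson correlation between two sets of rater scores.
from scipy.stats import pearsonr

creativity = [4.2, 3.1, 5.0, 2.4, 3.8, 4.6]
novelty = [4.0, 2.9, 4.8, 2.6, 3.5, 4.9]

r, p_value = pearsonr(creativity, novelty)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```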
Article
Full-text available
Design by analogy is a powerful part of the design process across the wide variety of modalities used by designers such as linguistic descriptions, sketches, and diagrams. We need tools to support people's ability to find and use analogies. A deeper understanding of the cognitive mechanisms underlying design and analogy is a crucial step in developing these tools. This paper presents an experiment that explores the effects of representation within the modality of sketching, the effects of functional models, and the retrieval and use of analogies. We find that the level of abstraction for the representation of prior knowledge and the representation of a current design problem both affect people's ability to retrieve and use analogous solutions. A general semantic description in memory facilitates retrieval of that prior knowledge. The ability to find and use an analogy is also facilitated by having an appropriate functional model of the problem. These studies result in a number of important implications for the development of tools to support design by analogy. Foremost among these implications is the ability to provide multiple representations of design problems by which designers may reason across, where the verb construct in the English language is a preferred mode for these representations.
Article
Full-text available
This paper provides an introduction to a new design methodology known as A-Design, which combines aspects of multi-objective optimization, multi-agent systems, and automated design synthesis. The A-Design theory is founded on the notion that engineering design occurs in interaction with an ever-changing environment, and therefore computer tools developed to aid in the design process should be adaptive to these changes. In this paper, A-Design is introduced along with some simple test problems to demonstrate the capabilities of different aspects of the theory. The theory of A-Design is then shown as the basis for a design tool that adaptively creates electro-mechanical configuration designs for changing user preferences.
Article
Full-text available
In 1956, Miller [1] conjectured that there is an upper limit on our capacity to process information on simultaneously interacting elements with reliable accuracy and with validity. This limit is seven plus or minus two elements. He noted that the number 7 occurs in many aspects of life, from the seven wonders of the world to the seven seas and seven deadly sins. We demonstrate in this paper that in making preference judgments on pairs of elements in a group, as we do in the analytic hierarchy process (AHP), the number of elements in the group should be no more than seven. The reason is founded in the consistency of information derived from relations among the elements. When the number of elements increases past seven, the resulting increase in inconsistency is too small for the mind to single out the element that causes the greatest inconsistency to scrutinize and correct its relation to the other elements, and the result is confusion to the mind from the existing information. The AHP as a theory of measurement has a basic way to obtain a measure of inconsistency for any such set of pairwise judgments. When the number of elements is seven or less, the inconsistency measurement is relatively large with respect to the number of elements involved; when the number is greater, it is relatively small. The most inconsistent judgment is easily determined in the first case and the individual providing the judgments can change it in an effort to improve the overall inconsistency. In the second case, as the inconsistency measurement is relatively small, improving inconsistency requires only small perturbations and the judge would be hard put to determine what that change should be, and how such a small change could be justified for improving the validity of the outcome. The mind is sufficiently sensitive to improve large inconsistencies but not small ones. And the implication of this is that the number of elements in a set should be limited to seven plus or minus two.
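For reference, the inconsistency measure discussed in the abstract above is commonly expressed in the AHP literature as follows, where lambda_max is the principal eigenvalue of the n-by-n pairwise-comparison matrix and RI(n) is Saaty's tabulated random index; a consistency ratio of at most about 0.1 is usually deemed acceptable.

```latex
% Consistency index (CI) and consistency ratio (CR) as commonly defined
% in the AHP literature.
CI = \frac{\lambda_{\max} - n}{n - 1}, \qquad
CR = \frac{CI}{RI(n)}, \qquad CR \le 0.1 \;\text{(acceptable consistency)}
```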
Article
Full-text available
Metrics are used by firms for a variety of commendable purposes. The authors maintain that every metric, however used, will affect actions and decisions. But, of course, choosing the right one is critical to success. The authors focus on the selection of good metrics and, based on their own experience and the academic literature, summarize seven pitfalls in the use of metrics which can cause them to be counter-productive and fail. The article then goes on to outline a seven-step system to design effective, 'lean' metrics, which depends on a close understanding of customers, employees, work processes, and the underlying properties of metrics themselves.
Conference Paper
Full-text available
User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often require practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users for low time and monetary costs. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low costs, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach.
Conference Paper
Full-text available
We present data from detailed observation of 24 information workers that shows that they experience work fragmentation as common practice. We consider that work fragmentation has two components: length of time spent in an activity, and frequency of interruptions. We examined work fragmentation along three dimensions: effect of collocation, type of interruption, and resumption of work. We found work to be highly fragmented: people average little time in working spheres before switching and 57% of their working spheres are interrupted. Collocated people work longer before switching but have more interruptions. Most internal interruptions are due to personal work whereas most external interruptions are due to central work. Though most interrupted work is resumed on the same day, more than two intervening activities occur before it is. We discuss implications for technology design: how our results can be used to support people to maintain continuity within a larger framework of their working spheres.
Conference Paper
Full-text available
For easing the exchange of news, the International Press Telecommunication Council (IPTC) has developed the NewsML Architecture (NAR), an XML-based model that is specialized into a number of languages such as NewsML G2 and EventsML G2. As part of this architecture, specific controlled vocabularies, such as the IPTC News Codes, are used to categorize news items together with other industry-standard thesauri. While news is still mainly in the form of text-based stories, these are often illustrated with graphics, images and videos. Media-specific metadata formats, such as EXIF, DIG35 and XMP, are used to describe the media. The use of different metadata formats in a single production process leads to interoperability problems within the news production chain itself. It also excludes linking to existing web knowledge resources and impedes the construction of uniform end-user interfaces for searching and browsing news content. In order to allow these different metadata standards to interoperate within a single information environment, we design an OWL ontology for the IPTC News Architecture, linked with other multimedia metadata standards. We convert the IPTC NewsCodes into a SKOS thesaurus and we demonstrate how the news metadata can then be enriched using natural language processing and multimedia analysis and integrated with existing knowledge already formalized on the Semantic Web. We discuss the method we used for developing the ontology and give rationale for our design decisions. We provide guidelines for re-engineering schemas into ontologies and formalize their implicit semantics. In order to demonstrate the appropriateness of our ontology infrastructure, we present an exploratory environment for searching and browsing news items.
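To illustrate the kind of re-engineering described, the following minimal sketch turns a single controlled-vocabulary term into a SKOS concept with rdflib; the namespace and identifiers are placeholders rather than the paper's actual IPTC NewsCodes mapping.

```python
# Minimal sketch of re-engineering a controlled-vocabulary term into a SKOS
# concept with rdflib. The namespace URI and the concept identifiers are
# placeholders, not the mapping used in the cited paper.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/newscodes/")  # placeholder namespace

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

concept = EX["mediatopic-01000000"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("arts, culture and entertainment", lang="en")))
g.add((concept, SKOS.broader, EX["mediatopic-root"]))

print(g.serialize(format="turtle"))
```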
Article
Full-text available
Although Mechanical Turk has recently become popular among social scientists as a source of experimental data, doubts may linger about the quality of data provided by subjects recruited from online labor markets. We address these potential concerns by presenting new demographic data about the Mechanical Turk subject population, reviewing the strengths of Mechanical Turk relative to other online and offline methods of recruiting subjects, and comparing the magnitude of effects obtained using Mechanical Turk and traditional subject pools. We further discuss some additional benefits, such as the possibility of longitudinal, cross-cultural, and prescreening designs, and offer some advice on how best to manage a common subject pool.
Conference Paper
Early stages of the engineering design process are vital to shaping the final design; each subsequent step builds from the initial concept. Innovation-driven engineering problems require designers to focus heavily on early-stage design generation, with constant application and evaluation of design changes. Strategies to reduce the amount of time and effort designers spend in this phase could improve the efficiency of the design process as a whole. This paper seeks to create and demonstrate a two-tiered design grammar that encodes heuristic strategies to aid in the generation of early solution concepts. Specifically, this two-tiered grammar mimics the combination of heuristic-based strategic actions and parametric modifications employed by human designers. Rules in the higher-tier are abstract and potentially applicable to multiple design problems across a number of fields. These abstract rules are translated into a series of lower-tier rule applications in a spatial design grammar, which are inherently domain-specific. This grammar is implemented within the HSAT agent-based algorithm. Agents iteratively select actions from either the higher-tier or lower-tier. This algorithm is applied to the design of wave energy converters, devices which use the motion of ocean waves to generate electrical power. Comparisons are made between designs generated using only lower-tier rules and those generated using only higher-tier rules.
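As a rough illustration of the two-tier idea, the sketch below shows a higher-tier heuristic expanding into a sequence of lower-tier parametric modifications, with an agent choosing between tiers at each step; the rule names and the simple design dictionary are invented for illustration and are not the HSAT algorithm or the wave-energy grammar from the cited paper.

```python
# Highly simplified sketch of a two-tier rule structure: an abstract higher-tier
# rule expands into a sequence of lower-tier, domain-specific modifications.
import random

def lengthen_body(design):   # lower-tier, parametric change
    design["length"] *= 1.1
    return design

def add_float(design):       # lower-tier, topological change
    design["floats"] += 1
    return design

LOWER_TIER = [lengthen_body, add_float]

# A higher-tier heuristic is a named sequence of lower-tier applications.
HIGHER_TIER = {
    "increase_capture_area": [add_float, lengthen_body],
}

def agent_step(design):
    """One agent iteration: apply an abstract heuristic or a single low-level rule."""
    if random.random() < 0.5:
        for rule in random.choice(list(HIGHER_TIER.values())):
            design = rule(design)
    else:
        design = random.choice(LOWER_TIER)(design)
    return design

design = {"length": 1.0, "floats": 1}
for _ in range(5):
    design = agent_step(design)
print(design)
```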
Conference Paper
Concept clustering is an important element of the product development process. The process of reviewing multiple concepts provides a means of communicating concepts developed by individual team members and by the team as a whole. Clustering, however, can also require arduous iterations and the resulting clusters may not always be useful to the team. In this paper, we present a machine learning approach on natural language descriptions of concepts that enables an automatic means of clustering. Using data from over 1,000 concepts generated by student teams in a graduate new product development class, we provide a comparison between the concept clustering performed manually by the student teams and the work automated by a machine learning algorithm. The goal of our machine learning tool is to support design teams in identifying possible areas of “over-clustering” and/or “under-clustering” in order to enhance divergent concept generation processes.
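A minimal sketch of automatic clustering of natural-language concept descriptions is shown below, assuming a TF-IDF representation and k-means; the cited work does not necessarily use these exact components, and the example concepts are invented.

```python
# Sketch of clustering short concept descriptions: TF-IDF vectors plus k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

concepts = [
    "a foldable bicycle helmet that fits in a backpack",
    "a helmet with built-in turn signals for cyclists",
    "a collapsible water bottle for travel",
    "a self-cleaning reusable water bottle",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(concepts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for concept, label in zip(concepts, labels):
    print(label, concept)
```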
Conference Paper
Collaborative knowledge bases that make their data freely available in a machine-readable form are central for the data strategy of many projects and organizations. The two major collaborative knowledge bases are Wikimedia's Wikidata and Google's Freebase. Due to the success of Wikidata, Google decided in 2014 to offer the content of Freebase to the Wikidata community. In this paper, we report on the ongoing transfer efforts and data mapping challenges, and provide an analysis of the effort so far. We describe the Primary Sources Tool, which aims to facilitate this and future data migrations. Throughout the migration, we have gained deep insights into both Wikidata and Freebase, and share and discuss detailed statistics on both knowledge bases.
Conference Paper
The task of keyword extraction aims at capturing expressions (or entities) that best represent the main topics of a document. Given the rapid adoption of online semantic annotators and their contribution to the growth of the Semantic Web, one important task is to assess their quality. This article presents an evaluation of the quality and stability of semantic annotators on domain-specific and open-domain corpora. We evaluate five semantic annotators and compare them to two state-of-the-art keyword extractors, namely KP-miner and Maui. Our evaluation demonstrates that semantic annotators are not able to outperform keyword extractors and that annotators perform best on domains having a high keyword density.
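Evaluations of this kind typically score predicted keyphrases against a gold-standard set. The sketch below shows a simple exact-match precision/recall/F1 computation and is only an assumed stand-in for the paper's actual evaluation protocol.

```python
# Exact-match precision, recall and F1 of predicted keyphrases against a gold set.
def prf(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

print(prf(["semantic web", "keyword extraction", "dbpedia"],
          ["keyword extraction", "semantic annotators", "dbpedia"]))
```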
Conference Paper
In this paper, we propose and investigate a novel distance-based approach for measuring the semantic dissimilarity between two concepts in a knowledge graph. The proposed Normalized Semantic Web Distance (NSWD) extends the idea of the Normalized Web Distance, which is used to determine the dissimilarity between two textual terms, and utilizes additional semantic properties of nodes in a knowledge graph. We evaluate our proposal on two different knowledge graphs: Freebase and DBpedia. While the NSWD achieves a correlation of up to 0.58 with human similarity assessments on the established Miller-Charles benchmark of 30 term pairs on the Freebase knowledge graph, it reaches an even higher correlation of 0.69 on the DBpedia knowledge graph. We thus conclude that the proposed NSWD is an efficient and effective distance-based approach for assessing semantic dissimilarity in very large knowledge graphs.
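For reference, the classic Normalized Web Distance that NSWD builds on can be computed from occurrence and co-occurrence counts as in the sketch below; the counts are hypothetical, and the graph-specific refinements of NSWD are not reproduced here.

```python
# Normalized Web Distance from (hypothetical) occurrence counts.
from math import log

def normalized_web_distance(f_x, f_y, f_xy, n):
    """f_x, f_y: items mentioning each term; f_xy: items mentioning both; n: corpus size."""
    num = max(log(f_x), log(f_y)) - log(f_xy)
    den = log(n) - min(log(f_x), log(f_y))
    return num / den

# The closer the value is to 0, the more similar the two terms.
print(normalized_web_distance(f_x=120_000, f_y=90_000, f_xy=45_000, n=50_000_000))
```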
Article
Representations in engineering design can be hand sketches, photographs, CAD, functional models, physical models, or text. Using representations allows engineers to gain a clearer picture of how a design works. We present an experiment that compares the influence of representations on fixation and creativity. The experiment presents designers with an example solution represented as either a function tree or a sketch, and we compare how these different external representations influence design fixation as designers complete a design task. Results show that function trees do not cause fixation compared to a control group, and that function trees reduce fixation when compared to sketches. Results from this experiment show that function tree representations offer advantages for reducing fixation during idea generation.
Article
Analogical reasoning appears to play a key role in creative design. In briefly reviewing recent research on analogy-based creative design, this article first examines characterizations of creative design and then analyzes theories of analogical design in terms of four questions: why, what, how and when? After briefly describing recent AI theories of analogy-based creative design, the article focuses on three theories instantiated in operational computer programs: Syn, DSSUA (Design Support System Using Analogy) and Ideal. From this emerges a related set of research issues in analogy-based creative design. The main goal is to sketch the core issues, themes and directions in building such theories.
Conference Paper
Measuring design creativity is crucial to evaluating the effectiveness of idea generation methods. Historically, there has been a divide between easily-computable metrics, which are often based on arbitrary scoring systems, and human judgement metrics, which accurately reflect human opinion but rely on the expensive collection of expert ratings. This research bridges this gap by introducing a probabilistic model that computes a family of repeatable creativity metrics trained on expert data. Focusing on metrics for variety, a combination of submodular functions and logistic regression generalizes existing metrics, accurately recovering several published metrics as special cases and illuminating a space of new metrics for design creativity. When tasked with predicting which of two sets of concepts has greater variety, our model matches two commonly used metrics to 96% accuracy on average. In addition, using submodular functions allows this model to efficiently select the highest variety set of concepts when used in a design synthesis system.
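One concrete member of the family of submodular variety functions is the facility-location objective sketched below, together with greedy subset selection; this particular choice of function and the toy similarity matrix are assumptions for illustration, not the learned metric from the cited model.

```python
# Facility-location "variety" score: how well a selected subset covers the full
# concept set under a pairwise similarity matrix, with greedy maximization.
import numpy as np

def facility_location_score(similarity: np.ndarray, selected: list[int]) -> float:
    """Sum over all concepts of their best similarity to any selected concept."""
    if not selected:
        return 0.0
    return float(similarity[:, selected].max(axis=1).sum())

def greedy_select(similarity: np.ndarray, k: int) -> list[int]:
    """Greedy maximization; near-optimal for monotone submodular objectives."""
    selected: list[int] = []
    for _ in range(k):
        candidates = [j for j in range(similarity.shape[0]) if j not in selected]
        gains = [facility_location_score(similarity, selected + [j]) for j in candidates]
        selected.append(candidates[int(np.argmax(gains))])
    return selected

# Toy 4-concept similarity matrix (symmetric, 1.0 on the diagonal).
S = np.array([[1.0, 0.9, 0.2, 0.1],
              [0.9, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.4],
              [0.1, 0.2, 0.4, 1.0]])
print(greedy_select(S, k=2))  # picks concepts that cover dissimilar regions of the set
```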
Article
Applying natural language processing for mining and intelligent information access to tweets (a form of microblog) is a challenging, emerging research area. Unlike carefully authored news text and other longer content, tweets pose a number of new challenges, due to their short, noisy, context-dependent, and dynamic nature. Information extraction from tweets is typically performed in a pipeline, comprising consecutive stages of language identification, tokenisation, part-of-speech tagging, named entity recognition and entity disambiguation (e.g. with respect to DBpedia). In this work, we describe a new Twitter entity disambiguation dataset, and conduct an empirical analysis of named entity recognition and disambiguation, investigating how robust a number of state-of-the-art systems are on such noisy texts, what the main sources of error are, and which problems should be further investigated to improve the state of the art.
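The pipeline stages mentioned (tokenisation, part-of-speech tagging, named entity recognition) can be sketched with an off-the-shelf toolkit such as spaCy, which is not one of the systems evaluated in the cited study; entity disambiguation against DBpedia would require an additional linking step not shown here.

```python
# Sketch of tokenisation, POS tagging and NER on a short tweet-like text using spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
doc = nlp("Arsenal beat Chelsea 2-0 at the Emirates, says @bbcsport")

print([(token.text, token.pos_) for token in doc])   # tokens with part-of-speech tags
print([(ent.text, ent.label_) for ent in doc.ents])  # named entities
```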
Article
Advances in innovation processes are critically important as economic and business landscapes evolve. There are many concept generation techniques that can assist a designer in the initial phases of design. Unfortunately, few studies have examined these techniques in a way that can provide evidence to suggest which techniques should be preferred or how to implement them in an optimal way. This study systematically investigates the underlying factors of four common and well-documented techniques: brainsketching, gallery, 6-3-5, and C-sketch. These techniques are resolved into their key parameters, and a rigorous factorial experiment is performed to understand how the key parameters affect the outcomes of the techniques. The factors chosen for this study with undergraduate mechanical engineers include how concepts are displayed to participants (all are viewed at once or subsets are exchanged between participants, i.e., "rotational viewing") and the mode used to communicate ideas (written words only, sketches only, or a combination of written words and sketches). Four metrics are used to evaluate the data: quantity, quality, novelty, and variety. The data suggest that rotational viewing of sets of concepts described using sketches combined with words produces more ideas than having all concepts displayed in a "gallery view" form, but a gallery view results in more high-quality concepts. These results suggest that a hybrid of methods should be used to maximize the quality and number of ideas. The study also shows that individuals gain a significant number of ideas from their teammates. Ideas, when shared, can foster new idea tracks, more complete layouts, and a diverse synthesis. Finally, as teams develop more concepts, the quality of the concepts improves. This result is a consequence of the team-sharing environment and, in conjunction with the quantity of ideas, validates the effectiveness of group idea generation. This finding suggests a way to go beyond the observation that some forms of brainstorming can actually hurt productivity.