Article
Abstract

In order to develop novel solutions for complex systems and in increasingly competitive markets, it may be advantageous to generate large numbers of design concepts and then identify the most novel and valuable ideas. However, it can be difficult to process, review and assess thousands of design concepts. Based on this need, we develop and demonstrate an automated method for design concept assessment. In the method, machine learning technologies are first applied to extract ontological data from design concepts. A filtering strategy and quantitative metrics are then introduced that enable creativity rating based on the ontological data. This method is tested empirically. Design concepts are crowd-generated for a variety of actual industry design problems and opportunities. Over 4,000 design concepts were generated by humans for assessment. Empirical evaluation assesses: (1) correspondence of the automated ratings with human creativity ratings; (2) whether concepts selected using the method are highly scored by another set of crowd raters; and finally (3) whether high-scoring designs have a positive correlation or relationship to industrial technology development. The method provides a possible avenue to rate design concepts deterministically, which could harmonize design studies across different organizations. Another highlight is that a subset of designs selected automatically out of a large set of candidates was scored higher than a subset selected by humans when evaluated by a set of third-party raters.


... The task of DCE is to obtain the optimal design concept among candidate design solutions after the concept generation step, and experienced designers are usually able to evaluate only a few design concepts by manual comparison. However, with the diversified development of market demand and the rapid advancement of technology, more and more new design concepts with various functional properties are generated to satisfy differentiated requirements, which leads DCE to become a complex multi-criteria decision making (MCDM) problem with a number of inherent difficulties [3]. Thus, designers have begun to use MCDM techniques to facilitate solving complex DCE situations, and effective MCDM-based DCE is a popular research direction in conceptual design study. ...
... As mentioned before, to eliminate the vagueness and subjectivity in design evaluation, this paper adopts rough numbers to represent the quantitative crisp attribute values and subjective preferences in matrices V and X, and uses semantic terms (red, green, aluminium, etc.) to describe the values of qualitative attributes. Unlike the preference values (1, 3, 5, 7), each quantitative attribute has its own dimension, e.g., 0.15 MPa, 3.8 s, 25 m³/h, so the initial crisp attribute values in matrix V need to be normalized to facilitate comparisons in the following way. ...
... Take attribute values V = {red, red, green, green, blue, yellow} and corresponding rough preference values X = {[1,3], [1,3], [3,5], [3,5], [3,7], [3,5]} for example. The selections of z⁺ and z⁻ are shown as follows: ...
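The formulas themselves are elided by the snippet above, so the following is only a minimal Python sketch of one plausible reading: min-max normalization for dimensioned quantitative attributes, and a simple interval-midpoint heuristic for selecting z⁺ and z⁻ from the rough preference values. The cited paper's four I-ISD definition rules are more detailed.

```python
import numpy as np

def normalize_crisp(values, benefit=True):
    """Min-max normalize one quantitative attribute column so that values
    with different physical units (MPa, s, m^3/h, ...) become comparable.
    Benefit-like: larger is better; cost-like: smaller is better."""
    v = np.asarray(values, dtype=float)
    lo, hi = v.min(), v.max()
    if hi == lo:
        return np.ones_like(v)
    return (v - lo) / (hi - lo) if benefit else (hi - v) / (hi - lo)

# Qualitative attribute: semantic terms with rough-interval preferences.
terms = ["red", "red", "green", "green", "blue", "yellow"]
prefs = [(1, 3), (1, 3), (3, 5), (3, 5), (3, 7), (3, 5)]

# Heuristic: z+ is the term whose rough interval has the largest midpoint,
# z- the smallest (here 'blue' with [3, 7] and 'red' with [1, 3]).
mid = [(lo + hi) / 2 for lo, hi in prefs]
z_pos = terms[int(np.argmax(mid))]
z_neg = terms[int(np.argmin(mid))]
print(z_pos, z_neg)
```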
Article
Design Concept Evaluation (DCE) is a crucial step in new product development. In a complex DCE task, the designer, as a decision-maker, has to make a comprehensive choice by considering the design concept's inherent objective design factors as well as external subjective preference factors from evaluators (designers, experts or customers). However, most DCE methods are limited to only one of these two factors, so they evaluate the alternatives unilaterally and miss the optimal one. To find a more reasonable design concept, this study attempts to better reconcile objective design and subjective preference factors in the evaluation process, and proposes a new DCE method using a new integrated ideal solution definition (I-ISD) approach in a modified VIKOR model based on rough numbers, named R-VIKOR(I). Specifically, this study puts forward four definition rules to select the positive and negative ideal solution elements respectively for benefit-like quantitative attributes, cost-like quantitative attributes, important qualitative attributes and less important qualitative attributes by utilizing the information originating from design and preference data, and calculates the deviation between each alternative and the redefined ideal solution through rough VIKOR to obtain the best one. Three comparative experiments were carried out to validate the performance of R-VIKOR(I) by analyzing its robustness (experiment I), comparing it with other classical DCE methods based on rough TOPSIS, rough WASPAS and rough COPRAS (experiment II), and exploring the applicability of the proposed I-ISD approach (experiment III). Experimental results verify that R-VIKOR(I) can better balance the objective design attribute values and the subjective evaluator preference values to provide more reasonable evaluation results; in particular, the method has an obvious advantage when evaluators have different preferences for design attribute values, a common case in modern personalized product development.
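For readers unfamiliar with the rough numbers that R-VIKOR(I) builds on, here is a small sketch of the interval construction commonly used in the rough-MCDM literature: each crisp rating is replaced by a rough interval whose lower limit is the mean of all ratings not above it and whose upper limit is the mean of all ratings not below it. The cited paper may differ in detail.

```python
import numpy as np

def rough_interval(x, ratings):
    """Rough boundary interval of a crisp rating x within a rating set:
    lower limit = mean of all ratings <= x; upper = mean of all >= x."""
    r = np.asarray(ratings, dtype=float)
    return r[r <= x].mean(), r[r >= x].mean()

ratings = [3, 3, 5, 5, 7]        # e.g., five evaluators' preference scores
print([rough_interval(x, ratings) for x in ratings])
# the spread of each interval reflects the evaluators' disagreement
```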
... Han et al. [14,33] utilize ConceptNet relationships to obtain analogies and combinations for a search entity. To evaluate crowdsourced design ideas and extract entities from them, Camburn et al. [16] use the TextRazor platform, which is built using models trained on DBpedia, Freebase, etc. Chen and Krishnamurthy [15] facilitate human-AI collaboration in completing problem formulation mind maps with the help of ConceptNet and its underlying relationships. These common-sense knowledge bases utilized by scholars, however, were not built for engineering purposes. ...
... We report a comparison (Sec. 4.1) of a small portion (30 facts) of our knowledge graph against a similar portion of triples obtained from TechNet and ConceptNet, both of which are publicly accessible via their APIs. We also report the size and coverage of our knowledge graph against some well-known benchmarks (Sec. ...
Article
We propose a large, scalable engineering knowledge base as an integrated knowledge graph, comprising sets of (entity, relationship, entity) triples that are real-world engineering 'facts' found in the patent database. We apply a set of rules based on the syntactic and lexical properties of claims in a patent document to extract entities and their associated relationships that are supposedly meaningful from an engineering design perspective. Such a knowledge base is expected to support inference, reasoning and recall in various engineering design tasks. The knowledge base has greater size and coverage in comparison with the knowledge bases previously used in the engineering design literature.
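As a side note, a minimal sketch of how such (entity, relationship, entity) triples can be stored and recalled with networkx; the triples here are invented for illustration and are not drawn from the authors' patent-derived graph.

```python
import networkx as nx

# Illustrative engineering 'facts' as head-relation-tail triples.
triples = [
    ("rotor", "connected_to", "shaft"),
    ("shaft", "transmits", "torque"),
    ("bearing", "supports", "shaft"),
]

kg = nx.MultiDiGraph()
for head, rel, tail in triples:
    kg.add_edge(head, tail, relation=rel)

# Recall every fact that mentions 'shaft', in either direction.
for u, v, d in list(kg.in_edges("shaft", data=True)) + \
               list(kg.out_edges("shaft", data=True)):
    print(u, d["relation"], v)
```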
... By taking a data-driven approach, innovators can automatically evaluate and validate a very large quantity of diverse design concepts to accelerate the design evaluation, validation, and selection process. For instance, innovators can automatically evaluate and filter many new design concepts with a pretrained commonsense knowledge base [29]. One can also train deep neural networks on data from prior experiments or successful/failed designs in the same context to automatically and predictively evaluate the performance and value of the next design concepts (e.g., Atomwise [11], [16]). ...
... The expanded design space may contain more novel and more useful candidate designs to choose and implement and, thus, benefit creativity. Meanwhile, data-trained models can automate the evaluation of many opportunities [23] and many design concepts [29] from the enlarged opportunity and design spaces, accelerate the convergent search and ensure the identification of the best innovation opportunity to design for and the best design for implementation. ...
Article
The future of innovation processes is anticipated to be more data-driven and empowered by ubiquitous digitalization, increasing data accessibility, and rapid advances in machine learning, artificial intelligence, and computing technologies. While the data-driven innovation (DDI) paradigm is emerging, it has not yet been formally defined or theorized and is often confused with several other data-related phenomena. This article defines and crystalizes "DDI" as a formal innovation process paradigm, dissects its value creation, and distinguishes it from data-driven optimization, data-based innovation, and traditional innovation processes that rely purely on human intelligence. With real-world examples and theoretical framing, I elucidate what DDI entails and how it addresses uncertainty and enhances creativity in the innovation process, and I present a process-based taxonomy of different DDI approaches. On this basis, I recommend strategies and actions for innovators, companies, R&D organizations, and governments to enact DDI.
... By taking a data-driven approach, innovators can automatically evaluate and validate a very large quantity of diverse design concepts to accelerate the design evaluation, validation, and selection process. For instance, innovators can automatically evaluate and filter many new design concepts with a pre-trained common-sense knowledge base [29]. One can also train deep neural networks on data from prior experiments or successful/failed designs in the same context to automatically and predictively evaluate the performance and value of the next design concepts (e.g., Atomwise [11]; [16]). ...
... The expanded design space may contain more novel and more useful candidate designs to choose and implement and thus benefit creativity. Meanwhile, data-trained models can automate the evaluation of many opportunities [23] and many design concepts [29] from the enlarged opportunity and design spaces, accelerate the convergent search and ensure the identification of the best innovation opportunity to design for and the best design for implementation. Therefore, both the divergence-oriented actions (opportunity discovery and design generation) and the convergence-oriented actions (opportunity evaluation and design evaluation) can be augmented by different suitable data-driven approaches (See the taxonomy in Table 1) to achieve greater creativity of the innovation process. ...
Preprint
Full-text available
The future of innovation processes is anticipated to be more data-driven and empowered by ubiquitous digitalization, increasing data accessibility, and rapid advances in machine learning, artificial intelligence, and computing technologies. While the data-driven innovation (DDI) paradigm is emerging, it has not yet been formally defined or theorized and is often confused with several other data-related phenomena. This paper defines and crystalizes "data-driven innovation" as a formal innovation process paradigm, dissects its value creation, and distinguishes it from data-driven optimization (DDO), data-based innovation (DBI), and traditional innovation processes that rely purely on human intelligence. With real-world examples and theoretical framing, I elucidate what DDI entails and how it addresses uncertainty and enhances creativity in the innovation process, and I present a process-based taxonomy of different data-driven innovation approaches. On this basis, I recommend strategies and actions for innovators, companies, R&D organizations, and governments to enact data-driven innovation.
... Artificial intelligence (AI) assistance methods have proven to be efficient in this area, supporting engineering teams in completing such challenging tasks rapidly and effectively. Engineers have used AI-assistance tools to design products and explore the solution space more rapidly [19] and at different stages of the design process, including concept generation [20], concept evaluation [21], prototyping [22], manufacturing [23], and concurrent-engineering design [24]. However, human-AI collaboration can also restrict team performance. ...
Article
Full-text available
Managing the design process of teams has been shown to considerably improve problem-solving behaviors and resulting final outcomes. Automating this activity presents significant opportunities for delivering interventions that dynamically adapt to the state of a team in order to reap the most impact. In this work, an Artificial Intelligence (AI) agent is created to manage the design process of engineering teams in real time, tracking features of the teams' actions and communications during a complex design and path-planning task with multidisciplinary team members. Teams are also placed under the guidance of human process managers for comparison. Regarding outcomes, teams perform equally well under both types of management, with trends toward even superior performance from the AI-managed teams. The managers' intervention strategies and team perceptions of those strategies are also explored, illuminating some intriguing similarities. Both the AI and human process managers focus largely on communication-based interventions, though differences start to emerge in the distribution of interventions across team roles. Furthermore, team members perceive the interventions from both the AI and human managers as equally relevant and helpful, and believe the AI agent to be just as sensitive to the needs of the team. Thus, the overall results show that the AI manager agent introduced in this work is able to match the capabilities of humans, showing potential in automating the management of a complex design process.
... The indication is based on prior human experiments (Luo et al., 2018) as well as big data experiments (Alstott et al., 2017). With regard to using idea distances for concept evaluation, other semantic measurement tools, such as SEMILAR and TechNet, as well as several existing computational idea evaluation methods, such as InnoGPS and the machine learning-based concept evaluation method proposed by Camburn et al. (2019; 2020), which employ semantic distances, could be used. However, further research is needed to explore the 'definition' of far-related and closely-related ideas in computational measurements. ...
Article
Full-text available
Conceptual design, as an early phase of the design process, is known to have the highest impact on determining the innovation level of design results. Although many tools exist to support designers in conceptual design, additional knowledge, especially knowledge related to emerging technologies, is still often needed. In this paper, the authors propose a data-driven creative concept generation and evaluation approach to support designers in incorporating emerging technologies in the early stages of new product development. The approach is demonstrated by means of an illustrative example.
... Han et al. (2020a) also proposed evaluating new ideas by measuring the semantic similarity between design concepts using ConceptNet. Camburn et al. (2020) proposed a set of new metrics for automatic evaluation of the natural language descriptions of a large number of crowdsourced design ideas; their evaluation was based on Freebase (Bollacker et al., 2008), another large public structured knowledge database managed by Google. ...
Article
There are growing efforts to mine public and common-sense semantic network databases for engineering design ideation stimuli. However, there is still a lack of design ideation aids based on semantic network databases that are specialized in engineering or technology-based knowledge. In this study, we present a new methodology for using the Technology Semantic Network (TechNet) to stimulate idea generation in engineering design. The core of the methodology is to guide the inference of new technical concepts in the white space surrounding a focal design domain according to their semantic distance in the large TechNet, for potential synthesis into new design ideas. We demonstrate the effectiveness in general, as well as the use strategies and ideation outcome implications of the methodology, via a case study of flying car design idea generation.
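To make the semantic-distance guidance concrete, here is a small sketch with hypothetical term vectors standing in for TechNet embeddings: candidate concepts are ranked by cosine distance from a focal design domain, with nearer terms suggesting incremental syntheses and farther terms more novel ones.

```python
import numpy as np

def cosine_distance(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings; real TechNet vectors are high-dimensional.
emb = {
    "flying car":  [0.9, 0.1, 0.3],
    "quadcopter":  [0.8, 0.2, 0.4],
    "gas turbine": [0.1, 0.9, 0.5],
}
focal = "flying car"
ranked = sorted((cosine_distance(emb[focal], v), k)
                for k, v in emb.items() if k != focal)
print(ranked)   # (distance, concept) pairs, nearest first
```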
... In the engineering design literature, network maps of technology domains [40,41] and semantic networks [54,55] have been created and used to guide the retrieval of analogical design stimuli based on the quantified knowledge distance between domains [15] or the semantic distance between concepts [70]. Meanwhile, the most frequently used knowledge bases are large pre-trained commonsense knowledge graphs such as WordNet, ConceptNet and Freebase [120,121]. ...
Preprint
Full-text available
Design-by-Analogy (DbA) is a design methodology wherein new solutions, opportunities or designs are generated in a target domain based on inspiration drawn from a source domain; it can benefit designers in mitigating design fixation and improving design ideation outcomes. Recently, the increasingly available design databases and rapidly advancing data science and artificial intelligence technologies have presented new opportunities for developing data-driven methods and tools for DbA support. In this study, we survey existing data-driven DbA studies and categorize individual studies according to the data, methods, and applications in four categories, namely, analogy encoding, retrieval, mapping, and evaluation. Based on both nuanced organic review and structured analysis, this paper elucidates the state of the art of data-driven DbA research to date and benchmarks it with the frontier of data science and AI research to identify promising research opportunities and directions for the field. Finally, we propose a future conceptual data-driven DbA system that integrates all propositions.
... Han et al. [9], [32] utilise ConceptNet relationships to obtain analogies and combinations for a search entity. To evaluate crowdsourced design ideas and extract entities from them, Camburn et al. [12] use the TextRazor platform, which is built using models trained on DBpedia, Freebase, etc. Chen and Krishnamurthy [10] facilitate human-AI collaboration in completing problem formulation mind maps with the help of ConceptNet and its underlying relationships. These common-sense knowledge bases utilised by scholars, however, were not built for engineering purposes. ...
Preprint
Full-text available
We propose a large, scalable engineering knowledge graph, comprising sets of (entity, relationship, entity) triples that are real-world engineering facts found in the patent database. We apply a set of rules based on the syntactic and lexical properties of claims in a patent document to extract facts. We aggregate these facts within each patent document and integrate the aggregated sets of facts across the patent database to obtain the engineering knowledge graph. Such a knowledge graph is expected to support inference, reasoning and recall in various engineering tasks. The knowledge graph has greater size and coverage in comparison with the knowledge graphs and semantic networks previously used in the engineering literature.
... Engineering design researchers utilize AI-based methods, especially machine learning, for rapid design data learning and processing [17]-[19] and have achieved successful results in their research contributions. Such contributions include evaluating design concepts [20], decision making for design support systems [21], design for additive manufacturing [22], predicting strain fields in microstructure designs [23], predicting the performance of a design based on its shape and vice versa [24], and material selection for sustainable product design [25]. Certain applications of AI that have proven efficient in analyzing computer-aided design (CAD) data include predicting the function of a CAD model from its form [26], suitable feature removal in CAD models for simulations [27], and CAD design shape matching [28]. Certain studies [29]-[31] potentially offer common ground between human designers and AI, providing opportunities for hybrid human-agent design. ...
Article
Recent advances in artificial intelligence (AI) have shed light on the potential uses and applications of AI tools in engineering design. However, the aspiration of a fully automated engineering design process still seems out of reach of AI's current capabilities, and therefore the need for human expertise and cognitive skills persists. Nonetheless, a collaborative design process that emphasizes and uses the strengths of both AI and human engineers is an appealing direction for AI in design. To uncover the current applications of AI, the authors review literature pertaining to AI applications in design research and engineering practice. This highlights the importance of integrating AI education into engineering design curricula in post-secondary institutions. Next, a pilot study assessment of undergraduate mechanical engineering course descriptions at the University of Waterloo and the University of Toronto reveals that only one out of a total of 153 courses provides both AI and design-related knowledge together in a course. This result identifies possible gaps in Canadian engineering curricula and potential deficiencies in the skills of graduating Canadian engineers.
... In the engineering design literature, network maps of technology domains [40,41] and semantic networks [54,55] have been created and used to guide the retrieval of analogical design stimuli based on the quantified knowledge distance between domains [15] or the semantic distance between concepts [70]. Meanwhile, the most frequently used knowledge bases are large pretrained commonsense knowledge graphs such as WordNet, ConceptNet and Freebase [120,121]. ...
Article
Full-text available
Design-by-Analogy (DbA) is a design methodology wherein new solutions, opportunities or designs are generated in a target domain based on inspiration drawn from a source domain; it can benefit designers in mitigating design fixation and improving design ideation outcomes. Recently, the increasingly available design databases and rapidly advancing data science and artificial intelligence technologies have presented new opportunities for developing data-driven methods and tools for DbA support. In this study, we survey existing data-driven DbA studies and categorize individual studies according to the data, methods, and applications in four categories, namely, analogy encoding, retrieval, mapping, and evaluation. Based on both nuanced organic review and structured analysis, this paper elucidates the state of the art of data-driven DbA research to date and benchmarks it with the frontier of data science and AI research to identify promising research opportunities and directions for the field. Finally, we propose a future conceptual data-driven DbA system that integrates all propositions.
... AI assistance for design allows human designers to work faster with increased effectiveness and efficiency, thereby improving a company's competitiveness in today's fast-evolving market. For example, designers have used AI tools to design products and explore the solution space more rapidly [3], and different AI approaches have been used to support different stages of the engineering design process, including concept generation [4], concept evaluation [5], prototyping [6], and manufacturing [7]. Moreover, research involving 1,500 companies where humans and AI worked together found a significant improvement in their overall performance [8,9]. ...
Article
Full-text available
As Artificial Intelligence (AI) assistance tools become more ubiquitous in engineering design, it becomes increasingly necessary to understand the influence of AI assistance on the design process and design effectiveness. Previous work has shown the advantages of incorporating AI design agents to assist human designers. However, the influence of AI assistance on the behavior of designers during the design process is still unknown. This study examines the differences in participants' design process and effectiveness with and without AI assistance during a complex drone design task using the HyForm design research platform. Data collected from this study is analyzed to assess the design process and effectiveness using quantitative methods, such as Hidden Markov Models and network analysis. The results indicate that AI assistance is most beneficial when addressing moderately complex objectives but exhibits a reduced advantage in addressing highly complex objectives. During the design process, the individual designers working with AI assistance employ a relatively explorative search strategy, while the individual designers working without AI assistance devote more effort to parameter design.
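As a hedged illustration of the Hidden Markov Model analysis mentioned above, the sketch below fits an HMM to toy sequences of coded designer actions with hmmlearn and decodes each sequence into latent 'design states'; the study's actual features and state definitions are richer.

```python
import numpy as np
from hmmlearn import hmm   # hmmlearn >= 0.3 provides CategoricalHMM

# Toy action codes: 0 = generate, 1 = evaluate, 2 = tune parameters.
seqs = [[0, 0, 1, 2, 2, 1], [0, 1, 1, 2, 2, 2]]
X = np.concatenate(seqs).reshape(-1, 1)
lengths = [len(s) for s in seqs]

model = hmm.CategoricalHMM(n_components=2, n_iter=100, random_state=0)
model.fit(X, lengths)

for s in seqs:                      # latent-state label per action
    print(model.predict(np.array(s).reshape(-1, 1)))
```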
... Furthermore, the knowledge contained in academic papers and patents is usually not up-to-the-minute, as it is time-consuming to publish papers and file patents. In recent years, there has been emerging interest in applying crowdsourcing approaches to create databases for supporting engineering design activities. For example, Goucher-Lambert and Cagan [49] and He et al. [34] used crowdsourced idea descriptions as sources of design stimulation for supporting idea generation; Forbes et al. [95] introduced a crowdsourcing approach to construct a knowledge base for product innovation; and Camburn et al. [96] employed crowdsourcing to gather actual industry design concepts. Crowdsourcing produces massive, diverse and up-to-the-minute knowledge in a cost-effective manner, which makes it a promising choice for constructing semantic networks for engineering design. ...
Article
Full-text available
In the past two decades, there has been increasing use of semantic networks in engineering design for supporting various activities, such as knowledge extraction, prior art search, idea generation and evaluation. Leveraging large-scale pre-trained graph knowledge databases to support engineering design-related natural language processing (NLP) tasks has attracted a growing interest in the engineering design research community. Therefore, this paper aims to provide a survey of the state-of-the-art semantic networks for engineering design and propositions of future research to build and utilize large-scale semantic networks as knowledge bases to support engineering design research and practice. The survey shows that WordNet, ConceptNet and other semantic networks, which contain common-sense knowledge or are trained on non-engineering data sources, are primarily used by engineering design researchers to develop methods and tools. Meanwhile, there are emerging efforts in constructing engineering and technical-contextualized semantic network databases, such as B-Link and TechNet, through retrieving data from technical data sources and employing unsupervised machine learning approaches. On this basis, we recommend six strategic future research directions to advance the development and uses of large-scale semantic networks for artificial intelligence applications in engineering design.
... With the advance of artificial intelligence (AI) systems, AI has increasingly been proving its usefulness in engineering design, including areas such as customer preference identification (Chen et al., 2013), concept evaluation (Camburn et al., 2020), and manufacturing (Williams et al., 2019). As of now, however, human designers remain in the loop as their creativity and agility are yet to be reproduced by an AI and are still crucial in the design process (Song et al., 2020). ...
Article
Full-text available
For successful human-artificial intelligence (AI) collaboration in design, human designers must properly use AI input. Some factors affecting that use are designers’ self-confidence and competence and those variables' impact on reliance on AI. This work studies how designers’ self-confidence before and during teamwork and overall competence are associated with their performance as teammates, measured by AI reliance and overall team score. Results show that designers’ self-confidence and competence have very different impacts on their collaborative performance depending on the accuracy of AI.
... Artificial intelligence (AI) assistance methods have proven to be efficient in this area, supporting engineering teams in completing such challenging tasks rapidly and effectively. Engineers have used AI assistance tools to design products and explore the solution space more rapidly [186] and at different stages of the design process, including concept generation [187], concept evaluation [188], prototyping [189], manufacturing [190], and concurrent-engineering design [191]. However, human-AI collaboration can also restrict team performance. ...
Thesis
Teams are a major facet of engineering and are commonly thought to be necessary when solving dynamic and complex problems, such as engineering design tasks. Even though teams collectively bring a diversity of knowledge and perspectives to problem solving, previous work has demonstrated that in certain scenarios, such as language-based and configuration design problems, the production of a team is inferior to that of an equal number of individuals solving independently (i.e., nominal teams). Aid in the form of design stimuli catalyzes group creativity and helps designers overcome impasses. However, methods for applying stimuli in the engineering design literature are largely static; they do not adapt to the dynamics of either the designer or the design process, both of which evolve throughout problem solving. Thus, the overarching goal of this dissertation is to explore, better understand, and facilitate problem solving computationally via adaptive process management. This dissertation first compares individual versus group problem solving within the domain of engineering design. Through a behavioral study, our results corroborate previous findings, exhibiting that individuals outperform teams in the overall quality of their design solutions, even in the more free-flowing and explorative setting of conceptual design. Exploiting this result, we consider whether a human process manager can lessen this underperformance of design teams compared to nominal teams and help teams overcome potential deterrents that may be contributing to their inferior performance. The managerial interactions with the design teams are investigated, and post-study interviews with the human process managers are conducted in an attempt to uncover some of the cognitive rationale and strategies that may be beneficial throughout problem solving. Motivated by these post-study interviews, a topic-modeling approach then analyzes team cognition and the impact of the process manager interventions. The results from this approach show that the impacts of these interventions can be computationally detected through team discourse. Overall, these studies provide a conceptual basis for the detection and facilitation of design interventions based on real-time discourse data. Next, two novel frameworks are studied, both of which take steps toward tracking features of design teams and utilizing that information to intervene. The first study analyzes the impact of modulating the distance of design stimuli from a designer's current state, in this case their current design solution, within a broader design space. Utilizing semantic comparisons between their current solution and a broad database of related example solutions, designers receive computationally selected inspirational stimuli midway through a problem-solving session. Through a regression analysis, the results exhibit increased performance when capturing the design state and providing increased stimulus quality. The second framework creates an artificial intelligent process-manager agent to manage the design process of engineering teams in real time, tracking features of teams' actions and communications during a complex design and path-planning task with multidisciplinary team members. Teams are also placed under the guidance of human process managers for comparison. Across several dimensions, the overall results show that the AI manager agent introduced matches the capabilities of the human managers, showing potential in automating the management of a complex design process. Before-and-after analyses of the interventions indicate mixed adherence to the different types of interventions, as reflected in the intended process changes in the teams, and regression analyses show the impact of different interventions. Overall, this dissertation lays the groundwork for the computational development and deployment of adaptive process management, with the hope of making engineering design as efficient as possible.
... AI assistance for design allows human designers to work faster with increased effectiveness and efficiency, thereby improving the company's competitiveness in today's fast-evolving market. For example, designers have used AI tools to design products and explore the solution space more rapidly [3], and different AI approaches have been used to support different stages of the engineering design process, including concept generation [4], concept evaluation [5], prototyping [6], and manufacturing [7]. Moreover, research involving 1,500 companies where humans and AI worked together found a significant improvement in their overall performance [8,9]. ...
Conference Paper
Full-text available
As Artificial Intelligence (AI) assistance tools become more ubiquitous in engineering design, it becomes increasingly necessary to understand the influence of AI assistance on the design process and design effectiveness. Previous work has shown the advantages of incorporating AI design agents to assist human designers. However, the influence of AI assistance on the behavior of designers during the design process is still unknown. This study examines the differences in participants’ design process and effectiveness with and without AI assistance during a complex drone design task using the HyForm design research platform. Data collected from this study is analyzed to assess the design process and effectiveness using quantitative methods, such as Hidden Markov Models and network analysis. The results indicate that AI assistance is most beneficial when addressing moderately complex objectives but exhibits a reduced advantage in addressing highly complex objectives. During the design process, the individual designers working with AI assistance employ a relatively explorative search strategy, while the individual designers working without AI assistance devote more effort to parameter design.
... Research has shown that AI-assistive technologies can significantly improve problem-solving and learning outcomes. These benefits have been instrumental across a variety of domains and applications, such as instructional agents in educational tutoring (Roll et al., 2014; Hu and Taylor, 2016), design problem-solving and exploring complex design spaces (Camburn et al., 2020; Koch, 2017; Schimpf et al., 2019), cognitive assistants (Graesser et al., 2001; Costa et al., 2018), and the facilitation of collaboration (Dellermann et al., 2019; Gunning et al., 2019). Ginni Rometty, then Chief Executive Officer of IBM, argued at the 2017 World Economic Forum (Lewkowicz, 2020) that instead of fully replacing humans, AI should augment them, thus setting the stage for human-AI hybrid teaming (Sadiku and Musa, 2021). ...
Article
Full-text available
This work studies the perception of the impacts of AI and human process managers during a complex design task. Although performance and perceptions by teams that are AI- versus human-managed are similar, we show that how team members discern the identity of their process manager (human/AI) impacts their perceptions. They perceive the interventions as significantly more helpful, and the manager as more sensitive to the needs of the team, if they believe themselves to be managed by a human. Further results provide deeper insights into automating real-time process management and the efficacy of AI to fill that role.
... For example, doctors are advised by an AI when interpreting medical images [3,4]; computer users employ AI prediction for the next word or phrase they want to type [5,6]. Various AIs are also applied in multiple phases of the engineering design process to solve specific design tasks alone [7][8][9]. Research results demonstrate that a well-trained AI can perform a specified design task as well as, or sometimes even better than, human designers [10,11]. However, when an AI advises human designers to solve a design problem, the results from a recent cognitive study show that the AI only improves the initial performance of low-performing teams but always hurts the performance of high-performing teams [12]. ...
Preprint
Full-text available
Advances in artificial intelligence (AI) offer new opportunities for human-AI collaboration in engineering design. Human trust in AI is a crucial factor in ensuring effective human-AI collaboration, and several approaches to enhancing human trust in AI have been suggested in prior studies. However, it remains an open question in engineering design whether a strategy of deception about the identity of an AI teammate can effectively calibrate human trust in AI and improve human-AI joint performance. This research assesses the impact of the deception strategy on human designers through a human-subjects study in which half of the participants are told that they work with an AI teammate (i.e., without deception), and the other half are told that they work with another human participant but in fact work with an AI teammate (i.e., with deception). The results demonstrate that, for this study, the deception strategy improves high-proficiency human designers' perceived competency of their teammate. However, it does not raise the average number of team collaborations and does not improve the average performance of high-proficiency human designers. For low-proficiency human designers, the deception strategy does not change their perceived competency and helpfulness of their teammate, and it further reduces the average number of team collaborations while hurting their average performance at the beginning of the study. The potential reasons behind these results are discussed, with an argument against using the strategy of deception in engineering design.
... Although there are optimization algorithms that are commonly used to aid the design process (namely size, shape and topology optimization), the process of designing a structure or part is still mostly manual and iterative. There is, however, some research in the field, such as the use of generative adversarial networks (in the works of Oh et al. [4] and Shu et al. [5]), machine learning (Sharpe et al. [6] and Camburn et al. [7]), and generative design (Oh et al. [8]), among others. Therefore, the field would highly benefit from a method that could streamline the structural design process from its inception. ...
Conference Paper
Using beams as a modeling and design tool in structural design has long been displaced by more recent numerical methods, such as finite element analysis and structural optimization, while those concepts became more restricted to the design of trusses and shafts. But is there still room for them to be applied in the contemporary design of continuum structures? This research investigates some possible applications of beam theory and beam sizing concepts when used along with contemporary technologies such as topology optimization, additive manufacturing and numerical methods, and how they could impact the structural design process.
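To illustrate the beam-sizing concepts the paper revisits, here is a short worked example of the classic section-modulus check, sizing a rectangular cross-section so the bending stress under a known moment stays below an allowable stress.

```python
# S_req = M / sigma_allow; for a rectangle, S = b * h^2 / 6.
def required_section_modulus(moment_nm, allow_stress_pa):
    return moment_nm / allow_stress_pa

def rect_section_modulus(b_m, h_m):
    return b_m * h_m ** 2 / 6.0

M = 2.0e3          # bending moment, N*m
sigma = 120e6      # allowable stress, Pa (steel with a safety factor)
S_req = required_section_modulus(M, sigma)
b = 0.03           # fixed width, m
h = (6.0 * S_req / b) ** 0.5        # solve b * h^2 / 6 = S_req for h
assert rect_section_modulus(b, h) >= S_req * 0.999
print(f"required depth h = {h * 1e3:.1f} mm")   # about 57.7 mm
```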
... In the engineering design literature, network maps of technology domains [40,41] and semantic networks [54,55] have been created and used to guide the retrieval of analogical design stimuli based on the quantified knowledge distance between domains [15] or the semantic distance between concepts [70]. Meanwhile, the most frequently used knowledge bases are large pre-trained commonsense knowledge graphs such as WordNet, ConceptNet, and Freebase [120,121]. ...
Conference Paper
Full-text available
Design-by-Analogy (DbA) is a design methodology that draws inspiration from a source domain to generate new solutions, opportunities or designs in a target domain, which can benefit designers in mitigating design fixation and improving design ideation outcomes. Recently, the increasingly available design databases and rapidly advancing data science and artificial intelligence technologies have presented new opportunities for developing data-driven methods and tools for DbA support. Herein, we survey prior data-driven DbA studies and categorize and analyze individual studies according to their data, methods and applications in four categories: analogy encoding, retrieval, mapping, and evaluation. Based on this structured literature analysis, the paper elucidates the state of the art of data-driven DbA research to date and benchmarks it against the frontier of data science and AI research to identify promising research opportunities and directions for the field.
... The parameters included in the questionnaire were as follows: 1) relevance, 2) uniqueness, 3) clarity, 4) choice of colours, 5) sketching ability, 6) language processing, and 7) narration (Camburn et al., 2020; Demirkan and Afacan, 2012; Chaudhuri et al., 2020; Chaudhuri et al., 2021; Takai et al., 2015; Schumann et al., 1996; Berbague et al., 2021). Firstly, relevance verifies whether a solution is appropriate for a question. ...
Article
An inherent criterion of evaluation in Design education is novelty. Novelty is a measure of newness in solutions, evaluated by relative comparison with a frame of reference. Evaluating novelty is subjective and generally depends on an expert's referential metrics, based on their knowledge and persuasion. Pedagogues compare and contrast solutions across cohorts of students in the mass examinations for admission to Design schools. Large numbers of students participate in these examinations, and in such situations examiners are confronted with multiple challenges in subjective evaluation, such as: 1) errors due to the stipulated timeline, 2) errors due to prolonged working hours, and 3) errors due to the stress of performing a repeated task on a large scale. Pedagogues remain ever-inquisitive and vigilant about keeping the evaluation process consistent and accurate despite the monotony of the repeated task. To mitigate these challenges, a computational model is proposed for automating the evaluation of novelty in image-based solutions. The model is developed through mixed-method research, in which features for evaluating novelty are investigated via a survey study. These features are then used to evaluate novelty and generate scores for image-based solutions using Computer Vision (CV) and Deep Learning (DL) techniques. The measured performance of the model reveals a negligible difference between the scores of experts and the scores of the proposed model. This comparative analysis with human experts confirms the competence of the devised model and should go a long way toward establishing the trust of pedagogues by ensuring reduced error and stress during the evaluation process.
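The paper's own CV/DL model is not reproduced here; the sketch below shows one generic way to score image novelty that is consistent with the idea of relative comparison within a cohort: each solution's novelty is its mean embedding distance to the others, using off-the-shelf pretrained ResNet features. File names are hypothetical.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()      # keep the 512-d penultimate features
resnet.eval()

prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                  T.Normalize([0.485, 0.456, 0.406],
                              [0.229, 0.224, 0.225])])

@torch.no_grad()
def embed(path):
    return resnet(prep(Image.open(path).convert("RGB")).unsqueeze(0))[0]

paths = ["sol_01.png", "sol_02.png", "sol_03.png"]   # hypothetical cohort
E = torch.stack([embed(p) for p in paths])
D = torch.cdist(E, E)                          # pairwise feature distances
novelty = D.sum(dim=1) / (len(paths) - 1)      # mean distance to the rest
print(dict(zip(paths, novelty.tolist())))
```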
... Second, the IA-generated contexts are direct retrievals from labels in ConceptNet and are mostly represented as a single word or phrase. As research in psychology reveals that a knowledge component is much like a sentence that expresses a particular idea [80], recent works in engineering design have also started to explore analysis of sentence-level design concepts [50,81]. Therefore, we believe that adding phrases and sentences to the map will allow more clarity for the user about the ideas being explored and enhance comprehension and learning of the scope of the central topic. ...
Article
Full-text available
In this paper, we report on our investigation of human-AI collaboration for mind-mapping. We specifically focus on problem exploration in the pre-conceptualization stages of early design. Our approach leverages the notion of query expansion, the process of refining a given search query to improve information retrieval. Treating a mind-map as a network of nodes, we reformulate mind-mapping as a two-player game wherein both players (a human and an intelligent agent) take turns adding one node to the network at a time. Our contribution is the design, implementation, and evaluation of the algorithm that powers the intelligent agent (AI). This paper is an extension of our prior work [1], wherein we developed this algorithm, dubbed Mini-Map, and implemented a web-based workflow enabled by ConceptNet (a large graph-based representation of "commonsense" knowledge). In this paper, we extend our prior work through a comprehensive comparison between human-AI collaboration and human-human collaboration for mind-mapping. We specifically extend our prior work by: (a) expanding on our previous quantitative analysis using established metrics and semantic studies, (b) presenting a new detailed video protocol analysis of the mind-mapping process, and (c) providing design implications for digital mind-mapping tools.
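A minimal sketch of ConceptNet-backed query expansion in the spirit of Mini-Map: given the node a user just added, fetch related edges from the public REST API and propose the strongest term not already on the map. The actual agent's scoring is more involved.

```python
import requests

def expand(term, existing):
    """Return one candidate expansion node for a mind-map, or None."""
    url = f"http://api.conceptnet.io/c/en/{term.replace(' ', '_')}"
    edges = requests.get(url, params={"limit": 50}).json().get("edges", [])
    candidates = {}
    for e in edges:
        for side in ("start", "end"):
            label = e[side].get("label", "").lower()
            if label and label != term and label not in existing:
                candidates[label] = max(candidates.get(label, 0.0),
                                        e.get("weight", 0.0))
    return max(candidates, key=candidates.get) if candidates else None

print(expand("bicycle", existing={"bicycle", "wheel"}))
```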
Conference Paper
Full-text available
Engineers often need to discover and learn designs from unfamiliar domains for inspiration or other particular uses. However, the complexity of technical design descriptions and unfamiliarity with the domain make it hard for engineers to comprehend the function, behavior, and structure of a design. To help engineers quickly understand a complex technical design description that is new to them, one approach is to represent it as a network graph of the design-related entities and their relations, as an abstract summary of the design. While graph and network visualizations are widely adopted in the engineering design literature, the challenge remains in retrieving the design entities and deriving their relations. In this paper, we propose a network mapping method powered by the Technology Semantic Network (TechNet). Through a case study, we showcase how TechNet's unique characteristic of being trained on a large technology-related data source gives it an advantage over common-sense knowledge bases, such as WordNet and ConceptNet, for design knowledge representation.
Preprint
Full-text available
We review the scholarly contributions that utilise Natural Language Processing (NLP) methods to support the design process. Using a heuristic approach, we collected 223 articles published in 32 journals between 1991 and the present. We present the state of the art of NLP in-and-for design research by reviewing these articles according to the type of natural language text source: internal reports, design concepts, discourse transcripts, technical publications, consumer opinions, and others. Upon summarizing these contributions and identifying the gaps in them, we utilise an existing design innovation framework to identify the applications that are currently supported by NLP. We then propose several methodological and theoretical directions for future NLP in-and-for design research.
Article
Digital Engineering is an emerging trend that aims to support engineering design by integrating computational technologies such as design automation, data science, digital twins, and product lifecycle management. To align industrial practice with the state of the art, an industrial survey was conducted to capture the status quo and identify obstacles that hinder implementation in industry. The results show that companies struggle with missing know-how and a shortage of available experts. Future work should elaborate methods that facilitate the integration of Digital Engineering into design practice.
Article
Function drives many early design considerations in product development, highlighting the importance of finding functionally similar examples when searching for sources of inspiration or evaluating designs against existing technology. However, it is difficult to capture what people consider functionally similar and, therefore, whether measures that quantify and compare function using the products themselves are meaningful. In this work, human evaluations of similarity are compared to computationally determined values, shedding light on how quantitative measures align with human perceptions of functional similarity. Human perception of functional similarity is considered at two levels of abstraction: (1) the high-level purpose of a product and (2) how the product works. These human similarity evaluations are quantified by crowdsourcing 1,360 triplet ratings at each functional abstraction and creating low-dimensional embeddings from the triplets. The triplets and embeddings are then compared to similarities computed between functional models using six representative measures, including both matching measures (e.g., cosine similarity) and network-based measures (e.g., spectral distance). The outcomes demonstrate how levels of abstraction and the fuzzy line between "highly similar" and "somewhat similar" products may impact human representations of functional similarity and their subsequent alignment with computed similarity. The results inform how functional similarity can be leveraged by designers, with applications in creativity support tools, such as those used for design-by-analogy, and other computational methods in design that incorporate product function.
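As an illustration of one of the network-based measures named above, the sketch below computes a spectral distance between two toy functional models, i.e., the L2 distance between their zero-padded Laplacian eigenvalue spectra; the paper's functional models and six measures are more elaborate.

```python
import numpy as np
import networkx as nx

def laplacian_spectrum_distance(g1, g2):
    s1 = np.sort(nx.laplacian_spectrum(g1))
    s2 = np.sort(nx.laplacian_spectrum(g2))
    n = max(len(s1), len(s2))
    s1, s2 = np.pad(s1, (0, n - len(s1))), np.pad(s2, (0, n - len(s2)))
    return float(np.linalg.norm(s1 - s2))

# Toy functional models: nodes are functions, edges are flows between them.
g_a = nx.Graph([("import energy", "convert energy"),
                ("convert energy", "export torque")])
g_b = nx.Graph([("import energy", "store energy"),
                ("store energy", "export energy"),
                ("export energy", "export torque")])
print(laplacian_spectrum_distance(g_a, g_b))
```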
Article
Customer-involved design concept evaluation (CDCE) is a key issue in developing new products welcomed by customers, but few studies have considered the integrated utilization of objective design values (DVs) and customers' subjective preference values (PVs). Our previous study attempted to fuse DVs and PVs in CDCE, but it was limited to a few situations with benefit-like and cost-like evaluation criteria. For better CDCE, this study further fuses DVs and PVs in more complex situations and puts forward an improved version of rough distance to redefined ideal solution (RD-RIS II) in the multi-criteria decision-making (MCDM) scope to select the optimal concept. Different from the old RD-RIS of the previous study, RD-RIS II not only supports the ideal solution definition (ISD) processes for both quantitative and qualitative evaluation criteria but, more importantly, utilizes more useful information (value, feature, number and impact) from DVs and PVs to redefine the new positive ideal solution (PIS) and negative ideal solution (NIS). Through the rough distance calculation, the alternative that is close to the PIS and far away from the NIS is selected as the best one. Besides, the feasibility of RD-RIS II is validated via application to a real design evaluation example, and three empirical comparisons confirm that RD-RIS II makes more comprehensive decisions than other MCDM-based evaluation methods, especially when the choices of customers and designers conflict; therefore, it can provide more reasonable evaluation results with better credibility and stability.
Article
To assist designers in making comprehensive decisions regarding objective design values (DVs) and subjective preference values (PVs) during the design solution evaluation stage, this study builds an information-intensive design solution evaluator (IIDSE) that combines multiple types of information from DVs and PVs. In the IIDSE, the importance degrees of the DVs and PVs are analysed based on their differences. Then, according to the importance classifications, values, characteristics, and numbers of DVs and PVs, a multi-information fusion (MIF)-based ideal solution definition strategy is proposed, covering quantitative criteria with i) benefit characteristics and ii) cost characteristics, as well as iii) qualitative criteria. A rough multi-criteria decision-making (R-MCDM) model is used to evaluate an alternative by computing its deviation from the defined ideal solution. The effectiveness of the IIDSE was validated via empirical comparisons. Experiment I showed that the MIF-based strategy is compatible with different R-MCDM models for selecting the preferred and best-performing solution. In experiment II, among the R-MCDM models, R-COPRAS plus the MIF-based strategy proved the best combination for constructing the IIDSE. Experiments III and IV demonstrated that the IIDSE can obtain more reasonable solutions than classical evaluators, especially in cases where conflicts exist between the objective DVs and subjective PVs.
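A simplified sketch of the rough-distance idea behind such evaluators: each alternative's interval ratings are compared against the defined PIS and NIS, and a TOPSIS-style closeness coefficient ranks alternatives. The interval values are illustrative, and the paper's deviation measure may differ.

```python
import numpy as np

def interval_dist(a, b):
    """Euclidean-style distance between two rough intervals (lower, upper)."""
    return np.sqrt(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) / 2.0)

# One alternative rated on three criteria, plus redefined ideal solutions.
alt = [(3, 5), (5, 7), (2, 4)]
pis = [(5, 7), (5, 7), (5, 7)]
nis = [(1, 3), (1, 3), (1, 3)]

d_pos = sum(interval_dist(a, p) for a, p in zip(alt, pis))
d_neg = sum(interval_dist(a, n) for a, n in zip(alt, nis))
closeness = d_neg / (d_pos + d_neg)   # higher = nearer PIS, farther NIS
print(round(closeness, 3))
```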
Article
Full-text available
Recent advances in artificial intelligence (AI) offer opportunities for integrating AI into human design teams. Although various AIs have been developed to aid engineering design, the impact of AI usage on human design teams has received scant research attention. This research assesses the impact of a deep learning AI on distributed human design teams through a human subject study that includes an abrupt problem change. The results demonstrate that, for this study, the AI boosts the initial performance of low-performing teams before the problem change but always hurts the performance of high-performing teams. The potential reasons behind these results are discussed and several suggestions and warnings for utilizing AI in engineering design are provided.
Conference Paper
Full-text available
Novel concepts are essential for design innovation and can be generated with the aid of data stimuli and computers. However, current generative design algorithms focus on diagrammatic or spatial concepts that are either too abstract to understand or too detailed for early-phase design exploration. This paper explores the use of generative pre-trained transformers (GPT) for natural language design concept generation. Our experiments involve the use of GPT-2 and GPT-3 for different creative reasoning tasks in design. Both show reasonably good performance for verbal design concept generation.
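GPT-3 sits behind the OpenAI API, but the GPT-2 half of such experiments can be reproduced in a few lines with Hugging Face transformers; the prompt and sampling settings below are assumptions, not the paper's.

```python
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="gpt2")
prompt = "A novel design concept for urban last-mile delivery:"
for out in generator(prompt, max_length=60, num_return_sequences=3,
                     do_sample=True, top_p=0.95):
    print(out["generated_text"], "\n---")
```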
Article
Full-text available
Early-stage ideation is a critical step in the design process. Mind maps are a popular tool for generating design concepts and, in general, for hierarchically organizing design insights. We explore an application for high-level concept synthesis in early-stage design, which is typically difficult due to the broad space of options in early stages (e.g., as compared to parametric automation tools, which are typically applicable in concept refinement or detail design). However, developing a useful mind map often demands a considerable time investment from a diverse design team. To facilitate the process of creating mind maps, we present an approach that crowdsources both concepts and the binning of those concepts, using a mix of human evaluators and machine learning. The resulting computer-aided mind map has significantly higher average concept novelty and no significant difference in average feasibility (quantity can be set independently) compared with manually generated mind maps; it includes distinct concepts and reduces cost in terms of the designers' time. This approach has the potential to make early-stage ideation faster, scalable and parallelizable, while creating alternative approaches to searching for breadth and diversity of ideas. Emerging research explores the use of machine learning and other advanced computational techniques to amplify the mind-mapping process. This work demonstrates the use of both the EM-SVD and HDBSCAN algorithms in an inferential clustering approach to reduce the number of one-to-one comparisons required in forming clusters of concepts. Crowdsourced human effort assists both concept generation and clustering in the mind map. This process provides a viable approach to augmenting ideation methods, reduces the workload on a design team, and thus provides an efficient and useful machine learning-based clustering approach.
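A condensed sketch of the clustering step: TF-IDF features reduced with TruncatedSVD (standing in here for the paper's EM-SVD) and binned with HDBSCAN, which needs no pre-set cluster count; the concept strings are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
import hdbscan

concepts = [
    "solar powered phone charger for hikers",
    "foldable solar panel backpack",
    "water filter straw for camping",
    "uv water purification bottle",
]

X = TfidfVectorizer().fit_transform(concepts)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

labels = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(Z)
print(labels)    # concepts sharing a label form one mind-map bin
```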
Article
Full-text available
Textual idea data from online crowdsourcing contains rich information of the concepts that underlie the original ideas and can be recombined to generate new ideas. But representing such information in a way that can stimulate new ideas is not a trivial task, because crowdsourced data are often vast and in unstructured natural languages. This paper introduces a method that uses natural language processing to summarize a massive number of idea descriptions and represents the underlying concept space as word clouds with a core-periphery structure to inspire recombinations of such concepts into new ideas. We report the use of this method in a real public-sector-sponsored project to explore ideas for future transportation system design. Word clouds that represent the concept space underlying original crowdsourced ideas are used as ideation aids and stimulate many new ideas with varied novelty, usefulness and feasibility. The new ideas suggest that the proposed method helps expand the idea space. Our analysis of these ideas and a survey with the designers who generated them shed light on how people perceive and use the word clouds as ideation aids and suggest future research directions.
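As a toy illustration of a core-periphery split by raw term frequency; the idea texts, tokenization, and threshold below are invented and far simpler than the paper's NLP pipeline:

```python
from collections import Counter

ideas = ["autonomous shuttle pods", "shared autonomous bikes",
         "drone parcel delivery", "underground parcel pipelines"]
counts = Counter(word for idea in ideas for word in idea.lower().split())

core = sorted(w for w, c in counts.items() if c > 1)     # frequent, central terms
periphery = sorted(w for w, c in counts.items() if c == 1)
print("core:", core)          # ['autonomous', 'parcel']
print("periphery:", periphery)
```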
Article
Full-text available
Economic use of early-stage prototyping is of paramount importance to companies engaged in the development of innovative products, services and systems because it directly impacts their bottom line. There is likewise a need to understand the dimensions and lenses that make up an economic profile of prototypes. Yet, there is little reliable understanding of how resources expended and views of dimensionality across prototyping translate into value. To help practitioners, designers, and researchers leverage prototyping most economically, we seek to understand the tradeoff between design information gained through prototyping and the resources expended prototyping. We investigate this topic by conducting an inductive study on industry projects across disciplines and knowledge domains while collecting and analyzing empirical data on their prototype creation and test processes. Our research explores ways of quantifying prototyping value and reinforcing the asymptotic relationship between value and fidelity. Most intriguingly, the research reveals insightful heuristics that practitioners can exploit to generate high value from low- and high-fidelity prototypes alike.
Article
Full-text available
Traditionally, design opportunities and directions are conceived based on expertise, intuition, or time-consuming user studies and marketing research at the fuzzy front end of the design process. Herein, we propose the use of the total technology space map (TSM) as a visual ideation aid for rapidly conceiving high-level design opportunities. The map is comprised of various technology domains positioned according to knowledge proximity, which is measured based on a large quantity of patent data. It provides a systematic picture of the total technology space to enable stimulated ideation beyond the designer's knowledge. Designers can browse the map and navigate various technologies to conceive new design opportunities that relate different technologies across the space. We demonstrate the process of using TSM as a rapid ideation aid and then analyze its applications in two experiments to show its effectiveness and limitations. Furthermore, we have developed a cloud-based system for computer-aided ideation, that is, InnoGPS, to integrate interactive map browsing for conceiving high-level design opportunities with domain-specific patent retrieval for stimulating concrete technical concepts, and to potentially embed machine-learning and artificial intelligence in the map-aided ideation process.
Conference Paper
Full-text available
Recently, the term knowledge graph has been used frequently in research and business, usually in close association with Semantic Web technologies, linked data, large-scale data analytics and cloud computing. Its popularity is clearly influenced by the introduction of Google's Knowledge Graph in 2012, and since then the term has been widely used without a definition. A large variety of interpretations has hampered the evolution of a common understanding of knowledge graphs. Numerous research papers refer to Google's Knowledge Graph, although no official documentation about the used methods exists. The prerequisite for widespread academic and commercial adoption of a concept or technology is a common understanding, based ideally on a definition that is free from ambiguity. We tackle this issue by discussing and defining the term knowledge graph, considering its history and diversity in interpretations and use. Our goal is to propose a definition of knowledge graphs that serves as basis for discussions on this topic and contributes to a common vision.
Article
Full-text available
Invention arises from novel combinations of prior technologies. However, prior studies of creativity have suggested that overly novel combinations may be harmful to invention. Apart from the factors of expertise, market, etc., there may be such a thing as ‘too much’ or ‘too little’ novelty that will determine an invention’s future value, but little empirical evidence exists in the literature. Using technical patents as the proxy of inventions, our analysis of 3.9 million patents identifies a clear ‘sweet spot’ in which the mix of novel combinations of prior technologies favors an invention’s eventual success. Specifically, we found that the invention categories with the highest mean values and hit rates have moderate novelty in the center of their combination space and high novelty in the extreme of their combination space. Too much or too little central novelty suppresses the positive contribution of extreme novelty in the invention. Furthermore, the combination of scientific and broader knowledge beyond patentable technologies creates additional value for invention and enlarges the advantage of the novelty sweet spot. These findings may further enable data-driven methods both for assessing invention novelty and for profiling inventors, and may inspire a new strand of data-driven design research and practice.
Conference Paper
Full-text available
Design is a ubiquitous human activity. Design is valued by individuals, teams, organizations, and cultures. There are patterns and recurrent phenomena across the diverse set of approaches to design and also variances. Designers can benefit from leveraging conceptual tools like process models, methods, and design principles to amplify design phenomena. There are many variant process models, methods, and principles for design. Likewise, usage of these conceptual tools differs across industrial contexts. We present an integrated process model, with exemplar methods and design principles, that is synthesized from a review of several case studies in client-based industrial design projects for product, service, and system development, professional education courses, and a literature review. Concepts from several branches of design practice: (1) design thinking, (2) business design, (3) systems engineering, and (4) design engineering are integrated. A design process model, method set, and set of abstracted design principles are proposed.
Article
Full-text available
Data-driven engineering designers often search for design precedents in patent databases to learn about relevant prior arts, seek design inspiration, or assess the novelty of their own new inventions. However, patent retrieval relevant to the design of a specific product or technology is often unstructured and unguided, and the resultant patents do not sufficiently or accurately capture the prior design knowledge base. This paper proposes an iterative and heuristic methodology to comprehensively search for patents as precedents of the design of a specific technology or product for data-driven design. The patent retrieval methodology integrates the mining of patent texts, citation relationships, and inventor information to identify relevant patents; particularly, the search keyword set, citation network, and inventor set are expanded through the designer's heuristic learning from the patents identified in prior iterations. The method relaxes the requirement for initial search keywords while improving patent retrieval completeness and accuracy. We apply the method to identify self-propelled spherical rolling robot (SPSRR) patents. Furthermore, we present two approaches to further integrate, systemize, visualize, and make sense of the design information in the retrieved patent data for exploring new design opportunities. Our research contributes to patent data-driven design.
Article
Full-text available
This paper examines Parametric Design (PD) in contemporary architectural practice. It considers three case studies: The Future of Us pavilion, the Louvre Abu Dhabi and the Morpheus Hotel. The case studies illustrate how, compared to non-parametrically and older parametrically designed projects, PD is employed to generate, document and fabricate designs with a greater level of detail and differentiation, often at the level of individual building components. We argue that such differentiation cannot be achieved with conventional Building Information Modelling and without customizing existing software. We compare the case studies' PD approaches (object-oriented programming, functional programming, visual programming and distributed visual programming) and their decomposition, algorithms and data structures as crucial factors for the practical viability of complex parametric models and as key aspects of PD thinking.
Article
Full-text available
Design is a ubiquitous human activity. Design is valued by individuals, teams, organizations, and cultures. There are patterns and recurrent phenomena across the diverse set of approaches to design and also variances. Designers can benefit from leveraging conceptual tools like process models, methods, and design principles to amplify design phenomena. There are many variant process models, methods, and principles for design. Likewise, usage of these conceptual tools differs across industrial contexts. We present an integrated process model, with exemplar methods and design principles, that is synthesized from a review of several case studies in client-based industrial design projects for product, service, and system development, professional education courses, and a literature review. Concepts from several branches of design practice: (1) design thinking, (2) business design, (3) systems engineering, and (4) design engineering are integrated. A design process model, method set, and set of abstracted design principles are proposed. OPENING There are patterns and consistent styles to the approach that humans take in design. Patterns of design activity often emerge quite differently according to context and the style of the designer [4]. In this paper we explore the integration of several extant conceptual tools that support design innovation [5]. Specifically, aspects of design thinking, business design, systems engineering, and design engineering are explored. The distinctions and interrelationships between design process models, methods, and principles are also approached. The paper is organized according to the following three objectives: 1. Explore professional education workshops 2. Explore industrial case studies 3. Develop an integrated design innovation process model, method set, and principle set. Numerous design studies have explored process models, design methods, and design principles. Yet there is an ongoing need to distinguish these concepts and to build an integrated toolset. These conceptual tools are distinct, yet support each other. Process models guide overall activity flows and trends, methods support shorter-term activities and help in planning work tasks, while principles guide designers mentally. Figure 1 provides an abstract illustration of the inter-woven relationship between these conceptual tools and designers.
Article
Full-text available
Everybody experiences every day the need to manage a huge amount of heterogeneous shared resources, causing information overload and fragmentation problems. Collaborative annotation tools are the most common way to address these issues, but collaboratively tagging resources is usually perceived as a boring and time-consuming activity and a possible source of conflicts. To face this challenge, collaborative systems should effectively support users in the resource annotation activity and in the definition of a shared view. The main contribution of this paper is the presentation and the evaluation of a set of mechanisms (personal annotations over shared resources and tag suggestions) that provide users with the mentioned support. The goal of the evaluation was to (1) assess the improvement with respect to the situation without support; (2) evaluate the satisfaction of the users, with respect to both the final choice of annotations and possible conflicts; and (3) evaluate the usefulness of the support mechanisms in terms of actual usage and user perception. The experiment consisted in a simulated collaborative work scenario, where small groups of users annotated a few resources and then answered a questionnaire. The evaluation results demonstrate that the proposed support mechanisms can reduce both overload and possible disagreement.
Conference Paper
Full-text available
Empirical work in design science has highlighted that the process of ideation can significantly affect design outcomes. Exploring the design space with both breadth and depth increases the likelihood of achieving better design outcomes. Furthermore, iteratively attempting to solve challenging design problems in large groups over a short time period may be more effective than protracted exploration by an isolated set of individuals. There remains a substantial opportunity to explore the structure of various design concept sets. In addition, many empirical studies cap analysis at sample sizes of less than one hundred individuals. This has provided substantial, though partial, models of the ideation space. This work explores one new territory in large-scale ideation. Two conditions are evaluated. In the first condition, an ideation session was run with 2,400 practicing designers and engineers from one organization. In the second condition, 1,000 individuals ideated on the same problem in a completely distributed environment and without awareness of each other. We compare properties of the solution sets produced by each of these groups and activities. Analytical tools from network modeling theory are applied, as well as traditional ideation metrics such as concept binning with saturation analysis. Structural network modeling is applied to evaluate the interconnectivity of design concepts. This is a strictly quantitative, and at the same time graphically expressive, means to evaluate the diversity of a design solution set. Observations indicate that the group condition approached saturation of distinct categories more rapidly than the individual, distributed condition. The total number of solution categories developed in the group condition was also higher. Additionally, individuals generally provided concepts across a greater number of solution categories in the group condition. The indication for design practice is that groups of just under forty individuals would provide category saturation within group ideation for a system-level design, while distributed individuals may provide additional concept differentiation. This evidence can support development of more systematic ideation strategies. Furthermore, we provide an algorithmic approach for quantitative evaluation of variety in design solution sets using network analysis techniques. These methods can be used in complex or wicked problems, and in system development where the design space is vast.
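A small sketch of the network view of a solution set: concepts become nodes, edges join pairs judged sufficiently similar, and connected components stand in for solution categories. The pairs below are invented for illustration, not the study's data:

```python
import networkx as nx

similar_pairs = [("A", "B"), ("B", "C"), ("D", "E")]  # pairs above a threshold

G = nx.Graph()
G.add_nodes_from("ABCDEF")        # six concepts; "F" resembles nothing else
G.add_edges_from(similar_pairs)

print(nx.number_connected_components(G))  # 3 categories: {A,B,C}, {D,E}, {F}
```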
Article
Full-text available
Climate change, resource depletion, and worldwide urbanization feed the demand for more energy and resource-efficient buildings. Increasingly, architectural designers and consultants analyze building designs with easy-to-use simulation tools. To identify design alternatives with good performance, designers often turn to optimization methods. Randomized, metaheuristic methods such as genetic algorithms are popular in the architectural design field. However, are metaheuristics the best approach for architectural design problems that often are complex and ill defined? Metaheuristics may find solutions for well-defined problems, but they do not contribute to a better understanding of a complex design problem. This paper proposes surrogate-based optimization as a method that promotes understanding of the design problem. The surrogate method interpolates a mathematical model from data that relate design parameters to performance criteria. Designers can interact with this model to explore the approximate impact of changing design variables. We apply the radial basis function method, a specific type of surrogate model, to two architectural daylight optimization problems. These case studies, along with results from computational experiments, serve to discuss several advantages of surrogate models. First, surrogate models not only propose good solutions but also allow designers to address issues outside of the formulation of the optimization problem. Instead of accepting a solution presented by the optimization process, designers can improve their understanding of the design problem by interacting with the model. Second, a related advantage is that designers can quickly construct surrogate models from existing simulation results and other knowledge they might possess about the design problem. Designers can thus explore the impact of different evaluation criteria by constructing several models from the same set of data. They also can create models from approximate data and later refine them with more precise simulations. Third, surrogate-based methods typically find global optima orders of magnitude faster than genetic algorithms, especially when the evaluation of design variants requires time-intensive simulations.
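A minimal surrogate-model sketch using radial basis functions, assuming a handful of precomputed simulation results; all design variables and performance values below are invented for illustration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

designs = np.array([[0.2, 1.0], [0.5, 2.0], [0.8, 1.5], [0.4, 2.5]])  # design vars
daylight = np.array([0.31, 0.55, 0.48, 0.62])   # simulated performance values

surrogate = RBFInterpolator(designs, daylight)  # interpolates the sample data
print(surrogate(np.array([[0.45, 2.2]])))  # cheap estimate at an untried design
```

Because evaluating the surrogate is nearly free, a designer can probe many design variants, or rebuild the model under different evaluation criteria, without rerunning time-intensive simulations.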
Article
Full-text available
This work lends insight into the meaning and impact of "near" and "far" analogies. A cognitive engineering design study is presented that examines the effect of the distance of analogical design stimuli on design solution generation, and places those findings in the context of results from the literature. The work ultimately sheds new light on the impact of analogies in the design process and the significance of their distance from a design problem. In this work, the design repository from which analogical stimuli are chosen is the U.S. patent database, a natural choice, as it is one of the largest and most easily accessed catalogued databases of inventions. The "near" and "far" analogical stimuli for this study were chosen based on a structure of patents, created using a combination of latent semantic analysis and a Bayesian-based algorithm for discovering structural form, resulting in clusters of patents connected by their relative similarity. The findings of this engineering design study are juxtaposed with the findings of a previous study by the authors in design by analogy, which appear to be contradictory when viewed independently. However, by mapping the analogical stimuli used in the earlier work into similar structures along with the patents used in the current study, a relationship between all of the stimuli and their relative distance from the design problem is discovered. The results confirm that "near" and "far" are relative terms, and depend on the characteristics of the potential stimuli. Further, although the literature has shown that "far" analogical stimuli are more likely to lead to the generation of innovative solutions with novel characteristics, there is such a thing as too far: if the stimuli are too distant, they can become harmful to the design process. Importantly, the data-mapping approach to identifying analogies works, and is able to improve the effectiveness of the design process. This work has implications not only in the area of finding inspirational designs to use for design-by-analogy processes in practice, but also for synthesis, or perhaps even unification, of future studies in the field of design by analogy. [DOI: 10.1115/1.4023158]
Article
Full-text available
The goal of many blue-sky idea generation techniques is to generate a large quantity of ideas with the hope of obtaining a few outstanding, creative ideas that are worth pursuing. As such, a rapid means of screening the resulting sketches to select a manageable set of promising ideas is needed. This study explores a metric for evaluating large quantities of early-stage product sketches and tests the metric through an online service called Mechanical Turk. Reviewers' subjective ratings of idea creativity had a strong correlation with ratings of idea novelty (r = 0.80), but negligible correlation with idea usefulness (r = 0.16). The clarity of the sketch positively influenced ratings of idea creativity. Additionally, the quantity of ideas generated by an individual participant had a strong correlation with that participant's overall creativity scores (r = 0.82). The authors suggest a metric of three attributes to be used as a first pass in narrowing a large pool of product ideas to the most innovative: novel, useful (or valuable), and feasible (as determined by experts).
Article
Full-text available
Design by analogy is a powerful part of the design process across the wide variety of modalities used by designers, such as linguistic descriptions, sketches, and diagrams. We need tools to support people's ability to find and use analogies. A deeper understanding of the cognitive mechanisms underlying design and analogy is a crucial step in developing these tools. This paper presents an experiment that explores the effects of representation within the modality of sketching, the effects of functional models, and the retrieval and use of analogies. We find that the level of abstraction for the representation of prior knowledge and the representation of a current design problem both affect people's ability to retrieve and use analogous solutions. A general semantic description in memory facilitates retrieval of that prior knowledge. The ability to find and use an analogy is also facilitated by having an appropriate functional model of the problem. These studies result in a number of important implications for the development of tools to support design by analogy. Foremost among these implications is the ability to provide multiple representations of design problems across which designers may reason, with the verb construct in the English language being a preferred mode for these representations.
Article
Full-text available
This paper provides an introduction to a new design methodology known as A-Design, which combines aspects of multi-objective optimization, multi-agent systems, and automated design synthesis. The A-Design theory is founded on the notion that engineering design occurs in interaction with an ever-changing environment, and therefore computer tools developed to aid in the design process should be adaptive to these changes. In this paper, A-Design is introduced along with some simple test problems to demonstrate the capabilities of different aspects of the theory. The theory of A-Design is then shown as the basis for a design tool that adaptively creates electro-mechanical configuration designs for changing user preferences.
Article
Full-text available
In 1956, Miller [1] conjectured that there is an upper limit on our capacity to process information on simultaneously interacting elements with reliable accuracy and with validity. This limit is seven plus or minus two elements. He noted that the number 7 occurs in many aspects of life, from the seven wonders of the world to the seven seas and seven deadly sins. We demonstrate in this paper that in making preference judgments on pairs of elements in a group, as we do in the analytic hierarchy process (AHP), the number of elements in the group should be no more than seven. The reason is founded in the consistency of information derived from relations among the elements. When the number of elements increases past seven, the resulting increase in inconsistency is too small for the mind to single out the element that causes the greatest inconsistency to scrutinize and correct its relation to the other elements, and the result is confusion to the mind from the existing information. The AHP as a theory of measurement has a basic way to obtain a measure of inconsistency for any such set of pairwise judgments. When the number of elements is seven or less the inconsistency measurement is relatively large with respect to the number of elements involved; when the number is more it is relatively small. The most inconsistent judgment is easily determined in the first case and the individual providing the judgments can change it in an effort to improve the overall inconsistency. In the second case, as the inconsistency measurement is relatively small, improving inconsistency requires only small perturbations and the judge would be hard put to determine what that change should be, and how such a small change could be justified for improving the validity of the outcome. The mind is sufficiently sensitive to improve large inconsistencies but not small ones. And the implication of this is that the number of elements in a set should be limited to seven plus or minus two.
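For reference, the inconsistency measure the abstract alludes to is conventionally computed in AHP as follows (standard AHP background, not text reproduced from this paper):

```latex
\mathrm{CI} = \frac{\lambda_{\max} - n}{n - 1},
\qquad
\mathrm{CR} = \frac{\mathrm{CI}}{\mathrm{RI}(n)}
```

where $\lambda_{\max}$ is the principal eigenvalue of the $n \times n$ pairwise comparison matrix, $\mathrm{RI}(n)$ is the mean random index for matrices of order $n$, and $\mathrm{CR} \le 0.1$ is the conventional acceptability threshold.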
Article
Full-text available
Metrics are used by firms for a variety of commendable purposes. The authors maintain that every metric, however used, will affect actions and decisions. But, of course, choosing the right one is critical to success. The authors focus on the selection of good metrics and, based on their own experience and the academic literature, summarize seven pitfalls in the use of metrics which can cause them to be counter-productive and fail. The article then goes on to outline a seven-step system to design effective, 'lean' metrics, which depends on a close understanding of customers, employees, work processes, and the underlying properties of metrics themselves.
Conference Paper
Full-text available
User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often require practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users for low time and monetary costs. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low cost, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach.
Conference Paper
Full-text available
We present data from detailed observation of 24 information workers that shows that they experience work fragmentation as common practice. We consider that work fragmentation has two components: length of time spent in an activity, and frequency of interruptions. We examined work fragmentation along three dimensions: effect of collocation, type of interruption, and resumption of work. We found work to be highly fragmented: people average little time in working spheres before switching and 57% of their working spheres are interrupted. Collocated people work longer before switching but have more interruptions. Most internal interruptions are due to personal work whereas most external interruptions are due to central work. Though most interrupted work is resumed on the same day, more than two intervening activities occur before it is. We discuss implications for technology design: how our results can be used to support people to maintain continuity within a larger framework of their working spheres.
Conference Paper
Full-text available
For easing the exchange of news, the International Press Telecommunication Council (IPTC) has developed the NewsML Architecture (NAR), an XML-based model that is specialized into a number of languages such as NewsML G2 and EventsML G2. As part of this architecture, specific controlled vocabularies, such as the IPTC News Codes, are used to categorize news items together with other industry-standard thesauri. While news is still mainly in the form of text-based stories, these are often illustrated with graphics, images and videos. Media-specific metadata formats, such as EXIF, DIG35 and XMP, are used to describe the media. The use of different metadata formats in a single production process leads to interoperability problems within the news production chain itself. It also excludes linking to existing web knowledge resources and impedes the construction of uniform end-user interfaces for searching and browsing news content. In order to allow these different metadata standards to interoperate within a single information environment, we design an OWL ontology for the IPTC News Architecture, linked with other multimedia metadata standards. We convert the IPTC NewsCodes into a SKOS thesaurus and we demonstrate how the news metadata can then be enriched using natural language processing and multimedia analysis and integrated with existing knowledge already formalized on the Semantic Web. We discuss the method we used for developing the ontology and give rationale for our design decisions. We provide guidelines for re-engineering schemas into ontologies and formalize their implicit semantics. In order to demonstrate the appropriateness of our ontology infrastructure, we present an exploratory environment for searching and browsing news items.
Article
Full-text available
Although Mechanical Turk has recently become popular among social scientists as a source of experimental data, doubts may linger about the quality of data provided by subjects recruited from online labor markets. We address these potential concerns by presenting new demographic data about the Mechanical Turk subject population, reviewing the strengths of Mechanical Turk relative to other online and offline methods of recruiting subjects, and comparing the magnitude of effects obtained using Mechanical Turk and traditional subject pools. We further discuss some additional benefits, such as the possibility of longitudinal, cross-cultural and prescreening designs, and offer some advice on how best to manage a common subject pool.
Conference Paper
Early stages of the engineering design process are vital to shaping the final design; each subsequent step builds from the initial concept. Innovation-driven engineering problems require designers to focus heavily on early-stage design generation, with constant application and evaluation of design changes. Strategies to reduce the amount of time and effort designers spend in this phase could improve the efficiency of the design process as a whole. This paper seeks to create and demonstrate a two-tiered design grammar that encodes heuristic strategies to aid in the generation of early solution concepts. Specifically, this two-tiered grammar mimics the combination of heuristic-based strategic actions and parametric modifications employed by human designers. Rules in the higher-tier are abstract and potentially applicable to multiple design problems across a number of fields. These abstract rules are translated into a series of lower-tier rule applications in a spatial design grammar, which are inherently domain-specific. This grammar is implemented within the HSAT agent-based algorithm. Agents iteratively select actions from either the higher-tier or lower-tier. This algorithm is applied to the design of wave energy converters, devices which use the motion of ocean waves to generate electrical power. Comparisons are made between designs generated using only lower-tier rules and those generated using only higher-tier rules.
Conference Paper
Concept clustering is an important element of the product development process. The process of reviewing multiple concepts provides a means of communicating concepts developed by individual team members and by the team as a whole. Clustering, however, can also require arduous iterations and the resulting clusters may not always be useful to the team. In this paper, we present a machine learning approach on natural language descriptions of concepts that enables an automatic means of clustering. Using data from over 1,000 concepts generated by student teams in a graduate new product development class, we provide a comparison between the concept clustering performed manually by the student teams and the work automated by a machine learning algorithm. The goal of our machine learning tool is to support design teams in identifying possible areas of “over-clustering” and/or “under-clustering” in order to enhance divergent concept generation processes.
Conference Paper
Collaborative knowledge bases that make their data freely available in a machine-readable form are central for the data strategy of many projects and organizations. The two major collaborative knowledge bases are Wikimedia's Wikidata and Google's Freebase. Due to the success of Wikidata, Google decided in 2014 to offer the content of Freebase to the Wikidata community. In this paper, we report on the ongoing transfer efforts and data mapping challenges, and provide an analysis of the effort so far. We describe the Primary Sources Tool, which aims to facilitate this and future data migrations. Throughout the migration, we have gained deep insights into both Wikidata and Freebase, and share and discuss detailed statistics on both knowledge bases.
Conference Paper
The task of keyword extraction aims at capturing expressions (or entities) that best represent the main topics of a document. Given the rapid adoption of online semantic annotators and their contribution to the growth of the Semantic Web, an important task is to assess their quality. This article presents an evaluation of the quality and stability of semantic annotators on domain-specific and open-domain corpora. We evaluate five semantic annotators and compare them to two state-of-the-art keyword extractors, namely KP-miner and Maui. Our evaluation demonstrates that semantic annotators are not able to outperform keyword extractors and that annotators perform best on domains having a high keyword density.
Conference Paper
In this paper, we propose and investigate a novel distance-based approach for measuring the semantic dissimilarity between two concepts in a knowledge graph. The proposed Normalized Semantic Web Distance (NSWD) extends the idea of the Normalized Web Distance, which is utilized to determine the dissimilarity between two textual terms, and utilizes additional semantic properties of nodes in a knowledge graph. We evaluate our proposal on two different knowledge graphs: Freebase and DBpedia. While the NSWD achieves a correlation of up to 0.58 with human similarity assessments on the established Miller-Charles benchmark of 30 term-pairs on the Freebase knowledge graph, it reaches an even higher correlation of 0.69 on the DBpedia knowledge graph. We thus conclude that the proposed NSWD is an efficient and effective distance-based approach for assessing semantic dissimilarity in very large knowledge graphs.
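For context, the Normalized Web Distance that the NSWD extends is typically written as follows (standard formulation; the NSWD replaces the web-page counts with graph-derived features of the nodes, as detailed in the paper):

```latex
\mathrm{NWD}(x, y) =
\frac{\max\{\log f(x), \log f(y)\} - \log f(x, y)}
     {\log N - \min\{\log f(x), \log f(y)\}}
```

where $f(x)$ and $f(y)$ are the occurrence counts of the two terms, $f(x, y)$ is their co-occurrence count, and $N$ is the corpus size.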
Article
Representations in engineering design can be hand sketches, photographs, CAD, functional models, physical models, or text. Using representations allows engineers to gain a clearer picture of how a design works. We present an experiment that compares the influence of representations on fixation and creativity. The experiment presents designers with an example solution represented as a function tree and as a sketch; we compare how these different external representations influence design fixation as designers complete a design task. Results show that function trees do not cause fixation to ideas compared to a control group, and that function trees reduce fixation when compared to sketches. Results from this experiment show that function tree representations offer advantages for reducing fixation during idea generation.
Article
Analogical reasoning appears to play a key role in creative design. In briefly reviewing recent research on analogy-based creative design, this article first examines characterizations of creative design and then analyzes theories of analogical design in terms of four questions: why, what, how and when? After briefly describing recent AI theories of analogy-based creative design, the article focuses on three theories instantiated in operational computer programs: Syn, DSSUA (Design Support System Using Analogy) and Ideal. From this emerges a related set of research issues in analogy-based creative design. The main goal is to sketch the core issues, themes and directions in building such theories.
Conference Paper
Measuring design creativity is crucial to evaluating the effectiveness of idea generation methods. Historically, there has been a divide between easily-computable metrics, which are often based on arbitrary scoring systems, and human judgement metrics, which accurately reflect human opinion but rely on the expensive collection of expert ratings. This research bridges this gap by introducing a probabilistic model that computes a family of repeatable creativity metrics trained on expert data. Focusing on metrics for variety, a combination of submodular functions and logistic regression generalizes existing metrics, accurately recovering several published metrics as special cases and illuminating a space of new metrics for design creativity. When tasked with predicting which of two sets of concepts has greater variety, our model matches two commonly used metrics to 96% accuracy on average. In addition, using submodular functions allows this model to efficiently select the highest variety set of concepts when used in a design synthesis system.
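A toy submodular variety score in the spirit described above: the number of distinct feature categories a concept set covers (coverage functions are submodular). The features and sets are invented; the paper instead learns weights for such functions from expert ratings:

```python
def variety(concept_set):
    # Coverage score: count of distinct feature categories in the set.
    return len(set().union(*concept_set)) if concept_set else 0

set_a = [{"solar", "water"}, {"solar", "air"}]      # overlapping features
set_b = [{"solar", "water"}, {"wind", "storage"}]   # broader coverage
print(variety(set_a), variety(set_b))  # 3 4 -> set_b shows greater variety
```

Submodularity (diminishing returns as a set grows) is what makes greedy selection of a high-variety subset efficient and near-optimal.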
Article
Applying natural language processing for mining and intelligent information access to tweets (a form of microblog) is a challenging, emerging research area. Unlike carefully authored news text and other longer content, tweets pose a number of new challenges, due to their short, noisy, context-dependent, and dynamic nature. Information extraction from tweets is typically performed in a pipeline, comprising consecutive stages of language identification, tokenisation, part-of-speech tagging, named entity recognition and entity disambiguation (e.g. with respect to DBpedia). In this work, we describe a new Twitter entity disambiguation dataset, and conduct an empirical analysis of named entity recognition and disambiguation, investigating how robust a number of state-of-the-art systems are on such noisy texts, what the main sources of error are, and which problems should be further investigated to improve the state of the art.
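A sketch of the early pipeline stages mentioned above (tokenisation, part-of-speech tagging, named entity recognition) using an off-the-shelf library; this assumes `python -m spacy download en_core_web_sm` has been run, and is generic spaCy usage rather than one of the systems evaluated in the paper:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Arsenal beat Chelsea 2-0 at the Emirates tonight")

print([(t.text, t.pos_) for t in doc])          # tokens with POS tags
print([(e.text, e.label_) for e in doc.ents])   # named entities
```

On short, noisy tweets each of these stages degrades, which is why the paper analyses where such pipelines fail.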
Article
Advances in innovation processes are critically important as economic and business landscapes evolve. There are many concept generation techniques that can assist a designer in the initial phases of design. Unfortunately, few studies have examined these techniques in ways that can provide evidence to suggest which techniques should be preferred or how to implement them in an optimal way. This study systematically investigates the underlying factors of four common and well-documented techniques: brainsketching, gallery, 6-3-5, and C-sketch. These techniques are resolved into their key parameters, and a rigorous factorial experiment is performed to understand how the key parameters affect the outcomes of the techniques. The factors chosen for this study with undergraduate mechanical engineers include how concepts are displayed to participants (all are viewed at once or subsets are exchanged between participants, i.e., "rotational viewing") and the mode used to communicate ideas (written words only, sketches only, or a combination of written words and sketches). Four metrics are used to evaluate the data: quantity, quality, novelty, and variety. The data suggest that rotational viewing of sets of concepts described using sketches combined with words produces more ideas than having all concepts displayed in a "gallery view" form, but a gallery view results in more high-quality concepts. These results suggest that a hybrid of methods should be used to maximize the quality and number of ideas. The study also shows that individuals gain a significant number of ideas from their teammates. Ideas, when shared, can foster new idea tracks, more complete layouts, and a diverse synthesis. Finally, as teams develop more concepts, the quality of the concepts improves. This result is a consequence of the team-sharing environment and, in conjunction with the quantity of ideas, validates the effectiveness of group idea generation. This finding suggests a way to go beyond the observation that some forms of brainstorming can actually hurt productivity.
Article
The word2vec software of Tomas Mikolov and colleagues (https://code.google.com/p/word2vec/) has gained a lot of traction lately, and provides state-of-the-art word embeddings. The learning models behind the software are described in two research papers. We found the description of the models in these papers to be somewhat cryptic and hard to follow. While the motivations and presentation may be obvious to the neural-networks language-modeling crowd, we had to struggle quite a bit to figure out the rationale behind the equations. This note is an attempt to explain equation (4) (negative sampling) in "Distributed Representations of Words and Phrases and their Compositionality" by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean.
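The negative-sampling objective that the note explains is, per observed word pair $(w_I, w_O)$, as given by Mikolov et al.:

```latex
\log \sigma\!\left({v'_{w_O}}^{\top} v_{w_I}\right)
+ \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}
\left[\log \sigma\!\left(-{v'_{w_i}}^{\top} v_{w_I}\right)\right]
```

where $\sigma$ is the logistic sigmoid, $v$ and $v'$ are the input and output embeddings, and the $k$ negative words $w_i$ are drawn from a noise distribution $P_n(w)$.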
Article
This paper introduces a new perspective/direction on assessing and encouraging creativity in concept design for application in engineering design education and industry. This research presents several methods used to assess the creativity of similar student designs using metrics and judges to determine which product is considered the most creative. Two methods are proposed for creativity concept evaluation during early design, namely the Comparative Creativity Assessment (CCA) and the Multi-Point Creativity Assessment (MPCA) methods. A critical survey is provided along with a comparison of prominent creativity assessment methods for personalities, products, and the design process. These comparisons culminate in the motivation for new methodologies in creative product evaluation to address certain shortcomings in current methods. The paper details the creation of the two creativity assessment methods followed by an application of the CCA and MPCA to two case studies drawn from engineering design classes.
Article
A wide range of formal methods have been devised and used for idea generation in conceptual design. Experimental evidence is needed to support claims regarding the effectiveness of these methods in promoting idea generation in engineering design. Towards that goal, this paper presents a set of effectiveness metrics, experimental methods, and data collection and analysis techniques. Statistically based Design of Experiments (DOE) principles were used in developing the guidelines. Four classes of operating variables were considered to characterize the design problem and the environment. The effectiveness metrics proposed are based on outcomes and consist of the quantity, quality, novelty, and variety of ideas generated. Two experimental approaches have been developed. In the Direct Method, the influence of the type of design problem and various parameters related to the procedure of an idea generation method is measured by using the method in its entirety. In the Indirect Method, each idea generation method is decomposed into key components and its overall effectiveness is predicted by experimentally studying the effectiveness of its components and their mutual interactions. [S1050-0472(00)02004-3]
Article
A new automated approach to engineering design known as A-design is presented that creates design configurations through the interaction of software agents. By combining unique problem solving strategies, these agents are able to generate solutions to open-ended design problems. The A-design methodology makes several theoretical claims through its combination of multiagent systems, multiobjective design selection, and stochastic optimization, and is currently implemented to solve general electromechanical design problems. While this paper presents an overview of the theoretical basis for A-design, it primarily focuses on the method for representing electromechanical design configurations and the reasoning of the agents that construct these configurations. Results from an electromechanical test problem show the generality of the functional representation. [S1050-0472(00)00701-7].
Article
The positive effect of having environmental information is generally taken for granted in design for sustainability and ecodesign. Research in the field of creativity, however, has shown that exposure to examples can provoke fixation and reduce the overall creativity of the idea-generation process. Different sorts and levels of information – commonly available to designers – were delivered to 56 people, all of whom were asked to generate different design ideas. Results prove that having detailed information – be it of previous models or of competing products – significantly reduces the creativity of the design ideas. Soft information, on the other hand, does not present this effect. Successful tools in the future must deliver relevant information while avoiding this fixation effect.
Conference Paper
We present conditional random fields, a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
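For context, a linear-chain conditional random field defines the conditional probability of a label sequence $\mathbf{y}$ given an observation sequence $\mathbf{x}$ as follows (standard formulation):

```latex
p(\mathbf{y} \mid \mathbf{x}) =
\frac{1}{Z(\mathbf{x})}
\exp\!\left( \sum_{t} \sum_{k} \lambda_k\, f_k\!\left(y_{t-1}, y_t, \mathbf{x}, t\right) \right)
```

where the $f_k$ are feature functions with learned weights $\lambda_k$ and $Z(\mathbf{x})$ normalizes over all possible label sequences, which is what frees CRFs from the per-state normalization that biases MEMMs.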
Article
In recent years, academics and educators have begun to use software mapping tools for a number of education-related purposes. Typically, the tools are used to help impart critical and analytical skills to students, to enable students to see relationships between concepts, and also as a method of assessment. The common feature of all these tools is the use of diagrammatic relationships of various kinds in preference to written or verbal descriptions. Pictures and structured diagrams are thought to be more comprehensible than just words, and a clearer way to illustrate understanding of complex topics. Variants of these tools are available under different names: "concept mapping", "mind mapping" and "argument mapping". Sometimes these terms are used synonymously. However, as this paper will demonstrate, there are clear differences in each of these mapping tools. This paper offers an outline of the various types of tool available and their advantages and disadvantages. It argues that the choice of mapping tool largely depends on the purpose or aim for which the tool is used, and that the tools may well be converging to offer educators as yet unrealised and potentially complementary functions.
Article
Systematic methods for idea generation in engineering design have come about from a variety of sources. Do these methods really aid ideation? Some empirical studies have been conducted by researchers to answer this question. These studies include highly controlled lab experiments by cognitive psychologists, as well as experiments in simulated design environments carried out by engineering design theorists. A key factor in design and analysis of empirical studies is characterization and measurement of ideation effectiveness. This paper describes four objective measures of ideation effectiveness. The theoretical basis of each is discussed and procedures for application of each are outlined and illustrated with case studies.
Conference Paper
Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications.
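An illustrative MQL-style query: a JSON template whose empty values ask the service to fill them in. Freebase's public API has since been retired, so this is shown for historical flavor only, without an actual HTTP call; the type and property names follow Freebase's documented schema style:

```python
import json

query = [{
    "type": "/music/artist",
    "name": "The Police",
    "album": [],   # empty list: "return every album of this artist"
}]
print(json.dumps(query, indent=2))
```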
Article
In the analysis of social dominance in groups of animals, linearity has been used by many researchers as the main structural characteristic of a dominance hierarchy. In this paper we propose, alongside linearity, a quantitative measure for another property of a dominance hierarchy, namely its steepness. Steepness of a hierarchy is defined here as the absolute slope of the straight line fitted to the normalized David’s scores (calculated on the basis of a dyadic dominance index corrected for chance) plotted against the subjects’ ranks. This correction for chance is an improvement of an earlier proposal by de Vries (appendix 2 in de Vries, Animal Behaviour, 1998, 55, 827–843). In addition, we present a randomization procedure for determining the statistical significance of a hierarchy’s steepness, which can be used to test the observed steepness against the steepness expected under the null hypothesis of random win chances for all pairs of individuals. Whereas linearity depends on the number of established binary dominance relationships and the degree of transitivity in these relationships, steepness measures the degree to which individuals differ from each other in winning dominance encounters. Linearity and steepness are complementary measures to characterize a dominance hierarchy.
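In outline, and as commonly stated for this method (the chance-corrected dyadic index details are in the paper), each individual's David's score $DS_i$ is normalized as

```latex
\mathrm{NormDS}_i = \frac{DS_i + N(N-1)/2}{N}
```

for a group of $N$ individuals, and steepness is the absolute slope $|b|$ of the least-squares line $\mathrm{NormDS} = a + b \cdot \mathrm{rank}$ fitted to the normalized scores plotted against ranks $1, \dots, N$.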
Conference Paper
Explores the use of discrete-time recurrent neural networks for part-of-speech disambiguation of textual corpora. Our approach does not need a hand-tagged text for training the tagger, making it probably the first neural approach to do so. Preliminary results show that the performance of this approach is, at least, similar to that of a standard hidden Markov model trained using the Baum-Welch algorithm.
Article
When the probability of measuring a particular value of some quantity varies inversely as a power of that value, the quantity is said to follow a power law, also known variously as Zipf's law or the Pareto distribution. Power laws appear widely in physics, biology, earth and planetary sciences, economics and finance, computer science, demography and the social sciences. For instance, the distributions of the sizes of cities, earthquakes, solar flares, moon craters, wars and people's personal fortunes all appear to follow power laws. The origin of power-law behaviour has been a topic of debate in the scientific community for more than a century. Here we review some of the empirical evidence for the existence of power-law forms and the theories proposed to explain them.
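Concretely, a quantity $x$ follows a power law when its density takes the form (standard definition):

```latex
p(x) \propto x^{-\alpha}, \qquad x \ge x_{\min}
```

where the exponent $\alpha$ typically lies between 2 and 3 for empirical examples such as those mentioned above.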
TextRazor: Technology
  • T Crayston
Crayston, T., 2019, "TextRazor: Technology," [Online], https://www.textrazor.com/technology, Accessed February 19, 2019.
List of IPTC NewsCodes and Other Vocabularies
IPTC, 2019, "List of IPTC NewsCodes and Other Vocabularies," [Online], http://cv.iptc.org/newscodes/.
Exploring Concept Representations for Concept Drift Detection
  • Becher
Becher, O., Hollink, L., and Elliott, D., 2017, "Exploring Concept Representations for Concept Drift Detection," SEMANTICS Workshops, Amsterdam, Sept. 11-14.
A Corpus of Images and Text in Online News
  • L Hollink
  • A Bedjeti
  • M Van Harmelen
Hollink, L., Bedjeti, A., van Harmelen, M., and Elliott, D., 2016, "A Corpus of Images and Text in Online News," LREC, Portorož, Slovenia, May 23-28.