Figure 10
Contexts in source publication
Context 1
... also allows us to evaluate the copter's responsiveness to radio transmitter commands, the flight controller's interpretive capacity, and the copter's overall ascent stability. The motion is depicted in Figure 10 (left). ...
Context 2
... test significantly illuminates the copter's capacity to achieve and maintain stable flight, a vital characteristic of any functioning copter. The hovering motion is demonstrated in Figure 10 (mid). ...
Citations
... Process task guidance requires the development of methods and technology for AI assistants that can help technicians perform complex tasks [14]. Task guidance with AI assistance in manufacturing remains a challenging problem [36]. ...
In this work, we explore utilizing LLMs for data augmentation in a manufacturing task guidance system. The dataset consists of representative samples of interactions with technicians working in an advanced manufacturing setting. The purpose of this work is to explore the task, perform data augmentation for the supported tasks, and evaluate the performance of existing LLMs. We observe that the task is complex, requiring understanding of procedure specification documents and of actions and objects sequenced temporally. The dataset consists of 200,000+ question/answer pairs that refer to the spec document and are grounded in narrations and/or video demonstrations. We compared the performance of several popular open-source LLMs by developing a baseline with each LLM, then comparing the responses in a reference-free setting using LLM-as-a-judge, and comparing those ratings with crowd-worker ratings while validating them with experts.
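The reference-free LLM-as-a-judge comparison described above can be sketched as follows. This is an illustrative outline, not code from the paper: `call_llm` is a hypothetical stand-in for any chat-completion API, mocked here so the example runs offline, and the prompt wording is an assumption.

```python
# Sketch of reference-free LLM-as-a-judge scoring: each model's answer to
# the same question is rated by a judge LLM, then models are ranked.

JUDGE_PROMPT = """You are grading an assistant's answer to a technician's
question about a manufacturing procedure. Rate the answer from 1 to 5 for
how well it addresses the question. Reply with the number only.

Question: {question}
Answer: {answer}
Rating:"""

def call_llm(prompt: str) -> str:
    # Mock judge for this sketch; real code would call an LLM endpoint.
    return "4"

def judge(question: str, answer: str) -> int:
    reply = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    return int(reply.strip())

def rank_models(question: str, answers: dict[str, str]) -> list[tuple[str, int]]:
    """Score each model's answer and sort best-first (no reference answer needed)."""
    scores = {name: judge(question, ans) for name, ans in answers.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

In practice the judge's numeric ratings would then be correlated against the crowd-worker and expert ratings to check agreement.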
... There has recently been a large interest in applying AI for industrial use cases, especially in what is known as the 4 th Industrial Revolution (or Industry 4.0) [19]. More recently, LLMs have been applied throughout the product development lifecycle [20], including design [15], [21] as well as with conversational assistants [22]. ...
Instructions for Build, Assembly, and Test (IBAT) refers to the process used whenever any operation is conducted on hardware, including tests, assembly, and maintenance. Currently, the generation of IBAT documents is time-intensive, as users must manually reference and transfer information from engineering diagrams and parts lists into IBAT instructions. With advances in machine learning and computer vision, however, it is possible to have an artificial intelligence (AI) model partially fill the IBAT template, freeing up engineers' time for more highly skilled tasks. AiBAT is a novel system for assisting users in authoring IBATs. It works by first analyzing assembly drawing documents, extracting and parsing their information, and then filling in IBAT templates with the extracted information. Such assisted authoring has the potential to save time and reduce cost. This paper presents an overview of the AiBAT system, including promising preliminary results and a discussion of future work.
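The template-filling step described above can be sketched in miniature. The field names and template text here are hypothetical (the paper does not publish its schema); the point is only the pattern: extracted drawing data is substituted into an instruction template, and any field the extractor missed is flagged for the engineer rather than guessed.

```python
# Minimal sketch of assisted IBAT template filling: known extracted fields
# are substituted; missing fields are left as explicit placeholders.
from string import Template

IBAT_STEP = Template(
    "Step $step: Install part $part_no ($part_name) per drawing $drawing."
)

FIELDS = ("step", "part_no", "part_name", "drawing")

def fill_step(extracted: dict) -> str:
    # Default every field to a visible flag, then overlay what was extracted.
    defaults = {k: "[ENGINEER TO FILL]" for k in FIELDS}
    return IBAT_STEP.substitute({**defaults, **extracted})
```

A fully extracted record yields a complete instruction; a partial record keeps the engineer in the loop instead of fabricating values.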
... Previous work by Makatura et al. [7] offers an extensive study of GPT-4's capabilities for automating computational design and manufacturing pipelines. Their work shows that GPT-4 can reason about high-level properties and incorporate textually defined descriptions into the generation process but suffers from several limitations and a reliance on human feedback to correct its mistakes. ...
... The use of LLMs in manufacturing is nascent and largely unexplored as of mid-2023. The closest work to our approach is a recent preprint by Makatura et al. [7], which takes a deep dive into how a specific LLM (ChatGPT) can be used in design and manufacturing applications. Another recent work by Badini et al. [31] has looked into utilizing a specific LLM, ChatGPT, for AM process parameter optimization. ...
... • We provide feedback on solutions that are incomplete or omit key parts of the G-code. This is necessary to observe the support for iteration noted in [7]. For these prompts, we maintain as much uniformity as possible between tasks and models. ...
3D printing or additive manufacturing is a revolutionary technology that enables the creation of physical objects from digital models. However, the quality and accuracy of 3D printing depend on the correctness and efficiency of the G-code, a low-level numerical control programming language that instructs 3D printers how to move and extrude material. Debugging G-code is a challenging task that requires a syntactic and semantic understanding of the G-code format and the geometry of the part to be printed. In this paper, we present the first extensive evaluation of six state-of-the-art foundational large language models (LLMs) for comprehending and debugging G-code files for 3D printing. We design effective prompts to enable pre-trained LLMs to understand and manipulate G-code and test their performance on various aspects of G-code debugging and manipulation, including detection and correction of common errors and the ability to perform geometric transformations. We analyze their strengths and weaknesses for understanding complete G-code files. We also discuss the implications and limitations of using LLMs for G-code comprehension.
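To make concrete what "detection of common errors" in G-code means, here is a small rule-based linter for two classic mistakes: extruding before the hotend temperature is set, and moving before the printer is homed. This baseline is my own sketch for illustration, not code or prompts from the paper; an LLM-based debugger would be asked to flag the same classes of error from the raw G-code text.

```python
# Illustrative rule-based G-code check for two common 3D-printing errors:
#   1) movement (G0/G1) before homing (G28)
#   2) extrusion (an E parameter) before setting hotend temperature (M104/M109)

def lint_gcode(lines):
    issues = []
    homed = False
    temp_set = False
    for lineno, raw in enumerate(lines, start=1):
        code = raw.split(";", 1)[0].strip()  # strip G-code comments
        if not code:
            continue
        words = code.upper().split()
        cmd = words[0]
        if cmd == "G28":
            homed = True
        elif cmd in ("M104", "M109"):
            temp_set = True
        elif cmd in ("G0", "G1"):
            if not homed:
                issues.append((lineno, "movement before homing (no G28 seen)"))
            if any(w.startswith("E") for w in words[1:]) and not temp_set:
                issues.append((lineno, "extrusion before setting hotend temperature"))
    return issues
```

Deliberately injecting such errors into otherwise valid files gives a ground truth against which LLM detection and correction can be scored.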
Manufacturability is vital for product design and production, with accessibility being a key element, especially in subtractive manufacturing. Traditional methods for geometric accessibility analysis are time-consuming and struggle with scalability, while existing deep learning approaches to manufacturability analysis often neglect geometric challenges in accessibility and are limited to specific model types. In this paper, we introduce DeepMill, the first neural framework designed to accurately and efficiently predict inaccessible and occlusion regions under varying machining tool parameters, applicable to both CAD and freeform models. To address the challenges posed by cutter collisions and the lack of extensive training datasets, we construct a cutter-aware dual-head octree-based convolutional neural network (O-CNN) and generate an inaccessible- and occlusion-region analysis dataset with a variety of cutter sizes for network training. Experiments demonstrate that DeepMill achieves 94.7% accuracy in predicting inaccessible regions and 88.7% accuracy in identifying occlusion regions, with an average processing time of 0.04 seconds for complex geometries. These results indicate that DeepMill implicitly captures both local and global geometric features, as well as the complex interactions between cutters and intricate 3D models.
Large Language Models (LLMs) have emerged as pivotal technology in the evolving world. Their significance in design lies in their transformative potential to support engineers and collaborate with design teams throughout the design process. However, it is not known whether LLMs can emulate the cognitive and social attributes which are known to be important during design, such as cognitive style. This research evaluates the efficacy of LLMs to emulate aspects of Kirton's Adaption-Innovation theory, which characterizes individual preferences in problem-solving. Specifically, we use LLMs to generate solutions for three design problems using two different cognitive style prompts (adaptively-framed and innovatively-framed). Solutions are evaluated with respect to feasibility and paradigm relatedness, which are known to have discriminative value in other studies of cognitive style. We found that solutions generated using the adaptive prompt tend to display higher feasibility and are paradigm preserving, while solutions generated using the innovative prompts were more paradigm modifying. This aligns with prior work and expectations for design behavior based on Kirton's Adaption-Innovation theory. Ultimately, these results demonstrate that LLMs can be prompted to accurately emulate cognitive style.
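The study's core manipulation, framing the same design problem with an adaptive versus an innovative cognitive-style prompt, can be sketched as below. The framing wording here is illustrative, not quoted from the paper.

```python
# Sketch of cognitive-style prompt framing per Kirton's Adaption-Innovation
# theory: the same design problem is posed under two contrasting framings.

STYLE_FRAMES = {
    "adaptive": (
        "Propose a solution that refines and improves the current approach, "
        "staying within established practice."
    ),
    "innovative": (
        "Propose a solution that breaks from the current approach and "
        "reframes the problem in a new way."
    ),
}

def build_prompt(problem: str, style: str) -> str:
    if style not in STYLE_FRAMES:
        raise ValueError(f"unknown cognitive style: {style}")
    return f"{STYLE_FRAMES[style]}\n\nDesign problem: {problem}\nSolution:"
```

Each LLM solution generated under the two framings would then be rated for feasibility and paradigm relatedness, the two dimensions the study uses to distinguish the styles.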
This paper delves into the cutting-edge applications of Machine Learning (ML) within modern Additive Manufacturing (AM), employing bibliometric analysis as its methodology. Formulated around three pivotal research questions, the study navigates the current landscape of the research field. Utilizing data sourced from Web of Science, the paper conducts a comprehensive statistical and visual analysis to unveil underlying patterns within the existing literature. Each category of ML techniques is elucidated alongside its specific applications, providing researchers with a holistic overview of the research terrain and serving as a practical checklist for those seeking to address particular challenges. Culminating in a vision for the Smart Additive Manufacturing Factory (SAMF), the paper envisions seamless integration of the reviewed ML techniques. Furthermore, it offers critical insights from a practical standpoint, thereby helping to shape future research directions in the field.