Proceedings Paper Formatting Instructions 1 Rev. 10/2015
Trust in AI and Implications for the AEC Research: A Literature Analysis
Newsha Emaminejad1, Alexa Maria North1, and Reza Akhavian, Ph.D., M.ASCE1
1Department of Civil, Construction, and Environmental Engineering, San Diego State University,
5500 Campanile Dr., San Diego, CA 92182; e-mails: {nemaminejad859; anorth3467;
rakhavian@sdsu.edu}
ABSTRACT
Engendering trust in technically acceptable and psychologically embraceable systems requires
domain-specific research to capture the unique characteristics of the field of application. The
architecture, engineering, and construction (AEC) research community has been recently
harnessing advanced solutions offered by artificial intelligence (AI) to improve project workflows.
Despite the unique characteristics of work, workers, and workplaces in the AEC industry, the
concept of trust in AI has received very little attention in the literature. This paper presents a
comprehensive analysis of the academic literature in two main areas, trust in AI and AI in
the AEC, to explore the interplay between AEC projects’ unique aspects and the sociotechnical
concepts that lead to trust in AI. A total of 490 peer-reviewed scholarly articles are analyzed in
this study. The main constituents of human trust in AI are identified from the literature and are
characterized within the AEC project types, processes, and technologies.
INTRODUCTION
Artificial Intelligence (AI) has recently gained tremendous traction in the architecture,
engineering, and construction (AEC) industry. AI applications in architectural design (Darko et al.
2020), site logistic planning (Braun and Borrmann 2019), safety management (Baker et al. 2020),
progress monitoring and productivity improvement (Sacks et al. 2020), and building operations
and maintenance (López et al. 2013) have been studied extensively by researchers and (to a lesser
extent) implemented by practitioners. There is a consensus among researchers that the technology
adoption rate in the AEC industry is stagnant (Czarnowski et al. 2018). This is despite the fact that
the industry faces grand challenges such as poor safety and productivity records that can be
addressed using AI-enabled solutions similar to other industries (Baker et al. 2020; Delgado et al.
2019). In information systems and implementation science, the absence of trust is a known
hindrance to the adoption of new technologies (Danks 2019). For a technology such as AI, with its opaque
back-end processes, and in the context of the AEC industry, which traditionally lags behind advanced
technology, this problem is more severe (Pan and Zhang 2021). AI algorithms are generally not
easy to explain in layman’s terms, and the processes between input and output are not
sufficiently transparent to an ordinary end-user (Arrieta et al. 2020). In such a situation, and in the
absence of proven performance, a construction project team whose work must be delivered under
predetermined time and budget limitations tends to refuse experimentation and continues to use
traditionally trusted methods. This lack of confidence, or “trust,” to introduce new
workflows can be traced back to both technical and psychological factors. Trustworthy AI, as a
relatively new research paradigm, seeks to investigate mechanisms that enable building trust
between AI-enabled systems and end-users, thus enhancing adoption levels (Siau and Wang 2018).
This paper presents the results of a thorough investigation of the literature to develop a foundation
for exploring factors that enhance trust in, and thus the adoption of, AI in the AEC industry.
METHODOLOGY OF THE LITERATURE ANALYSIS
In this study, first a keyword search was conducted to find articles published from 1985 to 2021
on Google Scholar and Scopus search engines. The search keywords included a combination of
“Trust in AI”, “Trustworthy AI”, “Trust in robots”, “Ethical AI”, “Transparent and explainable
AI”, “Reliable and safe AI”, “Artificial Intelligence applications in construction management”,
“AI applications in construction management”, “Robotics in construction management”, and
“Construction Automation”. Next, manual filters were applied to ensure that the papers to be
analyzed were peer-reviewed, written in English (to enable further screening if
needed), and discussed directly related topics. Through this filtered search, a total of 490 articles
were identified. Out of this total, 210 articles focus only on the applications of AI in
the AEC industry, while the remaining 280 discuss acceptance and trust in AI with no tie to AEC
concepts. Paper keywords were analyzed using the VOSviewer software to establish co-occurrence
maps in which the studies identified by their main focus are shown by circles linked together using
lines with varying widths and lengths (Van Eck and Waltman 2013).
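The co-occurrence mapping step can be illustrated with a minimal sketch (the records and keyword lists below are hypothetical; VOSviewer performs the actual map construction and layout): each pair of keywords appearing in the same paper contributes one co-occurrence, and a link's weight is the number of papers in which that pair co-appears.

```python
from itertools import combinations
from collections import Counter

# Hypothetical keyword lists for three bibliographic records
papers = [
    ["trust in AI", "explainable AI", "ethics"],
    ["construction automation", "robotics", "trust in AI"],
    ["robotics", "trust in AI", "ethics"],
]

# Node size ~ how many papers mention a keyword;
# link weight ~ how many papers mention a pair of keywords together
occurrence = Counter(kw for paper in papers for kw in paper)
co_occurrence = Counter(
    tuple(sorted(pair)) for paper in papers for pair in combinations(set(paper), 2)
)

print(occurrence["trust in AI"])                   # 3
print(co_occurrence[("robotics", "trust in AI")])  # 2
```

Feeding such occurrence and co-occurrence counts into a layout algorithm is, in essence, what produces the keyword networks shown later in this paper.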
TRUSTWORTHY AI
Trustworthy AI is an interdisciplinary field of research, involving disciplines such as computer
science, human-computer interaction, human factors, robotics, engineering, management
information systems, and psychology. In the past, the importance of the relationship between the
end-user and AI-based processes was often overshadowed by pure technical advancements in the
field (Hatami et al. 2019). More recently, however, experts have identified trust as an important
element that determines adoption levels. Therefore, trustworthy AI research has evolved as a
human-centered interdisciplinary field in which the needs, perceptions, and behaviors of human
users are taken into account in the system’s design (Canal et al. 2020).
Trust can be defined as a set of specific beliefs dealing with competence, integrity,
predictability, and the willingness of someone to depend on another in a risky situation (Gefen et
al. 2003). The concept of trust in AI and its unique aspects that are different from trust in other
technologies have been extensively studied in the literature (Gillath et al. 2021; Li et al. 2008;
Toreini et al. 2020). Theoretical studies indicate that factors that can affect the level of trust in
technology systems can be categorized as those related to the user (e.g., expertise, attitudes towards
robots), the hardware or machine (e.g., reliability, anthropomorphism), and the environment (e.g.,
characteristics of the team and task) (Lewis et al. 2018).
To create an overview of the parameters that influence trust in AI, the bibliometric data of
the reviewed publications were fed into VOSviewer to generate maps of keywords and thematic
co-occurrences. Figure 1 shows a network of the keywords where the size of the nodes is
proportional to the frequency of the keyword occurrence, and the distance between two nodes
is inversely proportional to the strength of the relation between the keywords in the literature
analyzed. Additionally, the nodes and links are color-coded to reflect publication age. Articles
published before 2010 (limited in number relative to those published after 2010) were
excluded to allow a more distinguishing color-coding. Major parameters identified in Figure 1 are
explained below. In most cases, more than one term is used to describe a parameter to contextualize
the concept, and/or to bundle closely related parameters that frequently co-appear in the literature.
It is worth mentioning that these parameters are neither mutually exclusive nor collectively exhaustive:
in some cases they overlap semantically, and not all of them are required to build trust.
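The inverse relation between node distance and link strength follows from the similarity measure underlying VOSviewer's layout. As a sketch (with illustrative numbers, not values from this study), the association strength normalizes a pair's co-occurrence count by the product of the two keywords' total occurrence counts:

```python
def association_strength(c_ij, w_i, w_j):
    """Similarity between keywords i and j: their co-occurrence count c_ij
    normalized by the product of their total occurrence counts w_i and w_j.
    Higher similarity -> nodes are drawn closer together in the map."""
    return c_ij / (w_i * w_j)

# Illustrative counts: the pair co-occurs 30 times; the two keywords
# occur 60 and 100 times overall
print(association_strength(30, 60, 100))  # 0.005
```

This normalization keeps very frequent keywords from dominating the layout purely because they appear often.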
Transparency and Explainability (T&E). In AI applications, transparency is often related to the
concept of interpretability where the operations of a system can be understood by a human through
introspection or explanation (Arrieta et al. 2020). Transparent and explainable AI systems are
designed and implemented to be able to translate their operations into intelligible outputs that
include information about how, when, and where they are used (Gillath et al. 2021; Toreini et al.
2020). The classic technology acceptance model (TAM), an information systems theory and one of the
oldest keywords in Figure 1, recognizes perception as a key element of technology adoption.
Privacy and Security (P&S). Humans’ trust in technology is highly influenced by the levels of
privacy involved in technology implementation (Li and Zhang 2017). Especially when sensitive
(e.g., human health) data are in use, individuals’ privacy and the risks of using the data both for
the development of and the decisions made by the AI systems should be managed properly to
enable trustworthiness (Yampolskiy 2018). To be trusted, AI systems must also be secure against
being compromised by unauthorized agents (Siau and Wang 2018).
Safety and Reliability (S&R). AI systems that may pose risks of physical injury to users cannot
be trusted (Tixier et al. 2016). This, of course, is a more important issue in embodied intelligence
(e.g., intelligent robots), but even software and distributed computer networks will be distrusted
should they manifest signs of safety and health threats (Baker et al. 2020). Reliability for trust is
associated with the capacity of the models to avoid malfunctions; the vulnerabilities of AI models
have to be identified, and technical or behavioral solutions have to be implemented to ensure that
autonomous systems will not be manipulated by an adversary (Ryan 2020).
Figure 1. Keyword co-occurrences network.

Ethics and Fairness (E&F). Bias in training AI models, the ethical implications of developing
biased intelligent systems, and the consequences of trusting and adopting them have been
discussed extensively in the literature (Chakraborty et al. 2020; Siau and Wang 2018). Research
has strongly advocated for diversity and inclusion to maximize fairness, minimize discrimination,
and strengthen the basis of trust (Bartneck et al. 2021; Ryan and Stahl 2020; Toreini et al. 2020).
Human-Centered Technology (HC). A great deal of distrust in AI stems from a hypothesis that
AI-powered technologies, such as intelligent robots, are developed to eliminate large segments of
the workforce (Jarrahi 2018; Manzo et al. 2018). Many researchers conclude that to initiate trust
between humans and AI, human-centered systems must be the cornerstone of research and
development, where AI systems are designed to serve humankind, upskill workers, and promote
human values (Dignum 2017; Lewis et al. 2018; Shneiderman 2020).
Benevolence and Affect (BA). It has been reported in behavioral science studies that the level of
trust between humans and AI can be enhanced if they can bond socially and become “friends”.
Most of the studies focused on this concept fall in the broader category of human-AI or human-
robot interaction (Pitardi and Marriott 2021; Toreini et al. 2020; Wang et al. 2016).
TOWARD TRUSTWORTHY AI IN THE AEC RESEARCH AND PRACTICE
The remarkable growth of AI applications in the AEC domain has led to a number of review
studies that highlight the status quo and the future potential of the field. Most of these published
studies focus on the value of implementing AI in a specific subfield, such as structural engineering
(Salehi and Burgueño 2018), building information modeling (BIM) (Jianfeng et al. 2020),
automated construction manufacturing (Hatami et al. 2019), and computer vision (Zhang et al.
2020). There are a limited number of studies that review general applications of AI in the AEC.
Recent examples include Darko et al. (2020) and Pan and Zhang (2021) who reviewed and
analyzed the use of AI in construction using a scientometric approach and identified the most
commonly addressed topics, as well as future opportunities. However, to the best of the authors’
knowledge, the topic of trust in AI in the context of AEC applications has never been studied before,
either as a literature analysis or as an independent research study. To present a comprehensive
analysis of the applications of AI in the AEC literature through the lens of trust, this study identifies
different categories within which the use of AI, and how it can be trusted, are substantially
different. For example, engendering trust to leverage AI during the design phase calls for
addressing “fairness” much more than “safety,” while during the construction phase, “safety and
reliability,” and during the operations phase, “privacy and security,” can have a more influential
effect in building trust. Similarly, the project sector (e.g., building versus infrastructure), the
objective within an AEC project (e.g., enhancing productivity versus safety), and the technology
that is powered by AI (e.g., BIM versus robotics) are important factors that can determine the
approach towards establishing trust. Table 1 shows this categorization proposed based on the
frequency of these categories in the keywords of the AI in AEC literature analyzed.
Table 1. Proposed AEC projects categories and subcategories for trustworthy AI studies.
Category             Subcategories
Project Phase        Pre-Construction, Construction, Post-Construction
Construction Type    Building Construction, Horizontal Construction
Application          Safety, Productivity, Sustainability, Scheduling
Technology           BIM, Robotics, Mobile Computing, Blockchain
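The categorization above can be operationalized as a simple keyword-matching pass over each paper's keywords (the category keyword sets below are hypothetical; the paper's categories were proposed from keyword frequencies in the analyzed literature, and a paper may fall into several subcategories):

```python
# Hypothetical keyword sets for the "Application" subcategories of Table 1
CATEGORY_KEYWORDS = {
    "Safety": {"safety", "hazard", "injury"},
    "Productivity": {"productivity", "progress monitoring"},
    "Sustainability": {"sustainability", "energy"},
    "Scheduling": {"scheduling", "planning"},
}

def classify(paper_keywords):
    """Return every subcategory whose keyword set intersects the paper's
    keywords; multi-label by design, since papers span subcategories."""
    kws = {k.lower() for k in paper_keywords}
    return sorted(cat for cat, terms in CATEGORY_KEYWORDS.items() if kws & terms)

print(classify(["Safety", "Scheduling", "BIM"]))  # ['Safety', 'Scheduling']
```

Because classification is multi-label, subcategory counts derived this way sum to more than the number of papers, which is the behavior noted for Table 2 later in this paper.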
RESULTS AND DISCUSSION
To understand the growth of research on the topic of trustworthy AI (and the lack thereof within
the AEC industry), the results of the conducted literature analysis are tabulated and visualized in
this section. A temporal bibliographic analysis is provided in Figure 2 (a), where a histogram of
publication dates of articles reviewed in this study is presented in two categories: those discussing
trust in AI, and those focused on developing or adopting AI models for AEC applications. With
441 reviewed papers for both categories from 2005 to 2021 (out of the total 490 from 1985), rapid
growth can be seen in this left-skewed histogram, as more than 92% of these papers have been
published between 2016 and 2021. Furthermore, as the interest in leveraging AI in the AEC
industry grows, the number of studies targeting trust in AI (in fields other than AEC) is also on the
rise. This can indicate a major demand for this research in the future within the AEC community.
Another informative visualization is a comprehensive view of the VOSviewer network,
presented in Figure 2 (b). The top cluster indicating studies pertinent to AEC with major
connections to topics such as AI, machine learning, deep learning, and robotics is distant and
virtually disconnected from the trust keywords clustered below it. In addition, a comparison
between Figure 1 and Figure 2 (b) reveals that the relatively older topic of TAM (the blue circle
on top) is now appearing closer to the AEC topics, with direct links to topics such as BIM and
augmented reality in the top cluster and AI topics. This indicates that TAM has received substantial
attention in both the AEC and the trust in AI literature for technology adoption, and can play a
major role in theoretical as well as empirical studies related to AI in the AEC.
Finally, a thematic investigation of the literature analyzed in this paper in terms of the
trustworthy AI parameters identified in Figure 1, as well as the proposed categories of the AEC
concepts in Table 1, can help better identify the existing and future potentials of this topic. The
subcategories indicated in Table 2 are those identified explicitly in the papers analyzed. The
applicable trust parameters, however, are determined through a thematic analysis performed by
the research team members, based on the content of the papers classified in each category. It is
worth mentioning that the sum of the number of papers in Table 2 is greater than the total number
of papers reviewed, since some papers discuss more than one subcategory.

Figure 2. (a) A histogram of the publication years of the analyzed papers; (b) a comprehensive
network of the keywords with two distant clusters.
Table 2. Thematic analysis of the reviewed papers with identified subcategories and
applicable trust parameters.
Subcategories (rows), with the number of trust parameters marked as applicable: Pre-Construction (2),
Construction (2), Post-Construction (3), Building Construction (3), Infrastructure Projects (4),
Safety (5), Productivity (4), Sustainability (4), Scheduling (4), BIM (3), Robotics (6),
Mobile Computing (5), Blockchain (4).
Trust parameters (columns): T&E, P&S, S&R, E&F, HC, BA.
[The column positions of the individual marks did not survive text extraction.]
CONCLUSION
A detailed cross-referencing of the papers analyzed in this research through temporal study,
keyword co-occurrence network creation, and thematic analysis indicates a substantial potential
and need for exploring trust concepts in adopting AI in AEC applications. The interdisciplinary
nature of the topic allows for observing the interplay of the subcategories and parameters identified
in this paper through the lens of different disciplinary fields. For example, from the AEC perspective
and within the project phase category, “Construction” appears to have the potential to engage all
the trust parameters identified. A similar argument is valid for “Robotics” within the technology
category. From a trust in AI perspective, transparency and explainability are identified as major
trust dimensions across all AEC subcategories. Safety and reliability, and to a lesser extent privacy
and security, as well as ethics and fairness, are trust concepts that can be widely applied to the use
of AI in AEC research and practice.
The presented study describes preliminary findings of a larger research endeavor to study
trust development and calibration for AI in AEC applications. The study has a few limitations that
can be addressed in future work. The thematic analysis presented in Table 2 is a subjective
assessment of the research team. Further analysis can incorporate survey or interview results with
experts in the field. Additionally, search keywords can be expanded to incorporate subtopics of AI
that may have been listed instead of AI keywords in the original publications.
REFERENCES
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S.,
Gil-López, S., Molina, D., and Benjamins, R. (2020). "Explainable Artificial Intelligence
(XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI."
Information Fusion, 58, 82-115.
Baker, H., Hallowell, M. R., and Tixier, A. J.-P. (2020). "AI-based prediction of independent
construction safety outcomes from universal attributes." AutCon, 118, 103146.
Bartneck, C., Lütge, C., Wagner, A., and Welsh, S. (2021). An introduction to ethics in robotics
and AI, Springer Nature.
Braun, A., and Borrmann, A. (2019). "Combining inverse photogrammetry and BIM for automated
labeling of construction site images for machine learning." AutCon, 106, 102879.
Canal, G., Borgo, R., Coles, A., Drake, A., Huynh, D., Keller, P., Krivić, S., Luff, P., Mahesar,
Q.-a., and Moreau, L. (2020). "Building Trust in Human-Machine Partnerships."
Chakraborty, J., Peng, K., and Menzies, T. "Making fair ML software using trustworthy
explanation." Proc., 2020 35th IEEE/ACM International Conference on Automated
Software Engineering (ASE), IEEE, 1229-1233.
Czarnowski, J., Dąbrowski, A., Maciaś, M., Główka, J., and Wrona, J. (2018). "Technology gaps
in human-machine interfaces for autonomous construction robots." AutCon, 94, 179-190.
Danks, D. "The value of trustworthy AI." Proc., Proceedings of the 2019 AAAI/ACM Conference
on AI, Ethics, and Society, 521-522.
Darko, A., Chan, A. P., Adabre, M. A., Edwards, D. J., Hosseini, M. R., and Ameyaw, E. E. (2020).
"Artificial intelligence in the AEC industry: Scientometric analysis and visualization of
research activities." AutCon, 112, 103081.
Delgado, J. M. D., Oyedele, L., Ajayi, A., Akanbi, L., Akinade, O., Bilal, M., and Owolabi, H.
(2019). "Robotics and automated systems in construction: Understanding industry-specific
challenges for adoption." Journal of Building Engineering, 26, 100868.
Dignum, V. (2017). "Responsible artificial intelligence: designing AI for human values."
Gefen, D., Karahanna, E., and Straub, D. W. (2003). "Trust and TAM in online shopping: An
integrated model." MIS quarterly, 51-90.
Gillath, O., Ai, T., Branicky, M. S., Keshmiri, S., Davison, R. B., and Spaulding, R. (2021).
"Attachment and trust in artificial intelligence." Computers in Human Behavior, 115,
106607.
Hatami, M., Flood, I., Franz, B., and Zhang, X. (2019). "State-of-the-Art Review on the
Applicability of AI Methods to Automated Construction Manufacturing." Computing in
Civil Engineering 2019: Data, Sensing, and Analytics, 368-375.
Jarrahi, M. H. (2018). "Artificial intelligence and the future of work: Human-AI symbiosis in
organizational decision making." Business Horizons, 61(4), 577-586.
Jianfeng, Z., Yechao, J., and Fang, L. "Construction of Intelligent Building Design System Based
on BIM and AI." Proc., 2020 5th International Conference on Smart Grid and Electrical
Automation (ICSGEA), IEEE, 277-280.
Lewis, M., Sycara, K., and Walker, P. (2018). "The role of trust in human-robot interaction."
Foundations of trusted autonomy, Springer, Cham, 135-159.
Li, X., Hess, T. J., and Valacich, J. S. (2008). "Why do we trust new technology? A study of initial
trust formation with organizational information systems." The Journal of Strategic
Information Systems, 17(1), 39-71.
Li, X., and Zhang, T. "An exploration on artificial intelligence application: From security, privacy
and ethic perspective." Proc., 2017 IEEE 2nd International Conference on Cloud
Computing and Big Data Analysis (ICCCBDA), IEEE, 416-420.
López, J., Pérez, D., Paz, E., and Santana, A. (2013). "WatchBot: A building maintenance and
surveillance system based on autonomous robots." Robotics and Autonomous Systems,
61(12), 1559-1571.
Manzo, J., Manzo, F., and Bruno, R. (2018). "The potential economic consequences of a highly
automated construction industry." What If Construction Becomes the Next Manufacturing.
Pan, Y., and Zhang, L. (2021). "Roles of artificial intelligence in construction engineering and
management: A critical review and future trends." AutCon, 122, 103517.
Pitardi, V., and Marriott, H. R. (2021). "Alexa, she's not human but… Unveiling the drivers of
consumers' trust in voice‐based artificial intelligence." Psychology & Marketing, 38(4),
626-642.
Ryan, M. (2020). "In AI We Trust: Ethics, Artificial Intelligence, and Reliability." Science and
Engineering Ethics, 26(5), 2749-2767.
Ryan, M., and Stahl, B. C. (2020). "Artificial intelligence ethics guidelines for developers and
users: clarifying their content and normative implications." Journal of Information,
Communication and Ethics in Society.
Sacks, R., Girolami, M., and Brilakis, I. (2020). "Building information modelling, artificial
intelligence and construction tech." Developments in the Built Environment, 4, 100011.
Salehi, H., and Burgueño, R. (2018). "Emerging artificial intelligence methods in structural
engineering." Engineering structures, 171, 170-189.
Shneiderman, B. (2020). "Human-centered artificial intelligence: Reliable, safe & trustworthy."
International Journal of Human-Computer Interaction, 36(6), 495-504.
Siau, K., and Wang, W. (2018). "Building trust in artificial intelligence, machine learning, and
robotics." Cutter Business Technology Journal, 31(2), 47-53.
Tixier, A. J.-P., Hallowell, M. R., Rajagopalan, B., and Bowman, D. (2016). "Application of
machine learning to construction injury prediction." AutCon, 69, 102-114.
Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C. G., and van Moorsel, A. "The
relationship between trust in AI and trustworthy machine learning technologies." Proc.,
Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 272-
283.
Toreini, E., Aitken, M., Coopamootoo, K. P., Elliott, K., Zelaya, V. G., Missier, P., Ng, M., and
van Moorsel, A. (2020). "Technologies for Trustworthy Machine Learning: A Survey in a
Socio-Technical Context." arXiv preprint arXiv:2007.08911.
Van Eck, N. J., and Waltman, L. (2013). "VOSviewer manual." Leiden: Universiteit Leiden, 1(1),
1-53.
Wang, N., Pynadath, D. V., and Hill, S. G. "The impact of pomdp-generated explanations on trust
and performance in human-robot teams." Proc., Proceedings of the 2016 international
conference on autonomous agents & multiagent systems, 997-1005.
Yampolskiy, R. V. (2018). Artificial intelligence safety and security, CRC Press.
Zhang, Y., Liu, H., Kang, S.-C., and Al-Hussein, M. (2020). "Virtual reality applications for the
built environment: Research trends and opportunities." AutCon, 118, 103311.
... Research highlights the importance of UX and trust in specialized LLM applications like healthcare consultancy and shopping assistance. These domains have unique requirements; healthcare prioritizes accuracy and privacy, while shopping emphasizes smooth and fast interactions [40][41][42]. ...
... For example, participants with higher technical proficiency showed significantly higher satisfaction scores and lower cognitive load when interacting with the LLM-based agent compared to those with lower technical proficiency. Similarly, younger participants (aged [22][23][24][25][26][27][28][29][30] reported higher satisfaction and lower cognitive load than older participants (aged [31][32][33][34][35][36][37][38][39][40][41]. ...
Article
Full-text available
This study explores the enhancement of user experience (UX) and trust in advanced Large Language Model (LLM)-based conversational agents such as ChatGPT. The research involves a controlled experiment comparing participants using an LLM interface with those using a traditional messaging app with a human consultant. The results indicate that LLM-based agents offer higher satisfaction and lower cognitive load, demonstrating the potential for LLMs to revolutionize various applications from customer service to healthcare consultancy and shopping assistance. Despite these positive findings, the study also highlights significant concerns regarding transparency and data security. Participants expressed a need for clearer understanding of how LLMs process information and make decisions. The perceived opacity of these processes can hinder user trust, especially in sensitive applications such as healthcare. Additionally, robust data protection measures are crucial to ensure user privacy and foster trust in these systems. To address these issues, future research and development should focus on enhancing the transparency of LLM operations and strengthening data security protocols. Providing users with clear explanations of how their data is used and how decisions are made can build greater trust. Moreover, specialized applications may require tailored solutions to meet specific user expectations and regulatory requirements. In conclusion, while LLM-based conversational agents have demonstrated substantial advantages in improving user experience, addressing transparency and security concerns is essential for their broader acceptance and effective deployment. By focusing on these areas, developers can create more trustworthy and user-friendly AI systems, paving the way for their integration into diverse fields and everyday use.
... Blockchain technology can promote peer-to-peer transaction management that offers a secure and transparent way to handle digital interactions. In simple words, it can add value to AI by explaining AI decisions, reducing risks, increasing efficiency, and improving data accessibility and decentralization [29]. ...
... To date, the literature on trust in AI systems remains sparse in the built environment and its various subfields. For example, an analysis of 490 articles published in 1985-2021 revealed that trust in AI systems in the context of architecture, engineering, and construction (AEC) applications was not studied before (Emaminejad et al., 2021). In another study, involving the review of 102 articles, it was reported that the literature on trust in AI was fragmented and primarily focused on examining trust formation in experimental settings (Lockey et al., 2021). ...
Conference Paper
While artificial intelligence (AI) has transformed the planning, design, construction, and operation of physical infrastructure and spaces, it has also raised concerns about algorithmic bias, data privacy, and ethical use in built environment decision-making. Addressing these issues is crucial for designing, developing, and deploying trustworthy AI systems that promote human safety, infrastructure security, and resource allocation. This paper reviews trust issues in AI through the lens of several built environment decision scenarios, e.g., weather prediction, disaster mitigation and response, urban sensing, and bridge health monitoring. It then outlines a framework to formalize trust, aiding researchers, policymakers, and practitioners in designing AI systems that serve societal interests.
... It is well accepted that AI is a key technique for building powerful digital twins systems. As AI increasingly enters the construction industry, building a trustworthy AI has become a challenge [100]. Firstly, AI systems are often seen as 'black boxes', with decisionmaking processes lacking transparency. ...
Article
Full-text available
Carbon emissions present a pressing challenge to the traditional construction industry, urging a fundamental shift towards more sustainable practices and materials. Recent advances in sensors, data fusion techniques, and artificial intelligence have enabled integrated digital technologies (e.g., digital twins) as a promising trend to achieve emission reduction and net-zero. While digital twins in the construction sector have shown rapid growth in recent years, most applications focus on the improvement of productivity, safety and management. There is a lack of critical review and discussion of state-of-the-art digital twins to improve sustainability in this sector, particularly in reducing carbon emissions. This paper reviews the existing research where digital twins have been directly used to enhance sustainability throughout the entire life cycle of a building (including design, construction, operation and maintenance, renovation, and demolition). Additionally, we introduce a conceptual framework for this industry, which involves the elements of the entire digital twin implementation process, and discuss the challenges faced during deployment, along with potential research opportunities. A proof-of-concept example is also presented to demonstrate the validity of the proposed conceptual framework and potential of digital twins for enhanced sustainability. This study aims to inspire more forward-thinking research and innovation to fully exploit digital twin technologies and transform the traditional construction industry into a more sustainable sector.
Article
Full-text available
The usage of Internet of things (IoT) in higher education is still emerging especially in developing countries. The purpose of this study is to examine the information and the system and service quality on the Usage of IoT (UIoT) among students and academic staff and non-academic staff. The study, based on Information System Success model (ISS), proposes that Information Quality (IQ), System Quality (SYSQ), and Service Quality (SQ) have a positive impact on UIoT. The research further proposes that IoT awareness acts as a moderator. The data were collected with a use of a questionnaire. Stratified random sampling was used and the data collected from a sample of 423 participants completed a process of validation and pilot testing. The data analysis was conducted using Smart PLS 4. The findings of the study indicate that SQ, IQ, and SYSQ do have positive effects on UIoT. IoT awareness moderated the effect of IQ only on UIoT. To increase the UIoT, it is advised to focus on enhancing the awareness about the IoT and provide reliable information.
Article
Full-text available
The objective of this study is to develop a predictive maintenance algorithm for the ABB IRB 4600, a 6-axis robotic arm, using digital simulations. A variety of tests were conducted using SolidWorks, including calculations pertaining to stress, strain, fatigue, and heat. The simulations included an analysis of the materials used in the construction of the robotic arm, which are gray cast iron and aluminum alloy. The robotic arm was tested in three positions—picking, raised, and placing—with loads of 100 kg, 200 kg, and 300 kg, respectively. The findings indicated that elevated stress, strain, and displacement levels diminish the robot's operational lifespan and accelerate its deterioration over time. The placing position was found to experience the greatest stress, displacement, and strain. The fatigue test also demonstrated that after 10 million cycles, the arm had accumulated damage. The gradient boosting regression algorithm was selected as the Machine Learning (ML) algorithm for the study following a comparison of the performance of various ML regression models. This finding underscores the significance of predictive maintenance in preventing breakdowns and extending the robot's lifespan.
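The model-selection step described in this abstract (choosing gradient boosting regression after comparing regression models) can be sketched as a toy Python comparison on simulated load/cycle data. The features, the "wear" target, and the data-generating process below are illustrative assumptions, not the study's SolidWorks simulation outputs:

```python
# Toy model comparison for predictive maintenance: gradient boosting vs. a
# linear baseline on synthetic robot-arm data. All data are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
load_kg = rng.uniform(100, 300, n)       # payload, spanning 100-300 kg
cycles = rng.uniform(0, 1e7, n)          # accumulated duty cycles
position = rng.integers(0, 3, n)         # 0=picking, 1=raised, 2=placing
# Hypothetical wear index: grows with load and cycles, worst when placing
wear = (0.002 * load_kg + 0.05 * (cycles / 1e7)
        + 0.3 * (position == 2) + rng.normal(0, 0.02, n))
X = np.column_stack([load_kg, cycles, position])

models = {
    "linear": LinearRegression(),
    "gboost": GradientBoostingRegressor(random_state=0),
}
scores = {name: cross_val_score(m, X, wear, cv=5, scoring="r2").mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)
print(best, {k: round(v, 3) for k, v in scores.items()})
```

Cross-validated R² gives a like-for-like basis for picking one regressor over another before deploying it for maintenance prediction.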
Conference Paper
Full-text available
Advancements in Information and Communication Technology (ICT) are reshaping the Architecture, Engineering, and Construction (AEC) industry, challenging traditional business practices. Mobile devices and apps, cloud computing, Building Information Modeling (BIM), and additive manufacturing are some of the disruptive innovations that are compelling a reevaluation of strategies for enhanced industry efficiency. Among emerging innovations, advances in artificial intelligence (AI) are among the most debated technological developments of today. Large language models (LLMs) have the potential to profoundly impact the AEC industry, akin to other transformative technologies. This paper explores the intersection of AI and AEC research, leveraging a meta-classification framework for literature analysis through an LLM-based AI tool, in an attempt to give an overview to researchers who aim to use these tools for research purposes. The study introduces preparatory steps and compares analysis results with prior research, demonstrating the promising outcomes of AI integration in research processes. Initial findings suggest the potential for faster, more focused, and more efficient research outcomes, contingent on effective AI training methodologies. However, there are limitations that researchers should be aware of while using the assessed AI tools for construction management research.
Chapter
The construction industry plays a prominent role in global economic and social development, with a 15% share of the world's GDP. Yet, it suffers from inefficient decisions, low productivity, time-consuming activities, resistance to change, and high rates of accidents, wasting significant monetary and natural resources. With the recent advancements of Industry 4.0 technologies, Artificial Intelligence, and Digital Twins, the construction sector is experiencing a drastic shift toward automation, optimization, and digitalization, which could be the solution for the issues mentioned above. However, the potential harms, biases, and discrimination embedded in and caused by such technologies in ethical and social contexts are overlooked. Moving toward the Industry 5.0 revolution, which emphasizes human-centric technology, it becomes critical to develop ethical standards and regulations, to design ethical systems that make fair and moral decisions, and to assess the productivity of projects based on their social impacts and not merely financial profit. As creators of the built environment, engineers, architects, and construction managers have a vital social responsibility to represent the needs of all social groups, regardless of ethnicity, race, and gender, in their projects to serve sustainable development goals. This chapter aims to delineate the potential harms and ethical issues that might arise during different stages of applying digital technologies, as well as the criteria to consider while designing an ethics-aware technology implementation framework. It can serve as a driver and a basis for an objective and ethical cost–benefit analysis of technology integration with current processes in the industry.
Article
Full-text available
In recent years, due to the rapid development of the fourth industrial revolution and new information technology platforms, intelligent systems have received widespread attention in many industries and have brought the potential to improve the efficiency of the construction industry. These developments led to the appearance of a new concept in the construction industry called Construction 4.0. Therefore, this article seeks to explore the state of implementation of Industry 4.0 technologies in the construction industry and analyze their impact on the formation of the Construction 4.0 concept. To achieve this aim, a literature review was conducted using the most relevant publications in this field. Moreover, the authors carried out a bibliometric analysis of 195 selected research articles related to Industry 4.0 and Construction 4.0 to identify interconnections between these concepts. The results show that Industry 4.0 has the greatest impact on productivity growth in construction and that interest in digital technologies is growing every year, but their penetration into the construction industry is currently slow and limited. The authors suggest that further research should focus on future ethical issues that may arise and on synergies between Construction 4.0 technologies.
Article
Full-text available
In the big data environment, machine learning has developed rapidly and is widely used, with successful applications in computer vision, natural language processing, computer security, and other fields. However, machine learning in the big data environment faces many security problems. For example, attackers can add "poisoned" samples to the data source; the big data processing system will ingest these samples and use machine learning methods to train models, directly leading to wrong prediction results. In this paper, a machine learning system and a machine learning pipeline are proposed. The security problems that may occur at each stage of a machine learning system under a big data processing pipeline are analyzed comprehensively. We use four different attack methods and compare the experimental attack results. The security problems are classified comprehensively, and the defense approaches to each security problem are analyzed. Drone-deploy MapEngine is selected as a case study, and we analyze the security threats and defense approaches in the Drone-Cloud machine learning application environment. Finally, future development directions for security issues and challenges in machine learning systems are proposed.
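The "poisoned" sample attack mentioned in this abstract can be illustrated with a minimal label-flipping sketch. This is an illustrative toy built on scikit-learn, not one of the paper's four attack methods; the synthetic dataset and choice of classifier are assumptions:

```python
# Toy label-flipping (data poisoning) demo: flipping training labels before
# a downstream model is trained degrades its accuracy on clean test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poison(flip_frac, seed=0):
    """Train on labels with a fraction flipped; score on the clean test set."""
    rng = np.random.default_rng(seed)
    y_poisoned = y_tr.copy()
    n_flip = int(flip_frac * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]    # the "poisoned" samples
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return clf.score(X_te, y_te)

clean = accuracy_with_poison(0.0)
poisoned = accuracy_with_poison(0.5)         # half the training labels flipped
print(round(clean, 3), round(poisoned, 3))
```

With half the training labels flipped, the labels carry essentially no signal, so test accuracy collapses toward chance even though the test data themselves are untouched.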
Article
Full-text available
With the development of deep connections between humans and Artificial Intelligence voice‐based assistants (VAs), human and machine relationships have transformed. For relationships to work, it is essential for trust to be established. Although the capabilities of VAs offer retailers and consumers enhanced opportunities, building trust with machines is inherently challenging. In this paper, we propose integrating Human–Computer Interaction theories and Para‐Social Relationship Theory to develop insight into how trust in and attitudes toward VAs are established. Adopting a mixed‐method approach, we first quantitatively examine the proposed model using Covariance‐Based Structural Equation Modeling on 466 respondents; based on the findings of this study, a second, qualitative study is employed, revealing four main themes. Findings show that while functional elements drive users' attitudes toward using VAs, the social attributes, namely social presence and social cognition, are the unique antecedents for developing trust. Additionally, the research illustrates a peculiar dynamic between privacy and trust, showing how users distinguish two different sources of trustworthiness in their interactions with VAs and identifying the brand producer as the data collector. Taken together, these results reinforce the idea that individuals interact with VAs treating them as social entities and employing human social rules, thus supporting the adoption of a para‐social perspective.
Article
The growth of the construction industry is severely limited by the myriad complex challenges it faces, such as cost and time overruns, health and safety, productivity, and labour shortages. Also, the construction industry is one of the least digitized industries in the world, which has made it difficult to tackle the problems it currently faces. An advanced digital technology, Artificial Intelligence (AI), is currently revolutionising industries such as manufacturing, retail, and telecommunications. The subfields of AI, such as machine learning, knowledge-based systems, computer vision, robotics, and optimisation, have successfully been applied in other industries to achieve increased profitability, efficiency, safety, and security. While acknowledging the benefits of AI applications, numerous challenges relevant to AI still exist in the construction industry. This study aims to unravel AI applications, examine the AI techniques being used, and identify opportunities and challenges for AI applications in the construction industry. A critical review of the available literature on AI applications in the construction industry, such as activity monitoring, risk management, and resource and waste optimisation, was conducted. Furthermore, the opportunities and challenges of AI applications in construction were identified and presented in this study. This study provides insights into key AI applications as they apply to construction-specific challenges, as well as the pathway to realise the attainable benefits of AI in the construction industry.
Article
The pursuit of responsible AI raises the ante on both the trustworthy computing and formal methods communities.
Article
The usage of AI-empowered Industrial Robots (InRos) is booming in the Auto Component Manufacturing Companies (ACMCs) across the globe. Based on a model leveraging the Technology, Organisation, and Environment (TOE) framework, this work examines the adoption of InRos in ACMCs in the context of an emerging economy. This research scrutinises the adoption intention and potential use of InRos in ACMCs through a survey of 460 senior managers and owners of ACMCs in India. The findings indicate that perceived compatibility, external pressure, perceived benefits and support from vendors are critical predictors of InRos adoption intention. Interestingly, the study also reveals that IT infrastructure and government support do not influence InRos adoption intention. Furthermore, the analysis suggests that perceived cost issues negatively moderate the relationship between the adoption intention and potential use of InRos in ACMCs. This study offers a theoretical contribution as it deploys the traditional TOE framework and discovers counter-intuitively that IT resources are not a major driver of technology adoption: as such, it suggests that a more comprehensive framework than the traditional RBV should be adopted. The work provides managerial recommendations for managers, shedding light on the antecedents of adoption intention and potential use of InRos at ACMCs in a country where the adoption of InRos is in a nascent stage.
Article
The construction industry has a higher occupational casualty rate than other industries. As a proactive approach to safety management, Construction Hazard Prevention through Design (CHPtD) can significantly eliminate or reduce construction safety risk. However, this concept is not implemented effectively in practice because the technical issues that underlie CHPtD have not been addressed. This paper proposes a novel method of quantitative construction safety risk assessment for building projects at the design stage. The method consists of three indexes: likelihood, consequence, and exposure. These indexes are calculated using occupational injury, fatality, and specific construction planning data, which are accurate and objective. A plug-in that links building information modeling (BIM) with safety risk data is developed in Autodesk Revit, which can automatically calculate construction safety risk to help architects and structural designers quickly select design alternatives. A case study is presented to demonstrate the feasibility and effectiveness of the proposed method.
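The likelihood, consequence, and exposure indexes in this abstract can be sketched as a toy Python scoring function for ranking design alternatives. The multiplicative combination follows the classic L × C × E risk-rating style; whether the paper combines its indexes this way, and all the values below, are assumptions rather than the paper's plug-in logic:

```python
# Toy safety risk score: likelihood x consequence x exposure, used to rank
# hypothetical design alternatives. All index values are assumptions.
def risk_score(likelihood, consequence, exposure):
    """Multiplicative risk index in the classic L x C x E rating style."""
    return likelihood * consequence * exposure

# Two hypothetical design alternatives for the same building element:
# prefabrication cuts both the likelihood of an incident and the on-site
# exposure time, while the consequence severity stays the same.
alternatives = {
    "cast-in-place slab": risk_score(likelihood=0.6, consequence=8, exposure=5),
    "precast slab":       risk_score(likelihood=0.3, consequence=8, exposure=2),
}
preferred = min(alternatives, key=alternatives.get)
print(preferred, alternatives)
```

Attaching such a score to model elements is the kind of calculation a BIM plug-in can automate so designers can compare alternatives at the design stage.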