AI-ware: Bridging AI and Software Engineering for responsible and sustainable intelligent artefacts
Dagmar Monett 1, Claudia Lemke 2
1 Berlin School of Economics and Law, Computer Science Department, Alt-Friedrichsfelde 60, Berlin, 10315, dagmar.monett-diaz@hwr-berlin.de
2 Berlin School of Economics and Law, Information Systems Department, Alt-Friedrichsfelde 60, Berlin, 10315, claudia.lemke@hwr-berlin.de
Abstract: The Computer Science fields Artificial Intelligence (AI) and Software Engineering need to be strengthened at their intersection. This paper contributes to that by defining what is concretely meant by AI (spoiler: it is not only machine learning), by presenting related work that builds upon bridging the above-mentioned fields, and by suggesting the AI-ware approach for paving the path to the intelligent artefacts of the future.
Keywords: Artificial Intelligence, Software Engineering, AI-ware, AI ethics, machinelearnization fallacy
1 The AI background
1.1 What is AI?
Historically, there has been a typical and profound misunderstanding of what machine or artificial intelligence (AI) is or what it can do. This misunderstanding has dogged the advancements of the AI field, thereby lacerating its credibility since its foundation [Mc76; Mi19; Wo20]. Promises of intelligent capabilities in machines have been pervasive in the AI field for decades, especially during the "AI Springs," periods of time in which (perceived) significant results are achieved. Enthusiasm grows and, consequently, so do expectations. New developments and the field itself are "ballooned to an importance beyond [their] actual accomplishments" [Sh56], and it does not take long until disappointment arises, funding is cut, and interest in the field decreases; these other periods are known as "AI Winters." AI remains far from a magic bullet that can solve all our problems.
There are many reasons for those AI highs and lows. Perhaps one of the most significant is the lack of consensus on defining AI, both as a field and as its most important concept [MLT20; Wa19]. Understandably, there will always be subfields that flourish more or less depending on their success; new ones originate and others merge. But AI silos and the corresponding ivory towers are symptomatic of the inability to reach consensus on AI's most fundamental concepts and techniques. Little has changed since 1956, when the field is said to have been founded.
Most recently, the "machinelearnization fallacy," as we define and refer to it in our approach, has dominated the public's and practitioners' perception of AI. We argue that this fallacy is the urge to reduce reality to models and to treat all problems by applying machine learning (ML), i.e. data-driven, statistics-based algorithms (like artificial, deep neural networks and related techniques), even when they are not the most suitable ones. The machinelearnization fallacy appears when ML algorithms are thought to be the only ones that can solve any problem in any domain, or to lead to truly intelligent systems, because they have already had success optimizing certain problems in very narrow domains. The implications of misclassifying cats or objects in an image, for instance, should never be equated with those impacting society and its structures. Moreover, the implications of referring to AI when actually only one of its subfields is meant can be harmful [Cr21], also to the AI community and its progress [La21].
The incremental advances that have been made in AI, however, are mainly in low-level pattern recognition skills [Jo19]. They are nowhere near advanced enough to be considered intelligent [MD19; Mi19; Sm19; Wo20]. In fact, the recently over-celebrated advances in the AI field have eclipsed other AI subfields to an unparalleled degree, simultaneously obscuring the harms and risks that their outcomes can pose. The hype is nothing but a product of loudly promoted, corporate-backed achievements in a few AI areas. Consequently, the public and other stakeholders have been given a narrow, partial view of AI. Yet ML is only one among many AI subfields, albeit the one currently attracting the most attention and investment.
1.2 Why now?
AI, the new star in the emerging-technology sky, owes its popularity to various technological developments of recent decades. On the one hand, AI's technological progress has been made possible by the exponential growth in the performance of hardware components, the ongoing expansion of the Internet of Things, the high computing and storage capacities offered by cloud computing, and improved algorithms. Furthermore, open-source technology has facilitated the rapid spread of algorithms' code and experiments, allowing for new implementations and applications.
On the other hand, Big Tech's drive to collect and analyse consumer data fosters a fairy-tale imagination among established companies across industries that they can emulate them. Techniques and approaches like big data offer new ways of doing business analytics, and especially of using elementary or very specific ML approaches to predict a certain future for these companies. On the surface it seems so easy: training and testing AI algorithms on the business data a company apparently already owns promises complete and timely information for business decisions. At the same time, AI-driven algorithms promote a further automation of operational processes and structures, which, in the end, secures the efficiency and effectiveness of the company in the medium to long term. It is therefore not surprising that AI projects have been on the strategic agendas of company leaders for years now, with high annual investments. Other factors, like open-access publications and peer-reviewed literature on AI being available on preprint servers even before they are finally published, also offer an easy entry into the academic or scientific AI community, combined with open, online education about AI.
Together, these factors have transformed the power relations between consumers, producers, and prosumers of digital technologies, whether based on AI or not. For example, it is now straightforward to learn about and produce AI applications with a minimum of effort and resources. Consumers of these technologies also become data sources, testers, or even producers of new applications themselves. Nevertheless, and despite the factors that fuel the current AI momentum, it is often alleged that the design and implementation of AI algorithms or AI-based systems must be subject to "laws" distinct from the ones guiding the development of traditional IT in the field of Software Engineering. What exactly are these differences? Are there any similarities? Could these two core areas of the Computer Science field learn from each other?
1.3 AI is software!
Actually, it all comes down to software, yet in the case of AI it is treated as if it were magic. A blurry line, which got thicker with time, was artificially drawn between AI and Software Engineering, in particular, and Computer Science, in general. It does not come as a surprise that whilst Software Engineering evolved into the very established discipline it is today, the AI field was stuck in algorithmic development, mainly improvised in academic settings with few practical applications at large scale. The traditional software structure that encompasses not only all software development phases but also their management and quality assurance [So16] is indeed different for some AI-enabled systems, but not for all of them. Such systems involve more algorithmic tuning, uncertainty, and ambiguity, a different handling of data (for those where data is a critical resource), and a higher margin of error and instability across the system [HMO19; Oz20].
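To make this testing difference concrete, consider the following minimal sketch (pure Python; the classify stub is a hypothetical stand-in for a trained ML component, not anyone's actual model). Deterministic code can be tested against an exact specification, whereas a probabilistic component can only be asserted to stay within a margin of error over a labelled sample [Oz20]:

```python
import random
import unittest

def classify(x):
    """Hypothetical stand-in for a trained ML component:
    a noisy threshold classifier, probabilistic by design."""
    return 1 if x + random.gauss(0.0, 0.1) > 0.5 else 0

class TestDeterministicSoftware(unittest.TestCase):
    def test_exact_specification(self):
        # Classical software: build and test against an exact specification.
        self.assertEqual(sorted([3, 1, 2]), [1, 2, 3])

class TestMLComponent(unittest.TestCase):
    def test_accuracy_within_margin(self):
        # AI-enabled component: there is no exact expected output per input;
        # instead, assert that accuracy stays above a threshold on a sample.
        random.seed(42)  # pin the randomness, or the test is not reproducible
        sample = [(i / 100, int(i / 100 > 0.5)) for i in range(100)]
        hits = sum(classify(x) == y for x, y in sample)
        self.assertGreaterEqual(hits / len(sample), 0.8)

if __name__ == "__main__":
    unittest.main()
```

The margin in the last assertion is precisely the kind of quality criterion that classical test suites rarely need but AI-enabled systems cannot avoid.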
2 The AI-ware agenda
2.1 What does AI-ware mean and what is it not?
We define AI-ware as a term with a twofold meaning. On the one hand, we mean the need to be aware of what AI is and is not, of its subfields, as well as of a critical appraisal of where applying AI makes sense at all. Section 1 clarifies these aspects. On the other hand, we mean the conjunction of AI and Software Engineering, mainly, but not only, the former learning from the variety of well-established techniques and good practices of the latter, while at the same time extending them. Additionally, AI-ware combines research movements and/or subfields from the Software Engineering and AI fields, like AI4SE and SE4AI [Mc20], but focuses on an equal consideration of both rather than on a causal argumentation. AI-ware is neither a new subfield of Software Engineering nor a new form of experimenting with AI. Rather, it should take into account the specific features in the application of
the various AI methods and models, enable a targeted and economically justifiable development of AI-based applications, and be understood as an emerging, modern field of Computer Science.
Software Engineering principles do apply to engineering AI-based systems [HMO19]. Together with good practices and better methodologies that are emerging from some AI subfields [MBL20], they are paving the way to bridging both fields. Additionally, a shift from "model-centric AI" to "data-centric AI" is happening in some industries, along with the extension of DevOps to MLOps for the specific case of the ML pipeline. This covers only part of the AI field, however, since not all problems need to be solved by applying ML (refer to the machinelearnization fallacy discussed above).
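As a minimal, hypothetical sketch of one thing MLOps adds on top of DevOps, the following pipeline records a content hash of the training data next to the trained model: in ML systems the code can be unchanged while the data, and therefore the behaviour, drifts. All names and the toy "model" here are illustrative assumptions, not a prescribed tooling choice:

```python
import hashlib
import json
from datetime import datetime, timezone

def data_version(rows):
    """Content hash of the training data: an MLOps pipeline traces a model
    back to the exact data it was trained on, not just to a code revision."""
    blob = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def train(rows):
    """Toy stand-in for training: a mean threshold over the inputs."""
    return {"threshold": sum(x for x, _ in rows) / len(rows)}

def evaluate(model, rows):
    hits = sum((x > model["threshold"]) == bool(y) for x, y in rows)
    return hits / len(rows)

def run_pipeline(rows):
    model = train(rows)
    # The record couples model, data version, and metrics -- the part that
    # plain DevOps does not cover, since only code changes would trigger it.
    return {
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "data_version": data_version(rows),
        "model": model,
        "accuracy": evaluate(model, rows),
    }

if __name__ == "__main__":
    data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
    print(json.dumps(run_pipeline(data), indent=2))
```

Re-running the pipeline on re-labelled data then yields a new data_version, the data-centric analogue of a new commit hash.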
An AI-ware approach should bring the pieces of a complex puzzle together and open directions for further research, thereby
- acknowledging what AI is and which subfields, methods, and algorithms are part of it or come into question when a particular problem is to be solved;
- including the ethical stance in all of its phases, particularly when developing data-centric AI;
- applying Software Engineering principles, methodologies, and good practices, e.g. those particularly helpful for AI-based software development;
- making decisions regarding platforms, technologies, and architectures suitable for AI, as well as for managing resources, including data, people, and processes.
2.2 What a possible research agenda could look like
Future AI applications for business environments need the design and development of responsible and sustainable intelligent artefacts [Di19]. There is often a gap between the idea programmers have when solving a problem and the creative use users make of the resulting solutions. Humans make techno-moral choices, which can have positive but also negative reinforcing effects. It is therefore essential that an AI-ware approach, primarily and above all, can close these gaps. One of the main niches for research action would be to consider how to integrate AI's ethical challenges into classical Software Engineering frameworks. Thus, the suggested research perspectives for an AI-ware agenda are:
- Societal and ethical perspective, regarding the purpose of AI-driven applications, their impact on society and nature, and their contribution to a sustainable future; further, regarding which problems urgently need to be solved and which AI applications need regulation because of their high risk of eroding human rights, for instance.
- Algorithmic perspective, regarding a realistic consideration of what kind of AI is needed, if any at all, and for solving which problems. The machinelearnization fallacy, among other fallacies intrinsically related to the nature and function of AI algorithms, should be kept under scrutiny as long as other approaches (e.g. from symbolic AI, hybrid techniques, etc.) have not been evaluated. The solution to a problem does not necessarily require AI, let alone viewing it through ML spectacles (see the sketch after this list).
- Data-oriented perspective, regarding the value of data, such as property rights over data, its variety, and its quality. It is no longer just user data that is of interest but big object-based data, which allows for completely new power structures in the sense of measuring our complete world and surroundings once one has access to this data.
- Framework-based perspective, regarding both hardware and software architectural issues, which are potentially different when designing AI-based as opposed to classical IT software (compare DevOps to MLOps). Areas that have so far been considered in an undifferentiated way should probably be separated, too. For example, the application of statistical methods in business analytics is also possible by using ML algorithms, but these still belong to the domain of data mining or data science and, thus, require competences other than the classical Software Engineering ones.
- Economics perspective, regarding the efforts and resources needed when designing AI-based applications, the entire software process, and the corresponding operational tasks. Examples show time and again that AI algorithms work well in lab-controlled environments but fail in practical applications in a larger context. Here, suitable measuring instruments must be sought or adapted.
- Interdisciplinary perspective, regarding the teams that design, develop, deploy, and even use AI applications, thereby combining different subject areas, expertise, and experiences [HMO19].
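As the sketch promised under the algorithmic perspective: many well-posed problems are solved exactly by classical, symbolic algorithms, with no training data or tuning involved. The delivery-zone routing task and toy graph below are hypothetical examples, not a case study:

```python
from collections import deque

def shortest_route(graph, start, goal):
    """Exact breadth-first search: deterministic, reproducible, and
    verifiable against a specification -- no learning required."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

# Hypothetical delivery zones; a symbolic method solves the task outright.
zones = {
    "depot": ["north", "east"],
    "north": ["mall"],
    "east": ["mall", "port"],
    "mall": ["port"],
}
print(shortest_route(zones, "depot", "port"))  # ['depot', 'east', 'port']
```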
3 Conclusion
This paper suggests a solution to the currently differentiated development of algorithms under the label of AI in business environments by offering ideas on how to reframe the insights from Software Engineering around the challenges of designing responsible and sustainable artefacts. The perspectives that we put forward, which are certainly not exhaustive, support an interdisciplinary discussion aimed at combining insights from both AI and Software Engineering with a management-oriented view.
References
[Cr21] Crawford, K.: Atlas of AI: Power, Politics, and the Planetary Costs of Artificial
Intelligence. Yale University Press, New Haven, 2021.
[Di19] Dignum, V.: Responsible Artificial Intelligence: How to Develop and Use AI in a
Responsible Way. Springer, 2019.
[HMO19] Horneman, A.; Mellinger, A.; Ozkaya, I.: AI Engineering: 11 Foundational Practices. Carnegie Mellon University Software Engineering Institute, Pittsburgh, 2019. https://resources.sei.cmu.edu/asset_files/WhitePaper/2019_019_001_634648.pdf
[Jo19] Jordan, M.: Artificial Intelligence—The Revolution Hasn’t Happened Yet. Harvard
Data Science Review, 1(1), 2019. https://doi.org/10.1162/99608f92.f06c6e61
[La21] Larson, E. J.: The Myth of Artificial Intelligence: Why Computers Can’t Think the
Way We Do. Harvard University Press, Cambridge, MA, 2021.
[MBL20] Musgrave, K.; Belongie, S.; Lim, S.: A Metric Learning Reality Check. In Vedaldi, A.; Bischof, H.; Brox, T.; Frahm, J. M. (eds.): Computer Vision – ECCV 2020, Part XXV, Lecture Notes in Computer Science, 12370, Springer, Cham, 2020. https://doi.org/10.1007/978-3-030-58595-2_41
[Mc76] McDermott, D.: Artificial intelligence meets natural stupidity. ACM SIGART
Bulletin, 57, pp. 4-9, April 1976. https://doi.org/10.1145/1045339.1045340
[Mc20] McDermott, T.; DeLaurentis, D.; Beling, P.; Blackburn, M.; Bone, M.: AI4SE and
SE4AI: A Research Roadmap. InSight Special Feature, 23(1), pp. 8-14, 2020.
[MD19] Marcus, G.; Davis, E.: Rebooting AI: Building Artificial Intelligence We Can Trust.
Pantheon Books, New York, 2019.
[Mi19] Mitchell, M.: Artificial Intelligence: A Guide for Thinking Humans. Pelican Random
House, UK, 2019.
[MLT20] Monett, D.; Lewis, C. W. P.; Thórisson, K. R. (eds.): Special Issue "On Defining Artificial Intelligence." Journal of Artificial General Intelligence, 11(2), pp. 1-100, 2020.
[Oz20] Ozkaya, I.: What Is Really Different in Engineering AI-Enabled Systems? IEEE
Software, pp. 3-6, July/August 2020, DOI: 10.1109/MS.2020.2993662
[Sh56] Shannon, C. E.: The Bandwagon. IRE Transactions on Information Theory, 3,
1956.
[Sm19] Smith, B. C.: The Promise of Artificial Intelligence: Reckoning and Judgment. The
MIT Press, Cambridge, MA, 2019.
[So16] Sommerville, I.: Software Engineering, 10th edition. Pearson, 2016.
[Wa19] Wang, P.: On Defining Artificial Intelligence. Journal of Artificial General Intelligence, 10(2), pp. 1-37, 2019.
[Wo20] Wooldridge, M.: The Road to Conscious Machines: The Story of AI. Pelican
Random House, UK, 2020.