Teaching Tip:
Using No-code AI to Teach
Machine Learning in Higher Education
Leif Sundberg
Jonny Holmström
Department of Informatics
Umeå University
Umeå, 901 87, Sweden
With recent advances in artificial intelligence, machine learning (ML) has been identified as particularly useful for organizations
seeking to create value from data. However, as ML is commonly associated with technical professions, such as computer science
and engineering, incorporating training in use of ML into non-technical educational programs, such as social sciences courses, is
challenging. Here, we present an approach to address this challenge by using no-code AI in a course for students with diverse
educational backgrounds. The approach was tested in an empirical, case-based educational setting, in which students engaged in
data collection and trained ML models using a no-code AI platform. In addition, a framework consisting of five principles of
instruction (problem-centered learning, activation, demonstration, application, and integration) was applied. This paper contributes
to the literature on IS education by providing information for instructors on how to incorporate no-code AI in their courses, and
insights into the benefits and challenges of using no-code AI tools to support the ML workflow in educational settings.
Keywords: Artificial intelligence, Machine learning, IS education research, Information systems education
Sundberg, L., & Holmström, J. (2024, forthcoming). Using No-code AI to Teach Machine Learning
in Higher Education. Journal of Information Systems Education (JISE). Vol 35, Issue 1.
Machine learning (ML) is a discipline devoted to the
construction, application and analysis of computer systems that
learn from experience. In a common variant, supervised ML, a
system is shown numerous examples of a type of data, e.g.,
images of, or texts describing certain objects or phenomena, to
train it to ‘learn’ or recognize patterns in them. The system can
then use this learning to make predictions about new 'unseen'
data, i.e., data that it has not previously encountered (Jordan and
Mitchell, 2015; Kühl et al., 2022). Leavitt et al. (2021, p. 750)
define ML as “a broad subset of artificial intelligence, wherein
a computer program applies algorithms and statistical models
to construct complex patterns of inference within data” (see
also, Bishop, 2006).
Massive increases in the processing power of digital
technology and in available data, in combination with better
algorithms, e.g., deep learning algorithms (see LeCun et al.,
2015), have set the stage for increases in the use of ML in many
contexts (Dwivedi et al., 2021). Accordingly, organizations are
increasingly deploying intelligent systems that can process
large amounts of data, provide knowledge and insights, and
operate autonomously (Simsek et al., 2019; Sturm et al., 2021).
As noted by Ma and Siau (2019, p. 1), "Higher education
needs to change and evolve quickly and continuously to prepare
students for the upheavals in the job market caused by AI,
machine learning, and automation." Among other things, these
authors argue that AI must be integrated into academic
curricula, and not only those of science, technology,
engineering, and mathematics (STEM) departments. However,
despite abundant research on applications of AI in educational
settings (e.g., Luan and Tsai, 2021; Humble and Mozelius,
2022), much less attention has been paid to instruction of
students with non-technical backgrounds in ML’s practical use
and applications (Kayhan, 2022). As ML is commonly
associated with technical professions, such as computer science
and engineering, incorporating training in its use into non-
technical educational programs, such as business- and
management-oriented social sciences and Information Systems
(IS) programs, is challenging. Similar issues have been raised
in previous research on novel intelligent systems (Liebowitz,
1992; 1995) as educators have sought to integrate their use into
business and IS programs. Recently, scholars have identified a
need to integrate AI curricula in ways that enable students to
develop sufficient understanding of technology such as ML to
apply it without detailed knowledge of AI algorithms (Chen,
2022). In this paper, we assess ‘no-code’ AI platforms’
potential utility in efforts to meet this need. In contrast to
conventional AI systems, which require significant resources
for installation and use, these platforms can be readily applied
in educational contexts. Thus, they are easy-to-use and
affordable forms of AI, and they guide users through the
process of developing and deploying AI models, with no need
to learn all about the intricacies associated with complex
algorithms (Lins et al., 2021; Richardson and Ojeda, 2022).
Hence, in this paper, we pose two research questions (RQs):
RQ1: How can no-code AI be used to teach ML in non-technical
educational programs?
RQ2: What are the benefits and challenges of using no-code AI
in education?
As already mentioned, ‘non-technical’ refers here to non-
STEM programs, such as business- and management-oriented
IS courses. To answer the RQs, we present a teaching tip based
on a case study of a master’s level AI for business course at
Umeå University, Sweden, in which qualitative data were
collected through interactions with, and observations of, the
students. In the remaining sections of the paper we: summarize
previous research on no-code software, describe the educational
setting, describe the materials and methods used, present the
results, discuss them, and finally offer concluding remarks.
In this section, we present a brief overview of the ML workflow
(sub-section 2.1), then summarize literature on the emergence
of no-code AI platforms (sub-section 2.2).
2.1 What is Machine Learning?
ML refers to a broad set of AI applications in which computers
build models based on patterns they recognize in datasets and
use the models to generate hypotheses about the world. Such
models have myriads of uses in problem-solving software
exploited in industrial and other organizations (Russell and
Norvig, 2022). The general ML workflow (see e.g., Chapman
et al., 1999; Kelleher and Tierney, 2018; Schröer et al., 2021)
begins with creation of a training dataset from which a machine
can learn something (Figure 1). Most applications today are
based on supervised learning procedures through which a
machine learns from labeled data, e.g., text describing an
image, such as a photo or drawing of a dog or cat (Fredriksson
et al., 2020). Then the training dataset is processed by an
algorithm that ‘trains’ the machine to recognize corresponding
patterns. The outcome of this process is a ML model that can
be used to make predictions regarding previously unseen data.
During the training process, part of a dataset (e.g., 20% of the
images in an image classifier case) is reserved for testing the
model to avoid problems such as overfitting. Acceptable
performance of the model on the test datasets indicates that it
may be used to solve problems in real world contexts, such as
organizational settings, if the data provide relevant
representations of the things or phenomena that must be
recognized to solve the problems.
Figure 1. A Simplified ML Workflow
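The workflow in Figure 1 can also be illustrated in code. The following is a minimal sketch using scikit-learn and its bundled handwritten-digits dataset (a stand-in for any labeled image data); no-code platforms perform the same steps behind a graphical interface:

```python
# A minimal sketch of the supervised ML workflow in Figure 1:
# collect labeled data, hold out a test set, train, then evaluate
# on previously unseen examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Reserve 20% of the examples for testing, mirroring the split
# described above; the model never sees these during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Acceptable performance on the held-out set suggests the model
# may generalize to new data of the same kind.
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```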
This description is a somewhat simplified version of the ML
workflow. In reality, it takes several iterations of data collection
loops and knowledge consolidation processes to create a model
that provides meaningful results as experts may have diverging
perceptions of what data represent (see Lebovitz et al., 2021 for
a detailed discussion on experts’ disagreements during data
labeling).
2.2 No-code AI
No-code solutions for software development have been subject
to previous research as they enable non-programmers with little
or no coding experience to produce various applications
(Bhattacharyya and Kumar, 2021; Luo et al., 2021; Lethbridge,
2021; Sahay et al., 2020; Yan, 2021). By adopting low-code
principles, enterprises may not only save time and costs, but
also narrow the gaps between business operations and
information technologies, thereby enabling more rapid
development and improvements in product and service quality
(Rokis and Kirikova, 2022).
As noted by Sundberg and Holmström (2022, see also
Sundberg and Holmström, 2023), a new generation of
‘lightweight’ no-code AI platforms—also known as AI as a
service (Lins et al., 2021) or simply AI service (Geske et al.,
2021) platforms—enables non-data scientists to train ML
models to make predictions. Such platforms may match, or even
outperform, coded solutions (Kling et al., 2022). Hence, no-code
AI platforms may be widely applied in diverse settings,
including citizen science and as low-cost solutions in emerging
markets. In the long run, it has been argued that access to user-
friendly, low-code AI could democratize adoption of these
systems and stimulate their multidisciplinary use (How et al.,
2021). For example, new ‘drag-and-drop’ interfaces enable
anyone to develop, train and test AI algorithms in a few hours.
In combination with a range of open-source solutions and
plugins, this vastly simplifies algorithm development and
deployment (Coffin, 2021). The advances are so rapid that,
within two years of Woo (2020, p. 961) stating that “AI might
be able to automatically produce code”, generative AI tools
such as GitHub Copilot and ChatGPT were enabling code
generation based on user input. Computer
scientists have always dreamt of writing programs that write
themselves, and the dream is becoming a commonplace reality.
Recently, authors have also recognized the powerful potential
utility of no-code apps in educational settings. For instance,
Wang and Wang (2021) argue that no-code (or low-code) app
development is transforming traditional software development
practices, and present a teaching case involving development of
a business app.
As noted by Holmström et al. (2011), rapid technological
developments create challenges for maintaining up-to-date
curricula for educating professionals who will work in
environments with high levels of technology. They highlight
several important issues regarding IS teaching, including the
importance of ensuring that the students acquire practically
relevant skills, through use of appropriate pedagogical
approaches, and generic types of knowledge. As AI is being
increasingly adopted in diverse domains (Dwivedi et al., 2021),
most, if not all, professionals will engage with or be affected by
intelligent systems in their careers. However, as mentioned, AI
is associated with needs to understand algorithms and hence
skills rooted in computer science and engineering. This poses
challenges for professionals rooted in other disciplines, not
because they have nothing to contribute to AI or gain from its
use, but because of a lack of fundamental knowledge of how,
for example, a ML system works. A potential remedy, also
already mentioned, is to use ‘lightweight’ AI (Sundberg and
Holmström, 2022) in the form of AI service platforms (Geske
et al., 2021; Lins et al., 2021), which are easy to use with little
to no installation requirements (as they are cloud-based) and
have graphical interfaces that help users to train ML models.
Here we present an approach for using such a system, the
Peltarion (2022) ‘no-code’ deep learning AI platform (hereafter
‘the no-code AI platform’, or just ‘the platform’), in a higher
education setting at the Department of Informatics, Umeå
University, Sweden. The department is part of the university’s
faculty of social sciences and provides three undergraduate
educational programs (on behavioral science with an
orientation towards IT-environments, digital media production,
and system science) and two master’s programs (on human-
computer interaction and IT management), together with
individual courses.
The mentioned AI solution enables non-data scientists to
upload data, then train and evaluate a ML model that can be
deployed via an application programming interface (API). The
platform guides users via a graphical interface together with
suggestions regarding problem types, workflows, pre-trained
models and iterative improvements. The platform was used in
an ‘AI for business’ course (15 credits) at Umeå University, to
give the students hands-on experience in training ML models
by engaging in a case-based task. The course is open for
students with diverse educational backgrounds, as requirements
for enrolment are 90 credits in informatics, computer science,
business administration, media and communication studies,
pedagogics, psychology, political science, sociology (or
equivalent competence). In line with the course curriculum
(Umeå University, 2022), the learning objectives of the exercise
were to “Account for and explain the role of AI in
organizational value creation”, by giving the students first-hand
experience of training ML models. The educational approach is
further described in the following section.
To address the RQs posed in Section 1, we followed a group-
based project approach presented by Mathiassen and Purao
(2002) in the course, inviting the students to engage in
development of ways of working and participating in
communicative activities regarding ‘real-life’ problems. As
noted by Leidner and Jarvenpaa (1995), such approaches
provide opportunities for students to understand the ‘messiness’
professionals face in industry, acknowledging the social
situatedness of these contexts, and that the problems students
will face are “unstructured, ambiguous, and immune to purely
technical solutions” (Holmström et al., 2011, pp. 2).
We applied the principles of instruction framework advocated
by Merrill (2007, 2013) in the educational setting. This
incorporates five principles summarized in Table 1: problem-
centered learning, activation, demonstration, application, and
integration. The framework provides an integrated, multi-strand
strategy for teaching students how to solve real-world
problems, or complete complex real-world tasks.
Problem-centered learning: Humans learn better when they are
solving problems, so learning is promoted when learners acquire
skills in contexts of real-world problems.
Activation: Learning is promoted when learners activate existing
knowledge and skills as foundations for a new skill. An important
step here is to start at the learner’s level. Activation requires
learning activities that stimulate the development of mental
models and schemes that can help learners to incorporate new
knowledge or skills into their existing knowledge.
Demonstration: Learning is promoted when learners observe a
demonstration of the skill to be learned, e.g., by exposure to
examples of good and bad practice.
Application: Learning is promoted when learners apply new skills
they have acquired to solve problems. Applying new knowledge
or skills to real-world problems is treated as almost essential for
effective learning.
Integration: Learning is promoted when learners reflect on,
discuss, and defend knowledge or skills they have acquired. The
effectiveness of a course is enhanced when learners are provided
opportunities to discuss and reflect on what they have learned in
order to revise, synthesize, recombine and modify their new
knowledge or skills.
Table 1. Principles of the Educational Approach
The case presented to the students described a fictive
organization, ‘WeldCorp’, specialized in welding, seeking to
expand and acquire customers in additional geographical
markets while retaining and automating quality measures. To
assist the company, we invited the students to develop ways to
use ML as a tool to assess welding points. The course module
described in this paper consisted of a workshop, a Q&A session,
supervising sessions, and a final seminar. Its content is further
outlined in Section 5.1. Nineteen students attended the course
(14 male and five female), with educational backgrounds
including bachelor's degrees in business and administration,
computer science, and behavioral science. The empirical
materials used in the study presented here, as summarized in
Table 2, stem from interactions with the students, the no-code
AI platform, and teachers’ reflections.
Interactions with students and course communication: E-mails,
notes taken during the course, written evaluations and feedback
from students.
Student assignments and presentations: Two written group
reports, and two presentations during a final seminar.
ML models and data created by the students: The Peltarion
(2022) no-code AI platform.
Teachers’ reflections: Teachers’ experiences and reflections
during and after the course.
Table 2. Materials
These materials allowed us to both provide educators with
recommendations for using no-code AI and present interesting
findings on the benefits and challenges associated with these
platforms’ use in educational settings. We identified the
benefits and challenges by subjecting the empirical data to
thematic analysis (Braun and Clarke 2012; Clarke and Braun
2014) through inductively coding the students’ activities during
the module. More specifically, we coded the activities
undertaken by the students in our empirical setting, as
mentioned and observed in the materials, and then aggregated them into
themes, informed by the steps in the ML workflow presented in
Section 2.1.
This section is divided into three parts. In line with Lending and
Vician (2012), in Section 5.1 we provide a description of our
educational procedures to enable instructors to adopt our
approach. Then, the benefits of using no-code AI in education
are presented in Section 5.2, followed by challenges we
experienced in Section 5.3.
5.1 Detailed Educational Approach
The course module was initiated on December 2, 2021, and the
final seminar was held on January 10, 2022. Thus, the duration
of the module was a little over a month, including Christmas
holiday breaks. The module was initiated with a 3 h workshop
session that included an introduction to ML, followed by a
demonstration of the no-code AI platform’s functionalities, and
description of the group assignment. The information
presented, and considerations applied, in this workshop are
summarized in the following text.
As the students came from different backgrounds, it was clearly
stated that the workshop would not include deep examination
of phenomena such as neural networks, focusing instead on
providing students with sufficient information to get hands-on
experience of training ML or deep learning (DL) models. An
overview of the current status of ML was presented, noting that
increases in the scale of datasets, together with improvements in
algorithms and processing speed, have increased machines’
capabilities to ‘learn’. This included presentation of:
A short video showing how neural networks ‘see’ things
in image data:
Figures from an overview by Hilbert and López (2011)
of how the capacity for storing data rapidly shifted from
analogue to digital formats.
A comparison of the world’s fastest supercomputer in
1997 (ASCI Red), which reached a speed of 1.8
teraflops, and the Sony PlayStation 3 video game
console, which reached the same speed nine years later.
Then, the differences between supervised, unsupervised, and
reinforcement learning were briefly presented. We emphasized that the
module would focus largely on supervised learning, the basis of
most commercial and industrial applications of ML today, so
the students would need to engage with data labeling. This is
important for two reasons. First, collecting and annotating data
are crucial but time-consuming activities that take most of the
time spent during ML development (Fredriksson et al., 2020).
Second, if this element is neglected or poorly done, the resulting
ML models will perform poorly and generate inaccurate,
irrelevant or even harmful results (Sambasivan et al., 2021).
Next, the lecture outlined the kinds of problems that can be
solved by using ML. As noted by Kayhan (2022, p. 123), “many
students lack the preparation for the workforce because they
cannot conceptualize valid input-output relationships for the
problems they propose to solve using ML”. Thus, despite the
widespread hype surrounding intelligent systems, there is often a lack
of specificity regarding the kinds of problems algorithms can actually
solve. As noted in Section 2.1, ML is a set of technologies that
involve training of algorithms to create models that can provide
predictions concerning previously unseen datasets. Hence, ML
cannot solve ‘general’ problems such as ‘increasing efficiency’
or ‘improving quality’: it requires specific problem formulations
accompanied by relevant datasets. Thus, in this part of the
lecture we presented a checklist for determining whether ML
would be suitable to apply:
1. Do you have a use case?
2. Can the use case be solved by AI / ML (or simpler
methods)?
3. Do you have data?
4. Do you have annotated data?
We also presented examples of various problems/use cases that
ML can solve, such as anomaly detection, classification
problems (identifying features in texts and images), building
chatbots based on text similarity functions, and various
regression problems, such as predictions of sales and housing
costs. Before demonstrating the functionality of the no-code
platform, we described the ML workflow, both generally as
shown in Figure 1 and more specifically for the Peltarion
platform, as displayed in Figure 2. Although the platform is
now discontinued, this workflow (data collection + preparation,
training, evaluation and deployment of an ML model) is at the
core of most ML development efforts and protocols applied in
other no-code AI platforms (such as BigML, Amazon
SageMaker, Google AutoML, and Teachable Machine).
Figure 2. The ML Workflow in the No-code AI Platform
After presenting the above activities in a traditional lecture,
supplemented by visual aids and other materials, we turned
attention to the no-code platform.
An important step during the use of no-code AI is to check
requirements of the platform of choice in terms of data types
(e.g., tabular, images, or text). Familiarity with the selected
platform’s tools for processing and labeling data is also
important. Thus, to provide participating students with an
understanding of how the no-code AI platform handled
different data types, we used free datasets from Kaggle (2023):
To explore tabular data, we used the popular “IRIS”
dataset, which can be used to predict the species of a
flower based on the size of petals and sepals.
For image data, images of cats and dogs can be used to
train a binary classifier. Images of craters on the Moon
and/or Mars can be used to train object detectors (if
this feature is available in the platform; see Figure 3 for
an example).
To train a model that can make predictions based on
NLP (natural language processing), data from the
Internet Movie Database (IMDB) can be used to predict
whether a text is ‘positive’ or ‘negative’.
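To illustrate what such a platform automates, the first of these explorations can also be written out in code. The sketch below uses scikit-learn's copy of the IRIS dataset and a k-nearest-neighbors classifier (one of several reasonable model choices, not necessarily the one the platform would pick):

```python
# The classic IRIS task: predict a flower's species from four
# measurements (sepal/petal length and width, in cm).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=1)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(f"accuracy on held-out flowers: {knn.score(X_test, y_test):.2f}")

# A valid input-output relationship: measurements in, species out.
sample = [[5.1, 3.5, 1.4, 0.2]]
print("predicted species:", data.target_names[knn.predict(sample)[0]])
```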
Figure 3. Image Annotation for Object Detection in the
BigML Platform
During the demonstration of how to upload data, we briefly
described and outlined procedures for various possible formats
for tabular and text data (e.g., csv and npy), but not procedures
for connecting to ‘data warehouses’, such as BigQuery or Azure
Synapse, as it was irrelevant for the planned task. Instead, we
focused more on how to upload image data to the platform, as
this was the type of data the students would handle in the
following case. An advantage of using no-code AI in such cases
is that images can be annotated by placing them in folders that
act as labels, compressing them into zip files, and then
uploading them to the platform. The platform then takes care of
processing and cropping the images to standardized formats. A
negative effect, which we informed students about, is that
important features near edges of the images may be cropped.
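The folder-as-label convention can be sketched with a short standard-library script (the folder and file names below are illustrative, not the ones used in the course):

```python
# Annotate images by folder name, then compress for upload:
# weld_images/good/*.png and weld_images/bad/*.png become the
# two labels of a binary classifier once the archive is uploaded.
import zipfile
from pathlib import Path

root = Path("weld_images")
for label in ("good", "bad"):
    (root / label).mkdir(parents=True, exist_ok=True)

# Placeholder files stand in for collected weld images.
(root / "good" / "weld_001.png").write_bytes(b"")
(root / "bad" / "weld_002.png").write_bytes(b"")

with zipfile.ZipFile("weld_images.zip", "w") as zf:
    for path in root.rglob("*.png"):
        # The archive keeps the label-carrying folder structure.
        zf.write(path, path.relative_to(root.parent))

print(sorted(zipfile.ZipFile("weld_images.zip").namelist()))
```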
Then, we demonstrated various examples of ML problems,
and their possible solutions using the no-code AI platform.
Depending on the type of data involved, the platform suggests
certain problem types, as the user chooses the input (data), and one
or more targets (labels). As mentioned, examples of such
problems include image classifications and image/text
similarity searches. Thus, in this phase we also displayed
examples of ways to use pre-trained ImageNet-based and NLP
(e.g., BERT)-based models for classifying and predicting
patterns in images and texts, respectively. Use of pre-trained
models relaxes the requirement for big datasets, as users can
fine-tune these models with their own data. Links to online
tutorials and datasets (e.g., Kaggle) were uploaded to the course
teaching platform, for students who wanted to proceed by
experimenting with different types of data and problems.
In another important part of this demonstration, we showed
how ML models can be evaluated. This is done by splitting the
dataset(s) into a training set and test (and/or validation) set. The
algorithm is not exposed to the test set during training, so it can
be used to evaluate how a model performs on previously unseen
data. Common pitfalls, such as data bias and overfitting, were
also introduced during this session. The platform enabled
generation of two indicators that are commonly used for
evaluating models: receiver operating characteristic (ROC)
curves and confusion matrices, which are especially useful for
enhancing students’ understanding of the output of ML models,
and why their deployment requires careful consideration.
Essentially, an image model outputs a probability of what it
thinks is present in an image, e.g., ‘0.76 cat’. Depending on the
problem at hand, and associated requirements, a threshold can
be set to determine how ‘certain’ a model must be before it can
classify something. Important measures here include accuracy,
recall and precision. While accuracy is a measure of a model’s
overall performance, there is always a trade-off between recall
and precision. Students can be taught the relevance of this
tradeoff using two types of examples: ML-based spam-filters,
and medical diagnostics. When constructing a spam filter it is
often more important to minimize numbers of ‘false positives’
(potentially important emails that end up in the spam filter) than
numbers of ‘false negatives’ (spam emails that end up in the
inbox). Thus, precision is a good measure for such a model, as
it assesses whether what is being classified as ‘spam’ really is
spam. In contrast, during medical diagnosis avoiding false
negatives is often much more crucial than avoiding false
positives (as assessed by a recall measure), because wrongly
classifying ill people as healthy can have severe consequences
for them. For understanding such issues, knowledge of ROC
curves is important, because they illustrate three key aspects of
ML models. First, models output probabilities (in contrast to ‘exact
knowledge’). Second, configuring these outputs involves active
choices of thresholds. Third, these choices entail trade-offs
between different evaluation measures.
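The threshold choice, and the precision/recall trade-off it induces, can be made concrete with a few lines of code. The sketch below uses made-up predicted probabilities for the spam example; scikit-learn's metric functions compute the same quantities that the platform's confusion matrices and ROC curves visualize:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Made-up model outputs: the probability that each email is spam (1).
y_true = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
y_prob = np.array([0.05, 0.10, 0.30, 0.55, 0.75,
                   0.40, 0.60, 0.70, 0.90, 0.95])

# Raising the threshold makes the filter more 'certain' before
# flagging spam: precision rises while recall falls.
for threshold in (0.5, 0.8):
    y_pred = (y_prob >= threshold).astype(int)
    print(f"threshold {threshold}:",
          f"precision={precision_score(y_true, y_pred):.2f}",
          f"recall={recall_score(y_true, y_pred):.2f}")
    print(confusion_matrix(y_true, y_pred))
```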
At the end of the demonstration session, the students were
divided into two groups and assigned the problem-centered task
of helping ‘WeldCorp’ to use ML as an instrument to assess the
quality of their welding joints. A rubric for the task provided a
backstory, stating that WeldCorp was launched in 1994 in
Gothenburg, and that subsequent expansion to other Swedish cities
led to the CEO experiencing problems with maintaining quality
control, so the CEO is now turning to ML for this purpose. The
rubric then told the students:
Your assignment is to help WeldCorp to sustain their growth by
leveraging machine learning. Specifically, your task is to
analyze welding images (images of good and bad welding
points) to develop a model using the no-code AI platform
that can be useful for WeldCorp in a quality assurance context.
1. Describe and justify your choices regarding the data
processing, problem selection, and model training in
the no-code AI platform.
2. Describe how you evaluated your model’s
predictions. Are they accurate enough to use live for
WeldCorp? Why/why not?
3. Discuss: What could be done by WeldCorp to
improve the model’s results? How would they
implement this type of solution in their business?
An important aim during this assignment was to prompt
students to think about and justify their choices during training,
and the output of their model(s), rather than simply striving to
optimize the performance of the model(s). As the module is
part of an AI for business course, we also wanted the students to
discuss how WeldCorp could integrate AI in their business.
The start of the
course included a presentation exercise, in which the students
were asked to state their name and educational background. As
two of the students had experience in computer science, we
intentionally placed these students in separate groups. To get
the students started, they were given a small dataset of 157
images of good and bad welds. The groups were then given
enterprise accounts providing access to the no-code AI
platform. Before engaging in a similar project, we advise
instructors to carefully assess the kinds of user configurations
that candidate platforms offer, as their user management
options vary, and potential issues must be addressed before the
students attempt to use them.
Five days after the initial workshop, a Q&A session was
held with the student groups. No instructions were given before
this session and the content was largely based on the students’
queries. Most questions concerned data. This was consistent
with expectations, as models trained using the intentionally
limited dataset handed out during the previous session would
perform badly, regardless of the platform settings that the
students chose. As already mentioned, data collection and
processing play a key role in ML and “there is no AI without
data” (Gröger, 2021). Illustrative queries from the students
concerned the quality of the supplied dataset, tentative
workarounds, and image formats. However, the main
conclusion the students drew was that more data was needed to
train a model that would produce relevant results.
Between the Q&A and final seminar, the students were
supposed to email or book appointments with the responsible
teachers if they needed supervision. The teachers could observe
and aid the students as they uploaded data then trained and
evaluated ML models. After the Q&A session we observed how
the students engaged in data collection and uploaded larger
datasets with various images to the platform. As the students
aimed to train models based on a binary classification of good
and bad welds, they needed two labels (‘good’ and ‘bad’). The
students applied the procedures previously demonstrated to
them, trained several models, and iteratively fine-tuned the
platform settings, using several sources of data, including social
media, Google image search, and Kaggle.
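The bookkeeping behind such a two-label dataset can be sketched in a few lines of Python. The folder layout assumed below, with the label encoded as each image's parent directory name, is a hypothetical convention for illustration, not the platform's actual ingestion format:

```python
from pathlib import Path

# Hypothetical layout: welds/good/img001.jpg, welds/bad/img002.jpg, ...
LABELS = {"good", "bad"}

def collect_labeled_images(root):
    """Return sorted (path, label) pairs for a binary good/bad weld dataset,
    skipping any files whose parent directory is not a known label."""
    pairs = []
    for path in Path(root).rglob("*.jpg"):
        label = path.parent.name
        if label in LABELS:
            pairs.append((str(path), label))
    return sorted(pairs)
```

A listing like this makes it easy to check class balance before uploading the images to a no-code platform.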
While the workshop and Q&A session were held on
campus, the final seminar was held via Zoom (January 10) as
this was during a time when staff and students at higher
education institutions were gradually returning to campus after
the COVID-19 pandemic. The written assignment included the
following instructions:
You will be presenting your results both in the form of a short
paper, max ten pages, and orally in the final seminar. During
the seminar each group will get 30 minutes to present their
results. You must also participate actively by answering
questions and comments regarding the presentation. Your short
paper should begin with a cover page on which you state the
names of the group participants, the name of the course and the
semester. It is to be handed in at the start of the seminar.
During discussions in the final seminar the students were encouraged to reflect upon the ML process, to enable them to
integrate their acquired skills. In addition to discussing the ML
workflow, the students also proposed ideas for operationalizing
their work in a live setting, such as using automated cameras to
feed data on welding points for evaluation by the DL model. In
this seminar the teachers mainly played a facilitating role, as the
students posed questions and reflected on their results. The
students received pass or fail grades for the task. To pass they
needed to:
- Present a logically coherent suggestion for WeldCorp, both in writing and orally during the seminar.
- Formulate results and the associated discussion in a grammatically correct way, with consistent use of concepts and terms.
The teaching activities outlined above are linked to the five
instruction principles and summarized in Table 3. Depending
on the course, and available data and case(s), these activities
can be varied. For example, the workshop can be divided into
two separate events, with an initial lecture focusing on
theoretical aspects of ML, followed by a more hands-on
workshop. Moreover, the group case can be presented as an
individual or pair-wise task, although this might neglect the
collective character of data work.
Problem-centered: The students were presented with a case of a welding company, WeldCorp, seeking to expand and scale up its business while improving quality control. To help these efforts they were encouraged to apply ML to differentiate between good and bad weld points.

Activation: Since the students had diverse educational backgrounds (business and administration, computer science, and behavioral science), we chose to use a no-code AI platform. This enabled them to incorporate previous skills and work during the course, even if they lacked previous experience of data work.

Demonstration: We showed the students several examples of ways to train ML models via the no-code AI platform. Students were encouraged to take tutorials and experiment with different types of open datasets (e.g., table-, text-, and image-based) and problems that can be accessed through the platform.

Application: The students were divided into two groups and each student was given access to an enterprise account enabling them to use the no-code AI platform to address a new type of problem by applying the previously demonstrated procedures.

Integration: Students were encouraged to reflect on their learning during the final seminar, in both a survey and the course evaluation. During the final seminar they were also expected to learn from each other by preparing questions for the other group.

Table 3. Activities that we and the students engaged in, linked to the five principles of instruction
5.2 The Benefits of Using No-code AI in Education
This subsection presents observed benefits of using no-code AI
to teach ML, which are described below and summarized in
Table 4.
Benefit 1: Visualization of data and provision of a graphical
interface for uploading data.
As already mentioned, a crucial and time-consuming part of
working with ML is collecting and processing data. As the no-code AI platform automated many parts of the ML workflow, the students could devote their time during the exercise to scrutinizing and labeling the data. This was an anticipated and important
part of the task, especially as previous studies have highlighted
tensions among people involved in labeling data for supervised
learning (Lebovitz et al., 2021).
In their course evaluations and written feedback the
students heavily emphasized an increase in their awareness of
the importance of data, and how the no-code approach enabled
them to focus on important features of the datasets used,
potential flaws in them, and problem-solving rather than model-
optimization, as illustrated by the following three quotations:
“I’ve obtained practical knowledge and experience of the
impact of data. And I’ve seen the impact of flaws in the dataset
first-hand. Thus, I think this was an optimal learning method
considering our (and my) educational background.” – student evaluation.
“[I’ve learnt] that data matters! The choice, generating and cleansing of data are crucial.” – student evaluation.
“For me, the barrier to understanding the practical use of AI (or
to ever try it myself) has been my lack of programming and
coding skills. With the no-code approach, I got the opportunity
to try experiments and thus got a ‘black-boxed’ grasp of how it
works. With that, I could focus on the problem that I wanted to
solve, the learning dataset and its effect on the results, and also
on the result itself. So, I think I learned more about AI in this
course than I have in all the other courses combined, and that is
without any code.” – student evaluation.
Both groups chose to label their images in a binary fashion as
‘good’ or ‘bad’. To establish the consensus required for creating
‘ground truths’, one of the groups formalized the data labeling
process in their report with a ‘weld quality framework’. The
other group engaged strongly in data augmentation, extending their dataset 4- to 5-fold by manipulating the images through zooming, cutting and rotating them. These slightly
different approaches were displayed in the results and reflected
upon in the student reports. While the group that applied data
augmentation focused more on the performance of the models
they created, and thus achieved better measures (lower rates of
false positives or negatives), the other group focused more on
trying to explain the output of the models they created, i.e., why
the models made certain predictions.
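As a rough illustration of what such augmentation does, and not the group's actual tooling, the sketch below applies rotation, mirroring and center-cropping to an image represented as a plain row-major pixel grid, yielding four variants per original image:

```python
def rotate90(img):
    """Rotate a row-major pixel grid 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def hflip(img):
    """Mirror the image left to right."""
    return [row[::-1] for row in img]

def center_crop(img, h, w):
    """Cut out a centred h x w patch, a simple stand-in for zooming."""
    top = (len(img) - h) // 2
    left = (len(img[0]) - w) // 2
    return [row[left:left + w] for row in img[top:top + h]]

def augment(img):
    """Return the original plus three transformed copies (4x the data)."""
    return [img, rotate90(img), hflip(img), rotate90(rotate90(img))]
```

In practice the students worked on real image files through the platform's interface; the point here is only that each transformation produces a new labeled example at near-zero collection cost.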
Benefit 2: Access to a portfolio of pre-trained models, tutorials
and datasets, as well as automatic selection and fine-tuning of
algorithm(s) for training.
Both groups ended up using a pre-trained model
(EfficientNetB0) to solve an image classification problem
(single label) in the platform. Each group formed training,
validation and test sets respectively containing 80, 10 and 10%
of their full datasets (images), which is common practice and a
default option in the platform. The students refined their
models’ outputs in two ways. First, they iteratively adjusted
settings in the platform, such as increasing the learning rate
(with careful monitoring of the variances of performance
measures of the predictions generated by splitting the dataset to
avoid overtraining the model). The platform assists such
adjustment by suggesting settings to enhance the models’
performance, e.g., switching to a different pretrained model,
and modifying the learning rate (Figure 4).
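The 80/10/10 partition that the platform offered as a default can be sketched in plain Python. The function below is illustrative rather than the platform's implementation; the fixed seed simply makes the shuffle reproducible:

```python
import random

def split_dataset(items, seed=42, ratios=(0.8, 0.1, 0.1)):
    """Shuffle the items and split them into training, validation and
    test sets (80/10/10 by default)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

Shuffling before splitting matters: if images are sorted by source or label, an unshuffled split would give the validation and test sets a distribution that differs from the training set.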
Benefit 3: Visual interface for evaluating and comparing the
performance of models (e.g., through ROC curve- and
confusion matrix-based analyses).
Second, as one of the groups particularly emphasized, the students strove to ensure that the included data were
contextually relevant, and suitable for WeldCorp’s purposes.
This was done after they received output from the ML model in
the form of confusion matrices and ROC-curves (Figures 5 and
6) and could assess whether certain types of images were
incorrectly classified, identify potential biases in the data, and
signs of model overtraining. Examples mentioned during the
final seminar were images of painted welds, which would not
be relevant in the industrial context they imagined.
Figure 5. Illustrative Model Evaluation Output
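The quantities behind a confusion matrix, and a single point on a ROC curve, can be computed directly from a model's predictions. In this sketch we treat 'bad' welds as the positive class; that choice is an assumption for illustration, as the class the platform treated as positive is not stated here:

```python
def confusion_matrix(y_true, y_pred, positive="bad"):
    """Count true/false positives and negatives for a binary
    good/bad weld classifier ('bad' assumed to be the positive class)."""
    tp = fp = fn = tn = 0
    for t, p in zip(y_true, y_pred):
        if p == positive:
            if t == positive:
                tp += 1
            else:
                fp += 1
        else:
            if t == positive:
                fn += 1
            else:
                tn += 1
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn}

def rates(cm):
    """Derive the true and false positive rates behind a ROC curve point."""
    tpr = cm["tp"] / (cm["tp"] + cm["fn"]) if cm["tp"] + cm["fn"] else 0.0
    fpr = cm["fp"] / (cm["fp"] + cm["tn"]) if cm["fp"] + cm["tn"] else 0.0
    return tpr, fpr
```

Inspecting which individual images land in the false-positive and false-negative cells is exactly the kind of analysis that led the students to question contextually irrelevant images, such as painted welds.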
Features briefly mentioned in the course included tools to deploy the models created in the platform. Although deployment was not relevant to the assigned task, as the students were not expected to integrate their solution in a live environment, we presented a few paths to do so. Examples included plugins for common
software (such as Excel, Google Sheets, Bubble) as well as the
ability to call APIs for easy integration of a model in an
operating environment. The platform also includes a graphical
interface for making predictions regarding new images, as
shown in Figure 7. We used this function during the final
seminar, to show the students how their models performed on
selected images of good and bad welds.
Figure 7. Results of a test of a model’s performance on
unseen data during the final seminar
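To give a sense of what calling such an API involves, the sketch below assembles a JSON payload carrying a base64-encoded image. The endpoint shape and field names are hypothetical, not those of the platform used in the course or any specific vendor:

```python
import base64
import json

def build_prediction_request(image_bytes, model_id):
    """Assemble a JSON payload for a hypothetical image-prediction
    endpoint; field names 'model' and 'image' are illustrative."""
    return json.dumps({
        "model": model_id,
        # Binary image data must be text-encoded to travel inside JSON.
        "image": base64.b64encode(image_bytes).decode("ascii"),
    })
```

An operating environment such as the automated-camera setup the students imagined would POST payloads like this to the deployed model and act on the returned classification.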
Thus, by simplifying parts of the ML workflow related to
training, evaluating and deploying models, learners can focus
on data collection, and interpreting outputs of the models, to
gain a sense of whether the chosen approach is suitable and
feasible rather than engaging in model optimization. Based on
our materials, we generated themes in the form of distinct ways
that no-code AI facilitates learning about ML. These themes are
described in Table 4.
ML workflow stage / Role of no-code AI
Data collection and processing: Provision of a graphical interface for visualizing, uploading and processing data.
Model training: Access to a portfolio of pre-trained models, tutorials and datasets, as well as automatic selection and fine-tuning of one or more algorithm(s) for training.
Model evaluation: Visual interface for evaluating and comparing the performance of models (e.g., through ROC curve- and confusion matrix-based analyses).
Model deployment: API interfaces with complementary plugins to aid integration in organizational settings.

Table 4. Ways that no-code AI can facilitate learning about ML
5.3 The Challenges of Using No-code AI in Education
Our approach was not free of challenges, including three that
are summarized here. First, it is important to formulate a live case in ML terms and to make a preliminary judgement of whether the students can feasibly collect the necessary data during the task. Finding an appropriate case may be time-consuming, but data repositories, such as Kaggle, may aid this process.
Second, as mentioned, the teachers also encountered challenges
related to user management routines before the module started
and needed help from the platform owners to set up separate
organizations for the students. These challenges highlight the
importance of considering and addressing potential user
management issues in advance and choosing an appropriate
platform for the intended purposes. The market for these
platforms is rapidly evolving. While the Peltarion platform is
now discontinued, several alternatives are available, such as
BigML, HuggingFace and solutions from large tech companies
(e.g., Microsoft Azure, Amazon SageMaker, Google AutoML,
and Teachable Machine). These often come in both free and
paid versions. For individual use, the free versions may be
suitable for smaller tasks and datasets. A common advantage of
paid versions is incorporation of more collaborative features,
which enables re-use and comparisons of student projects over
the years. Whichever platform and version is chosen it is also
important to ensure that students do not upload sensitive data,
depending on the regulatory context of the educational setting.
Third, the student feedback included proposals that groups
should be smaller in future versions of the course, as they
experienced difficulties in engaging everyone simultaneously
when using the platform.
As the no-code approach enabled students to engage in collective data work, the selected empirical setting provided an ideal opportunity to address our two questions:
RQ1: How can no-code AI be used to teach ML in non-
technical educational programs?
RQ2: What are the benefits and challenges of using no-code
AI in education?
We answer RQ1 by proposing a problem-centered approach to using no-code AI in higher education, with instructions for teachers. Regarding RQ2, we show how no-code AI can help
to guide students through the ML workflow (data processing,
model training, evaluation and deployment), and present
important challenges (ML case construction, platform
selection and user management, and student group
composition) that we encountered during the course.
Our contribution to the IS education literature is two-fold.
First, we provide information for instructors on how to
incorporate no-code AI in their courses. Second, we provide
insights into the benefits and challenges of using no-code AI
tools to support the ML workflow in educational settings.
Through this study we have set the stage for incorporating
a new generation of AI tools in IS curricula by showing how
they can be used to support students in analyzing live cases,
particularly in conjunction with an approach based on
principles of instruction. By doing so, in this paper we have
proposed an innovative solution to an IS teaching need,
grounded in theory, and tested in an educational setting
(Lending and Vician, 2012). The novelty of our approach is
the application of tools that are usually only accessible to
computer scientists to problems related to business practices
and phenomena addressed in social sciences. As the no-code
AI tools available are rapidly increasing and evolving (a few,
of many, examples of contemporary no-code or low-code
solutions that support the ML workflow include BigML,
Huggingface and Teachable Machine) we urge educators to
keep track of this development, and find approaches to
implement such tools in their curricula, in combination with
lessons on how to use AI in effective and responsible ways.
Bhattacharyya, S. S., & Kumar, S. (2021). Study of deployment
of “low code no code” applications toward improving
digitization of supply chain management. Journal of
Science and Technology Policy Management, 14(2).
Bishop, C. M. (2006). Pattern recognition and machine
learning. New York, NY: Springer.
Braun, V., & Clarke, V. (2012). Thematic analysis. In H. Cooper,
P. M. Camic, D. L. Long, A. T. Panter, D. Rindskopf, & K.
J. Sher (Eds.), APA handbook of research methods in
psychology, Vol. 2. Research designs: Quantitative,
qualitative, neuropsychological, and biological (pp. 57–
71). American Psychological Association.
Chapman, P., Clinton, J., Kerber, R., Khabaza, T., Reinartz, T.,
Shearer, C., & Wirth, R. (1999, March). The CRISP-DM
user guide. In 4th CRISP-DM SIG Workshop in Brussels in
March (Vol. 1999).
Chen, L. (2022). Current and Future Artificial Intelligence (AI)
Curriculum in Business School: A Text Mining Analysis.
Journal of Information Systems Education, 33(4), 416-426.
Coffin, E. (2021). I Think I Need AI! What is AI? BNP Media.
Clarke, V., & Braun, V. (2014). Thematic analysis. In
Encyclopedia of Critical Psychology (pp. 1947-1952).
Springer, New York, NY.
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs,
C., Crick, T., ... & Williams, M. D. (2021). Artificial
Intelligence (AI): Multidisciplinary perspectives on
emerging challenges, opportunities, and agenda for
research, practice and policy, International Journal of
Information Management, 57. 
Fredriksson, T., Mattos, D. I., Bosch, J., & Olsson, H. H.
(2020). Data labeling: An empirical investigation into
industrial challenges and mitigation strategies. In
International Conference on Product-Focused Software
Process Improvement (pp. 202-216). Springer, Cham.
Geske, F., Hofmann, P., Lämmermann, L., Schlatt, V., &
Urbach, N. (2021). Gateways to Artificial Intelligence:
Developing a taxonomy for AI Service platforms,
European Conference on Information Systems (ECIS).
Gröger, C. (2021). There is no AI without data.
Communications of the ACM, 64(11), 98-108.
Hilbert, M., & López, P. (2011). The world’s technological
capacity to store, communicate, and compute information.
Science, 332(6025), 60-65.
Holmström, J., Sandberg, J., & Mathiassen, L. (2011).
Educating reflective practitioners: The design of an IT
Management Masters Program, Americas Conference on
Information Systems (AMCIS).
How, M. L., Chan, Y. J., Cheah, S. M., Khor, A. C., & Say, E.
M. P. (2021). Artificial Intelligence for Social Good in
Responsible Global Citizenship Education: An Inclusive
Democratized Low-Code Approach. In Proceedings of the
3rd World Conference on Teaching and Education, 81-89.
Humble, N., & Mozelius, P. (2022). The threat, hype, and
promise of artificial intelligence in education. Discover
Artificial Intelligence, 2(1), 1-13.
Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245).
Kaggle (2023). Last accessed March 2023.
Kayhan, V. (2022). When to Use Machine Learning: A Course
Assignment, Communications of the Association for
Information Systems, vol 50, 122-142.
Kelleher, J.D., Brendan, T. (2018). Machine Learning 101. In
Data Science, MIT Press, 97-150.
Kling, N., Runte, C., Kabiraj, S., & Schumann, C. A. (2022).
Harnessing Sustainable Development in Image Recognition
Through No-Code AI Applications: A Comparative
Analysis. In International Conference on Recent Trends in
Image Processing and Pattern Recognition, 146-155.
Springer, Cham.
Kühl, N., Schemmer, M., Goutier, M., & Satzger, G. (2022).
Artificial intelligence and machine learning. Electronic
Markets, 32, 2235-2244.
Leavitt, K., Schabram, K., Hariharan, P., & Barnes, C. M.
(2021). Ghost in the machine: On organizational theory in
the age of machine learning. Academy of Management
Review, 46(4), 750-777.
Lebovitz, S., Levina, N., & Lifshitz-Assaf, H. (2021). Is AI
Ground Truth Really ‘True’? The Dangers of Training and
Evaluating AI Tools Based on Experts’ Know-What. MIS
Quarterly, 1501-1525.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
Lending, D., & Vician, C. (2012). Writing IS Teaching Tips:
Guidelines for JISE Submission. Journal of Information
Systems Education, 23(1), 11-18.
Lethbridge, T. C. (2021, October). Low-code is often high-
code, so we must design low-code platforms to enable
proper software engineering. In International Symposium
on Leveraging Applications of Formal Methods, 202-212.
Springer, Cham.
Liebowitz, J. (1992). Invited Paper: Teaching an Applied
Expert Systems Course: A Content Outline. Journal of
Information Systems Education, 4(3), 5-10.
Liebowitz, J. (1995). Integrating Expert Systems Throughout
the Undergraduate Curriculum. Journal of Information
Systems Education, 7(1), 34-36.
Lins, S., Pandl, K. D., Teigeler, H., Thiebes, S., Bayer, C., &
Sunyaev, A. (2021). Artificial Intelligence as a service,
Business & Information Systems Engineering, 63(4), 441-
Luan, H., & Tsai, C. C. (2021). A review of using machine
learning approaches for precision education, Educational
Technology & Society, 24(1), 250-266.
Luo, Y., Liang, P., Wang, C., Shahin, M., & Zhan, J. (2021,
October). Characteristics and Challenges of Low-Code
Development: The Practitioners' Perspective. In
Proceedings of the 15th ACM/IEEE International
Symposium on Empirical Software Engineering and
Measurement (ESEM), 1-11.
Ma, Y & Siau, K. (2019). Higher Education in the AI Age.
Americas Conference on Information Systems (AMCIS)
Proceedings. 4.
Mathiassen, L., & Purao, S. (2002). Educating reflective
systems developers, Information Systems Journal, 12(2),
Merrill, M.D. (2007). A task-centered instructional strategy,
Journal of Research on Technology in Education, 40(1), 5-
Merrill, M.D. (2013). First principles of instruction: Identifying
and designing effective, efficient and engaging instruction,
Hoboken, NJ: Pfeiffer/John Wiley & Sons.  
Peltarion (2022). The Peltarion deep learning platform. Accessed September 2022.
Richardson, M. L., & Ojeda, P. I. (2022). A “Bumper-Car”
Curriculum for Teaching Deep Learning to Radiology
Residents. Academic Radiology, 29(5), 763-770.
Rokis, K., & Kirikova, M. (2022). Challenges of Low-
Code/No-Code Software Development: A Literature
Review. In International Conference on Business
Informatics Research, 3-17. Springer, Cham.
Sahay, A., Indamutsa, A., Di Ruscio, D., & Pierantonio, A.
(2020, August). Supporting the understanding and
comparison of low-code development platforms. In 2020
46th Euromicro Conference on Software Engineering and
Advanced Applications (SEAA), 171-178. IEEE.
Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh,
P., & Aroyo, L. M. (2021, May). “Everyone wants to do the
model work, not the data work”: Data Cascades in High-
Stakes AI. In proceedings of the 2021 CHI Conference on
Human Factors in Computing Systems, 1-15.
Schröer, C., Kruse, F., & Gómez, J. M. (2021). A systematic
literature review on applying CRISP-DM process model.
Procedia Computer Science, 181, 526-534.
Simsek, Z., Vaara, E., Paruchuri, S., Nadkarni, S., & Shaw, J.
D. (2019). New ways of seeing big data, Academy of
Management Journal, 62(4), 971-978.
Sturm, T., Gerlach, J. P., Pumplun, L., Mesbah, N., Peters, F.,
Tauchert, C., ... & Buxmann, P. (2021). Coordinating
human and machine learning for effective organizational
learning. MIS Quarterly, 45(3).
Sundberg, L. & Holmström, J. (2022). Towards ‘Lightweight’
Artificial Intelligence: A Typology of AI Service
Platforms. Americas Conference on Information Systems
Sundberg, L., & Holmström, J. (2023). Democratizing artificial
intelligence: How no-code AI can leverage machine
learning operations. Business Horizons.
Umeå University (2022). AI for business course curriculum,
accessed March 2023.
Yan, Z. (2021). The Impacts of Low/No-Code Development on
Digital Transformation and Software Development. arXiv
preprint arXiv:2112.14073.
Wang, S., & Wang, H. (2021). A Teaching Module of No-Code
Business App Development. Journal of Information
Systems Education, 32(1), 1-8.
Woo, M. (2020). The rise of no/low code software
development—No experience needed? Engineering, 6(9),
Leif Sundberg is an associate professor at the Department of
Informatics, Umeå University.
Sundberg’s research interests
involve digital government, the use
of no-code artificial intelligence,
philosophy of technology and risk
society studies. Sundberg has a
broad teaching experience from
engineering management and
information systems. He has
published his work in journals such as Safety Science and
Information Polity and presented it at international conferences.
Jonny Holmström is a professor of Information Systems at
Umeå University and director and co-
founder of Swedish Center for Digital
Innovation. His research interests are
digital innovation, digital
transformation and digital
entrepreneurship. He is serving at the
editorial boards of CAIS, EJIS,
Information and Organization, and
JAIS. His work has appeared in
journals such as Communications of the AIS, Design Issues,
European Journal of Information Systems, Information and
Organization, Information Systems Journal, Information
Technology and People, Journal of the AIS, Journal of
Information Technology, Journal of Strategic Information
Systems, MIS Quarterly, Research Policy, and The Information
Full-text available
The idea of building intelligent machines has been around for centuries, with a new wave of promising artificial intelligence (AI) in the twenty-first century. Artificial Intelligence in Education (AIED) is a younger phenomenon that has created hype and promises, but also been seen as a threat by critical voices. There have been rich discussions on over-optimism and hype in contemporary AI research. Less has been written about the hyped expectations on AIED and its potential to transform current education. There is huge potential for efficiency and cost reduction, but there is also aspects of quality education and the teacher role. The aim of the study is to identify potential aspects of threat, hype and promise in artificial intelligence for education. A scoping literature review was conducted to gather relevant state-of-the art research in the field of AIED. Main keywords used in the literature search were: artificial intelligence, artificial intelligence in education, AI, AIED, teacher perspective, education, and teacher. Data were analysed with the SWOT-framework as theoretical lens for a thematic analysis. The study identifies a wide variety of strengths, weaknesses, opportunities, and threats for artificial intelligence in education. Findings suggest that there are several important questions to discuss and address in future research, such as: What should the role of the teacher be in education with AI? How does AI align with pedagogical goals and beliefs? And how to handle the potential leak and misuse of user data when AIED systems are developed by for-profit organisations?
Full-text available
Within the last decade, the application of “artificial intelligence” and “machine learning” has become popular across multiple disciplines, especially in information systems. The two terms are still used inconsistently in academia and industry—sometimes as synonyms, sometimes with different meanings. With this work, we try to clarify the relationship between these concepts. We review the relevant literature and develop a conceptual framework to specify the role of machine learning in building (artificial) intelligent agents. Additionally, we propose a consistent typology for AI-based information systems. We contribute to a deeper understanding of the nature of both concepts and to more terminological clarity and guidance—as a starting point for interdisciplinary discussions and future research.
Full-text available
The number of institutions offering machine learning courses is on the rise. Supplementary materials that help teach these courses fail to address one of the most important steps of the machine learning process, namely identifying a problem, and determining whether it is appropriate for machine learning. We address this problem by first reviewing frameworks in extant work, then proposing a decision flow to help students determine whether an input-output relationship is appropriate for machine learning. Following the discussion of the steps in the decision flow, we present a course assignment that reinforces the concepts in the decision flow. We conclude by discussing the lessons learned after using this assignment in a graduate class.
Full-text available
Organizational decision-makers need to evaluate AI tools in light of increasing claims that such tools out-perform human experts. Yet, measuring the quality of knowledge work is challenging, raising the question of how to evaluate AI performance in such contexts. We investigate this question through a field study of a major U.S. hospital, observing how managers evaluated five different machine-learning (ML) based AI tools. Each tool reported high performance according to standard AI accuracy measures, which were based on ground truth labels provided by qualified experts. Trying these tools out in practice, however, revealed that none of them met expectations. Searching for explanations, managers began confronting the high uncertainty of experts’ know-what knowledge captured in ground truth labels used to train and validate ML models. In practice, experts address this uncertainty by drawing on rich know-how practices, which were not incorporated into these ML-based tools. Discovering the disconnect between AI’s know-what and experts’ know-how enabled managers to better understand the risks and benefits of each tool. This study shows dangers of treating ground truth labels used in ML models objectively when the underlying knowledge is uncertain. We outline implications of our study for developing, training, and evaluating AI for knowledge work.
Conference Paper
Full-text available
Background: In recent years, Low-code development (LCD) is growing rapidly, and Gartner and Forrester have predicted that the use of LCD is very promising. Giant companies, such as Microsoft, Mendix, and Outsystems have also launched their LCD platforms. Aim: In this work, we explored two popular online developer communities, Stack Overflow (SO) and Reddit, to provide insights on the characteristics and challenges of LCD from a practitioners' perspective. Method: We used two LCD related terms to search the relevant posts in SO and extracted 73 posts. Meanwhile, we explored three LCD related subreddits from Reddit and collected 228 posts. We extracted data from these posts and applied the Constant Comparison method to analyze the descriptions, benefits, and limitations and challenges of LCD. For platforms and programming languages used in LCD, implementation units in LCD, supporting technologies of LCD, types of applications developed by LCD, and domains that use LCD, we used descriptive statistics to analyze and present the results. Results: Our findings show that: (1) LCD may provide a graphical user interface for users to drag and drop with little or even no code; (2) the equipment of out-of-the-box units (e.g., APIs and components) in LCD platforms makes them easy to learn and use as well as speeds up the development; (3) LCD is particularly favored in the domains that have the need for automated processes and workflows; and (4) practitioners have conflicting views on the advantages and disadvantages of LCD. Conclusions: Our findings suggest that researchers should clearly define the terms when they refer to LCD, and developers should consider whether the characteristics of LCD are appropriate for their projects. CCS CONCEPTS • Software and its engineering → Software development techniques.
Organizations are increasingly seeking to generate value and insights from their data by integrating advances in artificial intelligence (AI) such as machine learning (ML) systems into their operations. However, there are several managerial challenges associated with ML operations (MLOps). In this article we outline three key challenges and discuss how an emerging form of AI platforms – ‘no-code AI’ – may help organizations to address and overcome them. We outline how no-code AI can leverage MLOps by closing the gap between business and technology experts, enabling faster iterations between problems and solutions, and aiding infrastructure management. After outlining important remaining challenges associated with no-code AI and MLOps we propose three managerial recommendations. By doing so, we provide insights into an important novel, emerging phenomenon in AI software and set the stage for further research in the area.
Rationale and Objectives Our goal was to create an artificial intelligence (AI) training curriculum for residents that taught them to create, train, evaluate and refine deep learning (DL) models. Hands-on training of models was emphasized and didactic presentations of the mathematical and programmatic underpinnings of DL were minimized. Materials and Methods We created a three-session, 6-hour curriculum based on a “no-code” machine learning system called This class met weekly in June 2021. Pre-class homework included reading assignments, software installation, dataset downloads, and image-collection and labeling. The class sessions included several short, didactic presentations, but were largely devoted to hands-on training of DL models. After the course, our residents completed a short, anonymous, online survey about the course. Results Our residents learned to acquire and label a wide variety of image datasets. They quickly learned to train DL models to classify these datasets, as well as how to evaluate and refine these models. Our survey showed that most residents felt AI to be important and worth learning, but most were not very interested in learning to program. Most felt that the course taught them useful things about DL, and they were now more interested in the topic. Most would recommend the course to other residents, as well as to medical students and to radiology faculty. Conclusion The course met our objectives of teaching our residents to create, train, evaluate, and refine DL models. We hope that the hands-on experience they gained in this course will enable them to recognize problems in diagnostic AI systems, and to help solve such problems in their own radiology practices.
Purpose: The purpose of this study is to understand the concept of “Low Code No Code” applications and examine their scope of application for web design, rapid application development (RAD), and supply chain digitization (SCD). Design/methodology/approach: A qualitative exploratory study was conducted. The authors prepared a semi-structured, open-ended questionnaire, and based on it, in-depth interviews were conducted with subject matter experts having more than 10 years of experience in supply chain management and digitization. The questionnaire focused on the current reach and future potential of “Low Code No Code” platforms. A total of 20 responses were collected from experts, at which point thematic saturation was reached. Non-probabilistic convenience sampling was applied to identify the experts, and the data were content analyzed for themes. Findings: The major finding that emerged from the study was that “Low Code No Code” platform applications could be used across end-to-end SCD. The study also revealed that RAD through “Low Code No Code” platforms could reduce organizations’ dependency on coders. In the case of procurement, “Low Code No Code” applications could improve vendor and supplier management by streamlining processes. Cost-effective and easy-to-maintain “Low Code No Code” application development could help medium and small-scale enterprises level the playing field against large organizations. A lack of adoption strategy and low perceived usefulness were identified as major barriers to organizational adoption of “Low Code No Code” applications. Research limitations/implications: “Low Code No Code” application-based automation would enable better utilization of organizational supply chain (SC) resources and capabilities, which would improve firms’ sustainability performance.
Furthermore, it would also enable the provision of SC services at a lower cost, thus benefiting customers. Practical implications: “Low Code No Code” application-based automation would help organizations reduce their dependency on coders and Information Technology developers for SCD. It could also allow SC managers to build more apps in less time without the need for complex coding, potentially reducing app development costs for digitizing SCs. Originality/value: To the best of the authors’ knowledge, this is one of the first studies of how “Low Code No Code” applications could revolutionize the SC through these app development capabilities. The study also provides an extensive examination of the Diffusion of Innovations and Technological Organizational Theory frameworks in the context of “Low Code No Code” technology adoption.
In recent years, the field of education has shown a clear trend toward precision education. As a rapidly evolving AI technique, machine learning is viewed as an important means of realizing it. In this paper, we systematically review 40 empirical studies of machine-learning-based precision education. The results showed that the majority of studies focused on predicting learning performance or dropout and were carried out in online or blended learning environments among university students majoring in computer science or STEM, whereas the data sources were divergent. The commonly used machine learning algorithms, evaluation methods, and validation approaches are presented. Emerging issues and future directions are discussed accordingly.