Journal of Information Systems Education
Volume 35, Issue 1, Winter 2024

Teaching Tip
Using No-Code AI to Teach Machine Learning in Higher Education
Leif Sundberg and Jonny Holmström
Recommended Citation: Sundberg, L., & Holmström, J. (2024). Teaching Tip:
Using No-Code AI to Teach Machine Learning in Higher Education. Journal of
Information Systems Education, 35(1), 56-66. https://doi.org/10.62273/CYPL2902
Article Link: https://jise.org/Volume35/n1/JISE2024v35n1pp56-66.html
Received: December 5, 2022
First Decision: March 1, 2023
Accepted: May 2, 2023
Published: March 15, 2024
Find archived papers, submission instructions, terms of use, and much more at the JISE website:
https://jise.org
ISSN: 2574-3872 (Online) 1055-3096 (Print)
Teaching Tip
Using No-Code AI to Teach Machine Learning in Higher
Education
Leif Sundberg
Jonny Holmström
Swedish Center for Digital Innovation, Department of Informatics
Umeå University
Umeå, Sweden
leif.sundberg@umu.se, jonny.holmstrom@umu.se
ABSTRACT
With recent advances in artificial intelligence (AI), machine learning (ML) has been identified as particularly useful for
organizations seeking to create value from data. However, as ML is commonly associated with technical professions, such as
computer science and engineering, incorporating training in the use of ML into non-technical educational programs, such as social
sciences courses, is challenging. Here, we present an approach to address this challenge by using no-code AI in a course for
university students with diverse educational backgrounds. This approach was tested in an empirical, case-based educational setting,
in which students engaged in data collection and trained ML models using a no-code AI platform. In addition, a framework
consisting of five principles of instruction (problem-centered learning, activation, demonstration, application, and integration) was
applied. This paper contributes to the literature on IS education by providing information for instructors on how to incorporate no-
code AI in their courses and insights into the benefits and challenges of using no-code AI tools to support the ML workflow in
educational settings.
Keywords: Artificial intelligence, Machine learning, IS education research, Information systems education
1. INTRODUCTION
Machine learning (ML), a subfield of artificial intelligence
(AI), focuses on the development, application, and analysis of
computer systems capable of learning from experience. In a
common variant, supervised ML, a system is shown numerous
examples of a type of data, e.g., images or texts describing
particular objects or phenomena, to train it to learn or
recognize patterns in them. The system can then use this learning to make predictions about new, unseen data, i.e., data it has not
previously encountered (Jordan & Mitchell, 2015; Kühl et al.,
2022). Leavitt et al. (2021, p. 750) define ML as “a broad subset
of artificial intelligence, wherein a computer program applies
algorithms and statistical models to construct complex patterns
of inference within data” (see also Bishop, 2006).
Massive increases in the processing power of digital
technology and available data, in combination with better
algorithms, e.g., deep learning algorithms (see Lecun et al.,
2015) have set the stage for increases in the use of ML in many
contexts (Dwivedi et al., 2021). Accordingly, organizations are
increasingly deploying intelligent systems that can process
large amounts of data, provide knowledge and insights, and
operate autonomously (Simsek et al., 2019; Sturm et al., 2021).
As noted by Ma and Siau (2019, p. 1), “Higher education needs to change and evolve quickly and continuously to prepare students for the upheavals in the job market caused by AI, machine learning, and automation.” Among other things, these
authors argue that AI must be integrated into academic
curricula, and not only those of science, technology,
engineering, and mathematics (STEM) departments. However,
despite abundant research on applications of AI in educational
settings (e.g., Humble & Mozelius, 2022; Luan & Tsai, 2021),
much less attention has been paid to instruction of students with
non-technical backgrounds in ML’s practical use and
applications (Kayhan, 2022). As ML is commonly associated
with technical professions, such as computer science and
engineering, incorporating training in its use into non-technical
educational programs, such as business- and management-
oriented social sciences and Information Systems (IS)
programs, is challenging. Similar issues have been raised in
previous research on novel intelligent systems (Liebowitz,
1992, 1995) as educators have sought to integrate their use into
business and IS programs. Recently, scholars have identified a
need to integrate AI curricula in ways that enable students to
develop a sufficient understanding of technology such as ML to
apply it without detailed knowledge of AI algorithms (Chen,
2022). In this paper, we assess no-code AI platforms’
potential utility in an effort to meet this need. In contrast to
conventional AI systems, which require significant resources
for installation and use, these platforms can be readily applied
in educational contexts. Thus, they are easy-to-use and
affordable forms of AI, and they guide users through the
process of developing and deploying AI models, with no need
to learn all about the intricacies associated with complex
algorithms (Lins et al., 2021; Richardson & Ojeda, 2022).
Hence, in this paper, we pose two research questions (RQs):
RQ1: How can no-code AI be used to teach ML in non-
technical educational programs?
RQ2: What are the benefits and challenges of using no-
code AI in education?
As already mentioned, “non-technical” refers here to non-
STEM programs, such as business- and management-oriented
courses. To answer the RQs, we present a teaching tip based on
a case study of a master’s level AI for Business course at Umeå
University, Sweden, in which qualitative data were collected
through interactions with, and observations of, the students. In
the remaining sections of the paper: we summarize previous
research on no-code software, describe the educational setting,
describe the materials and methods used, present the results,
discuss them, and finally offer concluding remarks.
2. BACKGROUND: TOWARDS “LIGHTWEIGHT” AI
In this section, we present a brief overview of the ML workflow
(subsection 2.1) and then summarize the literature on the
emergence of no-code AI platforms (subsection 2.2).
2.1 What is Machine Learning?
ML refers to a broad set of AI applications in which computers
build models based on patterns they recognize in datasets and
use the models to generate hypotheses about the world. Such
models have myriads of uses in problem-solving software
exploited in industrial and other organizations (Russell &
Norvig, 2022). The general ML workflow (e.g., Chapman et al.,
1999; Kelleher & Brendan, 2018; Schröer et al., 2021) begins
with the creation of a training dataset from which a machine can
learn something (Figure 1). Most applications today are based
on supervised learning procedures through which a machine
learns from labeled data, e.g., text describing an image, such as
a photo or drawing of a dog or cat (Fredriksson et al., 2020).
Then the training dataset is processed by an algorithm that “trains” the machine to recognize corresponding patterns. The
outcome of this process is an ML model that can be used to
make predictions regarding previously unseen data. During the
training process, part of a dataset (e.g., 20% of the images in an
image classifier case) is reserved for testing the model to avoid
problems such as overfitting. Acceptable performance of the
model on the test datasets indicates that it may be used to solve
problems in real-world contexts, such as organizational
settings, if the data provide relevant representations of the
things or phenomena that must be recognized to solve the
problems.
This description is a somewhat simplified version of the
ML workflow. In reality, it takes several iterations of data
collection loops and knowledge consolidation processes to
create a model that provides meaningful results as experts may
have diverging perceptions of what data represent (see Lebovitz
et al., 2021 for a detailed discussion on experts’ disagreements
during data annotation).
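To make the workflow in Figure 1 concrete, the following minimal sketch walks through the same steps in code. The synthetic dataset, the choice of a random forest model, and the 80/20 split are illustrative assumptions for this sketch only and are not part of the no-code platform discussed later in the paper.

```python
# A minimal sketch of the supervised ML workflow in Figure 1
# (assumed details: synthetic data, a random forest, an 80/20 split).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# 1. Create a labeled dataset (synthetic here, standing in for labeled images or texts).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# 2. Reserve part of the data for testing, to detect problems such as overfitting.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 3. Train a model on the labeled examples.
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# 4. Evaluate the model on previously unseen data.
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```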
2.2 No-Code AI
No-code solutions for software development have been subject
to previous research as they enable non-programmers with little
or no coding experience to produce various applications
(Bhattacharyya & Kumar, 2021; Luo et al., 2021; Lethbridge,
2021; Sahay et al., 2020; Yan, 2021). By adopting low-code
principles, enterprises may not only save time and costs but also
narrow the gaps between business operations and information
technologies, thereby enabling more rapid development and
improvements in product and service quality (Rokis &
Kirikova, 2022).
Figure 1. A Simplified Machine Learning Workflow
As noted by Sundberg and Holmström (2022, 2023), a new
generation of “lightweight” no-code AI platforms, also known as AI as a service (Lins et al., 2021) or simply AI service (Geske et al., 2021) platforms, enables non-data scientists to train ML
models to make predictions. Such platforms may match, or even
outperform, coded solutions (Kling et al., 2022). Hence, no-
code AI platforms may be widely applied in diverse settings,
including citizen science, and as low-cost solutions in emerging
markets. In the long run, it has been argued that access to user-
friendly, low-code AI could democratize the adoption of these
systems and stimulate their multidisciplinary use (How et al.,
2021). For example, new drag-and-drop interfaces enable
anyone to develop, train, and test AI algorithms in a few hours.
In combination with a range of open-source solutions and
plugins, this vastly simplifies algorithm development and
deployment (Coffin, 2021). The advances are so rapid that
within two years of Woo (2020, p. 961) stating that “AI might
be able to automatically produce code,” generative AI tools such as GitHub Copilot and ChatGPT were enabling code generation based on a user’s input. Computer
scientists have always dreamt of writing programs that write
themselves, and the dream is becoming a commonplace reality.
Recently, academic researchers have also recognized the
powerful potential utility of no-code apps in educational
settings. For instance, Wang and Wang (2021) argue that no-
code (or low-code) app development is transforming traditional
software development practices and present a teaching case
involving the development of a business app.
3. EDUCATIONAL SETTING
As noted by Holmström et al. (2011), rapid technological
developments create challenges for maintaining up-to-date
curricula for educating professionals who will work in
environments with high levels of technology. They highlight
several important issues regarding IS teaching, including the
importance of ensuring that the students acquire practically
relevant skills through the use of appropriate pedagogical
approaches and generic types of knowledge. As AI is being
increasingly adopted in diverse domains (Dwivedi et al., 2021),
most, if not all, professionals will engage with or be affected by
intelligent systems in their careers. However, as mentioned, AI
is associated with the need to understand algorithms, and hence,
skills rooted in computer science and engineering. This poses
challenges for professionals rooted in other disciplines, not
because they have nothing to contribute to AI or gain from its
use, but because of a lack of fundamental knowledge of how,
for example, an ML system works. A potential remedy, also
already mentioned, is to use lightweight AI (Sundberg &
Holmström, 2022) in the form of AI service platforms (Geske
et al., 2021; Lins et al., 2021), which are easy to use with little
to no installation requirements (as they are cloud-based) and
have graphical interfaces that help users to train ML models.
Here we present an approach for using such a system, the
Peltarion (2022) no-code deep learning AI platform
(hereafter “the no-code AI platform,” or just “the platform”), in
a higher education setting at the Department of Informatics,
Umeå University, Sweden. The department is part of the
university’s faculty of social sciences and provides three
undergraduate educational programs (on behavioral science
with an orientation towards IT environments, digital media
production, and system science) and two master programs (on
human-computer interaction and IT management), together
with individual courses.
The mentioned AI solution enables non-data scientists to
upload data and then train and evaluate an ML model that can
be deployed via an application programming interface (API).
The platform guides users via a graphical interface together
with suggestions regarding problem types, workflows, pre-
trained models, and iterative improvements. The platform was
used in an AI for Business course (15 credits) at Umeå
University, to give the students hands-on experience in training
ML models by engaging in a case-based task. The course is
open for students with diverse educational backgrounds, as
requirements for enrolment are 90 credits in informatics,
computer science, business administration, media and
communication studies, pedagogics, psychology, political
science, sociology (or equivalent competence). In line with the
course curriculum (Umeå University, 2022), the learning
objectives of the exercise were to “Account for and explain the
role of AI in organizational value creation,” by giving the
students first-hand experience of training ML models. The
educational approach is further described in the following
section.
4. MATERIALS AND METHODS
To address the RQs posed in Section 1, we followed a group-
based project approach presented by Mathiassen and Purao
(2002) in the course, inviting the students to engage in the
development of ways of working and participating in
communicative activities regarding real-life problems. As
noted by Leidner and Jarvenpaa (1995), such approaches
provide opportunities for students to understand the “messiness” professionals face in the industry, acknowledging
the social situatedness of these contexts, and that the problems
students will face are “unstructured, ambiguous, and immune to
purely technical solutions” (Holmström et al., 2011, p. 2).
We applied the principles of instruction framework
advocated by Merrill (2007, 2013) in the educational setting.
This incorporates five principles summarized in Table 1:
problem-centered learning, activation, demonstration,
application, and integration. The framework provides an
integrated, multi-strand strategy for teaching students how to
solve real-world problems or complete complex real-world
tasks.
Principle: Description

Problem-centered learning: Humans learn better when they solve problems, so learning is promoted when learners acquire skills in real-world contexts.

Activation: Learning is promoted when learners activate existing knowledge and skills as foundations for a new skill. An important step here is to start at the learner’s level. Activation requires learning activities that stimulate the development of mental models and schemes that can help learners incorporate new knowledge or skills into their existing knowledge framework.

Demonstration: Learning is promoted when learners observe a demonstration of the skill to be learned, e.g., by exposure to examples of good and bad practices.

Application: Learning is promoted when learners apply new skills they have acquired to solve problems. Applying new knowledge or skills to real-world problems is considered almost essential for effective learning.

Integration: Learning is promoted when learners reflect on, discuss, and defend knowledge or skills they have acquired. The effectiveness of a course is enhanced when learners are provided opportunities to discuss and reflect on what they have learned in order to revise, synthesize, recombine, and modify their new knowledge or skills.

Table 1. Principles of the Educational Approach
The case presented to the students described a fictive
organization, “WeldCorp,” which specialized in welding,
seeking to expand and acquire customers in additional
geographical markets while retaining and automating quality
measures. To assist the company, we invited the students to
develop ways to use ML as a tool to assess welding points. The
course module described in this paper consisted of a workshop,
a Q&A session, supervising sessions, and a final seminar. Its
content is further outlined in Section 5.1. Nineteen students
attended the course (14 male and 5 female), with educational backgrounds including bachelor’s degrees in business and administration, computer science, and behavioral science. The
empirical materials used in the study presented here, as
summarized in Table 2, stem from interactions with the
students, the no-code AI platform, and teachers’ reflections.
These materials allowed us to both provide educators with
recommendations for using no-code AI and present interesting
findings on the benefits and challenges associated with these
platforms’ use in educational settings. We identified the
benefits and challenges by subjecting the empirical data to
thematic analysis (Braun & Clarke 2012; Clarke & Braun 2014)
through inductively coding the students’ activities during the
module. More specifically, we coded the activities the students undertook in the empirical setting, as observed in the materials, and then aggregated them into themes,
informed by the steps in the ML workflow presented in
Section 2.1.
Materials: Source(s)

Students’ feedback and course evaluations: E-mails, notes taken during the course, written evaluations and feedback from students.

Students’ written assignments and presentations: Two written group reports and two presentations during a final seminar.

Datasets, models and deployments created by the students: The Peltarion (2022) no-code AI platform.

Observations: Teachers’ experiences and reflections during and after the course.

Table 2. Materials and Sources
5. RESULTS: USING NO-CODE AI IN AN
EDUCATIONAL CONTEXT
This section is divided into three parts. In line with Lending and
Vician (2012), in Section 5.1 we provide a description of our
educational procedures to enable instructors to adopt our
approach. Then, the benefits of using no-code AI in education
are presented in Section 5.2, followed by the challenges we
experienced in Section 5.3.
5.1 Detailed Educational Approach
The course module was initiated on December 2, 2021, and the
final seminar was held on January 10, 2022. Thus, the duration
of the module was a little over a month, including Christmas
holiday breaks. The module opened with a 3-hour workshop session that included an introduction to ML, followed
by a demonstration of the no-code AI platform’s functionalities
and a description of the group assignment. The information
presented and considerations applied in this workshop are
summarized in the following text.
As the students came from different backgrounds, it was
clearly stated that the workshop would not include a deep
examination of phenomena such as neural networks and would
focus instead on providing students with sufficient information
to get hands-on experience in training ML or deep learning
(DL) models. An overview of the current status of ML was presented, noting that increases in the scale of datasets, together with improvements in algorithms and processing speed, have increased machines’ capabilities to learn. This included a presentation of:
- A short video showing how neural networks “see” things in image data: https://www.youtube.com/watch?v=xS2G0oolHpo&ab_channel=NOVAPBSOfficial
- Figures from an overview by Hilbert and López (2011) of how the capacities for storing data rapidly shifted from analogue to digital formats.
- A comparison of the world’s fastest supercomputer in 1997 (ASCI Red), which reached a speed of 1.8 teraflops, and the SONY PlayStation 3 video game console, which reached the same speed nine years later.
Then, the differences between supervised, unsupervised, and reinforcement learning were briefly presented. We emphasized that
the module would focus largely on supervised learning, the
basis of most commercial and industrial applications of ML
today, so the students would need to engage with data labeling.
This is important for two reasons. First, collecting and
annotating data are crucial but time-consuming activities that
take most of the time spent during ML development
(Fredriksson et al., 2020). Second, if this element is neglected
or poorly done, the resulting ML models will perform poorly
and generate inaccurate, irrelevant, or even harmful results
(Sambasivan et al., 2021).
Next, the lecture outlined the kinds of problems that can be
solved by using ML. As noted by Kayhan (2022, p. 123),
“Many students lack the preparation for the workforce because
they cannot conceptualize valid input-output relationships for
the problems they propose to solve using ML.” Thus, despite
the widespread hype surrounding intelligent systems, there is often a lack of specificity about the kinds of problems algorithms can actually solve. As noted in Section 2.1, ML is a set of technologies that involve training algorithms to create models that can provide predictions concerning previously unseen datasets. Hence, ML cannot solve general problems such as “increasing efficiency” or “improving quality”: it needs specific problem formulations accompanied by relevant
datasets. Thus, in this part of the lecture, we presented a
checklist for determining whether ML would be suitable to
apply:
1. Do you have a use case?
2. Can the use case be solved by AI/ML (or simpler
means)?
3. Do you have data?
4. Do you have annotated data?
We also presented examples of various problems/use cases
that ML can solve, such as anomaly detection, classification
problems (identifying features in texts and images), building
chatbots based on text similarity functions, and various
regression problems, such as predictions of sales and housing
costs. Before demonstrating the functionality of the no-code
platform, we described the ML workflow, both generally as
shown in Figure 1 and more specifically for the Peltarion
platform, as displayed in Figure 2. Although the platform is
now discontinued, this workflow (data collection + preparation,
training, evaluation and deployment of an ML model) is at the
core of most ML development efforts and protocols applied in
other no-code AI platforms (such as BigML, Amazon
SageMaker, Google AutoML and Teachable Machine).
After presenting the above activities in a traditional lecture,
supplemented by visual aids and other materials, we turned our
attention to the no-code platform.
Figure 2. The ML Workflow in the No-Code AI
Platform
An important step during the use of no-code AI is to check
the requirements of the platform of choice in terms of data types
(e.g., tabular, images, or text). Familiarity with the selected
platform’s tools for processing and labeling data is also
important. Thus, to provide participating students with an
understanding of how the no-code AI platform handled
different data types, we used free datasets from Kaggle (2023):
- To explore tabular data, we used the popular “IRIS” dataset, which can be used to predict the species of a flower based on the size of its petals and sepals (see the short code sketch below).
- For image data, images of cats and dogs can be used to train a binary classifier. Images of craters on the Moon and/or Mars can be used to train object detectors, if this feature is available in the platform (see Figure 3 for an example).
- Data from the Internet Movie Database (IMDB) can be used to predict whether a text is “positive” or “negative,” i.e., to train a model that can make predictions based on NLP (natural language processing).
Figure 3. Image Annotation for Object Detection in the
BigML Platform
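As a small illustration of the tabular “IRIS” use case listed above, the following sketch trains a simple classifier on the same data in code. scikit-learn ships a copy of the dataset, so no Kaggle download is needed; the model choice and the 80/20 split are assumptions made for illustration only.

```python
# Predicting a flower's species from petal and sepal sizes (illustrative sketch).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

iris = load_iris(as_frame=True)
X, y = iris.data, iris.target            # four size measurements, three species labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predict the species of a previously unseen flower and report held-out accuracy.
pred = clf.predict(X_test.iloc[[0]])[0]
print("Predicted species:", iris.target_names[pred])
print("Test accuracy:", clf.score(X_test, y_test))
```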
During the demonstration of how to upload data, we briefly
described and outlined procedures for various possible formats
for tabular and text data (e.g., CSV and npy), but not procedures
for connecting to data warehousessuch as BigQuery or Azure
Synapse, as it was irrelevant for the planned task. Instead, we
focused more on how to upload image data to the platform, as
this was the type of data the students would handle in the
following case. An advantage of using no-code AI in such cases
is that images can be annotated by placing them in folders that
act as labels, compressing them into zip files, and then
uploading them to the platform. The platform then takes care of
processing and cropping the images to standardized formats. A
negative effect, which we informed students about, is that
important features near the edges of the images may be cropped.
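For readers who want to see what the folders-as-labels convention corresponds to in code, the sketch below assumes a hypothetical local folder, welds/, with one subfolder per label; tf.keras can infer the class labels from the folder names, much as the platform does from a zipped upload.

```python
# Folders as labels (illustrative sketch; the "welds" path is hypothetical):
#
#   welds/
#     good/  img_001.jpg, img_002.jpg, ...
#     bad/   img_101.jpg, img_102.jpg, ...
import tensorflow as tf

dataset = tf.keras.utils.image_dataset_from_directory(
    "welds",                 # hypothetical path to the labeled image folders
    labels="inferred",       # labels come from the subfolder names ("good", "bad")
    label_mode="binary",
    image_size=(224, 224),   # images are resized to a standardized format
    validation_split=0.2,    # reserve 20% of the images for evaluation
    subset="training",
    seed=42,
)
print(dataset.class_names)   # ['bad', 'good']
```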
Then, we demonstrated various examples of ML problems,
and their possible solutions using the no-code AI platform.
Depending on the type of data involved, the platform suggests
certain problems as the user chooses the input (data) and one or
more targets (labels). As mentioned, examples of such
problems include image classifications and image/text
similarity searches. Thus, in this phase, we also displayed
examples of ways to use pre-trained ImageNet-based and NLP-based (e.g., BERT) models for classifying and predicting patterns in images and texts, respectively. The use of pre-trained models relaxes the requirement for large datasets, as users can fine-tune these models with their own data. Links to
online tutorials and datasets (e.g., Kaggle) were uploaded to the
course teaching platform, for students who wanted to proceed
by experimenting with different types of data and problems.
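As a brief illustration of what “using a pre-trained model” means in code, the sketch below loads a default pre-trained sentiment model via the Hugging Face transformers pipeline. This is not the mechanism of the platform itself, and fine-tuning the model on one’s own labeled data would be a further step that is only hinted at here.

```python
# Using a pre-trained NLP model without training from scratch (illustrative sketch).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default pre-trained model
print(classifier("The welding course exceeded my expectations."))
# e.g., [{'label': 'POSITIVE', 'score': 0.99...}]
```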
In another important part of this demonstration, we showed
how ML models can be evaluated. This is done by splitting the
dataset(s) into a training set and test (and/or validation) set. The
algorithm is not exposed to the test set during training, so it can
be used to evaluate how a model performs on previously unseen
data. Common pitfalls, such as data bias and overfitting, were
also introduced during this session. The platform enabled the
generation of two indicators that are commonly used for
evaluating models: receiver operating characteristic (ROC)
curves and confusion matrices, which are especially useful for
enhancing students’ understanding of the output of ML models,
and why their deployment requires careful consideration.
Essentially, an image model outputs a probability of what it thinks is present in an image, e.g., “0.76 cat.” Depending on the problem at hand, and associated requirements, a threshold can be set to determine how “certain” a model must be before it can
classify something. Important measures here include accuracy,
recall, and precision. While accuracy is a measure of a model’s
overall performance, there is always a trade-off between recall
and precision. Students can be taught the relevance of this
tradeoff using two types of examples: ML-based spam filters,
and medical diagnostics. When constructing a spam filter, it is
often more important to minimize the number of “false positives” (potentially important emails that end up in the spam filter) than the number of “false negatives” (spam emails that end up in the inbox). Thus, precision is a good measure for such a model, as it assesses whether what is being classified as “spam” really is spam. In contrast, during medical diagnosis,
avoiding false negatives is often much more crucial than
avoiding false positives (as assessed by a recall measure),
because wrongly classifying ill people as healthy can have
severe consequences for them. For understanding such issues,
knowledge of ROC curves is important because they illustrate
three key aspects of ML models. First, they output probabilities
(in contrast to exact knowledge). Second, configuring these
outputs involves active choices of thresholds. Third, these
choices entail trade-offs between different evaluation measures.
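The following sketch shows these evaluation ideas in code: a model outputs probabilities, a threshold turns them into class labels, and the threshold choice trades precision against recall. The true labels and predicted probabilities below are made up for illustration.

```python
# Confusion matrices, precision/recall, and ROC analysis (illustrative values).
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0])                      # 1 = "bad weld"
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6, 0.55, 0.05])  # model probabilities

for threshold in (0.3, 0.5, 0.7):
    y_pred = (y_prob >= threshold).astype(int)      # the threshold is an active choice
    print(f"threshold={threshold}")
    print("  confusion matrix:\n", confusion_matrix(y_true, y_pred))
    print("  precision:", precision_score(y_true, y_pred))
    print("  recall:   ", recall_score(y_true, y_pred))

# ROC analysis summarizes the trade-off across all possible thresholds.
print("ROC AUC:", roc_auc_score(y_true, y_prob))
```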
At the end of the demonstration session, the students were
divided into two groups and assigned the problem-centered task
of helping “WeldCorp” use ML as an instrument to assess the
quality of their welding joints. A rubric for the task provided a
backstory, stating that WeldCorp was launched in 1994 in
Gothenburg, and subsequent expansion to other Swedish cities
led to the CEO experiencing problems with maintaining quality
control. So, s/he is now turning to ML for this purpose. The
rubric then told the students:
Your assignment is to help WeldCorp sustain its growth by
leveraging machine learning. Specifically, your task is to
analyze welding images (images of good and bad welding
points) to develop a model using the no-code AI platform
that can be useful for WeldCorp in a quality assurance context.
1. Describe and justify your choices regarding the data
processing, problem selection, and model training in
the no-code AI platform.
2. Describe how you evaluated your model’s predictions.
Are they accurate enough to use live for WeldCorp?
Why/why not?
3. Discuss: What could be done by WeldCorp to improve
the model’s results? How would they implement this
type of solution in their business?
An important aim during this assignment was to prompt
students to think about and justify their choices during training,
and the output of their model(s), rather than simply striving to
optimize the performance of the model(s). As the module is a
part of an AI business course, we also wanted the students to
discuss how WeldCorp could integrate AI into their
organization.
The students were divided into two groups. The start of the
course included a presentation exercise in which the students
were asked to state their names and educational background. As
two of the students had experience in computer science, we
intentionally placed these students in separate groups. To get
the students started, they were given a small dataset of 157
images of good and bad welds. The groups were then given
enterprise accounts providing access to the no-code AI
platform. Before engaging in a similar project, we advise
instructors to carefully assess the kinds of user configurations
that candidate platforms offer, as their user management
options vary, and potential issues must be addressed before the
students attempt to use them.
Five days after the initial workshop, a Q&A session was
held with the student groups. No instructions were given before
this session and the content was largely based on the students’
queries. Most questions concerned data. This was consistent
with expectations, as models trained using the intentionally
limited dataset handed out during the previous session would
perform badly, regardless of the platform settings that the
students chose. As already mentioned, data collection and
processing play a key role in ML, and “there is no AI without
data” (Gröger, 2021, p. 98). Illustrative queries from the
students concerned the quality of the supplied dataset, tentative
workarounds, and image formats. However, the main
conclusion the students drew was that more data was needed to
train a model that would produce relevant results.
Between the Q&A and final seminar, the students were
supposed to email or book appointments with the responsible
teachers if they needed supervision. The teachers could observe
and aid the students as they uploaded data and then trained and
evaluated ML models. After the Q&A session, we observed
how the students engaged in data collection and uploaded larger
datasets with various images to the platform. As the students
aimed to train models based on a binary classification of good and bad welds, they needed two labels (“good” and “bad”). The
students applied the procedures previously demonstrated to
them, trained several models, and iteratively fine-tuned the
platform settings, using several sources of data, including social
media, Google image search, and Kaggle.
While the workshop and Q&A session were held on
campus, the final seminar was held via Zoom (January 10,
2022) as this was during a time when staff and students at higher
education institutions were gradually returning to campus after
the COVID-19 pandemic. The written assignment included the
following instructions:
You will be presenting your results both in the form of a
short paper, max ten pages, and orally in the final seminar.
During the seminar, each group will get 30 minutes to present
their results. You must also participate actively by answering
questions and comments regarding the presentation. Your short
paper should begin with a cover page on which you state the
names of the group participants, the name of the course, and
the semester. It is to be handed in at the start of the seminar.
During discussions in the final seminar, the students were
encouraged to reflect upon the ML process, to enable them to
integrate their acquired skills. In addition to discussing the ML
workflow, the students also proposed ideas for operationalizing
their work in a live setting, such as using automated cameras to
feed data on welding points for evaluation by the DL model. In
this seminar, the teachers mainly played a facilitating role, as
the students posed questions and reflected on their results. The
students received pass or fail grades for the task. To pass they
needed to:
Present a logically coherent suggestion for WeldCorp,
both in writing and orally during the seminar.
Formulate results and associated discussion in a
grammatically correct way and with consistent use of
concepts and terms.
The teaching activities outlined above are linked to the five
instruction principles and summarized in Table 3. Depending
on the course, and available data and case(s), these activities
can be varied. For example, the workshop can be divided into
two separate events, with an initial lecture focusing on
theoretical aspects of ML, followed by a more hands-on
workshop. Moreover, the group case can be presented as an
individual or pair-wise task, although this might neglect the
collective character of data work.
5.2 The Benefits of Using No-Code AI in Education
This subsection presents the observed benefits of using no-code
AI to teach ML, which are described below and summarized in
Table 4.
5.2.1 Benefit 1: Visualization of Data and Provision of a
Graphical Interface for Uploading Data. As already
mentioned, a crucial and time-consuming part of working with
ML is collecting and processing data. As the no-code AI
platform automated many parts of the ML workflow, students could spend their time during the exercise on considering and labeling the data. This was an anticipated and important part
of the task, especially as previous studies have highlighted
tensions among people involved in labeling data for supervised
learning (Lebovitz et al., 2021).
In their course evaluations and written feedback, the
students heavily emphasized an increase in their awareness of
the importance of data, and how the no-code approach enabled
them to focus on important features of the datasets used,
potential flaws in them, and problem-solving rather than model-
optimization, as illustrated by the following three quotations:
“I’ve obtained practical knowledge and experience of the
impact of data. And I’ve seen the impact of flaws in the dataset
first-hand. Thus, I think this was an optimal learning method
considering our (and my) educational background.” Student
Evaluation.
“[I’ve learnt] that data matters! The choice, generating and
cleansing of data is crucial.” Student Evaluation.
“For me, the barrier to understanding the practical use of AI
(or to ever try it myself) has been my lack of programming and
coding skills. With the no-code approach, I got the opportunity
to try experiments and thus got a “black-boxed” grasp of how it
works. With that, I could focus on the problem that I wanted to
solve, the learning dataset and its effect on the results, and also
on the result itself. So, I think I learned more about AI in this
course than I have in all the other courses combined, and that is
without any code.” Student Evaluation.
Both groups chose to label their images in a binary fashion as “good” or “bad.” To establish the consensus required for creating “ground truths,” one of the groups formalized the data labeling process in their report with a “weld quality framework.” The other group engaged strongly in data augmentation, extending their dataset 4- to 5-fold by zooming, cutting, and rotating the images. These slightly different approaches were displayed in the
results and reflected upon in the student reports. While the
group that applied data augmentation focused more on the
performance of the models they created, and thus achieved
better measures (lower rates of false positives or negatives), the
other group focused more on trying to explain the output of the
models they created, i.e., why the models made certain
predictions.
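The kind of image augmentation that one group used can be sketched in code as follows. The exact transformations and parameters the students applied are not documented in our materials; the Keras preprocessing layers below are illustrative stand-ins.

```python
# Enlarging an image dataset with random rotations, zooms, and crops (illustrative sketch).
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.1),     # rotate by up to +/- 10% of a full turn
    tf.keras.layers.RandomZoom(0.2),         # zoom in or out by up to 20%
    tf.keras.layers.RandomCrop(200, 200),    # crop a random 200x200 patch
])

images = tf.random.uniform((4, 224, 224, 3))   # a stand-in batch of weld images
augmented = augment(images, training=True)     # each call yields new random variants
print(augmented.shape)                          # (4, 200, 200, 3)
```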
5.2.2 Benefit 2: Access to a Portfolio of Pre-Trained Models,
Tutorials, and Datasets, as Well as the Automatic Selection
and Fine-Tuning of Algorithm(s) for Training. Both groups
ended up using a pre-trained model (EfficientNetB0) to solve
an image classification problem (single label) in the platform.
Each group formed training, validation, and test sets,
respectively, containing 80%, 10%, and 10% of their full
datasets (images), which is common practice and a default
option in the platform. The students refined their models’
outputs in two ways. First, they iteratively adjusted settings in
the platform, such as increasing the training rate (while carefully monitoring the variance of the performance measures obtained from the dataset split, to avoid overtraining the model). The platform assists such adjustment
by suggesting settings to enhance the models’ performance,
e.g., switching to a different pre-trained model, and modifying
the learning rate (Figure 4).
Figure 4. Suggestions to Improve Model
Performance
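For instructors who want to show what the platform automated here, the sketch below is a code analogue of the students’ setup: fine-tuning a pre-trained EfficientNetB0 as a binary weld classifier, with an adjustable learning rate. The dataset objects and all hyperparameters are assumptions for illustration, not the students’ actual configuration.

```python
# Transfer learning with a pre-trained EfficientNetB0 (illustrative sketch).
import tensorflow as tf

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)
base.trainable = False                                # keep the pre-trained ImageNet features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),   # "good" vs. "bad" weld
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # the kind of setting the platform suggests tuning
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# model.fit(train_ds, validation_data=val_ds, epochs=5)
# train_ds / val_ds would come from an 80/10/10 split of the labeled weld images.
```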
Principle: Activity

Problem-centered learning: The students were presented with a case of a welding company, WeldCorp, seeking to expand and scale up its business while improving quality control. To help these efforts they were encouraged to apply ML to differentiate between good and bad weld points.

Activation: Since the students had diverse educational backgrounds (business and administration, computer science, and behavioral science), we chose to use a no-code AI platform. This enabled them to incorporate previous skills and work during the course, even if they lacked previous experience in data science.

Demonstration: We showed the students several examples of ways to train ML models via the no-code AI platform. Students were encouraged to take tutorials and experiment with different types of open datasets (e.g., table, text, and image-based) and problems that can be accessed through the platform.

Application: The students were divided into two groups, and each student was given access to an enterprise account enabling them to use the no-code AI platform to address a new type of problem by applying the previously demonstrated procedures.

Integration: Students were encouraged to reflect on their learning during the final seminar in both a survey and the course evaluation. During the final seminar, they were also expected to learn from each other by preparing questions for the other group.

Table 3. Activities That We and the Students Engaged in, Linked to the Five Principles of Instruction
5.2.3 Benefit 3: Visual Interface for Evaluating and
Comparing the Performance of Models (e.g., Through ROC
Curve- and Confusion Matrix-Based Analyses). Second, as
particularly strongly emphasized by one of the groups, the
students strove to ensure the data included were contextually
relevant and suitable for WeldCorp’s purposes. This was done
after they received output from the ML model in the form of
confusion matrices and ROC curves (Figures 5 and 6) and could
assess whether certain types of images were incorrectly classified, identify potential biases in the data, and detect signs of model overtraining. Examples mentioned during the final
seminar were images of painted welds, which would not be
relevant in the industrial context they imagined.
Figure 5. Illustrative Model Evaluation Output
Figure 6. ROC-Curve From One of the Student
Reports
Available features briefly mentioned in the course included
tools to deploy the models created in the platform. This was not relevant to the assigned task, as the students were not expected to integrate their solution in a live environment, but we presented a few paths to do so. Examples included plugins for common
software (such as Excel, Google Sheets, and Bubble) and the
ability to call APIs for easy integration of a model in an
operating environment. The platform also includes a graphical
interface for predicting new images, as shown in Figure 7. We
used this function during the final seminar to show the students
how their models performed on selected images of good and
bad welds.
Figure 7. Results of a Test of a Model’s Performance on
Unseen Data During the Final Seminar
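To give a sense of what “calling the API” of a deployed model can look like, the sketch below posts an image to a REST endpoint. The URL, token, and response format are hypothetical placeholders rather than the Peltarion API (which is now discontinued); instructors should consult the documentation of whichever platform they deploy to.

```python
# Calling a deployed model's prediction endpoint (hypothetical URL and response).
import requests

ENDPOINT = "https://example-no-code-platform.invalid/deployments/weld-model/predict"
TOKEN = "your-deployment-token"                    # placeholder credential

with open("new_weld.jpg", "rb") as f:              # a new, previously unseen weld image
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {TOKEN}"},
        files={"image": f},
    )

print(response.json())   # e.g., {"label": "good", "probability": 0.91} (illustrative)
```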
Thus, by simplifying parts of the ML workflow related to
training, evaluating, and deploying models, learners can focus
on data collection and interpreting outputs of the models to gain
a sense of whether the chosen approach is suitable and feasible
rather than engaging in model optimization. Based on our
materials, we generated themes in the form of distinct ways that
no-code AI facilitates learning about ML. These themes are
described in Table 4.
ML workflow: Role of no-code AI

Data collection / preparation: Provision of a graphical interface for visualization, uploading, and processing data.

Model training: Access to a portfolio of pre-trained models, tutorials and datasets, as well as automatic selection and fine-tuning of one or more algorithm(s) for training.

Evaluation: Visual interface for evaluating and comparing the performance of models (e.g., through ROC curve- and confusion matrix-based analyses).

Deployment: APIs with complementary plugins to aid integration in organizational settings.

Table 4. How No-Code AI Can Facilitate Learning About ML
5.3 Challenges With Using No-Code AI in Education
Our approach was not free of challenges, including three
summarized here. First, it is important to formulate a live case
in terms of ML and make a preliminary judgement of the
feasibility of the students collecting the necessary data during
the task. Finding an appropriate case may be time consuming,
but data repositories, such as Kaggle, may aid this process.
Second, as mentioned, the teachers also encountered challenges
related to user management routines before the module started
and needed help from the platform owners to set up separate
organizations for the students. These challenges highlight the
importance of considering and addressing potential user
management issues in advance and choosing an appropriate
platform for the intended purposes. The market for these
platforms is rapidly evolving. While the Peltarion platform is
now discontinued, several alternatives are available, such as
BigML, HuggingFace, and solutions from large tech companies
(e.g., Microsoft Azure, Amazon SageMaker, Google AutoML,
and Teachable Machine). These often come in both free and
paid versions. For individual use, the free versions may be
suitable for smaller tasks and datasets. A common advantage of
paid versions is the incorporation of more collaborative
features, which enables re-use and comparisons of student
projects over the years. Whichever platform and version is
chosen, it is also important to ensure that students do not upload
sensitive data, depending on the regulatory context of the
educational setting. Third, the student feedback included
proposals that groups should be smaller in future versions of the
course, as they experienced difficulties in engaging everyone
simultaneously when using the platform.
6. CONCLUDING DISCUSSION
As the no-code approach enabled students to engage in
collective data work, the selected empirical setting provided an
ideal opportunity to address our two questions:
RQ1: How can no-code AI be used to teach ML in non-
technical educational programs?
RQ2: What are the benefits and challenges of using no-
code AI in education?
We answer RQ1 by proposing a problem-centered approach
to using no-code AI in higher education, together with guidance for instructors. Regarding RQ2, we show how no-code AI can help to
guide students through the ML workflow (data processing,
model training, evaluation, and deployment), and present
important challenges (ML case construction, platform selection
and user management, and student group composition) that we
encountered during the course.
Our contribution to the IS education literature is two-fold.
First, we provide information for instructors on how to
incorporate no-code AI in their courses. Second, we provide
insights into the benefits and challenges of using no-code AI
tools to support the ML workflow in educational settings.
Through this study, we have set the stage for incorporating
a new generation of AI tools in IS curricula by showing how
they can be used to support students in analyzing live cases,
particularly in conjunction with an approach based on
principles of instruction. By doing so, in this paper, we have
proposed an innovative solution to an IS teaching need,
grounded in theory and tested in an educational setting
(Lending & Vician, 2012). The novelty of our approach is the
application of tools that are usually only accessible to computer
scientists to problems related to business practices and
phenomena addressed in social sciences. As the no-code AI
tools available are rapidly increasing and evolving (a few, of
many, examples of contemporary no-code or low-code
solutions that support the ML workflow include BigML,
Huggingface, and Teachable Machine), we urge educators to
keep track of this development and find approaches to
implement such tools in their curricula, in combination with
lessons on how to use AI in effective and responsible ways.
7. REFERENCES
Bhattacharyya, S. S., & Kumar, S. (2021). Study of
Deployment of “Low Code No Code” Applications Toward
Improving Digitization of Supply Chain Management.
Journal of Science and Technology Policy Management,
14(2), 271-287. https://doi.org/10.1108/JSTPM-06-2021-
0084
Bishop, C. M. (2006). Pattern Recognition and Machine
Learning. New York, NY: Springer.
Braun, V., & Clarke, V. (2012). Thematic Analysis, In H.
Cooper, P. M. Camic, D. L. Long, A. T. Panter, D.
Rindskopf, & K. J. Sher (Eds.), APA Handbook of Research
Methods in Psychology, Vol. 2. Research Designs:
Quantitative, Qualitative, Neuropsychological, and
Biological (pp. 57-71). American Psychological
Association.
Chapman, P., Clinton, J., Kerber, R., Khabaza, T., Reinartz, T.,
Shearer, C., & Wirth, R. (1999). The CRISP-DM User
Guide. 4th CRISP-DM SIG Workshop in Brussels in March
(Vol. 1999).
Chen, L. (2022). Current and Future Artificial Intelligence (AI)
Curriculum in Business School: A Text Mining Analysis.
Journal of Information Systems Education, 33(4), 416-426.
Coffin, E. (2021). I Think I Need AI! What Is AI? BNP Media.
Clarke, V., & Braun, V. (2014). Thematic Analysis. In
Encyclopedia of Critical Psychology (pp. 1947-1952).
Springer, New York, NY. https://doi.org/10.1007/978-1-
4614-5583-7_311
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs,
C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A.,
Galanos, V., Ilavarasan, P. V., Janssen, M., Jones, P., Kar,
A. K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B.,
Medaglia, R., Meunier-FitzHugh, K. L., Meunier-
FitzHugh, L. C. L., Misra, S., Mogaji, E., Sharma, S. K.,
Singh, J. B., Raghavan, V., Raman, R., Rana, N. P.,
Samothrakis, S., Spencer, J., Tamilmani, K., Tubadji, A.,
Walton, P., Williams, M. D. (2021). Artificial Intelligence
(AI): Multidisciplinary Perspectives on Emerging
Challenges, Opportunities, and Agenda for Research,
Practice and Policy. International Journal of Information
Management, 57. 
https://doi.org/10.1016/j.ijinfomgt.2019.08.002
Fredriksson, T., Mattos, D. I., Bosch, J., & Olsson, H. H.
(2020). Data Labeling: An Empirical Investigation Into
Industrial Challenges and Mitigation Strategies.
International Conference on Product-Focused Software
Process Improvement (pp. 202-216). Springer, Cham.
https://doi.org/10.1007/978-3-030-64148-1_13
Geske, F., Hofmann, P., Lämmermann, L., Schlatt, V., &
Urbach, N. (2021). Gateways to Artificial Intelligence:
Developing a Taxonomy for AI Service Platforms.
European Conference on Information Systems (ECIS).
Gröger, C. (2021). There Is No AI Without Data.
Communications of the ACM, 64(11), 98-108.
https://doi.org/10.1145/3448247
Hilbert, M., & López, P. (2011). The World’s Technological
Capacity to Store, Communicate, and Compute
Information. Science, 332(6025), 60-65.
https://doi.org/10.1126/science.1200970
Holmström, J., Sandberg, J., & Mathiassen, L. (2011).
Educating Reflective Practitioners: The Design of an IT
Management Masters Program. Americas Conference on
Information Systems 2011 Proceedings. 247.
https://aisel.aisnet.org/amcis2011_submissions/247
How, M. L., Chan, Y. J., Cheah, S. M., Khor, A. C., & Say, E.
M. P. (2021). Artificial Intelligence for Social Good in
Responsible Global Citizenship Education: An Inclusive
Democratized Low-Code Approach. Proceedings of the 3rd
World Conference on Teaching and Education (pp. 81-89).
https://doi.org/10.33422/2nd.worldcte.2021.01.08
Humble, N., & Mozelius, P. (2022). The Threat, Hype, and
Promise of Artificial Intelligence in Education. Discover
Artificial Intelligence,  2(1), 1-13.
https://doi.org/10.1007/s44163-022-00039-z
Jordan, M. I., & Mitchell, T. M. (2015). Machine Learning:
Trends, Perspectives, and Prospects. Science, 349(6245),
255-260. https://doi.org/10.1126/science.aaa8415
Kaggle (2023). https://www.kaggle.com/
Kayhan, V. (2022). When to Use Machine Learning: A Course
Assignment. Communications of the Association for
Information Systems, 50, 122-142.
https://doi.org/10.17705/1CAIS.05005
Kelleher, J. D., & Brendan, T. (2018). Machine Learning 101.
In Data Science (pp. 97-150), MIT Press.
https://doi.org/10.7551/mitpress/11140.003.0008
Kling, N., Runte, C., Kabiraj, S., & Schumann, C. A. (2022).
Harnessing Sustainable Development in Image Recognition
Through No-Code AI Applications: A Comparative
Analysis. International Conference on Recent Trends in
Image Processing and Pattern Recognition, 146-155.
Springer, Cham. https://doi.org/10.1007/978-3-031-07005-
1_14
Kühl, N., Schemmer, M., Goutier, M., & Satzger, G. (2022).
Artificial Intelligence and Machine Learning. Electronic
Markets, 32, 2235-2244. https://doi.org/10.1007/s12525-
022-00598-0
Leavitt, K., Schabram, K., Hariharan, P., & Barnes, C. M.
(2021). Ghost in the Machine: On Organizational Theory in
the Age of Machine Learning. Academy of Management
Review, 46(4), 750-777.
https://doi.org/10.5465/amr.2019.0247
Lebovitz, S., Levina, N., & Lifshitz-Assaf, H. (2021). Is AI
Ground Truth Really True? The Dangers of Training and
Evaluating AI Tools Based on Experts’ Know-What. MIS
Quarterly, 45(3b), 1501-1525.
https://doi.org/10.25300/MISQ/2021/16564
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep Learning.
Nature, 521(7553), 436-444.
https://doi.org/10.1038/nature14539
Lending, D., & Vician, C. (2012). Writing IS Teaching Tips:
Guidelines for JISE Submission. Journal of Information
Systems Education, 23(1), 11-18.
Leidner, D. E., & Jarvenpaa, S. L. (1995). The Use of
Information Technology to Enhance Management School
Education: A Theoretical View. MIS Quarterly, 19(3), 265-
291. https://doi.org/10.2307/249596
Lethbridge, T. C. (2021, October). Low-Code Is Often High-
Code, So We Must Design Low-Code Platforms to Enable
Proper Software Engineering. International Symposium on
Leveraging Applications of Formal Methods (pp. 202-212).
Springer, Cham. https://doi.org/10.1007/978-3-030-89159-
6_14
Liebowitz, J. (1992). Invited Paper: Teaching an Applied
Expert Systems Course: A Content Outline. Journal of
Information Systems Education, 4(3), 5-10.
Liebowitz, J. (1995). Integrating Expert Systems Throughout
the Undergraduate Curriculum. Journal of Information
Systems Education, 7(1), 34-36.
Lins, S., Pandl, K. D., Teigeler, H., Thiebes, S., Bayer, C., &
Sunyaev, A. (2021). Artificial Intelligence as a Service.
Business & Information Systems Engineering, 63(4), 441-
456. https://doi.org/10.1007/s12599-021-00708-w
Luan, H., & Tsai, C. C. (2021). A Review of Using Machine
Learning Approaches for Precision Education. Educational
Technology & Society, 24(1), 250-266.
Luo, Y., Liang, P., Wang, C., Shahin, M., & Zhan, J. (2021,
October). Characteristics and Challenges of Low-Code
Development: The Practitioners’ Perspective. Proceedings
of the 15th ACM/IEEE International Symposium on
Empirical Software Engineering and Measurement (pp. 1-
11). https://doi.org/10.1145/3475716.3475782
Ma, Y., & Siau, K. (2019). Higher Education in the AI Age.
Americas Conference on Information Systems 2019
Proceedings. 4.
https://aisel.aisnet.org/amcis2019/treo/treos/4
Mathiassen, L., & Purao, S. (2002). Educating Reflective
Systems Developers. Information Systems Journal, 12(2),
81-102. https://doi.org/10.1046/j.1365-2575.2002.00122.x
Merrill, M. D. (2007). A Task-Centered Instructional Strategy.
Journal of Research on Technology in Education, 40(1), 5-
22. https://doi.org/10.1080/15391523.2007.10782493
Merrill, M. D. (2013). First Principles of Instruction:
Identifying and Designing Effective, Efficient and Engaging
Instruction. Hoboken, NJ: Pfeiffer/John Wiley & Sons.  
Peltarion (2022). The Peltarion Deep Learning Platform.
Richardson, M. L., & Ojeda, P. I. (2022). A “Bumper-Car”
Curriculum for Teaching Deep Learning to Radiology
Residents. Academic Radiology, 29(5), 763-770.
https://doi.org/10.1016/j.acra.2021.11.016
Rokis, K., & Kirikova, M. (2022). Challenges of Low-
Code/No-Code Software Development: A Literature
Review. Proceedings of the International Conference on
Business Informatics Research (pp. 3-17). Springer, Cham.
https://doi.org/10.1007/978-3-031-16947-2_1
Russell, S., & Norvig, P. (2020). Artificial Intelligence: A
Modern Approach (4th Edition). Pearson. ISBN 978-
0134610993.
Sahay, A., Indamutsa, A., Di Ruscio, D., & Pierantonio, A.
(2020, August). Supporting the Understanding and
Comparison of Low-code Development Platforms. 46th
Euromicro Conference on Software Engineering and
Advanced Applications (SEAA) (pp. 171-178). IEEE.
https://doi.org/10.1109/SEAA51224.2020.00036
Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh,
P., & Aroyo, L. M. (2021, May). Everyone Wants to do the
Model Work, not the Data Work: Data Cascades in High-
Stakes AI. Proceedings of the 2021 CHI Conference on
Human Factors in Computing Systems (pp. 1-15).
https://doi.org/10.1145/3411764.3445518
Schröer, C., Kruse, F., & Gómez, J. M. (2021). A Systematic
Literature Review on Applying CRISP-DM Process Model.
Procedia Computer Science, 181, 526-534.
https://doi.org/10.1016/j.procs.2021.01.199
Simsek, Z., Vaara, E., Paruchuri, S., Nadkarni, S., & Shaw, J.
D. (2019). New Ways of Seeing Big Data. Academy of
Management Journal, 62(4), 971-978.
https://doi.org/10.5465/amj.2019.4004
Sturm, T., Gerlach, J. P., Pumplun, L., Mesbah, N., Peters, F.,
Tauchert, C., Nan, N., & Buxmann, P. (2021). Coordinating
Human and Machine Learning for Effective Organizational
Learning. MIS Quarterly, 45(3), 1581-1602.
https://doi.org/10.25300/MISQ/2021/16543
Sundberg, L., & Holmström, J. (2022). Towards ‘Lightweight’
Artificial Intelligence: A Typology of AI Service
Platforms. Americas Conference on Information Systems
2022 Proceedings. 13.
https://aisel.aisnet.org/amcis2022/sig_odis/sig_odis/13
Sundberg, L., & Holmström, J. (2023). Democratizing
Artificial Intelligence: How No-Code AI Can Leverage
Machine Learning Operations. Business Horizons.
https://doi.org/10.1016/j.bushor.2023.04.003
Umeå University (2022). AI for Business Course Curriculum.
https://www.umu.se/en/education/syllabus/2in408/
Wang, S., & Wang, H. (2021). A Teaching Module of No-Code
Business App Development. Journal of Information
Systems Education, 32(1), 1-8.
Woo, M. (2020). The Rise of No/Low Code Software
Development – No Experience Needed? Engineering, 6(9),
960-961. https://doi.org/10.1016/j.eng.2020.07.007
Yan, Z. (2021). The Impacts of Low/No-Code Development on
Digital Transformation and Software Development. arXiv
preprint arXiv:2112.14073.
AUTHOR BIOGRAPHIES
Leif Sundberg is an associate professor at the Department of Informatics, Umeå University. His research interests include digital government, the use of no-code artificial intelligence, and risk society studies. Sundberg has broad teaching experience in engineering management and information systems. He has published his work in journals such as Safety Science and Information Polity and presented it at international conferences such as IFIP EGOV-CeDEM-EPART and AMCIS.
Jonny Holmström is a professor of information systems at Umeå University and director and co-founder of the Swedish Center for Digital Innovation. His research interests include digital innovation, digital transformation, and digital entrepreneurship. He serves on the editorial boards of CAIS, EJIS, Information and Organization, and JAIS. His work has appeared in journals such as Communications of the AIS, Design Issues, European Journal of Information Systems, Information and Organization, Information Systems Journal, Information Technology and People, Journal of the AIS, Journal of Information Technology, Journal of Strategic Information Systems, MIS Quarterly, Research Policy, and The Information Society.
STATEMENT OF PEER REVIEW INTEGRITY
All papers published in the Journal of Information Systems Education have undergone rigorous peer review. This includes an
initial editor screening and double-blind refereeing by three or more expert referees.
Copyright ©2024 by the Information Systems & Computing Academic Professionals, Inc. (ISCAP). Permission to make digital
or hard copies of all or part of this journal for personal or classroom use is granted without fee provided that copies are not made
or distributed for profit or commercial use. All copies must bear this notice and full citation. Permission from the Editor is
required to post to servers, redistribute to lists, or utilize in a for-profit or commercial use. Permission requests should be sent to
the Editor-in-Chief, Journal of Information Systems Education, editor@jise.org.