A Method for Evaluating End-User
Development Technologies
Full Paper
Claudia de O. Melo
University of Brasília
claudiam@unb.br
Jonathan H. M. de Moraes
University of Brasília
jonathanhmaiademoraes@gmail.com
Marcelo Ferreira
University of Brasília
marcelohpf@gmail.com
Rejane C. Figueiredo
University of Brasília
rejane@unb.br
Abstract
End-user development (EUD) is a strategy that can reduce a considerable amount of business demand on
IT departments. Empowering the end-user in the context of software development is only possible
through technologies that allow them to manipulate data and information without the need for deep
programming knowledge. The successful selection of appropriate tools and technologies is highly
dependent on the context in which the end-user is embedded. End-users should be a central piece in any
software package evaluation, being key in the evaluation process in the end-user development context.
However, little research has empirically examined software package evaluation criteria and techniques in
general, and in the end-user development context in particular. This paper aims to provide a method for
technology evaluation in the context of end-user development and to present the evaluation of two
platforms. We conclude our study proposing a set of suggestions for future research.
Keywords
End-user development, EUD, Technology evaluation, Development tools.
Introduction
End-user development (EUD) aims at enabling end-users and non-specialists in application programming
to develop and adapt systems according to their professional, educational or leisure needs (Lieberman,
2006). From the point of view of Software Engineering, EUD means, in general, the active participation of end users in the software development process (Costabile, 2005).
EUD is a strategy that can reduce a considerable amount of business demand on IT departments, generating multiple benefits (McGill, 2004), such as higher customer satisfaction with IT. However, empowering the end-user in the context of software development is only possible through technologies that allow them to manipulate data and information without the need for deep programming knowledge (Fischer, 2004). Failures in software package acquisition are caused not by the technology itself, but by failing to choose it in the right way (Misra, 2017), without prioritizing the end-user context and capabilities.
In the absence of a quality system to evaluate software packages, vendors and users might play their roles without any focus on, or relevance to, the requirements of an IT project (Misra and Mohanty, 2003). The success or failure of end-user development within an organization ultimately depends on how effectively software packages are used (Montazemi, Cameron, and Gupta, 1996). Therefore, end-users should be a
central piece in any software package evaluation, being key in the evaluation process in the end-user
development context. Little research has empirically examined software package evaluation criteria and
techniques in general, and in the end-user development context in particular (Harnisch, 2014; Jadhav and
Sonar, 2009; Jadhav and Sonar, 2011; Misra and Mohanty, 2003). This paper aims at investigating how to
evaluate EUD technologies. We developed a model to evaluate end-user development software packages
that can be further extended to other types of technologies. We analyze and present the evaluation results
from two platforms using the proposed method. We discuss the evaluation process and results,
limitations, and possible future research paths.
Literature Review
According to Lieberman (2006), EUD encompasses different fields of research, such as Human-Computer Interaction (HCI), Software Engineering (SE), Computer-Supported Cooperative Work (CSCW), and Artificial Intelligence (AI). We carried out a review of the literature on aspects related to the process of
evaluating technologies for end-user developers. Considering the scope of our study (technology
evaluation), we found during our review that three different areas have important, but partial,
contributions to our research purpose: 1) software package acquisition research; 2) software quality
models & CSCW/HCI research, and 3) technology acceptance research.
Software package acquisition research
There are multiple models available which attempt to increase the level of understanding of general
software package acquisition processes (Jadhav and Sonar, 2009, 2011; Misra, 2017). Jadhav and Sonar (2009) presented a systematic review that investigates methodologies for selecting software packages, software evaluation techniques, software evaluation criteria, and systems that support decision makers in evaluating software packages. They selected 60 papers published in journals and conference proceedings. They concluded that there is a lack of a common list of generic software evaluation criteria and their meaning, and that there is a need for a framework comprising a software selection methodology, evaluation techniques, evaluation criteria, and a system to assist decision makers in software selection. The same authors present such a framework in a later study (Jadhav and Sonar, 2011).
Damsgaard and Karlsbjerg (2010) present seven principles for selecting software packages. First, when an organization buys packaged software, it joins that software's network. Second, they recommend that organizations take a long-term perspective, looking ahead but reasoning back. Then, when choosing packaged software, there is safety in numbers, but organizations should focus on compatibility and be wary of false gold. Finally, they recommend choosing a software package with accessible knowledge and the right type of standardization, remembering that all journeys start with a first step. Harnisch (2014) reviewed the literature on enterprise-level packaged software acquisition through the lens of IT governance to assess the state of the art of software acquisition governance. His research aims at helping decision-makers optimize software procurement processes, governance, and behaviors.
Software quality models & HCI/CSCW research
Software package acquisition methodologies usually compare user requirements with the packages’
capabilities. There are different types of requirements, such as managerial, political, and quality requirements
(Franch and Carvallo, 2003). In their systematic review of software package evaluation and selection,
Jadhav and Sonar (2009) found that quality characteristics such as functionality, reliability, usability,
efficiency, maintainability, and portability have been used as evaluation criteria in several studies. They
also found that, among the ISO/IEC standards related to software quality, ISO/IEC 9126-1 specifically
addresses quality model definition and its use as a framework for software evaluation. ISO/IEC 9126 was
replaced by ISO/IEC 25010.
In the context of end-user development technology evaluation, we consider the ISO an important guide to
define characteristics, attributes, and metrics. However, to be able to capture the needs of an end-user
developer, which is not the same as a professional developer nor a final end-user, further refining is
probably required.
In addition, we argue that usability is a central characteristic for any model focusing on evaluating
software packages. We have noticed that both software package acquisition research and software quality
models research have not paid enough attention to investigating better usability evaluation techniques. There are widely accepted, well-documented evaluation methods for diagnosing potential usability problems in user interfaces, such as Nielsen's heuristic evaluation (Nielsen, 2012). Based on Nielsen's work, Baker, Greenberg, and Gutwin (2002) developed an evaluation method that looks for groupware-specific
usability problems. Their results suggested that an evaluation using their groupware heuristics can be an
effective and efficient method to identify teamwork problems in shared workspace groupware systems.
Technology acceptance research
In general, technology acceptance research focuses on the perceived usefulness, ease of use, and user
acceptance of Information Technology. Davis (1989) presented a seminal work in this field, the
Technology Acceptance Model (TAM), aimed at explaining user behavior across a broad range of end-user
computing technologies and user populations.
TAM has since evolved into the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh, Morris, Davis, and Davis, 2003). UTAUT has four independent variables: performance expectancy, effort expectancy, social influence, and facilitating conditions. It is a useful tool for managers to assess the likelihood of success of newly introduced technologies and the drivers of their acceptance, so that they can design conditions that facilitate adoption.
Along the same line as TAM and UTAUT, Doll and Torkzadeh (1988) developed a model called End-User Computing Satisfaction (EUCS) to measure the affective attitude of a user while interacting with a
specific computer application. The model contains dimensions such as content, accuracy, format, ease of
use, and timeliness.
Despite the fact that this body of research is extremely valuable for understanding end-users in general, we argue that end-user developers play a different role when interacting with software packages. They develop working software through them, so technology acceptance research might not answer what an organization needs in order to decide which software package is most suitable for end-user developers.
A method for evaluating EUD Technologies
Evaluating and selecting software packages that meet organizational and end-user requirements is a non-
trivial Software Engineering process (Jadhav and Sonar, 2009). To the best of our knowledge, there is no
technology evaluation methodology particularly focused on end-user developers and their context.
Therefore, we propose a method based on a new extended literature review on the three aforementioned
research areas. In addition, we evaluate two platforms to test and refine our method. In the context of
EUD, a technology evaluation model may consider elements from the three research fields we identified in
the literature review: software package acquisition models, software quality models and CSCW/HCI
models, and finally technology acceptance models. Based on our interpretation, the model should have:
- Essential qualities that enable the end-user developer to manipulate the tool and produce useful results in a certain application domain (from software quality models and CSCW/HCI models);
- General qualities inherent to software packages (from software package acquisition models and technology acceptance models);
- Essential qualities for management and technological governance (from software package acquisition models);
- An evaluation method based on already-established and tested techniques, even if they come from a different context (from all models).
Evaluation criteria, characteristics, sub characteristics, and attributes
To be able to evaluate EUD technologies, we designed a structured quality model (Franch and Carvallo,
2003) that provides a taxonomy of software quality features and metrics to compute their value. Based on
the specific EUD domain and our literature review, we selected appropriate quality characteristics,
determined quality sub-characteristics, decomposed them into attributes, and finally developed questions and metrics from different points-of-view.
Points-of-view are particularly important as they indicate who should answer the question, or the most important stakeholder interested in that question. Moreover, as the model focuses on technology evaluation, sometimes the software package is the main subject of the evaluation, and sometimes the output of the software package (which is also software) is the main subject.
All characteristics, sub-characteristics and related attributes were defined after a thorough analysis of the
reviewed literature and a pilot test in which two platforms were evaluated. We also organized the quality characteristics into the 4 criteria presented in the systematic review of Jadhav and Sonar (2009): 1) Functional, 2) Cost and benefit, 3) Vendor, and 4) Software quality. Tables 1 and 2 present our evaluation model, which comprises criteria, characteristics, sub-characteristics, literature references, attributes, and points-of-view. The complete model, also containing the questions and metrics, is available at https://itrac.github.io/eud_technology_evaluation. From our literature review, we selected 11 fundamental characteristics to evaluate EUD technologies, which in turn are refined into 20 sub-characteristics and, finally, 30 attributes. These attributes are measured by 300 questions, all of them initially collected from already established and tested techniques. We detail the model in the following
sections.
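The resulting hierarchy (criterion, characteristic, sub-characteristic, attribute, question, with each question tagged with a point-of-view) can be sketched as a simple data structure. The sketch below is only illustrative and not part of the published model; the sample question text is our own hypothetical example, while the other names come from Table 1:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    text: str
    stakeholder: str  # "end-user developer" or "governance team"
    subject: str      # "platform" or "application generated"

@dataclass
class Attribute:
    name: str
    questions: List[Question] = field(default_factory=list)

@dataclass
class SubCharacteristic:
    name: str
    attributes: List[Attribute] = field(default_factory=list)

@dataclass
class Characteristic:
    name: str
    criterion: str  # Functional, Cost and benefit, Vendor, or Software quality
    sub_characteristics: List[SubCharacteristic] = field(default_factory=list)

# One abbreviated branch of the model (names taken from Table 1).
collaboration = Characteristic(
    name="Collaboration",
    criterion="Functional",
    sub_characteristics=[SubCharacteristic(
        name="Collaboration Technology",
        attributes=[Attribute(
            name="Shareability",
            questions=[Question(
                text="Can artifacts be shared with other developers?",  # hypothetical
                stakeholder="end-user developer",
                subject="platform")])])])
```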
Functional characteristics
Functionality is the capability of the software product to provide functions that meet stated and implied needs when the software is used under specified conditions. Many possible related sub-characteristics are already covered elsewhere in our model, so we selected only the main target application domain, that is, the functional area(s) for which the software is especially oriented or strong (Jadhav and Sonar, 2009). We used the taxonomy provided by Richardson and Rymer (2016). Collaboration is the ability to edit documents synchronously (Iacob, 2011). When a tool meets the heuristics, it indicates that there are few or no usability errors for collaborative development (Baker et al., 2002). That said, end-user developers develop solutions for themselves and less frequently for their peers, and reuse software in an unplanned fashion. It is also expected that technologies support collaboration between professional software developers and end-user developers, or among end-user developers (Ko, 2011). Thus, we selected Collaboration Technology as a sub-characteristic, further derived into 5 attributes to understand to what extent the technology provides support (Baker et al., 2002; Iacob, 2011): 1) Shareability; 2) Coordination of actions; 3) Consequential communication; 4) Finding collaborators and establishing contact; and 5) Concurrent protection.
Data management is the business function of planning for, controlling, and delivering data and information assets (Cupoli, Earley, and Henderson, 2009). In the context of end-user development, it is important to ensure that the platform's data and database capabilities can evolve while the application is developed (Sadalage and Fowler, 2016), and that the platform can send data to and retrieve data from external systems and databases (Mika, 2006). Data processing is one of the most common tasks performed by an end-user, and data management should be simplified in technical terms to ensure simplicity in development (Doll and Torkzadeh, 1988).
Cost and benefit & Vendor characteristics
We adopted all cost attributes from the literature review conducted by Jadhav and Sonar (2009). Benefits are covered by other characteristics in our model. The selected cost attributes are: 1) License cost of the product in terms of number of users; 2) Cost of training users of the system; 3) Cost of installation and implementation of the product; 4) Maintenance cost of the product; and 5) Cost of upgrading the product when new versions are launched.
For general vendor characterization, Jadhav and Sonar (2009) found a number of vendor attributes, some of which are already covered by our method (e.g., user manual and tutorial are covered by the usability characteristic). We thus selected three essential attributes for characterizing a vendor: 1) Experience of the vendor in developing the software product; 2) Popularity of the vendor's product in the market; and 3) Number of installations of the software package. Vendor dependency/switching costs are a consequence of a buyer switching between alternative suppliers of essentially the same product. Large switching costs can make buyers reluctant to switch suppliers (Greenstein, 1997). The more dependent an organization is on a vendor, the higher the probability of incurring large switching costs. Long terms and strong dependency imply more cost and less ability to innovate through Information Technology, as the company is locked in with a specific IT supplier. We argue that anticipating the vendor dependency analysis will increase an
organization's ability to understand possible future switching costs and to reflect on the trade-offs that dependency brings to a company's innovativeness.
Software quality characteristics
Compatibility is the degree to which two or more systems or components can exchange information and/or perform their required functions while sharing the same hardware or software environment (ISO/IEC 25023, 2011). Four compatibility attributes were considered: 1) Technical knowledge requirement; 2) Data exchangeability; 3) Connectivity with external component/system; and 4)
Reusability. Maintainability is the capability of the software product to be modified. Modifications may
include corrections, improvements or adaptation of the software to changes in the environment, and in
requirements and functional specifications (ISO/IEC 25023, 2011). Two main attributes were considered:
1) Modifiability and 2) Reusability.
Usability is the capability of the software product to be understood, learned, used, and attractive to the user when used under specified conditions (ISO/IEC 25023, 2011). Fifteen attributes were considered
in terms of usability: 1) Visibility of system status; 2) Match between system and the real world; 3) User
control and freedom; 4) Consistency and Standards; 5) Help for users to recognize, diagnose, and recover
from errors; 6) Error prevention; 7) Recognition rather than recall; 8) Flexibility and minimalist design;
9) Aesthetic and minimalist design; 10) Help and documentation; 11) Skills; 12) Pleasurable and
respectful interaction with the user; 13) Privacy; 14) Accessibility; and 15) Localization.
Reliability is the capability of the software product to maintain a specified level of performance when used
under specified conditions (ISO/IEC 25023, 2011). Two reliability attributes were considered: 1)
Availability; and 2) Vendor support. Performance Efficiency is the capability of the software product to
provide appropriate performance, relative to the amount of resources used, under stated conditions
(ISO/IEC 25023, 2011). Two attributes were considered: 1) Response time; and 2) Turnaround time.
Security is the capability of the software product to protect information and data so that persons or other
products or systems have the degree of data access appropriate to their types and levels of authorization
(ISO/IEC 25023, 2011). Six security attributes were considered: 1) Access behaviors; 2) Security
behaviors; 3) Update behaviors; 4) File upload Security, 5) Report behaviors; and 6) Security algorithms.
Points-of-view
We used two variables to define the points-of-view: stakeholder and technology. From a stakeholder perspective, our model has questions directed at end-user developers and at the organization's governance team. From a technology perspective, the questions focus either on the software package being evaluated (the platform) or on its output (the application generated).
Evaluating two platforms with the proposed method: Results and
Discussion
We applied our technology evaluation method to analyze two platforms. The research team has a Software Engineering background, both in academia and industry. To report the overall steps, we structure the stages using the sequence proposed by Jadhav and Sonar (2009), and we explain how we conducted each step in the EUD context. The last two proposed stages (negotiating a contract, and purchasing and implementing the software package) are outside the scope of this study.
Determining the need, including a high-level investigation of software features and capabilities provided by vendors. Literature and market research helped us identify the available EUD technologies. The first criterion for forming a list of candidate tools was market analysis. Reports such as the one provided by Richardson and Rymer (2016) help to obtain an overall picture of the market, but they need to be updated regularly, as the software market is always changing. Between August 2016 and October 2016 we carried out a literature review and contacted the leaders of public and private organizations to build a general list of tools. The literature review on tools was comprehensive, iterative, and incremental. We looked for technologies associated with the following search strings:
“EUAD” OR “EUD” OR “citizen development” OR “end-user development” OR “end-user software engineering” OR “Low code” OR “Shadow IT” OR “User-centric” OR “RAD” OR “customer-facing applications” OR “End-user computing” OR “End-user programming”.
| Criteria | Characteristic | Sub-characteristic | References | Point-of-View |
| Functional | Functionality | Main target | (Richardson and Rymer, 2016) | Governance/platform |
| Functional | Collaboration | Collaboration Technology | (Andriessen, 2012); (Baker, Greenberg, and Gutwin, 2002) | End-user developer/platform |
| Functional | Data Management | Data Management | (Sadalage and Fowler, 2016); (Mika, 2006) | End-user developer/platform; Governance/platform |
| Cost and benefit | Cost | Cost | (Jadhav and Sonar, 2009) | Governance/platform |
| Vendor | Vendor | Vendor Characterization | (Greenstein, 1997) | Governance/platform |
| Vendor | Vendor | Vendor Dependency/Switching costs | (Lichtenstein, 2004) | Governance/application |

Table 1: Criteria related to functional, cost & benefits, and vendor characteristics
After a high-level investigation of the results, we narrowed the search space to software packages. This is because there is a wide variety of solutions available to support the end-user, and thus a large combination of analytical methods would be needed to evaluate them all.
Short-listing candidate packages and eliminating the candidate packages that do not have the required features. In this stage, we decided to consider only the most solid market offers. We thus selected two tools based on the market leadership scenario (Richardson and Rymer, 2016): Oracle Apex (Oracle Application Express) and OutSystems (OutSystems Platform).
Using the proposed evaluation technique to evaluate the remaining packages and obtain a score. We applied our technology evaluation method to the two selected platforms. Table 3 presents a summary of the evaluation results obtained. In this study we present only the results consolidated by attribute. Figure 1 illustrates the usability evaluation results separately, since usability comprises 15 attributes. The complete evaluation can be found at https://itrac.github.io/eud_technology_evaluation. We chose heuristic evaluation for this analysis and used four scenarios consisting, basically, of creating an application, with or without the platform's predefined templates.
To perform a heuristic evaluation, 3 evaluators should inspect the interface separately, so 3 members of our research team participated. Only after completing their evaluations did they communicate and aggregate their findings. This procedure is important to ensure independent and unbiased evaluations
(Nielsen and Mack, 1994). During the evaluation sessions, each evaluator went through the interface several times, inspecting the various dialogue elements and comparing them with the list of questions from our model.
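Each attribute is reported as a percentage (see Table 3). As a minimal sketch of how independent evaluators' yes/no answers per question could be consolidated into such a percentage (the exact aggregation rule is our own assumption, not stated in the text):

```python
def attribute_score(evaluations):
    """evaluations: one dict per evaluator, mapping question id -> bool
    (True = the platform satisfies that heuristic question).
    Returns the share of positive answers as a rounded percentage."""
    answers = [ans for ev in evaluations for ans in ev.values()]
    return round(100 * sum(answers) / len(answers))

# Three evaluators inspect the interface separately; their findings are
# aggregated only after each individual evaluation is complete:
evaluator_a = {"q1": True, "q2": False}
evaluator_b = {"q1": True, "q2": True}
evaluator_c = {"q1": True, "q2": False}
print(attribute_score([evaluator_a, evaluator_b, evaluator_c]))  # → 67
```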
| Criteria | Characteristic | Sub-characteristic | References | Attributes | Point-of-View |
| Software quality | Compatibility | Interoperability | (Sherman, 2016); (Srivastava, Sridhar, and Dehwal, 2012); (ISO/IEC 25023, 2011) | Technical knowledge requirement (1 item); Data exchangeability (1 item); Connectivity with external component/system (3 items); Reusability (1 item) | End-user developer/application generated |
| Software quality | Maintainability | Modifiability; Reusability | (ISO/IEC 25023, 2011) | Modifiability (3 items); Reusability (3 items) | End-user developer/application generated; End-user developer/platform |
| Software quality | Usability | Appropriateness recognizability; Learnability; Operability; User error protection; User interface aesthetics; Accessibility | (ISO/IEC 25023, 2011); (Weiss, 1994); (Nielsen and Mack, 1994); (Pierotti, 2004); (IBM Corporation, 2016A); (IBM Corporation, 2016B); (Localization Testing Checklist - A Handy Guide for Localization Testing) | Visibility of System Status (20 items); Match Between System and the Real World (12 items); User Control and Freedom (19 items); Consistency and Standards (26 items); Help Users Recognize, Diagnose, and Recover from Errors (16 items); Error Prevention (8 items); Recognition Rather Than Recall (25 items); Flexibility and Minimalist Design (9 items); Aesthetic and Minimalist Design (10 items); Help and Documentation (18 items); Skills (11 items); Pleasurable and Respectful Interaction with the User (6 items); Privacy (5 items); Accessibility (12 items); Localization (15 items) | End-user developer/platform |
| Software quality | Reliability | Availability | (Banerjee, Srikanth, and Cukic, 2010); (Gray and Siewiorek, 1991); (Lehman, Perry, and Ramil, 1998) | Availability (2 items); Vendor support (5 items) | Governance/platform |
| Software quality | Performance efficiency | Time behavior | (ISO/IEC 25023, 2011) | Response time (2 items); Turnaround time (4 items) | End-user developer/platform |
| Software quality | Security | Integrity; Confidentiality | (Stanton, Stam, Mastrangelo, and Jolton, 2005); (Hausawi, 2016); (ISO/IEC 25023, 2011) | Access behaviors (5 items); Security behaviors (2 items); Update behaviors (2 items); File Upload Security (2 items); Report behaviors (1 item); Security algorithms (3 items) | End-user developer/application generated; End-user developer/platform; Governance/platform |

Table 2: Criteria related to software quality characteristics
From the evaluation results, it is possible to contrast the characteristics of both platforms and, depending on the organization's priorities, to rank them. Without any prioritization, we can interpret that OutSystems has the best chance of fulfilling end-user developers' requirements, as it scored better than Oracle Apex.
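As a sketch of how such a prioritized ranking might work, the percentage attributes of Table 3 can be combined through a simple weighted scoring; the weights below are hypothetical, chosen only for illustration:

```python
# Hypothetical organizational priorities (weights sum to 1).
weights = {"Turnaround time": 0.5, "Access behaviors": 0.3, "Update behaviors": 0.2}

# Attribute scores (percent) taken from Table 3.
scores = {
    "Oracle Apex": {"Turnaround time": 50, "Access behaviors": 80, "Update behaviors": 100},
    "OutSystems":  {"Turnaround time": 100, "Access behaviors": 60, "Update behaviors": 50},
}

def weighted_score(platform):
    # Weighted sum of the platform's attribute percentages.
    return sum(weights[attr] * scores[platform][attr] for attr in weights)

ranking = sorted(scores, key=weighted_score, reverse=True)
print(ranking)  # → ['OutSystems', 'Oracle Apex']
```

With a different weight vector, for example one emphasizing the security attributes, the ranking could invert, which is precisely why prioritization matters.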
Pilot testing the tool in an appropriate environment. We performed this stage in parallel with the previous one, as we used four scenarios to create applications on both platforms, simulating the behavior of an end-user developer. This stage was fundamental for refining the proposed model and for removing, rewriting, and adding questions/metrics. The evaluation model and the platform evaluation results presented in this work are already the result of a second evaluation iteration. The successful selection of tools and technologies for end-user developers is highly dependent on the context in which the end-user is embedded, such as business domain characteristics, the organization's culture, and the end-user's motivation to apply or develop technical skills (Fischer, 2004). One limitation of our study is that we did not evaluate the platforms in a real-world scenario. To address this limitation, we developed four common scenarios of simple information systems that create, read, update, and delete information (CRUD scenarios).
| Characteristic | Attributes | Oracle Apex | OutSystems |
| Functionality | Application domain | Database; data analysis; graphics generation; employer's control; calendar; data mining; spatial database; responsive interfaces | General-purpose; process-based development; Web and mobile applications; library with more than 100 base interfaces; deploy control tool |
| Compatibility | Technical knowledge requirement | Advanced | Advanced |
| Compatibility | Data exchangeability | N/A | 100% |
| Compatibility | Connectivity with external component/system | RPC; Service Oriented Integration; Messaging | RPC call; Message passing; Software service |
| Compatibility | Reusability | 100% | 100% |
| Maintainability | Modifiability | 100% | 100% |
| Maintainability | Reusability | Possible | Possible |
| Reliability | Availability | N/A | N/A |
| Reliability | Vendor support | Avg. time for new release: 179.58; Avg. fixes in each release: 73.46 | Avg. time for new release: 20.30; Avg. fixes in each release: 8.76 |
| Performance efficiency | Response time | 100% | 100% |
| Performance efficiency | Turnaround time | 50% | 100% |
| Security | Access behaviors | 80% | 60% |
| Security | Security behaviors | 100% | 100% |
| Security | Update behaviors | 100% | 50% |
| Security | File Upload Security | 100% | 100% |
| Security | Report behaviors | 100% | 100% |
| Security | Security algorithms | 67% | 100% |
| Cost | License cost | $164,839.00 | $2,072,601.74 |
| Cost | Maintenance cost | N/A | N/A |
| Vendor | Contract dependency | 100% | 100% |
| Vendor | Technology dependency | 100% | 0% |
| Collaboration | Shareability | 100% | 100% |
| Collaboration | Coordination of actions | 100% | 100% |
| Collaboration | Consequential communication | 60% | 60% |
| Collaboration | Finding collaborators and establishing contact | 75% | 0% |
| Collaboration | Concurrent protection | 0% | 50% |
| Data Management | Data input and output | Input: TXT, XML, CSV, SQL, REST, SOAP. Output: CSV, REST. | Input: CSV, TXT, XML, XLS, JSON, SQL, SOAP, REST. Output: CSV, REST. |
| Data Management | Required technical knowledge | 33% | 33% |

Table 3: Summary of Oracle Apex and OutSystems evaluation results
Conclusion and Future Work
We propose here a method for evaluating end-user development (EUD) technologies, based on extensive literature research. We also presented the evaluation of two platforms using four scenarios; these evaluations improved and refined the model. This paper sheds light on under-researched questions related to the end-user development context in general, and to EUD technology evaluation in particular. The major original contributions of this paper are (1) a detailed method for evaluating EUD technologies that comprises 11 characteristics, 20 sub-characteristics, 30 attributes, and 300 questions/metrics, and (2) examples of two evaluations using our method against leading EUD platforms in the market.

Figure 1: Usability analysis result of the two platforms
This work points to several promising future studies. The next step is to refine the method through
real-world scenarios and to evolve the model to assist decision makers with the evaluation and selection
of software packages, using evaluation techniques such as the analytic hierarchy process or the weighted
scoring method (Jadhav and Sonar, 2011). There is also an avenue for automating parts of the evaluation,
for instance by using dynamic application security testing tools to support the evaluation of security
characteristics. We further plan to draw on existing research on user requirements determination to
improve end-user developer acceptance and success, and consequently the success of the software package
acquisition. Finally, given the proliferation of technologies, we chose to restrict this first analysis
to tools with a higher degree of maturity: the higher the maturity of a technology, the lower the risk of
adopting it. Emerging technologies, however, are riskier and potentially more innovative; researchers
could examine them in future studies.
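To illustrate, the weighted scoring method mentioned above can be sketched as a simple ranking
computation. The criteria, weights, and scores below are hypothetical examples chosen for illustration,
not values from our evaluation:

```python
# Minimal weighted-scoring sketch for comparing candidate EUD platforms.
# All criteria, weights, and scores are hypothetical illustrations.

def weighted_score(weights, scores):
    """Return the weighted sum of criterion scores (weights are assumed to sum to 1)."""
    return sum(weights[criterion] * scores[criterion] for criterion in weights)

# Hypothetical criterion weights reflecting a decision maker's priorities.
weights = {"usability": 0.4, "collaboration": 0.3, "data_management": 0.3}

# Hypothetical normalized scores (0.0 to 1.0) for two candidate platforms.
candidates = {
    "Platform A": {"usability": 0.8, "collaboration": 0.9, "data_management": 0.6},
    "Platform B": {"usability": 0.7, "collaboration": 0.5, "data_management": 0.9},
}

# Rank candidates from highest to lowest aggregate score.
ranking = sorted(candidates,
                 key=lambda name: weighted_score(weights, candidates[name]),
                 reverse=True)

for name in ranking:
    print(f"{name}: {weighted_score(weights, candidates[name]):.2f}")
```

More elaborate techniques such as the analytic hierarchy process additionally derive the weights
themselves from pairwise comparisons of criteria, but the aggregation step remains of this form.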
REFERENCES
Andriessen, J. (2012). Working with Groupware: Understanding and Evaluating Collaboration Technology. Computer
Supported Cooperative Work. Springer-Verlag London.
Baker, K., Greenberg, S., and Gutwin, C. (2002). “Empirical development of a heuristic evaluation methodology for
shared workspace groupware.” Proceedings of the 2002 ACM Conference on Computer Supported Cooperative
Work, CSCW ’02. ACM Press, USA, 96.
Banerjee, S., Srikanth, H., and Cukic, B. (2010). “Log-Based Reliability Analysis of Software as a Service (SaaS).”
IEEE 21st International Symposium on Software Reliability Engineering. IEEE, 239–248.
Costabile, M. F. et al. (2005). “A Meta-Design Approach to End-User Development.” Proceedings of the 2005 IEEE
Symposium on Visual Languages and Human-Centric Computing (VLHCC ’05), 308–310.
Cupoli, P., Earley, S., and Henderson, D. (2009). DAMA - Data Management Book of Knowledge. 1st ed. Technics
Publications, LLC, Post Office Box 161 Bradley Beach, NJ 07720.
Damsgaard, J. and Karlsbjerg, J. (2010). “Seven Principles for Selecting Software Packages.” Communications of the
ACM 53.8, 55–62.
Davis, F. D. (1989). “Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology.”
MIS Quarterly 13.3, 319.
Doll, W. J. and Torkzadeh, G. (1988). “The Measurement of End-User Computing Satisfaction.” MIS Quarterly 12.2,
259–274.
Fischer, G. et al. (2004). “Meta-design.” Communications of the ACM 47.9, 33.
Franch, X. and Carvallo, J. P. (2003). “Using Quality Models in Software Package Selection.” IEEE Software 20.1,
34–41.
Golota, H. Localization Testing Checklist - A Handy Guide for Localization Testing. URL:
https://www.globalme.net/blog/localization-testing-checklist (visited on 03/01/2017).
Gray, J. and Siewiorek, D. (1991). “High-availability computer systems.” Computer 24.9, 39–48.
Greenstein, S. M. (1997). “Lock-in and the Costs of Switching Mainframe Computer Vendors: What Do Buyers
See?” Industrial and Corporate Change 6.2, 247.
Harnisch, S. (2014). “Enterprise-level Packaged Software Acquisition: a Structured Literature Review Through the
Lens of IT Governance.” European Conference on Information Systems (ECIS) 2014.
Hausawi, Y. M. (2016). “Current Trend of End-Users’ Behaviours Towards Security Mechanisms.” Springer, Cham,
140–151.
Iacob, C. (2011). “Design Patterns in the Design of Systems for Creative Collaborative Processes.” Lecture Notes in
Computer Science. Springer Berlin Heidelberg, 359–362.
IBM Corporation (2016A). IBM Accessibility Checklist for Web and Web-Based Documentation.
IBM Corporation (2016B). IBM Accessibility Checklist for Software.
ISO/IEC 25023 (2011). Systems and software engineering – Systems and software Quality Requirements and
Evaluation (SQuaRE) – Measurement of system and software product quality. Technical Report.
Jadhav, A. S. and Sonar, R. M. (2009). “Evaluating and Selecting Software Packages: A Review.” Information and
Software Technology 51.3, 555–563.
Jadhav, A. S. and Sonar, R. M. (2011). “Framework for Evaluation and Selection of the Software Packages: A Hybrid
Knowledge Based System Approach.” The Journal of Systems and Software 84.8, 1394–1407.
Ko, A. J. et al. (2011). “The State of the Art in End-user Software Engineering.” ACM Computing Surveys (CSUR)
43.3, 21:1–21:44.
Lehman, M., Perry, D., and Ramil, J. (1998). “Implications of evolution metrics on software maintenance.”
Proceedings of the International Conference on Software Maintenance (Cat. No. 98CB36272), 208–217.
Lichtenstein, Y. (2004). “Puzzles in Software Development Contracting.” Communications of the ACM 47.2, 61–65.
Lieberman, H., Paterno, F., Klann, M., and Wulf, V. (2006). “End-user development: An emerging paradigm.” In End
User Development, 1–8.
McGill, T. (2004). Advanced Topics in End User Computing. Vol. 3. IGI Global.
Mika, S. (2006). Five Steps to Database Integration. URL: http://www.government-
fleet.com/article/story/2006/03/five-steps-to-database-integration.aspx (visited on 12/12/2016).
Misra, H. (2017). “Managing User Capabilities in Information Systems Life Cycle: Conceptual Modelling.”
International Journal of Information Science and Management 15.1, 39–58.
Misra, H. and Mohanty, B. (2003). The IT-acquisition models and user’s perspective: a review. Tech. rep.
Montazemi, A. R., Cameron, D. A., and Gupta, K. M. (1996). “An Empirical Study of Factors Affecting Software
Package Selection.” Journal of Management Information Systems 13.1, 89–105.
Nielsen, J. and Mack, R. L. (1994). Usability Inspection Methods. Wiley.
Nielsen, J. (2012). Usability 101: Introduction to Usability.
Oracle. Oracle Application Express. URL: https://apex.oracle.com/en (visited on 03/01/2017).
Outsystems. Outsystems Platform. URL: https://www.outsystems.com (visited on 03/01/2017).
Pierotti, D. (2004). Heuristic Evaluation - A System Checklist. URL:
ftp://ftp.cs.uregina.ca/pub/class/305/lab2/example-he.html (visited on 03/01/2017).
Richardson, C. and Rymer, J. R. (2016). The Forrester Wave™: Low-Code Development Platforms. URL:
http://agilepoint.com/wp-content/uploads/Q2-2016-Forrester-Low-Code.pdf (visited on 12/2016).
Sadalage, P. and Fowler, M. (2016). Evolutionary Database Design. URL:
https://martinfowler.com/articles/evodb.html (visited on 02/20/2017).
Sherman, R. (2016). How to evaluate the features of data integration products. URL:
http://searchdatamanagement.techtarget.com/feature/How-to-evaluate-the-features-of-a-data-integration-
product (visited on 10/24/2016).
Srivastava, K., Sridhar, P. S. V. S., and Dehwal, A. (2012). “Data Integration Challenges and Solutions: A Study.”
International Journal of Advanced Research in Computer Science and Software Engineering 2.7, 34–37.
Stanton, J. M., Stam, K. R., Mastrangelo, P., and Jolton, J. (2005). “Analysis of end user security behaviors.”
Computers & Security 24.2, 124–133.
Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). “User Acceptance of Information Technology:
Toward a Unified View.” MIS Quarterly 27.3, 425–478.
Weiss, E. (1994). Making Computers People-Literate. 1st ed. Jossey-Bass Inc., Publishers.