Developing a Framework to Help Educators Select and
Use Mobile Apps in the Classroom
Robin Kay
Professor
University of Ontario Institute of Technology
Oshawa, Canada
robin.kay@uoit.ca
Abstract: In 2017, the projected number of educational mobile apps available was estimated at 750,000, making the selection process incredibly challenging for educators. The purpose of this paper was to develop a coherent model for selecting and guiding the use of mobile apps. An emergent content analysis was conducted on 11 previous classification schemes to produce a coherent, robust framework consisting of eight primary categories: instructive, practice-based, metacognitive, constructive, productive, communication, collaboration, and game-based/augmented reality apps. Clear definitions, educational goals, and examples are presented for each category.
Introduction
An educational app is a software application that works on a mobile device and is designed to support learning (Bouck, Satsangi, & Flanagan, 2016; Papadakis, Kalogiannakis & Zaranis, 2017). The projected number of apps available in 2017, both free and paid, was over 5 million, 15% of which focused on education (Statista, 2017; Technavio, 2015). This means that there were roughly 750,000 apps available in the domain of education. It is not surprising, then, that educators find choosing appropriate apps a daunting task (Bouck et al., 2016; Papadakis et al., 2017; Alon, An & Fuentes, 2015). Consequently, there is a clear need for a coherent model to organize and select educational apps.
Over the past seven years, at least 11 papers have proposed classification schemes for educational apps in general education (Chergui, Begdouri & Groux-Leclet, 2017; Cherner, Dix & Lee, 2014), mathematics (Alon et al., 2015; Ebner, 2015; Grandgenett, Harris & Hofer, 2011; Handal, Campbell, Cavanagh & Petocz, 2016), science (Zydney & Warner, 2016), augmented reality (O'Shea & Elliot, 2016), and higher education (Pechenkina, 2017). Over 30 distinct main categories have been identified in these schemes, making it challenging and somewhat confusing for educators to select educational apps efficiently.
Some models organized apps based on only three or four main categories (Cherner et al., 2014; Ebner, 2015; Grandgenett et al., 2011), thereby restricting the classification process. Other models incorporated 14 to 32 subcategories (Alon et al., 2015; Chergui et al., 2017; Grandgenett et al., 2011; Handal et al., 2016; Pechenkina, 2017), leading to an unwieldy and overwhelming classification process. The purpose of this study, then, is to review previous classification schemes and to develop a comprehensive but workable framework for classifying and using educational apps.
Method
Procedure
Several steps were followed to ensure a high-quality review and analysis of the literature on classifying educational apps. First, a comprehensive search of peer-reviewed journals, but not conference papers or reports, was conducted by combining terms for apps (e.g., apps, online tools, web-based tools, learning objects, mobile apps) with descriptors for classification systems (e.g., classification, types, models, frameworks). Numerous databases were examined, including AACE Digital Library, Academic Search Premiere, EBSCOhost, ERIC, Google Scholar, and Scholars Portal Journals. Second, the reference sections of relevant articles were searched in order to find additional papers. Third, key educational and technology journals from around the world were examined independently, including the following publications: Australasian Journal of Educational Technology, British Journal of Educational Technology, Canadian Journal of Learning and Technology, Computers and Education, Computers in Human Behavior, Educational Technology, Journal of Computer Assisted Learning, Journal of Educational Computing Research, and the Turkish Online Journal of Distance Education. The search process uncovered 11 peer-reviewed articles published from 2011 to 2017. Each paper was analyzed based on subject domain (if applicable), classification categories, and category description.
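To illustrate the search-term combination step described above, the following minimal Python sketch enumerates paired query strings from the two term lists; the exact strings and Boolean syntax used in the actual database searches are assumptions for illustration only.

# Minimal sketch of enumerating combined search queries (illustrative only).
from itertools import product

app_terms = ["apps", "online tools", "web-based tools", "learning objects", "mobile apps"]
classification_terms = ["classification", "types", "models", "frameworks"]

# Pair every app term with every classification descriptor.
queries = [f'"{a}" AND "{c}"' for a, c in product(app_terms, classification_terms)]

for query in queries:
    print(query)  # e.g., "mobile apps" AND "classification"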
Results
Domain
The content domain for a classification scheme can have an impact on the type of categories included. Four of the eleven papers reviewed focused on classifying general education apps (Chergui et al., 2017; Cherner et al., 2014; Murray & Olcese, 2011; Orehovacki & Bubas, 2012), four targeted math-based apps (Alon et al., 2015; Ebner, 2015; Grandgenett et al., 2011; Handal et al., 2016), and the remaining three looked at augmented reality (O'Shea & Elliot, 2016), higher education administrative apps (Pechenkina, 2017), and science (Zydney & Warner, 2016). General classification schemes focused, somewhat predictably, on a wide range of categories including collaboration, communication, exploration, knowledge organization, knowledge building, instructional tools, metacognition, skills, and producing artefacts (Chergui et al., 2017; Cherner et al., 2014; Murray & Olcese, 2011; Orehovacki et al., 2012). An amalgamation of these four general classification schemes might have produced a more comprehensive framework, but individually each proposed model was lacking.
Regarding math and science-based apps, Alon et al. (2015) captured the typical learning practices and activities for these domains with seven primary categories: making sense of information, practicing techniques, interpreting and exploring concepts, producing artefacts and representations, applying math to the real world, evaluation, and creating products and resources. A reasonable case could be made for developing domain-specific classification systems, if only to reduce the complexity and volume for educators. However, non-traditional math and science apps targeting collaboration or game-based learning, for example, might be omitted if the classification system were too narrow (Ebner, 2015).
Pechenkina's (2017) assessment of higher education mobile apps is an example of a classification scheme that might be too narrow. She developed a framework consisting of 13 primary and 19 secondary categories; however, the classification labels appeared to be based exclusively on content and focused on administrative as opposed to instructional and learning activities. On the other hand, O'Shea and Elliot's (2016) specialized classification of augmented reality mobile apps is narrow but may be appropriate because of the nuances of these tools. They looked at triggers and content to provide information in an engaging way, content delivery and assessment, and learning in an authentic real-world context. These categories cover the current features of typical augmented-reality apps. Collaboration, metacognition, and knowledge construction, while appropriate categories for other types of apps, do not currently fall into the realm of augmented-reality tools. Consequently, O'Shea and Elliot's (2016) relatively narrow framework works and may be relatively efficient for educators interested in using augmented reality apps.
Classification Categories
Next, an emergent content analysis was completed based on the descriptions of categories offered for each classification scheme. After reviewing 47 main categories and 125 sub-categories, a table was created consisting of eight main categories (instructive, practice-based, metacognitive, constructive, productive, communication, collaboration, game-based/augmented reality) and the defining criteria (Table 1).
Productive (n=7), instructive (n=6), and practice-based (n=6) apps were the most frequently cited mobile app categories. Communication (n=3), collaboration (n=3), and game-based/AR apps were mentioned least often. No one framework incorporated more than five of the eight general categories, with most including only three (Cherner et al., 2014; Ebner, 2015; Zydney & Warner, 2016; O'Shea & Elliot, 2016; Orehovacki & Bubas, 2012) or four (Alon et al., 2015; Grandgenett et al., 2011; Handal et al., 2016; Murray & Olcese, 2011). While some models included a limited number of app categories, others included 19 to 31 sub-categories (Alon et al., 2015; Chergui et al., 2017; Grandgenett et al., 2011; Pechenkina, 2017), making it difficult and inefficient for educators and researchers to classify mobile apps accurately. Finally, researchers varied with respect to providing coherent, comprehensive classification models. Some studies presented no coherent thread within their frameworks, offering somewhat random category labels (Ebner, 2015; Pechenkina, 2017; Murray & Olcese, 2011). Others relied on more traditional and passive forms of direct instruction when organizing and describing their frameworks (Chergui et al., 2017; Cherner et al., 2014). The majority of studies developed classification schemes that incorporated more practice-based and constructive models of learning (Alon et al., 2015; Grandgenett et al., 2011; Handal et al., 2016; Zydney & Warner, 2016).
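The frequency counts reported above (e.g., productive, n=7) reflect a simple tally of which of the eight general categories each reviewed framework covers. The short Python sketch below shows one way such a tally could be reproduced; the study-to-category mapping is hypothetical and does not reproduce the actual coding of the 11 papers.

# Hypothetical coding of reviewed frameworks; the real coding covered all 11 papers.
from collections import Counter

framework_categories = {
    "Cherner et al. (2014)": ["instructive", "practice-based", "productive"],
    "Ebner (2015)": ["instructive", "practice-based", "productive"],
    "Alon et al. (2015)": ["practice-based", "constructive", "productive", "communication"],
    # ... remaining frameworks would be coded the same way
}

# Count how many frameworks mention each of the eight categories.
counts = Counter(category
                 for categories in framework_categories.values()
                 for category in categories)

for category, n in counts.most_common():
    print(f"{category}: n={n}")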
Table 1. Descriptors for Eight Mobile App Categories
Category Descriptors from Reviewed Studies
Instructive access to content, comprehension support, content delivery, memorization, retrieving factual
information, tutor
Practice-Based drill-and-practice, stand-alone practice, feedback, recall, evaluation, quizzes, testing, learning
analytics for teacher, acquiring new skills, practicing techniques, strategy-based problem
solving, applying math to the real-world
Metacognitive time management, self-monitoring, setting learning goals, planning, learning management,
compare and contrast strategies
Constructive categorize information, collect data and making calculations, compare and contrast, consider
a problem, develop an argument, estimate, explore with manipulatives, generalize
relationships, interpret and explore concepts/representations, investigate, knowledge
construction, knowledge organization, make conjectures, make sense of information,
recognize patterns, test a solution
Productive producing artefacts and representations, creating products and resources, content creation,
creating visual organizers, notes, documents, presentations, applying, creating, graphically
representing objects, creating multimedia, tools, creation of communal artefacts, sketch and
video creation
Communication social learning, communication
Collaboration interaction, collaboration, sharing information, creation of communal artefacts, engage with
peers, teachers and experts about concepts process and practices, learning communities
Game-Based / Augmented-Reality motivation, game-based learning, gamification, content delivery and assessment in authentic environment, games and simulations
Category Descriptions
An effective app classification scheme should offer a coherent organization and presentation of category descriptions, clear and unique definitions for each category, educational goals, and specific app examples that are relatively current. Six of the eleven studies reviewed presented descriptions in a tabular or visual format, making it relatively easy to contrast and compare app categories (Alon et al., 2015; Chergui et al., 2017; Cherner et al., 2014; Grandgenett et al., 2011; Orehovacki & Bubas, 2012).
Only three studies offered category descriptions that were clear and distinct from each other (Alon et al., 2015; Chergui et al., 2017; Handal et al., 2016). Cherner et al. (2014) offered partial descriptions; however, some of the links to Bloom's Taxonomy activities did not match the app category descriptions. Grandgenett et al. (2011) and Orehovacki et al. (2012) articulated relatively clear category descriptions, but there appeared to be considerable overlap among categories. At least five studies offered category labels but limited or no category descriptions, making it challenging to use and assess the proposed classification systems (Ebner, 2015; Zydney & Warner, 2016; O'Shea & Elliot, 2016; Pechenkina, 2017; Murray & Olcese, 2011).
While the majority of studies offered clear app examples for each of the app categories (Alon et al., 2015; Cherner et al., 2014; Ebner, 2015; Grandgenett et al., 2011; Zydney & Warner, 2016; Pechenkina, 2017; Orehovacki & Bubas, 2012), only five studies offered current samples of mobile apps (Cherner et al., 2014; Ebner, 2015; Zydney & Warner, 2016; Pechenkina, 2017; Orehovacki & Bubas, 2012). Several studies illustrated app categories with dated examples such as website content, graphing calculators, and stand-alone software tools (Alon et al., 2015; Grandgenett et al., 2011), while some researchers offered no examples at all (O'Shea & Elliot, 2016).
In order to facilitate the use of the categories that emerged from the analysis, concise definitions were created
based on an amalgamation of the category descriptions presented in the 11 reviewed papers. Table 2 offers
definitions for each of the eight app categories identified in the study.
Table 2. Definition of Categories
Category Definition
Instructive The primary purpose of this type of app is to teach a student a new concept or provide
tutoring/training. These apps tend to guide students by providing organized, step-by-
step, systematic scaffolding. Testing of understanding is typically not available.
Practice-Based Practice-based apps are designed to help students learn content and apply specific
skills. Students are tested for their understanding.
Metacognitive Metacognitive apps focus on goal setting, planning and execution, reasoning,
problem-solving, working memory, and organization.
Constructive Constructive apps focus on exploration, making sense of new information, skill
acquisition and data management, and the active manipulation of ideas and concepts.
Productive Productive or tool-based apps are used to produce artefacts or create products.
Typically, productive apps would be used as a culminating activity to demonstrate and
apply key knowledge, concepts, and understanding.
Communication Communicative apps, including a wide array of social media tools, allow students to
communicate and exchange ideas with their peers in a variety of ways, anytime,
anyplace.
Collaboration Collaboration apps allow students to work with others to create questions, discuss ideas,
explore problems and solutions, complete tasks, and reflect.
Game-Based / Augmented-Reality Game-based apps involve learning and practicing concepts while playing games. In a
typical education-based game, students are exposed to challenging activities
structured with a narrative, rules, goals, progression and rewards.
While clear definitions are helpful, teachers require direction with respect to the expected pedagogical tasks
associated with using specific apps. Table 3 provides a list of the typical educational activities and learning
outcomes linked to each of the eight app categories.
Table 3. Educational Activities for App Categories
Category Educational Activity
Instructive Direct instruction, explanation, or presentation of material
Practice-Based Practice content-knowledge, understanding and skills
Metacognitive Develop metacognitive skills
Constructive Construct and build understanding/knowledge by organizing, categorizing, exploring,
interpreting, testing, comparing
Productive Produce artefacts that demonstrate or help consolidate knowledge, understanding and
skill acquisition
Communication Communicate, discuss, and debate ideas
Collaboration Work with others to share, discuss, and resolve issues or to co-create artefacts
demonstrating knowledge, skills, and understanding
Game-Based / Augmented-Reality Challenging activity, structured with rules, goals, progression and rewards. Safe space
for experimentation, mistake-making, and creativity. Learn through experience and
reflection
In addition to clear definitions and educational activities, examples are helpful for instructors to fully understand
the eight categories in the proposed framework. Table 4 provides sample apps for each of the eight categories that
emerged from the literature review.
Table 4. Examples of Apps by Category
Category Sample Apps
Instructive Blinkist, TED Talks, Starfall, wikiHow
Practice-Based Duolingo, Khan Academy, Quizlet, TED-Ed
Metacognitive Google Calendar, Mindomo, myHomework, PearlTrees
Constructive Desmos, Gizmos, Pocket Code, Storybird
Productive Google Docs, Piktochart, Thinglink, Weebly
Communication Google Classroom, edublogs, Padlet, Seesaw
Collaboration Google Slides, Peergrade, Slack
Game-Based / Augmented-Reality Lure of the Labyrinth, Minecraft, Prodigy
Conclusions and Future Research
This study reviewed and combined 11 previous mobile app classification schemes in order to develop well-defined, practical, and manageable categories for educators. Concise definitions with educational activities and clear examples were provided to guide teachers in the selection and use of mobile apps. However, a robust classification scheme should also provide statistical estimates of reliability and validity for its proposed constructs. Reliability, in the form of inter-rater agreement, helps establish clarity and consistency when assigning categories to specific mobile apps. Validity helps determine whether a specific app category is distinct and unique from the other app classifications. Only two of the eleven studies examined mentioned inter-rater agreement or reliability for their respective mobile app categories (Cherner et al., 2014; Handal et al., 2016). Cherner et al. (2014) offered the only classification scheme that presented inter-rater agreement values for its categories, ranging from 0.05 to 1.00. Only one of the eleven studies reviewed attempted, albeit unsuccessfully, to establish the construct validity of the projected app categories (Handal et al., 2016). Consequently, the next step is to establish reliability and validity for the eight categories proposed in the selection framework.
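As a concrete next step, inter-rater agreement for the eight categories could be estimated with Cohen's kappa. The sketch below assumes two raters independently assigning a shared sample of apps to one category each and uses scikit-learn's cohen_kappa_score; the ratings shown are invented for illustration only.

# Illustrative inter-rater agreement check using Cohen's kappa (invented ratings).
from sklearn.metrics import cohen_kappa_score

categories = ["instructive", "practice-based", "metacognitive", "constructive",
              "productive", "communication", "collaboration", "game-based/AR"]

# Each rater assigns every sampled app to exactly one of the eight categories.
rater_1 = ["instructive", "practice-based", "productive", "collaboration", "game-based/AR"]
rater_2 = ["instructive", "practice-based", "constructive", "collaboration", "game-based/AR"]

kappa = cohen_kappa_score(rater_1, rater_2, labels=categories)
print(f"Cohen's kappa: {kappa:.2f}")

By conventional benchmarks, values above roughly 0.6 are usually interpreted as substantial agreement, which would be a reasonable target for the eight proposed categories.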
Finally, the merits of an effective classification system are strengthened by research evidence. However, only one of the eleven studies reviewed for this paper included research studies to support the design and creation of app categories (Zydney & Warner, 2016). Zydney and Warner (2016) conducted an extensive review of 37 articles focusing on science-based apps to support their final app classification categories. To date, then, the majority of previous app classification schemes are not backed by empirical evidence. Therefore, the framework proposed in this study needs to be assessed with qualitative and quantitative research data in the classroom with teachers and students.
References
Alon, S., An, H., & Fuentes, D. (2015). Teaching mathematics with Tablet PCs: A professional development program targeting primary school teachers. In Christou, G., Maromoustakos, S., Mavrou, K., Meletiou-Mavrotheris, M., & Stylianou, G. (Eds.), Tablets in K-12 education: Integrated experiences and implications (pp. 175-197). Hershey, PA: IGI Global.
Bouck, E. C., Flanagan, S., & Bouck, M. (2015). Learning area and perimeter with virtual manipulatives. Journal of Computers in Mathematics and Science Teaching, 34(4), 381-393.
Chergui, O., Begdouri, A., & Groux-Leclet, D. (2017). A classification of educational mobile use for learners and
teachers. International Journal of Information and Education Technology, 7(5), 324-330.
Cherner, T., Dix, J., & Lee, C. (2014). Cleaning up that mess: A framework for classifying educational apps.
Contemporary Issues in Technology and Teacher Education, 14(2), 158-193. Retrieved from
https://citejournal.s3.amazonaws.com/wp-content/uploads/2016/04/v14i2general1.pdf
Ebner, M. (2015). Mobile applications for math education – how should they be done? In Crompton, H., & Traxler, J. (Eds.), Mobile learning and mathematics: Foundations, design, and case studies (pp. 20-32). New York: Routledge.
Grandgenett, N., Harris, J., & Hofer, M. (2011). An activity-based approach to technology integration in the
mathematics classroom. NCSM Journal of Mathematics Education Leadership, 13(1), 19–28.
Handal, B., Campbell, C., Cavanagh, M., & Petocz, P. (2016). Characterising the perceived value of mathematics
educational apps in preservice teachers. Mathematics Education Research Journal, 28(1), 199–221. doi:
10.1007/s13394-015-0160-0
Murray, O. T., & Olcese, N. R. (2011). Teaching and learning with iPads, ready or not? TechTrends, 55(6), 42-48.
doi: 10.1007/s11528-011-0540-6
Orehovacki, T., Bubas, G., & Kovacic, A. (2012). Taxonomy of web 2.0 applications with educational potential. In Cheal, C., Coughlin, J., & Moore, S. (Eds.), Transformation in teaching: Social media strategies in higher education (pp. 43-72). Santa Rosa, CA: Informing Science Press.
O’Shea, P. M., & Elliot, J. B. (2016). Augmented reality in education: An exploration and analysis of currently
available educational apps. In: Allison C., Morgado L., Pirker J., Beck D., Richter J., Gütl C. (Eds.), Immersive
Learning Research Network. iLRN 2016. Communications in Computer and Information Science, Vol 621 (pp.
Papadakis, S., Kalogiannakis, M., & Zaranis, N. (2017). Designing and creating an educational app rubric for preschool teachers. Education and Information Technologies, 1-19. doi: 10.1007/s10639-017-9579-0
Pechenkina, E. (2017). Developing a typology of mobile apps in Higher Education: A national case-study.
Australasian Journal of Educational Technology, 33(4), 134-146.
Statista (2017). Compound annual growth rate of free and paid education app downloads worldwide from 2012 to
2017. Retrieved from https://www.statista.com/statistics/273971/cagr-of-free-and-paid-education-app-
downloads-worldwide/
Technavio (2015). Global education apps market: Market study 2015-2019. Retrieved from
http://www.reportsnreports.com/contacts/inquirybeforebuy.aspx?name=426935
Zydney, J. M. & Warner, Z. (2016). Mobile apps for science learning: review of research. Computers & Education,
94, 1-17. doi: https://doi.org/10.1016/j.compedu.2015.11.001