
ARTISTS: A virtual Reality culTural experIence perSonalized arTworks System: The “Children Concert” painting case study



The current work aims at designing and implementing the “ARTISTS” Virtual Reality (VR) system, which immerses gallery visitors in a virtual world through the representation of paintings. The paper focuses on providing a personalized Cultural User eXperience (CUX) using the User Personas methodology and on enhancing the user’s perception of an artwork by creating virtual environments that bring the displayed content to life. The proposed framework allows users to interact with virtual objects in real time using gesture recognition techniques handled by the Leap Motion controller, and takes the user’s preferences and profile into consideration, based on the user’s input data, in order to adjust the displayed content. Finally, through UX evaluation methodologies, user feedback has been collected in order to improve the system’s characteristics, based on the famous painting “Children Concert” by Georgios Iakovidis, which belongs to the National Gallery in Athens and can also be seen in digital form at the “Georgios Iakovidis” digital museum on the island of Lesvos.
George Trichopoulos1, John Aliprantis2, Markos Konstantakis3, George
1Researcher, Department of Cultural Technology and Communication, University of
the Aegean, e-mail:
2PhD Candidate, Department of Cultural Technology and Communication, University
of the Aegean, e-mail:
3PhD Candidate, Department of Cultural Technology and Communication, University
of the Aegean, e-mail:
4Assistant Professor, Department of Cultural Technology and Communication,
University of the Aegean, e-mail:
Abstract. In recent years, there has been a constant tendency to integrate modern technologies into mobile
guides and applications in the Cultural Heritage (CH) domain, aiming to enrich the cultural user
experience. Amongst them, Virtual Reality (VR) has been widely used in the digital reconstruction or
restoration of damaged cultural artifacts and monuments, allowing a deeper perception of their
characteristics and unique history. This work presents a VR environment that takes into account the
diverse needs and characteristics of visitors and digitally immerses them into paintings, giving them the
ability to interact directly with their elements through the Leap Motion controller. To test our
proposed system, a mobile prototype application has been designed, focused on the famous painting
“Children Concert” by Georgios Iakovidis, which also integrates the User Personas and
different scenarios depending on the user’s profile.
Keywords: Cultural Heritage; Cultural User Experience; Natural Interaction; User Personas; Virtual Reality
Introduction

In recent years, various works have argued for the positive influence that Augmented
Reality (AR) and Virtual Reality (VR) can have on language studies,
social sciences, mathematics and physics, medical science, art, entertainment,
advertising and marketing (Chang Kuo-En, 2014). According to Chang et al.,
VR and AR technologies promote art appreciation among museum
visitors during a visit: visitors who used those technologies to guide
themselves through a museum learned more about the exhibits compared to visitors who
used conventional (audio) guides or walked freely without any kind of
guidance. A VR guide can boost mental and visual focus on exhibits, achieving a
state of flow (Mihaly Csikszentmihalyi, 1975), which motivates users to seek more
knowledge and extend their visit.
Meanwhile, personalization methods in User Experience (UX) and Cultural User
Experience (CUX) give a new perspective to mobile guides and applications
in Cultural Heritage (CH). Personalization (Antoniou & Lepouras, 2010) is based on
the assumption that a computer system can understand the user’s needs, and its
success relies greatly on the accurate elicitation of the user profile. The main reason
personalization is needed is simple: everyone is unique. Matching the visitor’s experience,
knowledge and demands is a highly challenging task. Capturing
particular personal characteristics, before or during the visit to a cultural site, has been
implemented using several methods: ontologies (Eardley et al., 2016), methodological
approaches (Katz et al., 2014), statistical approaches (Pujol et al., 2012), indirect approaches
that take advantage of social networks like Facebook (Antoniou et al., 2016), or
approaches based on the visitor’s age and behavior.
The current work presents a Virtual Reality interface that digitally represents the world of
paintings, allowing users to interact with a painting’s elements in a 3D
environment. The presented framework also integrates personalization, user personas
(based on the User Personas methodology; Konstantakis et al., 2017)
and context-awareness techniques to improve the user’s experience. In Section 2 we briefly
present the ARTISTS framework, the technologies we used and how we
integrated them into the application, the framework’s architecture, and a use case scenario
with our prototype based on the famous painting “Children Concert” by
Georgios Iakovidis. Finally, in Section 3 we discuss our future work.
ARTISTS Framework
ARTISTS is a mobile application that brings famous paintings to life by digitally
reconstructing their elements in a Virtual Reality environment where users can interact with
3D models. Users immerse themselves in the VR world using their own devices mounted on
a VR headset (Google Cardboard), and then interact with the 3D environment using
gestures captured by the Leap Motion controller, which is attached to the
headset. The proposed interface not only puts users inside a painting, allowing them to
observe and interact with the 3D models from many angles, but also uses various
methodologies (context awareness, personalization and gesture recognition) in order
to enhance the user’s cultural experience.
The ARTISTS prototype has been designed around the famous painting “Children
Concert” by the Greek painter Georgios Iakovidis, which can be found in the National
Gallery in Athens, Greece, and in digital format at the “Georgios Iakovidis” digital gallery
in the village of Hidira, Lesvos. For this painting, seven 3D human models were created,
along with their animations and sounds, corresponding to the seven characters found in
the original painting. The painting’s surrounding space (a bright room with some
furniture) has been digitally reconstructed in a VR environment, taking the
limited resources of mobile devices into consideration.
A prior version of ARTISTS was a mobile application in which users could also
interact with the 3D version of a painting by simply tapping on the mobile device’s screen,
without total immersion in a VR environment. Application settings such as
sound, running scenarios and animations depended on the user’s profile and
interests, a functionality that remains in ARTISTS but is now implemented with more
accurate methodologies.
Technologies used in ARTISTS
Context Awareness
In the design of ARTISTS, we take into consideration parts of the context such as the ambient
noise level, the processing power of the mobile device and the screen resolution, trying to
improve the user’s experience regardless of environmental conditions. In particular, in a
fairly noisy environment (up to a noise level of 50 dB), the sound volume can be increased
by up to 50%, whilst in extremely noisy conditions (noise level above 70 dB) the
application audio is muted to avoid the Lombard effect (Varadarajan &
Hansen, 2006). In a full-scale deployment of ARTISTS, noise levels would
be measured by a sensor network, in accordance with the user’s position in the space.
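The noise-adaptive volume logic described above can be sketched as follows. The 50 dB and 70 dB thresholds come from the text; the function name and the exact scaling in between are our own assumptions, not the actual ARTISTS implementation.

```python
def adjust_volume(base_volume: float, noise_db: float) -> float:
    """Return a playback volume in [0.0, 1.0] for a given ambient noise level (dB)."""
    if noise_db > 70:
        # Extremely noisy conditions: mute to avoid the Lombard effect.
        return 0.0
    if noise_db <= 50:
        # Quiet to fairly noisy: boost the volume by up to 50%.
        boost = 1.0 + 0.5 * (noise_db / 50.0)
        return min(1.0, base_volume * boost)
    # Between 50 and 70 dB: fade linearly from the boosted level towards mute
    # (an illustrative assumption; the paper does not specify this region).
    fade = (70.0 - noise_db) / 20.0
    return min(1.0, base_volume * 1.5) * fade
```

In a full-scale deployment, `noise_db` would come from the sensor network according to the user's position rather than from the device microphone.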
Furthermore, the processing power of the portable device in use is a crucial factor
that can deeply affect the user experience. Insufficient resources could affect the
reproduction of the high-resolution 3D animation and graphics needed to construct the
VR environment, while a low screen resolution could also be a negative factor in displaying
high-resolution graphics. A short background benchmark during application
installation can adjust the application’s settings to the appropriate level based on the
device’s capabilities before the application is initialized, thus avoiding
malfunctions during the user’s experience.
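A minimal sketch of such an installation-time benchmark is given below. The timing loop, the tier thresholds and the setting names are illustrative assumptions; the real application would benchmark GPU-bound rendering rather than a CPU loop.

```python
import time

def benchmark_score(iterations: int = 200_000) -> float:
    """Time a short arithmetic loop as a crude proxy for device capability."""
    start = time.perf_counter()
    acc = 0.0
    for i in range(iterations):
        acc += i * 0.5
    return time.perf_counter() - start

def pick_graphics_settings(elapsed: float, screen_width: int) -> dict:
    """Map benchmark time (seconds) and screen width (pixels) to app settings."""
    if elapsed < 0.05 and screen_width >= 1920:
        return {"texture_quality": "high", "animations": True}
    if elapsed < 0.2:
        return {"texture_quality": "medium", "animations": True}
    return {"texture_quality": "low", "animations": False}
```

Running `pick_graphics_settings(benchmark_score(), width)` once during installation lets the application fix its quality tier before the VR scene is ever loaded.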
Personalized User Experience
In our case, we use the User Personas method, which categorizes users based on their
profile during a museum visit. User Personas (Morris, Hargreaves and McIntyre,
2004) are not real people but avatars created by studying real people’s characteristics.
We use four User Personas, named “Follower”, “Browser”, “Searcher” and
“Researcher”. Followers try to follow any guidance provided by the museum or
cultural site, trying to learn something from it. Browsers will not follow a guide but
wander to every place that looks interesting, and then search for
information about it. Searchers search for and collect detailed information on specific
exhibits or collections, whilst Researchers go a step further, conducting scientific research
on specific exhibits (Konstantakis et al., 2018).
Gesture Recognition and 3D Interaction
Gesture recognition refers to a computer’s ability to understand gestures involving
physical movements of multiple body parts (fingers, arms, hands, head, feet, etc.) and to
execute commands based on the corresponding gesture, thus allowing interaction with
the computer environment. Many gesture recognition approaches suggest that gestures
used as an interaction method between humans can also be successfully applied as a
natural and intuitive way to interact with machines (Ren et al., 2016; Yeo et al., 2015).
In the ARTISTS framework, we use the Leap Motion controller to track the user’s hands and
match their movements to commands in the virtual environment. Since the user’s mobile
device is enclosed in a Google Cardboard-type VR headset, tapping on the
screen is impossible. The Leap Motion API gives us the tools to interact with the app
interface using the hands. Simple tasks such as selecting a character, dragging the volume
slider, selecting from menus and pressing UI buttons can be performed with natural hand
movements in space, in an accurate, intuitive and entertaining way.
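The mapping from recognized gestures to UI commands can be sketched as a simple dispatch table. The gesture names and action names below are hypothetical placeholders; the actual hand and finger tracking is performed by the Leap Motion API, which is not reproduced here.

```python
# Hypothetical gesture-to-command table; the real system receives gesture
# events from the Leap Motion controller attached to the headset.
GESTURE_ACTIONS = {
    "pinch": "select_character",
    "grab": "drag_volume_slider",
    "swipe_right": "next_menu_item",
    "swipe_left": "previous_menu_item",
    "air_tap": "press_ui_button",
}

def dispatch(gesture: str) -> str:
    """Translate a recognized gesture into an application command."""
    # Unknown or low-confidence gestures are ignored rather than misapplied.
    return GESTURE_ACTIONS.get(gesture, "ignore")
```

Keeping the mapping in one table makes it easy to tune the gesture vocabulary after UX evaluation without touching the tracking code.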
User Personas
The design of personas as ‘fictional’ characters is considered a consistent and
representative way to define actual users and their goals. However, it is important to
clarify the exact number of personas in each case in order to focus on the visitor
profiles to be examined. In ARTISTS, we take these User Personas and their
characteristics into consideration and create more personas by splitting Followers and
Browsers into three levels, while Searchers and Researchers are combined and split into
two levels. These levels have a quantitative meaning: for example, a Level 2 Researcher has
done more research, and shows more of the initial Researcher characteristics, than a
Level 1 Researcher.
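The extended persona set can be enumerated directly from the description above (the persona names and level counts follow the text; the data structure itself is our own sketch):

```python
# Followers and Browsers are split into 3 levels; Searchers and Researchers
# are combined and split into 2 levels, giving 8 (persona, level) pairs.
PERSONA_LEVELS = {
    "Follower": [1, 2, 3],
    "Browser": [1, 2, 3],
    "Searcher/Researcher": [1, 2],
}

def all_personas() -> list:
    """List every (persona, level) pair the system can assign to a visitor."""
    return [(name, level)
            for name, levels in PERSONA_LEVELS.items()
            for level in levels]
```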
In order to match each museum (or other cultural site) visitor to an ARTISTS
persona, the system collects and processes various data about visitors. Data mining in
ARTISTS involves no user interference or preparation and is a three-stage process:
1. Face recognition: Using Microsoft Cognitive Services, the user’s age and emotions
   are estimated from a face picture taken by the device’s front camera and
   sent over the network. In addition, a database of visitors is created, turning every
   possible future visit into a more successfully personalized experience.
2. Social network data mining: Using data mining algorithms, the visitor’s data
   (profile and prior experience) are extracted from the user’s social profiles
   (Facebook, Twitter or Instagram). Fully compliant with the GDPR, the
   algorithms only use data that users have exposed publicly.
3. Behavior study: Sensors embedded in the visiting area monitor the visitors’ paths
   and behavior in the space, providing ARTISTS with more personalization data.
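The three stages above can be sketched as a simple pipeline. All function bodies are stubs standing in for the real services (Microsoft Cognitive Services, social-network mining, the sensor network); only the structure of the pipeline follows the text.

```python
def face_analysis(photo_bytes: bytes) -> dict:
    """Stage 1: estimate age and emotion from a front-camera picture (stub)."""
    return {"age": 30, "emotion": "neutral"}

def social_mining(public_profiles: list) -> dict:
    """Stage 2: extract interests from publicly exposed social data only (stub)."""
    return {"interests": ["painting", "music"]}

def behavior_monitoring(sensor_events: list) -> dict:
    """Stage 3: summarize the visitor's path and dwell time in the space (stub)."""
    return {"avg_dwell_s": len(sensor_events) * 5}

def build_profile(photo: bytes, profiles: list, events: list) -> dict:
    """Merge the three stages into one visitor profile for persona matching."""
    profile = {}
    profile.update(face_analysis(photo))
    profile.update(social_mining(profiles))
    profile.update(behavior_monitoring(events))
    return profile
```

The resulting profile dictionary is what the server-side matching step would consume when assigning a persona.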
System Architecture
Image 1: System architecture in ARTISTS
ARTISTS is a client-server system, as shown in Image 1. The core of the system is a
server, located either in a museum (or other cultural site) or at a remote location. The server
supports the communication between the database, the application and the sensor network
(installed in the museum). Further server tasks are responsible for matching visitors to
predefined personas and for serving multimedia for the VR environment.
The mobile application creates the appropriate interface between the user and the ARTISTS
system. Depending on the visitor’s profile, the system shows a different scenario and
service. The server is also responsible for handling input from sensors and Smart Objects
(SO), which can alter the application’s content.
Use Case Scenario
After collecting the necessary visitor data and assigning one of the personas from Table 1,
one of the 19 usage scenarios is initiated. Matching a visitor to a scenario is a dynamic
process: for example, a user can start visiting a museum as a Level 3 Follower, but after a
while their behavior can turn them into a Level 1 Browser and then a Level 2 Browser.
This happens because behavior monitoring is an ongoing process that provides feedback
data which can eventually change the flow of the user experience. Each of the scenarios
in Table 2 differs in functionality, interactivity, display quality and load, and audio.
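The dynamic re-assignment can be illustrated as follows. The Follower-to-Browser transition matches the example in the text, but the dwell-time rule used to trigger it is an illustrative assumption, not the paper's actual classifier.

```python
def reassign(current: tuple, avg_dwell_seconds: float) -> tuple:
    """Return a possibly updated (persona, level) given fresh behavior data."""
    persona, level = current
    if persona == "Follower" and avg_dwell_seconds < 10:
        # Stops following the guidance and starts wandering: becomes a Browser.
        return ("Browser", 1)
    if persona == "Browser" and avg_dwell_seconds > 60:
        # Lingers longer on exhibits: moves up a Browser level (capped at 3).
        return ("Browser", min(level + 1, 3))
    return current

visit = ("Follower", 3)
visit = reassign(visit, 5)    # wandering behavior: now a Level 1 Browser
visit = reassign(visit, 90)   # longer dwell times: now a Level 2 Browser
```

Because monitoring is continuous, `reassign` would be called throughout the visit, and each change of persona can switch the active usage scenario.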
Conclusion - Future work
In this work, we described the ARTISTS framework, a mobile application that displays
a VR-reconstructed environment of a painting and immerses users, allowing them to
interact with its 3D elements. We used the Leap Motion controller as a sensor for
detecting gestures, along with Unity, Microsoft Azure Cognitive Services and
Android Studio for the implementation of the application, and a MySQL database
that stores the 3D environment and the painting’s data.

Table 1: Interaction usage scenarios in ARTISTS.
Image 2: The VR representation of the famous painting “Children Concert” by Georgios Iakovidis.
Our next step includes the ARTISTS evaluation stage, in which we will test our
framework to evaluate the user’s experience and the efficiency of our integrated methodologies.

References
Antoniou, A. & Lepouras, G. (2010). Modelling visitors' profiles: A study to investigate
adaptation aspects for museum learning technologies. J. Comput. Cult. Herit. 3 (2), Article
No.7, pp. 1-19.
Antoniou Angeliki et al. (2016). Capturing the Visitor Profile for a Personalized Mobile
Museum Experience: an Indirect Approach, University of Peloponnese, University of Athens,
Pompeu Fabra University, CEUR Workshop Proceedings, Vol-1618.
Chang Kuo-En et al. (2014). Development and behavioral pattern analysis of a mobile guide
system with augmented reality for painting appreciation instruction in an art museum,
Elsevier Computers & Education 71, p. 185-197.
Dey A., Abowd G., Salber D. (2001). A conceptual framework and toolkit for supporting the
rapid prototyping of context-aware applications in special issue on context-aware computing,
Human Computer Interaction, J. 16 (2-4), pp. 97-166.
Eardley W.A. et al. (2016). An Ontology Engineering Approach to User Profiling for Virtual
Tours of Museums and Galleries, International Journal of Knowledge Engineering, Vol. 2.
Katz Shahar et al. (2014). Preparing Personalized Multimedia Presentations for a Mobile
Museum Visitors’ Guide – a Methodological Approach, The University of Haifa - Israel, ITC-
irst Italy.
Konstantakis Markos et al. (2017). Formalising and evaluating Cultural User Experience,
University of the Aegean, IEEE.
Konstantakis Markos et al. (2018). A Methodology for Optimised Cultural User peRsonas
Experience - CURE Architecture, British HCI 2018 Conference, Belfast, Northern Ireland.
Morris G. et al. (2004). Learning Journeys: Using technology to connect the four stages of
meaning making, Birmingham: Morris, Hargreaves, McIntyre Website.
Naismith Laura, Smith M. Paul (2006). Using mobile technologies for multimedia tours in a
traditional museum setting, mLearn 2006: Across generations and cultures, p.23, Canada.
Pujol Laia et al. (2012). Personalizing interactive digital storytelling in archaeological
museums: the CHESS project, The CHESS Consortium.
Ren, Z., Yuan, J., Meng, J., & Zhang, Z. (2016). Robust part-based hand gesture recognition
using kinect sensor. IEEE Transactions on Multimedia, 15.
Roto V. et al. (2010). User Experience white paper. Bringing clarity to the concept of user
experience, Dagstuhl Seminar on Demarcating User Experience.
Varadarajan Vaishnevi S., Hansen John H.L. (2006). Analysis of Lombard effect under
different types and levels of noise with application to In-set Speaker ID systems, University of
Texas at Dallas, USA.
Yeo, H. S., Lee, B. G., & Lim, H. (2015). Hand tracking and gesture recognition system for
human-computer interaction using low-cost hardware. Multimedia Tools and Applications,
74(8), 2687-2715.