Article

Semiotic analysis of multi-touch interface design: The MuTable case study

Abstract

Although multi-touch applications and user interfaces have become increasingly common in recent years, there is still no agreed-upon multi-touch user interface language. To gain a deeper understanding of multi-touch user interface design, this paper presents semiotic analysis of multi-touch applications as an approach to understanding how users use and interpret multi-touch interfaces. In a case study, user tests of a multi-touch tabletop application platform called MuTable are analysed with the Communicability Evaluation Method to evaluate the extent to which users understand the intended messages (e.g., cues about interaction and functionality) that the MuTable platform communicates. The semiotic analysis of this case study shows that although multi-touch interfaces can facilitate user exploration, the lack of well-known standards in multi-touch interface design and in the use of gestures makes the user interface difficult to use and interpret. This conclusion points to the elusive balance between letting users explore multi-touch systems on their own, on the one hand, and guiding users by explaining how to use and interpret the user interface, on the other.
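
The Communicability Evaluation Method mentioned in the abstract works by having evaluators tag interaction breakdowns in recorded test sessions with a fixed set of communicability expressions and then tallying those tags per task. A minimal sketch of that tallying step, assuming the standard CEM tag set from de Souza's semiotic engineering; the session data and function name are hypothetical, not taken from the MuTable study:

```python
from collections import Counter

# Standard CEM breakdown expressions (de Souza's Communicability
# Evaluation Method). The session data below is hypothetical.
CEM_TAGS = {
    "Where is it?", "What now?", "What's this?", "Oops!",
    "Where am I?", "What happened?", "Why doesn't it?",
    "I can't do it this way.", "Thanks, but no, thanks.",
    "I can do otherwise.", "Looks fine to me.", "I give up.",
}

def tally_breakdowns(annotations):
    """Count CEM tags assigned by evaluators while reviewing session video.

    `annotations` is a list of (task_id, tag) pairs produced during tagging.
    Unknown tags are rejected so typos don't silently skew the tally.
    """
    counts = Counter()
    for task_id, tag in annotations:
        if tag not in CEM_TAGS:
            raise ValueError(f"not a CEM tag: {tag!r}")
        counts[(task_id, tag)] += 1
    return counts

# Hypothetical annotations from one tabletop test session.
session = [
    ("rotate_photo", "What now?"),
    ("rotate_photo", "Oops!"),
    ("rotate_photo", "What now?"),
    ("share_item", "Where is it?"),
]
print(tally_breakdowns(session))
```

Per-task tallies like these are what the analyst inspects to locate where the interface's intended message fails to get across, which is essentially the analysis step the paper applies to the MuTable platform.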

... It seems that advancement in this field is hampered by the lack of well-known standards in multi-touch interface design and in the use of gestures, which makes user interfaces difficult to use and interpret [5]. Moreover, few multi-touch tabletops currently support constructive activities or allow user-generated materials to be creatively incorporated into learning activities. ...
... There is an elusive balance between letting users explore multi-touch systems on their own, on the one hand, and guiding users by explaining how to use and interpret the user interface, on the other [5]. This gap became apparent when the technological designs were tested in our lab with youths the same age as the students. ...
Conference Paper
Full-text available
Multi-touch tabletops have noteworthy and promising affordances for co-composition as an activity in co-located, collaborative learning. In this paper we describe the use of co-composition as a guide in the design of a multi-touch application for a museum's touring two-day workshop on architecture. The goal of the application is to support groups of students (12-13 year-olds) as they create, select, organize, and present digital representations (co-composition) for an architectural workshop project. The goals of the application are discussed in relation to specific user interface features that we designed to take advantage of a multi-touch approach. We ask how these features relate to co-composition and to the pedagogical aims we had for the touring workshop. The features we envisioned would benefit from better-established standards for gestures and user interface components for multi-touch tables.
... The quality of the feedforward (how easy it is to discover, learn, and understand) is often also called discoverability or self-descriptiveness. Touchscreen interfaces show the feedforward directly at the location of the interaction, which improves perceptibility and is one of the reasons for the popularity of touch interfaces (Brandenburg & Backhaus, 2013). Yet modern touchscreen user interfaces often lack comprehensive feedforward, especially for complex interaction forms like multi-finger touch gestures (Derboven, De Roeck, & Verstraete, 2012) (see 4.2.1.4, 4.2.1.5). ...
Thesis
Full-text available
The goal of this research was to examine whether modern touch screen interaction concepts that are established on consumer electronics devices like smartphones can be used in time-critical and safety-critical use cases such as machine control or healthcare appliances. Several prevalent interaction concepts, with and without touch gestures and virtual physics, were tested experimentally in common use cases to assess their efficiency, error rate, and user satisfaction during task completion. Based on the results, design recommendations for list scrolling and horizontal dialog navigation are given.
... [Table 8, a reference-by-criterion matrix, is omitted here: it does not survive text extraction.] Only 14.06% of analyses were performed qualitatively, valuing the subjectivity of the answers. Economou et al. [44] analysed the experiment recording, from which they extracted subjective information about aspects they defined, such as the user's focus. ...
Article
Full-text available
Natural user interface (NUI) is considered a recent topic in human–computer interaction (HCI) and provides innovative forms of interaction, which are performed through natural movements of the human body like gestures, voice, and gaze. In the software development process, usability and user eXperience (UX) evaluations are a relevant step, since they assess several aspects of the system, such as efficiency, effectiveness, user satisfaction, and immersion. Thus, the goal of the authors' systematic mapping study (SMS) is to identify usability and UX evaluation technologies used by researchers and developers in software with NUIs. Their SMS selected 56 papers containing evaluation technologies for NUIs. Overall, the authors identified 30 different usability and UX evaluation technologies for NUIs. The analysis of these technologies reveals that most of them are used to evaluate software in general, without considering the specificities of NUIs. Moreover, most technologies evaluate only one aspect, usability or UX; in other words, they do not consider usability and UX together. For future work, the authors intend to develop an evaluation technology for NUIs that fills the gaps identified in their SMS and combines usability and UX.
... Sovereignty and loyalty are taken away from the people and handed over to the godfathers. Only those who are willing to follow orders, serve and protect the interests of the godfathers are enthroned and they are changed as soon as they attempt to dislodge the godfathers (Derboven, Roeck, and Verstraete 2012). The case of the immediate past governor of Lagos State, Akinwunmi Ambode aptly fits in here. ...
Article
Full-text available
This study examines the symbolic representations of Nigerian social reality in the movie "King of Boys". Social reality in this instance is examined from the perspective of official and unofficial political and economic power, influence, and coercion. Based on the framework of Ferdinand de Saussure's and Charles Sanders Peirce's semiotics and semiology theory, the study aims to examine the symbolic relationships between characters, scenes, and institutions in Kemi Adetiba's "King of Boys" and the realities in the Nigerian political and security sectors. Using the methodological approach of Critical Discourse Analysis, the study found that the movie is a bold attempt at representing the intricate power relations that exist within Nigerian political circles, as well as those between the Nigerian political class and organized crime.
... In recent years, industry practitioners and academic scholars have also been reported to be developing 'multi-touch user interface' designs. With users as the centre of study, a common evaluation focus in interactivity research, numerous studies have described usability and user experience assessment as part of this advancement (Derboven, De Roeck, & Verstraete, 2012). In regard to this study, Derboven et al. further argued that the Communicability Evaluation Method (CEM), grounded in semiotics, can contribute a significant framework for evaluating multi-touch interfaces. ...
Thesis
Full-text available
The development of a media-rich IT environment has shaped the aptitudes and attitudes of students of Generation Y and beyond. In light of this development, lecturers, as interface designers, face new challenges, given the importance of better understanding interactive multimedia teaching aid design and its impact on learning engagement. As learning content becomes more complex, with demanding challenges and a multidisciplinary curriculum, it is essential that multimedia teaching aids are well developed to ensure that learning engagement is enhanced. This research project is intended to explore and ascertain the perceptions and experiences of lecturers in designing the interface of a multimedia teaching aid. Data were collected from various primary and secondary sources through several complementary methods, including contextual document analysis, content analysis of the multimedia teaching aid, and in-depth interviews with lecturers from non-creative-design fields of study in Malaysian public and private tertiary institutions. The data were inductively analysed through interpretation of the meaning of the arising perceptions. Through comprehensive comparative analysis, this research has established effective interface design principles and elements for multimedia teaching aids at the Malaysian tertiary education level. On this basis, principles, elements, and characteristic guidelines of interface design for multimedia teaching aids are further proposed. These recommended guidelines can be applied as a new foundation for the future development of multimedia teaching aid interface design, anticipated to decrease the gap between the design of multimedia teaching aids and their ability to encourage and sustain effective learning engagement in the teaching and learning process faced by lecturers and students in the current challenging media-rich ICT education environment.
... Hence, designers of educational applications targeted to kindergarten children need adequate graphic strategies to enable them to interpret different and diverse information about the applications, such as the gestures to be performed at a given time, the actions needed to go ahead, or information about the spatial location of objects in the virtual world. Therefore, the design of appropriate visual cues must be addressed since multi-touch interfaces can facilitate dialogic learning scenarios in which the dialog is centered around the learning activity itself rather than on the interactions the children are expected to perform each time (Derboven, De Roeck, & Verstraete, 2012). ...
Article
A myriad of educational applications using tablets and multi-touch technology for kindergarten children have been developed in the last decade. However, despite the possible benefits of using visual prompts to communicate information to kindergarteners, these visual techniques have not yet been fully studied. This article therefore investigates kindergarten children's abilities to understand and follow several visual prompts about how to proceed and interact in a virtual 2D world. The results show that kindergarteners are able to effectively understand several visual prompts with different communication purposes despite their being used simultaneously. The results also show that the use of the evaluated visual prompts to communicate information during play reduces the number of interferences of a technical nature, fostering dialogues related to the learning activity guided by the instructors or caregivers. Hence, this work is a starting point for designing dialogic learning scenarios tailored to kindergarten children.
... Therefore, the design of efficient and effective visual prompts, which give the user information about the application and the actions the user is expected to make, is a key challenge. The design of appropriate semiotics must be addressed since, as pointed out by Derboven et al. (2012), multi-touch interfaces can facilitate dialogic learning scenarios in which the dialog is centered on the learning activity itself rather than on the interactions the children are expected to perform each time. Hence, designing visual prompts that avoid continuous technical scaffolding by adults (i.e., the gestures to be performed, the direction in which a game character should move, etc.) will help caretakers to concentrate on giving cognitive scaffolding (i.e., the learning content to be acquired by the children). ...
Article
Although a myriad of educational applications using tablets and multi-touch technology for kindergarten children have been developed in the last decade, most of these applications do not fully exploit multi-touch technology since the game world used is limited to the screen only. Considering a larger digital space in tablet-based educational scenarios would be beneficial since it would enable the design of engaging activities driven by curiosity, exploration, discovery and decisions on where the next action is situated in the digital virtual space by directional awareness. This paper therefore investigates kindergarten children's abilities to use a virtual world beyond the screen and evaluates three different types of visual prompts for communicating directional awareness. The results obtained show, firstly, that these specific users are able to use the space beyond the screen boundaries and that the evaluated prompts can effectively communicate information to kindergarten children. The paper also makes a set of recommendations to help designers choose the appropriate type of prompt for their application requirements.
... The lack of, or low number of, communicability breakdowns identified using CEM in the present study was related to completed task executions. This methodology, a systematic and qualitative procedure, adequately evaluated the users' experience of interaction with the interface by emphasizing the aspects of communication, as done in previous studies [15,23]. When applying CEM in this study, our expectation was that the early involvement of pregnant women in the prototyping phase of the birth plan interface would help to identify bottlenecks and the improvements that needed to be made in the app. ...
Article
Full-text available
Background: Birth plans are meant to be a declaration of the expectations and preferences of the pregnant woman regarding childbirth. The My-Prenatal-Care application (app) engages pregnant women in an educational intervention for a healthy pregnancy. We hypothesized that users' positive perception of an in-app birth plan is a relevant step for establishing direct communication between pregnant women and the healthcare team, based on an online report available in the app. Objective: To evaluate pregnant women's perception of the communicability of birth plan preparation using a mobile app. Methods: This was an observational, exploratory, and descriptive study. The methodology was user-centered, and both qualitative and quantitative approaches were employed. The tools of the Communicability Evaluation Method were applied. Eleven pregnant women evaluated their experience of using a birth plan prototype interface. The evaluation was performed in a controlled environment, with authorized video recording. Eight task-oriented interactions were proposed to evaluate interface communicability with users when using the Birth Plan menu. For evaluating perceptions and experiences, a survey with structured and open-ended questions, in addition to the free expression of participants, was conducted. The primary outcomes assessed were interface communicability and users' perception of the Birth Plan prototype interface in the My-Prenatal-Care mobile app. Secondarily, we involved users in the prototyping phase of the interface to identify bottlenecks for making improvements in the app. Results: Regarding users' performance in accomplishing previously prepared tasks, we found that 10/11 (91%) women were capable of completing at least 6/8 (75%) tasks. A positive relationship was found between the number of communicability problems and the success of completing the tasks.
An analysis of the records revealed three communicability breakdowns related to the data entry, save, and scrollbar functions. The participants freely expressed suggestions for improvements, such as for the Save function and the process of sharing the birth plan form upon completion. Conclusions: The users had a positive perception regarding the Birth Plan menu of the My-Prenatal-Care app. This user-centered validation enabled the identification of solutions for problems, resulting in improvements in the app.
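
The completion figures reported above (10/11 participants finishing at least 6 of 8 tasks) amount to a simple per-participant threshold over task outcomes. A minimal illustrative sketch, with hypothetical data shaped like the study's report; the function and participant IDs are invented for illustration:

```python
# Hypothetical per-participant task outcomes (True = task completed),
# shaped like the study's report: 11 participants, 8 tasks each.
def meets_threshold(outcomes, min_tasks=6):
    """Return the participants who completed at least `min_tasks` tasks."""
    return [pid for pid, tasks in outcomes.items()
            if sum(tasks) >= min_tasks]

outcomes = {f"P{i}": [True] * 8 for i in range(1, 11)}  # P1..P10 complete all
outcomes["P11"] = [True] * 5 + [False] * 3              # P11 completes only 5

ok = meets_threshold(outcomes)
rate = len(ok) / len(outcomes)
print(f"{len(ok)}/{len(outcomes)} = {rate:.0%}")  # 10/11 = 91%
```

The same threshold logic extends naturally to relating per-participant breakdown counts (from the CEM tagging) to task success, which is the relationship the study reports.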
... De Souza et al. (2006) have deconstructed the meta-communication procedure and the messages which explain this interplay; in other words, an understanding of who the users are, their needs, preferences, and motives, all of which are encouraged by both the qualitative and interpretative processes utilised by semiotic engineering procedures. Semiotics has been utilised in more novel regions of HCI, such as MuTable, a multi-touch tabletop application platform (Derboven et al., 2012). A comprehensive analysis of this MuTable interface was conducted by De Souza et al. (2009) using the Communicability Evaluation Method (CEM). ...
Conference Paper
Signs and icons are a fundamental part of the user interface (UI). They are mechanisms of interaction between humans and devices (including mobile devices), and they enable users to achieve tasks. Users typically act on icons as a consequence of their understanding of the icon's visual representation. Meaningful icons enhance users' comprehension, whereas ambiguous icons increase users' confusion. However, little is known about the confusion that may be caused by ambiguous icons in a sequence of interactions. In other words, there is scarce research regarding the effect of the navigational order or interaction sequence that users undertake on their interpretation of ambiguous icons. Thus, this paper explores the impact of ambiguous icons on the accuracy of users' interpretations. Ambiguity is controlled by using pairs of visually identical icons with two different meanings. A semiotic analysis was conducted to investigate users' comprehension of the icons and elicit the causes for their interpretations. We also provide insight into the cognitive processes within working memory and the inherent deliberation, reasoning and comprehension processes that influence users' interpretations.
... The lack of, or low number of, communicability disruptions observed in the CEM test in the present study was related to correct task executions. This methodology, a systematic and qualitative procedure, adequately evaluated the users' experience of interaction with the interface by emphasizing the aspects of communication, as done in other reports [18,23]. When applying CEM in the present study, our expectation was that the early involvement of pregnant women in the prototyping phase of the BP interface would identify bottlenecks and direct the improvements that needed to be made in the app. ...
Preprint
Background: The Birth Plan (BP) is supposed to be a declaration of the expectations and preferences of the pregnant woman regarding childbirth. Although several mobile applications (apps) have been developed to offer support during pregnancy, only a few offer the BP. An academic team developed an app named My-Prenatal-Care to provide scientific information directly to women about healthy habits and practices during pregnancy, delivery, and puerperium. More recently, a questionnaire was introduced to facilitate the development of a BP as a strategy for childbirth and to provide a new channel of communication between the pregnant woman and the maternity team. Objective: To validate a template for BP preparation mediated by a mobile app, based on the perceptions of pregnant users. Methods: This was an observational, exploratory, and descriptive study. The evaluation methodology was user-centered, and both qualitative and quantitative approaches were employed. Participation in the study was voluntary, and data were collected in a Brazilian public, university-affiliated health unit for prenatal care. Eleven pregnant women evaluated their experience of using the BP prototype interface. Eight tasks were proposed to measure the users' efficiency and the effectiveness of the interface. Communicability tests were intended to identify communication problems. The evaluation took place in a controlled environment, with authorized video recording. Data collected from a survey with structured and open-ended questions, in addition to the free expression of participants' perceptions and experiences of using the BP interface, were analyzed. The primary outcomes assessed were users' satisfaction and perception of the BP function of the My-Prenatal-Care mobile app. Results: Regarding users' efficacy in performing previously prepared tasks, we found that 10 (91%) women were capable of executing at least 6 (75%) tasks.
We observed a positive relationship between the number of communicability problems and the success of completing the tasks without errors. An analysis of the records revealed three communication disruptions related to the data entry, save, and scrollbar functions. All participants reported a positive opinion of the interface. However, they expressed suggestions for improvements in functionalities, such as saving and sharing the BP form on completion. Conclusions: Users had a positive perception of the BP function of the My-Prenatal-Care app. This user-centered validation enabled the identification of new solutions for problems, resulting in improvements in the app. These findings provide insights for the development of an in-app BP to facilitate information sharing among pregnant women and their healthcare team.
... Because of the multi-touch collaborative task, the authors added a new label, "Who did that", to the breakdown classification. The outcomes highlighted the potential of CEM to discover the exact nature of the flaws that occurred during the interaction, and also showed that the level of detail achieved by analyzing the CEM outcomes was especially interesting in the evaluation of innovative interface paradigms, such as multi-touch interaction [15]. ...
Conference Paper
The use of social networks and mobile devices for communication has increased in recent years. However, accessibility issues have received little attention in these environments. Focusing on deaf people's needs, this paper presents a case study in which the communicability of the Facebook mobile application is evaluated by literate deaf people in Brazil. The case study (planning, conduction, and evaluation) was carried out based on the Communicability Evaluation Method (CEM) guidelines, which support the identification and classification of application-to-user communicability breakdowns. The findings revealed that there is no difference in the application's communicability for deaf and hearing users. In the conclusion, we outline some reflections on the need to propose guidelines for building mobile applications for deaf people.
... There is a wide range of usability evaluation methods (e.g., inspection, inquiry, and testing methods) that can be used depending on the context and characteristics of a project. These methods are generally combined to obtain better results [6], [4,[7][8][9][10]. Thus, among the inspection methods, heuristic evaluation [7,11,2] and the cognitive walkthrough [12] can be combined. ...
Article
Full-text available
This paper presents methodological criteria for evaluating the usability of course management systems (CMS). These criteria are based on traditional methods of usability evaluation within a mixed approach; some new methods emerged while evaluating the usability as well as the pedagogical and functional features of a CMS called Lingweb. In this case study, Tester User (TU) sessions were carried out and fully exploited. They gave us important elements that led us to make a rigorous and detailed analysis from a three-fold usability-functionality-pedagogy perspective. We describe and discuss here the results of the analysis of Lingweb's efficiency.
... In understanding the form of the object, a user can then manipulate it accordingly and receive feedback of a certain form (Krippendorff & Butter, 1984). The extent of the button's cultural significance can be seen in the prevalence of skeuomorphic design archetypes where a digital interface might mimic real-world objects (Derboven, De Roeck, & Verstraete, 2012). ...
Conference Paper
Full-text available
There is an emergent body of research linking the nature of form to design, functionality, and user experience. This paper builds on these recent studies to propose a new approach connecting conceptual design with advanced manufacturing techniques. Using the properties of work materials and advanced forming manufacturing processes, radical approaches to design and production could be opened up to designers and engineers, offering novel modes of user experience. By first reviewing the literature on product form and its bond with concepts in the fields of user interaction and user experience, a number of "functional mechanisms" are introduced that could potentially be integrated into this new and more homogeneous manufacturing framework. 1. Background: Modern manufacturing technology presents designers and engineers with exciting possibilities in the expression of form and function. Prominent examples include the increasing sophistication of computer numerically controlled (CNC) forming technology, incremental sheet forming, and 3D printing technologies. These processes offer very good capabilities in terms of geometric forming options, particularly 5-axis CNC milling machine configurations, which can create complex freeform surfaces directly applicable to many consumer products. Although the manufacturing parameters are relatively well understood, what is less closely considered within the design research community is how these processes can be used to produce particular product experiences for the user. The central aim of this work is to address this by proposing a new framework for manufacturing practices in which mechanism and functionality can be articulated through form and material properties, bridging the gulf between design knowledge and the more technical knowledge associated with manufacturing engineering and potentially creating novel experiences for the users of products.
... A key challenge is, therefore, the efficient and effective communication of the gestures that this special type of user must perform, that is, languages need to be designed for applications that convey their underlying design intent and interactive principles (Prates et al., 2000) with respect to touch interaction. The importance of these languages must not be underestimated because, as pointed out by Derboven et al. (2012), multitouch interfaces can facilitate dialogic learning scenarios in which the dialog is centered around the learning activity itself rather than on the interactions the children are expected to perform each time. This would help caretakers to concentrate on the learning content to be acquired by the children and would prevent the continuous interference required if the children do not know how to actively participate in the collaborative learning activity. ...
Article
The direct manipulation interaction style of multitouch technology makes it an ideal mechanism for implementing learning activities for prekindergarten children. This is feasible because recent studies have shown that users in this early age range are able to perform a complete set of basic multitouch gestures. However, it is still unknown whether applications based on this interaction style can use effective communication strategies that enable these very young users to understand the multitouch nature of the interactive content that is presented to them in their everyday educational applications. This understanding is particularly important in collaborative dialogic learning scenarios in which several children may participate under the supervision of an adult instructor and in which they need to discover interactive content. In order to answer this question, this work evaluates three approaches to communicate different touch gestures to prekindergarten users. The results obtained show, firstly, that it is possible to effectively communicate these gestures using visual cues; secondly, that animated semiotic approaches are more effective and learnable than iconic approaches and, finally, that gender differences exist in some gestures due to visual–spatial cognition development differences.
... In other words, designers embed meaning into the interface, and these embedded meanings are then interpreted by end-users (C.S. De Souza 2005). Since the dissemination of their work, HCI researchers have explored specific interaction contexts, including embodied interaction (O'Neill 2008), mobile user interface design (Kjeldskov and Paay 2010), touch-based communication (Gumtau 2005), and multi-touch interface design (Derboven, De Roeck, and Verstraete 2012). But while some of this work has hinted towards a semiotics of texture in technologically-mediated contexts, many opportunities exist to bridge the gaps within and between disciplines and modalities. ...
Conference Paper
Full-text available
Our mediated world is increasingly embracing tactile forms of interaction and information presentation in tandem with visual and auditory modes. Multimodal objects, from touch-screen devices, such as the iPad, to vibrotactile and kinaesthetic game controllers, such as the Nintendo Wiimote, are melding sensory modes and enhancing their communicative potential. While such objects have received much scholarly attention, particularly with respect to their haptic (invoking the sense of touch), gestural and kinaesthetic capacities and effects, little work has explored the visual presentation of texture in a haptically augmented medium. In this paper, I explore how texture is visually applied in the graphical user interface (GUI) of the iPhone, a touch-based mobile device. First, I will characterize the application of texture using Djonov and Van Leeuwen's (2011) six parameters for describing the visual-tactile quality of texture dimensions. I will then highlight and analyze how touch-based interaction with visual texture in the GUI establish and reinforce patterns of meaning, including texture understanding and conceptual mapping of tactile domains. I seek to contribute to the developing area of software semiotics through an analysis of a touch-based mobile device context.
... For example, the semiotic inspection method (SIM) has been used to analyse robot-user interfaces (Bento et al., 2009), as well as the International Children's Digital Library web interface (Salgado et al., 2009). Semiotic engineering theory is applied to interface design (De Souza and Cypher, 2008; Silveira et al., 2001) and to the analysis of multi-touch interfaces (Derboven et al., 2012). Connolly and Phillips (2002) have used Stamper's (2000) semiotic framework in interface design, while Tétard (2013, 2014) applies semiotics to usability. ...
Article
Full-text available
Although signs like navigation links, small images, buttons, and thumbnails are important elements of web user interfaces, they are often poorly understood. Based on data gathered over a 3-year period (2011–2013) through observations in a usability testing lab, expert review, and structured and semi-structured interviews with users, we developed a Semiotic Interface sign Design and Evaluation (SIDE) framework consisting of five semiotic layers: syntactic, pragmatic, social, environment, and semantic. The framework includes an extended set of determinants and heuristics, based on four empirical studies, that help practitioners design and evaluate intuitive interface signs that can be accurately interpreted by users with less effort.
... Andersen, 2001; Bolchini, Chatterji, & Speroni, 2009; C. S. de Souza, Barbosa, & da Silva, 2001; C. S. de Souza, Barbosa, & Prates, 2001; C. S. De Souza & Cypher, 2008; C. S. De Souza, 2005; F. de Souza & Bevan, 1990; De Souza, 2001; Derboven, De Roeck, & Verstraete, 2012; Islam, 2011; Leite, 2002; Meystel, 1996; Prates & de Souza, 1998). ...
Conference Paper
Full-text available
Metaphors are a widely used resource for interface design and analysis. Based on Lakoff and Johnson’s seminal work on metaphor, Barr (2002) developed a model that acknowledges three types of metaphors commonly used by designers to give individuals who interact with an interface a sense of its logic from first sight, and to scaffold their understanding of it and its general affordances for action. These are known as orientation, ontological, and structural metaphors. As an addition to Lakoff and Johnson’s taxonomy of metaphors, Barr proposed two supplementary metaphors that derive from the structural, which he called element and process metaphors. Generally speaking, it is possible to assert that the first group of metaphors assists a user in becoming acquainted with the function of the interface, and the second group, added by Barr, relates to the actual operation of that interface. During the analysis of the collected data, it was found that those participants who took what was termed a “reflexive” stance towards the Workflow interface tended to assess the appropriateness of the orientation, ontological, or structural metaphors only. Those participants who decided to take what was termed an “active” stance tended to address issues related to element and process metaphors of the interface instead. In the current INKE prototype design cycle, lower-fidelity digital prototypes have been favoured over low-fidelity paper or pdf-based prototypes as an experiment in interface design and testing. The findings of this study suggest that prototype testing for digital humanities experimental software could productively include one more step in the interface development protocol, prior to the digital prototype stage. The purpose of this phase would be the exploration, proposition, or assessment of possible metaphors at a conceptual level with actual user groups working on low-fidelity representations of the interface. 
In this stage, based on user groups' expertise in the editorial process, the participants might help focus the design team on which resources for meaning-transference are most suitable for the task. After this design process, a digital prototype would then be created to test the congruency between the structural metaphors initially chosen and the actual operation of the interface (e.g., De Souza et al., 2001; Nadin, 2001; Reilly et al., 2005). Our findings suggest that such approaches would also be valuable for use in humanities-oriented software prototyping.
... The MiLE+ was developed to evaluate Web usability; it integrated the concept of semiotics to analyze the application-independent features of Web interfaces [17]. A semiotic analysis approach for multi-touch applications uses the concepts of semiotic engineering to provide insight into the way users understand and use multi-touch interfaces. (c) Guidelines, principles or heuristics for UI: [29] provide a set of interface design guidelines, divided from a semiotics perspective into the categories of navigation, iconic representation, aesthetics and world of references. (d) Assess, explore, and demonstrate semiotic theories: [4, 44] demonstrate the applicability of semiotic concepts in HCI, investigating the applicability of de Souza et al.'s [14] SIM to analyze a robot user interface [4] and the ICDL (International Children's Digital Library) [44]. Other studies measure the effectiveness of W-SIDE, and Bolchini and Garzotto [6] measure the quality of MiLE+. These studies are discussed in greater detail in Sect. ...
Article
Full-text available
Purpose: A semiotic framework (Semiotic Interface sign Design and Evaluation - SIDE) was developed to help designers deal with user-intuitive interface signs. Examples of signs are small images, navigational links, buttons and thumbnails, which users make use of when interacting with web UIs. This paper assesses the SIDE framework on the quality of its evaluation of interface signs, and on the contributions of the framework as perceived by evaluators. Methods: Two empirical user studies were carried out, involving 23 participants. Data were collected via interviews, problem-solving assignments and feedback questionnaires, and analyzed quantitatively and qualitatively. Results: The study shows that evaluation using the SIDE framework leads to acceptable scores on quality metrics, and the subjects evaluate the framework's ease of use, contribution, and usage in practice positively. Conclusions: The SIDE framework is applicable to the design and evaluation of interface signs and contributes to a better understanding of their intuitive nature.
... Nadin (1988, 2001) broadly discussed the elements of UI and pointed out the importance of semiotics in HCI. Derboven et al. (2012) presented a semiotic analysis of multi-touch applications to gain insight into users' understanding and use of multi-touch interfaces. Algebraic semiotics was introduced by Goguen (1999) and has been applied extensively to UI design (Malcolm and Goguen, 1998). ...
Article
Full-text available
Purpose – The purpose of this empirical study was to address two important concerns of Web usability: how user-intuitive interface signs affect Web usability, and how applying semiotics (i.e. the doctrine of signs) in user interface (UI) design and evaluation helps to improve usability. Design/methodology/approach – An empirical research approach is followed here to conduct three user tests. These tests were conducted on a Web application with 17 test participants. Data were gathered through laboratory-based think-aloud usability tests, questionnaires and interviews. Following an empirical research approach, statistics and user behavior analysis were used to analyze the data. Findings – This study explores two important concerns of UI design and evaluation. First, the accuracy of users' interpretation of interface signs impacts Web usability. The study found that users' interpretations of signs might be accurate, moderate, conflicting, erroneous or incapable; that user-intuitive interface signs led participants to interpret signs' meaning accurately; and that users' inaccurate interpretation of one or a few task-related interface signs led them to usability problems, resulting in lower task-completion performance. Second, considering semiotics in UI design and evaluation is important for improving Web usability. This study showed that interface signs, when re-designed according to the semiotic guidelines, increased end-users' interpretation accuracy and the signs' intuitiveness. This study also provides a small set of semiotic guidelines for sign design and evaluation. Originality/value – This study empirically demonstrated that signs' intuitiveness impacts Web usability and that considering semiotics in sign design and evaluation is important for improving Web usability.
These outcomes are valuable in a number of ways to HCI researchers and practitioners: the results provide awareness of the importance of user-intuitive interface signs in UI design; practitioners can easily adopt the concept of interpretation accuracy classification to conduct a sign test to obtain an “overall impression of interface signs’ intuitiveness”; practitioners can easily adopt the methodological approach followed in this study to conduct usability test without additional resources; and the results raised important fundamental questions for future research such as “what does a practitioner need to be aware of when designing or evaluating interface signs?”
... In order for users to work with these devices smoothly, effortlessly and effectively, the user interface, the communication gateway between user and technical device, has to be natural, intuitive and self-explanatory. Recent research has focused on the development of user interfaces (UIs) that provide a natural human input experience through hand gestures (e.g., Buchinger, Hotop, Hlavacs, Simone, & Ebrahimi, 2010; Derboven, De Roeck, & Verstraete, 2012; Koike, Nishikawa, & Fukuchi, 2009). This so-called Gesture Interface (GI) is able to recognize and interpret a specific set of gestures performed by a human user. ...
... Attentional and motivational factors are also important in moderating motor capabilities that lead to performing gestures successfully. Thus, we also plan to investigate the suitability of existing semiotic approaches, such as those proposed by Derboven et al. (2012), to advise users of the gestures they are expected to perform in multi-touch applications for pre-kindergarteners. ...
Article
Full-text available
The direct manipulation interaction style of multi-touch technology makes it the ideal mechanism for learning activities from pre-kindergarteners to adolescents. However, most commercial pre-kindergarten applications only support tap and drag operations. This paper investigates pre-kindergarteners’ (2-3 years of age) ability to perform other gestures on multi-touch surfaces. We found that these infants could effectively perform additional gestures, such as one-finger rotation and two-finger scale up and down, just as well as basic gestures, despite gender and age differences. We also identified cognitive and precision issues that may have an impact on the performance and feasibility of several types of interaction (double tap, long press, scale down and two-finger rotation) and propose a set of design guidelines to mitigate the associated problems and help designers envision effective interaction mechanisms for this challenging age range.
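The scale and rotation gestures discussed above are typically derived from the trajectories of two touch points. The sketch below is a minimal, illustrative computation of that mapping; the function name and the threshold-free form are assumptions, not the authors' implementation:

```python
import math

def two_finger_transform(p1_start, p2_start, p1_now, p2_now):
    """Derive scale and rotation from two touch points (illustrative sketch).

    Each point is an (x, y) tuple. Returns (scale, rotation_degrees):
    scale > 1 means the fingers moved apart (scale up), < 1 scale down.
    """
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    scale = dist(p1_now, p2_now) / dist(p1_start, p2_start)
    rotation = math.degrees(angle(p1_now, p2_now) - angle(p1_start, p2_start))
    return scale, rotation

# Fingers move from 100 px apart to 200 px apart: a scale-up gesture.
s, r = two_finger_transform((0, 0), (100, 0), (0, 0), (200, 0))
```

In practice, a recognizer would also apply tolerance thresholds before classifying the motion as scale rather than rotation, which matters for the precision issues the study reports in this age range.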
Article
Much as function and quality do, a product's emotional appeal increasingly influences consumer purchases. This paper proposes a two-step method for reasoning about the emotional elements of products based on image metaphor, aiming to overcome the depth deficits and boundary constraints of traditional methods. First, consumers' emotional needs for products are extracted through image metaphor and guided interviews. Second, tacit emotional needs are transformed into explicit design elements using a three-level symbolic-index-perceptual image metaphor decoding method. The effectiveness of the proposed method is implemented and verified using the emotional design of an electric vehicle as an example, and the results can serve as a reference for electric vehicle development. The findings suggest that image metaphor can be an ideal way to reduce the deviation between designers' goals and users' emotional needs, and that the proposed method can provide more innovative support for designers and engineers.
Highlights:
• The concept of image metaphor is introduced into the field of product emotion design, and a cross-domain mapping method of emotional needs based on image metaphor is proposed. This method makes consumers' emotional needs easy to describe and opens up the possibility of mining extra-domain elements.
• A three-level emotional needs decoding method based on image metaphor is proposed, which converts users' implicit emotional needs into explicit product design elements.
• The feasibility of the proposed method is verified through a case study of the emotional design of an electric vehicle.
• The findings suggest that image metaphor can resolve some depth deficits and boundary constraints of traditional methods in emotional design, providing more innovative support for product designers and engineers.
Article
Full-text available
Human-computer interaction (HCI) technology plays a critically essential role in the computer-aided design of railway line locations. However, traditional mouse-and-keyboard interactive design cannot meet the requirements for the rapid generation of railway lines during scheme discussion. This research presents a fitting algorithm for the rapid generation of railway lines based on multi-touch gestures. A fitting method from free hand-drawn lines to satisfactory railway lines is proposed, and the interactive hand gestures are defined and implemented in railway line location design. The hand-drawn lines generated by the defined gestures are automatically fitted to the target horizontal alignment using an inflection detection algorithm based on Euclidean Distance (ED); the vertical alignment is fitted by a similar algorithm using extreme point-to-point (EPP) matching and chord-to-point distance accumulation (CPDA). A real-world verification example is also carried out in which the multi-touch gesture algorithm is applied to the automatic fitting of a railway line. Compared with traditional interactive methods, the time consumed in railway line generation using the multi-touch interactive mode is decreased by about 15%. This research provides fundamental support for rapid scheme discussion of railway line generation based on natural HCI, which is well matched with modern handheld devices and with the requirements of rapid selection and quick comparison of railway line schemes in the early design stage.
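Chord-to-point distance accumulation (CPDA), mentioned above, scores each point of a polyline by accumulating its distance to chords spanning it at several lengths; peaks mark candidate corners or inflections. The following is a simplified sketch of that idea, not the paper's implementation; the function names and chord lengths are illustrative:

```python
import math

def point_to_chord_distance(p, a, b):
    """Perpendicular distance from point p to the chord through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(bx - ax, by - ay)
    return num / den if den else math.hypot(px - ax, py - ay)

def cpda(points, chord_lengths=(3, 5, 7)):
    """Chord-to-point distance accumulation (simplified sketch).

    For each interior point, accumulate its distance to the chords that
    span it at several lengths; high accumulated values flag candidate
    inflection/corner points on a hand-drawn polyline.
    """
    n = len(points)
    acc = [0.0] * n
    for k in chord_lengths:
        for i in range(k, n - k):
            acc[i] += point_to_chord_distance(points[i], points[i - k], points[i + k])
    return acc

# An L-shaped stroke: the corner at index 5 accumulates the highest score.
stroke = [(x, 0) for x in range(6)] + [(5, y) for y in range(1, 6)]
scores = cpda(stroke)
```

A line-fitting pipeline of the kind described would place intersection points of the alignment at such peaks and fit straight segments and curves between them.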
Article
Full-text available
Orchestrating scientific work in educational research laboratories is demanding, especially when many interdisciplinary perspectives are involved. A monolithic approach does not suffice here. This paper describes an open-source architecture for an educational research laboratory. The presented system assists interdisciplinary scientists in implementing prototypes, evaluating new didactical approaches, researching collaborative learning processes, collecting learning analytics data, abstracting and automating experimental procedures, and securely monitoring progress within the lab, even in hybrid or remote setups. Due to both asynchronous and synchronous capabilities, the presented components make it easy to virtualize experiments, incorporate them in courses and lectures, and empower self-regulated but still monitorable learning processes. The ecosystem described originated in the context of collaborative educational Serious Games for interactive table-top displays but is implemented modular and scales well regarding concurrent experiments, connected clients, and other use cases like VR and smart environments.
Thesis
Multi-touch is the mode of human-computer interaction that lets us control our smartphones and tablets by producing gestures. Embodied cognition is a paradigm that seeks to show that cognition is grounded in motor actions of a spatial character, such as gestures. Visuo-spatial abilities are a predictor of success in understanding dynamic phenomena in science. The aim of this work is, first, to study the effects of multi-touch gestures on learning when studying dynamic phenomena on a simulator with a touch tablet. Investigating the link between the learner's visuo-spatial abilities and the production of gestures is a further objective. Three experiments were conducted, considering the gesture either as a modality for encoding information or as a modulator of attention. Two of these experiments involved school tasks, and the third a purely spatial task. The nature of the gestures produced was tested, as well as the nature of the interaction (multi-touch, keyboard, mouse). The effect of the presence of this interaction, and its role depending on the participants' level of visuo-spatial abilities, were also tested. The experiments did not reveal a beneficial effect of gesture on learning outcomes. However, the results showed that interacting with the instructional content limited the role played by visuo-spatial abilities in the observed results. These results are interpreted within an approach that questions the role played by the nature of the gestures produced and the type of knowledge to be acquired in models of embodied cognition and working memory.
Article
The key to deep learning is extracting abstract, deep and nonlinear target features, and the algorithm plays a crucial role in this. In this paper, the authors analyze the intelligent system design of an entrepreneurship education classroom based on artificial intelligence and image feature retrieval. Pyramid pooling is used to transform a feature map of any size into a fixed-size feature vector, which is finally sent to the fully connected layer for classification and regression. Experimental results show that the algorithm accelerates the convergence of the whole network and improves detection speed. Entrepreneurship education should not only help college students seek a stable career, but also help them develop their potential, cultivate entrepreneurial awareness, and improve their entrepreneurial quality and ability; it should not stop at the design of subject courses, but integrate entrepreneurship education with internet entrepreneurship practice. On this basis, the authors provide new countermeasures and suggestions for improving the quality and ability of college students in the course of entrepreneurial activities.
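The pyramid pooling step described above can be sketched as follows: a feature map of arbitrary size is max-pooled over successively finer grids, so the output vector always has the same length regardless of input size. This is a minimal pure-Python sketch; the function name and grid levels are assumptions, and real implementations pool per channel across a whole batch:

```python
def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a 2D feature map over successively finer grids.

    Whatever the input size, the output vector always has
    sum(l * l for l in levels) entries (21 for levels 1, 2, 4), which
    is what lets a fixed-size fully connected layer follow
    arbitrarily sized inputs.
    """
    h = len(feature_map)
    w = len(feature_map[0])
    pooled = []
    for l in levels:
        for gy in range(l):
            for gx in range(l):
                # Integer bin boundaries; each bin covers at least one cell.
                y0, y1 = gy * h // l, max((gy + 1) * h // l, gy * h // l + 1)
                x0, x1 = gx * w // l, max((gx + 1) * w // l, gx * w // l + 1)
                pooled.append(max(
                    feature_map[y][x] for y in range(y0, y1) for x in range(x0, x1)
                ))
    return pooled

# A 5x7 map yields 1 + 4 + 16 = 21 values, as would a 6x6 or 13x9 map.
fm = [[float(y * 7 + x) for x in range(7)] for y in range(5)]
vec = spatial_pyramid_pool(fm)
```

The fixed output length is the point: the fully connected layer that follows needs a constant input dimension even when images of different sizes are fed in.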
Article
With the rapid development of website techniques, optimal design patterns need to be analyzed and identified. Against this background, this paper analyzes the HTML5 UI design paradigm in the context of data flow and interactive experience. Although interface design is the basis of software design, it has long been undervalued in software development, and those who do interface design have been dismissively labelled "artists". Yet a friendly, attractive interface gives people comfortable visual enjoyment and closes the distance between people and computers, while creating a selling point for businesses. Software interface design is not simple artistic painting: it must consider the target user, the environment of use, and the methods of use, and it is designed for end users as a scientifically grounded art. HTML5 programs run inside the Web browser, and content such as video, which currently requires plug-ins, can be integrated directly, turning the browser into a kind of universal platform. Using this general feature, this paper constructs a robust system with an empirical implementation. © 2016, Revista Tecnica de la Facultad de Ingenieria. All rights reserved.
Chapter
A Natural User Interface (NUI) is a type of human-machine interaction based on the automatic analysis of the user's natural behavior. These human actions are interpreted by the machine as commands that control system operations. Natural behavior is the group of activities humans perform in everyday life to interact with their animate and inanimate environment. The main purpose of an NUI is to simplify access to the computer by supporting intuitive interaction methods that are very easy to learn. Users who have never been exposed to a particular application before should, with the help of an NUI, handle it much faster than with a traditional graphical user interface (GUI) operated by mouse or keyboard. This chapter discusses the currently available interfaces for image-based communication between humans and computers, along with their use in practical applications and mobile devices, and also presents an innovative GDL technology. The authors of this monograph have proposed the GDL technology, which allows users not only to communicate with computers, but also supports sophisticated analyses of human motions and gestures, e.g. in navigation, behavioural analyses, the psychological interpretation of gestures, and physical (exercise) therapy.
Article
Full-text available
The challenge of designing universal access to knowledge demands considerations on multi-device interaction. A systematic review of inclusive environments built from multiple devices was conducted based on studies published during the period of 2002–2013. The search strategy combined manual and automatic searches from which 8889 studies were identified; 34 studies were found proposing software tools for building multi-device inclusive environments (0.38 % of the original sample). Thus, this study analyzes the ways academic and industrial communities have developed tools for building inclusive environments. The main findings of this review are: (1) an urgent need for the recognition of accessibility as an important non-functional requirement; (2) a need for taking into account the social conditions of users, such as illiteracy and people living in underserved communities; and (3) the identification of new research questions in the context of multi-device inclusive environments.
Article
Recent studies on how traditional HCI methods are applied in practice entail a re-conceptualization of the nature of such methods, leading to the notion of 'method-as-set-of-resources'. Re-usable resources provide some, but not all, of the resources required for design work; others must be provided within design work contexts. The expanding scope of use contexts, alongside the shift of emphasis to user experience, calls for the development of alternative HCI practices, and these two trends can influence each other. Understanding, via structured case studies, how HCI professionals transfer the same set of design and evaluation methods across use contexts, in terms of appropriating and configuring method-resources, can provide applied knowledge for: (i) creating new methods, (ii) training novices, and (iii) laying a firmer groundwork for formal analysis of HCI methods. This workshop aims to bring together HCI professionals who have method-transfer experience and knowledge to share, analyze and synthesize the insights so gained. Keywords: methods, transfer, approaches, resources, use context, case study, usability, user experience, practice.
Article
This study proposes an analytical approach to the creation of multitouch control-gesture vocabularies applicable to mobile devices. The approach consists of four steps: (a) identifying target commands, (b) extracting gesture features of the target commands, (c) analyzing usage patterns based on elements that consist of multitouch gestures, and (d) creating gesture vocabularies based on the gesture features and elements. Usefulness and practicality of the proposed approach were validated in a case study. The case study created 11 mobile web browsing gestures to improve short-cut interactions. Six volunteers created gestures based on systematic procedures and practical methods. A total of 314 gestures were created in the case study, and the results were compared with those of a previous study that used an empirical approach to design control gestures. The proposed approach helped designers to create appropriate gestures for various commands on mobile devices. It was very practicable for all designers, including even novice users.
Article
The paper presents the design, simulation analysis, and parameter measurements of an optical multi-touch panel backlight system, and compares the optical technology with commercially available solutions. A numerical simulation of the laser-based backlight system was performed, examining the influence of laser power, beam divergence, and the placement of reflective surfaces on the uniformity of illumination. The optimal illumination system was then used for further studies.
Article
Cross-disciplinary research involving semiotics and computer science is rare. With the Web 2.0, contemporary activities of users can be properly described as real ‘life on the screen’. One of the challenges for the design of interactive languages is to support these activities and to express the much wider variety of meanings that users want to exchange through and with software. As the discipline whose aim is to investigate meanings, through representation and interpretation processes, semiotics is remarkably well-positioned to contribute with new knowledge in our field. This viewpoint article examines the reasons why in spite of this positioning, semiotics remains unpopular among researchers interested in interactive computer languages. In particular, it proposes that a semiotic approach can help us think about computer languages to represent our individual and collective ‘selves’ on the screen.
Article
Full-text available
The communicability evaluation method evolved within the Semiotic Engineering framework, and its main goal is to assess how designers communicate to users, through the interface, both their design intent and the interactive principles they have selected for the application. The method consists of three steps (tagging, interpretation, and semiotic profiling) and was originally developed to evaluate how well users get the designer's message from interacting with single-user interfaces. In order to extend the communicability evaluation method to account for groupware applications, we must identify new utterances and problem categories that apply to interacting with and through these applications. In this paper we take the first step in that direction and, based on the results of two case studies, propose four types of problems that should be added to the original set of HCI problems to characterize interactive breakdowns in groupware applications.
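To make the tagging step concrete, the sketch below tallies how often an evaluator assigned communicability utterances to moments of a recorded user session. The utterance list is only a subset of the method's tag set, and the function name is purely illustrative:

```python
from collections import Counter

# Illustrative subset of communicability utterances used as tags in the
# method; the full set (and any groupware extensions) comes from the
# semiotic engineering literature, not from this sketch.
UTTERANCES = ["Where is it?", "What now?", "What's this?", "Oops!",
              "I can't do it this way.", "Thanks, but no, thanks."]

def tag_session(events):
    """Tagging step (sketch): count which breakdown utterances an
    evaluator assigned to moments of a recorded user session."""
    return Counter(e for e in events if e in UTTERANCES)

# A toy session tagged by an evaluator: repeated "Where is it?" tags
# would later be interpreted as a navigation/communication breakdown.
session = ["Where is it?", "Oops!", "Where is it?", "What now?"]
profile = tag_session(session)
```

The interpretation and semiotic profiling steps then read patterns out of such tallies; they are judgment-based and are not reducible to code.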
Article
Full-text available
This paper describes semiotic inspection, a semiotic engineering evaluation method. It briefly identifies the essence of theory-based evaluation methods in HCI. Then it provides a detailed description and illustration of this method, which is based on a semiotic theory of HCI. It discusses its theoretical stance in semiotic engineering compared to the communicability evaluation method, as well as the perceived advantages and disadvantages of semiotic inspection. Finally, it points at the next steps in the semiotic inspection research agenda.
Conference Paper
Full-text available
In order to improve the three-dimensional (3D) exploration of virtual spaces above a tabletop, we developed a set of navigation techniques using a handheld magic lens. These techniques allow for an intuitive interaction with two-dimensional and 3D information spaces, for which we contribute a classification into volumetric, layered, zoomable, and temporal spaces. The proposed PaperLens system uses a tracked sheet of paper to navigate these spaces with regard to the Z-dimension (height above the tabletop). A formative user study provided valuable feedback for the improvement of the PaperLens system with respect to layer interaction and navigation. In particular, the problem of keeping the focus on selected layers was addressed. We also propose additional vertical displays in order to provide further contextual clues.
Conference Paper
Full-text available
Multi-touch large display interfaces are becoming increasingly popular in public spaces. These spaces impose specific requirements on the accessibility of the user interfaces: most users are not familiar with the interface, and expectations with regard to user experience are very high. Multi-touch interaction beyond the traditional move-rotate-scale interactions is often unknown to the public and can become exceedingly complex. We introduce TouchGhosts: visual guides that are embedded in the multi-touch user interface and that demonstrate the available interactions to the user. TouchGhosts are activated while using an interface, providing guidance on the fly and within the context-of-use. Our approach allows designers to define reconfigurable strategies to decide how or when a TouchGhost should be activated and which particular visualization will be presented to the user.
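The "reconfigurable strategies" idea can be sketched as small interchangeable policy objects that decide when a guide activates. The class and method names below are hypothetical, not the TouchGhosts API:

```python
class IdleTimeoutStrategy:
    """Activate a guide when the user has been idle for `timeout` seconds.

    Hypothetical sketch of a 'reconfigurable activation strategy'; the
    paper names the concept but not this interface.
    """
    def __init__(self, timeout):
        self.timeout = timeout

    def should_activate(self, last_touch_time, now):
        return now - last_touch_time >= self.timeout

class FailedGestureStrategy:
    """Activate after `threshold` consecutive unrecognized gestures."""
    def __init__(self, threshold):
        self.failures = 0
        self.threshold = threshold

    def record(self, recognized):
        # Reset on any successfully recognized gesture.
        self.failures = 0 if recognized else self.failures + 1

    def should_activate(self):
        return self.failures >= self.threshold

# Two strategies configured independently; an interface could swap them
# at runtime without touching the guide visualizations themselves.
idle = IdleTimeoutStrategy(timeout=5.0)
fg = FailedGestureStrategy(threshold=2)
fg.record(recognized=False)
fg.record(recognized=False)
```

Keeping activation policy separate from the guide visualizations is what makes the strategies reconfigurable per deployment context.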
Conference Paper
Full-text available
We have recently begun to see hardware support for the tabletop user interface, offering a number of new ways for humans to interact with computers. Tabletops offer great potential for face-to-face social interaction; advances in touch technology and computer graphics provide natural ways to directly manipulate virtual objects, which we can display on the tabletop surface. Such an interface has the potential to benefit a wide range of the population and it is important that we design for usability and learnability with diverse groups of people. This paper describes the design of SharePic - a multi-user, multi-touch, gestural, collaborative digital photograph sharing application for a tabletop - and our evaluation with both young adult and elderly user groups. We describe the guidelines we have developed for the design of tabletop interfaces for a range of adult users, including elders, and the user interface we have built based on them. Novel aspects of the interface include a design strongly influenced by the metaphor of physical photographs placed on the table with interaction techniques designed to be easy to learn and easy to remember. In our evaluation, we gave users the final task of creating a digital postcard from a collage of photographs and performed a realistic think-aloud with pairs of novice participants learning together, from a tutorial script.
Conference Paper
Full-text available
We present Silicone iLluminated Active Peripherals (SLAP), a system of tangible, transparent widgets for use on vision-based multi-touch tabletops. SLAP Widgets are cast from silicone or made of acrylic and include sliders, knobs, keyboards, and keypads. They add tactile feedback to multi-touch tables and can be dynamically relabeled with rear projection. They are inexpensive, battery-free, and untethered widgets combining the flexibility of virtual objects with tangible affordances of physical objects. Our demonstration shows how SLAP Widgets can augment input on multi-touch tabletops with modest infrastructure costs.
Conference Paper
Full-text available
In this paper we compare the affordances of presenting educational material on a tabletop display with presenting the same material using traditional paper handouts. Ten pairs of undergraduate students used digital or paper materials to prepare for exams during four one-hour study sessions over the course of a term. Students studying with the tabletop display solved problems on their own before resorting to answer keys and repeated activities more often than students studying with paper documents. We summarize study activities and discuss the benefits and drawbacks of each medium. Author Keywords: tabletop, paper, education, affordance, collaboration, study.
Conference Paper
Full-text available
We present data from detailed observations of CityWall, a large multi-touch display installed in a central location in Helsinki, Finland. During eight days of installation, 1199 persons interacted with the system in various social configurations. Videos of these encounters were examined qualitatively as well as quantitatively based on human coding of events. The data convey phenomena that arise uniquely in public use: crowding, massively parallel interaction, teamwork, games, negotiations of transitions and handovers, conflict management, gestures and overt remarks to co-present people, and "marking" the display for others. We analyze how public availability is achieved through social learning and negotiation, why interaction becomes performative and, finally, how the display restructures the public space. The multi-touch feature, gesture-based interaction, and the physical display size contributed differentially to these uses. Our findings on the social organization of the use of public displays can be useful for designing such systems for urban environments.
Conference Paper
Full-text available
In this paper we describe our findings from a field study that was conducted at the Vancouver Aquarium to investigate how visitors interact with a large interactive table exhibit using multi-touch gestures. Our findings show that the choice and use of multi-touch gestures are influenced not only by general preferences for certain gestures but also by the interaction context and social context they occur in. We found that gestures are not executed in isolation but linked into sequences where previous gestures influence the formation of subsequent gestures. Furthermore, gestures were used beyond the manipulation of media items to support social encounters around the tabletop exhibit. Our findings indicate the importance of versatile many-to-one mappings between gestures and their actions that, other than one-to-one mappings, can support fluid transitions between gestures as part of sequences and facilitate social information exploration.
Conference Paper
Full-text available
Communicability evaluation is a method based on semiotic engineering that aims at assessing how designers communicate their design intents and chosen interactive principles to users, and thus complements traditional usability evaluation methods. In this paper, we present a case study in which we evaluate how communicability tagging of an application changes along users' learning curves. Our main goal was to obtain indications of how communicability evaluation over a learning period helps provide valuable information about interface designs and identify communicative and interactive problems as users become more proficient in the application.
Conference Paper
Full-text available
Many research efforts today explore how digitally augmented tables enable face-to-face interaction with digital content and applications. Yet the design of digital tables is still largely driven by the constraints and requirements of the underlying sensing technologies and digital systems. In order to move digital tables into real-world physical spaces, researchers need to work closely with architects and industrial designers in order to engage the knowledge and skills from a long history of physical design and fabrication. Architales is an interactive story table for gallery exhibition developed as an experiment in the physical/digital co-design of the physical table and environment with the digital story system and content.
Conference Paper
Full-text available
We report on a design exploration into how a large multi-touch tabletop display can be used for information visualization. We designed an interface where users explored a tagged photo collection by bi-manual manipulation of the collection's tag cloud. User feedback showed that despite the availability of multi-touch, most of the actual interactions were single-touch. However, some particularly natural actions, such as grabbing the tag cloud and partitioning it into two parts, were often carried out with both hands. Thus our user study indicates that multi-touch can act as a useful complementary interaction method in information visualization interfaces.
Conference Paper
Full-text available
This paper describes a comparative study of the usage of low-fidelity and high-fidelity prototyping for the creation of multi-user multi-touch interfaces. The multi-touch interface presented in this paper allows users to collaboratively search for existing multimedia content, create new compositions with this content, and finally integrate it into a layout for presenting it. The study we conducted consists of a series of parallel user tests using both low-fidelity and high-fidelity prototypes to inform the design of the multi-touch interface. Based on a comparison of the two test sessions, we found that one should be cautious in generalising high-level user interactions from a low-fidelity towards a high-fidelity prototype. However, the low-fidelity prototype approach presented proved to be very valuable for generating design ideas concerning both high- and low-level user interactions on a multi-touch tabletop.
Conference Paper
Full-text available
Many research projects have demonstrated the benefits of bimanual interaction for a variety of tasks. When choosing bimanual input, system designers must select the input device that each hand will control. In this paper, we argue for the use of pen and touch two-handed input, and describe an experiment in which users were faster and committed fewer errors using pen and touch input in comparison to using either touch and touch or pen and pen input while performing a representative bimanual task. We present design principles and an application in which we applied our design rationale toward the creation of a learnable set of bimanual, pen and touch input commands.
Conference Paper
Full-text available
In educational settings, current digital technologies often work counter-productively because people use them in separation and isolation. This paper describes a set of multi-touch multimedia interaction applications that were especially designed to enhance collaboration between users. We present the underlying framework for creating such applications. Our applications were created to support typical collaborative tasks performed by secondary students. We present our findings on the usage of these applications by users in the setting of a secondary school classroom.
Poster
Full-text available
Tabletop and tangible interfaces have become common in recent years. Technology trends in this area can be found in commercial products, such as Apple's iPhone™ and Microsoft Surface™, as well as in research ventures, such as the Reactable and Perceptive Pixel initiatives. Nevertheless, natural human-computer interfaces (HCI) to support this hardware technology are still non-intuitive.
Article
Full-text available
John Leslie King's interest in history was evident at the first CSCW conference in 1986. His review of 15 years of research with technology to support real-time collocated interaction, then called Group Decision Support Systems, revealed that we sometimes ...
Article
Full-text available
A lot of research has been done within the area of mobile computing and context-awareness over the last 15 years, and the idea of systems adapting to their context has produced promising results for overcoming some of the challenges of user interaction with mobile devices within various specialized domains. However, today it is still the case that only a limited body of theoretically grounded knowledge exists that can explain the relationship between users, mobile system user interfaces, and their context. Lack of such knowledge limits our ability to elevate learning from the mobile systems we develop and study from a concrete to an abstract level. Consequently, the research field is impeded in its ability to leap forward and is limited to incremental steps from one design to the next. Addressing the problem of this void, this article contributes to the body of knowledge about mobile interaction design by promoting a theoretical approach for describing and understanding the relationship between user interface representations and user context. Specifically, we promote the concept of indexicality derived from semiotics as an analytical concept that can be used to describe and understand a design. We illustrate the value of the indexicality concept through an analysis of empirical data from evaluations of three prototype systems in use. Based on our analytical and empirical work we promote the view that users interpret information in a mobile computer user interface through creation of meaningful indexical signs based on the ensemble of context and system.
Article
Full-text available
Gestural interfaces, which have long been a part of the interface scene, are discussed. Gestural systems are no different from any other form of interaction. Some systems are trying to develop a gestural language, sometimes with the number of touch points as a meta-signal about the scope of the movement. Gesture and touch-based systems are already so well accepted that people make gestures to systems that do not understand them: tapping the screens of non-touch-sensitive displays, pinching and expanding the fingers or sliding a finger across the screen on systems that do not support these actions, and, for that matter, waving hands in front of sinks that use old-fashioned handles. Gestural systems are indeed one of the important future paths for a more holistic, human interaction of people with technology.
Book
Touch and gestural devices have been hailed as the next evolutionary step in human-computer interaction. As software companies struggle to catch up with one another in terms of developing the next great touch-based interface, designers are charged with the daunting task of keeping up with the advances in new technology and this new aspect of user experience design. Product and interaction designers, developers, and managers are already well versed in UI design, but touch-based interfaces have added a new level of complexity. They need quick references and real-world examples in order to make informed decisions when designing for these particular interfaces. Brave NUI World is the first practical book for product and interaction designers and developers creating touch and gesture interfaces. Written by developers of industry-first, multi-touch, multi-user products, this book gives you the necessary tools and information to integrate touch and gesture practices into your daily work, presenting scenarios, problem solving, metaphors, and techniques intended to avoid making mistakes. * Provides easy-to-apply design guidance for the unique challenge of creating touch- and gesture-based user interfaces * Considers diverse user needs and context, real-world successes and failures, and a look into the future of NUI * Presents thirty scenarios, giving practitioners a multitude of considerations for making informed design decisions and helping to ensure that missteps are never made again.
Conference Paper
We present Silicone iLluminated Active Peripherals (SLAP), a system of tangible, translucent widgets for use on multi-touch tabletops. SLAP Widgets are cast from silicone or made of acrylic, and include sliders, knobs, keyboards, and buttons. They add tactile feedback to multi-touch tables, improving input accuracy. Using rear projection, SLAP Widgets can be relabeled dynamically, providing inexpensive, battery-free, and untethered augmentations. Furthermore, SLAP combines the flexibility of virtual objects with physical affordances. We evaluate how SLAP Widgets influence the user experience on tabletops compared to virtual controls. Empirical studies show that SLAP Widgets are easy to use and outperform virtual controls significantly in terms of accuracy and overall interaction time.
Article
This paper presents semiotics as a framework for understanding and designing computer systems as sign systems. Although semiotic methods can be applied to all levels of computer systems, they view computer systems under a particular perspective, namely as targets of interpretations. When we need to see computer systems as automata, semiotics has little to offer. The main focus of the paper is semiosis, the process of sign formation and interpretation. The paper discusses different semiotic paradigms, and advocates the European structuralist paradigm in combination with the American Peircean tradition. Programming is described as a process of sign-creation, and a semiotic approach to programming is compared to the object-oriented method. The importance of the work situation as a context of interpretation is emphasized.
Article
When we use the term ‘human–computer interaction’ (HCI), the image that is conjured up is of a person sitting at a visual display unit staring in at the world of ‘information’; the person is very much outside the space of information. But when we think of other activities such as going shopping, having a meeting or driving across town, we do not think of the person as outside this space. On the contrary, we see the person as inside a space of activities, surrounded by, and interacting with, assorted artefacts and people. Navigation of Information Space is an alternative conceptualisation of HCI that sees people as existing inside information spaces. Looking at HCI in this way means looking at HCI design as the creation of information spaces. This paper explores these ideas in more detail, arguing that Navigation of Information Space is not just a metaphor for HCI; it is a ‘paradigm shift’. The paper illustrates how Semiotics has informed this conception and discusses why such a paradigm shift is needed.
Article
Semiotics is ‘the mathematics of the humanities’ in the sense that it provides an abstract language covering a diversity of special sign-usage (language, pictures, movies, theatre, etc.). In this capacity, Semiotics is helpful for bringing insights from older media to the task of interface design, and for defining the special characteristics of the computer medium. However, Semiotics is not limited to interface design but may also contribute to the proper design of program texts and yield predictions about the interaction between computer systems and their context of use.
Conference Paper
There is growing interest in tabletop interfaces that enable remote collaboration by providing shared workspaces. This approach assumes that these remote tabletops afford the same beneficial work practices as co-located tabletop interfaces and traditional tables. This assumption has not been tested in practice. We explore two such work practices in remote tabletop collaboration: (a) coordination by territorial partitioning of space; and (b) transitioning between individual and group work within a shared task. We have evaluated co-located and remote tabletop collaboration. We found that remote collaborators did not coordinate territorially as co-located collaborators did. We found no differences between remote and co-located interfaces in their ability to afford individual and group work. However, certain interaction techniques impaired the ability to transition fluidly between these working styles. We discuss causes and the implications for the design and future study of these interfaces. Author Keywords Remote tabletop interfaces, territoriality, coupling, fluidity.
Conference Paper
Many surface computing prototypes have employed gestures created by system designers. Although such gestures are appropriate for early investigations, they are not necessarily reflective of user behavior. We present an approach to designing tabletop gestures that relies on eliciting gestures from non-technical users by first portraying the effect of a gesture, and then asking users to perform its cause. In all, 1080 gestures from 20 participants were logged, analyzed, and paired with think-aloud data for 27 commands performed with 1 and 2 hands. Our findings indicate that users rarely care about the number of fingers they employ, that one hand is preferred to two, that desktop idioms strongly influence users' mental models, and that some commands elicit little gestural agreement, suggesting the need for on-screen widgets. We also present a complete user-defined gesture set, quantitative agreement scores, implications for surface technology, and a taxonomy of surface gestures. Our results will help designers create better gesture sets informed by user behavior.
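The agreement scores this line of work reports can be computed with a simple frequency-based measure: for each command (referent), group identical gesture proposals and sum the squared proportions of the groups. The sketch below assumes that formulation; the function name and the example gesture labels are illustrative, not taken from the paper.

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement for one referent: sum, over groups of identical gesture
    proposals, of (group size / total proposals) squared.
    A score of 1.0 means every participant proposed the same gesture."""
    total = len(proposals)
    return sum((n / total) ** 2 for n in Counter(proposals).values())

# Hypothetical example: 20 participants propose gestures for a "delete" command
proposals = ["drag_offscreen"] * 12 + ["cross_out"] * 5 + ["tap_hold"] * 3
print(round(agreement_score(proposals), 3))  # → 0.445
```

Commands whose scores fall near the lower bound (everyone proposing something different) are the ones the abstract suggests may need on-screen widgets instead of gestures.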
Conference Paper
Though interaction designers critique interfaces as a regular part of their research and practice, the field of HCI lacks a proper discipline of interaction criticism. By interaction criticism we mean rigorous, evidence-based interpretive analysis that explicates relationships among elements of an interface and the meanings, affects, moods, and intuitions they produce in the people that interact with them; the immediate goal of this analysis is the generation of innovative design insights. We summarize existing work offering promising directions in interaction criticism to build a case for a proper discipline. We then propose a framework for the discipline, relating each of its parts to recent HCI research.
Conference Paper
This paper presents a design case study of SIDES: Shared Interfaces to Develop Effective Social Skills. SIDES is a tool designed to help adolescents with Asperger's Syndrome practice effective group work skills using a four-player cooperative computer game that runs on tabletop technology. We present the design process and evaluation of SIDES conducted over six months with a middle school social group therapy class. Our findings indicate that cooperative tabletop computer games are a motivating and supportive tool for facilitating effective group work among our target population and reveal several design lessons to inform the development of similar systems.
Conference Paper
We present Ripples, a system which enables visualizations around each contact point on a touch display and, through these visualizations, provides feedback to the user about successes and errors of their touch interactions. Our visualization system is engineered to be overlaid on top of existing applications without requiring the applications to be modified in any way, and functions independently of the application's responses to user input. Ripples reduces the fundamental problem of ambiguity of feedback when an action results in an unexpected behaviour. This ambiguity can be caused by a wide variety of sources. We describe the ambiguity problem, and identify those sources. We then define a set of visual states and transitions needed to resolve this ambiguity, of use to anyone designing touch applications or systems. We then present the Ripples implementation of visualizations for those states, and the results of a user study demonstrating user preference for the system, and demonstrating its utility in reducing errors.
Book
Semiotic engineering was originally proposed as a semiotic approach to designing user interface languages. Over the years, with research done at the Department of Informatics of the Pontifical Catholic University of Rio de Janeiro, it evolved into a semiotic theory of human-computer interaction (HCI). It views HCI as computer-mediated communication between designers and users at interaction time. The system speaks for its designers in various types of conversations specified at design time. These conversations communicate the designers' understanding of who the users are, what they know the users want or need to do, in which preferred ways, and why. The designers' message to users includes even the interactive language in which users will have to communicate back with the system in order to achieve their specific goals. Hence, the process is, in fact, one of communication about communication, or metacommunication. Semiotic engineering has two methods to evaluate the quality of metacommunication in HCI: the semiotic inspection method (SIM) and the communicability evaluation method (CEM). Up to now, they have been mainly used and discussed in technical contexts, focusing on how to detect problems and how to improve the metacommunication of specific systems. In this book, Clarisse de Souza and Carla Leitão discuss how SIM and CEM, which are both qualitative methods, can also be used in scientific contexts to generate new knowledge about HCI. The discussion goes into deep considerations about scientific methodology, calling the reader's attention to the essence of qualitative methods in research and the kinds of results they can produce. To illustrate their points, the authors present an extensive case study with a free open-source digital audio editor called Audacity.
They show how the results obtained with a triangulation of SIM and CEM point at new research avenues not only for semiotic engineering and HCI but also for other areas of computer science such as software engineering and programming. Table of Contents: Introduction / Essence of Semiotic Engineering / Semiotic Engineering Methods / Case Study with Audacity / Lessons Learned with Semiotic Engineering Methods / The Near Future of Semiotic Engineering
Book
The author discusses the existing theoretical approaches of semiotically informed research in HCI, what is useful and the limitations. He proposes a radical rethink to this approach through a re-evaluation of important semiotic concepts and applied semiotic methods. Using a semiotic model of interaction he explores this concept through several studies that help to develop his argument. He concludes that this semiotics of interaction is more appropriate than other versions because it focuses on the characteristics of interactive media as they are experienced and the way in which users make sense of them rather than thinking about interface design or usability issues.
Article
Hartson (1998) has pointed out that although people have studied interfaces and applied theories (mostly from cognitive psychology) to them, the majority of the guidelines and principles applied have arisen more out of practice than theory. He claims that the HCI field, especially in real-world practice, could benefit a great deal more from theory. As the discipline whose aim is to investigate processes of communication and signification amongst agents in general, Semiotics is bound to contribute to the field of human-computer interaction with complementary perspectives, new methods, and concepts which can shed light on some of the major HCI challenges in design and evaluation. Viewing HCI as a complex human communication process, involving designers and users and the mediation of communicative artifacts, Computer Semiotics and Semiotic Engineering, for instance, are some of the approaches in Applied Semiotics that directly address the issues bearing on human-computer interaction. The workshop aims to bring together researchers and practitioners of HCI and Semiotics and to give them the opportunity to discuss how the two fields can provide new knowledge and a new interdisciplinary research agenda in HCI.
Article
Interactive digital television (iDTV) is a social medium and must therefore be tested in a context as close to real life as possible. This explains why we saw the potential and importance of involving real-life couples in iDTV usability testing. We describe an experiment that compares single-user testing and co-participation testing with couples for the evaluation of several Flemish iDTV applications. First, we found that less probing by the facilitator was needed to elicit thinking out loud in the think-aloud/co-participation method with couples than in the think-aloud/single-test-user method. Secondly, couples did not encounter difficulties working together with the iDTV applications. Further, couples did not lose time by discussing irrelevant issues during the test session. A fourth finding is that couples detected more usability hits than single test users. The quality of comments, however, was the same in both conditions: 60% of the comments consisted of intrinsic suggestions and 40% of general problem detections. Another issue was raised by findings during the test: couples in general were enthusiastic to participate, reported that the test session required little effort on their part, and evaluated the test session as easy and fun to do. By contrast, single test users in general were not sure whether they would like to participate in future tests, declared that the test session demanded considerable effort, and evaluated the test session less positively.
Article
Despite causing many debates in human-computer interaction (HCI), the term “metaphor” remains a central element of design practice. This article investigates the history of ideas behind user-interface (UI) metaphor, not only technical developments, but also less familiar perspectives from education, philosophy, and the sociology of science. The historical analysis is complemented by a study of attitudes toward metaphor among HCI researchers 30 years later. Working from these two streams of evidence, we find new insights into the way that theories in HCI are related to interface design, and offer recommendations regarding approaches to future UI design research.
Article
Tabletop groupware systems have natural advantages for collaboration, but they present a challenge for application designers because shared work and interaction progress in different ways than in desktop systems. As a result, tabletop systems still have problems with usability. We have developed a usability evaluation technique, T-CUA, that focuses attention on teamwork issues and that can help designers determine whether prototypes provide adequate support for the basic actions and interactions that are fundamental to table-based collaboration. We compared T-CUA with expert review in a user study where 12 evaluators assessed an early tabletop prototype using one of the two evaluation methods. The group using T-CUA found more teamwork problems and found problems in more areas than those using expert review; in addition, participants found T-CUA to be effective and easy to use. The success of T-CUA shows the benefits of using a set of activity primitives as the basis for discount usability techniques.
Article
Semiotic approaches to design have recently shown that systems are messages sent from designers to users. In this paper we examine the nature of such messages and show that systems are messages that can send and receive other messages: they are metacommunication artefacts that should be engineered according to explicit semiotic principles. User interface languages are the primary expressive resource for such complex communication environments. Existing cognitively-based research has provided results which set the target interface designers should hit, but little is said about how to make successful decisions during the process of design itself. In an attempt to give theoretical support to the elaboration of user interface languages, we explore Eco's Theory of Sign Production (U. Eco, A Theory of Semiotics, Bloomington, IN: Indiana University Press, 1976) and build a semiotic framework within which many design issues can be explained and predicted.
Conference Paper
Multi-touch interfaces allow users to translate, rotate, and scale digital objects in a single interaction. However, this freedom represents a problem when users intend to perform only a subset of manipulations. A user trying to scale an object in a print layout program, for example, might find that the object was also slightly translated and rotated, interfering with what was already carefully laid out earlier. We implemented and tested interaction techniques that allow users to select a subset of manipulations. Magnitude Filtering eliminates transformations (e.g., rotation) that are small in magnitude. Gesture Matching attempts to classify the user's input into a subset of manipulation gestures. Handles adopts a conventional single-touch handles approach for touch input. Our empirical study showed that these techniques significantly reduce errors in layout, while the Handles technique was slowest. A variation of the Gesture Matching technique presented the best combination of speed and control, and was favored by participants.
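The Magnitude Filtering idea described above — suppressing transformation components that are small in magnitude, so an intended scale does not also nudge or rotate the object — can be sketched as a post-processing step on a recognized manipulation. The data class, threshold values, and function name below are illustrative assumptions, not the paper's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Manipulation:
    dx: float      # translation in x (pixels)
    dy: float      # translation in y (pixels)
    angle: float   # rotation (radians)
    scale: float   # uniform scale factor (1.0 = no scaling)

def magnitude_filter(m, t_translate=4.0, t_rotate=math.radians(3), t_scale=0.05):
    """Zero out manipulation components whose magnitude falls below a
    threshold. Thresholds here are illustrative, not from the paper."""
    if math.hypot(m.dx, m.dy) < t_translate:
        m.dx = m.dy = 0.0
    if abs(m.angle) < t_rotate:
        m.angle = 0.0
    if abs(m.scale - 1.0) < t_scale:
        m.scale = 1.0
    return m
```

For example, a pinch recognized as `Manipulation(dx=1.0, dy=1.0, angle=0.01, scale=1.3)` would keep only the scale component: the incidental 1.4-pixel drift and 0.01-radian twist fall below the thresholds and are removed.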
Wixon, D., 2008. The challenge of emotional innovation. UX Week. Available from: http://vimeo.com/2893051 (accessed 19.07.11).
De Souza, C.S., 2005. The Semiotic Engineering of Human-Computer Interaction (Acting with Technology). The MIT Press, London, UK.
De Souza, C.S., Leitão, C.F., 2009. Semiotic Engineering Methods for Scientific Research in HCI. Morgan & Claypool, San Francisco, CA.
De Souza, C.S., Leitão, C.F., Prates, R.O., da Silva, E.J., 2006. The semiotic inspection method. In: Proceedings of the VII Brazilian Symposium on Human Factors in Computing Systems, IHC '06. ACM, New York, NY, pp. 148–157.
Vinh, K., 2011. Unnecessary Explanations. subtraction.com, Feb 01, 2011 (accessed 25.07.11).
Westerman, 2008. How People Really Use the iPhone. Create with Context Research Report. Available from: www.createwithcontext.com/how-people-really-use-the-iphone.html (accessed 15.01.12).
J. Derboven et al. / Int. J. Human-Computer Studies 70 (2012) 714–728
O'Neill, S., 2008. Interactive Media: The Semiotics of Embodied Interaction. Springer, London, UK.
Pavlus, J., 2010. Synthesizer 76 iPad App Shows Delights and Pitfalls of "Skeuomorphic" UIs. fastcodesign.com (accessed 19.07.11).
Apple Inc., 2011. iOS Human Interface Guidelines. User Experience. Available from: http://developer.apple.com/library/ios/documentation/userexperience/conceptual/mobilehig/MobileHIG.pdf (accessed 20.07.11).