Figure 14 - uploaded by Philipp Wicke
Official Deadpool movie billboard advertisement from Twentieth Century Fox (Twentieth Century Fox, 2016). Read: dead-poo-l. Photo reference: (MONSTER Blog, 2016).

Source publication
Thesis
Full-text available
Our everyday virtual communication has undergone a shift in recent years, since the Unicode Standard introduced Emoji (Unicode-Standard, since 2000), a set of more than one thousand pictograms that has become standard in most of our online messaging services. Now Emoji are a substantial part of our virtual communication, with more and more words becomi...

Similar publications

Article
Full-text available
The current study examined within- and cross-language connectivity in four priming conditions: repetition, translation, within-language semantic and cross-language semantic priming. Unbalanced Hebrew–English bilinguals (N = 89) completed a lexical decision task in one of the four conditions in both languages. Priming effects were significantly larg...
Article
Full-text available
Studies employing primed lexical decision tasks have revealed morphological facilitation effects in children and young adults. It is unknown if this effect is preserved or diminished in older adults. In fact, only a few studies have investigated age-related changes in morphological processing, and results are inconsistent across studies. To address th...
Article
Full-text available
The adult lexicon links concepts and labels with related meanings (e.g. dog–cat). We asked how children’s encounters with concepts versus labels contribute to their semantic development. We investigated semantic priming in monolinguals and bilinguals, who have similar experience with concepts, but different experience with labels (i.e. monolinguals...
Article
Full-text available
To directly investigate the reciprocal causal relationship of the conceptual and affective meaning of words, two priming experiments were conducted with the lexical decision task. In Experiment 1, the influence of semantic relatedness on the affective priming effect was explored by manipulating the semantic associative strength between the prime an...

Citations

... The relation between the word "window" and the thing it denotes is arbitrary. In Emoji, by contrast, the icon and its meaning are linked, so the relation is not arbitrary (Wicke, 2017). Robertson et al. (2021) offered the first longitudinal study of how emoji semantics change over time, applying techniques from computational linguistics to six years of Twitter data. ...
Article
Full-text available
Due to the polysemous and ambiguous nature of emojis, translators encounter difficulties in rendering them into Kurdish. This paper attempts to identify the nature and frequency of the problems related to emojis and to suggest more appropriate ways of dealing with them when they are translated into Kurdish. The paper takes a descriptive-analytic approach. The data of the study are collected primarily from the 'Emoji Movie' produced in 2017. The data are then categorized and analyzed thoroughly to explore the underlying factors of these problems and to suggest effective strategies for translating them with minimum ambiguity. The results of this study show that polysemous emojis can be disambiguated through context and other extralinguistic factors such as the setting and the technological background of the translators.
... Concerning the role of emoji in written communication, several topics have been addressed: redundancy and part-of-speech category (Donato and Paggio, 2017), complementary vs text-replacing functions of emoji (Dürscheid and Siever, 2017), emoji as text-replacement and its effect on reading time (Gustafsson, 2017), emoji as semantic primes (Wicke, 2017), among others (Cramer, Juan, and Tetreault, 2016;Herring and Dainas, 2017;Kelly and Watts, 2015). ...
... for luck) or literal translations (e.g. for the action to explode). According to Wicke (2017), these strategies enable one to use the semiotic advantages of emoji. ...
... Wicke and Bolognesi (2020) conduct a user study in which they ask participants to provide semantic representations for a sample of 300 English nouns using emoji, with the goal of identifying which representational strategies are most used to represent abstract and concrete concepts. They use a refined version of the classification of representational strategies proposed by Wicke (2017): literal, rebus, phonetic similarity and figurative construction. According to their results, figurative construction is the most used strategy (59%), followed by literal (33.91%). ...
Thesis
The visual representation of concepts has been the focus of multiple studies throughout history and is considered to be behind the origin of existing writing systems. Its exploration has led to the development of several visual language systems and is a core part of graphic design assignments, such as icon design. As is the case with problems from other fields, the visual representation of concepts has also been addressed using computational approaches. In this thesis, we focus on the computational generation of visual symbols to represent concepts, specifically through the use of blending. We started by studying aspects related to the transformation mechanisms used in the visual blending process, which led to the proposal of a visual blending taxonomy that can be used in the study and production of visual blends. In addition to the study of visual blending, we conceived and implemented several systems: a system for the automatic generation of visual blends using a descriptive approach, with which we conducted an experiment with three concepts (pig, angel and cactus); a visual blending system based on the combination of emoji, which we called Emojinating; and a system for the generation of flags, which we called Moody Flags. The experimental results obtained through multiple user studies indicate that the systems that we developed are able to represent abstract concepts, which can be useful in ideation activities and for visualisation purposes. Overall, the purpose of our study is to explore how the representation of concepts can be done through visual blending. We established that visual blending should be grounded on the conceptual level, leading to what we refer to as Visual Conceptual Blending. We delineated a roadmap for the implementation of visual conceptual blending and described resources that can help in such a venture, as is the case of a categorisation of emoji oriented towards visual blending.
... Several related works have inspired the methods applied in the proposed translation system. Closely related to our text-to-emoji system is the one presented by Wicke (2017). The author creates and evaluates a system that can translate action words into sequences of emoji through the use of various linguistic strategies (metaphor, idioms, rebus etc). ...
Conference Paper
Full-text available
The task of translating text to images holds some valid creative potential and has been the subject of study in Computational Creativity. In this paper, we present preliminary work focused on emoji translation. The work-in-progress system is based on techniques of information retrieval. We compare the performance of our system with three deep learning approaches using a text-to-emoji task. The preliminary results suggest some advantages of using a knowledge-base approach as opposed to a purely data-driven approach. This paper aims to situate the research, underline its relevance and attract valuable feedback for its future development.
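The approach above is only summarised in the abstract, so the snippet below is a minimal, hypothetical sketch of a knowledge-base (retrieval) text-to-emoji step in Python; the emoji_index dictionary and the word-by-word lookup are illustrative assumptions, not the authors' actual data or pipeline.

```python
# Minimal sketch of a retrieval-based text-to-emoji translation step.
# emoji_index is a tiny illustrative stand-in for a real knowledge base
# such as EmojiNet; it is NOT the data used by the system described above.
emoji_index = {
    "dog": "🐶", "cat": "🐱", "love": "❤️", "fire": "🔥",
    "explode": "💥", "grow": "🌱", "rain": "🌧️", "happy": "😊",
}

def translate_to_emoji(text: str) -> str:
    """Replace each word that has a known emoji; keep the rest as text."""
    out = []
    for token in text.lower().split():
        word = token.strip(".,!?")
        out.append(emoji_index.get(word, word))  # fall back to the word itself
    return " ".join(out)

print(translate_to_emoji("The dog and the cat love the rain"))
# -> the 🐶 and the 🐱 ❤️ the 🌧️
```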
... how different emoji renderings affect interpretation [28]), role in communication (e.g. studying emoji as semantic primes [41]), similarity (e.g. semantically measuring emoji similarity [3]) and text-to-emoji translation (e.g. ...
Article
The emoji connection between visual representation and semantic knowledge, together with its large conceptual coverage, has the potential to be exploited in computational approaches to the visual representation of concepts. An example of a system that explores this potential is Emojinating, a system that uses a process of visual blending of existing emoji to represent concepts. In this paper, we use the Emojinating system as a case study to analyse the appropriateness of visual blending for the visual representation of concepts. We conduct three experiments in which we analyse output quality, type of blend used, usefulness to the user and ease of interpretation. Our main contributions are the following: (i) the production of a double-word concept list for testing the system; (ii) an extensive user study using two different concept lists (single-word and double-word); and (iii) a study that compares produced blends with user drawings.
... One of the approaches consists in gathering a set of individual graphic elements (either pictures or icons), which work as a translation when put side by side, e.g. translating plot verbs into sequences of emoji [7] or the Emojisaurus platform. ...
Conference Paper
Full-text available
Graphic designers visually represent concepts in several of their daily tasks, such as in icon design. Computational systems can be of help in such tasks by stimulating creativity. However, current computational approaches to concept visual representation lack effectiveness in promoting the exploration of the space of possible solutions. In this paper, we present an evolutionary approach that combines a standard Evolutionary Algorithm with a method inspired by Estimation of Distribution Algorithms to evolve emoji blends to represent user-introduced concepts. The quality of the developed approach is assessed using two separate user studies. In comparison to previous approaches, our evolutionary system is able to better explore the search space, obtaining solutions of higher quality in terms of concept representativeness.
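As a rough, hypothetical illustration of the kind of loop such an evolutionary approach implies (a population of candidate emoji blends, selection, crossover, mutation), the sketch below uses an invented placeholder fitness function; the actual system's EDA-inspired sampling and its concept-representativeness scoring are not reproduced here.

```python
# Illustrative evolutionary loop over emoji pairs (candidate blends).
# EMOJI_POOL, the ASSOCIATIONS table and the fitness function are toy
# placeholders, not the scoring used by the system described above.
import random

EMOJI_POOL = ["🌱", "🔥", "❤️", "🐶", "🌧️", "💡", "📚", "🎵", "🏠", "⭐"]
ASSOCIATIONS = {"growth": {"🌱", "⭐"}, "idea": {"💡", "📚"}}

def fitness(blend, concept):
    wanted = ASSOCIATIONS.get(concept, set())
    return sum(1 for e in blend if e in wanted)

def evolve(concept, pop_size=20, generations=30):
    population = [(random.choice(EMOJI_POOL), random.choice(EMOJI_POOL))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda b: fitness(b, concept), reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                       # one-point crossover
            if random.random() < 0.2:                  # mutation
                child = (random.choice(EMOJI_POOL), child[1])
            children.append(child)
        population = parents + children
    return max(population, key=lambda b: fitness(b, concept))

print(evolve("growth"))
```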
... Research on the role of emoji in written communication addresses several topics: e.g. redundancy and part-of-speech category [14], emoji function [15], effect on reading time [19], emoji as semantic primes [33], among others [22,7,20]. ...
Conference Paper
Full-text available
The emoji system does not currently cover all possible concepts. In this paper, we present the platform Emojinating, which has the purpose of fostering creativity and aiding in ideation processes. It lets the user introduce a concept and automatically represents it by searching for existing emoji and generating novel ones. The system combines the exploration of semantic networks with visual blending, and integrates data from EmojiNet, ConceptNet and Twemoji. To evaluate the system in terms of production efficiency and output quality, we produced emoji for a set of 1509 nouns from the New General Service List. The results show a coverage of 75% of the list.
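A hedged sketch of the semantic-network exploration step described above could look as follows, assuming the public ConceptNet REST API (api.conceptnet.io) and a tiny illustrative emoji-name table standing in for the EmojiNet/Twemoji data actually used; this is not the Emojinating implementation.

```python
# Sketch: query ConceptNet for terms related to a concept, then check
# whether any related term already has an emoji. emoji_names is a toy
# stand-in for the EmojiNet/Twemoji data used by the real platform.
import requests

emoji_names = {"book": "📚", "light bulb": "💡", "idea": "💡", "brain": "🧠"}

def related_terms(concept: str, limit: int = 20):
    url = f"http://api.conceptnet.io/c/en/{concept.replace(' ', '_')}"
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    terms = []
    for edge in edges:
        for node in (edge.get("start", {}), edge.get("end", {})):
            label = node.get("label", "").lower()
            if label and label != concept:
                terms.append(label)
    return terms

def find_emoji(concept: str):
    candidates = [concept] + related_terms(concept)
    return [(t, emoji_names[t]) for t in candidates if t in emoji_names]

print(find_emoji("idea"))
```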
... Emoji are not designed to be semantic primitives in the sense of [54,15], but a previous study investigated their potential to be used as such in language [53], showing that it is useful to regard emoji as semiotic building blocks. The discipline's founder, Ferdinand de Saussure, viewed semiotics as the "science that studies the life of signs within its society" [41]. ...
... Emoji can thus be used as metaphors, metonyms, icons and letters. [53] showed how symbolic narratives generated using the Scéalextric system can be augmented with emoji to render verbs as sequences of visual signs. Emoji can be used in this role as iconic signs for their literal meanings, as metaphors, and as visual riddles using the rebus principle. If Figure 1. ...
... Subjects were found to interpret the animation of these geometrical objects and shapes in terms of animated beings, attributing personality and motives. b) A sequence of emoji representing the concept of growth using a method derived in [53] to tell stories with emoji. The first emoji is the sapling emoji. ...
Conference Paper
Full-text available
With the increasing availability of commercial humanoid robots, the domain of computational storytelling has found a tool that combines linguistics with its physical originator, the body. We present a framework that evolves previous research in this domain, from a focus on the analysis of expressiveness towards a focus on the potential for creative interaction between humans and robots. A single story may be rendered in many ways, but embodiment is one of the oldest and most natural, and does more to draw people into the story. While a robot provides the physical means to generate an infinite number of stories, we want to hear stories which are more than the products of mere generation. In the framework proposed here, we let the robot ask specific questions to tailor the creation process to the experiences of the human user. This framework offers a new basis for investigating important questions in Human-Robot Interaction, Computational Creativity, and Embodied Storytelling.
... There are also several systems that focus on generating visual symbols from natural language. Most relevantly, Wicke developed a system that is capable of translating verbal narratives into emoji symbols (Wicke 2017). This is similar to our work in that narratives are told pictorially, not phonetically, but it differs somewhat in that our system strives for a highly abstract artistic representation, more similar to the primitive style of cave paintings than modern emojis. ...
Article
An increasingly large body of converging evidence supports the idea that the semantic system is distributed across brain areas and that the information encoded therein is multimodal. Within this framework, feature norms are typically used to operationalize the various parts of meaning that contribute to define the distributed nature of conceptual representations. However, such features are typically collected as verbal strings, elicited from participants in experimental settings. If the semantic system is not only distributed (across features) but also multimodal, a cognitively sound theory of semantic representations should take into account different modalities in which feature-based representations are generated, because not all the relevant semantic information may be easily verbalized into classic feature norms, and different types of concepts (e.g., abstract vs. concrete concepts) may consist of different configurations of non-verbal features. In this paper we acknowledge the multimodal nature of conceptual representations and we propose a novel way of collecting non-verbal semantic features. In a crowdsourcing task we asked participants to use emoji to provide semantic representations for a sample of 300 English nouns referring to abstract and concrete concepts, which account for (machine readable) visual features. In a formal content analysis with multiple annotators we then classified the cognitive strategies used by the participants to represent conceptual content through emoji. The main results of our analyses show that abstract (vs. concrete) concepts are characterized by representations that: 1. consist of a larger number of emoji; 2. include more face emoji (expressing emotions); 3. are less stable and less shared among users; 4. use representation strategies based on figurative operations (e.g., metaphors) and strategies that exploit linguistic information (e.g. rebus); 5. correlate less well with the semantic representations emerging from classic features listed through verbal strings.
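To make the comparison reported above concrete, here is a small toy sketch (invented example responses, not the study's crowdsourced norms) of two of the measures mentioned: the mean number of emoji per response and the share of face emoji.

```python
# Toy illustration of two measures from the study: mean emoji per response
# and proportion of face emoji. The responses are invented examples only.
FACE_EMOJI = {"😊", "😢", "😡", "😱", "🤔"}

responses = {
    "abstract": [["🤔", "💭", "😊", "❓"], ["😢", "🌧️", "💔"]],  # invented
    "concrete": [["🐶"], ["🏠", "🚪"]],                           # invented
}

for concept_type, resp_list in responses.items():
    counts = [len(r) for r in resp_list]
    all_emoji = [e for r in resp_list for e in r]
    face_share = sum(e in FACE_EMOJI for e in all_emoji) / len(all_emoji)
    print(f"{concept_type}: mean emoji per response = {sum(counts)/len(counts):.1f}, "
          f"face emoji share = {face_share:.0%}")
```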
Conference Paper
Full-text available
Emoji are becoming increasingly popular, both among users and brands. Their impact is such that some authors even mention a possible language shift towards visuality. We present a Visual Blending-based system for emoji generation, which is capable of representing concepts introduced by the user. Our approach combines data from ConceptNet, EmojiNet and Twitter's Twemoji datasets to explore Visual Blending in emoji generation. In order to assess the quality of the system, a user study was conducted. The experimental results show that the system is able to produce new emoji that represent the concepts introduced. According to the participants, the blends are not only visually appealing but also unexpected.
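The blending process itself is only described at a high level above; as a hedged sketch, one elementary blend operation (overlaying a scaled emoji image on another) could be implemented with Pillow as shown below. The file paths are placeholders, and the real system works on Twemoji assets and supports richer transformations than this simple overlay.

```python
# Sketch of one elementary visual blend: paste a scaled copy of one emoji
# image onto another. Paths are placeholders; this is not the system's code.
from PIL import Image

def overlay_blend(base_path: str, top_path: str, out_path: str, scale: float = 0.5):
    base = Image.open(base_path).convert("RGBA")
    top = Image.open(top_path).convert("RGBA")
    w, h = base.size
    top = top.resize((int(w * scale), int(h * scale)))
    pos = (w - top.width, h - top.height)   # lower-right corner of the base
    base.paste(top, pos, mask=top)          # alpha channel used as mask
    base.save(out_path)

# overlay_blend("light_bulb.png", "book.png", "idea_blend.png")
```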