Article

Can Machines Talk? Comparison of Eliza with Modern Dialogue Systems

... Another artificial intelligence designed to mimic human speech is called Cleverbot. This chatbot was created by Rollo Carpenter and, according to Shah et al. (2016), has an effective artificial dialogue system that makes it function like a human conversation partner. Torrey et al. (2016) claim that Cleverbot picks up on human responses from real individuals and responds accordingly with ease. ...
... Elbot, another popular chatbot, won the Loebner Prize, an AI contest, in 2008 for its accomplishments in human-machine interaction (Shah et al., 2016). Shah et al. (2016) stated that Elbot was deserving of the award because interrogators were certain it was a human. This chatbot, created by Fred Roberts, is not limited to providing a self-contained and specialised collection of FAQs; instead, it engages users in conversation about a wide range of topics through natural language interaction (NLI). ...
Article
Full-text available
The current study sought to determine how well university students' EFL listening comprehension skills can be developed using artificial intelligence (AI) technologies. One hundred students participated in the study, split into two groups: the control group (N = 50), which received traditional instruction, and the experimental group (N = 50), which received instruction using artificial intelligence systems. The study's instruments included an EFL listening comprehension skills checklist to determine which listening skills are most important for first-year college students to acquire, a pre-post listening skills test to measure students' listening abilities before and after using the chatbot and Duolingo AI applications, and a correction rubric. A statistical analysis was conducted to confirm the study's hypotheses. Findings revealed that the experimental group students' EFL listening skills were enhanced as a result of using the artificial intelligence tools (chatbot and Duolingo).
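The pre/post group comparison described above can be sketched with a simple two-sample test. The scores below are invented for illustration only; the study's raw data and actual test statistics are not reproduced here.

```python
import statistics

def welch_t(sample_a, sample_b):
    # Welch's two-sample t statistic:
    # (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    return (ma - mb) / ((va / len(sample_a) + vb / len(sample_b)) ** 0.5)

# Hypothetical post-test listening scores for the two groups.
experimental = [78, 82, 75, 88, 91, 84, 79, 86]
control = [65, 70, 62, 74, 68, 71, 66, 69]

t = welch_t(experimental, control)
print(round(t, 2))  # a large positive t favors the experimental group
```

A positive t statistic well above the critical value would support the reported conclusion that the experimental group outperformed the control group.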
... A chatbot is a computer program capable of continuing a conversation and answering questions from human users (Verma et al., 2020). The first successful attempt to imitate human text-based conversation behaviour (Shum et al., 2018) was completed in the 1960s by the computer program ELIZA, developed by Joseph Weizenbaum (Epstein & Klinkenberg, 2001; Shah et al., 2016; Weizenbaum, 1966). Recent progress in machine learning and artificial intelligence, as well as the prevalence of smartphones and mobile messenger apps, has boosted the popularity of chatbots. ...
... In a usability study (Study 1), the task-based chatbot KIM, which helps customers find recipes tailored to their individual needs, was evaluated. One hundred and twenty-three conversations were collected and analysed together with subjective user evaluation data and an external evaluation of KIM's conversational abilities by three reviewers using the scale by Shah et al. (2016). A scenario-based experiment with 627 respondents (Study 2), adapting four conversations with KIM, was used to further compare the effects of selected conversational elements. ...
... Our findings contribute to the research on (online) communication behaviour in general (Kasper & Wagner, 2014; Paulus et al., 2016) and human-chatbot conversation in particular (Hill, Randolph Ford, & Farreras, 2015; Shah, Warwick, Vallverdú, & Wu, 2016; Radziwill & Benton, 2017). The results of the two studies relate to conversation analysis of online talk (Paulus et al., 2016) and shed light on the textual conversation of humans with task-based chatbots. ...
Article
The use of text-based chatbots offering individual support to customers has increased steadily in recent years. However, thus far, research has focused on comparing text-based chatbots either with each other or with humans, whilst the investigation of task-based dialogues has been scarce. This paper aims to identify the characteristics of dialogues, that is, conversational elements, that lead to a successful task-based conversation. For this purpose, the chatbot KIM by MAGGI Kochstudio was used. It was designed to help customers find a recipe tailored to their individual needs. In order to investigate which conversational elements contribute to successful communication between the user and the chatbot KIM, a usability study collecting 123 unstructured dialogues and a scenario-based experiment using four dialogues with 627 respondents were conducted. The quantitative analysis demonstrates that task completion is characterized by a higher perception of the chatbot's conversational ability and higher user satisfaction. The chatbot should propose correct recipe suggestions following a short dialogue, without the user needing to provide too much input. Based on these findings, we recommend equipping task-based chatbots with elements that complement their assistive qualities, for example, improved use of standard phrases and reactions to similar domains and non-requests. Gender-specific differences in task completion should also be considered.
... Bots perform simple functions and usually reply when addressed. The development of computer-assisted conversational agents started with the psychotherapeutic experiment ELIZA as early as the 1960s (Shah et al., 2016). Since then, bots have been populating the web, often performing small functions to maintain online services and interaction on platforms (Geiger, 2014; Latzko-Toth, 2016). ...
... However, advances in natural language processing and machine learning over the last decade have enabled the development of bots capable of human-like interaction, usually referred to as chatbots or socialbots (Grimme et al., 2017). Newer versions of such bots can identify contexts of communication, modify their responses according to the interlocutor, and engage in human-like communication in ambiguous ways (e.g., Shah et al., 2016). ...
... Previous studies have shown that human-like features are essential cues for users to perceive technological interlocutors as social companions and to activate the psychological inference of anthropomorphism (e.g., Edwards et al., 2019; Epley et al., 2007; Wischnewski et al., 2022). We add to the existing discussion on the design of socialbots (e.g., Araujo, 2018; Shah et al., 2016) by emphasizing the aspects of configuration and communication. Both studied bots were configured to be even more human by the human users: for example, by adding human-like responses for the bots as if they were real users with intentions and opinions. ...
Article
Full-text available
This article examines communicative anthropomorphization, that is, assigning of humanlike features, of socialbots in communication between humans and bots. Situated in the field of human-machine communication, the article asks how socialbots are devised as anthropomorphized communication companions and explores the ways in which human users anthropomorphize bots through communication. Through an analysis of two datasets of bots interacting with humans on social media, we find that bots are communicatively anthropomorphized by directly addressing them, assigning agency to them, drawing parallels between humans and bots, and assigning emotions and opinions to bots. We suggest that socialbots inherently have anthropomorphized characteristics and affordances, but their anthropomorphization is completed and actualized by humans through communication. We conceptualize this process as communicative anthropomorphization.
... Chatbots are AI-based applications developed to mimic human interactions and engage in real-time spontaneous conversations with humans. Their role within classroom learning, particularly in language education, is tangential (Fryer et al., 2019; Shah et al., 2016; Yin et al., 2021). Previous relevant studies focus on how they comprehend human conversations and motivate students' learning. ...
... These early chatbots had very little educational value. However, over the last decade, chatbots have improved (Coniam, 2014; Fryer et al., 2019; Shah et al., 2016; Yin et al., 2021). For example, Shah et al. (2016) found that modern chatbots receive significantly higher scores from users than first-generation chatbots that were built using early natural language processing. ...
... However, over the last decade, chatbots have improved (Coniam, 2014; Fryer et al., 2019; Shah et al., 2016; Yin et al., 2021). For example, Shah et al. (2016) found that modern chatbots receive significantly higher scores from users than first-generation chatbots that were built using early natural language processing. A chatbot evaluation study conducted by Coniam (2014) revealed that most modern chatbots are able to present grammatically acceptable responses. ...
Article
Full-text available
Student engagement is an important aspect of digital learning. It is energized by motivation and explained by three basic needs in Self-Determination Theory (SDT). Teacher support as distinguished in SDT has been widely applied in face-to-face settings, but not in digital learning, particularly in the K12 context. We know very little about how to support the needs of young children in digital learning. Recently, the founders of SDT also stated that we need more studies to understand how to support students' needs in digital learning environments. Therefore, this study aims to investigate how well the three teacher support dimensions distinguished in SDT (autonomy, structure, and involvement) encourage K12 students' behavioral, cognitive, and emotional engagement. In this study, three hundred and thirty Grade Eight students learned for four weeks through distance learning using technology (referred to as digital learning in this paper) and completed a questionnaire on perceived teacher support and their engagement. Stepwise multiple regression models were used to analyze the data. The two major findings are that teacher involvement is the most influential predictor and that autonomy support has less effect. Two plausible explanations are (i) teacher-student relationships are more important in digital learning due to the nature of schooling and (ii) teachers have less room to support autonomy in digital learning, which by its looser structure already offers a freer learning experience.
... In most rule-based chatbots for single-turn dialogue, the reply is determined by considering only the most recent utterance. This can be improved by employing a multi-turn response selection criterion, in which each preceding turn is used as context to pick a natural and contextually relevant response (Wu et al., 2016). ...
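As a toy illustration of the multi-turn idea, a candidate reply can be scored against every turn of the recent context rather than only the last utterance. The bag-of-words overlap below is a crude stand-in for the learned matching model of Wu et al. (2016); the dialogue and candidates are invented.

```python
from collections import Counter

def bow(text: str) -> Counter:
    # Bag-of-words representation: word -> count.
    return Counter(text.lower().split())

def overlap(a: Counter, b: Counter) -> int:
    # Number of shared word occurrences between two bags.
    return sum((a & b).values())

def select_response(context: list[str], candidates: list[str]) -> str:
    # Score each candidate against EVERY context turn, not just the
    # last one, so earlier turns still steer the choice.
    def score(cand: str) -> int:
        c = bow(cand)
        return sum(overlap(bow(turn), c) for turn in context)
    return max(candidates, key=score)

context = ["i want to book a flight to paris", "when do you want to travel"]
candidates = ["flights to paris leave daily at 9 am", "the weather is nice today"]
print(select_response(context, candidates))  # → "flights to paris leave daily at 9 am"
```

A single-turn selector that only saw "when do you want to travel" would have no lexical reason to prefer the flight-related reply; the earlier turn supplies that signal.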
... With all the pervasiveness and advancement of chatbots today, existing assessments have chiefly centered on building up their abilities and knowledge to interpret and react to human language meaningfully. On account of these ongoing advances in natural language processing, chatbots nowadays have improved at carrying on conversations (Shah et al., 2016). ...
Thesis
Full-text available
Nowadays, people's lives have become increasingly reliant on technology. Along with the rapid growth of digital platforms and social networks and advancements in technology, conversational agents, often known as chatbots, are increasingly becoming a popular marketing tool for improving customer connections. This research was conducted to analyze the effect of TAM's constructs beyond the first stage of adoption on Moroccan users' continuance intention to use chatbots. The main purpose of the study is to determine the importance of users' level of satisfaction with a chatbot service and its role in users' continuance intention. A survey was developed in French and distributed to Moroccan chatbot users. The hypotheses were tested using partial least squares path modeling (PLS-PM) via Smart-PLS. Findings showed that user satisfaction was the most powerful predictor of continuance intention and played an important mediating role between TAM's constructs and continuance intention.
... Chatbots are AI-based applications developed to mimic human interactions and engage in real-time spontaneous conversations with humans. Their role within classroom learning, particularly in language education, is tangential (Fryer et al., 2019; Shah et al., 2016; Yin et al., 2021). Previous relevant studies focus on how they comprehend human conversations and motivate students' learning. ...
... These early chatbots had very little educational value. However, over the last decade, chatbots have improved (Coniam, 2014; Fryer et al., 2019; Shah et al., 2016; Yin et al., 2021). For example, Shah and colleagues (2016) found that modern chatbots receive significantly higher scores from users than first-generation chatbots that were built using early natural language processing. ...
Article
Full-text available
As Artificial Intelligence (AI) advances technologically, it will inevitably bring many changes to classroom practices. However, research on AI in education reflects a weak connection to pedagogical perspectives or instructional approaches, particularly in K-12 education. AI technologies may benefit motivated and advanced students. An understanding of the teacher's role in mediating and supporting students' motivation to learn with AI technologies in the classroom is needed. This study used self-determination theory as the undergirding framework to investigate how teacher support moderates the effects of student expertise on need satisfaction and intrinsic motivation to learn with AI technologies. This experimental study involved 123 Grade 10 students and used chatbots as the AI-based technology in the experiment. The analyses revealed that intrinsic motivation and competence to learn with the chatbot depended on both teacher support and student expertise (i.e. self-regulated learning and digital literacy), and that teacher support better satisfied the need for relatedness and less satisfied the need for autonomy. The findings refine our understanding of the application of self-determination theory and expand the pedagogical and design considerations of AI applications and instructional practices.
... The current interest in chatbots or conversational agents emerged due to substantial advancements in computing technology, artificial intelligence and machine learning. Such advancements have led to broad improvements in interpreting and predicting natural language, including improvements in machine translation (Shah et al., 2016). The growing usage of messaging platforms and the accessibility of the mobile Internet have also led to the adoption of chatbots (Folstad & Brandtzaeg, 2017). ...
Article
Full-text available
With the advent of Artificial Intelligence (AI), machine learning and advancements in computer technology, there is a growing popularity of conversational agents or chatbots among young adults for seeking different forms of therapeutic and social support. Young adults are also considered to be more vulnerable to various mental health disorders and psychological disturbances. Moreover, the rapid integration of chatbot technology into modern lifestyles has sparked increasing interest in understanding the underlying reasons behind its widespread adoption, particularly among young adults. The current research aimed to provide a comprehensive overview of the psychosocial factors driving such chatbot usage among young adults worldwide by reviewing relevant literature from multiple reputable academic sources such as Google Scholar. From this review, prominent research gaps have been identified in identifying predictors of chatbot usage among young adults in India, the long-term effectiveness of chatbots in mental health intervention, and their applicability across various psychosocial domains. Overall, sparse work has been done on chatbot usage in the Indian population, particularly focusing on young adults. Therefore, the findings from this paper are important for understanding the future applications of chatbots and their degree of utility in the Indian context.
... Over-reliance on AI recommendations may reduce human team members' engagement and initiative, ultimately weakening intellectual diversity and decision-making capacity. More critically, AI decision-making processes, which are heavily reliant on training data, may inherit or amplify biases present in the data [157,158]. Such biases could manifest as favoritism or prejudice in resource allocation and task coordination [159,160], potentially exacerbating power imbalances within teams and creating new collaboration conflicts, thereby limiting overall team effectiveness. ...
Article
Full-text available
In multi-user collaborative interaction systems, the interface serves not only as a medium for human–computer interaction but also as a crucial channel for communication between users. Consequently, the quality of collaborative interface design directly impacts the overall effectiveness of the system. In collaborative systems, different users typically assume distinct roles, and task flows are typically more complex. Compared to single-user interfaces, multi-user collaborative interfaces must account for a broader range of collaboration requirements and characteristics. Although a substantial body of theoretical and practical research on user interface design exists, design methods specifically for multi-user collaborative interaction interfaces are still lacking. Therefore, this study builds on the existing theories and case studies of collaborative systems, extending user-centered design methods. The study emphasizes the analysis of task flows and role relationships in multi-user collaboration and integrates collaboration needs and characteristics throughout every stage of the interface design process. Ultimately, we propose a methodological framework for interface design tailored to multi-user collaborative interaction systems, aiming to provide theoretical support for the development of more advanced and comprehensive collaborative systems.
... In 1966, ELIZA became one of the world's first chatbots: a computer program designed to convincingly mimic human conversation [1,2]. Though technically speaking ELIZA was a far cry from the large language-based chatbots of today (e.g., Gemini, ChatGPT), discussion surrounding these kinds of programs even then was remarkably prescient: "ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving credibility. ...
Conference Paper
Full-text available
Humans may have evolved to be “hyperactive agency detectors”. Upon hearing a rustle in a pile of leaves, it would be safer to assume that an agent, like a lion, hides beneath (even if there may ultimately be nothing there). Can this evolutionary cognitive mechanism—and related mechanisms of anthropomorphism—explain some of people’s contemporary experience with using chatbots (e.g., ChatGPT, Gemini)? In this paper, we sketch how such mechanisms may engender the seemingly irresistible anthropomorphism of large language-based chatbots. We then explore the implications of this within the educational context. Specifically, we argue that people’s tendency to perceive a “mind in the machine” is a double-edged sword for educational progress: Though anthropomorphism can facilitate motivation and learning, it may also lead students to trust—and potentially over-trust—content generated by chatbots. To be sure, students do seem to recognize that LLM-generated content may, at times, be inaccurate. We argue, however, that the rise of anthropomorphism towards chatbots will only serve to further camouflage these inaccuracies. We close by considering how research can turn towards aiding students in becoming digitally literate—avoiding the pitfalls caused by perceiving agency and humanlike mental states in chatbots.
... Finally, different demographic groups may rate their satisfaction with chatbots differently too. Reference [54] found that younger users and female users rated the conversations with chatbots more favorably. Therefore, it is worthwhile to examine how individuals' demographic characteristics and personality traits would influence their adoption and expectations of using chatbots, so that the brand manager can develop their chatbot strategy based upon their brand's target market. ...
Article
Full-text available
Chatbots are widely used in customer service contexts today. People using chatbots have pragmatic reasons, like checking delivery status and refund policies. The purpose of the paper is to investigate which factors relating to user experience and a chatbot's service quality influence user satisfaction and electronic word-of-mouth. A survey was conducted in July 2024 to collect responses in Hong Kong about users' perceptions of chatbots. Contrary to previous literature, entertainment and warmth perception were not associated with user experience and service quality. Social presence was associated with user experience, but not service quality. Competence was relevant to both user experience and service quality, which reveals important implications for digital marketers and brands adopting chatbots to enhance their service quality.
... Nonetheless, chatbots have advanced within the past ten years (Coniam, 2014;Fryer et al., 2019;Yin et al., 2021). For instance, Shah et al. (2016) discovered that users give current chatbots noticeably higher ratings than they do first-generation chatbots constructed with early natural language processing. According to a Coniam (2014) evaluation study, the majority of contemporary chatbots can provide grammatically correct responses. ...
Article
Full-text available
How AI can be utilized in the field of language education, and how it influences learning, motivation, and engagement, are at the core of current research. Nine papers were included in the present study. These studies cover different AI-based interventions that can support different kinds of learning, from kindergarten to university classrooms, in various environments. The data provided evidence for a thematic analysis in comparison with previously conducted studies, and the methodological quality of the research was evaluated. Findings demonstrate that AI-based interventions are considerably more effective than traditional ways of teaching the same language content. Further, AI technologies can foster learners' intrinsic motivation, self-regulation, and autonomy, which can stimulate students' engagement and interest in their studies. Contextual factors such as instructor support and AI interface design act as conditions that help or hamper the effectiveness of interventions. The results suggest that AI is likely to shape the future of language training, and educators, governments, and researchers need to be kept informed. The need for long-term viability and scalable solutions, as well as the ethical aspects of implementing AI-powered digital systems, requires deeper research. In view of the two-sided picture of the AI-aided language learning trend, this systematic review provides outcomes that may guide further investigation and practice of AI in the field of language learning.
... Nonetheless, due to unexpected interactions with the environment and unforeseen dynamical evolutions, the artifacts seem "alive" and have their kind of "soul" to an observer. This is also known as the Eliza effect (Shah et al., 2016). This effect continues to mislead the widespread perception of AI technologies from scientists and ordinary citizens. ...
Article
Full-text available
The article addresses the accelerating human–machine interaction using large language models (LLMs). It goes beyond the traditional logical paradigms of explainable artificial intelligence (XAI) by considering poorly formalizable cognitive semantic interpretations of LLMs. XAI is immersed in a hybrid space, where humans and machines have crucial distinctions during the digitisation of the interaction process. The author's convergent methodology ensures the conditions for making XAI purposeful and sustainable. This methodology is based on the inverse problem-solving method, cognitive modeling, genetic algorithms, neural networks, causal loop dynamics, and eigenform realization. It has been shown that decision-makers need to create unique structural conditions for information processes, using LLMs to accelerate the convergence of collective problem solving. The implementations have been carried out during collective strategic planning in situational centers. The study is helpful for the advancement of explainable LLMs in many branches of the economy, science and technology.
... ELIZA couldn't understand its users, nor could it genuinely communicate about the issues they raised. This can be shown by analyzing its code, as was pointed out by ELIZA's creator (Weizenbaum, 1966; 1976) and other scholars (Block, 1981; Shah et al., 2016). The simplicity of this program (200 lines in BASIC) makes it easy to show the tricks it uses to create the illusion of genuine communication. ...
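The tricks in question are plain keyword matching plus pronoun reflection. A minimal sketch (with invented rules, far smaller than Weizenbaum's original script) shows how little machinery is needed to sustain the illusion:

```python
import re
import random

# First/second-person swaps so an echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Keyword rules: a pattern and canned replies that reuse the matched fragment.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*) mother(.*)", re.I),
     ["Tell me more about your family."]),
]
DEFAULTS = ["Please go on.", "I see.", "Can you elaborate on that?"]

def reflect(fragment: str) -> str:
    # Swap pronouns word by word: "my dog" -> "your dog".
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, answers in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return random.choice(answers).format(*[reflect(g) for g in match.groups()])
    return random.choice(DEFAULTS)  # no keyword matched: stock phrase

print(respond("I need a holiday"))
```

No rule here encodes any understanding of holidays or moods; the program merely mirrors the user's own words back, which is precisely the illusion the passage describes.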
Preprint
Full-text available
Can AI and humans genuinely communicate? In this article, after giving some background and motivating my proposal (§1-3), I explore a way to answer this question that I call the 'mental-behavioral methodology' (§4-5). This methodology follows the following three steps: First, spell out what mental capacities are sufficient for human communication (as opposed to communication more generally). Second, spell out the experimental paradigms required to test whether a behavior exhibits these capacities. Third, apply or adapt these paradigms to test whether an AI displays the relevant behaviors. If the first two steps are successfully completed, and if the AI passes the tests with human-like results, this constitutes evidence that this AI and humans can genuinely communicate. This mental-behavioral methodology has the advantage that we don't need to understand the workings of black-box algorithms, such as standard deep neural networks. This is comparable to the fact that we don't need to understand how human brains work to know that humans can genuinely communicate. This methodology also has its disadvantages and I will discuss some of them (§6).
... A chatbot is a software application that can recognize text or voice and respond with text and voice (Aksu Dünya & Yıldız Durak, 2023). Shah et al. (2016) stated that artificial intelligence (AI)-powered chatbots are personalized software applications that use semantic analysis and natural language processing (NLP) techniques to carry out text and voice communication, understand user commands, and interact rapidly using specified expressions. ...
Article
Full-text available
Adopting innovations in educational practice is a challenging task. In order to promote the use of technological innovations, acceptance of the technology by potential users is a prerequisite. Indeed, understanding the various factors that influence technology acceptance is critical for technology acceptance research. The use and acceptance of chatbots in education as a technological innovation is a topic that needs to be investigated. Chatbots, which offer close-to-human interaction between the user and technology through text and voice, can provide significant benefits in educational environments. The UTAUT2 model (an extension of UTAUT), which is widely used to evaluate technology acceptance, can serve as a framework for evaluating the acceptance and use of chatbots. This study aims to predict factors influencing students' use of chatbots in education within the UTAUT2 framework. The model was tested using PLS-SEM and machine learning with 926 students. According to the findings of the study, behavioral intentions were influenced by various factors, including performance expectations and attitudes. Facilitating conditions and intentions significantly impacted chatbot usage time. Moderator effects were observed, with age, gender, and usage experience affecting behavioral intentions. Support vector machines and logistic regression showed high prediction accuracy for behavioral intentions and usage time, respectively. These results provide insights for chatbot designers seeking to meet user needs in educational settings.
... In the 1950s, Turing proposed a question-and-answer mode for conducting the Turing test, which probes whether a machine can think like a human [4]. By 1966, ELIZA, the first formal question-answering system, came out; it simulated a psychotherapist in conversations with patients [5]. Today, question-answering systems are widely applied. ...
Preprint
Full-text available
With the rapid growth of agricultural information and the need for data analysis, accurately extracting useful information from massive data has become an urgent first step in agricultural data mining and application. In this study, an agricultural question-answering information extraction method based on the IM-BILSTM (Improved Bidirectional Long Short-Term Memory) algorithm is designed. Firstly, it uses Python's Scrapy crawler framework to obtain information on soil types, crop diseases and pests, and agricultural trade, and removes abnormal values. Secondly, the information extraction step converts the semi-structured data using entity extraction methods. Thirdly, the BERT (Bidirectional Encoder Representations from Transformers) algorithm is introduced to improve the performance of the BILSTM algorithm. Comparison with the BERT-CRF (Conditional Random Field) and BILSTM algorithms shows that the IM-BILSTM algorithm has better information extraction performance than the other two. This study improves the accuracy of agricultural information recommendation systems from the perspective of information extraction. Compared with other work done from the perspective of recommendation algorithm optimization, it is more innovative; it helps the system understand the semantics and contextual relationships in agricultural questions and answers, so as to improve the accuracy of agricultural information recommendation. By gaining a deeper understanding of farmers' needs and interests, the system can better recommend relevant and practical information.
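As a toy illustration of the entity extraction step, a rule-based lexicon lookup can turn a free-text question into structured fields. The crop and pest lexicons below are invented for illustration; the study itself uses a learned BERT + BiLSTM tagger rather than fixed word lists.

```python
import re

# Hypothetical entity lexicons standing in for a trained sequence tagger.
CROPS = {"wheat", "rice", "maize"}
PESTS = {"aphid", "locust", "armyworm"}

def extract_entities(question: str) -> dict:
    # Tokenize crudely, then bucket known words into entity types.
    words = re.findall(r"[a-z]+", question.lower())
    return {
        "crops": sorted(w for w in words if w in CROPS),
        "pests": sorted(w for w in words if w in PESTS),
    }

print(extract_entities("How do I protect wheat from aphid damage?"))
# → {'crops': ['wheat'], 'pests': ['aphid']}
```

The learned tagger replaces the fixed lexicons with contextual predictions, which is what lets it handle unseen entity names and ambiguous words; the output structure feeding the recommendation system is the same.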
... The purpose of chatbots is to engage users in natural language interactions, facilitating meaningful conversations. Harnessing advancements in AI and natural language processing technologies, currently available chatbots have reached remarkably higher levels of capability in both voice-enabled and written exchanges (Shah et al., 2016). ...
Article
Full-text available
This paper reports on a mixed-methods study delving into EFL students’ experiences and perspectives on a text-based pedagogical chatbot. Utilizing chatbot-mediated interaction, a questionnaire survey, and focus group discussions, the study centers around the cognitive and affective domains of learning in relation to the chatbot’s affordances and limitations. Additionally, it investigates potential associations between L2 proficiency and perceptions on the chatbot. The sample (n = 143) consisted of undergraduate students from a Saudi university who engaged in guided and self-initiated interactions with the chatbot. By and large, the findings point to positive experiences concerning the chatbot’s intelligibility and comprehension. In terms of the interaction, the chatbot is perceived as supportive of L2 practice and writing development, interest-provoking, enhancing motivation, and alleviating writing anxiety. Contrastingly, certain demotivating factors are reported regarding the chatbot’s interactional and instructional abilities, including the lack of extended conversations, sensitivity to inaccurate language forms, and sporadic irrelevant responses. Moreover, the Mann-Whitney U test reveals that L2 proficiency does not affect overall views on the chatbot-mediated interaction, except for the aspect of usefulness for L2 practice, which has significantly more positive views from high-intermediate students. Pedagogical implications pertinent to the integration of chatbots in L2 learning are discussed.
... Chatbots can be traced back to Alan Turing or Joseph Weizenbaum, depending on the narrative [16][17][18]. They are dialog systems with natural language capabilities of a textual or auditory nature [19,20]. ...
Article
Full-text available
Extinct and endangered languages have been preserved primarily through audio conservation and the collection and digitization of scripts and have been promoted through targeted language acquisition efforts. Another possibility would be to build conversational agents like chatbots or voice assistants that can master these languages. This would provide an artificial, active conversational partner which has knowledge of the vocabulary and grammar and allows one to learn with it in a different way. The chatbot, @llegra, with which one can communicate in the Rhaeto-Romanic idiom Vallader was developed in 2023 based on GPT-4. It can process and output text and has voice output. It was additionally equipped with a manually created knowledge base. After laying the conceptual groundwork, this paper presents the preparation and implementation of the project. In addition, it summarizes the tests that native speakers conducted with the chatbot. A critical discussion elaborates advantages and disadvantages. @llegra could be a new tool for teaching and learning Vallader in a memorable and entertaining way through dialog. It not only masters the idiom, but also has extensive knowledge about the Lower Engadine, that is, the area where Vallader is spoken. In conclusion, it is argued that conversational agents are an innovative approach to promoting and preserving languages.
... The modality of communication for knowledge transfer included text (T) and/or speech (S), emoji (E), facial expressions (F), or gestures (G) used by humans and CAs. The choice of communication modality depends on the CA embodiment and the type of task to be performed by the CA, e.g., the use of ELIZA and similar text-based CAs in e-commerce (Shah et al., 2016) for answering customer enquiries, the use of a voice assistant to guide children during an indoor treasure-hunting game (Aeschlimann et al., 2020), or the use of both speech and text in a language-learning task in a classroom environment (Carlotto & Jaques, 2016). Table 2 summarizes the papers reviewed based on CA embodiment and communication modality. ...
... For instance, [18] found that holiday shoppers who are more inventive are more likely to use mobile devices. Improvements in NLP and AI have allowed chatbots to mature significantly since their introduction [19]. This has led to an increase in the use of chatbots for customer support across industries. ...
Conference Paper
Full-text available
The travel and tourism sector is one of the world’s most important economic drivers. This study analyzes the growth and transformation of the worldwide tourism sector: what keeps the economy going, generates employment, contributes to social stability, and drives societal development. The industry is crucial to the global economy, supporting the livelihoods of hundreds of millions of people; on many islands, tourism is often the only significant economic activity. Its ultimate role is to encourage the development of sustainable economies. From the greatest global travel companies to the smallest tour operators or hostel proprietors, the travel and tourism business employs millions of people worldwide, and through collective effort it can achieve real change in political and social systems. Preprocessing, feature selection, and model training are the three main components of the proposed method. Preprocessing is performed to clean the data, and the model’s efficacy is measured using an LSTM autoencoder (LSTM-AE) after filter-based and recursive feature selection.
... The aim is to make conversations between the two parties as natural as possible so that they resemble a person-to-person conversation (Schuetzler et al., 2014; Seeger et al., 2018). Over the years, the technologies, and therefore the chatbots, have clearly evolved in their capabilities (Araujo, 2018; Shah et al., 2016). Initially, rule-based chatbots used decision trees to determine their next reply. ...
Article
Full-text available
Although chatbots are often used in customer service encounters, the interactions are often perceived as unsatisfactory. One key aspect of designing chatbots is the use of anthropomorphic design elements. In this experimental study, we examine two anthropomorphic chatbot design elements: personification, which includes a human-like appearance, and social orientation of communication style, which means more sensitive and extensive communication. We tested the influence of the two design elements on social presence, satisfaction, trust and empathy towards a chatbot. First, the results show a significant influence of both anthropomorphic design elements on social presence. Second, our findings illustrate that social presence influences trusting beliefs, empathy, and satisfaction. Third, social presence acts as a mediator of both anthropomorphic design elements for satisfaction with a chatbot. Our implications provide a better understanding of anthropomorphic chatbot design elements for short-term interactions, and we offer actionable implications for practice that enable more effective chatbot implementations.
... When monitoring a company's network, it is necessary to collect logs to understand and identify events on the network; to do this, the SOC uses its main tool, the SIEM (Security Information and Event Management). SIEM [8] [9] tools allow managing the security events of an IS, and some are already used by the mobile operator, namely IBM QRadar and Elastic. ...
Article
Full-text available
Companies around the world are the primary targets of cybercriminals, because the proceeds of attacks against them are far more lucrative than those of attacks against individuals. As a result, businesses have much greater and more stringent cyber security needs; losses in cases of compromise can run to tens of millions of CFA francs, which makes businesses prime targets. Generally, companies deploy their intervention capacities through an Information System Security team in order to meet the security needs of the information system. This team is often responsible for the SOC (Security Operation Centre), i.e., supervising the security of an organization's information system through tools for event collection, event correlation and remote intervention. The main mission of the SOC is to identify, analyse and remediate cyber security incidents. To assist this team in the continuous management of security and to improve the response time to security incidents, we designed and implemented a conversational agent for security event monitoring using the ELK Stack SIEM tool. As a result, we obtained a conversational agent that is able to identify and analyse security incidents and events in the company's information system, centralize and give a global view of the security status of all monitored devices, create personalized rules that can detect flaws in the system, and provide reports on security incidents and events through voice exchanges. This allows the SOC to fulfil the first two terms of its main mission, the identification and analysis of incidents, and thus to react more quickly and efficiently to them, fulfilling the third and last term of its mission, remediation. General Terms: Supervision of security events. Keywords: Cybersecurity, information systems security, security event monitoring, conversational agent, SIEM.
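The paper does not publish its agent's code. A toy sketch of the core idea, translating an analyst's request into an Elasticsearch-style query body for the ELK stack, might look as follows; the intent phrases, field names and time window are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical intent table: a recognised phrase maps to a query clause.
INTENTS = {
    "failed logins": {"match": {"event.action": "authentication_failure"}},
    "malware alerts": {"match": {"event.category": "malware"}},
}

def build_query(utterance, last_minutes=60):
    """Return an Elasticsearch-style query dict for a recognised request,
    restricted to the last `last_minutes` minutes, or None if unrecognised."""
    for phrase, clause in INTENTS.items():
        if phrase in utterance.lower():
            return {
                "query": {"bool": {
                    "must": [clause],
                    "filter": [{"range": {"@timestamp":
                                {"gte": f"now-{last_minutes}m"}}}],
                }}
            }
    return None

q = build_query("Show me failed logins on the mail server")
```

A production agent would pass such a body to the Elasticsearch search API and summarise the hits back to the analyst, by voice in the authors' case.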
... To our understanding, current assessments use self-reporting as the criterion of judgment. [4][5][6][7] Traditionally, researchers have relied on self-report measures to assess a person's demographic information, beliefs, and feelings. [8,9] However, self-report measures are subjective and insensitive. ...
Article
Full-text available
Natural language processing (NLP) is central to communication with machines and among ourselves, and the NLP research field has long sought to produce human-quality language. Identifying informative criteria for measuring the quality of NLP-produced language will support the development of ever-better NLP tools. The authors hypothesize that mentalizing-network neural activity may be used to distinguish NLP-produced language from human-produced language, even in cases where human judges cannot subjectively distinguish the language source. Using the social chatbots Google Meena in English and Microsoft XiaoIce in Chinese to generate NLP-produced language, behavioral tests are conducted which reveal that the variance of personality perceived from chatbot chats is larger than for human chats, suggesting that chatbot language usage patterns are not stable. Using an identity rating task with functional magnetic resonance imaging, neuroimaging analyses reveal distinct patterns of brain activity in the mentalizing network, including the DMPFC and rTPJ, in response to chatbot versus human chats that cannot be distinguished subjectively. This study illustrates a promising empirical basis for measuring the quality of NLP-produced language: adding a judge's implicit perception as an additional criterion.
Article
Today, artificial intelligence is a rapidly developing technology. It is applied in many aspects of our daily lives, such as intelligent search engines, self-driving cars, and intelligent agents. As technology advances, intelligent agents become more popular; smartphones and smart speakers like Amazon Echo have made them ubiquitous. Some of the most well-known examples of current conversational agents are Siri, Cortana, Alexa, and Google Assistant. The advent of AI-driven chatbots is revolutionizing customer service by delivering instant responses, efficiently addressing queries, and slashing wait times. Chatbots are crafted to mimic human behavior and effectively take on the role of humans in a chat environment. Many companies, particularly in e-commerce, leverage chatbots across various platforms as a means to connect with their customers, offering an efficient and automated way to engage in conversations and provide assistance. Chatbots also possess the ability to learn continuously. This paper serves as a foundation for comprehending the subsequent evolution of chatbots. Firstly, we delve into artificial intelligence, providing an overview that encompasses its historical development, along with comprehensive definitions and explanations of its various manifestations. Following that, we define the term "chatbot" and provide an overview of its three primary categories: template-based models, retrieval-based models, and generative-based models. Lastly, the study explores the impact of chatbots on the customer experience in online marketplaces. We investigate the factors that influence the platform's customer experience, the potential enhancements that chatbots can offer, and their effectiveness in addressing customer inquiries and grievances.
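Of the three categories named above, the retrieval-based model is the simplest to illustrate: it returns the stored reply whose question best matches the user's input. A minimal bag-of-words cosine-similarity sketch, with Q/A pairs invented purely for illustration:

```python
import math
from collections import Counter

# Illustrative FAQ store: known question -> canned reply.
FAQ = {
    "what is your return policy": "You can return items within 30 days.",
    "how do i track my order": "Use the tracking link in your email.",
}

def _vec(text):
    """Bag-of-words term counts for a lowercased utterance."""
    return Counter(text.lower().split())

def _cos(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def reply(utterance):
    """Return the reply whose stored question is most similar to the input."""
    v = _vec(utterance)
    best = max(FAQ, key=lambda q: _cos(v, _vec(q)))
    return FAQ[best]

answer = reply("how can i track my order")
```

Template-based models replace the similarity step with pattern rules, and generative models replace the canned replies with text produced by a neural language model; production retrieval systems also use TF-IDF weighting or neural embeddings rather than raw counts.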
Article
Full-text available
Artificial intelligence (AI) applied to education has been the subject of intense debate over its disruptive potential and its horizon of possibilities. Within this scope, this article proposes an analysis of the role of AI in education, focusing on its impacts and challenges for teachers. The objective is to understand how AI can transform the teaching and learning process and what role teachers play in this landscape. This is an exploratory, bibliographic study analyzing the state of the art of the scientific literature on the topic. The methodology included the selection of scientific articles available on the Portal Periódicos CAPES and the SciELO Citation Index (SciELO CI), filtering national publications in Portuguese produced between 2019 and 2024. The search terms used were “teachers and artificial intelligence” and “artificial intelligence and education”. From this search, 21 articles were selected to compose the corpus of analysis. The results indicate that, although AI offers benefits such as efficiency and personalization of teaching, it can also intensify inequalities in access to technologies and lead to a superficial teaching and learning process based on its use if teachers are not properly trained. It is concluded that, despite the controversy surrounding the introduction of AI into educational systems, it will not replace teachers but may complement their functions. The implementation of AI in education must be done in an ethical, balanced and engaged manner, guaranteeing equitable access to technologies and continuing teacher training, underscoring the need for effective State action in this process.
Article
Full-text available
Chatbot technology, a rapidly growing field, uses Natural Language Processing (NLP) methodologies to create conversational AI bots. Contextual understanding is essential for chatbots to provide meaningful interactions. Still, to date, chatbots often struggle to accurately interpret user input due to the complexity of natural language and the diversity of application fields, hence the need for a Systematic Literature Review (SLR) to investigate the motivation behind the creation of chatbots, their development procedures and methods, notable achievements, challenges and emerging trends. Through the application of the PRISMA method, this paper contributes to revealing the rapid and dynamic progress in chatbot technology with NLP learning models, enabling sophisticated and human-like interactions, based on the trends observed in chatbots over the past decade. The results, spanning fields such as healthcare, organizations and business, virtual personalities, and education, do not rule out development in other fields, such as chatbots for cultural preservation, while suggesting the need for oversight of language-comprehension bias and the ethics of chatbot use. In the end, the insights gained from the SLR have the potential to contribute significantly to the advancement of chatbots in NLP as a comprehensive field.
Chapter
This chapter examines the many facets of artificial intelligence, outlining its reach, development, and effects, with an emphasis on how it can be used in education. The study explores the evolutionary history of artificial intelligence, which has been fueled by exponential gains in computing power and data availability. The research looks at how AI is changing teaching, learning, and administrative procedures in educational institutions. The abstract explores the ways in which artificial intelligence is transforming education, outlining the nature, uses, implications, difficulties, and importance of AI. It explores how artificial intelligence technologies, which include learning, reasoning and self-correction, are revolutionizing traditional teaching methods and administrative procedures. AI also facilitates automatic grading, data-driven decision-making, and lifelong learning. AI has significant ramifications that highlight its critical role in transforming educational paradigms internationally, helping both students and educators, despite its limitations.
Chapter
This chapter describes communication at the intersection of biological and artificial intelligence, with an emphasis on organizations. What does it mean that machines talk, and what happens when humans talk to machines? The chapter looks at what digital language models can tell us about human communication abilities, and how the technology behind them underpins our understanding of communication in organizations. This technology has shaped our relationship to communication for 70 years so far, and will certainly change our view of communication in the years to come. A main point of the chapter is that humans have always built technology into their own mental functioning. Language robots are best understood as mental prostheses that can make us more intelligent, but only if we take the trouble to understand both artificial and biological intelligence.
Book
Full-text available
The Metaverse: A Critical Introduction provides a clear, concise and well-grounded introduction to the concept of the Metaverse, its history, the technology, the opportunities, the challenges and how it is having an impact on almost every facet of society. The book serves as a standalone introduction to the Metaverse, as an overarching summary of the specialist volumes in The Metaverse Series, and removes the need to repeat basic information in each book. The book provides: • a concise history of the Metaverse idea and related implementations to date; • an examination of what the Metaverse actually is; • an introduction to the fundamental technologies used in the Metaverse; • a brief overview of how aspects of the Metaverse are having an impact on our lives across multiple disciplines and social contexts; • a consideration of the opportunities and challenges of the evolving Metaverse; and • a sense of how the Metaverse may mature over the coming decades. The book is a practical guide, drawing from academic research, practical and commercial experience, and inspiration from the science-fiction origins and treatments of the Metaverse. The book also explores the impact of the increasing number of virtual worlds and proto-Metaverses which have existed since the late 1990s. The aim is to provide professional and lay readers, researchers, academics and students with an indispensable guide to what counts as the Metaverse, its opportunities and challenges, and how the future of the coming Metaverse can best be realised. There is more information on the book and the series at http://www.themetaverseseries.info/ The Metaverse: A Critical Introduction will be published on 24th September 2024 and is now available for pre-order.
Chapter
For most of us, 2020 and 2021 represent the years of the pandemic. The fear of being either victims or vectors of a life-threatening disease changed our habits. One of the habits that changed was the implementation of telemedicine, especially in the form of medical teleconsultations, and this change created additional difficulties in the medical practice. Numerous communication protocols have been implemented to establish the best possible communication in telemedicine. In this chapter, we will analyze the most commonly used protocols and describe our experience both in the evaluation of efficacy with patients and in the evaluation of medical skills to carry it out.
Chapter
Full-text available
In the process of digitalization and artificial intelligence (AI), data protection and digital ethics are trending topics. However, they are rarely considered in conjunction even though these areas are inextricably linked. This volume, the first in a series, seeks to close this gap. Informational self-determination is an expression of a European understanding of values, particularly with regard to smart technologies and AI applications. In addition to socially relevant dimensions of data protection and digital ethics, the authors demonstrate the influence of networked technologies on the experience and manipulation of modern reality, digitalized childhoods being one of them. Further thematic highlights from the perspective of data protection illustrate the multifaceted nature of the interdisciplinary approach of this book which will be continued throughout the series. With contributions by Dr. Stefan Brink | Prof. Dr. Petra Grimm | Dr. Clarissa Henning | Prof. Dr. Tobias Keber | Dr. Nina Köberer | Mike Kuketz | Dr. Walter Krämer | Daniel Maslewski | Prof. Dr. Ricarda Moll | Dr. Julia-Maria Mönig
Chapter
Deep learning has brought significant improvement to natural language processing. Nowadays virtual assistants, or chatbots, attract the attention of many researchers and are expected to be applied in more and more areas. We designed and implemented an extensible financial virtual assistant using the Genie framework. A new device (or skill) is developed to offer financial services in a backend server cloud. The device and supported APIs (Application Programming Interfaces) are registered in Thingpedia, an open repository. When Genie receives a user utterance, it translates it into a ThingTalk program using a large deep-learning neural network. Genie then executes the ThingTalk program, which may invoke the financial services through the registered APIs. ThingTalk is a declarative programming language: domain experts can easily describe financial services from a high-level viewpoint with minimal knowledge and experience of computer programming and system development, while complex services are implemented in backend servers and accessed through APIs. As a result, domain experts and computer engineers together can quickly and easily build a virtual assistant that supports a natural-language interface.
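The utterance-to-program-to-API pipeline described above can be caricatured in a few lines. In the real system, Genie uses a neural semantic parser and the ThingTalk language; in this sketch a regex template stands in for the parser and the backend API is a stub, so all names, accounts and balances are hypothetical:

```python
import re

def get_balance(account):
    """Stub standing in for a backend financial API registered in Thingpedia."""
    return {"checking": 1250.0, "savings": 8400.0}[account]

# Template "grammar": pattern -> program builder (stand-in for the parser).
TEMPLATES = [
    (re.compile(r"balance of my (\w+) account"),
     lambda m: ("get_balance", m.group(1))),
]

def parse(utterance):
    """Map an utterance to a (function, argument) program, or None."""
    for pattern, build in TEMPLATES:
        m = pattern.search(utterance.lower())
        if m:
            return build(m)
    return None

def execute(program):
    """Run the program by dispatching to the registered API stub."""
    name, arg = program
    if name == "get_balance":
        return get_balance(arg)

prog = parse("What is the balance of my savings account?")
result = execute(prog)
```

The point of the declarative split is visible even here: the template and stub describe *what* service to invoke, while the messy *how* (authentication, account lookup) lives behind the API boundary.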
Chapter
Chatbot assistants like Siri, Alexa, etc., have permeated over 90,000 homes across the US and are forecast to exceed 8.4 billion units by 2024. We look at the origins of the chatbot as well as their categories and the roles they play in everyday life. Here, we discuss the features of such devices and how they might be used for advantageous and worthy purposes. We take a close look at chatbots that act as personal assistants, whether at home or at work, and how they enhance our experience and interface with the world around us.
Conference Paper
Full-text available
Emerging technologies such as virtual influencers, which are virtual robots algorithmically embedded with personae and personalities, are blurring the line between humans and computer-generated personalities and, consequently, the boundary between perceiving robots as machines and perceiving robots as social beings. Following a three-stage model of social perception and interaction toward virtual influencers, this paper (1) argues that, when encountering virtual influencers, audiences’ interpretation of social cues influences their suspension of disbelief; (2) describes how audiences with different degrees of willingness to suspend disbelief follow different patterns in forming their social perception of virtual influencers; and (3) articulates the source orientation model, which explains how the source toward which audiences orient their responses affects their social interaction with virtual influencers.
Book
Full-text available
This book offers a collection of state-of-the-art conversation analytic work on the impact of different types of digital technologies and media on social interaction. It furthers our understanding of whether and to what extent the varying practices of digital interaction can be considered adaptations of the basic organisations and resources of co-present face-to-face interaction. The chapters explore the emerging practices in contemporary digital interaction and in interaction related to digital technologies. The volume is organised into four sections according to the platform or type of digital interaction: mobile messaging, social media, video conferencing, and human-computer interaction. Each of the chapters highlights an interactional or linguistic phenomenon – an action, a practice, a sequence, or a larger structure. Some of these are unique to online environments, such as emojis or hashtags, whereas some occur in both online and offline interaction, such as repair initiators and proposal sequences.
Chapter
A chatbot is a conversational agent that uses Artificial Intelligence (AI), and Natural Language Processing (NLP) in particular, to interpret the text of a chat; instead of making direct contact with a live person, users can converse via text or voice. Chatbots are a fast-growing AI trend involving applications that communicate with users in a conversational style, imitating human conversation in human language. Many industries are attempting to include AI-based solutions like chatbots to improve their customer service, in order to deliver faster and less expensive support. This paper is a survey of published chatbots that aims to discover knowledge gaps and indicate areas requiring additional investigation and study, starting from the history of chatbots and how they have evolved, then describing chatbot architectures to understand how they work, identifying applications of chatbots in many domains, and finishing with the limitations that shorten a chatbot's lifespan and how future work can improve chatbots for best performance. Keywords: Chatbots, Artificial Intelligence, Natural Language Processing, Deep Learning
Article
Artificial intelligence (AI) has been a disruptive technology within healthcare, from the development of simple care algorithms to complex deep learning models. Importantly, AI has the potential to reduce the burden of administrative tasks, advance clinical decision making and improve patient outcomes. Unlocking the full potential of AI requires the analysis of vast quantities of clinical information. Although AI holds tremendous promise, widespread adoption within plastic surgery remains limited. Understanding the basics is essential for plastic surgeons to see beyond the hype and focus on the true promise of AI. This review introduces AI, including the history of AI, key concepts, applications of AI in plastic surgery and future implications.
Chapter
Digital transformation and globalisation have taken online business to the next frontier, embracing customer engagement with conversational artificial intelligence, or chatbots. Chatbots are deployed across several industries ranging from e-commerce to healthcare. While the advantages of using chatbots are enormous, chatbots also introduce certain pitfalls. A lack of diversity among creators may result in biased responses from the chatbot. Though chatbots are widely used, not all of their security issues are satisfactorily resolved; this causes significant security issues and risks, which need immediate attention. Many chatbots are built on top of social/messaging platforms, each of which has its own set of terms and conditions governing data collection and usage. This work gives a detailed analysis of security considerations in the context of communication with bots. This chapter has the potential to spark a debate and draw attention to the issues surrounding data storage and usage by chatbots, in order to protect users.
Book
Full-text available
Parsing the Turing Test is a landmark exploration of both the philosophical and methodological issues surrounding the search for true artificial intelligence. Will computers and robots ever think and communicate the way humans do? When a computer crosses the threshold into self-consciousness, will it immediately jump into the Internet and create a World Mind? Will intelligent computers someday recognize the rather doubtful intelligence of human beings? Distinguished psychologists, computer scientists, philosophers, and programmers from around the world debate these weighty issues and, in effect, the future of the human race in this important volume. Foreword by Daniel C. Dennett.
Article
Full-text available
The second part contains the centerpiece: Turing's 1950 paper from Mind, "Computing Machinery and Intelligence," accompanied by three "ephemera": two early (1951) and difficult-to-find articles by Turing, "Intelligent Machinery, a Heretical Theory" and "Can Digital Computers Think?", and a transcript of a 1952 BBC radio interview with Turing, M. H. A. Newman, Sir Geoffrey Jefferson, and R. B. Braithwaite, "Can Automatic Calculating Machines Be Said to Think?" Shieber's presentation of the pièce de résistance (Turing 1950) devotes great attention to the sanctity of the text and is replete with scholarly paraphernalia comparing his carefully edited reprint with the original (which, by the way, is now available online, courtesy of JSTOR.org). The third, and final, part contains the immediate reactions to Turing's Mind paper as they appeared in that journal, followed by now-classic responses and some more-recent, important papers, some arranged chronologically, others logically. The first published response was Leonard Pinsky's early (1951), and satirical, "Do Machines Think about Machines Thinking?" for which Shieber offers a brief, wry introduction. Next we have a quartet consisting of Keith Gunderson's important "The Imitation Game" (1964), Richard Purtill's response ("Beating the Imitation Game," 1971), and Geoffrey Sampson's ("In Defence of Turing") and P. H. Millar's ("On the Point of the Imitation Game") 1973 replies to Purtill. Jumping ahead a couple of decades comes Robert M. French's 1990 "Subcognition and the Limits of the Turing Test." Next, in more of a logical than a chronological order, comes a trio consisting of John
Conference Paper
Full-text available
This paper presents an analysis of three major contests for machine intelligence. We conclude that a new era for Turing's test requires a fillip in the guise of a committed sponsor, not unlike DARPA, funders of the successful 2007 Urban Challenge.
Chapter
Full-text available
The Turing Test, originally configured as a game in which a human distinguishes between an unseen and unheard man and woman through a text-based conversational measure of gender, is the ultimate test for deception and hence, thinking. So Alan Turing conceived it when he introduced a machine into the game. His idea was that once a machine deceives a human judge into believing that it is the human, then that machine should be attributed with intelligence. What Turing missed is the presence of emotion in human dialogue, without expression of which an entity can appear non-human. Indeed, humans have been mistaken for machines, the confederate effect, during instantiations of the Turing Test staged in the Loebner Prizes for Artificial Intelligence. We present results from recent Loebner Prizes and two parallel conversations from the 2006 contest, in which two human judges, both native English speakers, each concomitantly interacted with a non-native-English-speaking hidden human and Jabberwacky, the 2005 and 2006 Loebner Prize bronze-prize winner for most human-like machine. We find that machines in those contests appear conversationally worse than non-native hidden humans and, as a consequence, attract a downward trend in the highest scores awarded to them by human judges in the 2004, 2005 and 2006 Loebner Prizes. Analysing the Loebner 2006 conversations, we see that a parallel could be drawn with autistics: the machine was able to broadcast but it did not inform; it talked but it did not emote. The hidden humans were easily identified through their emotional intelligence, their ability to discern the emotional state of others, and their contributions of their own ‘balloons of textual emotion’.
Conference Paper
Full-text available
One of the differences between natural conversational entities – NCE (humans) and artificial conversational entities – ACE (such as Carpenter’s Jabberwacky), is the ability the former have to constrain random output during dialogue. When humans want to participate in, and pursue conversation with each other they maintain coherent dialogue through contextual relevance, create metaphors – fusing seemingly unrelated ideas to ensure abstract points are understood, and indicate topic change at mutually acceptable junctures.
Conference Paper
Full-text available
At the heart of Turing's 1950 imitation game is the question-answer test to assess whether a machine can respond with satisfactory and sustained answers to unrestricted questions. In 1966, Weizenbaum's Eliza system made it possible for a human and a machine to communicate via text in question-and-answer sessions. Forty-five years later, in 2011, IBM Watson achieved remarkable success winning an unrestricted question-answer exhibition match against humans in Jeopardy!, a US TV quiz show. Is it now time to scale up to Harnad's Total Turing Test, combining natural language with robot audio and vision engineering?
Conference Paper
Full-text available
One feature of 'humanness' that Turing did not factor into his imitation game for machine thinking and intelligence is that some human interrogators will make mistakes, and others are easily fooled. Those that err and those susceptible to deception include 'experts', who cannot be described as unintelligent as a result of their judgement. Turing described two imitation game scenarios: i) a 3-participant test (STT) in which a human interrogator questions two hidden interlocutors simultaneously and decides which is human and which is machine; ii) a 2-participant viva voce test (VVTT) in which the interrogator questions one hidden interlocutor and decides whether it is human or machine. In this paper we summarise errors from 120 STTs conducted by Reading University, UK. 276 Turing tests have been conducted so far by the authors (96 tests at Reading University, 2008; 180 tests at Bletchley Park, 2012). Human interrogators and hidden humans in those tests included members of the public, experts, males/females, adults/teenagers, and native/non-native UK English speakers. What we focus on here are the findings from the 120 machine-human simultaneous Turing tests and four types of errors that occurred in them (see Table 1).
Chapter
Full-text available
Describing two ways to practicalise his question-answer game to examine machine thinking in 1950, Turing believed one day a machine would succeed in providing satisfactory and sustained answers to any questions. In 2011 IBM Watson achieved success competing against two human champions in a televised general knowledge quiz show. Though he regarded the process of thinking mysterious, Turing believed building a machine to think might help us to understand how it is we humans think.
Article
Full-text available
This paper investigates the linguistic worth of current ‘chatbot’ programs – software programs which attempt to hold a conversation, or interact, in English – as a precursor to their potential as an ESL (English as a second language) learning resource. After some initial background to the development of chatbots, and a discussion of the Loebner Prize Contest for the most ‘human’ chatbot (the ‘Turing Test’), the paper describes an in-depth study evaluating the linguistic accuracy of a number of chatbots available online. Since the ultimate purpose of the current study concerns chatbots' potential with ESL learners, the analysis of language embraces not only an examination of features of language from a native-speaker's perspective (the focus of the Turing Test), but also aspects of language from a second-language-user's perspective. Analyses indicate that while the winner of the 2005 Loebner Prize is the most able chatbot linguistically, it may not necessarily be the chatbot most suited to ESL learners. The paper concludes that while substantial progress has been made in terms of chatbots' language-handling, a robust ESL ‘conversation practice machine’ (Atwell, 1999) is still some way off being a reality.
Conference Paper
Full-text available
The geography of a modern Eliza provides an illusion of natural language understanding through sophisticated techniques capturing context and interaction-based learning. This can be seen in the best of the hundred-plus programmes entered into the Chatterbox Challenge 2005 (CBC 2005), an alternative to Loebner's contest for artificial intelligence, Turing's measure of intelligence through conversation. These artificial conversational entities (ACE) are able to maintain lengthy textual dialogues. This paper presents the experience of the author as one of the judges in CBC 2005. Not 'bathed in language experience' like their human counterparts, Eliza's descendants respond at times humorously and with knowledge, but they lack metaphor use, the very feature of everyday human discourse. However, ACE find success as virtual e-assistants in single-topic domains. Swedish furniture company IKEA deploys the animated avatar Anna, a virtual customer service agent, in twenty thousand conversations daily across eight country sites in six languages, including English. Anna provides IKEA's customers with an alternative and more natural query system than keyword-only search to find products and prices. The author's findings show that modern Elizas appear to have come a long way from their ancestor, but understanding remains in the head of the human user. Until metaphor design is included, ACE will remain machine-like.
Article
Full-text available
Chatterbox Challenge is an annual web-based contest for artificial conversational systems (ACE). The 2010 instantiation was the tenth consecutive contest, held between March and June in the 60th year following the publication of Alan Turing's influential disquisition 'Computing Machinery and Intelligence'. Loosely based on Turing's viva voce interrogator-hidden witness imitation game, a thought experiment to ascertain a machine's capacity to respond satisfactorily to unrestricted questions, the contest provides a platform for technology comparison and evaluation. This paper provides an insight into emotion content in the entries since the 2005 Chatterbox Challenge. The authors find that synthetic textual systems, none of which are backed by academic or industry funding, are, on the whole and more than half a century since Weizenbaum's natural language understanding experiment, little further than Eliza in terms of expressing emotion in dialogue. This may be a failure on the part of the academic AI community for ignoring the Turing test as an engineering challenge.
Article
Because users have high expectations that virtual assistants will interact with them on a human level, the rules of social interaction potentially apply more than the emotion cues associated with the system's responses. To this end, the social psychological theories of control, reactance, schemata, and social comparison suggest strategies to transform the dialogue with a virtual assistant into an encounter with a consistent and cohesive personality, in effect using the mind-set of the user to the advantage of the conversation, provoking the user into reacting predictably while at the same time preserving the user's illusion of control. These methods are presented in an online system: Elbot.com.
Article
Whilst common sense knowledge has been well researched in terms of intelligence and (in particular) artificial intelligence, specific, factual knowledge also plays a critical part in practice. When it comes to testing for intelligence, testing for factual knowledge is, in every-day life, frequently used as a front line tool. This paper presents new results which were the outcome of a series of practical Turing tests held on 23rd June 2012 at Bletchley Park, England. The focus of this paper is on the employment of specific knowledge testing by interrogators. Of interest are prejudiced assumptions made by interrogators as to what they believe should be widely known and subsequently the conclusions drawn if an entity does or does not appear to know a particular fact known to the interrogator. The paper is not at all about the performance of machines or hidden humans but rather the strategies based on assumptions of Turing Test interrogators. Full, unedited transcripts from the tests are shown for the reader as working examples. As a result, it might be possible to draw critical conclusions with regard to the nature of human concepts of intelligence, in terms of the role played by specific, factual knowledge in our understanding of intelligence, whether this is exhibited by a human or a machine. This is specifically intended as a position paper, firstly by claiming that practicalising Turing’s test is a useful exercise throwing light on how we humans think, and secondly, by taking a potentially controversial stance, because some interrogators adopt a solipsist questioning style of hidden entities with a view that it is a thinking intelligent human if it thinks like them and knows what they know. The paper is aimed at opening discussion with regard to the different aspects considered.
Article
This paper presents some important issues on misidentification of human interlocutors in text-based communication during practical Turing tests. The study presents transcripts in which human judges succumbed to the confederate effect, misidentifying hidden human foils for machines, and an attempt is made to assess the reasons for this. The practical Turing tests in question were held on 23rd June 2012 at Bletchley Park, England. A selection of actual full transcripts from the tests is shown and an analysis is given in each case. As a result of these tests, conclusions are drawn with regard to the sort of strategies which can lead to erroneous conclusions when one is involved as an interrogator. Such results also serve to indicate conversational directions to avoid for those machine designers who wish to create a conversational entity that performs well on the Turing Test.
Article
See: http://link.springer.com/article/10.1007/s00146-013-0534-3. Interpretation of utterances affects an interrogator's determination of human from machine during live Turing tests. Here we consider transcripts realised as a result of a series of practical Turing tests held on 23rd June 2012 at Bletchley Park, England. The focus in this paper is to consider the effects of lying and truth-telling on the human judges by the hidden entities, whether human or machine. Turing test transcripts provide a glimpse into short text communication of the type that occurs in emails: how does the reader determine truth from the content of a stranger's textual message? Different types of lying in the conversations are explored, and the judge's attribution of human or machine is investigated in each test.
Article
A series of imitation games involving 3-participant (simultaneous comparison of two hidden entities) and 2-participant (direct interrogation of a hidden entity) were conducted at Bletchley Park on the 100th anniversary of Alan Turing’s birth: 23 June 2012. From the ongoing analysis of over 150 games involving (expert and non-expert, males and females, adults and child) judges, machines and hidden humans (foils for the machines), we present six particular conversations that took place between human judges and a hidden entity that produced unexpected results. From this sample we focus on features of Turing’s machine intelligence test that the mathematician/code breaker did not consider in his examination for machine thinking: the subjective nature of attributing intelligence to another mind.
Article
In this paper we consider transcripts which originated from a practical series of Turing’s Imitation Game which was held on 23rd June 2012 at Bletchley Park, England. In some cases the tests involved a 3-participant simultaneous comparison of two hidden entities whereas others were the result of a direct 2-participant interaction. Each of the transcripts considered here resulted in a human interrogator being fooled, by a machine, into concluding that they had been conversing with a human. Particular features of the conversation are highlighted, successful ploys on the part of each machine discussed and likely reasons for the interrogator being fooled are considered. Subsequent feedback from the interrogators involved is also included.
Article
Standard linguistic theory assumes that meanings are attached to linguistic artefacts by some semantic component during their production yet prior to their material realization, and it is those meanings that are decoded by the recipient/interpreter of the realized signs according to the same mental machinery/semantic component inside their brain. Rather than theorizing a single sign that is encoded, materialized, transmitted and then decoded, integrationism assumes that signs are created not only by speakers/writers but also by hearers/readers. This paper looks at linguistic artefacts that are not created to mean anything but to do something. The successful accomplishment of those actions depends entirely upon the recipient recreating them as meaningful linguistic signs, no matter what the meaning assigned to them. Examples of such linguistic artefacts to be examined are simulated language as a product of “user-friendly” software, whether programmed as potential aids for human use of technical systems (e.g. Google’s “Did you mean…?” and machine translation) or as deceptions (spam and texts inserted into emails to “fool” anti-spam programs), and utterances whose meanings have no relation to what the standard theory regards as lexical meaning nor to interpretative rules (e.g. glossolalia). Of particular interest are the hypothetical language of examples in linguistic theory (Bill is a farmer but John is not; Colorless green ideas sleep furiously) in which nothing is meant other than “this text represents a certain structure,” and the reproduction of texts that would be meaningful in one context but whose sole meaning is reduced to technical manipulation.
In all of these cases a linguistic sign is produced (or its production programmed) by someone intent on accomplishing a certain end not through the recipient’s comprehension of a sign and its lexical or discourse meaning but by the human recipient’s creation of a linguistic sign on the one hand, or software unable to distinguish meaningful signs from meaningless textual strings on the other.
Chapter
This paper is a technical presentation of Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) and Artificial Intelligence Markup Language (AIML), set in context by historical and philosophical ruminations on human consciousness. A.L.I.C.E., the first AIML-based personality program, won the Loebner Prize as “the most human computer” at the annual Turing Test contests in 2000, 2001, and 2004. The program, and the organization that develops it, is a product of the world of free software. More than 500 volunteers from around the world have contributed to her development. This paper describes the history of A.L.I.C.E. and AIML free software since 1995, noting that the theme and strategy of deception and pretense upon which AIML is based can be traced through the history of Artificial Intelligence research. This paper goes on to show how to use AIML to create robot personalities like A.L.I.C.E. that pretend to be intelligent and self-aware. The paper winds up with a survey of some of the philosophical literature on the question of consciousness. We consider Searle’s Chinese Room, and the view that natural language understanding by a computer is impossible. We note that the proposition “consciousness is an illusion” may be undermined by the paradoxes it apparently implies. We conclude that A.L.I.C.E. does pass the Turing Test, at least, to paraphrase Abraham Lincoln, for some of the people some of the time. Keywords: Artificial Intelligence, natural language, chat robot, bot, Artificial Intelligence Markup Language (AIML), markup languages, XML, HTML, philosophy of mind, consciousness, dualism, behaviorism, recursion, stimulus-response, Turing Test, Loebner Prize, free software, open source, A.L.I.C.E., Artificial Linguistic Internet Computer Entity, deception, targeting
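The pattern-template mechanism at the core of AIML can be sketched roughly as follows. The XML snippet and matcher here are a simplified, hypothetical illustration (real AIML adds `*` wildcards, `<srai>` recursion and richer normalisation), not the A.L.I.C.E. implementation:

```python
import xml.etree.ElementTree as ET

# A toy AIML-style category set: each <category> pairs an input
# <pattern> with a canned <template> reply. Patterns are stored
# upper-case, as in AIML's normalised form.
AIML = """
<aiml>
  <category>
    <pattern>HELLO</pattern>
    <template>Hi there! I am a chat robot.</template>
  </category>
  <category>
    <pattern>WHAT ARE YOU</pattern>
    <template>I am an AIML-style personality program.</template>
  </category>
</aiml>
"""

def load_categories(source: str) -> dict:
    """Parse the XML and map each pattern to its template."""
    root = ET.fromstring(source)
    return {c.find("pattern").text: c.find("template").text
            for c in root.iter("category")}

def reply(brain: dict, user_input: str) -> str:
    # Normalise the input (upper-case, trailing punctuation stripped)
    # before looking it up; unmatched inputs get a default response.
    key = user_input.upper().strip(" ?!.")
    return brain.get(key, "I have no answer for that.")

brain = load_categories(AIML)
print(reply(brain, "hello"))          # -> Hi there! I am a chat robot.
print(reply(brain, "What are you?"))  # -> I am an AIML-style personality program.
```

The "deception and pretense" strategy the paper describes lives entirely in the templates: the program has no model of what it says, only a lookup from stimulus to response.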
Chapter
A Turing Test like the Loebner Prize Contest draws on existing computer programs as participants. Though some entries are written just for the contest, there has been no standard interface for allowing the judges to interact with the programs. While Dr. Loebner has more recently created his own standard, there are many Web-based programs that are not easily adapted to it. The interface problem is indicative of the transition being made everywhere from older, console-based, mainframe-like software with simple interfaces to new Web-based applications which require a Web browser and a graphical user interface. The Turing Hub interface attempts to provide uniformity and facilitation of this testing interface. Keywords: Loebner Prize, Turing Test, Turing Hub
Chapter
I have entered the Loebner Prize five times, winning the “most humanlike program” category in 1996 with a surly ELIZA-clone named HeX, but failed to repeat the performance in subsequent years with more sophisticated techniques. Whether this is indicative of an unanticipated improvement in “conversation simulation” technology, or whether it highlights the strengths of ELIZA-style trickery, is left as an exercise for the reader. In 2000, I was invited to assume the role of Chief Scientist at Artificial Intelligence Ltd. (Ai) on a project inspired by the advice given by Alan Turing in the final section of his classic paper – our quest was to build a “child machine” that could learn and use language from scratch. In this chapter, I will discuss both of these experiences, presenting my thoughts regarding the Chinese Room argument and Artificial Intelligence (AI) in between. Keywords: Loebner Prize, Turing Test, Markov Model, information theory, Chinese Room, child machine, machine learning, Artificial Intelligence
Chapter
I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think”. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.
Article
A case of artificial paranoia has been synthesized in the form of a computer simulation model. The model and its embodied theory are briefly described. Several excerpts from interviews with the model are presented to illustrate its paranoid input-output behavior. Evaluation of the success of the simulation will depend upon indistinguishability tests.
Article
There is an extensive body of work on Intelligent Tutoring Systems: computer environments for education, teaching and training that adapt to the needs of the individual learner. Work on personalisation and adaptivity has included research into allowing the student user to enhance the system’s adaptivity by improving the accuracy of the underlying learner model. Open Learner Modelling, where the system’s model of the user’s knowledge is revealed to the user, has been proposed to support student reflection on their learning. Increased accuracy of the learner model can be obtained by the student and system jointly negotiating the learner model. We present the initial investigations into a system to allow people to negotiate the model of their understanding of a topic in natural language. This paper discusses the development and capabilities of both conversational agents (or chatbots) and Intelligent Tutoring Systems, in particular Open Learner Modelling. We describe a Wizard-of-Oz experiment to investigate the feasibility of using a chatbot to support negotiation, and conclude that a fusion of the two fields can lead to developing negotiation techniques for chatbots and the enhancement of the Open Learner Model. This technology, if successful, could have widespread application in schools, universities and other training scenarios.
Article
Based on insufficient evidence, and inadequate research, Floridi and his students report inaccuracies and draw false conclusions in their Minds and Machines evaluation, which this paper aims to clarify. Acting as invited judges, Floridi et al. participated in nine, of the ninety-six, Turing tests staged in the finals of the 18th Loebner Prize for Artificial Intelligence in October 2008. From the transcripts it appears that they used power over solidarity as an interrogation technique. As a result, they were fooled on several occasions into believing that a machine was a human and that a human was a machine. Worse still, they did not realise their mistake. This resulted in a combined correct identification rate of less than 56%. In their paper they assumed that they had made correct identifications when they in fact had been incorrect.
Article
Purpose – The purpose of this paper is to consider Turing's two tests for machine intelligence: the parallel‐paired, three‐participants game presented in his 1950 paper, and the “jury‐service” one‐to‐one measure described two years later in a radio broadcast. Both versions were instantiated in practical Turing tests during the 18th Loebner Prize for artificial intelligence hosted at the University of Reading, UK, in October 2008. This involved jury‐service tests in the preliminary phase and parallel‐paired in the final phase. Design/methodology/approach – Almost 100 test results from the final have been evaluated and this paper reports some intriguing nuances which arose as a result of the unique contest. Findings – In the 2008 competition, Turing's 30 per cent pass rate is not achieved by any machine in the parallel‐paired tests but Turing's modified prediction: “at least in a hundred years time” is remembered. Originality/value – The paper presents actual responses from “modern Elizas” to human interrogators during contest dialogues that show considerable improvement in artificial conversational entities (ACE). Unlike their ancestor – Weizenbaum's natural language understanding system – ACE are now able to recall, share information and disclose personal interests.
Article
A computer simulation of paranoid processes in the form of a dialogue algorithm was subjected to a validation study using indistinguishability tests. Judges rated degrees of paranoia present in initial psychiatric interviews of both paranoid patients and of versions of the paranoid model. Judges also attempted to distinguish teletyped interviews with real patients from interviews with the simulation model. The statistical results indicate a satisfactory degree of resemblance between the two groups of interviews. It is concluded that the model provides a successful simulation of naturally occurring paranoid processes as measured by these tests.
Article
Eliza is a program operating within the MAC time-sharing system at MIT which makes certain kinds of natural language conversation between man and computer possible. Input sentences are analyzed on the basis of decomposition rules which are triggered by key words appearing in the input text. Responses are generated by reassembly rules associated with selected decomposition rules. The fundamental technical problems with which Eliza is concerned are: (1) the identification of key words, (2) the discovery of minimal context, (3) the choice of appropriate transformations, (4) generation of responses in the absence of key words, and (5) the provision of an editing capability for Eliza scripts. A discussion of some psychological issues relevant to the Eliza approach as well as of future developments concludes the paper.
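The keyword/decomposition/reassembly cycle described above can be sketched in a few lines. The rules below are invented for illustration, not taken from Weizenbaum's DOCTOR script:

```python
import random
import re

# Each rule pairs a decomposition pattern (keyword spotting plus a
# capture of the surrounding fragment) with reassembly templates that
# slot the captured fragment back into a canned reply.
RULES = [
    (re.compile(r".*\bI need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r".*\bmy (mother|father)\b.*", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
]

# Responses used when no keyword fires (problem (4) in the abstract).
DEFAULTS = ["Please go on.", "I see.", "Can you elaborate on that?"]

def respond(sentence: str) -> str:
    """Apply the first matching decomposition rule; else a default."""
    for pattern, templates in RULES:
        match = pattern.match(sentence)
        if match:
            # Reassembly: fill the template with the captured fragment.
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I need a holiday"))  # e.g. "Why do you need a holiday?"
```

Note that the program never represents what "a holiday" means; the appearance of understanding comes entirely from echoing the user's own words inside a template.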
Article
This article examines the recent trend in automated electronic commerce to animate avatars and other electronic entities in order to build trust relationships with consumers. The analysis begins with a discussion of the law of contract as it applies in the context of automation and an investigation of the technologies that automate electronic commerce. Tracing the architectures of human-computer interaction (HCI) back to their conceptual origins in the field of artificial intelligence (AI), the author then exposes some of the techniques used to deceive consumers. In a disturbing trend styled the californication of commerce, electronic entities are used to simulate familiarity and companionship in order to create the illusion of friendship. Such illusions can be exploited to misdirect consumers, the net effect of which is to diminish consumers' ability to make informed choices and to undermine the consent principle in data protection and privacy law. The author questions whether our lawmakers ought to respond by enacting laws more robust than those stipulated in today's typical electronic commerce legislation which, for the most part, tend to be limited to issues of form and formation. The author concludes by foreshadowing an important set of concerns lurking in the penumbras of our near future, and demonstrating that some persons are in need of legal protection right now - protection not from intelligent machine entities but, rather, from the manner in which some people are using them.
Article
Five physicians with psychiatric training and experience reached five correct and five incorrect conclusions in deciding whether they were interviewing, by teletype, a paranoid patient or a computer simulation of paranoia. This is the strongest indistinguishability test yet satisfied by a computer program, but weaknesses of the test limit its value in developing computer models of psychopathology.
Block, N. (1981). Psychologism and Behaviorism. In S. Shieber (Ed.), The Turing Test: Verbal Behavior as the Hallmark of Intelligence. MIT Press, pp. 229-266.
Channel 4 (2013). How to Build a Bionic Man. Channel 4 TV. Retrieved from: http://www.channel4.com/programmes/how-to-build-a-bionic-man/episode-guide/series-1/episode-1
Chatterbox Challenge (2005). History. Retrieved from: http://www.chatterboxchallenge.com/ 13.10.15
Coniam, D. (2008). Evaluating the Language Resources of Chatbots for their Potential in English as a Second Language. European Association for Computer Assisted Language Learning. ReCALL 20(1), pp. 89-116. DOI: 10.1017/S0958344008000815
Copple, K. (2008). Bringing AI to Life: Putting Today's Tools and Resources to Work. In (Eds) R.
Existor (2012). Skyfall Game AI. Retrieved from: http://www.existor.com/ai-skyfall-game-AI 22.3.13
Feilden, T. (2012). BBC Radio 4 Today: Saturday 23 June 2012, 08.53am: Alan Turing's Life Achievements. http://news.bbc.co.uk/today/hi/today/newsid_9731000/9731205.stm 22.3.13
Link, D. (2013). There Must be an Angel: on the Beginnings of the Arithmetic of Rays. The Rutherford Journal, Volume 5 (forthcoming). http://www.rutherfordjournal.org/
Loebner Prize (2014). Home of the Loebner Prize for Artificial Intelligence.
Loebner Prize (2007). 17th annual prize for artificial intelligence. Retrieved from:
Microsoft (2014). Cortana: your clever new personal assistant. http://windows.microsoft.com/en-gb/windows-10/getstarted-what-is-cortana
Shieber, S. M. (2004). The Turing Test: Verbal Behavior as the Hallmark of Intelligence. MIT Press: Cambridge, Massachusetts, US.
Shah, H. (2013). Conversation, Deception and Intelligence: Turing's Question-Answer Game. In S.B. Cooper & J. van Leeuwen (Eds.), Alan Turing: His Work and Impact, Part III Building a Brain: Intelligent Machines, Practice and Theory, pp. 614-620. Elsevier: Oxford.
Shah, H., and Warwick, K. (2010c). From the Buzzing in Turing's Head to Machine Intelligence Contests. In Towards a Comprehensive Intelligence Test (TCIT): Reconsidering the Turing Test for the 21st Century symposium, AISB 2010 Convention, De Montfort University, 29 March - 1 April.
Shah, H., and Warwick, K. (2009). Emotion in the Turing Test: A Downward Trend for Machines in Recent Loebner Prizes. Chapter XVII (Section V) in J. Vallverdú & D. Casacuberta (Eds.), Handbook of
Siri (2013). Apple: Your wish is its command. http://www.apple.com/uk/ios/siri/, 29.5.13.
Spoony (2013). iFree everfriend pocket assistant for android phones. http://www.ifree.com/en/activities/apps/everfriends, 29.5.13.
TuringHub (2013). JFRED Chat Server. Retrieved from: http://testing.turinghub.com/ 9.3.13; 22.47
Ultra Hal (2013). Zabaware. Retrieved from: http://www.zabaware.com/home.html 9.3.13; 22.49