Chapter

The Anatomy of A.L.I.C.E.

Author: Richard S. Wallace

Abstract

This paper is a technical presentation of the Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) and Artificial Intelligence Markup Language (AIML), set in context by historical and philosophical ruminations on human consciousness. A.L.I.C.E., the first AIML-based personality program, won the Loebner Prize as "the most human computer" at the annual Turing Test contests in 2000, 2001, and 2004. The program, and the organization that develops it, is a product of the world of free software. More than 500 volunteers from around the world have contributed to her development. This paper describes the history of A.L.I.C.E. and AIML free software since 1995, noting that the theme and strategy of deception and pretense upon which AIML is based can be traced through the history of Artificial Intelligence research. The paper goes on to show how to use AIML to create robot personalities like A.L.I.C.E. that pretend to be intelligent and self-aware. It winds up with a survey of some of the philosophical literature on the question of consciousness. We consider Searle's Chinese Room and the view that natural language understanding by a computer is impossible. We note that the proposition "consciousness is an illusion" may be undermined by the paradoxes it apparently implies. We conclude that A.L.I.C.E. does pass the Turing Test, at least, to paraphrase Abraham Lincoln, for some of the people some of the time.

Keywords: Artificial Intelligence, natural language, chat robot, bot, Artificial Intelligence Markup Language (AIML), markup languages, XML, HTML, philosophy of mind, consciousness, dualism, behaviorism, recursion, stimulus-response, Turing Test, Loebner Prize, free software, open source, A.L.I.C.E., Artificial Linguistic Internet Computer Entity, deception, targeting
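To make the stimulus-response mechanism concrete, the sketch below runs a two-category AIML brain through the community PyAIML (`aiml`) interpreter. The categories are invented for illustration; they are not drawn from A.L.I.C.E.'s actual knowledge base.

```python
import aiml  # PyAIML, a community Python interpreter for AIML

# A minimal AIML "brain": one literal category plus one <srai> reduction,
# the two building blocks of the stimulus-response model described above.
BRAIN = """<?xml version="1.0" encoding="UTF-8"?>
<aiml version="1.0">
  <category>
    <pattern>WHO ARE YOU</pattern>
    <template>I am a chat robot written in AIML.</template>
  </category>
  <category>
    <pattern>TELL ME WHO YOU ARE</pattern>
    <!-- reduce this input to the canonical pattern above -->
    <template><srai>WHO ARE YOU</srai></template>
  </category>
</aiml>"""

with open("brain.aiml", "w") as f:
    f.write(BRAIN)

kernel = aiml.Kernel()
kernel.learn("brain.aiml")                    # compile patterns into the matching graph
print(kernel.respond("Tell me who you are"))  # -> I am a chat robot written in AIML.
```

The `<srai>` tag reduces many surface forms of an input to one canonical pattern, which is how a modest set of templates can cover a large space of user inputs.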




... By the end of the last century, many scholars were already researching intelligent customer service. In 1950, Turing [1] proposed the famous Turing test [2] in his paper "Computing Machinery and Intelligence", as shown in Figure 1. This standard remains the ultimate goal of all natural language research. ...
... [Figure 1: The Turing test. C asks A and B whether they are human [2].] ...
... However, different fields may require corresponding training of the intelligent customer service agent before it can pass the Turing test. ...
Article
Full-text available
This paper proposes using a web crawler to organize website content as a dialogue tree in some domains. We build an intelligent customer service agent based on this dialogue tree for general usage. The encoder-decoder architecture Seq2Seq is used to understand natural language and is then modified with a bi-directional LSTM to increase accuracy on polysemy cases. An attention mechanism is added to the decoder to counter the drop in accuracy as sentences grow longer. We conducted four experiments. The first is an ablation experiment demonstrating that Seq2Seq + bi-directional LSTM + attention mechanism is superior to LSTM, Seq2Seq, and Seq2Seq + attention mechanism in natural language processing. Using an open-source Chinese corpus for testing, the accuracy was 82.1%, 63.4%, 69.2%, and 76.1%, respectively. The second experiment uses knowledge of the target domain to ask questions. Five thousand records from the Taiwan Water Supply Company were used as the target training data, and a thousand questions that differed from the training data but related to water were used for testing. The accuracy of RasaNLU and of this study was 86.4% and 87.1%, respectively. The third experiment uses knowledge from non-target domains to ask questions and compares answers from RasaNLU with the proposed neural network model. Five thousand questions were extracted as the training data, including chat databases from eight public sources such as Weibo, Tieba, Douban, and other well-known social networking sites in mainland China, as well as PTT in Taiwan. Then, 1,000 questions from the same corpus, differing from the training data, were extracted for testing. The accuracy of this study was 83.2%, which is far better than RasaNLU. It is confirmed that the proposed model is more accurate in the general field. The last experiment compares this study with voice assistants like Xiao Ai, Google Assistant, Siri, and Samsung Bixby. Although this study cannot answer vague questions accurately, it is more accurate in the trained application fields.
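For orientation, here is a minimal PyTorch sketch of the architecture family this abstract describes: a bi-directional LSTM encoder and a single decoder step with Luong-style dot-product attention. All dimensions and the toy usage are arbitrary assumptions; the paper's actual implementation is not shown here.

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, src):                       # src: (batch, src_len) token ids
        out, _ = self.lstm(self.embed(src))       # out: (batch, src_len, 2*hid_dim)
        return out

class AttnDecoderStep(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.LSTMCell(emb_dim + 2 * hid_dim, hid_dim)
        self.proj = nn.Linear(hid_dim, 2 * hid_dim)   # maps decoder state into encoder space
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, y_prev, state, enc_out):    # y_prev: (batch,) previous token ids
        h, c = state
        scores = torch.bmm(enc_out, self.proj(h).unsqueeze(2)).squeeze(2)  # (batch, src_len)
        weights = torch.softmax(scores, dim=1)                             # attention weights
        context = torch.bmm(weights.unsqueeze(1), enc_out).squeeze(1)      # (batch, 2*hid_dim)
        h, c = self.cell(torch.cat([self.embed(y_prev), context], dim=1), (h, c))
        return self.out(h), (h, c)                 # next-token logits and new state

# toy usage with an arbitrary vocabulary of 5000 tokens
enc, dec = BiLSTMEncoder(5000), AttnDecoderStep(5000)
enc_out = enc(torch.randint(0, 5000, (2, 12)))     # batch of 2 source sentences
h = torch.zeros(2, 256); c = torch.zeros(2, 256)
logits, (h, c) = dec(torch.tensor([1, 1]), (h, c), enc_out)
```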
... The first leap in chatbot technology came with the introduction of the Artificial Linguistic Internet Computer Entity (ALICE) in 1995, winning the title of "most human computer" at the annual Turing Test contests in 2000, 2001, and 2004 (Wallace, 2009). ALICE used a simple pattern-matching algorithm with its whole intelligence specified via the Artificial Intelligence Markup Language (AIML), an XML-based markup language which allows developers to alter the facts that are known to ALICE (Wallace, 2009; Marietto, 2013). ...
... For chitchat, we use a pipeline based on machine translation into English and back and the AIML pattern-matching chatbot engine (Wallace, 2009). We opted to use a rule-based chatbot over a neural trainable system to retain complete control of the responses. ...
... We previously experimented with the BlenderBot neural chatbot (Roller et al., 2021) and found that it would often steer the conversation towards inappropriate topics. Having decided on AIML, we customised the patterns of the existing ALICE chatbot (Wallace, 2009) for our purposes. We opted for pattern matching in English as this avoids the morphological complexities of Czech, and we used machine translation (Popel et al., 2020) to achieve this. ...
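A rough sketch of that round-trip pipeline, assuming the community PyAIML (`aiml`) package; the file name is illustrative, and the two translation helpers are identity stubs standing in for the Czech-English MT models (Popel et al., 2020).

```python
import aiml

kernel = aiml.Kernel()
kernel.learn("alice_customised.aiml")  # hypothetical file of the customised ALICE patterns

def cs_to_en(text: str) -> str:
    return text  # stub: stands in for the Czech -> English MT model

def en_to_cs(text: str) -> str:
    return text  # stub: stands in for the English -> Czech MT model

def chitchat(czech_input: str) -> str:
    # translate into English, match against the English AIML patterns, translate back
    return en_to_cs(kernel.respond(cs_to_en(czech_input)))
```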
Article
AI in education is a topic that has been researched for the last 70 years. However, the last two years have seen very significant changes, relating to the introduction of OpenAI's ChatGPT chatbot in November 2022. The GPT (Generative Pre-trained Transformer) language model has dramatically influenced how the public approaches artificial intelligence. For many, generative language models have become synonymous with AI and have come to be uncritically viewed as a universal source of answers to most questions. However, it soon became apparent that even generative language models have their limits. Among the main problems that emerged was hallucination (providing answers containing false or misleading information), which is to be expected in all language models. The main problem with hallucination is that this information is difficult to distinguish from other information, and AI language models are very persuasive in presenting it. The risks of this phenomenon are much more substantial when using language models to support learning, where the learner cannot distinguish correct information from incorrect information. The proposed paper focuses on AI hallucination in mathematics education. It first shows how AI chatbots hallucinate in mathematics and then presents one possible solution to counter this hallucination. The presented solution was created for the AI chatbot Edu-AI and designed to tutor students in mathematics. Usually, the problem is approached so that the system verifies the correctness of the output offered by the chatbot. Within Edu-AI, checking responses is not implemented, but checking inputs is. If an input containing a factual query is recorded, it is redirected, and the answer is traced to authorised knowledge sources and study materials. If a relevant answer cannot be traced in these sources, a redirect to a natural person who will address the question is offered. In addition to describing the technical solution, the article includes concrete examples of how the system works. This solution has been developed for the educational domain but applies to all domains where users must be provided with relevant information.
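A toy sketch of that input-checking design: factual queries are intercepted and traced to authorised sources, with a human fallback, while everything else goes to the ordinary tutoring dialogue. The detector, source store, and reply function are invented placeholders, not Edu-AI's actual components.

```python
def is_factual_query(text: str) -> bool:
    # placeholder detector; Edu-AI's real input classifier is not described here
    return any(cue in text.lower() for cue in ("what is", "define", "formula"))

def tutor_reply(text: str) -> str:
    return "Let's work through that step by step."  # placeholder for the chatbot reply

def answer(text: str, sources: dict) -> str:
    if is_factual_query(text):
        for topic, entry in sources.items():
            if topic in text.lower():
                return entry                              # traced to an authorised source
        return "I'll forward your question to a tutor."   # no source found -> human fallback
    return tutor_reply(text)                              # ordinary tutoring dialogue

sources = {"area of a circle": "A = πr² (see the course textbook, ch. 4)."}
print(answer("What is the formula for the area of a circle?", sources))
```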
... After ALICE (Artificial Linguistic Internet Computer Entity) [33], the real breakthrough for chatbots began with the advent of generative pre-trained transformers (GPTs) in the mid-2010s [6]. GPTs enabled chatbots to generate more human-like text and understand context with greater accuracy. ...
Article
Full-text available
The rapid emergence of infectious disease outbreaks has underscored the urgent need for effective communication tools to manage public health crises. Artificial Intelligence (AI)-based chatbots have become increasingly important in these situations, serving as critical resources to provide immediate and reliable information. This review examines the role of AI-based chatbots in public health emergencies, particularly during infectious disease outbreaks. By providing real-time responses to public inquiries, these chatbots help disseminate accurate information, correct misinformation, and reduce public anxiety. Furthermore, AI chatbots play a vital role in supporting healthcare systems by triaging inquiries, offering guidance on symptoms and preventive measures, and directing users to appropriate health services. This not only enhances public access to critical information but also helps alleviate the workload of healthcare professionals, allowing them to focus on more complex tasks. However, the implementation of AI-based chatbots is not without challenges. Issues such as the accuracy of information, user trust, and ethical considerations regarding data privacy are critical factors that need to be addressed to optimize their effectiveness. Additionally, the adaptability of these chatbots to rapidly evolving health scenarios is essential for their sustained relevance. Despite these challenges, the potential of AI-driven chatbots to transform public health communication during emergencies is significant. This review highlights the importance of continuous development and the integration of AI chatbots into public health strategies to enhance preparedness and response efforts during infectious disease outbreaks. Their role in providing accessible, accurate, and timely information makes them indispensable tools in modern public health emergency management.
... The 1990s saw notable evolution in conversational agents, with the development of the Loebner Prize-winning systems A.L.I.C.E., Albert One, and Elbot. A.L.I.C.E., in particular, was noteworthy for being the result of a collaboration among 500 volunteer developers, whose discussions captured principles that have remained best practice in conversational AI (Wallace 2009). Their influence persists even into the 2020s, when most of our interaction with conversational agents occurs in commercial settings and with prepackaged chatbots. ...
Article
Artificial intelligence (AI)-related technologies available to consumers have been rapidly advancing but without parallel insight into the social process of their design. Building AI tools to converse with humans and support our lives remains a significant challenge for which consistency and codified standards are lacking. Following a tradition in science and technology studies of shining light on design and technology development as negotiated cultural processes, this article presents findings from ongoing interviews and fieldwork exploring the collaboration between engineers and user experience (UX) designers collaborating on conversational AI systems. To balance ethnographic data from a Tokyo-based conversational AI startup, interviews ( n = 20) also include an international sample of people working for and with the world's largest technology companies. I describe how usability determinations unfold through daily inter-office negotiation and suggest that UX professionals in AI production primarily engage in two kinds of dialog: The first is human-centric and focused on the emotional needs and material conditions of human-computer interaction. The second is a technical dialog grounded in the parameters of what code “can do” (or what an individual engineer understands to be possible) and bridges the humanistic and technical perspectives necessary for AI system design.
... In the 1990s, chatbots such as A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) and Jabberwacky were developed (Wallace, 2009). These chatbots used NLP techniques to generate responses to user inputs, and they learned from user interactions and improved their responses over time. ...
Article
Full-text available
The progress in artificial intelligence (AI) technology has led to an increased use of automatic conversation programs in language learning. This study aimed to examine the effects of AI chatbots on the communicative writing skills of Thai university students and investigate their self-regulated learning strategies while participating in this study. Twenty-six students participated in four weeks of chatbot-based learning as an extracurricular activity of an English writing course. Quantitative data on communicative writing skills were obtained through a pretest and a posttest, while a Self-Regulated Learning Questionnaire was implemented to obtain qualitative data. Quantitative results showed a significant difference between pretest scores and posttest scores. Qualitative findings revealed medium use of self-regulated learning strategies. Furthermore, this research article also presents insights into the use of AI chatbots in language teaching and future research.
... In those years, when IBM's (International Business Machines) computer Deep Blue defeated world chess champion Garry Kasparov, AI's capacity for strategic thinking was met with admiration in all circles. Around the same time, the chatbot ALICE used natural language processing (NLP) techniques to achieve far more sophisticated human-machine interaction than its ancestor ELIZA (Wallace, 2009). By the 2010s, AlexNet, a deep convolutional neural network architecture designed at the University of Toronto in a collaboration between Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, entered the ImageNet Large Scale Visual Recognition Challenge on 30 September 2012 and demonstrated how effective artificial neural networks could be at tasks such as image recognition (Krizhevsky et al., 2012). ...
Article
Full-text available
The aim of this study is to address the opportunities and potential risks of integrating large language models into education, particularly into measurement and evaluation processes. While artificial intelligence technologies offer many advantages, such as reducing labor and time costs, especially for teachers with crowded and heterogeneous classrooms, they also carry risks that may lead to a range of negative outcomes: problems arising from unethical use, increased social isolation as human-to-human interaction decreases, and the weakening of students' critical thinking skills as mainstream ideas gain acceptance and differences become marginalized. Beyond these general concerns, the study discusses more specifically how large language models can be used in measurement and evaluation processes for purposes such as automatic item generation and scoring, developing and reviewing measurement instruments, analyzing test results, and formative assessment, along with the positive and negative consequences such uses may bring; based on these, various recommendations are offered to stakeholders and policymakers.
... Eliza uses a template-based pattern-matching and response-selection scheme (Brandtzaeg & Følstad, 2017). Another step forward in the history of chatbots was the creation in 1995 of ALICE, the first online bot, inspired by ELIZA (Wallace, 2009). ...
Article
Full-text available
With recent developments in artificial intelligence, virtual assistants and chatbots that can respond by voice and text have come into widespread use among users and customers. This study collected tweets matching the keyword 'chatbot' in order to reveal the thematic distribution across selected areas. Four key properties of chatbots (chat/conversation, accessibility, integration, and emotion) were considered. A total of 153,093 English-language posts collected via the Twitter API were analyzed using word-association analysis, word-frequency analysis, and thematic-analysis techniques to reveal the thematic distribution. Among posts containing the term 'chatbot', the statistically significantly associated words were 'customer' in 8.9% and 'google' in 7.3% of posts; 'communication', 'link', 'engineer', 'service', and 'direct message' were among the other associated words. In the chat/conversation area, the most frequent statistically associated word was 'automation', at 15.3%. In the accessibility area, 'general' appeared in 46.7% and 'virtual' in 32.9% of posts. In the integration area, 'component use' (22.4%) and, in the emotion area, 'human' (27.3%) were statistically associated. In conclusion, considering the themes and sub-themes, it emerges that not only the technical features of chatbots but also their social and emotional aspects come to the fore.
... The use of Artificial Intelligence (AI) for human-computer conversation began with the invention of the chatbot. The development of chatbots goes far back in history, with ELIZA being the first chatbot, developed by Weizenbaum (Weizenbaum, 1966), followed in succession by other notable inventions: the Artificial Linguistic Internet Computer Entity (ALICE) developed by Wallace (Wallace, 2009), Jabberwacky by Rollo Carpenter (De Angeli et al., 2005), and Mitsuku by Steve Worswick (Abdul-Kader et al., 2015). AI is the backbone of these intelligent agents, enabling them to make decisions and respond based on human queries, the environment, and experience acquired through what is called model training. ...
... ELIZA was followed by chatbot applications like PARRY (Colby et al., 1971) and A.L.I.C.E. (Wallace, 2009) in later years. ...
Article
Full-text available
The purpose of this study was to investigate student perceptions and acceptance of a rule-based educational chatbot in higher education, employing the TAM (Technology Acceptance Model) framework. The researchers developed a rule-based chatbot for this purpose and examined the students' technology acceptance using qualitative research methods. Therefore, the study was design-based research using qualitative research methods. The participants of the study comprised 22 students studying in the Science Teaching program of Trakya University Faculty of Education and enrolled in the Modern Physics Course in the 2021–2022 fall semester. The research revealed that students' technology acceptance towards rule-based chatbots was high, even though these chatbots had technological limitations when compared to machine learning or deep learning-based ones. The students found rule-based chatbots to be useful, especially in terms of response quality, information quality, and access. Additionally, some technical details and open-source codes were also presented in the study, which can be a guide for rule-based chatbots to be designed for other areas of education.
... ALICE (Artificial Linguistic Internet Computer Entity) used heuristic pattern matching and the Artificial Intelligence Markup Language (AIML) to engage in conversations [6]. SmarterChild (2001): Available on AOL Instant Messenger and MSN Messenger, SmarterChild provided users with information retrieval services and conversational interactions [7]. Technological Evolution: The evolution of chatbots has been closely tied to advancements in Natural Language Processing (NLP), machine learning, and artificial intelligence (AI). ...
Article
Full-text available
This research paper explores the transformative potential of conversational AI and chatbots in enhancing website user experience (UX). It addresses two key research questions: how do these technologies improve user engagement and satisfaction on websites, and what are the primary challenges in implementing them and the effective solutions to those challenges? The study examines case studies across diverse industries, including e-commerce, travel, healthcare, and finance, to gain insights into the underlying technologies powering conversational AI and chatbots, such as natural language processing (NLP), natural language understanding (NLU), and machine learning techniques. The paper highlights the significant benefits of integrating conversational AI and chatbots into websites, including providing personalized assistance, streamlining complex processes, ensuring 24/7 availability, and enhancing accessibility for users. However, the study also addresses the key challenges faced in implementation, ranging from handling ambiguity and context in natural language processing to ensuring data privacy and security, managing user expectations, and the need for continuous improvement and training. The research proposes solutions to these challenges, such as employing advanced NLP algorithms, robust API management tools, and establishing user feedback loops. Ethical considerations, including data privacy and addressing biases in AI responses, are also explored, emphasizing the importance of robust encryption, adherence to data privacy regulations, and advanced access control mechanisms. The paper concludes by providing a comprehensive overview of the current state and future directions of conversational AI and chatbots in enhancing website user experience, exploring emerging trends such as multimodal interactions, contextual awareness and personalization, integration with IoT devices, and the development of emotional intelligence and empathy in chatbots.
... As an example of retrieval-based colloquial agents, we can consider A.L.I.C.E. (Wallace, 2009) developed using the Artificial Intelligence Markup Language (AIML) (Wallace, 2003). Such a language comprises a class of data objects and partially describes the behaviour of computer programs that process them via stimulus-response templates. ...
Article
Full-text available
Chatbots are conversational software applications designed to interact dialectically with users for a plethora of different purposes. Surprisingly, these colloquial agents have only recently been coupled with computational models of arguments (i.e. computational argumentation), whose aim is to formalise, in a machine-readable format, the ordinary exchange of information that characterises human communications. Chatbots may employ argumentation with different degrees and in a variety of manners. The present survey sifts through the literature to review papers concerning this kind of argumentation-based bot, drawing conclusions about the benefits and drawbacks that this approach entails in comparison with standard chatbots, while also envisaging possible future development and integration with the Transformer-based architecture and state-of-the-art Large Language models.
... ELIZA simulated conversation by using pattern matching and substitution methodology, allowing it to provide scripted responses to typed inputs. Following ELIZA, other chatbots such as PARRY, which simulated the behavior of a paranoid schizophrenic, and A.L.I.C.E., which used heuristic pattern matching, further explored the capabilities of conversational agents [7,8]. The advent of machine learning and natural language processing (NLP) in the 21st century significantly advanced chatbot technology. ...
... While ELIZA's capabilities were quite limited, and its responses were generated with little context or variation, it sparked the future development of conversation systems that could genuinely converse with humans to fulfill meaningful needs beyond entertainment (Adamopoulou & Moussiades, 2020). ELIZA's approach laid the foundation for techniques like procedural response schemes, textual pattern matching, and more open-domain discussions online, though modern research ultimately requires much more sophisticated abilities in understanding language, reasoning, empathy, and self-awareness (Wallace, 2009). ...
Article
Full-text available
Artificial intelligence (AI) capabilities in natural language processing are rapidly advancing and transforming communication practices across diverse contexts. This review provides a comprehensive analysis of AI's emerging roles in mediating and participating in direct communication to highlight key opportunities and research priorities around responsible innovation. The study surveys the extensive literature on major applications of AI, including virtual assistants, chatbots, smart replies, sentiment analysis tools, and automatic translation technologies. It also closely examines their current and potential usage and benefits across interpersonal, organizational, and societal communication. The analysis reveals these AI technologies promise enhanced efficiency, personalization, accessibility, and new modalities of expression in communication. The study found that if judiciously and ethically applied, they could incrementally improve communication speed, quality, relationships, and even therapists' capabilities over time. However, more rigorous research is still recommended to investigate longitudinal impacts on human well-being, increase accessibility for vulnerable demographic groups, advance multimodality AI systems, and develop tailored guidelines and user-centered studies to ensure ethical, socially responsible progress. Overall, this review synthesizes the current state of the science and pressing research needs in this rapidly emerging field.
... The chatbot gains experience by learning from past interactions using various algorithms. The chatbot can be trained on data, enabling it to consult the knowledge base and provide accurate results to user queries through client-side applications [6]. Using embedded transformer-based generators and discrimination models, this architecture stands out as a leading approach in generative chatbots. Various NLP techniques and models are presented. Another important area covered in the literature review is the use of deep learning and neurolinguistic models in multi-domain transfer learning scenarios [18]. ...
Preprint
Full-text available
Utilizing BERT's (Bidirectional Encoder Representations from Transformers) pre-trained language understanding capabilities, one can build a chatbot that mimics human speech through an interface. BERT, a Google creation, is a notable development in natural language processing (NLP) with impressive results on a variety of tasks. Chatbots have become essential tools for engaging people in the era of rising adoption of artificial intelligence (AI). This is especially true on mobile platforms, where chatbots can adapt to various situations and communication modes, such as text and voice. Because of its substantial pre-training on large textual datasets, BERT's bidirectional architecture enables it to understand word meanings within their surrounding context. BERT must be fine-tuned for chatbot applications by using a dataset that contains user inquiries and appropriate answers, together with annotations indicating
... The chatbot gains experience by learning from past interactions using various algorithms. The chatbot can be trained on data, enabling it to consult the knowledge base and provide accurate results to user queries through client-side applications [6]. Using embedded transformer-based generators and discrimination models, this architecture stands out as a leading approach in generative chatbots. Various NLP techniques and models are presented. Another important area covered in the literature review is the use of deep learning and neurolinguistic models in multi-domain transfer learning scenarios. ...
Article
Building a chatbot powered by BERT (Bidirectional Encoder Representations from Transformers) involves leveraging its pre-trained language understanding abilities to create an interface that mimics human conversation. Developed by Google, BERT marks a significant advancement in natural language processing (NLP), showcasing remarkable performance across a range of tasks. In the era of increasing artificial intelligence (AI) adoption, chatbots have emerged as crucial tools for engaging users, particularly on mobile platforms where they adapt to different contexts and communication modes, including text and voice. BERT's bidirectional architecture allows it to grasp word meanings within their surrounding context, thanks to its extensive pre-training on vast textual datasets. Fine-tuning BERT for chatbot applications involves training it on a dataset containing user queries paired with suitable responses, with annotations indicating response appropriateness. Tokenization, a crucial preprocessing step, involves breaking down sentences into smaller tokens to aid BERT's processing efficiency. The chatbot architecture integrates BERT, potentially incorporating additional layers to enhance context understanding and response generation. Following this, the model undergoes training using fine-tuned BERT on the prepared dataset, with adjustments made to hyperparameters for optimal performance. Evaluation of the chatbot typically involves testing it on a validation set or through interactive sessions to assess its effectiveness. Any necessary refinements to the architecture or fine-tuning process are guided by performance analysis. Ultimately, deploying the chatbot involves seamless integration into real-world platforms such as web or mobile applications, enabling smooth interaction between users and the chatbot across various scenarios, all while prioritizing originality and integrity in the development process.
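For orientation, a minimal sketch of the sentence-pair fine-tuning setup this abstract outlines, using the Hugging Face `transformers` API. The checkpoint, example pair, and binary label convention are illustrative assumptions, not the paper's actual configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=2)

queries   = ["how do i reset my password?"]
responses = ["Open Settings and choose 'Reset password'."]
labels    = torch.tensor([1])   # assumed convention: 1 = appropriate, 0 = inappropriate

# sentence-pair encoding: [CLS] query [SEP] response [SEP]
batch = tok(queries, responses, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=labels)
out.loss.backward()             # one fine-tuning step (optimizer and loop omitted)
print(out.logits.softmax(dim=-1))
```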
... The 1990s were an "AI renaissance" in terms of investment in and advancement of chatbot technology (Kietzmann & Park, 2024). During this period the chatbot ALICE (Artificial Linguistic Internet Computer Entity) used Artificial Intelligence Markup Language (AIML) to specify heuristic conversation rules and won the Loebner Prize, a competition for the computer programs considered most humanlike, in 2000, 2001, and 2004 (Wallace, 2009). ...
... An end-to-end dialogue system is implemented by a neural-based sequence-to-sequence model that can generate system responses from a dialogue history [44]. In contrast to task-oriented dialogue, the purpose of open-domain dialogue is to keep the user engaged and chat about topics that he or she is interested in [45]. Considering the overwhelmingly better performance of end-to-end systems with open-domain dialogue [46], we only used an end-to-end system for open-domain dialogue. ...
Article
Full-text available
If a dialogue system can predict the personality of a user from dialogue, it will enable the system to adapt to the user’s personality, leading to better task success and user satisfaction. In a recent study, personality prediction was performed using the Myers–Briggs Type Indicator (MBTI) personality traits with a task-oriented human–machine dialogue using an end-to-end (neural-based) system. However, it is still not clear whether such prediction is generally possible for other types of systems and user personality traits. To clarify this, we recruited 378 participants, asked them to fill out four personality questionnaires covering 25 personality traits, and had them perform three rounds of human–machine dialogue with a pipeline task-oriented dialogue system or an end-to-end task-oriented dialogue system. We also had another 186 participants do the same with an open-domain dialogue system. We then constructed BERT-based models to predict the personality traits of the participants from the dialogues. The results showed that prediction accuracy was generally better with open-domain dialogue than with task-oriented dialogue, although Extraversion (one of the Big Five personality traits) could be predicted equally well for both open-domain dialogue and pipeline task-oriented dialogue. We also examined the effect of utilizing different types of dialogue on personality prediction by conducting a cross-comparison of the models trained from the task-oriented and open-domain dialogues. As a result, we clarified that the open-domain dialogue cannot be used to predict personality traits from task-oriented dialogue, and vice versa. We further analyzed the effects of system utterances, task performance, and the round of dialogue with regard to the prediction accuracy.
Chapter
The evolution of artificial intelligence (AI) from the foundational work of Alan Turing to the advent of modern systems such as ChatGPT represents a significant technological and societal shift. Turing’s initial conceptualisation of machine intelligence constituted a pivotal foundation for the subsequent evolution of AI, particularly in regard to the Turing Test, which assesses a machine’s capacity to demonstrate human-like intelligence. This theoretical framework has had a significant impact on a number of fields, including digital marketing, by establishing the importance of machine intelligence in understanding and predicting human behaviour. The evolution of AI has been characterised by a shift between two distinct approaches: symbolic and connectionist. Symbolic AI, which was the dominant paradigm in the early years of AI research, employed logical rules for problem-solving. However, it was unable to effectively address complex tasks such as image recognition. The connectionist school, which drew inspiration from neural networks, demonstrated remarkable proficiency in pattern recognition and data learning, paving the way for significant advancements in AI applications. Despite periods of stagnation, which have been termed “AI winters”, the field has continued to evolve. The current approach combines symbolic and connectionist methods in order to create more robust AI systems. The advent of actual generative AI, particularly transformative models such as Transformers and diffusion models, has had a profound impact on digital marketing. Transformers, with their capacity to process and generate contextually accurate language, facilitate enhanced content creation and personalisation. Chatbots such as ChatGPT exemplify the potential of AI in customer interaction, offering sophisticated and context-aware responses. Furthermore, image generation techniques, including Generative Adversarial Networks (GANs) and diffusion models, have enabled the creation of realistic and personalised visual content, thereby transforming marketing strategies. The implementation of generative AI presents a number of challenges, including the potential for AI “hallucinations,” ethical concerns, and environmental impact. It is therefore essential to ensure transparency, mitigate biases, and develop sustainable AI systems in order to advance AI technologies in a responsible manner. As AI continues to be integrated into digital marketing, addressing these challenges will be vital for harnessing its full potential while maintaining ethical and sustainable practices.
Article
The use of chatbots in psychology is considered. The dynamics of the changing tasks to which chatbots are applied, the directions of their use, and the risks of implementing the technology are traced. The research aims to substantiate the particulars of a strategy for creating a psychology chatbot. The specifics of chatbot development lie, first of all, in understanding users' needs and motivation. Such conversational user interfaces for interacting with data and services significantly change the way designers and developers are accustomed to thinking about interaction and user needs. Chatbots change user behavior as well as user needs. Chatbots can benefit people with mental health problems, but they also create risks and challenges. The ethical problems concern, first of all, the existence of a proper evidence base, and the use and security of data.
Chapter
Full-text available
A comprehensive study is presented to examine the intricate landscape of chatbots, focusing on their classification and design methodologies. Following an overview of the historical background and development of chatbot technology, the paper categorizes chatbots into five classes: rule-based, retrieval-based, generative models, non-task-oriented, and task-oriented. The discussion examines the design techniques used in chatbots, emphasizing the shift from simplistic pattern matching to more complex knowledge-based approaches. An analysis of dialogue management strategies is provided to address the challenges of maintaining context and coherence in conversations. Ethical and social ramifications, encompassing privacy, bias, transparency, and their implications for employment and societal interaction, are examined with an emphasis on responsible design. Furthermore, the paper explores the NLP techniques underlying chatbot development, including natural language understanding and generation. It investigates evaluation methodologies, current trends, future directions, and open research challenges.
Article
The democratisation of AI chatbot technology is transforming the manner in which organisations engage with customers and market their products. This paper examines the historical development of chatbots, tracing their origins in the mid-20th century to their current applications utilising large language models (LLM), natural language processing (NLP), and machine learning (ML). Particular emphasis is placed on no-code and low-code platforms, which facilitate the creation and training of bespoke chatbots for users without technical expertise. These tools represent a significant step in democratising technological innovation, lowering barriers to entry for companies of all sizes. Furthermore, the article emphasises the advantages of AI chatbots for personalising communications, lead generation and customer data collection. However, it also identifies potential limitations, including restricted flexibility, scaling issues and data protection concerns. The findings indicate that AI chatbots are not merely a technological advancement, but a pivotal component of a new era of marketing communications, where accessibility and efficiency are becoming paramount. The future of this technology will depend on the capacity of companies to adapt its capabilities to evolving customer expectations and market dynamics.
Article
Full-text available
To cite this article: Çüm, S. (2024). Human-AI collaboration in educational measurement and evaluation: The use of large language models. Journal of Applied Measurement and Assessment, 1(2), 29–39.
The purpose of this study is to explore the opportunities and potential risks associated with integrating large language models into education, with a particular focus on measurement and evaluation processes. While artificial intelligence technologies offer numerous advantages, such as reducing labor and time costs, especially for educators managing large and heterogeneous classrooms, they also present risks. These risks include challenges stemming from unethical use, increased social isolation due to diminished human-to-human interaction, the marginalization of diverse perspectives in favor of mainstream ideas, and the potential weakening of students' critical thinking skills. In addition to these broader considerations, the article delves specifically into how large language models can be utilized in measurement and evaluation processes for purposes such as automated item generation and scoring, the development and review of measurement tools, analysis of test results, and formative assessment. The discussions shed light on both the benefits and potential drawbacks of using these technologies for the purposes outlined, offering tailored recommendations for stakeholders and policymakers based on these insights.
Article
Full-text available
To construct a chat-oriented dialogue system that users will use for a long time, it is important to establish a good relationship between the user and the system. In this paper, aiming to realize a personalizable chat-oriented dialogue system that establishes such a relationship with users by utilizing arbitrary user information naturally in dialogues, we constructed a novel corpus designed to incorporate arbitrary user information into system utterances regardless of the current dialogue topic while retaining appropriateness for the context. We then trained a model to generate appropriate system utterances using the constructed corpus. The results of a subjective evaluation indicated that the model could successfully generate system utterances incorporating arbitrary user information and dialogue context. Furthermore, we integrated our trained model into a dialogue system and validated the effectiveness of system utterances incorporating arbitrary user information and dialogue context through interactive dialogues with users.
Chapter
This volume provides a unique perspective on an emerging area of scholarship and legislative concern: the law, policy, and regulation of human-robot interaction (HRI). The increasing intelligence and human-likeness of social robots points to a challenging future for determining appropriate laws, policies, and regulations related to the design and use of AI robots. Japan, China, South Korea, and the US, along with the European Union, Australia and other countries are beginning to determine how to regulate AI-enabled robots, which concerns not only the law, but also issues of public policy and dilemmas of applied ethics affected by our personal interactions with social robots. The volume's interdisciplinary approach dissects both the specificities of multiple jurisdictions and the moral and legal challenges posed by human-like robots. As robots become more like us, so too will HRI raise issues triggered by human interactions with other people.
Article
Full-text available
The concept of imitation is both ubiquitous and curiously under-analysed in theoretical discussions about the cognitive powers and capacities of machines, and in particular—for what is the focus of this paper—the cognitive capacities of large language models (LLMs). The question whether LLMs understand what they say and what is said to them, for instance, is a disputed one, and it is striking to see this concept of imitation being mobilised here for sometimes contradictory purposes. After illustrating and discussing how this concept is being used in various ways in the context of conversational systems, I draw a sketch of the different associations that the term ‘imitation’ conveys and distinguish two main senses of the notion. The first one is what I call the ‘imitative behaviour’ and the second is what I call the ‘status of imitation’. I then highlight and untangle some conceptual difficulties with these two senses and conclude that neither of these applies to LLMs. Finally, I introduce an appropriate description that I call ‘imitation manufacturing’. All this ultimately helps me to explore a radical negative answer to the question of machine understanding.
Chapter
The Manual section of the Handbook of Pragmatics, produced under the auspices of the International Pragmatics Association (IPrA), is a collection of articles describing traditions, methods, and notational systems relevant to the field of linguistic pragmatics; the main body of the Handbook contains all topical articles. The first edition of the Manual was published in 1995. This second edition includes a large number of new traditions and methods articles from the 24 annual installments of the Handbook that have been published so far. It also includes revised versions of some of the entries in the first edition. In addition, a cumulative index provides cross-references to related topical entries in the annual installments of the Handbook and the Handbook of Pragmatics Online (at https://benjamins.com/online/hop/), which continues to be updated and expanded. This second edition of the Manual is intended to facilitate access to the most comprehensive resource available today for any scholar interested in pragmatics as defined by the International Pragmatics Association: “the science of language use, in its widest interdisciplinary sense as a functional (i.e. cognitive, social, and cultural) perspective on language and communication.”
Chapter
This research enhances Artificial Intelligence Markup Language (AIML) systems' understanding of English idioms and their emotional contexts. By integrating a database of 3,500 idioms with 16 emoticons representing different emotions, the study aims to enable AI to interpret idioms beyond their literal meanings and respond appropriately to their emotional undertones. The methodology includes collecting idioms from various online sources, using Python for extraction, and XML for data structuring. The emoticons, sourced from the Crocels Troller-Sniper Emotion Index 16, are selected to encompass a wide range of emotions, and then encoded with idioms in the XML database for dynamic, context-sensitive AI responses. Using Python, idioms and emoticons are combined and processed through the OpenAI API. The responses are analysed for sentiment and emotional alignment using Python, Pandas, and NLP tools, refining the AIML system's emotional intelligence. Additionally, a Python Flask API Gateway is developed for AIML parser integration, enhancing user interaction by providing emoticon-aligned responses. This research demonstrates the effective use of AI models and programming tools in creating a nuanced, emotionally intelligent dataset of idioms, significantly advancing AI's linguistic capabilities and understanding.
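As a minimal sketch of what one record in such an idiom-emoticon database might look like, using only Python's standard library; the tag names and the emoticon pairing are assumptions for illustration, not the study's actual schema.

```python
import xml.etree.ElementTree as ET

# build one illustrative idiom entry paired with an emoticon for its emotional tone
root = ET.Element("idioms")
entry = ET.SubElement(root, "idiom")
ET.SubElement(entry, "text").text = "break the ice"
ET.SubElement(entry, "meaning").text = "to relieve initial social awkwardness"
ET.SubElement(entry, "emoticon").text = ":)"   # stands in for a 'friendly/relaxed' emotion code

ET.ElementTree(root).write("idioms.xml", encoding="utf-8", xml_declaration=True)
```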
Article
Full-text available
Natural Language Understanding (NLU) components are used in Dialog Systems (DS) to perform intent detection and entity extraction. In this work, we introduce a technique that exploits the inherent relationships between intents and entities to enhance the performance of NLU systems. The proposed method involves the utilization of a carefully crafted set of rules that formally express these relationships. By utilizing these rules, we effectively address inconsistencies within the NLU output, leading to improved accuracy and reliability. We implemented the proposed method using the Rasa framework as an NLU component and used our own conversational dataset AWPS to evaluate the improvement. Then, we validated the results in other three commonly used datasets: ATIS, SNIPS, and NLU-Benchmark. The experimental results show that the proposed method has a positive impact on the semantic accuracy metric, reaching an improvement of 12.6% in AWPS when training with a small amount of data. Furthermore, the practical application of the proposed method can easily be extended to other Task-Oriented Dialog Systems (T-ODS) to boost their performance and enhance user satisfaction.
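The core idea can be sketched as a consistency check that applies hand-written intent-entity rules to NLU output; the rule format, intent, and entity names below are invented for illustration and are not the paper's formalism.

```python
# each rule names the entities an intent requires and the ones it is incompatible with
RULES = {
    "book_flight": {"requires": {"destination"}, "forbids": {"dish"}},
}

def check(intent, entities):
    rule = RULES.get(intent)
    if rule is None:
        return []
    found = set(entities)
    problems = [f"missing entity: {e}" for e in rule["requires"] - found]
    problems += [f"inconsistent entity: {e}" for e in rule["forbids"] & found]
    return problems

print(check("book_flight", {"destination": "Prague"}))  # [] -> output is consistent
print(check("book_flight", {"dish": "goulash"}))        # flags missing and inconsistent entities
```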
Article
Full-text available
Since the introduction of OpenAI's ChatGPT-3 in late 2022, conversational chatbots have gained significant popularity. These chatbots are designed to offer a user-friendly interface for individuals to engage with technology using natural language in their daily interactions. However, these interactions raise user privacy concerns due to the data shared and the potential for misuse in these conversational information exchanges. Furthermore, there are no overarching laws and regulations governing such conversational interfaces in the United States. Thus, there is a need to investigate these user privacy concerns. To understand the concerns in the existing literature, this paper presents a literature review and analysis of 38 papers, out of 894 retrieved, that focus on user privacy concerns arising from interactions with text-based conversational chatbots through the lens of social informatics. The review indicates that the primary user privacy concern that has consistently been addressed is self-disclosure. This review contributes to the broader understanding of privacy concerns regarding chatbots and the need for further exploration in this domain. As these chatbots continue to evolve, this paper acts as a foundation for future research endeavors and informs potential regulatory frameworks to safeguard user privacy in an increasingly digitized world.
Article
Realistic co-speech gestures are important to anthropomorphize ECAs, as nonverbal behavior greatly improves the expressiveness of their speech. However, existing approaches to generating co-speech gestures with sufficient detail (including fingers, etc.) in 3D scenarios are rare, and they hardly address the problems of abnormal gestures, temporal–spatial coherence, and diversity of gesture sequences comprehensively. To handle abnormal gestures, we put forward an angle conversion method that removes body-part length from the original in-the-wild video dataset by converting the coordinates of human upper-body key points into relative deflection angles and pitch angles. We also propose a neural network called HARP with an encoder–decoder architecture that transfers MFCC-featured speech audio into the aforementioned angles on the basis of CNN and LSTM. The angles can then be rendered as corresponding co-speech gestures. Compared with the other latest approaches, the co-speech gestures generated by HARP prove to be almost as good as a real person's, i.e., they have strong temporal–spatial coherence, diversity, persuasiveness, and credibility. Our approach puts finer control on co-speech gestures than most existing works by handling all key points of the human upper body. It is feasible for industrial application, since HARP can adapt to any human upper-body model. All related code and evidence videos for HARP can be accessed at https://github.com/drrobincroft/HARP .
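The angle-conversion idea can be illustrated with a small planar example: absolute key-point coordinates are replaced by deflection angles, which are independent of limb lengths. The joint names and coordinates are invented, and the paper's full method also derives pitch angles in 3D.

```python
import math

def deflection_angle(parent, child):
    """Planar angle (radians) of the bone running from the parent to the child joint."""
    return math.atan2(child[1] - parent[1], child[0] - parent[0])

# toy 2D key points for one arm; real input would come from pose estimation
shoulder, elbow, wrist = (0.0, 0.0), (0.3, -0.1), (0.5, 0.2)

upper_arm = deflection_angle(shoulder, elbow)
forearm = deflection_angle(elbow, wrist) - upper_arm  # expressed relative to the upper arm

# the angles stay the same if every bone is scaled, so body-part length drops out
print(round(upper_arm, 3), round(forearm, 3))
```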
Chapter
A chatbot is a computer program designed to simulate conversation between people over the internet. Based on a human's input, chatbots engage with clients and respond to them, making the client feel as though they are chatting with a person while they are actually talking with a computer. Chatbots are becoming increasingly important gateways to automated services and information in areas like customer assistance, healthcare, and education. This development provokes interest in why chatbots are becoming more human-like rather than merely technical devices, and in what lies ahead for people and chatbots. This chapter describes how chatbots and the IoT converge and reinforce each other across all sectors: IoT devices are the smart gadgets we encounter in our everyday lives, and if chatbots are made part of the IoT and its interfaces, the exchange of information and data will become more meaningful and will continue around the clock, because chatbots do not tire.
Chapter
A chatbot is a computer program or software application that is designed to communicate with humans via text or speech-based interfaces. A chatbot's main objective is to mimic human conversation and deliver immediate responses to user inquiries. Chatbots are used across a variety of industries and use cases, including customer support, sales and marketing, appointment scheduling, information retrieval, virtual assistants, and more. They can be used on websites, chat apps (such as WhatsApp, Facebook Messenger, or Slack), mobile applications, and voice-activated platforms such as Google Assistant, Siri, and Alexa. This chapter offers a thorough investigation of chatbots, chronicling their historical evolution, looking at their numerous applications, and setting out a complete classification scheme. It seeks to provide a comprehensive overview of the evolution, significance, and classification of chatbots in the field of human-computer interaction, from their genesis to modern uses.
Article
Chat Generative Pre-Trained Transformer (ChatGPT), developed by OpenAI, is one of the largest language models built to date. It reached one million users five days after its release and, only two months later, reached 100 million monthly active users, becoming the fastest-growing consumer application in history and generating great excitement. Unlike similar language models, ChatGPT can answer follow-up questions, acknowledge and correct errors in its answers when prompted, understand different languages and respond in them, and refuse to answer inappropriate questions. How ChatGPT can be used in healthcare, particularly in medicine, and what it is capable of have been widely discussed, and many publications have addressed the subject. This article covers chatbots, natural language processing, computational linguistics, ChatGPT, and its use in the field of medicine.
Article
Full-text available
This study presents a systematic literature review to understand the applications, benefits, and challenges of digital assistants (DAs) in production and logistics tasks. Our conceptual framework covers three dimensions: information management, collaborative operations, and knowledge transfer. We evaluate human-DA collaborative tasks in the areas of product design, production, maintenance, quality management, and logistics. This allows us to expand upon different types of DAs and reveal how they improve the speed and ease of production and logistics work, which previous studies ignored. Our results demonstrate that DAs improve the speed and ease of workers' interactions with machines and information systems in searching, processing, and demonstrating. Extant studies describe DAs with different levels of autonomy in decision-making; however, most DAs perform tasks as instructed or with workers' consent. Additionally, we observe that workers find it more intuitive to perform tasks and acquire knowledge when they receive multiple sensory cues (e.g. auditory and visual). Consequently, future research can explore how DAs can be integrated with other technologies, such as eye tracking and augmented reality, for robust multi-modal assistance. This could provide customised DA support to workers with disabilities or other conditions, facilitating more inclusive production and logistics.
Chapter
This article describes F-2, a companion robot, and interprets respondents' assessments of its designed multimodal communicative behaviour. F-2 is an affective robot that serves as a platform for implementing and verifying various individual behavioural traits in robots. It interprets multimodal input (text, face orientation, and tactile signals) and translates that input into facts, which seed further affective behaviour. Facts trigger behavioural patterns for reacting: concurrent scenarios whose activation degrees vary over time. The most activated scenario is realized through one reaction drawn from a pool of reactions associated with that scenario, as sketched below. Each reaction is multimodal and includes one or several components: speech, gestures, and gazes. The robot's behaviour was assessed by human evaluators in communication experiments, which revealed several notable effects in how F-2's communicative behaviour was perceived. These effects are discussed and are presumed to be evidence that people transfer common expectations from human-human interaction to human-robot interaction.
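To make the activation mechanism concrete, here is a toy Python sketch of the scheme described above; all class, scenario, and fact names are illustrative assumptions, not the F-2 implementation. Incoming facts boost the activation of matching scenarios, activations decay over time, and the robot performs one reaction drawn from the pool of the currently most-activated scenario.

import random

class Scenario:
    def __init__(self, name, triggers, reactions):
        self.name = name
        self.triggers = set(triggers)  # fact labels that activate this scenario
        self.reactions = reactions     # pool of multimodal reactions
        self.activation = 0.0

    def update(self, facts, decay=0.8, boost=1.0):
        self.activation *= decay       # activation fades between steps
        if self.triggers & facts:      # matching facts re-excite the scenario
            self.activation += boost

def step(scenarios, facts):
    for s in scenarios:
        s.update(facts)
    best = max(scenarios, key=lambda s: s.activation)
    return best.name, random.choice(best.reactions)

greet = Scenario("greet", {"face_detected"},
                 [{"speech": "Hello!", "gesture": "wave"}])
comfort = Scenario("comfort", {"touched"},
                   [{"speech": "I'm here.", "gaze": "user"}])
print(step([greet, comfort], {"face_detected"}))  # -> ('greet', {...})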
Article
Full-text available
Two critical policy questions will determine the impact of generative artificial intelligence (AI) on the knowledge economy and the creative sector. The first concerns how we think about the training of such models—in particular, whether the creators or owners of the data that are “scraped” (lawfully or unlawfully, with or without permission) should be compensated for that use. The second question revolves around the ownership of the output generated by AI, which is continually improving in quality and scale. These topics fall in the realm of intellectual property, a legal framework designed to incentivize and reward only human creativity and innovation. For some years, however, Britain has maintained a distinct category for “computer-generated” outputs; on the input issue, the EU and Singapore have recently introduced exceptions allowing for text and data mining or computational data analysis of existing works. This article explores the broader implications of these policy choices, weighing the advantages of reducing the cost of content creation and the value of expertise against the potential risk to various careers and sectors of the economy, which might be rendered unsustainable. Lessons may be found in the music industry, which also went through a period of unrestrained piracy in the early digital era, epitomized by the rise and fall of the file-sharing service Napster. Similar litigation and legislation may help navigate the present uncertainty, along with an emerging market for “legitimate” models that respect the copyright of humans and are clear about the provenance of their own creations.
Article
Full-text available
Case-based reasoning (CBR) is a relatively recent approach to problem solving and learning that has attracted considerable attention in recent years. Originating in the US, the basic idea and underlying theories have spread to other continents, and there is now highly active CBR research in Europe as well. This paper gives an overview of the foundational issues related to case-based reasoning, describes some of the leading methodological approaches within the field, and exemplifies the current state through pointers to representative systems. Initially, a general framework is defined, to which the subsequent descriptions and discussions refer. The framework is influenced by recent methodologies for knowledge-level descriptions of intelligent systems. The methods for case retrieval, reuse, solution testing, and learning are summarized, and their actual realization is discussed in light of a few example systems representing different CBR approaches. We also discuss the role of case-based methods as one type of reasoning and learning method within an integrated system architecture.
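The retrieve-reuse-revise-retain cycle this paper summarizes can be sketched in a few lines of Python. The nearest-neighbour similarity and null adaptation below are deliberately simplistic placeholders for illustration, not a method taken from the paper.

def retrieve(case_base, problem):
    # Nearest neighbour over numeric feature vectors (a common CBR choice).
    def distance(case):
        return sum((a - b) ** 2 for a, b in zip(case["problem"], problem))
    return min(case_base, key=distance)

def reuse(case, problem):
    # Null adaptation: copy the retrieved solution unchanged.
    return case["solution"]

def cbr_cycle(case_base, problem, evaluate):
    case = retrieve(case_base, problem)    # RETRIEVE the most similar case
    solution = reuse(case, problem)        # REUSE (adapt) its solution
    if not evaluate(problem, solution):    # REVISE: test the proposed solution
        solution = None                    # a real system would repair it here
    case_base.append({"problem": problem,  # RETAIN the new experience
                      "solution": solution})
    return solution

A real CBR system would replace the null adaptation with domain-specific adaptation rules and would repair failed solutions in the revise step rather than discarding them.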
Article
I had some strong reactions to Joe Weizenbaum's book, Computer Power and Human Reason. The book mentions some important concerns which are obscured by harsh and sometimes shrill accusations against the Artificial Intelligence research community. On the whole, it seems to me that the personal attacks distract and mislead the reader from more valuable abstract points. I strongly recommend Samuel Florman's article "In Praise of Technology" in the November, 1975, issue of Harper's Magazine to see a different opinion about the role of technology in modern society.
Article
"In this book I have tried to explain what the science of psychology is like and how it got that way." Emphasis is on key topics, presented in historical order. The more technological and applied aspects are omitted. Interwoven with discussions of traditional experimental areas are biographies of Wundt, William James, Galton, Pavlov, Freud, and Binet. A glossary and suggestions for further reading are appended. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The "user illusion" of this book's title comes from computer design and refers to thesimplistic mental image most of us have of our PCs. Our consciousness, says the author, is our user illusion of ourselves. This book makes the case that humans are designed for a much richer existence than processing a dribble of data from a computer screen, which actually constitutes a form of sensory deprivation. That there is actually far too little information in the so-called Information Age may be responsible for the malaise of modern society, that nagging feeling that there must be more to life. There is—but we have to get outside and live life with all our senses to experience it more fully. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Chapter
I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think”. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.
Conference Paper
The Turing Test was proposed by Alan Turing in 1950; he called it the Imitation Game. In 1991 Hugh Loebner started the Loebner Prize competition, offering a $100,000 prize to the author of the first computer program to pass an unrestricted Turing Test. Annual competitions are held each year, with smaller prizes for the best program on a restricted Turing Test. This paper describes the development of one such Turing system, including the technical design of the program and its performance in the first three Loebner Prize competitions. We also discuss the program's four-year development effort, which has depended heavily on constant interaction with people on the Internet via Tinymuds (multiuser network communication servers). Finally, we discuss the design of the Loebner competition itself and address its usefulness in furthering the development of Artificial Intelligence.
Article
ELIZA is a program operating within the MAC time-sharing system at MIT which makes certain kinds of natural language conversation between man and computer possible. Input sentences are analyzed on the basis of decomposition rules which are triggered by key words appearing in the input text. Responses are generated by reassembly rules associated with selected decomposition rules. The fundamental technical problems with which ELIZA is concerned are: (1) the identification of key words, (2) the discovery of minimal context, (3) the choice of appropriate transformations, (4) the generation of responses in the absence of key words, and (5) the provision of an editing capability for ELIZA scripts. A discussion of some psychological issues relevant to the ELIZA approach, as well as of future developments, concludes the paper.
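The keyword / decomposition / reassembly mechanism described here is easy to illustrate. The toy Python script below is a bare sketch in ELIZA's spirit, with patterns and templates invented for this example: a keyword pattern selects a decomposition of the input, and the captured fragment is spliced into a reassembly template.

import re

# (decomposition pattern, reassembly templates), tried in order.
SCRIPT = [
    (re.compile(r".*\bI am (.*)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r".*\bI feel (.*)", re.IGNORECASE),
     ["Why do you feel {0}?"]),
]
DEFAULT = "Please go on."  # response when no keyword matches

def respond(sentence, turn=0):
    for pattern, templates in SCRIPT:
        match = pattern.match(sentence)
        if match:
            # Cycle through templates so repeated inputs vary the reply.
            template = templates[turn % len(templates)]
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am sad about my robot"))
# -> How long have you been sad about my robot?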
Article
In ‘Computing Machinery and Intelligence’, Alan Turing actually proposed not one, but two, practical tests for deciding the question ‘Can a machine think?’ He presented them as equivalent. I show here that the first test described in that much-discussed paper is in fact not equivalent to the second one, which has since become known as ‘the Turing Test’. Although the first, neglected, test uses a human’s linguistic performance in setting an empirical test of intelligence, it does not make behavioral similarity to that performance the criterion of intelligence. The two tests yield different results, and the first provides a more appropriate measure of intelligence.