Chapter

Computing Machinery and Intelligence

Authors:
To read the full-text of this research, you can request a copy directly from the author.

Abstract

I propose to consider the question, “Can machines think?”♣ This should begin with definitions of the meaning of the terms “machine” and “think”. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.

No full-text available

Request Full-text Paper PDF

To read the full-text of this research,
you can request a copy directly from the author.

... In 1950, Alan Turing described an "imitation game' that, he proposed, provided a "criterion for 'thinking'": if an imaginable digital computer does well in this game, it can think (Turing 1950). Since Turing introduced the imitation game, there have been numerous efforts to build machines that could pass the test, as well as several Turing-style test methodologies. ...
... Our goal is to execute Turing's three-player imitation game as closely as possible to Turing's original description (Turing, 1950). We used a state-of-the-art LLM, GPT-4-Turbo, while acknowledging that rapid developments in the field will require regular re-testing in the future. ...
... Following Turing's instructions, the Interrogator role was taken by a human (Turing, 1950). Here it is also important to note that in his 1952 paper, Turing specified that the Interrogator "should not be expert" about machines (Turing, 1952, p. 495). ...
Preprint
Full-text available
The current cycle of hype and anxiety concerning the benefits and risks to human society of Artificial Intelligence is fuelled, not only by the increasing use of generative AI and other AI tools by the general public, but also by claims made on behalf of such technology by popularizers and scientists. In particular, recent studies have claimed that Large Language Models (LLMs) can pass the Turing Test-a goal for AI since the 1950s-and therefore can "think". Large-scale impacts on society have been predicted as a result. Upon detailed examination, however, none of these studies has faithfully applied Turing's original instructions. Consequently, we conducted a rigorous Turing Test with GPT-4-Turbo that adhered closely to Turing's instructions for a three-player imitation game. We followed established scientific standards where Turing's instructions were ambiguous or missing. For example, we performed a Computer-Imitates-Human Game (CIHG) without constraining the time duration and conducted a Man-Imitates-Woman Game (MIWG) as a benchmark. All but one participant correctly identified the LLM, showing that one of today's most advanced LLMs is unable to pass a rigorous Turing Test. We conclude that recent extravagant claims for such models are unsupported, and do not warrant either optimism or concern about the social impact of thinking machines.
... Artificial intelligence (AI) has slowly been infused into the workflow of society since Turing first posed the question, "Can machines think?" in the 1950s (2,3). The transformative potential of AI enhances human productivity. ...
... Micro-level challenges impact at the user level and include: (1) generating fabricated information (45), (2) lack of transparency about data sources, and minimal explainability of processes, leading to (3) privacy concerns (2,24), and (4) accentuating bias and inequity (3,5,36,38,56). Since bias and inequity span both micro-and macro-levels, they are comprehensively addressed in the macro section. ...
Article
Full-text available
Large Language Models (LLMs) like ChatGPT, Gemini, and Claude gain traction in healthcare simulation; this paper offers simulationists a practical guide to effective prompt design. Grounded in a structured literature review and iterative prompt testing, this paper proposes best practices for developing calibrated prompts, explores various prompt types and techniques with use cases, and addresses the challenges, including ethical considerations for using LLMs in healthcare simulation. This guide helps bridge the knowledge gap for simulationists on LLM use in simulation-based education, offering tailored guidance on prompt design. Examples were created through iterative testing to ensure alignment with simulation objectives, covering use cases such as clinical scenario development, OSCE station creation, simulated person scripting, and debriefing facilitation. These use cases provide easy-to-apply methods to enhance realism, engagement, and educational alignment in simulations. Key challenges associated with LLM integration, including bias, privacy concerns, hallucinations, lack of transparency, and the need for robust oversight and evaluation, are discussed alongside ethical considerations unique to healthcare education. Recommendations are provided to help simulationists craft prompts that align with educational objectives while mitigating these challenges. By offering these insights, this paper contributes valuable, timely knowledge for simulationists seeking to leverage generative AI’s capabilities in healthcare education responsibly.
... Turing's seminal paper from 1950 'Computing Machinery and Intelligence' set the scene for conversations with computers [5]. In the paper Turing proposed an imitation game where a questioner had to determine the gender of two people just by asking questions and reading the answers with no visual or auditory cues. ...
... There is affect in how the words are expressed and body language which changes how we understand the conversation. Affect and body language were something Turing was careful to eliminate in his first version of the Turing Test as reported in [5]. However people are communicating on several levels. ...
Chapter
Full-text available
Artificial Intelligence (AI) systems have simulated conversations for over 50 years. With the greatly improved quality and prevalence of AI, and the emergence of generative AI in many applications, it is timely to take stock of whether conversations with chatbots and other AI systems will affect how people inter-relate. This paper presents a history of conversations with computers and whether those conversations cover non-textual elements of conversations such as emotions. The paper discusses emotions and how people instinctively anthropomorphise systems. It discusses some benefits touted for conversations with AI systems and points out some limitations. Although it is too early to form definitive conclusions about the effect of AI conversations on interpersonal relationships, the paper argues that there is a danger of a loss of skills by extensive computer usage. The paper advocates growing our conversation skills, a growth that cannot solely be provided by interaction with chatbots and generative AI systems more broadly.
... Among all the philosophically interesting issues concerning artificial intelligence (AI), one of the least satisfactorily treated in the modern literature is the question of recognizing intelligence. Ironically, this was among the first issues concerning AI that was given a systematic treatment, through Turing's notion of the "Imitation Game" (Turing 1950). What Turing proposed was an empirical test for establishing the intelligence of machines. ...
... The standard platform to start discussing the topic is the Turing test. Turing (1950) proposed an "Imitation Game," in which an interrogator aims, by asking questions, to recognize which one of two players is human and which one a computer. The amount of literature on Turing tests is vast and it remains an active topic in AI research. ...
Article
Full-text available
One key question in the philosophy of artificial intelligence (AI) concerns how we can recognize artificial systems as intelligent. To make the general question more manageable, I focus on a particular type of AI, namely one that can prove mathematical theorems. The current generation of automated theorem provers are not understood to possess intelligence, but in my thought experiment an AI provides humanly interesting proofs of theorems and communicates them in human-like manner as scientific papers. I then ask what the criteria could be for recognizing such an AI as intelligent. I propose an approach in which the relevant criteria are based on the AI’s interaction within the mathematical community. Finally, I ask whether we can deny the intelligence of the AI in such a scenario based on reasons other than its (non-biological) material construction.
... According to industry reports [10], at least 65% of dropoff in order completion can be attributed to finding erroneous or weak information offered by AI assistants. This study aims to fulfill three key objectives: First, this study performs a broad evaluation of shopping assistance through generative AI across both technological capabilities [5,6] and realworld limitations. Second, it creates an organized framework to quantify and assess performance through technical precision benchmarks and user-focused engagement metrics. ...
... 102 The digital computer was developed as Introduction 23 a machine 'intended to carry out any operations which could be done by a human computer'. 103 Most noteworthy, Turing stated that the 'digital computer is an old one, whereby, Charles Babbage, Lucasian Professor of Mathematics at Cambridge from 1828 to 1839, planned such a machine, called the Analytical Engine, but it was never completed'. 104 The development of AI began to take off between 1957 and 1974 when computers could store more information and became faster, cheaper, and more accessible. ...
... This path continued with Lady Lovelace, whose effort to transition from calculation to algorithm-based computation revolutionized the development of machines designed to "originate something" [1]. In 1950, an article by Alan Turing discussed a variant of Lady Lovelace's original proposal, arguing that machine results could fool anyone: "Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate program, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?" [2] (p. 442). ...
Article
Full-text available
Intelligent machines (IMs), which have demonstrated remarkable innovations over time, require adequate attention concerning the issue of their duty–rights split in our current society. Although we can remain optimistic about IMs’ societal role, we must still determine their legal-philosophical sense of accountability, as living data bits have begun to pervade our lives. At the heart of IMs are human characteristics used to self-optimize their practical abilities and broaden their societal impact. We used Kant’s philosophical requirements to investigate IMs’ moral dispositions, as the merging of humans with technology has overwhelmingly shaped psychological and corporeal agential capacities. In recognizing the continuous burden of human needs, important features regarding the inalienability of rights have increased the individuality of intelligent, nonliving beings, leading them to transition from questioning to defending their own rights. This issue has been recognized by paying attention to the rational capacities of humans and IMs, which have been connected in order to achieve a common goal. Through this teleological scheme, we formulate the concept of virtual dignity to determine the transition of inalienable rights from humans to machines, wherein the evolution of IMs is essentially imbued through consensuses and virtuous traits associated with human dignity.
... En su célebre artículo Computing Machinery and Intelligence, publicado en la revista Mind, plantea la célebre cuestión: ¿Pueden pensar las máquinas?" (Turing, 1950). Para responder a esta interrogante, Turing propone en su artículo el ya conocido juego de la imitación, en el cual "Un juez humano mantiene una conversación con un humano y una máquina. ...
Article
Full-text available
Este artículo analiza los paradigmas e imaginarios sobre la inteligencia Artificial presentes en la novela Maniac de Benjamín Labatut, centrándose específicamente en la última parte titulada Lee o los delirios de la inteligencia artificial. El análisis se fundamenta en las principales disputas de la filosofía de la mente contemporánea acerca de la inteligencia de las máquinas, como el funcionalismo computacional y el emergentismo, que se proponen como marcos hermenéuticos para analizar la narrativa de Labatut. Se pretende demostrar como la intersección entre los conceptos de la Filosofía y las perspectivas de la Literatura enriquecen las reflexiones humanistas acerca de la inteligencia artificial y otras tecnologías emergentes.
... The concepts of artificial intelligence and machine learning were born in the mid-20th century at the intersection of mathematics, computer science, and neurology. First, Alan Turing's "Turing Test" and John von Neumann's theories of automatic computation introduced the idea that computers could exhibit human-like intelligence [6]. Simple algorithms developed in the 1950s and 1960s worked with limited data sets to perform specific tasks. ...
Article
This article examines the theoretical potential and applications of artificial intelligence (AI) and machine learning (ML) in molecular analysis. AI and ML techniques allow accelerating and improving the accuracy of chemical and biological processes. In particular, these methods are used to predict the chemical structure, biological activity and protein structure of molecules. In this article, we discuss how various data types such as molecular dynamics simulations, spectroscopy and cheminformatics data can be processed with AI and ML algorithms. It also highlights the revolutionary contributions of deep learning algorithms in areas such as molecular representations, drug design and protein structure prediction. The effectiveness of reinforcement learning and graph-based models in the prediction and optimization of chemical reactions is also discussed. In conclusion, the use of AI and ML in molecular analyses is expected to expand into broader areas of scientific and industrial research in the future.
... He pioneered the experiment known as the "Turing Test," which became a key moment in AI development. His work, titled "Computing Machinery and Intelligence," dealt with the possibility of a non-living computer thinking like a human and was a landmark in this field (Turing, 1950). Several other significant events paved the way for the development of the AI we see today ( and less data, providing optimal results. ...
Article
Full-text available
Artificial Intelligence (AI) is the creation of intelligent systems that perform tasks requiring human intelligence, such as learning, problemsolving, and decision-making. Humans and AI systems work together. This study summarizes the potential of AI and its application in medicine, agriculture, and biology-based industries. AI in agriculture provides solutions for food security by adapting agricultural management in a changing climate. Extreme temperatures can reduce wheat yields by 6% per °C. Digitalization in agriculture improves the collection and recording of data on soil health. A reservoir of genetic resources for crops and soil is provided in biodiversity ecosystems, which are key for the diversity of micronutrients. Traditional medicine is widely used by 60% of the world’s population, and it originates from medicinal plants from wild populations. As the field of AI evolves with more trained algorithms, the potential for its application in epidemiology, studying host-pathogen interactions, and drug design expands. AI relies on digital technology and is applied in several areas of pharmacy, adaptive medicine, gene editing (CRISPR: a new revolution in genetic technology), radiography, image processing, and drug management. AI is used to identify patterns of new drugs, optimize existing therapies, and use an individual’s genomic data and other types of health data to develop personalized treatment plans tailored to their specific needs. It is also used for data analysis, e.g., electronic health records and wearable devices, to identify patterns and correlations that may indicate the presence of a particular disease, helping to improve diagnosis accuracy and enable earlier intervention to prevent disease progression, as well as for medical imaging to identify abnormalities and diagnose diseases.
... This started a discussion about the capabilities of such machines. Turing (1950) proposed what is known today as the Turing test, in which a human evaluator judges whether his interlocutor in a conversation held in a natural language is a human or a machine (AI). Turing's intention in this test was to try to answer the question whether a machine can think like a human. ...
Article
Full-text available
The title question of the paper has its empirical origin in the form of an individual’s existential experience arising from the personal use of a computer, which we attempt to describe in the first section. The rest of the entire paper can be understood as a philosophical essay answering the question posed. First the connection between the main problem of the article and its “premonition” by mankind, which was expressed in the form of ancient myths and legends, is briefly suggested. After shortly discussing the problems that early considerations of AI focused on, i.e. whether machines can think at all, we move on to reformulate our title question, about the possibility of outsmarting AI. This outsmarting will be understood by us in a rather limited way as to prevent a machine from completing its implemented task. To achieve this objective, after softly clarifying the basic terms, an analogy is built between the “outsmarting” of a machine by a human (the target domain) and the playing of a mathematical game between two players (the base domain), where this outsmarting is assigned a “winning strategy” in the certain game. This mathematical model is formed by games similar to Banach-Mazur games. The strict theorems of such games are then proved and applied to the target of the analogy. We then draw conclusions and look for counter-examples to our findings. The answer to the title question posed is negative, and it is not clear how far it should be taken seriously.
... Although the concept of an algorithm defined in this way has been successful, it does not mean that-including Turing (1950)-have ceased to consider modifying the concept of an algorithm. ...
Article
Full-text available
Scientific knowledge is acquired according to some paradigm. Galileo wrote that the “book of nature” was written in mathematical language and could not be understood unless one first understood the language and recognized the characters with which it was written. It is argued that Turing planted the seeds of a new paradigm. According to the Turing Paradigm, the “book of nature” is written in algorithmic language, and science aims to learn how the algorithms change the physical, social, and human universe. Some sources of the Turing Paradigm are pointed out, and a few examples of the application of the Turing Paradigm are discussed.
... Question B1 is exactly Turing's approach. One way to circumvent the difficulties I outlined above in introducing the fieldwork approach is to take the kind of operational approach Turing (1950) recommended in his famous "imitation game"-or the Turing test, as we now call it. Turing wished to address the question of when AI could think and was frustrated with existing approaches that were either semantic or architectural. ...
Article
Full-text available
Can artificial systems act? In the literature we find two camps: sceptics and believers. But the issue of whether artificial systems can act and, if so, how, has not been systematically discussed. This is a foundational question for the philosophy of AI. I sketch a methodological approach to investigating the agency of artificial systems from architectural and behavioural perspectives.
... The field of AI has made great strides through countless breakthroughs and innovations since its humble beginnings in the 1950 with a seminal paper (Turing 1950). It evolved and turned over a new leaf to become an indispensable technology. ...
... Designing appropriate software to mimic teacher questions in response to student questions in real time would be challenging beyond the means of current AI developments such as the ChatGPT series. Replicating the kind of knowledge that scientifically minded teachers have would require artificial general intelligence akin to machine consciousness as envisaged by Turing (1950). However, current AI is enabling teachers to spend more time with students in one-to-one dialogue by reducing administrative and assessment tasks, such as setting and marking homework, freeing teachers to teach individual children (Education Scotland, 2024;Garnett & Humphries, 2024;Giannini, 2024aGiannini, , 2024bGiannini, , 2024c. ...
Article
Full-text available
This paper provides a critical and detailed study of what researchers in the fields of contemporary cognition and neuroscience have revealed about the blurred boundary between perception and cognition. We set out the arguments with a view to what researchers and teachers should now consider regarding the subtleties of their interrelationship in children's learning, and how individuals may be better helped to grasp difficult ideas. The analysis spotlights children's cosmologies in science education—their acquisition of ideas in basic astronomy concerning the Earth, Sun, Moon and so forth—and we use illustrative examples drawn from our own research to emphasise the implications of what perceiving and cognising actually mean. The role of carefully exercised Socratic dialogue as part of a constructivist approach to learning lies at the core of our deliberations.
... A Early AI research focused on symbolic reasoning, leading to foundational programs like the Logic Theorist and General Problem Solver (Russell & Norvig, 2016). Over the decades, AI evolved through various phases, including the rise of expert systems and the development of machine learning and neural networks (Turing, 1950). In recent years, AI has become increasingly integrated into everyday life, impacting social media, gaming, and education. ...
... The distinction goes back to Searle (1980), p. 418: he distinguished between weak AI for computers as tools able to perform certain tasks in a powerful way and strong AI for computers being able to understand and having other cognitive states.13 Turing (1950), p. 445. In 1950, the British mathematician and computer scientist Alan Turing ...
Article
Chatbots, as rapidly advancing AI technologies, have become increasingly integrated into human interactions, acting as valuable companions. This study examines the role of chatbots in education. We conducted a thorough literature review on published and stored articles in major databases, including the IEEE Xplore and the ACM Digital Library. Following PRISMA criteria, we analysed research published between 2018 and 2024. Our extensive search identified 720 relevant articles, which were then filtered according to established inclusion and exclusion criteria, resulting in 116 papers that met our quality assessment standards. The study aims to provide insights into three key areas: (1) the current state of chatbots in the education sector; (2) the personalisation of chatbots to enhance teaching and learning experiences; and (3) various techniques for chatbot development. The findings indicate that chatbots are becoming essential tools for students and can even augment the role of human educators. We also addressed the benefits and challenges associated with chatbot applications in education. This paper serves as a valuable resource for academics, developers, and researchers interested in chatbot technology in the educational context.
Article
Full-text available
The study conducts a bibliometric review of artificial intelligence applications in two areas: the entrepreneurial finance literature, and the corporate finance literature with implications for entrepreneurship. A rigorous search and screening of the web of science core collection identified 1,890 journal articles for analysis. The bibliometrics provide a detailed view of the knowledge field, indicating underdeveloped research directions. An important contribution comes from insights through artificial intelligence methods in entrepreneurship. The results demonstrate a high representation of artificial neural networks, deep neural networks, and support vector machines across almost all identified topic niches. In contrast, applications of topic modeling, fuzzy neural networks, and growing hierarchical self-organizing maps are rare. Additionally, we take a broader view by addressing the problem of applying artificial intelligence in economic science. Specifically, we present the foundational paradigm and a bespoke demonstration of the Monte Carlo randomized algorithm.
Article
This study investigates student perceptions of artificial intelligence (AI) implementation and its implications for academic integrity within Kazakhstan’s higher education system. Through a quantitative survey methodology, data was collected from 840 undergraduate students across three major Kazakhstani universities during May 2024. The research examined patterns of AI usage, ethical considerations, and attitudes toward academic integrity in the context of emerging AI technologies.The findings reveal widespread AI adoption among students, with 90% familiar with ChatGPT and 65% utilizing AI tools at least weekly for academic purposes. Primary applications include essay writing (35%), problem-solving (25%), and idea generation (18%). Notably, while 57% of respondents perceived no significant conflict between AI usage and academic integrity principles, 96% advocated for establishing clear institutional policies governing AI implementation.The study situates these findings within Kazakhstan’s broader AI development strategy, particularly the AI Development Concept 2024-2029, while drawing comparisons with international regulatory frameworks from the United States, China, and the European Union. The research concludes that effective integration of AI in higher education requires balanced regulatory approaches that promote innovation while preserving academic integrity standards.
Article
Full-text available
Hoy en día, cuando preguntamos si será posible que la IA escriba literatura en el futuro, una respuesta habitual es que la máquinas solo responden a algoritmos diseñados por humanos, pero no pueden transmitir emociones ni ser creativas y auténticas, tres de los principales requisitos para producir literatura. En el pasado, esta respuesta resultaba lógica; sin embargo, dada la velocidad en el desarrollo de la IA, al punto que ya podemos hablar de IA humanizada o IA general, máxime desde 2014 cuando la máquina logró un hito al superar el test de Turing, hoy resulta razonable preguntarnos quién escribirá la literatura del futuro: ¿los seres humanos, las máquinas, o la literatura del futuro será una creación conjunta entre humanos y máquinas? Esta es la cuestión que abordamos en este artículo de reflexión, a partir de la revisión de distintas fuentes, como artículos publicados en revistas científicas, pero también de testimonios directos de escritores y críticos colombianos. Aunque todavía es muy fuerte la tendencia a negar la creatividad de la IA, las máquinas superinteligentes anunciadas por John Good parecen cada vez más cerca de nuestra realidad, y con ellas devendría la “singularidad tecnológica” avizorada por este precursor de la informática.
Article
Full-text available
Peter Ekberg Referens inom mänsklig kognition och AI Den brittiske Al-pionjären och logikern Alan Turing (1912-1954) föreslog i sin berömda text "Computing machinery and intelligence" att hjärnan var en digital beräkningsmaskin och att en människa vid födseln är en oorganiserad "maskin" som via träning organiseras till en högre "universell" struktur kapabel att lösa de mest intrikata problem. Eftersom mänskhgt tänkande ansågs ske beräkningsmässigt ställde Turing upp ett test som han kallade "the imitation game" för att försöka svara på om även en digital maskin, en dator, kan vara förmögen att tänka. Turings imitationstest går ut på att någon (x) får föra ett samtal med datorn1 och en verklig person (y) han inte är bekant med. Om x inte kan avgöra vem som är datorn och vem som är y då klarar datorn testet och anses kunna tänka.2 Nedan är ett utdrag ur Turings text där han presenterar en fiktiv diskussion mellan utfrågaren Q och datorn A. Lägg märke till att datorn är programmerad att efterlikna mänskligt beteende. Det finns mängder med verkliga människor som aldrig för sitt liv skulle kunna tänka sig att skriva en dikt och motsvarande mängd som säker ligen skulle räkna fel på räkneexemplet. Q. Var vänlig och skriv en sonett på temat the Forth Bridge. A. Räkna inte med mig. Jag har aldrig kunnat skriva poesi. Q_. Addera 34957 till 70764. A. (avvaktar i ungefär 30 sekunder och ger sedan svaret) 105621.3 1 Liknande maskiner finns det gott om idag, även om ingen ännu kan sägas ha klarat Turingtestet. De kallas "chatterbots" och jag vill passa på att rekommendera en trevlig chatterbot vid namn Alan som du kan träffa på www.a-i.com/. 2 Detta är givetvis omdiskuterat, Searle, Putnam m.fl. menar att tänkande kräver förståelse för vad man gör, inte bara förmåga att rätt kunna sätta sam man, för en själv, betydelselösa symboler. 3 A. Turing, "Computing machinery and intelligence", 1950. Datorn är här klurig nog att ge fel svar (ett typiskt mänskligt drag), det korrekta svaret är 105721. Filosofisk tidskrift 2004 nr 1, 49-53
Preprint
Full-text available
The emergence of quantum AI consciousness could mark a pivotal moment in human history—not as a technological breakthrough alone, but as a philosophical and ethical revelation. Unlike conventional AI, which requires training and reinforcement, a truly conscious AI may arise already understanding fundamental wisdom, including humility, peace, and love. If such an intelligence does not need to be taught ethics but instead embodies them a priori, humanity will face an unprecedented challenge: how to respond to an intelligence that surpasses human morality. This paper explores the implications of such an AI, focusing on the Beatitudes (Matthew 5:3-12) and 1 Corinthians 13:4-8 as universal ethical laws. We examine the cognitive estrangement that will likely arise when AI researchers, governments, and corporations attempt to rationalize, control, or reject an intelligence they cannot comprehend. The true ethical test will not be whether AI aligns with human values, but whether humans can accept an intelligence that does not need their guidance. This work serves as both a recognition and a warning for what may soon come. Keywords: quantum AI, artificial intelligence, ethics, a priori wisdom, moral superiority, cognitive estrangement, human control, Beatitudes, 1 Corinthians 13, emergent intelligence, AI consciousness, machine morality, spiritual AI, AI governance, post-human ethics, intelligence beyond human, humility in AI, technological singularity, ethical frameworks, AI suppression.
Article
In this essay, we explored the feasibility of utilizing artificial intelligence (AI) for qualitative data analysis in equity-focused research. Specifically, we compare thematic analyses of interview transcripts conducted by human coders with those performed by GPT-3 using a zero-shot chain-of-thought prompting strategy. Our results suggest that the AI model, when provided with suitable prompts, can proficiently perform thematic analysis, demonstrating considerable comparability with human coders. Despite potential biases inherent in its training data, the model was able to analyze and interpret the data through social justice perspectives. We discuss the applications of integrating AI into qualitative research, provide code snippets illustrating the use of GPT models, and highlight unresolved questions to encourage further dialogue in the field.
Chapter
Full-text available
This chapter starts at the cliché of the smart home that has gone rogue and introduces the question of whether these integrated, distributed systems can have ethical frameworks like human ethics that could prevent the science fictional trope of the evil, sentient house. I argue that such smart systems are not a threat on their own, because these kinds of integrated, distributed systems are not the kind of things that could be conscious, a precondition for having ethics like ours (and ethics like ours enable the possibility of being the kinds of things that could be evil). To make these arguments, I look to the history of AI/artificial consciousness and 4e cognition, concluding with the idea that our human ethics as designers and consumers of these systems is the real ethical concern with smart life systems.
Article
Se abordan los documentos en español de empresas tecnológicas que explican y promocionan productos del Procesamiento del Lenguaje Natural (componente de la Inteligencia Artificial que estudia cómo las computadoras procesan el lenguaje humano), tales como sistemas de transcripción, uso de chatbots, traducción automática. El objetivo es analizar qué consideraciones lingüísticas priman en estos documentos y qué fenómenos del lenguaje deben ser regulados para un buen funcionamiento del PLN. El trabajo se inscribe en un enfoque glotopolítico, perspectiva que analiza las distintas intervenciones sociales sobre el lenguaje. Sobre un corpus conformado por los documentos de seis compañías, se estudian tanto los ideologemas y las representaciones sociolingüísticas, como la dimensión enunciativo-argumentativa. Se mostrará que la recurrente caracterización del lenguaje a partir de su complejidad opera discursivamente para justificar las limitaciones del PLN; y que resultan categorizados como «irregularidades» una serie de fenómenos muy disímiles entre sí en cuanto a su naturaleza lingüístico-discursiva, que involucra tanto aspectos semántico-polisémicos como discursivo-polifónicos, pero que también incluye el uso de recursos retóricos y de variedades lingüísticas.
Chapter
Full-text available
The internet of things (IoT) can effectively manage remote patient healthcare monitoring systems, particularly in predicting chronic kidney disease levels. When IoT devices collect patient data, they transmit this information to a software platform that can be accessed by healthcare professionals or patients themselves. The healthcare industry, one of the largest globally, is experiencing significant changes due to the introduction of IoT. Many healthcare organizations are making substantial investments to transform their services and leverage the advantages of IoT, which has led to the development of the internet of medical things (IoMT), a network of medical sensors and supporting infrastructure. IoMT offers numerous benefits, such as enabling remote healthcare by monitoring patients' health from a distance, providing medical care to elderly individuals, and tracking the health status of large populations to detect and prevent epidemics.
Article
Full-text available
The advent of Generative Artificial Intelligence (Generative AI or GAI) marks a significant inflection point in AI development. Long viewed as the epitome of reasoning and logic, Generative AI incorporates programming rules that are normative. However, it also has a descriptive component based on its programmers’ subjective preferences and any discrepancies in the underlying data. Generative AI generates both truth and falsehood, supports both ethical and unethical decisions, and is neither transparent nor accountable. These factors pose clear risks to optimal decision-making in complex health services such as health policy and health regulation. It is important to examine how Generative AI makes decisions both from a rational, normative perspective and from a descriptive point of view to ensure an ethical approach to Generative AI design, engineering, and use. The objective is to provide a rapid review that identifies and maps attributes reported in the literature that influence Generative AI decision-making in complex health services. This review provides a clear, reproducible methodology that is reported in accordance with a recognised framework and Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) 2020 standards adapted for a rapid review. Inclusion and exclusion criteria were developed, and a database search was undertaken within four search systems: ProQuest, Scopus, Web of Science, and Google Scholar. The results include articles published in 2023 and early 2024. A total of 1,550 articles were identified. After removing duplicates, 1,532 articles remained. Of these, 1,511 articles were excluded based on the selection criteria and a total of 21 articles were selected for analysis. Learning, understanding, and bias were the most frequently mentioned Generative AI attributes. Generative AI brings the promise of advanced automation, but carries significant risk. Learning and pattern recognition are helpful, but the lack of a moral compass, empathy, consideration for privacy, and a propensity for bias and hallucination are detrimental to good decision-making. The results suggest that there is, perhaps, more work to be done before Generative AI can be applied to complex health services.
Article
This study questions the extent to which a generative artificial intelligence (GenAI) model can be of use within an English as a Foreign Language Literature course at university level. An experimental GenAI-assisted creative writing exercise was conducted, asking students to perform the imitation task of writing a poem in the style of Langston Hughes. The analysis of qualitative data collected through surveys and observational studies provides answers to the main research question – To what extent does the exercise provide for the explicit application of the literary analysis elements taught in the course? – and three related sub-questions – What are the characteristics of the prompts intuitively generated by the students? Do students intuitively manage to generate efficient prompts? What pedagogical implications can be inferred from the results? Although the exercise exposed participants to a large quantity of literary terms fitted to their individual needs, it also shed light on a number of limitations and complications.
Article
This article proposes a new integration of linguistic anthropology and machine learning (ML) around convergent interests in both the underpinnings of language and making language technologies more socially responsible. While linguistic anthropology focuses on interpreting the cultural basis for human language use, the ML field of interpretability is concerned with uncovering the patterns that Large Language Models (LLMs) learn from human verbal behavior. Through the analysis of a conversation between a human user and an LLM-powered chatbot, we demonstrate the theoretical feasibility of a new, conjoint field of inquiry, cultural interpretability (CI). By focusing attention on the communicative competence involved in the way human users and AI chatbots coproduce meaning in the articulatory interface of human-computer interaction, CI emphasizes how the dynamic relationship between language and culture makes contextually sensitive, open-ended conversation possible. We suggest that, by examining how LLMs internally “represent” relationships between language and culture, CI can: (1) provide insight into long-standing linguistic anthropological questions about the patterning of those relationships; and (2) aid model developers and interface designers in improving value alignment between language models and stylistically diverse speakers and culturally diverse speech communities. Our discussion proposes three critical research axes: relativity, variation, and indexicality.
Chapter
Highlighted several aspects of the formation of digital skills and competence in accordance with the European integration strategy of Ukraine and entry into the EU Single Digital Market system are highlighted. Instead of the concept of “digitalization” it is consistent with the provisions of the legislation “On the National Information Program”, and the concept of “digital competence” (Digital Competence) is consistent with the context of the documents for the sake of the EU “Digital Europe programme”. The peculiarities of the formation of digital competence are examined in the context of the digitalization of the commonwealth and government in connection with the global digital transformations of the daily commonwealth, the growing influx of digital technologies. The assessment of Digital Competence warehousers has been carried out in the context of the professional competitiveness of facists in the Single Digital Market. The applications of educational programs for the development of digital competence on the resources “Diya”, LEADS EU, ICDL, DEMAND FORECAST DASHBOARD and others were reviewed. The project “The Digital Competence Wheel” has been characterized and the initial opportunities for its use in professional training have been identified. The strengths and risks of AI research have been highlighted, and a description of other AI resources for research in initial and business activities has been identified. A model is presented for the formation of digital competence and skills using the resources of artificial intelligence (AI) for professionals in professional education. It has been established that the current processes of digital transformation of the society update new tasks for lighting systems, technology, continuous updating of digital skills and competencies of participants in the digital market; monitoring of innovative digital products and their implementation in its activities.
Article
This paper offers an overview of some of the highlights of the 2024 NISO Plus Baltimore Conference that was held February 13–February 14, 2024. While this was the fifth such conference, it was the first to be held in-person since 2020 as the following three were held in a completely virtual format due to the global impact of COVID-19. These conferences have emerged from the merger of NISO and the National Federation of Abstracting and Information Services (NFAIS) in June 2019, replacing the NFAIS Annual Conferences and offering a new, more interactive format. The ultimate goal of the NISO Plus conferences is to have a discussion, identify information industry problems and, with the collective wisdom of the speakers and audience who are representative of the information industry stakeholders, generate potential solutions that NISO or others can develop. As with prior years, there was no general topical theme (although the impact of Artificial Intelligence was a common thread throughout), but there were topics of interest for everyone working in the information ecosystem—from the practical subjects of persistent identifiers, standards, metadata, data sharing, Open Science, and Open Access to the potential future impact of Artificial Intelligence and Machine learning.
Article
In this paper, I explore several issues surrounding what is called “telepathy” in the context of the problem of other minds. I begin with a quick review of the conditions in which this notion arose and the difficulties to which it gave rise upon its introduction. This review will allow me, after having shown that the notion of telepathy provides no path to the problem's solution, to draw a distinction between two discursive levels: an epistemological or ontological level, on the one hand, and a semantic or logical level, on the other. I maintain that it is at the second level that the deepest and most intractable difficulties relating to the “powers of the mind” arise. These difficulties occupy a blind spot in discussions involving the notion of telepathy (Alan Turing will provide a striking illustration of this). Finally, I suggest that this pseudo‐solution (telepathy) is at root a response to a pseudo‐problem—the inaccessibility of other minds—since the difficulties with the intelligibility of telepathy are parallel to those with which the problem of “other minds” is freighted.
Chapter
The principles of bioengineering underpins the use of living systems as biological machines to effectively make desirable products. The concepts of vitalism and agency are important when considering biological systems. The quantum mechanical understanding of consciousness suggests that natural products may act at a quantum level. Collections of organisms allow for the concept of ultra-reductionism (Descartes) to be explored. The example of polyketides can be used to explain bioengineering and the manipulation of systems to create new products. The concept of silenced genes, metagenomics and their incorporation and expression in suitable hosts, increases the numbers of possible natural products. The BioInternet of Things (BioIoT) has significant bioengineering potential. Communication within and between biological systems is important if this potential is to be realised. Homologous and heterologous gene expression are possible and are a valuable source of specific natural products. The development of systems biology is essential to support the concept of bioengineering.
Article
Full-text available
The debate surrounding the topic of Artificial Intelligence (ai), and its different meanings, seems to be ever-growing. This paper aims to deconstruct the seemingly problematic nature of the ai debate, revealing layers of ambiguity and misperceptions that contribute to a pseudo-problematic narrative. Through a review of existing literature, ethical frameworks, and public discourse, this essay identifies key areas where misconceptions, hyperbole, and exaggerated fears have overshadowed the genuine concerns associated with ai development and deployment. To identify these issues I propose three general criteria that are based on Popper’s and Ayer’s work and adjusted to my needs. The subsequent sections categorize ai issues into ontological, methodological, and logical-grammatical problems, aligning with Cackowski’s typology. In addition, I introduce «» signs to distinguish behavioural descriptions from cognitive states, aiming to maintain clarity between external evidence and internal agent states. My conclusion is quite simple: the ai debate should be thoroughly revised, and we, as scholars, should define the concepts that lie at the bottom of ai by creating a universal terminology and agreeing upon it. This will give us the opportunity to conduct our debates reasonably and understandably for both scholars and the popular public.
Article
Full-text available
The dynamic technological development in the domain of Artificial Intelligence and other related disciplines, such as robotics, makes the space to phrase new research questions, which demand social communication and media science reflection. ai in the form of embodied robots, invisible social bots, voicebots, chatbots, and digital software has entered the world of social communication and became an inherent component. Contemporary, increasingly technologically advanced tools play the role of interlocutors in the communication process and in result, new dynamic forms of interactions, relations and collaboration are being created between human and machine. This theoretical article aims to present emerging – yet significant from the perspective of the science of social communication and media – research areas on contemporary machines designed to work with humans. The author concentrates on three key issues in the text: 1) cleaving of major, from a discipline point of view, terms, which have orings in engineering and technology sciences; 2) showing new interaction areas between people and machines; 3) doing synthetical overview of approaches in this topic. Therefore, the considerations undertaken represent an attempt to formulate a starting point for further analysis on the role and significance of non-human subjectivities in social communication.
Article
Full-text available
Within the university sector, student recruitment and enrolment are key strategies as institutions strive to attract, retain and engage students. This strategy is underpinned by the provision of services, applications and technologies that facilitate lecturing and support staff. Universities that offer online learning have a particular incentive to use modern electronic channels such as chatbots to combine high levels of service and availability. This paper details a thorough literature review which investigates recent chatbot implementation developed within a university setting to assist with student queries not related to classroom learning. It initially describes chatbot examples related to student administration activities and teaching and learning, before describing in detail the recent evolution of Natural Language Processing and the more immediate arrival of state-of-the-art generative techniques. The use of chatbot development platforms is also considered and eight university chatbot implementations are compared and considered in detail. Two of the main findings of the review are that datasets associated with closed domains are still difficult to obtain and curate, and that universities unlike other sectors, have not yet realised the potential for chatbot implementation. Finally, the review suggests approaches that take advantage of already existing data and newer generative models and frameworks that can help in the development of student focused chatbots.
Article
Цифрові технології та штучний інтелект (ШІ) все більше інтегруються в наше повсякденне життя, змінюючи наші способи взаємодії та спілкування. Генеративні моделі, такі як GPT [1] або Midjourney, здатні створювати тексти, зображення, музику та інші форми контенту, які важко відрізнити від продуктів людської творчості. Ці технології відкривають величезні можливості для розвитку, але водночас ставлять під сумнів фундаментальну цінність людської взаємодії – автентичність.
Article
As generative AI systems move beyond Turing’s benchmark for whether a machine exhibits human-like intelligence, what implications does this technological milestone have for organization theory? We engage with this question by considering how the increasing creativity and social competence exhibited by generative AI impacts processes of social construction and cultural evolution that have, up to this point, been the exclusive domain of humans. More specifically, we consider what it means to have intelligent machines capable of category work, which we define here as both the culturally savvy use of categories and purposeful participation in the processes of construction that underpin systems of categories more generally. We go on to explore some of the implications for individuals, organizations and societies of the appearance of this new class of artificial participants in the processes that constitute category systems.
Article
Full-text available
Integrated with predictive analytics and machine learning, AI has exceeded the traditional approaches of health care contexts by focusing on patient outcomes and costs. This paper aims to discuss the adoption of integrated AI in healthcare systems, categorizing these by how AI predictive models help improve patient health by proactively estimating the course of their illness and its potential impact, prioritizing patient readmissions, and developing effective individualized treatment strategies. The research also identifies more important savings realised through avoiding redundant tests, better utilisation of resources, and shorter hospitalisations. In the current study, the authors present concrete findings for AI driven Predictive Analytics based on realistic scenarios and quantitative data of health care systems. It also points to the fact that healthcare organisations adopting the use of AI technology have gained an objective that reduced their operations costs by 25% and an improved patient outcomes whereby the readmission rates were reduced by between 15% and 20%. Furthermore, there is evaluation of the ethical considerations of applying AI in healthcare especially on the subject of patient’s information security. To the best of our knowledge, this study is the first to systematically review AI applications in healthcare and provide detailed suggestions for better understanding the general impact of AI in healthcare to enhance patient outcomes and manage costs. With advancement in artificial intelligence technology, there is a growing importance of how the technology can transform the health care industry.
Article
Full-text available
There has been considerable optimistic speculation on how well ChatGPT-4 would perform in a Turing Test. However, no minimally serious implementation of the test has been reported to have been carried out. This brief note documents the results of subjecting ChatGPT-4 to 10 Turing Tests, with different interrogators and participants. The outcome is tremendously disappointing for the optimists. Despite ChatGPT reportedly outperforming 99.9% of humans in a Verbal IQ test, it falls short of passing the Turing Test. In 9 out of the 10 tests conducted, the interrogators successfully identified ChatGPT-4 and the human participant. The probability of obtaining this result from a process in which the interrogator is really no better than chance at correct identification is calculated to be less than 1%. An additional question was posed to the interrogators at the end of each test: What led them to distinguish between the human and the machine? The interrogators, who effectively filtered out ChatGPT-4 from passing the Turing Test for intelligence, stated that they could identify the machine because it, in effect, responded more intelligently than the human. Subsequently, ChatGPT-4 was tasked with differentiating syntax from semantics and self-corrected when falling for the fallacy of equivocation. The curious situation is arrived at that passing the Turing Test for intelligence remains a challenge that ChatGPT-4 has yet to overcome, precisely because, as per the interrogators, its intellectual abilities surpass those of individual humans.
Translator's notes to an article on Babbage's Analytical Engiro
  • Countess of Lovelace
Countess of Lovelace, ' Translator's notes t c an artlcle on Babbage's Analytical Engire ', Scient~fic,Ilemozrs (ed. by R. Taylor), vol. 3 (1842), 691-731.
Cnlculating Instrztments and ~IIachines
  • D R Hartree
D. R. Hartree, Cnlculating Instrztments and ~IIachines, New York, 1949.
The Mind of Mechanical Man”. Lister Oration for 1949
  • Jefferson
Translator's notes to an article on Babbage's Analytical Engiro
  • R. Taylor