Article

Computing Machinery and Intelligence

... This perspective goes with a mechanical view of the living, adopted to fit the Laplacian nature of the Turing machine, against the view of Turing himself (Turing 1950, 1952). This view and the Weismannian schema conflict with developments in physics since Newton, where reciprocal causality is a principle. ...
... Classical theoretical computer science emanates from mathematics, and the mathematics used is essentially discrete. This mathematics corresponds to situations where measurement can in principle be perfect and the determination is Laplacian, as Turing himself underlines (Turing 1950). On the other hand, we have introduced the concept of constraint precisely to account for the limits of the mathematical description of biological objects. ...
... This was shortly followed by McCulloch and Pitts, who formalized the computational model of an artificial neuron [12]. In 1950, Alan Turing published a landmark paper [13] in which he speculated about the possibility of creating machines that can think by themselves. He proposed a way to verify if a machine was able to think, called the Turing test. ...
... This method can be data-free or require a small calibration set to fine-tune the quantized network and minimize the loss of information between the high-precision network and the new low-precision one. The general concept of PTQ is represented in Figure 2.13. ...
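As a concrete illustration of the quantization step that PTQ performs, here is a minimal NumPy sketch of per-tensor affine (scale and zero-point) int8 quantization followed by dequantization, so the induced error can be inspected. This is a generic illustration of the idea, not the thesis' implementation; the tensor shape and distribution are arbitrary.

```python
# Minimal sketch of post-training quantization (PTQ) on one weight tensor:
# map float32 values to int8 with an affine scale/zero-point, dequantize,
# and measure the information loss the snippet refers to.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Affine per-tensor quantization of a float array to int8."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = np.round(-w_min / scale) - 128        # maps w_min to -128
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.05, size=(256, 128)).astype(np.float32)
    q, scale, zp = quantize_int8(weights)
    recovered = dequantize(q, scale, zp)
    # Mean squared error gives a rough view of the precision lost by PTQ.
    print("quantization MSE:", float(np.mean((weights - recovered) ** 2)))
```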
... A representation of the per-layer sparsity level for both pruning contexts, in the case of a 20-layer network being pruned to 50% sparsity, can be found in Figure 3.13. It can be observed that local pruning leads to all layers having the same sparsity level, as represented in Subfigure 3.13a, and that global pruning leads to layers having different sparsity levels, as represented in Subfigure 3.13b. ...
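The local/global distinction can be reproduced with PyTorch's built-in pruning utilities. The sketch below uses an arbitrary three-layer network (sizes are illustrative, not from the thesis), prunes 50% of the weights either per layer or network-wide, and prints the resulting per-layer sparsity.

```python
# Contrast local vs. global unstructured pruning at 50% sparsity using
# torch.nn.utils.prune; layer sizes below are purely illustrative.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def make_net():
    return nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                         nn.Linear(128, 32), nn.ReLU(),
                         nn.Linear(32, 10))

def sparsity(layer):
    w = layer.weight
    return float((w == 0).sum()) / w.numel()

# Local pruning: every layer is pruned to exactly 50% sparsity.
local_net = make_net()
for m in local_net:
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.5)

# Global pruning: 50% of all weights are removed network-wide, so individual
# layers end up with different sparsity levels.
global_net = make_net()
params = [(m, "weight") for m in global_net if isinstance(m, nn.Linear)]
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.5)

print("local :", [round(sparsity(m), 2) for m in local_net if isinstance(m, nn.Linear)])
print("global:", [round(sparsity(m), 2) for m in global_net if isinstance(m, nn.Linear)])
```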
Thesis
Since their resurgence in 2012, Deep Neural Networks have become ubiquitous in most disciplines of Artificial Intelligence, such as image recognition, speech processing, and Natural Language Processing. However, over the last few years, neural networks have grown exponentially deeper, involving more and more parameters. Nowadays, it is not unusual to encounter architectures involving several billion parameters, while most contained mere thousands less than ten years ago. This generalized increase in the number of parameters makes such large models compute-intensive and essentially energy-inefficient, which makes deployed models costly to maintain and their use in resource-constrained environments very challenging. For these reasons, much research has been conducted to provide techniques that reduce the amount of storage and computation required by neural networks. Among those techniques, neural network pruning, which consists in creating sparsely connected models, has recently been at the forefront of research. However, although pruning is a prevalent compression technique, there is currently no standard way of implementing or evaluating novel pruning techniques, making comparison with previous research challenging. Our first contribution thus concerns a novel description of pruning techniques, developed along four axes, which allows us to unequivocally and completely define currently existing pruning techniques. Those components are: the granularity, the context, the criteria, and the schedule. Defining the pruning problem according to those components allows us to subdivide it into four mostly independent subproblems and also to better identify potential research lines. Moreover, pruning methods are still at an early development stage and are primarily designed for the research community. Indeed, most pruning works are usually implemented in a self-contained and sophisticated way, making it troublesome for non-researchers to apply such techniques without having to learn all the intricacies of the field. To fill this gap, we propose the FasterAI toolbox, intended to be helpful to researchers eager to create and experiment with different compression techniques, but also to newcomers who wish to compress their neural networks for concrete applications. In particular, the sparsification capabilities of FasterAI have been built according to the previously defined pruning components, allowing for a seamless mapping between research ideas and their implementation. We then propose four theoretical contributions, each one aiming at providing new insights and improving on state-of-the-art methods along each of the four identified description axes. Those contributions have also been realized using the previously developed toolbox, thus validating its scientific utility. Finally, to validate the applicability of pruning techniques, we selected a use case: the detection of facial manipulation, also called DeepFake detection. The goal is to demonstrate that the developed tool, as well as the different proposed scientific contributions, is applicable to a complex, real-world problem. This last contribution is accompanied by a proof-of-concept application providing DeepFake detection capabilities in a web-based environment, allowing anyone to perform detection on an image or video of their choice. The Deep Learning era has emerged thanks to considerable improvements in high-performance hardware and access to large amounts of data.
However, since the decline of Moore's Law, experts suggest that we may see a shift in how hardware is conceptualized, going from task-agnostic to domain-specialized computation, leading to a new era of collaboration between the software, hardware, and machine learning communities. This new quest for efficiency will thus undeniably involve neural network compression techniques, and sparse computation in particular.
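The four description axes named above (granularity, context, criteria, schedule) can be pictured as a small configuration object. The sketch below is a hypothetical illustration of that decomposition, not the actual FasterAI API; the class and field names are invented for the example.

```python
# Hypothetical illustration (not the FasterAI API) of the four description
# axes from the thesis: granularity, context, criteria, and schedule.
from dataclasses import dataclass
from typing import Callable, Literal

@dataclass
class PruningSpec:
    granularity: Literal["weight", "kernel", "filter", "layer"]  # what is removed
    context: Literal["local", "global"]                          # where sparsity is balanced
    criteria: Callable[..., float]                               # how importance is scored
    schedule: Callable[[int, int], float]                        # target sparsity per step

# Example: magnitude-based, globally balanced, one-shot pruning to 90%.
one_shot = PruningSpec(
    granularity="weight",
    context="global",
    criteria=lambda weight: abs(weight),      # L1 magnitude as importance score
    schedule=lambda step, total: 0.9,         # constant target from the first step
)
```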
... Zuboff (1988) showed that the digital world is fundamentally different from the analogue one. Turing (1950), a prominent English computer scientist, challenged computer science to develop the digital world into a perfect illusion of the physical-biological world, so that the digital could, so to speak, take the place of the analogue world. ...
... helper, termed the Turing test (Turing 1950). Dreyfus & Dreyfus (1986) have also contributed to the debate, denying the possibility of artificial intelligence at the level of humanity's "highest" intelligence, which they associate with the exercise of expertise, where rules and previous experience no longer suffice for the new situation the expert faces. ...
Article
All users of digital media move within the information society, where there are new opportunities and rules of the game for value creation compared with those known from the analogue world. The article examines the following: Digital agents that have emerged over the past decade are the new "inhabitants" of the information society; they have no direct reference to authentic persons. How can we understand their significance for both analogue and digital communication and for social relations? (Section A) What form of value creation is linked to digital agents and agency? (Section B) What value-creating opportunities and consequences does digital agency entail: examples (Section C) What are the socio-economic development possibilities of digital agency? (Section D) What regulatory challenges do digital agents pose to companies and to society? (Section E) The article addresses these questions on the basis of social-science theories of digitalisation, value creation and regulation.
... The concept of thinking machines has in fact existed for centuries, from Descartes' automata to Charles Babbage's Analytical Engine, and to Lady Lovelace's view of the Analytical Engine as able to do only "whatever we know how to order it to perform" (2). Alan Turing asked "Can machines think?" in 1950 and created the "Imitation Game" (now called the Turing Test), triggering a concentration of attention on this topic (3). Although the concept of AI was first used in 1955 by Prof. Dr. John McCarthy of the Stanford University Computer Science Department, there is today no consensus on its meaning and definition (4). ...
... Therefore, imitation can be considered essential for the development of human intelligence [11]. Turing even suggested imitation to be a central capability on which to test an artificial intelligence in the Imitation Game [43]. ...
... Adversarial Learning. Similar to the Imitation Game [43], adversarial methods introduce an opponent or adversary, attempting to fool the decision-making instance, typically a classifier, by injecting adversarial samples. Adversarial samples refer to malicious or noisy samples specifically designed to attack a certain model [13,18]. ...
Preprint
We propose discriminative reward co-training (DIRECT) as an extension to deep reinforcement learning algorithms. Building upon the concept of self-imitation learning (SIL), we introduce an imitation buffer to store beneficial trajectories generated by the policy, selected according to their return. A discriminator network is trained concurrently with the policy to distinguish between trajectories generated by the current policy and beneficial trajectories generated by previous policies. The discriminator's verdict is used to construct a reward signal for optimizing the policy. By interpolating prior experience, DIRECT is able to act as a surrogate, steering policy optimization towards more valuable regions of the reward landscape and thus learning an optimal policy. Our results show that DIRECT outperforms state-of-the-art algorithms in sparse- and shifting-reward environments, being able to provide a surrogate reward to the policy and to direct the optimization towards valuable areas.
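One ingredient of the approach described above, the imitation buffer that retains only the highest-return trajectories, can be sketched in a few lines. This is a generic, illustrative data structure (a fixed-capacity min-heap keyed by episode return), not the authors' implementation.

```python
# Minimal, runnable sketch of an imitation buffer that keeps only the
# highest-return trajectories seen so far; purely illustrative.
import heapq
import itertools

class ImitationBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap = []                       # (return, tie-breaker, trajectory)
        self._counter = itertools.count()     # avoids comparing trajectories directly

    def add(self, trajectory, episode_return: float) -> None:
        item = (episode_return, next(self._counter), trajectory)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif episode_return > self._heap[0][0]:
            heapq.heapreplace(self._heap, item)   # drop the worst stored episode

    def sample(self):
        return [traj for _, _, traj in self._heap]

buf = ImitationBuffer(capacity=3)
for ret, traj in [(1.0, "ep1"), (5.0, "ep2"), (0.5, "ep3"), (7.0, "ep4")]:
    buf.add(traj, ret)
print(buf.sample())    # keeps the three highest-return episodes
```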
... ELIZA was developed in 1966 at the Massachusetts Institute of Technology (MIT) Artificial Intelligence Laboratory by Joseph Weizenbaum, who employed the pattern-matching and substitution methodology to simulate a chatbot experience [9]. It was the first program to attempt the imitation game proposed by Alan Turing to evaluate a machine's ability to behave intelligently in a way similar to humans [10]. Many other chatbots followed, such as PARRY (1972), which was developed to simulate paranoid schizophrenic behaviour, Alice (1995), Cleverbot (1997), Mitsuku (2005) and many others. ...
... Is the research process well documented? Is the study reproducible? ...
Article
Full-text available
The idea of developing a system that can converse and understand human languages has been around since the 1200s. With the advancement of artificial intelligence (AI), Conversational AI came of age in 2010 with the launch of Apple's Siri. Conversational AI systems leverage Natural Language Processing (NLP) to understand and converse with humans via speech and text. These systems have been deployed in sectors such as aviation, tourism, and healthcare. However, the application of Conversational AI in the architecture, engineering and construction (AEC) industry is lagging, and little is known about the state of research on Conversational AI. Thus, this study presents a systematic review of Conversational AI in the AEC industry to provide insights into current developments, and conducts a Focus Group Discussion to highlight challenges and validate areas of opportunity. The findings reveal that Conversational AI applications hold immense benefits for the AEC industry, but the field is currently underexplored. The major challenges behind this under-exploration are highlighted and discussed for intervention. Lastly, opportunities and future research directions for Conversational AI are projected and validated, which would improve the productivity and efficiency of the industry. This study presents the status quo of a fast-emerging research area and serves as the first attempt in the AEC field. Its findings provide insights into the new field and will be of benefit to researchers and stakeholders in the AEC industry.
... The work that the mathematician Alan Mathison Turing, the founder of computer science, carried out largely while serving as a cryptologist in the Second World War is a turning point for both computer science and artificial intelligence. Turing's two important works are "On Computable Numbers, with an Application to the Entscheidungsproblem", published in 1937, and "Computing Machinery and Intelligence", published in 1950 (4)(5). In the first of these he put forward the idea of the Turing Machine, which contains both an algorithm and a stored program; in the second he described the computer as a universal machine, introduced the Turing Test, in which a human interrogator, without seeing the other party, tries to determine whether the answers to their questions come from a computer or a human, and raised the question "can machines think?" (1,(4)(5)(6). ...
Article
Full-text available
Studies in the field of artificial intelligence, and the products that have emerged in this context, come to the fore as an important and popular technology used by people today. It is therefore essential to understand and follow the changes and transformations this technology brings to human life, socio-cultural structures, and nature and the universe, and, within this framework, to predict the reflections and consequences of its use in different fields. An important guiding element that can set the framework for the shaping of artificial intelligence technology, and for handling its inclusion in different areas of life, is the bioethics perspective. The article provides conceptual, historical and etymological preliminary information about artificial intelligence that enables clarification of the concepts involved in examining and discussing artificial intelligence in terms of bioethics.
... The contemporary and traditional perceptions, metaphors, and relationships of basic cognitive objects are contrasted and elaborated in this section. The taxonomy of cognitive objects represented in the brain can be classified into four forms [6], [7], [25], [39], [41], [42], [47], [48], [49], [52], [58], [68], [76], [99] as illustrated in Figure 2. ...
... Intelligence is the top-level and most complex cognitive object in the brain, which may be classified into the categories of reflexive, perceptive, cognitive, and instructive intelligence [1], [6], [17], [24], [26], [29], [36], [39], [41], [52], [58], [59], [60], [62], [65], [66], [67], [68], [69]. Intelligence is the ultimate power generated by the brain, aggregated from data (sensory), information (cognition), and knowledge (comprehension). ...
Conference Paper
The emergence of abstract sciences as a counterpart of classic concrete sciences is presented in this work. The framework of abstract sciences encompasses data, information, knowledge, and intelligence sciences from the bottom up. It is found that intelligence is the ultimate level of cognitive objects generated in human brains, aggregated from data (sensory), information (cognition), and knowledge (comprehension). However, there is a lack of rigorous studies and coherent theories towards a theoretical framework of abstract sciences as the counterpart of classical concrete sciences. This paper explores the cognitive and mathematical models of abstract mental objects in the brain. Their taxonomy and cognitive foundations are explored. A set of mathematical models of data, information, knowledge, and intelligence is formally created in intelligent mathematics. Based on the cognitive and mathematical models of these cognitive objects, the formal properties and relationships of contemporary data, information, knowledge, and intelligence sciences are rigorously explained.
... The groundwork for AI was laid when diplomatic history met the history of science, with Alan Turing's automaton ("the Turing Machine") cracking the Enigma, a machine used by the German armed forces to send encrypted messages securely in World War II. In spite of the major evolution since then, Turing's question "Can machines think?" [34] still guides research in machine intelligence. ...
Thesis
Cancer is a leading cause of death worldwide, making it a major public health concern. Different biomedical imaging techniques accompany both research and clinical efforts towards improving patient outcomes. In this work we explore the use of a new family of imaging techniques, static and dynamic full-field optical coherence tomography, which allow for faster tissue analysis than gold-standard histology. In order to facilitate the interpretation of this new imaging, we develop several exploratory methods based on data curated from clinical studies. We propose an analytical method for a better characterization of the raw dynamic interferometric signal, as well as multiple diagnostic support methods for the images. Accordingly, convolutional neural networks were exploited under various paradigms: (i) fully supervised learning, whose prediction capability surpasses the pathologist's performance; (ii) multiple instance learning, which accommodates the lack of expert annotations; (iii) contrastive learning, which exploits the multi-modality of the data. Moreover, we focus strongly on method validation and on decoding the trained "black box" models, to ensure their good generalization and ultimately to find specific biomarkers.
... Sarthak et al. [14] describe a chatbot as a conversational agent in which a computer is designed to simulate an intelligent conversation. It takes user input in several forms, such as voice, sentiments, etc. ...
... The comparison between human and machine has long been an object of study, given that [TURING 1950] was already prompting reflections such as the question "Can machines think?", proposing the Imitation Game. In his studies, the author expected that machines and humans would eventually compete in purely intellectual fields, yet he closed with the statement: "We can only see a short distance ahead, but we can see plenty there that needs to be done" [TURING 1950]. ...
... Neuroscience has made fundamental contributions to new mathematical algorithms, biological computations and mathematical structures, as well as to methods that are decisive for the creation of artificial intelligence systems. Today, artificial intelligence has worked alongside neuroscience in the fields of video games, language translation and the creation of art; however, there is still a gap between human intelligence and machine intelligence (Turing, 1950). ...
Article
Full-text available
«El neuromarketing. Un enfoque ético-legal» (Neuromarketing: an ethical-legal approach) aims to go beyond a mere analysis of the concept and foundations of a technique that has emerged thanks to technological advances that have brought together disciplines as disparate as neuroscience and marketing. Despite its youth, neuromarketing has been studied mainly with regard to its working methodology and its techniques; however, there is no unanimity on the creation of codes of conduct or norms to address the new dangers and ethical commitments that may arise from its fraudulent use. The aim of this essay is to arrive, after analysing the possible ethical violations and the moral concerns that its use generates in society, at an effective path of regulation.
... Several limitations still exist in our study. First, this work lacked subjective assessment such as radiation-oncologist evaluation or a Turing imitation test [33]. Second, the diversity of CT scanner machines, image acquisition protocols, standard contouring, and even tumor staging hampered meaningful comparison of our results with other CNN models. Overall, increasing the amount of training data from different centers using different techniques could make the DL-based model more robust, improving the segmentation accuracy. ...
Preprint
Full-text available
Objective: A deep learning (DL) based approach aims to construct a full workflow solution for cervical cancer with external beam radiotherapy (EBRT) and brachytherapy (BT). The purpose of this study was to evaluate the accuracy of EBRT planning structures derived from DL-based auto-segmentation, which would be one component of the cervical cancer workflow. Methods: Clinical target volumes (CTVs) and organs at risk (OARs) of 75 cervical cancer patients were manually delineated by senior radiation oncologists and auto-segmented by the DL-based method. The accuracy of the DL-based auto-segmented contours was evaluated using geometric and dosimetric metrics, including the Dice similarity coefficient (DSC), 95% Hausdorff distance (95% HD), Jaccard coefficient (JC) and dose-volume indices (DVI). The correlation between geometric metrics and dosimetric differences was assessed by Spearman's correlation analysis. Results: The DL-based auto-segmentation produced similar geometric performance for the right kidney, left kidney, bladder, right femoral head and left femoral head, with mean DSC of 0.88-0.93, 95% HD of 1.03-2.96 mm and JC of 0.78-0.88. Wilcoxon's signed-rank test indicated significant dosimetric differences between manual and DL-based auto-segmentation for the CTV, spinal cord and pelvic bone (P<0.001). A strong correlation between the mean dose of the pelvic bone and its 95% HD (R=0.843, P=0.000) was found in the Spearman's correlation analysis, while the remaining structures showed only a weak link between dosimetric differences and all geometric metrics. Conclusion: Auto-segmentation achieved satisfactory agreement for most EBRT planning structures, although the dosimetric consistency of the CTV was a concern. DL-based auto-segmentation is an essential component of the cervical cancer workflow and would generate accurate contouring.
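Two of the geometric metrics named in the abstract, the Dice similarity coefficient and the Jaccard coefficient, are straightforward to compute from binary masks. The NumPy sketch below uses synthetic square masks purely for illustration; it is not the study's evaluation code.

```python
# Minimal numpy sketch of two segmentation metrics from the abstract:
# Dice similarity coefficient (DSC) and Jaccard coefficient (JC).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / (union + 1e-8)

# Synthetic "manual" vs. "auto-segmented" masks, shifted by a few pixels.
manual = np.zeros((64, 64), dtype=bool); manual[16:48, 16:48] = True
auto   = np.zeros((64, 64), dtype=bool); auto[20:52, 16:48]   = True
print(f"DSC = {dice(manual, auto):.3f}, JC = {jaccard(manual, auto):.3f}")
```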
... As applied to a person, subjectivity, as a rule, is substantiated either through the attributive properties of consciousness (will, the ability to think and make autonomous decisions, self-awareness), or through an axiological approach. In the first option, we cannot assert that a weak AI thinks, which has been long discussed in the philosophy of AI [1,2,22,23]. Moreover, some authoritative researchers deny the possibility of a positive answer to the question of consciousness even in the prospect of creating a strong AI [1]. ...
Article
Full-text available
Active development of artificial intelligence (AI) technology raises the problem of integrating this phenomenon into legal reality, the limits of using this technology in social practices regulated by law and, ultimately, the development of an optimal model for the legal regulation of AI. This article focuses on the problem of developing the legal content of the concept of AI, including some methodological and ontological foundations of such work. The author suggests certain invariant characteristics of AI significant for legal regulation, which, if adopted by the legal scientific community, could be used as a scientifically grounded basis for constructing specific options for legal regulation that correspond to the needs of a particular sphere of social practice. The author believes that a scientifically grounded legal concept of AI is largely able to determine the direction and framework of applied legal research on the multifaceted problems of using AI technology in social interactions, including the administration of justice, and to separate the related legal issues and problems from issues of an ethical, philosophical, technological and other nature.
... Alan Turing's posing, in his article published in 1950, of the questions of whether machines can think and whether it is possible to build machines that can imitate humans to a good degree is seen as the beginning of the concept of artificial intelligence (Turing, 2009). Artificial intelligence is a science that performs learning, reasoning and acting, inspired by the human nervous system, and that rests on a set of computing technologies. ...
Article
We can define artificial intelligence systems as systems that serve the basic roles of society today, benefit us in many application areas, and can make autonomous decisions in the coming years, perhaps without the need for humans. In order for artificial intelligence systems to work with more and more autonomy, that is, with less human control, ethical structures must first be established. Ethical AI is AI that adheres to well-defined ethical guidelines regarding core values such as individual rights, privacy, equality and non-prejudice. Artificial intelligence ethical practices will help organizations operate more efficiently, produce cleaner products, reduce harmful environmental impacts, increase public safety and improve human health. Unethical artificial intelligence applications may cause serious harmful effects for society. The most important solution to responsibly manage these negative effects and direct artificial intelligence systems for the benefit of society is the development of ethical artificial intelligence systems. In recent years, studies on the ethics of artificial intelligence by academia, industry, government and civil society have begun to provide a basis. In this study, the ethics of artificial intelligence and its impact on society, labor market, inequality, privacy and prejudice are discussed, possible risks and threats are pointed out, and suggestions are made for solutions.
... It follows that, when models include cells as elementary components, the latter are described by ad hoc hypotheses that we reviewed elsewhere (Montévil, Speroni et al. 2016). This modus operandi is properly interpreted as imitation (Turing 1950); stricto sensu mathematical modeling must be based on the theoretical principles of the discipline being studied. Below we describe the mathematical model both from the theoretical framework provided by the principles and ... [Figure 4: Schema of the determination of the system.] ...
... The term "artificial intelligence" (AI) was first coined by John McCarthy in 1956. Alan Turning's notion of machines having the ability to do intelligent things [31] such as playing chess was realized in 1997 by IBM's Deep Blue which defeated then world chess champion, Gary Kasparov. The field of AI includes a variety of data accumulation and processing techniques such as Machine Learning (ML), Natural Language Processing (NLP), expert system, image recognition, deep learning etc. ...
Article
Monitoring of critical infrastructure for Structural Health Monitoring (SHM) is vital for the detection of structural damage (cracks or voids) at an initial stage, thus increasing the structures' serviceable life. The traditional methods of visual inspection to detect damage are time-consuming and less efficient. Sensor-based Non-Destructive Techniques (S-NDTs) such as ground-penetrating radar, acoustic emission, laser scanning, etc. for detection and analysis are extensively used to monitor structural health but are expensive and time-consuming. Recent advancements in Artificial Intelligence (AI) techniques such as Computer Vision (CV) assisted with Convolutional Neural Networks (CNN), Machine Learning (ML) and Deep Learning (DL) in Structural Health Monitoring (SHM) provide more accurate data classification and damage detection systems. This paper provides a state-of-the-art review of the applications of AI-based techniques in SHM. A detailed study on vision data collection, processing techniques, and segmentation (feature, model, and pattern) is discussed, along with their limitations. The application of AI techniques for SHM to detect, isolate, and identify data anomalies, along with biomimetic algorithms, is reviewed to assist in future research directions for life-critical infrastructure monitoring.
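As a rough illustration of the CNN-based damage classifiers this review surveys, the following PyTorch sketch defines a tiny binary classifier for grayscale image patches. The layer sizes and the 64x64 input are assumptions made for the example, not taken from any specific paper in the review.

```python
# Tiny, illustrative CNN for classifying image patches as {no damage, crack}.
import torch
import torch.nn as nn

class CrackClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),                    # two classes: no damage / crack
        )

    def forward(self, x):
        return self.head(self.features(x))

model = CrackClassifier()
dummy_patch = torch.randn(4, 1, 64, 64)          # batch of grayscale patches
print(model(dummy_patch).shape)                  # torch.Size([4, 2])
```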
... Turing & Haugeland's [11] Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. ...
Article
Full-text available
In today’s society, many families do not have children for various reasons, including the pressure brought by childcare and the fact that double-income families do not have time to raise children; novice parents in particular may shy away from it. For this reason, we built a chatbot to solve such problems. Through the questionnaires collected in our study, we found that most novice parents use INFANBOT when their children cry, so we use a baby’s cry as the example here. When parents face a baby’s cry, they can tap the button “burst into tears!”, and the chatbot will immediately tell them how to address the problem. The authors designed a chatbot system named INFANBOT that combines concepts of infant health education to alleviate parents’ troubles with work and parenting. It also reduces caregivers’ anxiety during the parenting process and gives them correct parenting knowledge. INFANBOT is a real-time system that can provide real-time services to novice parents. Additionally, when users interact with INFANBOT, the system records the problems encountered and invites them to fill in a questionnaire at an appropriate time to improve the system. After a preliminary study, we found that INFANBOT can solve most of the problems encountered by users. Statistically, all respondents gave scores above 4.5 on a Likert-type five-point scale. Therefore, most respondents felt that INFANBOT could solve their problems effectively and quickly. The INFANBOT system developed in this study was designed to meet the needs of its users; its design met those needs and can help users with their parenting troubles. This study also has positive effects and contributions to society: 1. After using INFANBOT, users can effectively improve their knowledge of children’s health education. 2. After using INFANBOT, users recognize the professionalism of the healthcare knowledge provided by the robot, which can effectively improve the user’s parenting problems. 3. Most users are satisfied with the positive results after using INFANBOT, so novice parents and parents who feel anxious about parenting can quickly search for common parenting problems on the LINE community software.
... It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable." 1,2 AI is a rapidly developing field, and encompasses a number of subfields and applications, including machine learning (ML), natural language processing, computer vision, robotics, and expert systems. 3 Among them, ML is the subfield that has the most applications in the biomedical field. ...
... "¿Las máquinas pueden pensar?". Esta pregunta se la formuló Alan Turing 1 , considerado como el padre de la computación, ya en el año 1950 (Turing, 1950). Al mismo tiempo, formuló un pequeño juego al que acuñó el nombre de "juego de las imitaciones". ...
Article
Full-text available
"Can machines think?" This question was posed by Alan Turing, considered the father of computing, as early as 1950 (Turing, 1950). At the same time, he formulated a small game which he named the "imitation game". The game consists of a person A interacting with a machine B and a person C, and trying to guess which of them is the machine. A has no visual or auditory access to B or C: A can only communicate with both through a terminal. The name "imitation" refers to the fact that machine B will try to replicate the behaviour of person C. This game has come to be known as the Turing Test, which has also extended the basic idea of the imitation game: can we distinguish between a person and a machine, for example, during a text-message conversation? And during a phone call? One day we may even have to ask ourselves whether we can distinguish a person from a robot.
... This is not an easy question to answer. The well-known Turing test [37] has its limitations, as do most other tests proposed. The Turing Test is meant to be a kind of zero-knowledge proof (ZKP) as used in cryptography, where the assumption is that the proof of something may be established by a simpler verification process, to an arbitrary level of certainty. ...
Preprint
The ‘brain-mind-intelligence’ structure may be considered analogous to a ‘computer-operating system (OS)-application’ construct. Software development would be far more difficult without the benefit of abstractions like file and memory handles, and graphical user interfaces, implemented in operating systems. The easy access to application-level semantics, provided by the infrastructure of the OS, simplifies application development. Abstractions can play a similar role in producing intelligent systems. The power of abstractions may be determined by their ability to unify the treatment of several types of higher concepts. Unifying various concepts in the domain of intelligence, such as problems, solutions, objects, and emotions, through abstractions will be of immense value. This paper shows how ‘higher-level perceptions’ could be used to serve this objective and construct an OS-like infrastructure. With the infrastructure in place, ordinary software objects developed and deployed using standard methods become accessible using the semantics provided by the infrastructure. Now, after brief training exercises, the objects would be available for intelligent use. The focus, therefore, is on the design of the infrastructure that could play a mind-like role in humans. The design uses a new, non-symbolic, non-connectionist, transmutable method of representation that tackles the so-called ‘frame’ and ‘grounding’ problems of artificial intelligence. The infrastructure enables the conversion of sensory inputs to transmutable representations amenable to semantic modifications, to flexibly determine actions to perform on software objects developed externally. Furthermore, the infrastructure serves to ground and functionally manage the transmutable, connectionist, and symbolic systems. Thus, the new model plays a critical role in building intelligent systems that overcome major problems in learning and action selection, including brittleness associated with the limited semantics of purely symbolic and connectionist designs. The new model is then evaluated in terms of features, against criteria set forth independently by Newell, Sun, Vernon, and others. In addition, a new measure of intelligence introduced in this paper is used to quantitatively compare the proposed model with some others developed over the years.
... The idea of allowing the computer to evolve programs itself can be traced back to the works of Turing (1950), Samuel (1959) or von Neumann (1966) in the 1950s, and saw its first applications in the 1980s by Forsyth (1981), Cramer (1985) and Hicklin (1986). Later, genetic programming became popular and well known to the public thanks to John Koza's contributions (Koza 1990, 1992) in the 1990s. ...
Article
Full-text available
In this work we aim to empirically characterize two important dynamical aspects of GP search: the evolution of diversity and the propagation of inheritance patterns. Diversity is calculated at the genotypic and phenotypic levels using efficient similarity metrics. Inheritance information is obtained via a full genealogical record of evolution as a directed acyclic graph and a set of methods for extracting relevant patterns. Advances in processing power enable our approach to handle previously infeasible graph sizes of millions of arcs and vertices. To enable a more comprehensive analysis we employ three closely-related but different evolutionary models: canonical GP, offspring selection and age-layered population structure. Our analysis reveals that a relatively small number of ancestors are responsible for producing the majority of descendants in later generations, leading to diversity loss. We show empirically across a selection of five benchmark problems that each configuration is characterized by different rates of diversity loss and different inheritance patterns, in support of the idea that each new problem may require a unique approach to solve optimally.
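The genealogical record described above can be represented as a directed acyclic graph of parent-to-offspring edges. The short networkx sketch below, with a made-up toy genealogy, shows how the number of descendants per ancestor can be counted, which is the quantity behind the reported dominance of a few ancestors; it is not the authors' analysis code.

```python
# Toy genealogy stored as a DAG (parent -> offspring), as in a full
# genealogical record of evolution; the data here is invented for the demo.
import networkx as nx

genealogy = nx.DiGraph()
genealogy.add_edges_from([
    ("A", "d"), ("A", "e"), ("B", "f"),
    ("d", "g"), ("d", "h"), ("e", "i"), ("g", "j"), ("h", "k"),
])
genealogy.add_node("C")   # an ancestor that left no recorded offspring

# Count descendants per generation-0 ancestor: a few ancestors tend to
# dominate later generations, the diversity-loss pattern reported above.
for ancestor in ["A", "B", "C"]:
    print(ancestor, len(nx.descendants(genealogy, ancestor)))
```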
... The first one, Last terms the AGI-TS, and the second, the GB-TS. The former singularity scenario, as the name indicates, basically says that artificial general intelligences (AGI) will take over evolution once machines pass the Turing test (Turing 1950). 2 The latter says that, in one way or another, a global brain (GB; Russell 1983) created through information and communications technology (ICT) and in particular through the Internet will result in a "meta-system transition" (Turchin 1977). ...
Article
This article starts from the observation that the self ("I") appears to be a kind of singularity. It is something that information collapses into. Following this thought, this paper presents the unconventional interpretation of the singularity metaphor as newly emerging, higher-order subjectivity. Its goal is to inquire into the cognition of the emerging higher-order subject and the change in individual units' subjectivity caused by the transition. First, it asks what kind of superintelligence would emerge in a singularity interpreted in this way. This is important to consider, because it is not all that clear what "superintelligence" means if one goes away from defining it by the complexity of algorithms and lays the focus on subjectivity instead. Then, the paper looks at the individuals that constitute the emergence of superintelligence. First, it analyzes what it would take for human connectivity and intersubjectivity to increase to a singularity threshold. Building upon this, it attempts to analyze how superconnectivity might change the individual agents involved in the process. The thesis is that the extended spatiotemporal perspective of the super-agent is bought by decreased individual self-awareness.
... [Turing, 1950] Regardless of the answer, all algorithms that can create the impression that a machine is thinking can be classified as Artificial Intelligence. Therefore, AI takes no regard for the algorithm itself; if you could hand-code every possible response to a conversation, it would be considered AI. ...
Thesis
Radar signal and SAR image processing generally require complex-valued representations and operations, e.g., Fourier and wavelet transforms, Wiener and matched filters, etc. However, the vast majority of deep learning architectures are currently based on real-valued operations, which restricts their ability to learn from complex-valued features. Despite the emergence of Complex-Valued Neural Networks (CVNNs), their application to radar and SAR still lacks studies of their relevance and efficiency, and the comparison against an equivalent Real-Valued Neural Network (RVNN) is usually biased. In this thesis, we propose to investigate the merits of CVNNs for classifying complex-valued data. We show that CVNNs achieve better performance than their real-valued counterparts for classifying non-circular Gaussian data. We also define a criterion of equivalence between feed-forward fully connected and convolutional CVNNs and RVNNs in terms of trainable parameters while keeping a similar architecture. We statistically compare the performance of equivalent Multi-Layer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Fully Convolutional Neural Networks (FCNNs) for polarimetric SAR image segmentation. SAR image splitting and class balancing are also studied to avoid learning biases. In parallel, we propose an open-source toolbox to facilitate the implementation of CVNNs and the comparison with real-equivalent networks.
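A simplified way to see the parameter-matching idea behind comparing a CVNN with an "equivalent" RVNN is to count real trainable parameters: each complex weight or bias carries two real numbers. The sketch below counts the parameters of a single complex dense layer and solves for the width a real-valued layer (fed the concatenated real and imaginary parts) would need to match; this only illustrates the counting idea, not the thesis' exact equivalence criterion.

```python
# Parameter counting for one complex-valued vs. real-valued dense layer.

def complex_dense_params(n_in: int, n_out: int) -> int:
    # each complex weight/bias carries two trainable real numbers
    return 2 * (n_in * n_out) + 2 * n_out

def real_dense_params(n_in: int, n_out: int) -> int:
    return n_in * n_out + n_out

def equivalent_real_width(n_in: int, n_out: int) -> float:
    # the real network sees 2*n_in inputs (real and imaginary parts concatenated)
    target = complex_dense_params(n_in, n_out)
    return target / (2 * n_in + 1)

n_in, n_out = 128, 64
print("complex layer params:", complex_dense_params(n_in, n_out))
print("matching real width :", round(equivalent_real_width(n_in, n_out), 1))
```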
... The development of the field of AI studies began shortly after the Second World War, with the article "Computing Machinery and Intelligence" by the English mathematician Alan Turing (TURING, 1950). John McCarthy, known for his studies in the field of artificial intelligence and for being the creator of the Lisp programming language, was the one who coined the term, in 1956, which is today widely used by specialists and scholars in the field. ...
Conference Paper
Full-text available
Artificial intelligence, also referred to as Autonomous/Intelligent Systems, can be described as a technique that allows computers to simulate human intelligence through the use of algorithms. The algorithm is trained, monitored, corrected and refined, thereby optimizing tasks that used to be carried out solely by human reasoning and that became exhausting because they involved analysing a large amount of information. However, as AI takes on the role of classifying, predicting and making decisions that directly influence and impact people's lives, it becomes necessary to discuss and seek solutions to support developers, companies, government and organized society in improving their knowledge of the subject and in instituting policies that promote the systematization of an AI that, besides performing well, is ethical, equitable, sustainable and safe. This contributes to reconciling the interests of society with the interests of the market and encourages the development and application of tools preceded by good practices. To this end, the RAIES group (Rede de Inteligência Artificial Ética e Segura, Network for Ethical and Safe Artificial Intelligence) seeks to develop methods based on principles such as Transparency, Accountability, Fairness/Non-Discrimination, Robustness/Protection, Privacy, Sustainability, Diversity and Inclusion, so that systems are accurate and contextualized for today's society. The purpose of this work is to offer "Reflections for other independences" through the presentation and free distribution of useful methods and tools that help prevent, manage and mitigate algorithmic discrimination (algorithmic bias) in intelligent systems.
... 2 However, Alan Turing, who in 1950 devised the Turing test to differentiate humans from machines, was the first to raise the possibility of machines' ability to replicate human behavior and thinking. 3 The 1980s and 1990s saw a burst of interest in AI. AI began to expand, resulting in developments in expert systems, evolutionary computing, machine learning (ML), deep learning (DL), natural language processing, computer vision and other data-processing technologies. ...
Article
Full-text available
Artificial intelligence (AI) is the science and engineering of making intelligent machines that can think and learn. The interest and involvement of AI in health care have expanded rapidly during the last decade. AI-based technologies have been integrated into medicine to raise the standard of patient care by accelerating processes and achieving greater accuracy in different clinical settings. Patients' electronic records, pathology slides and different radiological images are nowadays assessed by AI technologies such as machine learning (ML) and deep learning (DL). This, in turn, has aided in the diagnosis and treatment of patients and increased physicians' capabilities. AI is poised to transform medical practice in the future. AI can aid clinicians in making accurate treatment choices, minimize needless surgeries, help oncologists enhance patients' chemotherapy regimens, etc. The aim of this review is primarily to develop fundamental knowledge and awareness of AI among healthcare professionals. The article mainly deals with the basic mechanisms of AI and the recent scientific developments and applications of AI, along with its risks and challenges in the clinical setup.
... It is therefore an argument that is consistent with dualism while inconsistent with materialism. The solution of the problem has the potential to provide a philosophical foundation for questions that still occupy an important place today, such as 'can computers think?' (Turing, 1950). In addition, the qualia problem also appears in the Turkish literature as the problem of subjectivity, the hard problem, and the explanatory-gap problem. ...
Chapter
Full-text available
The qualia problem, also known as the 'hard problem' in the philosophy of mind, is still a problem that has not lost its mystery and on which no agreement has been reached. The difficulty of the problem comes from the fact that the claim that the immaterial thing we call the mind has an ontological existence separate and distinct from the brain cannot be falsified. On the other hand, discussions in the philosophy of mind are based, in the most general framework, on the conflict between dualist and materialist views. Likewise, we see that opinions about the qualia problem are divided into these two branches. In this respect, it would not be wrong to say that the strongest defender of the dualist side is Thomas Nagel. It is possible to say that Nagel's examples of chocolate ice cream and of being a bat are consistent with a strict dualism. The materialist attitude towards the problem is usually reduction or elimination. On the other hand, moderate materialist approaches, instead of rejecting the qualia problem outright, focus on solving it while staying within the boundaries of materialism. One of these names is Patricia Smith Churchland, known for neurophilosophy. In addition, it is possible to encounter the qualia problem in almost all studies that have directed their attention to consciousness and mind; the problem seems to be dispersed across the philosophy of mind, lying at the root of most of its problems. Despite the central position of the problem in the philosophy of mind, studies that directly address and discuss it seem to be few in the Turkish literature. The aim of this study is to outline the views on the qualia problem, or the possibility of subjective experience, in the philosophy of mind along the axes of dualism, materialism and neurophilosophy, and thus to collect the most prominent views about the problem in a single study.
Chapter
Full-text available
Deep learning, as one of the currently most remarkable machine learning techniques, has achieved great success in many applications such as image analysis, speech recognition and text understanding. It uses supervised and unsupervised strategies to learn multi-level representations and features in hierarchical architectures for the tasks of classification and pattern recognition. In the past few years, deep learning has played an important role in oceanography. In this chapter, we review the emerging research on deep learning models. First, from a historical perspective, we sort out the three development stages of artificial intelligence. Then, we discuss four commonly used deep learning architectures. Finally, we elaborate on common application scenarios of deep learning technology.
Chapter
With the rise of far-reaching technological innovation, from artificial intelligence to Big Data, human life is increasingly unfolding in digital lifeworlds. While such developments have made unprecedented changes to the ways we live, our political practices have failed to evolve at pace with these profound changes. In this path-breaking work, Mathias Risse establishes a foundation for the philosophy of technology, allowing us to investigate how the digital century might alter our most basic political practices and ideas. Risse engages major concepts in political philosophy and extends them to account for problems that arise in digital lifeworlds including AI and democracy, synthetic media and surveillance capitalism and how AI might alter our thinking about the meaning of life. Proactive and profound, Political Theory of the Digital Age offers a systemic way of evaluating the effect of AI, allowing us to anticipate and understand how technological developments impact our political lives – before it's too late.
Article
The Lovelace Test is one of the most famous alternatives to the Turing Test. It suggests using the concept of creativity as a way to estimate the ability of artificial intelligence to think in the same sense as a human being. It is demonstrated in the article that the concept of creativity is too ambiguous to be used as a criterion of anything. It is shown that both versions of the Lovelace Test are inherently behaviorist and therefore cannot prove the ability of a machine to think.
Chapter
Full-text available
If we want to embed AI in society, we need to understand what it is. What do we mean by artificial intelligence? How has the technology developed? Where do we stand now?
Article
Full-text available
The present article addresses key elements of the unique ontology of AI and argues that these require the expansion of the public sphere in order to successfully manage the entry of new intelligent actors into legally regulated relationships, which are based on the identification of causal connections. In this sense it attempts to link law and political science, given that the governance of any phenomenon or field includes law and, in particular, the detection of legally interesting causal relationships. Regulating such relationships effectively offers legal certainty, which in turn is a fundamental element of effective governance. In our self-evidently human-centered world, whether we are talking about natural persons or legal persons, it is self-evident that there is, in the end, a human hand behind the causal relations with which law is involved. Once other, non-human, intelligent actors gradually enter the forefront, these causal relations become further complicated. It is on these complications and their impact that we focus.
Research
Full-text available
There is a new wave of research being done in the field of finance that makes use of artificial intelligence (AI) and machine learning (ML). So far, however, no review has provided a comprehensive overview of this body of work. In order to fill this informational void, we give a survey of current artificial intelligence and machine learning projects in the financial sector. We estimate the subject organization of AI and ML research in economics from 1986 through April 2021 using co-citation and bibliometric-coupling analysis. We find three broad categories of finance scholarship that are approximately identical for both modes of study: (1) portfolio creation, valuation, and investor behaviour; (2) financial fraud and distress; and (3) sentiment inference, prediction, and planning. We also use co-occurrence and fusion analyses to identify trends and research areas in the field of artificial intelligence and machine learning applied to the financial sector. Our findings offer an evaluation of AI and ML for the financial sector.
Article
Chatbot technology can be an important tool and supplement to education, leading to explorations in this area. Corpus-based chatbot building has a relatively low entry barrier, as it only requires a relevant corpus to train a chatbot engine. The corpus is a set of human-readable questions and answers and may be an amalgamation of existing corpora. However, a suitable chemistry-based chatbot corpus catering for a freshman general chemistry course addressing inorganic and physical chemistry has not been developed. In this study, we present a basic chemistry conversational corpus consisting of 998 pairs of questions and answers, focused on a freshman general chemistry course addressing inorganic and physical chemistry. Ten human raters evaluated the responses of a chatbot trained on the corpus, and the results suggest that the corpus produced better responses than random (t = 17.4, p-value = 1.86E-53). However, only 20 of the 50 test questions showed better responses compared to random (difference in mean score ≥ 1.9, paired t-test p-value ≤ 0.0324), suggesting that the corpus provides better responses to certain questions rather than overall better responses, with questions related to definitions and computational procedures answered more accurately. Hence, this provides a baseline for future corpus development.
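The per-question comparison reported above is a paired t-test between rater scores for the corpus-trained chatbot and for random responses. The sketch below shows the corresponding SciPy call on synthetic stand-in scores; the numbers are made up and only illustrate the statistical procedure.

```python
# Paired t-test between rater scores for the corpus-trained chatbot and for
# a random baseline; the scores below are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_raters = 10
corpus_scores = rng.normal(4.0, 0.5, n_raters)   # ratings of corpus-trained bot
random_scores = rng.normal(2.0, 0.5, n_raters)   # ratings of random responses

t_stat, p_value = stats.ttest_rel(corpus_scores, random_scores)
mean_diff = corpus_scores.mean() - random_scores.mean()
print(f"mean difference = {mean_diff:.2f}, t = {t_stat:.2f}, p = {p_value:.3g}")
```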
Chapter
The human brain is the most powerful computational machine in this world and has inspired artificial intelligence for many years. One of the latest outcomes of reverse-engineering the neural system is deep learning, which emulates the multiple-layer structure of biological neural networks. Deep learning has achieved a variety of unprecedented successes in a large range of cognitive tasks. However, alongside these achievements, the shortcomings of deep learning are becoming more and more severe. These drawbacks include the demand for massive data, energy inefficiency, incomprehensibility, etc. One of the innate drawbacks of deep learning is that it implements artificial intelligence through algorithms and software alone, with no consideration of the potential limitations of computational resources. On the contrary, neuromorphic computing, also known as brain-inspired computing, emulates biological neural networks through a software and hardware co-design approach and aims to break the shackles of the von Neumann architecture and the digital representation of information within it. Thus, neuromorphic computing offers an alternative approach for next-generation AI that balances computational complexity, energy efficiency, biological plausibility, and intellectual competence. This chapter aims to comprehensively introduce neuromorphic computing, from the fundamentals of biological neural systems and neuron models to hardware implementations. Lastly, critical challenges and opportunities in neuromorphic computing are discussed. Keywords: Neuromorphic computing, Spiking neural networks, Artificial intelligence, Silicon neurons, Memristive synapse, Biological neural networks, Neuromorphic chips
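As a minimal example of the neuron models such a chapter introduces, here is a leaky integrate-and-fire (LIF) neuron simulated with NumPy; all parameter values are illustrative and not taken from the chapter.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: leaky integration of an
# input current, spike on threshold crossing, then reset.
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Simulate one LIF neuron; returns the membrane trace and spike times."""
    v = v_rest
    voltages, spikes = [], []
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + i_t)   # leaky integration
        if v >= v_th:                           # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset                         # membrane potential resets
        voltages.append(v)
    return np.array(voltages), spikes

current = np.concatenate([np.zeros(20), 1.5 * np.ones(80)])   # step input
_, spike_times = lif_neuron(current)
print("spike times:", spike_times)
```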
Chapter
Full-text available
Based on a wide range of technology application scenarios, artificial intelligence (AI) is expected to have a disruptive impact on economies and societies. In recent years, breakthroughs have been made in basic research on the fundamental technologies of artificial intelligence. AI is showing greater potential to become a general-purpose technology. Major economies are focusing on policies, regulations, and strategic plans around basic research and the R&D of technology application scenarios in AI. However, the optimization of AI policy-making demands more interdisciplinary knowledge and a broader societal debate. In the domain of technology assessment (TA), research on AI and its potential impacts was recognized as important early on. The research questions, which include impacts on the workforce, impacts on societal communication and democracy, and fundamental issues like responsibility, transparency, and ethics, have drawn widespread attention in TA studies. This chapter presents a scholarly discussion of AI topics in the context of TA, based on a qualitative analysis of AI policy databases from the OECD and EPTA. The analysis concludes that enhancing global cooperation in TA will contribute to addressing fundamental ethical and societal issues of AI, which in turn broadens the knowledge base and helps to pave the way for a more inclusive and just use of AI.
Article
Full-text available
This research focuses on deaf students in the United Arab Emirates. The proposed classroom assessment using sign language communicator (CASC) for special needs (SN) students is based on artificial intelligence (AI) tools. This research provides essential services for teaching evaluations, learning outcome assessments, and the development of learning environments. The CASC model is composed of two models. The first model converts speech to sign language and contains a speech recognizer and a sign language recognizer. The second model converts sign language to written text. Based on sign language recognition and image processing tools, this model generates a report on students' understanding and class evaluation before the course ends. The model is expected to have a significantly positive impact on SN students' success, on effective lecturing, and on optimizing teaching and learning in the classroom. The accuracy of the model is 92%. The real-time analysis of students' feedback supports effective instructional strategies.
Article
This article argues that emerging technologies such as artificial intelligence, algorithms, virtual reality, and big data bring new challenges and opportunities to philosophy. Starting from the thesis that the AI revolution in technology necessitates the birth of a new paradigm in the discipline of philosophy, it grounds, within its historical development, the possibility of a philosophy of artificial intelligence built around the question of whether artificial intelligence is possible. After the presentation of the problem, the first section covers three classical debates concerning artificial intelligence; the second section explains the physical symbol system hypothesis and sheds light on the starting point that motivated the building of intelligent systems independent of humans. The philosophical origins of artificial intelligence are then briefly traced through thinkers such as Hobbes, Leibniz, and Descartes, and Diderot's and Ayer's criteria for artificial intelligence are summarized. The final section, in which artificial intelligence is characterized as a philosophical problem, addresses the interaction between artificial intelligence and philosophy and discusses how seriously the discipline of philosophy is involved in exploring the foundations, limits, and scope of artificial intelligence, and why artificial intelligence is a philosophical problem.
Book
Full-text available
This book provides an in-depth analysis of the history and evolution of the major disciplines of science, including the basic sciences, bioscience, the natural sciences, and medical science, with special emphasis on the Indian perspective. While academic interest in the history and philosophy of science dates back several centuries, serious scholarship on how the sciences and society interact and influence each other can only be dated to the twentieth century. This volume explores the ethical and moral issues related to social values, along with the controversies that arise in the discourse of science from philosophical perspectives. The book sheds light on themes that have had a significant and overwhelming influence on present-day civilisation. It takes the reader on a journey through how the sciences have developed and been discussed, exploring key themes such as colonial influences on science; how key scientific ideas developed from Aristotle to Newton; the history of ancient Indian mathematics; agency, representation, and deviance with regard to the human body in science; bioethics; mental health, psychology, and the sciences; the setting up of the first teaching departments for subjects such as medicine, ecology, and physiology in India; recent research in chemical technology; and the legacy of ancient Indian scientific discoveries. Part of the Contemporary Issues in Social Science Research series, this interdisciplinary work will be of immense interest to scholars and researchers of philosophy, modern history, sociology of medicine, the physical sciences, bioscience, chemistry, and the medical sciences, as well as to the general reader.
Article
Today, the combination of nanotechnology and nanoscience with artificial intelligence is improving day by day, and materials science is gaining importance as a result. The examination of SEM images with artificial intelligence methods is a multidisciplinary field. For the data used in the experimental part, 22,000 publicly available SEM images were considered; this collection is known to have been produced at CNR-IOM's TASC laboratory in Trieste over 5 years of work by 100 scientists using the ZEISS SUPRA 40 high-resolution device. For the prototype created for the experimental study, resolution, image size, and quality were examined one by one, with image quality as the main selection criterion. In creating the dataset, 100 images were manually selected and organized at the nano and micro scales, giving a total of 1,000 images in 10 data sets. Artificial intelligence training was then carried out in the experimental study using a CNN classification technique combined with unsupervised learning. The approach enables new methods and tools to be applied by tuning suitable parameters for specific properties of nanomaterials, and it can be applied to a wide variety of nanoscience use cases. Using it to create a materials science library may pave the way for future studies in artificial intelligence and nanotechnology.
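A minimal sketch of the kind of CNN image classifier such a study trains on SEM micrographs is given below. The architecture, the grayscale 128x128 input size, and the ten-class output are illustrative assumptions made here, not details taken from the paper.

```python
# Illustrative CNN classifier for SEM micrographs (PyTorch) -- a sketch under assumed
# settings (grayscale 128x128 inputs, 10 classes), not the architecture used in the paper.
import torch
import torch.nn as nn

class SmallSEMClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Three conv/pool stages halve the spatial resolution each time: 128 -> 64 -> 32 -> 16.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # (N, 64, 16, 16) for 128x128 inputs
        return self.classifier(x.flatten(1))  # class logits

# Smoke test with a random batch standing in for preprocessed SEM images.
model = SmallSEMClassifier()
dummy = torch.randn(4, 1, 128, 128)
print(model(dummy).shape)  # torch.Size([4, 10])
```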
Article
Full-text available
In recent years, with the popularity of AI technologies in everyday life, researchers have begun to discuss the emerging term "AI literacy". However, there has been no review of what AI teaching and learning (AITL) research has looked like over the past two decades to provide a research basis for AI literacy education. To summarize the empirical findings in the literature, this systematic literature review conducts a thematic and content analysis of 49 publications from 2000 to 2020 to pave the way for recent AI literacy education. The pedagogical models, teaching tools, and challenges identified help set the stage for today's AI literacy. The results show that, before 2021, AITL focused more on computer science education at the university level. Teaching AI had not yet become popular in K-12 classrooms, owing to a lack of age-appropriate teaching tools for scaffolding support. Nevertheless, the pedagogies identified in the review are valuable for educators reflecting on how they should develop students' AI literacy today. Educators have adopted collaborative, project-based learning approaches featuring activities such as software development, problem-solving, tinkering with robots, and the use of game elements. However, most of these activities require programming prerequisites and are not ready to scaffold students' AI understanding. With suitable teaching tools and pedagogical support in recent years, teaching AI has shifted from technology-oriented to interdisciplinary design. Moreover, global initiatives have started to include AI literacy in the latest educational standards and strategic initiatives. These findings provide a research foundation that informs educators and researchers of the growth of AI literacy education and can help them design pedagogical strategies and curricula that use suitable technologies to better prepare students to become responsible, educated citizens for today's growing AI economy.
Chapter
Full-text available
Common forms of discrimination and the reproduction of normative stereotypes are commonplace in artificial intelligence as well. The contributors outline ways of reducing these flawed practices and negotiate the ambivalent relationship between queerness and AI from an interdisciplinary perspective. In parallel, they make room for a queer-feminist understanding of knowledge that always sees itself as partial, ambiguous, and incomplete. In doing so, they open up ways of engaging with AI that can go beyond reductive categorizations.