Chapter

Contemporary Approaches to Artificial General Intelligence

... Transformers) are examples of artificial intelligence that focus only on text-based language processing tasks (Mahler, 2022). While artificial intelligence is an endeavor that pursues specific capabilities or practical tasks, AGI is a multidisciplinary endeavor that draws on advances in machine learning, natural language processing, cognitive science, robotics, and other fields (Pennachin & Goertzel, 2007). ...

... AGI can build algorithms and models that resemble the workings of human intelligence, and can thereby perceive the world, reason logically, and interact as humans do (Pennachin & Goertzel, 2007). Although AGI is still in the early stages of its development, what it can already do astonishes humanity. ...
Article
Full-text available
Artificial general intelligence (AGI) is expected to cause a revolution similar to the industrial revolution and to affect our lives in many ways. The AGI revolution involves not only technological developments but also the process of human adaptation to this change. This study examines the possible implications of AGI for the role of the teacher. AGI is defined as technology with human-level cognitive abilities and has many uses in education and training. There are a limited number of studies in the international literature examining the possible effects of AGI on teacher roles, and in Turkey there is no study on this subject. This study therefore fills an important gap by increasing our understanding of the possible effects of AGI, a new technological paradigm on a global scale, in the field of education and training. Document analysis, one of the qualitative research methods, was used in the study. The study found that AGI can support teachers in creating personalized learning environments, monitoring student performance, improving educational processes, and providing equal opportunities in education. The importance of ethical issues such as personal data privacy, algorithmic bias, and fair access in the use of AGI was emphasized, as was the necessity of using AGI responsibly and safely in educational processes. In this context, the study stresses the need for a qualified teacher training plan so that teachers can effectively adapt to the AGI era.
... Applications of weak AI, such as speech recognition or fraud detection, are already available today and are constantly being further developed. The main characteristic of such applications is that they are developed for a special task and are not able to execute other tasks [3]. Distinct from this is strong AI, which attempts to replicate the human brain in order to develop an AI that is not limited to specific tasks [2,3]. ...
... The main characteristic of such applications is that they are developed for a special task and are not able to execute other tasks [3]. Distinct from this is strong AI, which attempts to replicate the human brain in order to develop an AI that is not limited to specific tasks [2,3]. As strong AI is not available today [4], our paper focuses on weak AI. ...
Article
Full-text available
In the last few years, business firms have substantially invested into the artificial intelligence (AI) technology. However, according to several studies, a significant percentage of AI projects fail or do not deliver business value. Due to the specific characteristics of AI projects, the existing body of knowledge about success and failure of information systems (IS) projects in general may not be transferrable to the context of AI. Therefore, the objective of our research has been to identify factors that can lead to AI project failure. Based on interviews with AI experts, this article identifies and discusses 12 factors that can lead to project failure. The factors can be further classified into five categories: unrealistic expectations, use case related issues, organizational constraints, lack of key resources, and technological issues. This research contributes to knowledge by providing new empirical data and synthesizing the results with related findings from prior studies. Our results have important managerial implications for firms that aim to adopt AI by helping the organizations to anticipate and actively manage risks in order to increase the chances of project success.
... We define AI as a branch of computer science that aims to build intelligent tools that represent aspects of the human mind and can complete intellectual tasks that humans can perform. 20,21 AI subfields that may also be used together in a single application include machine learning, expert systems, natural language processing, and image and signal processing. 22 AI may be used in myriad ways to enhance humans' understanding of climate change and combat its societal impacts. ...
Article
Full-text available
Climate change critically impacts global pediatric health, presenting unique and escalating challenges due to children’s inherent vulnerabilities and ongoing physiological development. This scoping review intricately intertwines the spheres of climate change, pediatric health, and Artificial Intelligence (AI), with a goal to elucidate the potential of AI and digital health in mitigating the adverse child health outcomes induced by environmental alterations, especially in Low- and Middle-Income Countries (LMICs). A notable gap is uncovered: literature directly correlating AI interventions with climate change-impacted pediatric health is scant, even though substantial research exists at the confluence of AI and health, and health and climate change respectively. We present three case studies about AI’s promise in addressing pediatric health issues exacerbated by climate change. The review spotlights substantial obstacles, including technical, ethical, equitable, privacy, and data security challenges in AI applications for pediatric health, necessitating in-depth, future-focused research. Engaging with the intricate nexus of climate change, pediatric health, and AI, this work underpins future explorations into leveraging AI to navigate and neutralize the burgeoning impact of climate change on pediatric health outcomes. Impact: Our scoping review highlights the scarcity of literature directly correlating AI interventions with climate change-impacted pediatric health that disproportionately affects vulnerable populations, even though substantial research exists at the confluence of AI and health, and health and climate change respectively. We present three case studies about AI’s promise in addressing pediatric health issues exacerbated by climate change. The review spotlights substantial obstacles, including technical, ethical, equitable, privacy, and data security challenges in AI applications for pediatric health, necessitating in-depth, future-focused research.
... However, I consider that there are also differences between acting humanly and AGI. Pennachin and Goertzel (2007) consider AGI more engineering-oriented than philosophical and scientific. In this sense, acting humanly has more room for philosophy, linguistics, and other research fields in which non-engineering analysis is still permitted. ...
Article
Full-text available
In this article I analyze the contributions of the natural language processing (NLP) approach to the development of artificial intelligence (AI). It is composed of four main sections. The first offers a brief conceptualization of artificial intelligence in order to situate the place of natural language processing within AI. The second section provides some of the theoretical linguistic elements most relevant to understanding the importance of natural language processing and its evolution. The third presents some theoretical elements important for understanding how natural language processing works. Finally, in the fourth section, we briefly present the main characteristics of large language models (LLMs) and foundation models (FMs) and their relationship to natural language processing. The overall aim of the article is to provide an overview of this complex field of research, with emphasis on its most relevant aspects.
... Despite the wide variety of definitions in the literature, there is almost unanimous agreement on some of the defining features of AGI. Specifically, the most important features of a typical AGI system are that (see, for example, [9], [36], [43], [47], [48]): it can learn and flexibly apply limited and uncertain knowledge to solve a wide range of problems in entirely different contexts; its learning and actions are autonomous and goal-driven; it retains and accumulates relevant information in memory and reuses that knowledge in future tasks; and it can understand context and perform high-level cognitive tasks such as abstract and commonsense reasoning. We summarize the important properties in Figure 1. ...
Preprint
Full-text available
Generative artificial intelligence (AI) systems based on large-scale pretrained foundation models (PFMs) such as vision-language models, large language models (LLMs), diffusion models and vision-language-action (VLA) models have demonstrated the ability to solve complex and truly non-trivial AI problems in a wide variety of domains and contexts. Multimodal large language models (MLLMs), in particular, learn from vast and diverse data sources, allowing rich and nuanced representations of the world and, thereby, providing extensive capabilities, including the ability to reason; engage in meaningful dialog; collaborate with humans and other agents to jointly solve complex problems; and understand social and emotional aspects of humans. Despite this impressive feat, the cognitive abilities of state-of-the-art LLMs trained on large-scale datasets are still superficial and brittle. Consequently, generic LLMs are severely limited in their generalist capabilities. A number of foundational problems -- embodiment, symbol grounding, causality and memory -- must be addressed for LLMs to attain human-level general intelligence. These concepts are more aligned with human cognition and provide LLMs with inherent human-like cognitive properties that support the realization of physically-plausible, semantically meaningful, flexible and more generalizable knowledge and intelligence. In this work, we discuss the aforementioned foundational issues and survey state-of-the-art approaches for implementing these concepts in LLMs. Specifically, we discuss how the principles of embodiment, symbol grounding, causality and memory can be leveraged toward the attainment of artificial general intelligence (AGI) in an organic manner.
... Stephen Hawking, Elon Musk, Steve Wozniak, and Bill Gates (Musk, 2014; Gates, 2008) are just some of the personalities who have openly shared their concerns about super-intelligent AI systems. Even if, at the moment, we find ourselves at a stage in which we can only talk about narrow AI (Kurzweil, 2005), we are moving fast towards the development of artificial general intelligence (AGI), namely AI machines that could be as intelligent as humans and could carry out any intellectual task (Pennachin & Goertzel, 2007). Elon Musk, together with Microsoft, is currently investing a billion dollars in this business. ...
Article
Full-text available
As part of the 4th Industrial Revolution, the emergence of Artificial Intelligence will change almost all economic activities and create enormous social and economic opportunities. It will also pose major challenges, accompanied by ethical dilemmas. The present study focuses on the perceptions of current employees, predominantly from the IT area, regarding the development of AI. The aim is to capture their attitudes towards the emergence and development of AI and the impact it might have on certain sectors of social life and on people in general. We required the 280 online-surveyed subjects to have been employed for at least 6 months, assuming that being already anchored in their professional lives might reduce their bias. The working methodology allowed us to process and interpret data both quantitatively and qualitatively. The results of the study could be used to predict possible changes that could occur in the future as an effect of the development of Artificial Intelligence, but also to reduce the negative impact that it could have.
... This pursuit necessitates ongoing innovation in areas such as advanced neural architectures, robust reasoning frameworks, multi-modal learning, and ethically grounded AI governance. Bridging the ANI-AGI gap presents a profound scientific and philosophical challenge, one that calls for interdisciplinary collaboration across fields such as neuroscience, cognitive science, philosophy, and engineering [8][9][10]. While AI has achieved remarkable success in the realm of narrow intelligence, developing AGI remains a fundamental objective for researchers. ...
Preprint
Full-text available
This paper reviews the evolution and application potential of Artificial General Intelligence (AGI) from its foundational research in 1943 to modern deep learning advancements. Key developments include McCulloch and Pitts' neural model, expert systems in the 1970s, and deep learning breakthroughs by Hinton's team. AGI seeks to replicate human cognitive functions, enabling robots with autonomous perception, decision-making, and social interaction within ethical guidelines. The paper explores AGI robot design, focusing on sensory, neural, and power systems, including sustainable energy solutions. Applications in agriculture, poverty reduction, healthcare, and virtual education are discussed, along with the challenges in achieving real-time, multi-domain adaptability. The research emphasizes a need for multidisciplinary collaboration to safely harness AGI's transformative potential.
... It posits that intelligence can be approximated through distributed representations and nonlinear transformations. Representative systems in this category include convolutional neural networks, recurrent neural networks, generative adversarial networks, and transformers [11]. ...
... AGI aims to imitate the properties of human general intelligence to achieve the following characteristics that a general intelligence system should have: (1) The ability to solve general problems in an unrestricted manner, comparable with human capabilities; (2) Most likely, the ability to solve specific problems in specific domains and contexts with exceptional efficiency; (3) The ability to use its broader and more specialized intelligence in a unified way; (4) The ability to learn from its environment, other intelligent systems, and teachers; (5) When it gains experience in solving new types of problems, it has the ability to become better at solving these problems. [16] As theoretical research, AGI aims to develop artificial intelligence systems with autonomous self-control, rational self-understanding, and the ability to learn new skills. It is capable of solving complex problems in environments and situations without prior instruction. ...
Article
Full-text available
The deep integration of Artificial Intelligence (AI) is gradually becoming a key force in innovating the teaching of English as a Foreign Language (EFL). This study aims to assess the practical effects of AI technology in providing customized instructional support and learning pathways in EFL instruction. The study reveals the benefits of AI in the instruction of English vocabulary, utilizing the Apriori algorithm from association rule mining together with empirical analysis of survey data from 110 second-year university students across four different majors using AI-powered language learning platforms and AI-powered mobile language learning applications (such as the UNIPUS AIGC platform and the iTEST intelligent assessment mobile application). It also deduces related teaching strategies and learning models. The results indicate that the use of AI-powered language learning platforms positively impacts English vocabulary learning outcomes in EFL instruction, and the combined use of AI-powered mobile language learning applications for self-testing and in-class tests effectively enhances vocabulary learning efficiency. The findings and conclusions of this study provide valuable insights for EFL educational practice and demonstrate the potential of AI in boosting the effectiveness of language learning, offering empirical support and guidance for future educational decision-making.
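The abstract above names the Apriori algorithm from association rule mining as its analysis technique. For readers unfamiliar with it, the following minimal Python sketch illustrates the general Apriori idea only: a level-wise frequent-itemset search followed by confidence-filtered rules. The behaviour labels, records, and thresholds are invented for illustration and are not taken from the study's survey data or pipeline.

```python
# Illustrative Apriori-style miner over hypothetical learner-behaviour records.
# All feature names below are invented for this example.
from itertools import combinations

records = [
    {"uses_ai_platform", "self_testing", "high_vocab_gain"},
    {"uses_ai_platform", "self_testing", "in_class_tests", "high_vocab_gain"},
    {"uses_ai_platform", "in_class_tests"},
    {"self_testing", "high_vocab_gain"},
    {"uses_ai_platform", "self_testing", "high_vocab_gain"},
]

MIN_SUPPORT = 0.4      # fraction of records an itemset must appear in
MIN_CONFIDENCE = 0.7   # minimum confidence for a reported rule

def support(itemset):
    """Fraction of records containing every item in `itemset`."""
    return sum(itemset <= r for r in records) / len(records)

# Level-wise Apriori search: grow candidate itemsets one item at a time,
# keeping only those whose support clears the threshold.
items = sorted(set().union(*records))
frequent = {frozenset([i]) for i in items if support({i}) >= MIN_SUPPORT}
all_frequent = set(frequent)
while frequent:
    candidates = {a | b for a in frequent for b in frequent
                  if len(a | b) == len(a) + 1}
    frequent = {c for c in candidates if support(c) >= MIN_SUPPORT}
    all_frequent |= frequent

# Derive rules X -> Y with confidence = support(X ∪ Y) / support(X).
for itemset in sorted(all_frequent, key=len):
    if len(itemset) < 2:
        continue
    for k in range(1, len(itemset)):
        for antecedent in map(frozenset, combinations(itemset, k)):
            consequent = itemset - antecedent
            conf = support(itemset) / support(antecedent)
            if conf >= MIN_CONFIDENCE:
                print(f"{set(antecedent)} -> {set(consequent)} "
                      f"(support={support(itemset):.2f}, confidence={conf:.2f})")
```

On this toy input the sketch surfaces rules such as {uses_ai_platform, self_testing} -> {high_vocab_gain}, which is the general shape of association the study reports between platform use, self-testing, and vocabulary gains.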
... For example, in (Simon and Newell, 1958) the authors state that "within ten years a digital computer will write music [with] considerable aesthetic value". However, more than half a century later, AGI is still a much-investigated theoretical problem (Pennachin and Goertzel, 2007; Goertzel, 2014; Roli et al., 2022). While some of the recent advances in Large Language Models (LLM) have sparked worldwide interest and renewed confidence in the near-future capabilities of Artificial Intelligence (AI), practical feasibility remains an open question. ...
Preprint
Full-text available
We introduce here the concept of Artificial General Creatures (AGC) which encompasses "robotic or virtual agents with a wide enough range of capabilities to ensure their continued survival". With this in mind, we propose a research line aimed at incrementally building both the technology and the trustworthiness of AGC. The core element in this approach is that trust can only be built over time, through demonstrably mutually beneficial interactions. To this end, we advocate starting from unobtrusive, nonthreatening artificial agents that would explicitly collaborate with humans, similarly to what domestic animals do. By combining multiple research fields, from Evolutionary Robotics to Neuroscience, from Ethics to Human-Machine Interaction, we aim at creating embodied, self-sustaining Artificial General Creatures that would form social and emotional connections with humans. Although they would not be able to play competitive online games or generate poems, we argue that creatures akin to artificial pets would be invaluable stepping stones toward symbiotic Artificial General Intelligence.
... With the advent of the Large Language Model (LLM), the term "Artificial General Intelligence" (AGI) has regained prominence [18]. The term's initial appearance can be traced back to 1997, in the context of prospective technologies [19], and later, in relation to powerful AI systems in 2007 [20]. The concept of AGI, which involves systems with "universal characteristics" [21], is also incorporated within the overarching definition of AI in this paper. ...
Article
Full-text available
Sustainability has become a critical global concern, focusing on key environmental goals such as achieving net-zero emissions by 2050, reducing waste, and increasing the use of recycled materials in products. These efforts often involve companies striving to minimize their carbon footprints and enhance resource efficiency. Artificial intelligence (AI) has demonstrated significant potential in tackling these sustainability challenges. This study aims to evaluate the various aspects that must be considered when deploying AI for sustainability solutions. Employing a SWOT analysis methodology, we assessed the strengths, weaknesses, opportunities, and threats of 70 research articles associated with AI in this context. The study offers two main contributions. Firstly, it presents a detailed SWOT analysis highlighting recent advancements in AI and its role in promoting sustainability. Key findings include the importance of data availability and quality as critical enablers for AI's effectiveness in sustainable applications, and the necessity of AI explainability to mitigate risks, particularly for smaller companies facing financial constraints in adopting AI. Secondly, the study identifies future research areas, emphasizing the need for appropriate regulations and the evaluation of general-purpose models, such as the latest large language models, in sustainability initiatives. This research contributes to the growing body of knowledge on AI's role in sustainability by providing insights and recommendations for researchers, practitioners, and policymakers, thus paving the way for further exploration at the intersection of AI and sustainable development.
... It posits that intelligence can be approximated through distributed representations and nonlinear transformations. Representative systems in this category include convolutional neural networks, recurrent neural networks, generative adversarial networks, and transformers [7]. ...
Preprint
Full-text available
In the present manuscript, we introduce a novel and holistic architecture for the ProtoAGI system, conceptualized from a systems engineering standpoint. This architecture is elaborately crafted to emulate artificial general intelligence (AGI) through the integration of diverse components and knowledge frameworks, thereby augmenting its performance and adaptability. We meticulously delineate the system's proficiency in processing intricate user inputs, its capacity for adaptive learning from historical datasets, and its ability to generate responses that are contextually relevant. The cornerstone of our proposition is the intricate orchestration of Large Language Models (LLMs), task-specific solvers, and a comprehensive knowledge repository, which collectively propel the system towards achieving genuine adaptability and autonomous learning capabilities. This approach not only signifies a pioneering venture into the realm of AGI system design but also lays the groundwork for subsequent advancements in this field.
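The abstract above describes an architecture that orchestrates large language models, task-specific solvers, and a knowledge repository. As a purely illustrative sketch of that kind of orchestration, and not code from the ProtoAGI manuscript, the following Python stub shows one way such routing, knowledge reuse, and LLM fallback could fit together; every class, function, and routing rule here is an assumption made for the example.

```python
# Illustrative orchestration sketch: knowledge reuse, solver routing, LLM fallback.
# All names and behaviours are assumptions for this example, not the ProtoAGI design.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class KnowledgeRepository:
    """Stores past (query, answer) pairs so later queries can reuse them."""
    memory: List[Tuple[str, str]] = field(default_factory=list)

    def recall(self, query: str) -> List[Tuple[str, str]]:
        # Naive keyword overlap; a real system would use retrieval or embeddings.
        return [(q, a) for q, a in self.memory if any(w in q for w in query.split())]

    def store(self, query: str, answer: str) -> None:
        self.memory.append((query, answer))

def arithmetic_solver(query: str) -> str:
    # Task-specific solver: handles simple "a + b" questions.
    a, _, b = query.split()[:3]
    return str(int(a) + int(b))

def llm_fallback(query: str) -> str:
    # Stand-in for a call to a large language model.
    return f"[LLM draft response to: {query!r}]"

@dataclass
class Orchestrator:
    solvers: Dict[str, Callable[[str], str]]
    knowledge: KnowledgeRepository

    def answer(self, query: str) -> str:
        # 1. Reuse prior knowledge when available.
        hits = self.knowledge.recall(query)
        if hits:
            return hits[-1][1]
        # 2. Route to a task-specific solver when one matches the query.
        for keyword, solver in self.solvers.items():
            if keyword in query:
                result = solver(query)
                break
        else:
            # 3. Otherwise defer to the general-purpose language model.
            result = llm_fallback(query)
        self.knowledge.store(query, result)  # retain the outcome for future reuse
        return result

agent = Orchestrator(solvers={"+": arithmetic_solver}, knowledge=KnowledgeRepository())
print(agent.answer("3 + 4"))        # routed to the arithmetic solver -> "7"
print(agent.answer("explain AGI"))  # no solver matches -> LLM stand-in
print(agent.answer("3 + 4"))        # now answered from the knowledge repository
```

The point the sketch tries to make concrete is the control flow the abstract emphasizes: reuse stored knowledge first, delegate to a specialized solver when one applies, and only then fall back to the general-purpose language model, storing the outcome for later adaptation.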
... AI can be classified into two major types: narrow AI, which is designed for specific tasks (e.g., facial recognition or voice commands), and general AI, which mimics a broader spectrum of human intellect. AI aims to develop systems with autonomous intelligent functionality, capable of problem-solving, decisionmaking, and performing tasks typically requiring human intelligence [33]. ...
Article
Full-text available
This comprehensive review unfolds a detailed narrative of Artificial Intelligence (AI) making its foray into radiology, a move that is catalysing transformational shifts in the healthcare landscape. It traces the evolution of radiology, from the initial discovery of X-rays to the application of machine learning and deep learning in modern medical image analysis. The primary focus of this review is to shed light on AI applications in radiology, elucidating their seminal roles in image segmentation, computer-aided diagnosis, predictive analytics, and workflow optimisation. A spotlight is cast on the profound impact of AI on diagnostic processes, personalised medicine, and clinical workflows, with empirical evidence derived from a series of case studies across multiple medical disciplines. However, the integration of AI in radiology is not devoid of challenges. The review ventures into the labyrinth of obstacles that are inherent to AI-driven radiology—data quality, the ’black box’ enigma, infrastructural and technical complexities, as well as ethical implications. Peering into the future, the review contends that the road ahead for AI in radiology is paved with promising opportunities. It advocates for continuous research, embracing avant-garde imaging technologies, and fostering robust collaborations between radiologists and AI developers. The conclusion underlines the role of AI as a catalyst for change in radiology, a stance that is firmly rooted in sustained innovation, dynamic partnerships, and a steadfast commitment to ethical responsibility.
... "strong AI") soll die Abläufe im menschlichen Gehirn nachahmen. Die Merkmale eines solchen Systems sind Bewusstsein und Empathie [15]. Eine derartige KI soll Menschen bei schwierigen Problemen und Aufgaben unterstützen, aus Erfahrung lernen und selbstständig Entscheidungen treffen [2]. ...
Chapter
Artificial intelligence (AI) has become indispensable in the field of parcel delivery. More and more companies are using AI systems to make the customer's purchasing experience in this area more efficient and pleasant. Using a systematic literature review of five discipline-relevant databases with pertinent search terms, this work identifies scientific publications in the field of logistics related to the use of AI in parcel delivery. After a brief presentation of the theoretical foundations of AI and machine learning, the current developments of AI in logistics, especially in B2C commerce with parcel shipping, are described on the basis of the literature review and selected practical examples, and the effects and consequences that the use of AI could have in this area in the future are outlined. A SWOT analysis is used to examine the most important factors of AI in parcel delivery.
... In this direction, artificial intelligence is also described as 'software (and possibly hardware) systems designed by people that, given a complex purpose, act in physical and digital dimensions by sensing their environment, and by collecting and interpreting data' (Samoili et al., 2020). Considering that artificial intelligence basically refers to systems with capabilities equivalent to human intelligence, artificial intelligence should have the following capabilities (Pennachin and Goertzel 2007): ...
... AI can be classified into two major types: narrow AI, which is designed for specific tasks (e.g., facial recognition or voice commands), and general AI, which mimics a broader spectrum of human intellect. AI aims to develop systems with autonomous intelligent functionality, capable of problem-solving, decision-making, and performing tasks typically requiring human intelligence [33]. ...
Preprint
Full-text available
This comprehensive review unfolds a detailed narrative of Artificial Intelligence (AI) making its foray into radiology, a move that is catalysing transformational shifts in the healthcare landscape. It sheds light on the journey of radiology, from the pioneering discovery of X-rays to today’s intricate imaging technologies, infused with machine learning and deep learning in medical image analysis. At the crux of this review lies an in-depth study of AI applications in radiology, elucidating its seminal roles in image segmentation, computer-aided diagnosis, predictive analytics, and workflow optimisation. A spotlight is cast on the profound impact of AI on diagnostic processes, personalised medicine, and clinical workflows, with empirical evidence derived from a series of case studies across multiple medical disciplines. However, the integration of AI in radiology is not devoid of challenges. The review ventures into the labyrinth of obstacles that are inherent to AI-driven radiology — data quality, the ’black box’ enigma, infrastructural and technical complexities, as well as ethical implications. Peering into the future, the review contends that the road ahead for AI in radiology is paved with promising opportunities. It advocates for continuous research, embracing avant-garde imaging technologies, and fostering robust collaborations between radiologists and AI developers. It concludes by firmly cementing the role of AI as a catalyst for change in radiology, a stance that is firmly rooted in sustained innovation, dynamic partnerships, and a steadfast commitment to ethical responsibility.
... The commonly known Artificial Intelligence (AI) is a subset of Artificial General Intelligence (AGI). As a matter of fact, the current AI is known as "narrow AI", a specific program that can solve problems in a specialized area (Pennachin & Goertzel, 2007). However, most of these narrow AIs require input that fits their requirements before they can begin solving problems. ...
Article
Full-text available
Humanoids are part of Social Assistive Robotics (SAR) and have been designed with comprehensive artificial intelligence systems that are widely used for elderly care, rehabilitation of people with physical disabilities, and intervention for individuals with cognitive impairment. Nowadays, with the advance of this technology, the world is also utilizing these humanoids for social activities and services. Consequently, controversial questions have arisen among religious scholars, academicians, clinicians, service providers, and societies about the urgency of using humanoid robots for therapy in brain-impairment interventions, elderly care, rendering general services, or as social companions, as the world seems to accept the humanoid as one of the modes that shows potential for better outcomes. Given this problem, there is an urgent need to develop Fiqh Robotics to guide the nation at large on the permissibility of using humanoids for social activities, services, and rehabilitation. A qualitative method is used in analysing and synthesizing the relevant literature and document review to develop this Fiqh Robotics within the framework of Maqasid Shariah, combined with the accelerating rapid growth of Artificial Intelligence (AI) worldwide, particularly discussing the measurement of Humanoid Robot Interaction (HRI), Maqasid Shariah principles, Islamic legal maxims, and the permissibility of using this approach in Islam.
... Most user activities on digital platforms are complex behaviors resulting from human users' underlying intentions, goals, and belief systems. Although a bot operating in digital spaces need not fully emulate humans to achieve generalizable behavior, it is essential to consider the intricacies and sophistication of human users' behavior on the web during bot design [26]. To that end, engineering bots with behavior models similar to human users might take into account existing approaches of measuring generalizable user behavior while not having to replicate human cognition as such [27]. ...
Preprint
Full-text available
Software bots operating in multiple virtual digital platforms must understand the platforms' affordances and behave like human users. Platform affordances or features differ from one application platform to another or through a life cycle, requiring such bots to be adaptable. Moreover, bots in such platforms could cooperate with humans or other software agents for work or to learn specific behavior patterns. However, present-day bots, particularly chatbots, other than language processing and prediction, are far from reaching a human user's behavior level within complex business information systems. They lack the cognitive capabilities to sense and act in such virtual environments, rendering their development a challenge to artificial general intelligence research. In this study, we problematize and investigate assumptions in conceptualizing software bot architecture by directing attention to significant architectural research challenges in developing cognitive bots endowed with complex behavior for operation on information systems. As an outlook, we propose alternate architectural assumptions to consider in future bot design and bot development frameworks.
... Most user activities on digital platforms are complex behaviors resulting from human users' underlying intentions, goals, and belief systems. Although a bot operating in digital spaces need not fully emulate humans to achieve generalizable behavior, it is essential to consider the intricacies and sophistication of human users' behavior on the web during bot design [26]. To that end, engineering bots with behavior models similar to human users might take into account existing approaches of measuring generalizable user behavior while not having to replicate human cognition as such [27]. ...
Conference Paper
Full-text available
Software bots operating in multiple virtual digital platforms must understand the platforms’ affordances and behave like human users. Platform affordances or features differ from one application platform to another or through a life cycle, requiring such bots to be adaptable. Moreover, bots in such platforms could cooperate with humans or other software agents for work or to learn specific behavior patterns. However, present-day bots, particularly chatbots, other than language processing and prediction, are far from reaching a human user’s behavior level within complex business information systems. They lack the cognitive capabilities to sense and act in such virtual environments, rendering their development a challenge to artificial general intelligence research. In this study, we problematize and investigate assumptions in conceptualizing software bot architecture by directing attention to significant architectural research challenges in developing cognitive bots endowed with complex behavior for operation on information systems. As an outlook, we propose alternate architectural assumptions to consider in future bot design and bot development frameworks. Keywords: cognitive bot, cognitive architecture, problematization
... The result of this de-emphasis is a conflict of motivation, where the personal motivation of the AI could be viewed as the efficient prioritization of a team's goal, which may not be the motive of human teammates. Although humans may be able to iteratively balance personal and team motives when operating in a team, this would be a greater challenge for AI teammates as they will lack a level of general intelligence, especially in early cases of human-AI teaming (Flathmann et al., 2020; Pennachin and Goertzel, 2007). While this may not make AI teammates highly performative from a raw performance perspective, we know that raw performance is not the only component that makes an effective and compatible team member (Salas et al., 2008; Brannick et al., 1993), even when that member is an AI (Bansal et al., 2019). ...
... Early AI research focused on "artificial general intelligence"; however, developing this kind of AI is both challenging and complicated. Current focus is therefore on "artificial narrow intelligence", which develops systems with the ability to perform a well-defined single task extremely well [17,18]. Therefore, almost all modern AI-based healthcare solutions are considered "artificial narrow intelligence". ...
Chapter
Full-text available
Diagnostic imaging (DI) refers to techniques and methods of creating images of the body’s internal parts and organs, with or without the use of ionizing radiation, for purposes of diagnosing, monitoring and characterizing diseases. By default, DI equipment is technology-based and, in recent times, there has been widespread automation of DI operations in high-income countries, while low- and middle-income countries (LMICs) are yet to gain traction in automated DI. Advanced DI techniques employ artificial intelligence (AI) protocols to enable imaging equipment to perceive data more accurately than humans do and, automatically or under expert evaluation, to make clinical decisions such as diagnosis and characterization of diseases. In this narrative review, SWOT analysis is used to examine the strengths, weaknesses, opportunities and threats associated with the deployment of AI-based DI protocols in LMICs. Drawing from this analysis, a case is then made to justify the need for widespread AI applications in DI in resource-poor settings. Among other strengths discussed, AI-based DI systems could enhance accuracies in diagnosis, monitoring, and characterization of diseases and offer efficient image acquisition, processing, segmentation and analysis procedures, but may have weaknesses regarding the need for big data, huge initial and maintenance costs, and inadequate technical expertise of professionals. They present opportunities for synthetic modality transfer, increased access to imaging services, and protocol optimization; and threats of input training data biases, lack of regulatory frameworks, and perceived fear of job losses among DI professionals. The analysis showed that successful integration of AI in DI procedures could position LMICs towards achievement of universal health coverage by 2030/2035. LMICs will, however, have to learn from the experiences of advanced settings, train critical staff in relevant areas of AI, and proceed to develop in-house AI systems with all relevant stakeholders onboard.
Chapter
Full-text available
Artificial intelligence (AI) is rapidly being implemented in the healthcare sphere, threatening the ability of patients to make judgments tailored to their personal circumstances and beliefs. This book is concerned with the ability of two legal systems, those of the UK and the U.S., to meet the resulting challenges posed to patient autonomy. It deploys a forward-looking analysis to identify the unique problems raised by clinical AI and to anticipate the responses that are offered by the common law’s doctrine of informed consent. This assessment culminates in a concrete proposal for the regulation of medical AI and an affirmation of the law’s fundamental role in societies’ adaptation to innovative technologies.
Chapter
Future Tech Startups and Innovation in the Age of AI. Our book, Future of Tech Startups and Innovations in the Age of AI, mainly focuses on artificial intelligence (AI) tools, AI-based startups, AI-enabled innovations, autonomous AI agents (Auto-GPT), AI-based marketing startups, machine learning for organizations, AI-Internet of Things (IoT) for new tech companies, AI-enabled drones for the agriculture industry, machine learning (ML)/deep learning (DL)-based drip farming, AI-based driverless cars, AI-based weather prediction startups, AI tools for personal branding, AI-based teaching, AI-based doctor/hospital startups, AI for game companies, AI-based finance tools, AI for human resource management, AI-powered management tools, AI tools for future pandemics, AI/ML-based transportation companies, AI for media, AI for career counseling, AI for customer care, AI for next-generation businesses, and many more applications. AI tools and techniques will revolutionize startups all over the world. Entrepreneurs, engineers, and practitioners have already moved toward AI-based solutions to reshape businesses. AI/ML will create possibilities and opportunities for improving human lifestyles. AI-enabled startups will work on cost-effective solutions to solve difficult problems. Recently, many research companies have become interested in providing solutions and investing heavily in AI-based startups. AI-driven products will revolutionize the "smart world." AI computing tech companies will help to model human speech recognition systems. Also, AI-based startups will focus on perception and reasoning of autonomous robotic systems. AI/ML-based tech startups will introduce smart online education systems for future pandemics. More interestingly, people are also moving toward online job opportunities and trying to work from home. Future innovation needs closer relations between academia and industry. Therefore, online platforms need to be introduced that focus solely on academia-industry linkage. Future AI tech-based startups will focus more on research and development to introduce novel products to the market. Accordingly, engineers and many other people should be trained on AI tools and techniques to introduce innovative solutions for the smart world. In addition, integration of many new technologies with AI will be made possible. AI with IoT, smart cities, unmanned aerial vehicles (UAVs), wireless sensor networks, software-defined networks, network management, vehicular ad hoc networks, flying ad hoc networks, wireless communication technologies, ML, reinforcement learning, federated learning, and other mechanisms will introduce new technological products.
Article
The article is devoted to the problems of how future teachers perceive and understand the risks and new opportunities of artificial intelligence in education. The purpose of the study was to find an answer to the question of how modern students of a pedagogical university, with varying degrees of academic success and computer literacy, perceive the rapid introduction of artificial intelligence into educational practice. To answer it, the following methods were used: expert assessment of students' academic success; expert assessment of students' computer literacy (using the "polar points" method); as well as a projective research conversation and a questionnaire compiled on the basis of an analysis of modern theoretical sources. The study sample consisted of 216 undergraduate and graduate students in the field of teacher education. The main result of the work is that future teachers have not yet fully realized the risks and benefits that the introduction of artificial intelligence into education brings. The study examined both the general attitude of students towards common threats of the spread of AI and the dependence of this attitude on various factors (the level of the educational program, and experience of using AI in a certain area or the lack of such experience). The results showed that slightly more than half of the students do not use the capabilities of artificial intelligence to solve educational tasks, but among them a fairly large percentage use it to solve everyday and personal problems. The assessment of the threats of the spread of artificial intelligence depends significantly on the degree and nature of the use of artificial intelligence systems in life and work. The use of artificial intelligence in solving professional pedagogical tasks is usually associated with a calmer assessment of both the capabilities of artificial intelligence and its threats. The idea of restructuring educational programs in order to maintain a minimum intellectual load on the student met with a generally positive assessment, although this varied significantly across categories of respondents.
Article
The article is devoted to the analysis of the concept of artificial general intelligence (AGI) and its interpretation proposed by the Russian philosopher David Dubrovsky in his recent research papers. The first part of the article briefly outlines the current approaches to defining the concept of “artificial general intelligence”, including interpreting it as an artificial intelligent system capable of achieving common goals in a variety of environments. Referring to the texts of the most influential foreign researchers and developers, the author demonstrates the parallels between their proposed approaches to understanding general artificial intelligence and those interpretations proposed by David Dubrovsky and his co-authors. In particular, the commonality in the interpretations of the concept of the “world model” (Yann LeCun) and the concept of “techno-umwelt” is shown, as well as parallels between the hypothesis of “universal embodied AI” (Ben Goertzel) and the arguments of the Russian philosopher about the possible implementation of AI through its involvement in various types of interactions with various worlds, virtual and physical. In the second part of the article, the potential of using the information approach developed by David Dubrovsky to solve the mind-body problem as a basis for explaining the phenomenon of general artificial intelligence is outlined. It is shown that, despite the need to refine the concept of information causality proposed by the philosopher, his theory can contribute to a better understanding of the connection of possible AGI competencies with the phenomena of subjective reality. In conclusion, the key problems that currently make it difficult to answer the question of the dependence of the qualities of general artificial intelligence on the presence of phenomenal consciousness are outlined. The emphasis is placed on the need to continue interdisciplinary cooperation between representatives of the cognitive sciences, developers, and philosophers, whose interaction is designed to help solve the characteristic difficulties associated with both the problem of conceptualizing the concept of “artificial general intelligence” and the problem of identifying consciousness in artificial intelligent systems.
Book
Future Tech Startups and Innovation in the Age of AI. Our book, Future of Tech Startups and Innovations in the Age of AI, mainly focuses on artificial intelligence (AI) tools, AI-based startups, AI-enabled innovations, autonomous AI agents (Auto-GPT), AI-based marketing startups, machine learning for organizations, AI-Internet of Things (IoT) for new tech companies, AI-enabled drones for the agriculture industry, machine learning (ML)/deep learning (DL)-based drip farming, AI-based driverless cars, AI-based weather prediction startups, AI tools for personal branding, AI-based teaching, AI-based doctor/hospital startups, AI for game companies, AI-based finance tools, AI for human resource management, AI-powered management tools, AI tools for future pandemics, AI/ML-based transportation companies, AI for media, AI for career counseling, AI for customer care, AI for next-generation businesses, and many more applications. AI tools and techniques will revolutionize startups all over the world. Entrepreneurs, engineers, and practitioners have already moved toward AI-based solutions to reshape businesses. AI/ML will create possibilities and opportunities for improving human lifestyles. AI-enabled startups will work on cost-effective solutions to solve difficult problems. Recently, many research companies have become interested in providing solutions and investing heavily in AI-based startups. AI-driven products will revolutionize the "smart world." AI computing tech companies will help to model human speech recognition systems. Also, AI-based startups will focus on perception and reasoning of autonomous robotic systems. AI/ML-based tech startups will introduce smart online education systems for future pandemics. More interestingly, people are also moving toward online job opportunities and trying to work from home. Future innovation needs closer relations between academia and industry. Therefore, online platforms need to be introduced that focus solely on academia-industry linkage. Future AI tech-based startups will focus more on research and development to introduce novel products to the market. Accordingly, engineers and many other people should be trained on AI tools and techniques to introduce innovative solutions for the smart world. In addition, integration of many new technologies with AI will be made possible. AI with IoT, smart cities, unmanned aerial vehicles (UAVs), wireless sensor networks, software-defined networks, network management, vehicular ad hoc networks, flying ad hoc networks, wireless communication technologies, ML, reinforcement learning, federated learning, and other mechanisms will introduce new technological products.
Chapter
This chapter covers basic definitions, predictions, and expectations in the fields of robotics and machine ethics. It starts by presenting a working definition of “intelligent robot” and then highlights the importance of examining ethical questions related to AI and robotics. By reviewing expert predictions and surveys on technological singularity and the likelihood of AI with broad capacities, the chapter underscores the need for proactive research on the ethical issues involved. It argues that the generally optimistic views of researchers about AI’s future impact further justify this exploration.
Research
Full-text available
Against the backdrop of the current scientific discourse on the shortage of skilled workers in the real estate industry, AI technologies are becoming increasingly important in the digital age. The availability of qualified specialists over the entire life cycle of a property represents an increasing challenge for real estate companies. The objective of this research project is to conduct a comprehensive literature analysis and empirical study to ascertain the impact of artificial intelligence (AI) on working life in the real estate industry and the organizational requirements for the acceptance of AI technologies within companies. The focus is on Deci and Ryan's self-determination theory in conjunction with the integrative technology acceptance model (TAM). The results of the analysis indicate that the use of artificial intelligence has a noticeable impact on people, with perceived acceptance and use of AI technologies in the real estate industry going hand in hand with a respectful approach to the technology. The integration of new AI technologies is changing the dynamics in the workplace while offering employees the opportunity to develop their skills as part of their daily work. The insights gained provide an outlook on the opportunities and challenges that employees and companies in the real estate industry must face in order to successfully shape the necessary use of AI technologies.
As a result, the appropriate use of AI in the real estate industry can lead to healthy and efficient interaction between humans and machines, thus creating an attractive and promising basis for digitalization in the real estate industry. In this context, it is of the utmost importance that companies in the real estate industry take specific measures to encourage their employees to utilise AI tools through targeted training programmes, in order to facilitate active participation and co-determination.
Chapter
In recent years, advances in computer processing power and the availability of suitable programming environments and algorithms have made it possible to solve some Artificial Intelligence subtasks in a satisfactory manner. This chapter provides an informal overview of the state of the art. Of particular importance here is the interpretation of sensor data, such as recognizing objects in photos, diagnosing diseases from images, or transcribing spoken language into text. There has also been progress in analyzing the meaning of language, e.g., machine translation from one language to another, answering questions by an AI system, or conducting meaningful dialogues by intelligent assistants. Finally, AI systems have been able to beat human experts at computer games, automatically drive vehicles in real-world traffic, or perform creative acts, such as inventing new stories. The techniques used will be explained in later chapters.
Article
After the introduction, the first part of the paper is devoted to defining the concepts of artificial intelligence and totalitarianism, where the importance of distinguishing between the current (machine learning) and the projected (superintelligence) phase in the development of artificial intelligence, i.e. between the embryonic (totalitarian movement out of power) and the established (totalitarian movement in power) stage in the development of totalitarianism is underlined. The second part of the paper examines the connection between the current level of artificial intelligence and the embryonic phase of totalitarianism, while the third part of the paper analyzes the potential relationship between the superintelligence and the established totalitarianism. It seems, considering the similarities and differences between the effects of contemporary and future artificial intelligence and the effects of earlier totalitarianism, that today (and in the future) we do not have a mere replica of totalitarian phases from the 20th century, but special totalitarian phenomena in the form of "capillary totalitarianism", i.e. "hypertotalitarianism". Last century's totalitarianism, as well as today's "capillary" variant of it, were not necessarily irreversible, but "hypertotalitarianism" will be. In conclusion, protective measures against the risk of artificial intelligence are proposed, in the form of the principle of exemption (modeled after the concept of conscientious objection).
Article
Full-text available
The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.
Article
In the twenty-five years since my prior paper on knowledge management, artificial intelligence has come roaring back, delivering significant and sensational innovation while introducing a panoply of controversial and adverse consequences; at the same time, knowledge management has atrophied and fallen from favour. In the ensuing years, also, philosophy has delivered a new school of thought, object-oriented ontology, which has shaken up the discipline and generated ongoing debate amongst its adherents. This paper looks critically across all three of these domains, makes reference back to the enquiry and recommendations in the prior paper, and tries to find a new way forward that will engage elements from each and move toward a more beneficial praxis for human knowledge and understanding. Note: this paper has also subsequently been published in IUP Journal of Knowledge Management, Vol. 22, No. 3, pp. 27-65.
Article
We can say that there are two types of definitions of the legal subject: the "knowing being" and the "acting being". In the pure sense of an "acting being", only the original constituent power can be the subject of law. All other legal subjects must be both knowing and acting subjects at the same time, since they must have the capacity to "know the law and act accordingly". Since there can be no capacity to know and act independent of natural persons, there is no reason to recognize legal persons as separate subjects of law. Whether animals can be subjects of law depends entirely on their cognitive capacity. In this sense, most non-human living beings cannot be subjects of law. We can also say that inanimate entities other than artificial intelligence cannot be subjects of law, again owing to their lack of cognitive capacity. The situation is different with respect to rights. Rights norms can generally be understood as norms that impose obligations on parties other than the right-holder. In other words, every right-conferring norm can be fully restated as an obligation-imposing norm. Therefore, because of the deontic structure of rights norms, it is not necessary for the entity that is to hold a right to have the capacity to know the law or to act according to it. In that case, we can say that the inanimate entities and animals we have concluded cannot be subjects of law can nevertheless hold rights.
Chapter
The central premise of implementing machines to understand minds is perhaps based on Emerson Pugh’s (in)famous quote: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t”. This circular paradox has led us to the quest for that quintessential ‘mastermind’ or ‘master machine’ that can unravel the mysteries of our minds. An important stepping stone towards this understanding is to examine how perceptrons—models of neurons in artificial neural networks—can help to decode processes that underlie disorders of the mind. This chapter focuses on the rapidly growing applications of machine learning techniques to model and predict human behaviour in a clinical setting. Mental disorders continue to remain an enigma and most discoveries, therapeutic or neurobiological, stem from serendipity. Although the surge in neuroscience over the last decade has certainly strengthened the foundations of understanding mental illness, we have just started to rummage at the tip of the iceberg. We critically review the applied aspects of artificial intelligence and machine learning in decoding important clinical outcomes in psychiatry. From predicting the onset of psychotic disorders to classifying mental disorders, long-range applications have been proposed and examined. The veridicality and implementation of these results in real-world settings will also be examined. We then highlight the promises, challenges, and potential solutions of implementing these operations to better model mental disorders.
Article
Full-text available
In this paper, we demonstrate how the language and reasonings that academics, developers, consumers, marketers, and journalists deploy to accept or reject AI as authentic intelligence has far-reaching bearing on how we understand our human intelligence and condition. The discourse on AI is part of what we call the “authenticity negotiation process” through which AI’s “intelligence” is given a particular meaning and value. This has implications for scientific theory, research directions, ethical guidelines, design principles, funding, media attention, and the way people relate to and act upon AI. It also has great impact on humanity’s self-image and the way we negotiate what it means to be human, existentially, culturally, politically, and legally. We use a discourse analysis of academic papers, AI education programs, and online discussions to demonstrate how AI itself, as well as the products, services, and decisions delivered by AI systems are negotiated as authentic or inauthentic intelligence. In this negotiation process, AI stakeholders indirectly define and essentialize what being human(like) means. The main argument we will develop is that this process of indirectly defining and essentializing humans results in an elimination of the space for humans to be indeterminate. By eliminating this space and, hence, denying indeterminacy, the existential condition of the human being is jeopardized. Rather than re-creating humanity in AI, the AI discourse is re-defining what it means to be human and how humanity is valued and should be treated.
Chapter
Full-text available
This study explores the strategic application areas and evolution process of artificial intelligence (AI) in businesses. AI has played a pivotal role in the technological evolution, emerging as an integrated system capable of solving complex problems within limited tasks. The development process encompasses subfields such as data analysis, machine learning, and deep learning, with a focus on continual improvement of algorithms through learning from extensive data sets. The study concentrates on the strategic applications of AI across various business sectors. In marketing, AI is employed to predict customer behavior and optimize marketing strategies. In accounting and auditing, AI holds the potential to automate financial processes and enhance error detection. In the banking and finance sector, emphasis is placed on integrating AI into key operations like credit assessment, fraud detection, and portfolio management. Furthermore, the study delves into the contributions of AI to business processes within the tourism, human resources, health, and education sectors. In tourism, AI applications that analyze customer demands and offer customized travel suggestions play a significant role. Within human resources, AI has the potential to be utilized in diverse areas, ranging from recruitment processes to employee satisfaction management. In health and education, applications like diagnostic support systems, treatment planning, and student performance analysis showcase the effectiveness of AI. As a result, it is anticipated that AI will confer a strategic advantage to businesses, fostering innovation by further optimizing business processes in the future. This study unveils the potential of AI in providing a competitive edge to businesses, illustrating successful application examples in various sectors.
Preprint
Full-text available
This paper introduces a preliminary concept aimed at achieving Artificial General Intelligence (AGI) by leveraging a novel approach rooted in two key aspects. Firstly, we present the General Intelligent Network (GIN) paradigm, which integrates information entropy principles with a generative network, reminiscent of Generative Adversarial Networks (GANs). Within the GIN network, original multimodal information is encoded as low information entropy hidden state representations (HPPs). These HPPs serve as efficient carriers of contextual information, enabling reverse parsing by contextually relevant generative networks to reconstruct observable information. Secondly, we propose a Generalized Machine Learning Operating System (GML System) to facilitate the seamless integration of the GIN paradigm into the AGI framework. The GML system comprises three fundamental components: an Observable Processor (AOP) responsible for real-time processing of observable information, an HPP Storage System for the efficient retention of low entropy hidden state representations, and a Multimodal Implicit Sensing/Execution Network designed to handle diverse sensory inputs and execute corresponding actions. By combining the GIN paradigm and GML system, our approach aims to create a holistic AGI system capable of encoding, processing, and reconstructing information in a manner akin to human-like intelligence. The synergy of information entropy principles and generative networks, along with the orchestrated functioning of the GML system, presents a promising avenue towards achieving advanced cognitive capabilities in artificial systems. This preliminary concept lays the groundwork for further exploration and refinement in the pursuit of true brain-like intelligence in machines.
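Read at this level of abstraction, the encode/reconstruct cycle described above resembles an autoencoder: observations are compressed into a compact hidden state and a generative decoder reconstructs them from that state. The following minimal Python sketch illustrates only that cycle under this reading; all names are illustrative, and the paper's entropy-based coding, multimodal inputs, and GAN-style components are not modeled.

import numpy as np

rng = np.random.default_rng(0)

# Toy "observable information": 200 samples of 16-dimensional data driven by 3 latent factors.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 16)) / np.sqrt(3)
X = latent @ mixing + 0.05 * rng.normal(size=(200, 16))

# A 4-dimensional code stands in for the compact hidden-state representation;
# the decoder is the generative "reverse parsing" step that reconstructs the input.
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

lr = 0.01
for step in range(2000):
    H = X @ W_enc            # encode observations into a low-dimensional state
    X_hat = H @ W_dec        # reconstruct the observations from that state
    err = X_hat - X
    grad_dec = (H.T @ err) / len(X)              # gradients of the mean squared error
    grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("reconstruction MSE:", float(((X @ W_enc @ W_dec - X) ** 2).mean()))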
Article
Full-text available
The prospect of developing artificial general intelligence (AGI) with the same comprehensive capabilities as the human mind presents humanity with both tremendous opportunities and dire risks. This paper explores the potential applications and implications of AGI across diverse domains including science, healthcare, education, security, and the economy. However, realizing AGI's benefits requires proactive alignment of its goals and values with those of humanity through responsible governance. As AGI approaches and possibly surpasses human-level intellectual abilities, we must grapple with complex ethical issues surrounding autonomy, consciousness, and disruptive societal impacts. The exact timeline for achieving AGI remains uncertain, but its emergence will likely stem from the convergence of advanced technologies like big data, neural networks, and quantum computing. Ultimately, the creation of AGI represents humanity's greatest opportunity to profoundly enhance flourishing, as well as our greatest challenge to steer its development toward benevolence rather than catastrophe. With sage preparation and foresight, AGI could usher in an unparalleled era of insight and invention for the betterment of all people. But without adequate safeguards and alignment, its disruptive potential could prove catastrophically destabilizing. This paper argues that prudently governing the transition to AGI is essential for harnessing its transformative power to elevate rather than endanger our collective future.
Article
This paper seeks to understand the potential for robust global control of lethal autonomous weapons systems (LAWS). The paper seeks to uncover the predominant views and trends in global decision-making about such weapons systems by observing the positions and preferences of States in the international system as a realistic indication of the more probable normative outcome. Through a thorough examination of publicly available positions of United Nations (UN) Member States, it establishes a typology of varying positions maintained by States and reveals the argumentative rationale for the major positions advanced. This typology turns out to be far from unified and is composed of the following categories: (1) States that support the prohibition of LAWS; (2) States that support the prohibition of LAWS, but do not support calls for an international ban treaty; (3) States that do not support (or oppose) the prohibition of LAWS; (4) States with “flexible” positions over the LAWS: oppose use or use under certain circumstances, but not the development and production; (5) States that expressed support for multilateral talks, but have not expressed a position on the prohibition or not of LAWS; and (6) States that have called for a legally binding instrument (or legal regulation) on LAWS (inclusive of both prohibitions and regulations). Regulation and human control emerge as factors that have significant value in the equation.
Chapter
This paper is an extension of my recent paper, which was presented at the AGI-22 conference. In this paper, I try to answer the comments I received during and after the conference and to clarify and explain in more detail the points and results that were missed or omitted from my previous paper due to the page limitation of the proceedings. Keywords: Artificial General Intelligence, Versatility-Efficiency Index, AGI Pyramid, Complexity, Power Consumption, Unsolved Problem Space, Intentional Vulnerability Imposition, Human-First Design, Computational Power, Hardware Architecture, AGI Society, Singularity
Preprint
Full-text available
Even in the most cutting-edge Artificial General Intelligence (AGI) endeavors, the disparity between humans and artificial systems is extremely apparent. Although this difference fundamentally divides the capabilities of each, human-level intelligence (HLI) has remained the aim of AGI for decades. This paper opposes the binarity of the Turing Test, which underlies this aim and served as the original criterion for a potentially intelligent machine. It discusses how AI experts misinterpreted the Imitation Game as a means to anthropomorphize computer systems and asserts that HLI is a red herring that distracts current research from relevant problems. Despite the extensive research on the potential design of an AGI application, there has been little consideration of how such a system will access and ingest data at a human-like level. Although current machines may emulate specific human attributes, AGI is developed under the pretense that this can be easily scaled up to a general intelligence level. This paper establishes contextual and rational attributes that perpetuate the variation between human and AI data collection abilities and explores the characteristics that current AGI lacks. After asserting that AGI should not be seeking HLI, its current state is analyzed, the Turing Test is reevaluated, and the future of AGI development is discussed within this framework.
Article
Full-text available
The major purpose of this research is to provide a thorough review and analysis of the interplay between artificial intelligence (AI) and psychology. I talk about state-of-the-art computer programs that are able to simulate human cognition and behavior (such as Human-Computer Interfaces, models of the mind, and data mining programs). Applications may be broken down into several sub-categories and have many different aspects. While developing artificially intelligent robots has been and continues to be the major goal of AI research and development, the widespread acceptance and usage of AI systems have resulted in a much broader transfer of technology. The article begins with a brief history of cognitive psychology, a discussion of its fundamental ideas and models, and a look at the ways in which the study is connected to artificial intelligence (AI). The second part of this article takes a closer look at the difficulties encountered by the field of human-computer interaction, along with its aims, duties, applications, and underlying psychological theories. Multiple scientific, pragmatic, and technical obstacles (complexity problems, disturbing coefficients, etc.) stand in the way of extending or overcoming these limits. We also demonstrate the potential use of mental modeling in the areas of diagnosis, manipulation, and education support in this work. Predictions may be made with the use of data mining, knowledge discovery, or expert systems (for instance, the prognoses of children with mental problems based on their settings). The article reviews the missing features and offers an overview of the coefficients used in the system. Finally, we discuss the application of expert systems and life simulation (applied mental model) in virtual reality to benefit autistic people and their loved ones.
Article
Full-text available
Space-variant (or foveating) vision architectures are of importance in both machine and biological vision. In this paper, we focus on a particular space-variant map, the log-polar map, which approximates the primate visual map, and which has been applied in machine vision by a number of investigators during the past two decades. Associated with the log-polar map, we define a new linear integral transform, which we call the exponential chirp transform. This transform provides frequency domain image processing for space-variant image formats, while preserving the major aspects of the shift-invariant properties of the usual Fourier transform. We then show that a log-polar coordinate transform in frequency provides a fast exponential chirp transform. This provides size- and rotation-invariance, in addition to shift-invariance, in the transformed space. Finally, we demonstrate the use of the fast exponential chirp algorithm on a database of images in a template-matching task, and also demonstrate its use for spatial filtering.
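As a rough illustration of the space-variant sampling this work builds on, the Python sketch below resamples an image onto a log-polar grid (the function name and parameters are mine). In this representation, scaling the input becomes a shift along the radial axis and rotating it becomes a shift along the angular axis, which is the invariance the exponential chirp transform exploits in the frequency domain; the chirp transform itself is not implemented here.

import numpy as np

def log_polar(image, out_shape=(64, 128), r_min=1.0):
    # Resample a 2-D image onto a log-polar grid: rows index log-radius, columns index angle.
    # Nearest-neighbour sampling; samples falling outside the input stay zero.
    H, W = image.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    r_max = min(cx, cy)
    n_r, n_theta = out_shape
    out = np.zeros(out_shape, dtype=image.dtype)
    for i in range(n_r):
        # radii spaced exponentially, i.e. a constant ratio between successive rings
        r = r_min * (r_max / r_min) ** (i / (n_r - 1))
        for j in range(n_theta):
            theta = 2.0 * np.pi * j / n_theta
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= x < W and 0 <= y < H:
                out[i, j] = image[y, x]
    return out

# A shift of the rows of the log-polar image corresponds to scaling the original image,
# and a shift of the columns to rotating it.
img = np.random.rand(128, 128)
print(log_polar(img).shape)  # (64, 128)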
Article
Full-text available
This study examined infants' use of contour length in number discrimination tasks. We systematically varied number and contour length in a visual habituation experiment in order to separate these two variables. Sixteen 6- to 8-month-old infants were habituated to displays of either two or three black squares on a page. They were then tested with alternating displays of either a familiar number of squares with a novel contour length or a novel number of squares with a familiar contour length. Infants dishabituated to the display that changed in contour length, but not to the display that changed in number. We conclude that infants base their discriminations on contour length or some other continuous variable that correlates with it, rather than on number.
Article
Full-text available
Categorizations which humans make of the concrete world are not arbitrary but highly determined. In taxonomies of concrete objects, there is one level of abstraction at which the most basic category cuts are made. Basic categories are those which carry the most information, possess the highest category cue validity, and are, thus, the most differentiated from one another. The four experiments of Part I define basic objects by demonstrating that in taxonomies of common concrete nouns in English based on class inclusion, basic objects are the most inclusive categories whose members: (a) possess significant numbers of attributes in common, (b) have motor programs which are similar to one another, (c) have similar shapes, and (d) can be identified from averaged shapes of members of the class. The eight experiments of Part II explore implications of the structure of categories. Basic objects are shown to be the most inclusive categories for which a concrete image of the category as a whole can be formed, to be the first categorizations made during perception of the environment, to be the earliest categories sorted and earliest named by children, and to be the categories most codable, most coded, and most necessary in language.
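The notion of category cue validity used here can be stated schematically. In the usual formalization (the notation below is conventional rather than quoted from the paper), the cue validity of an attribute f for a category c is the conditional probability of the category given the attribute, and the cue validity of the whole category sums these over its attributes; basic-level categories are then the most inclusive categories at which this sum peaks:

\[
v(f, c) = P(c \mid f), \qquad V(c) = \sum_{f \in A(c)} P(c \mid f),
\]

where A(c) denotes the set of attributes common to members of c.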
Article
Full-text available
The ACT-R system is a general system for modeling a wide range of higher level cognitive processes. Recently, it has been embellished with a theory of how its higher level processes interact with a visual interface. This includes a theory of how visual attention can move across the screen, encoding information into a form that can be processed by ACT-R. This system is applied to modeling several classic phenomena in the literature that depend on the speed and selectivity with which visual attention can move across a visual display. ACT-R is capable of interacting with the same computer screens that subjects do and, as such, is well suited to provide a model for tasks involving human-computer interaction. In this article, we discuss a demonstration of ACT-R's application to menu selection and show that the ACT-R theory makes unique predictions, without estimating any parameters, about the time to search a menu. These predictions are confirmed.
Article
Full-text available
Presents a new theory of subjective probability according to which different descriptions of the same event can give rise to different judgments. The experimental evidence confirms the major predictions of the theory. First, judged probability increases by unpacking the focal hypothesis and decreases by unpacking the alternative hypothesis. Second, judged probabilities are complementary in the binary case and subadditive in the general case, contrary to both classical and revisionist models of belief. Third, subadditivity is more pronounced for probability judgments than for frequency judgments and is enhanced by compatible evidence. The theory provides a unified treatment of a wide range of empirical findings. It is extended to ordinal judgments and to the assessment of upper and lower probabilities. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
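The core of the theory can be written compactly. In the standard formulation (notation below is conventional), the judged probability that hypothesis A rather than its alternative B holds is the normalized support s assigned to their descriptions, and unpacking a hypothesis into exclusive components can only increase its total support:

\[
P(A, B) = \frac{s(A)}{s(A) + s(B)}, \qquad s(A) \le s(A_1) + s(A_2) \ \text{when } A \text{ is unpacked into } A_1 \vee A_2,
\]

which yields binary complementarity, P(A, B) + P(B, A) = 1, together with subadditivity over finer partitions.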
Article
Full-text available
Argues that the main point of disagreement in the debate over the nature of mental imagery concerns the following: (a) whether certain aspects of the way in which images are transformed should be attributed to intrinsic knowledge-independent properties of the medium in which images are instantiated or the mechanisms by which they are processed; or (b) whether images are typically transformed in certain ways because Ss take their task to be the simulation of the act of witnessing certain real events taking place and therefore use their tacit knowledge of the imaged situation to cause the transformation to proceed as they believe it would have proceeded in reality. The tacit knowledge account is seen as more plausible because empirical results demonstrate that both "mental scanning" and "mental rotation" transformations can be critically influenced by varying the instructions given to Ss and the precise form of the task used and, that the form of the influence is explainable in terms of the semantic content of Ss' beliefs and goals. (40 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Conducted a study with 153 undergraduates to determine if imagination could replace physical color and pattern in adaptation stimuli known to produce orientation-specific color aftereffects. Ss were given instructions to imagine the presence of colors within achromatic bar patterns in one adaptation condition and the presence of bar patterns onto homogeneous color fields in another condition—to pair specific colors with specific orientations, as in the standard procedure for producing the McCollough effect. Results indicate the presence of weak color aftereffects following both adaptation conditions. When bars were imagined onto colors, the aftereffects were characteristic of the McCollough effect; when colors were imagined onto bars, the effects were opposite those of the McCollough effect. The color aftereffects following imagination cannot be explained by traditional theories of feature-contingent color aftereffects, theories that generally assume the exclusive operation of feature detectors selectively tuned for color. Instead, these results emphasize the active role of imagery in forming color–feature associations. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Previous investigators have argued that basic color categories are structured in terms of a universal focal area with varying boundaries. In the present study 2 developmental implications were investigated: (a) that foci for color categories become established and are stabilized earlier than boundaries and (b) that focal judgments are always more stable than boundary judgments. A total of 20 kindergartners, 40 3rd graders, and 40 adults served in 3 color designation experiments modeled after those of B. Berlin and P. Kay (1969). Means and variances of focal and boundary judgments for the 8 basic chromatic terms were determined for the 3 groups. In general, both hypotheses were supported. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
In order to function robustly in the world, autonomous agents need to assimilate concepts for physical entities and relations, grounded in perception and action. They also need to assimilate concepts for perceptual properties like color, shape, and weight, and perhaps eventually even for nonphysical objects like unicorns. The process of acquiring concepts that carry meaning in terms of the agent's own physiology we call embodiment. Unlike current robotic agents, those endowed with embodied concepts will more readily understand high level instructions. As a consequence, these robots won't have to be instructed at a low level. We have developed an autonomous agent architecture that facilitates embodiment of action and perception, and accommodates embodied concepts for both physical and nonphysical objects, properties, and relations.
Article
Full-text available
We describe a Bayesian approach for learning Bayesian networks from a combination of prior knowledge and statistical data. First and foremost, we develop a methodology for assessing informative priors needed for learning. Our approach is derived from a set of assumptions made previously as well as the assumption of likelihood equivalence, which says that data should not help to discriminate network structures that represent the same assertions of conditional independence. We show that likelihood equivalence, when combined with previously made assumptions, implies that the user's priors for network parameters can be encoded in a single Bayesian network for the next case to be seen (a prior network) and a single measure of confidence for that network. Second, using these priors, we show how to compute the relative posterior probabilities of network structures given data. Third, we describe search methods for identifying network structures with high posterior probabilities. We describe polynomial algorithms for finding the highest-scoring network structures in the special case where every node has at most k=1 parent. For the general case (k>1), which is NP-hard, we review heuristic search algorithms including local search, iterative local search, and simulated annealing. Finally, we describe a methodology for evaluating Bayesian-network learning algorithms, and apply this approach to a comparison of various approaches.
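As a loose illustration of the search step, the Python sketch below runs a greedy local search over single-edge changes, the simplest of the heuristic procedures mentioned; it scores candidate structures with a Gaussian BIC stand-in rather than the paper's prior-network-based Bayesian score, and all function names are mine.

import itertools
import numpy as np

def is_acyclic(adj):
    # adj[i][j] == 1 means a directed edge i -> j; depth-first cycle check
    n = len(adj)
    color = [0] * n  # 0 = unvisited, 1 = on the current path, 2 = finished
    def dfs(u):
        color[u] = 1
        for v in range(n):
            if adj[u][v] and (color[v] == 1 or (color[v] == 0 and dfs(v))):
                return True
        color[u] = 2
        return False
    return not any(color[u] == 0 and dfs(u) for u in range(n))

def bic_score(data, adj):
    # Gaussian BIC, used here as a stand-in for the Bayesian scoring metric of the paper
    n, d = data.shape
    ll, n_params = 0.0, 0
    for j in range(d):
        parents = [i for i in range(d) if adj[i][j]]
        X = np.column_stack([data[:, parents], np.ones(n)]) if parents else np.ones((n, 1))
        beta, *_ = np.linalg.lstsq(X, data[:, j], rcond=None)
        var = ((data[:, j] - X @ beta) ** 2).mean() + 1e-9
        ll += -0.5 * n * (np.log(2 * np.pi * var) + 1.0)
        n_params += X.shape[1] + 1
    return ll - 0.5 * n_params * np.log(n)

def hill_climb(data, max_sweeps=20):
    # greedy local search over single-edge additions/removals
    d = data.shape[1]
    adj = [[0] * d for _ in range(d)]
    best = bic_score(data, adj)
    for _ in range(max_sweeps):
        improved = False
        for i, j in itertools.permutations(range(d), 2):
            adj[i][j] ^= 1                      # toggle the edge i -> j
            if is_acyclic(adj):
                s = bic_score(data, adj)
                if s > best:
                    best, improved = s, True
                    continue                    # keep the change
            adj[i][j] ^= 1                      # otherwise revert it
        if not improved:
            break
    return adj, best

rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 0.8 * x0 + 0.5 * rng.normal(size=500)
x2 = -0.6 * x1 + 0.5 * rng.normal(size=500)
structure, score = hill_climb(np.column_stack([x0, x1, x2]))
print(structure, round(score, 1))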
Article
An attempt is made to carry out a program (outlined in a previous paper) for defining the concept of a random or patternless, finite binary sequence, and for subsequently defining a random or patternless, infinite binary sequence to be a sequence whose initial segments are all random or patternless finite binary sequences. A definition based on the bounded-transfer Turing machine is given detailed study, but insufficient understanding of this computing machine precludes a complete treatment. A computing machine is introduced which avoids these difficulties. Key Words and Phrases: computational complexity, sequences, random sequences, Turing machines. CR Categories: 5.22, 5.5, 5.6
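In the later, machine-independent form of the same idea (stated here in modern program-length notation rather than in terms of the paper's bounded-transfer machines), a finite binary string is patternless when no program appreciably shorter than the string itself produces it, and an infinite sequence is random when this holds for each of its initial segments:

\[
x \text{ of length } n \text{ is } c\text{-random} \iff K(x) \ge n - c,
\]

where K(x) is the length of the shortest program that outputs x on a fixed universal machine and c is a small constant.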
Article
This book is about abduction, 'the logic of Sherlock Holmes', and about how some kinds of abductive reasoning can be programmed in a computer. The work brings together Artificial Intelligence and philosophy of science and is rich with implications for other areas such as, psychology, medical informatics, and linguistics. It also has subtle implications for evidence evaluation in areas such as accident investigation, confirmation of scientific theories, law, diagnosis, and financial auditing. The book is about certainty and the logico-computational foundations of knowledge; it is about inference in perception, reasoning strategies, and building expert systems.
Article
The AM program was constructed by Lenat in 1975 as an early experiment in getting machines to learn by discovery. In the preceding article in this issue of the AI Journal, Ritchie and Hanna focus on that work as they raise several fundamental questions about the methodology of artificial intelligence research. Part of this paper is a response to the specific points they make. It is seen that the difficulties they cite fall into four categories, the most serious of which are omitted heuristics, and the most common of which are miscommunications. Their considerations, and our post-AM work on machines that learn, have clarified why AM succeeded in the first place, and why it was so difficult to use the same paradigm to discover new heuristics. Those recent insights spawn questions about “where the meaning really resides” in the concepts discovered by AM. This in turn leads to an appreciation of the crucial and unique role of representation in theory formation, specifically the benefits of having syntax mirror semantics. Some criticism of the paradigm of this work arises due to the ad hoc nature of many pieces of the work; at the end of this article we examine how this very adhocracy may be a potential source of power in itself.
Article
Cerebral blood flow was measured using positron emission tomography (PET) in three experiments while subjects performed mental imagery or analogous perceptual tasks. In Experiment 1, the subjects either visualized letters in grids and decided whether an X mark would have fallen on each letter if it were actually in the grid, or they saw letters in grids and decided whether an X mark fell on each letter. A region identified as part of area 17 by the Talairach and Tournoux (1988) atlas, in addition to other areas involved in vision, was activated more in the mental imagery task than in the perception task. In Experiment 2, the identical stimuli were presented in imagery and baseline conditions, but subjects were asked to form images only in the imagery condition; the portion of area 17 that was more active in the imagery condition of Experiment 1 was also more activated in imagery than in the baseline condition, as was part of area 18. Subjects also were tested with degraded perceptual stimuli, which caused visual cortex to be activated to the same degree in imagery and perception. In both Experiments 1 and 2, however, imagery selectively activated the extreme anterior part of what was identified as area 17, which is inconsistent with the relatively small size of the imaged stimuli. These results, then, suggest that imagery may have activated another region just anterior to area 17. In Experiment 3, subjects were instructed to close their eyes and evaluate visual mental images of upper case letters that were formed at a small size or large size. The small mental images engendered more activation in the posterior portion of visual cortex, and the large mental images engendered more activation in anterior portions of visual cortex. This finding is strong evidence that imagery activates topographically mapped cortex. The activated regions were also consistent with their being localized in area 17. Finally, additional results were consistent with the existence of two types of imagery, one that rests on allocating attention to form a pattern and one that rests on activating stored visual memories.
Article
The Novamente AI Engine, a novel AI software system, is briefly reviewed. Unlike the majority of contemporary AI projects, Novamente is aimed at artificial general intelligence, rather than being restricted by design to one particular application domain, or to a narrow range of cognitive functions. Novamente integrates aspects of many prior AI projects and paradigms, including symbolic, neural-network, evolutionary programming and reinforcement learning approaches; but its overall architecture is unique, drawing on system-theoretic ideas regarding complex mental dynamics and associated emergent patterns.
Article
A large body of evidence suggests that visual attention selects objects as well as spatial locations. If attention is to be regarded as truly object based, then it should operate not only on object representations that are explicit in the image, but also on representations that are the result of earlier perceptual completion processes. Reporting the results of two experiments, we show that when attention is directed to part of a perceptual object, other parts of that object enjoy an attentional advantage as well. In particular, we show that this object-specific attentional advantage accrues to partly occluded objects and to objects defined by subjective contours. The results corroborate the claim that perceptual completion precedes object-based attentional selection.
Article
In Part I, four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols—presumably containing all of the information to be used in the induction. Almost all, if not all problems in induction can be put in this form. Some strong heuristic arguments have been obtained for the equivalence of the last three models. One of these models is equivalent to a Bayes formulation, in which a priori probabilities are assigned to sequences of symbols on the basis of the lengths of inputs to a universal Turing machine that are required to produce the sequence of interest as output. Though it seems likely, it is not certain whether the first of the four models is equivalent to the other three. Few rigorous results are presented. Informal investigations are made of the properties of these models. There are discussions of their consistency and meaningfulness, of their degree of independence of the exact nature of the Turing machine used, and of the accuracy of their predictions in comparison to those of other induction methods. In Part II these models are applied to the solution of three problems—prediction of the Bernoulli sequence, extrapolation of a certain kind of Markov chain, and the use of phrase structure grammars for induction. Though some approximations are used, the first of these problems is treated most rigorously. The result is Laplace's rule of succession. The solution to the second problem uses less certain approximations, but the properties of the solution that are discussed, are fairly independent of these approximations. The third application, using phrase structure grammars, is least exact of the three. First a formal solution is presented. Though it appears to have certain deficiencies, it is hoped that presentation of this admittedly inadequate model will suggest acceptable improvements in it. This formal solution is then applied in an approximate way to the determination of the “optimum” phrase structure grammar for a given set of strings. The results that are obtained are plausible, but subject to the uncertainties of the approximation used.
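The Bayes formulation sketched above is usually written today as an algorithmic prior (the notation follows later treatments of the model rather than the paper itself): a binary sequence x receives a priori probability obtained by summing over all programs whose output begins with x, and extrapolation weights the possible continuations by this prior:

\[
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}, \qquad P(a \mid x) = \frac{M(xa)}{M(x)},
\]

where U is a universal Turing machine, \ell(p) is the length of program p, and x\ast denotes any output beginning with x.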
Article
Helmholtz' "Popular Scientific Lectures" have spread his fame far and wide among educated people everywhere. While he was professor at Heidelberg and still a comparatively young man, nearly three-score years ago he composed the preface to the first edition of his monumental work on light and vision, in all their intricate and manifold relations to each other; and already considerably more than a decade has passed since the publication of the posthumous third edition of the Physiologische Optik which was brought up to date and greatly enlarged under the collaboration of Nagel, Gullstrand and v. Kries. Yet in all these years there has been no English translation of this great classical treatise; and unfortunately no similar work in English of any kind. It is interesting to note that both Young and Helmholtz, the two great pioneers in Physiological Optics, started on their careers in the medical profession, and each of them afterwards gained his greatest renown in Physics. Apart from its own intrinsic value, the treatise on Physiological Optics is a model of scientific method and logical procedure that has hardly ever been excelled in these respects. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Editorial Introduction to ‘Evolutionary Origins of Morality: Cross-Disciplinary Perspectives’. The four principal papers presented here, with interdisciplinary commentary, discussion, and their authors’ responses, represent contemporary approaches to an evolutionary understanding of morality -- of the origins from which, and the paths by which, aspects or components of human morality evolved and converged. Their authors come out of no single discipline or school, but represent rather a convergence of largely independent work in primate ethology, anthropology, evolutionary biology, and dynamic systems modelling on related problems, conjectures and tentative conclusions. In inviting contributions I deliberately made no attempt to define morality more sharply than common language and understanding have left it, including our ordinary responses to right and wrong, but not all of the very diverse thinking about this and other practical concerns -- and about what we should make of all these -- that ethics encompasses. This was not because of any lack of past definitions of morality, but because the history of controversy over these makes it advisable not to prejudge the question pending our inquiry's results. From these we may see some outlines of plausible answers begin to emerge, only a brief sketch of some of which I can attempt here, sampling the hypotheses, scientific support, critical discussion of this, and refinement of positions in authors’ responses.
Article
Packed with real-time computer simulations and rigorous demonstrations of these phenomena, this book includes results on vision, speech, cognitive information processing, adaptive pattern recognition, adaptive robotics, conditioning and attention, cognitive-emotional interactions, and decision making under risk. "Neural Networks and Natural Intelligence" first discusses neural network architecture for preattentive 3-D vision and then shows how this architecture provides a unified explanation, through systematic computer simulations, of many classical and recent phenomena from psycho-physics, visual perception, and cortical neurophysiology. It illustrates within the domain of preattentive boundary segmentation and featural filling-in, how computer experiments help to develop and refine computational vision models. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The color specimens developed by the OSA Committee on Uniform Color Scales are intended to provide a uniform sampling of the three-dimensional domain of realizable surface colors. The research of this article confirms the distinction between basic and nonbasic color terms and defines the locations of the eleven basic surface colors within the OSA space through systematic, monolexemic color naming of 424 samples of the OSA set seen against a gray background. It is concluded that the coordinate system of the OSA color space is well suited to the specification of color order: The locations of the basic colors are reasonably arranged and can be conveniently and precisely visualized.
Book
Effort was directed toward showing that the techniques that have emerged for constructing sophisticated problem-solving programs also provide us with new, strong tools for constructing theories of human thinking. They allow us to merge the rigor and objectivity associated with behaviorism with the wealth of data and complex behavior associated with the gestalt movement. To this end their key feature is not that they provide a general framework for understanding problem-solving behavior (although they do that too), but that they finally reveal with great clarity that the free behavior of a reasonably intelligent human can be understood as the product of a complex but finite and determinate set of laws. Although we know this only for small fragments of behavior, the depth of the explanation is striking. (Author)
Article
The Novamente AI Engine is briefly reviewed. The overall architecture is unique, drawing on system-theoretic ideas regarding complex mental dynamics and associated emergent patterns. We describe how these are facilitated by a novel knowledge representation which allows diverse cognitive processes to interact effectively. We then elaborate the two primary cognitive algorithms used to construct these processes: probabilistic term logic (PTL), and the Bayesian Optimization Algorithm (BOA). PTL is a highly flexible inference framework, applicable to domains involving uncertain, dynamic data, and autonomous agents in complex environments. BOA is a population-based optimization algorithm which can incorporate prior knowledge. While originally designed to operate on bit strings, our extended version also learns programs and predicates with variable length and tree-like structure, used to represent actions, perceptions, and internal state. We detail some of the specific dynamics and structures we expect to emerge through the interaction of the cognitive processes, outline our approach to training the system through experiential interactive learning, and conclude with a description of some recent results obtained with our partial implementation, including practical work in bioinformatics, natural language processing, and knowledge discovery.
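To make the optimization side concrete, the Python sketch below runs a deliberately simplified, univariate estimation-of-distribution loop on a toy bit-string problem: sample a population from a probabilistic model, select the most promising individuals, and re-estimate the model from them. BOA proper replaces the independent per-bit model with a learned Bayesian network, and the extension described above operates on programs rather than bit strings; neither is reproduced here, and all names are illustrative.

import numpy as np

def onemax(bits):
    # toy fitness: count of ones in the bit string
    return int(bits.sum())

def simple_eda(fitness, n_bits=20, pop_size=100, n_select=30, generations=40, seed=0):
    # Univariate estimation-of-distribution loop (UMDA-style); BOA would instead
    # learn a Bayesian network over the selected individuals at each generation.
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                         # marginal probability of each bit being 1
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-n_select:]]  # select the most promising solutions
        p = elite.mean(axis=0).clip(0.05, 0.95)      # re-estimate the model from them
    final = (rng.random((pop_size, n_bits)) < p).astype(int)
    return max(final, key=fitness)

best = simple_eda(onemax)
print(best, onemax(best))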