Article

Attitudes toward intelligent machines

... I think the best way to answer that question is to look at the historical context. When behaviourism held sway, anyone adverting to meaningful internal states in their explanation of intelligent behaviour was suspected of an ... Actually, I just made four points: (1) EAI hasn't been shown to be the best approach; (2) EAI hasn't been shown to be doomed; (3) non-EAI hasn't been shown to be the best approach; (4) non-EAI hasn't been shown to be doomed. What follows supports point (4) and, to a lesser extent, point (1). ...
... Of course, AI can benefit from an understanding of how the body accomplishes this grounding in the natural case. But a slavish copying of nature may be unnecessary and in some cases unhelpful (as it was in the achievement of artificial flight [5,8,49]; see Section 6). ...
Article
this paper, all verbatim quotations are reproduced with their original emphasis, except where indicated otherwise
... Many have said, following Lady Lovelace's dictum, that the machine can only do what it is told to do (Lovelace 1843, p. 722), or in the anxious language of the mid-20th century, the computer is but a 'fast moron'. IBM made this slur into doctrine in the 1950s to salve public fears of artificial intelligence sparked by the machine's highly publicised winning at draughts (checkers) and chess; the mantra then spread (McCorduck 1979, pp. 159, 173; Armer 1963). But IBM's 'fast moron' is not the machine we have, which in essential respects is the one Herman Goldstine and John von Neumann addressed in the first-ever paper on programming (Goldstine and von Neumann 1947). ...
Article
Full-text available
This essay celebrates the beginning of Digital Enlightenment Studies by raising some basic questions that are still uncomfortably incunabular. (See Jerome McGann on that, below.) The central question I pose is this: what are the consequences of applying digital methods to scholarly editing and how best might we think about them? It is an essay by an outsider intended for a scholarly commune of knowledgeable practitioners in the application of these methods. Whatever value the essay has comes from placing those methods in the broader context of digital humanities historically and philosophically considered. It starts with some of the earliest explorations of devices ‘to think with’, as Ivor Richards said of the codex; considers their fundamental role as modelling machines; takes up the first attempt to design the quasi-cognitive processes of digital computing; uses the author’s own fledgling project for a glimpse of how different these processes must be from our own; and finally ends with an example of their potential for propelling imaginative thought. Throughout are provocations to do something about the disciplinary amnesia of digital humanities, to see that it becomes of the humanities rather than merely in them.
... People have historically distrusted automated, intelligent decision-making systems such as AI/ML (Armer, 1995); organizational contexts are no exception (for a review, see Glikson & Woolley, 2020). Applicants can learn in several ways that an organization is using AI/ML in its hiring process. ...
Article
Full-text available
While many organizations' hiring practices now incorporate artificial intelligence (AI) and machine learning (ML), research suggests that job applicants may react negatively toward AI/ML-based selection practices. In the current research, we thus examined how organizations might mitigate adverse reactions toward AI/ML-based selection processes. In two between-subjects experiments, we recruited online samples of participants (undergraduate students and Prolific panelists, respectively) and presented them with vignettes representing various selection systems and measured participants' reactions to them. In Study 1, we manipulated (a) whether the system was managed by a human decision-maker, by AI/ML, or a combination of both (an “augmented” approach), and (b) the selection stage (screening, final stage). Results indicated that participants generally reacted more favorably toward augmented and human-based approaches, relative to AI/ML-based approaches, and further depended on participants' pre-existing familiarity levels with AI. In Study 2, we sought to replicate our findings within a specific process (selecting hotel managers) and application method (handling interview recordings). We found again that reactions toward the augmented approach generally depended on participants’ familiarity levels with AI. Our findings have implications for how (and for whom) organizations should implement AI/ML-based practices.
... However, it was Alan Mathison Turing who triggered the first ideas about intelligent machines in the modern sense. In his article 'Computing Machinery and Intelligence', Turing posed the question 'Can machines think?' (Turing, 1950; Turing, 1995; Turing, 2009). Some of the other studies carried out around the same time as Turing's include: (Armer, 1960; De Latil, 1956; Hovland, 1960; Kemeny, 1955; Mays, 1952; Pinsky, 1951; Shannon, 1950; Wilkes, 1953). The AI field founded by Turing, who is also remembered as the father of modern Computer Science (Daylight, 2015; Nerode, 2016: x), developed rapidly, especially through the effective solutions it offered to real-world problems across different disciplines. ...
Thesis
Full-text available
Artificial Intelligence (AI) is clearly one of the most important areas of interest in Computer Science. Thanks to the effective and consistent solutions it provides by simulating human thought and behaviour and the dynamics of nature, it has developed rapidly and proved its power by taking a place in almost every area of modern life. Its multidisciplinary applicability is related to its being easily adaptable to different types of problems, which has also caused AI to divide into sub-research fields. Optimization is one of the problem areas with which AI is intensely concerned. Optimization, which we can define generally as the effort to find the optimum, the most suitable solution, given the resources at hand, found its way into AI after classical techniques began to fall short. The successful results obtained led to the design of dedicated techniques for optimization problems and, as a result, a sub-research area devoted to AI-based optimization techniques emerged in the literature. AI-based optimization techniques are typically algorithms shaped by various logical and mathematical solution approaches and mostly designed by drawing inspiration from natural dynamics. The literature on such algorithms has gained great momentum, especially in recent years, and has attracted wide scientific interest. The objective of this thesis is to develop alternative AI-based optimization algorithms that are easier to code. To that end, essential background on AI and optimization is first presented and some prominent algorithms from the literature are discussed; two different continuous optimization algorithms are then developed. The algorithms, named the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA) respectively, include various mechanisms that can be examined under the heading of Swarm Intelligence, along with simple mathematical processes. The developed algorithms were put through several evaluation processes, and the findings showed that they are sufficiently successful at intelligent optimization.
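The thesis abstract above describes VOA and CoDOA only at the level of swarm-intelligence continuous optimizers; it does not give their update rules. As a hedged illustration of what the skeleton of such an algorithm typically looks like, the sketch below uses a generic particle-swarm update (a standard technique, not the thesis's actual method; all names and coefficients are illustrative assumptions) to minimize a simple test function:

```python
import random

def sphere(x):
    # Simple continuous test function: global minimum 0 at the origin
    return sum(xi * xi for xi in x)

def swarm_optimize(f, dim=2, n_particles=20, iters=200, bounds=(-5.0, 5.0), seed=42):
    """Minimal particle-swarm loop: each candidate moves toward its own best
    position and the swarm's best position, with random perturbation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best-so-far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best-so-far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The point of the sketch is the shared structure the abstract alludes to: a population of candidate solutions, stochastic attraction toward good positions, and iteration until convergence.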
... Intelligent behavior on the part of a machine no more implies complete functional equivalence between machine and brain than flying by an airplane implies complete functional equivalence between plane and bird (Armer, 1963). ... But there is more to the sciences of the artificial than defining the "true nature" of natural phenomena. ...
Article
Full-text available
The ability and compulsion to know are as characteristic of our human nature as are our physical posture and our languages. Knowledge and intelligence, as scientific concepts, are used to describe how an organism's experience appears to mediate its behavior. This report discusses the relation between artificial intelligence (AI) research in computer science and the approaches of other disciplines that study the nature of intelligence, cognition, and mind. The state of AI after 25 years of work in the field is reviewed, as are the views of its practitioners about its relation to cognate disciplines. The report concludes with a discussion of some possible effects on our scientific work of emerging commercial applications of AI technology, that is, machines that can know and can take part in human cognitive activities.
... The analogy can be used to emphasise that when we talk about intelligent systems or intelligence in joint systems, we should be very careful about what dimensions or functions are used as a basis for the assessment (Armer, 1963). We may easily be able to amplify one function, but it need not be the most important one or the one that is essential for the functioning of the joint system. ...
... A Rutherford would score as much as 10 on this scale, since he was sometimes very far ahead of the evidence on atomic structure. Great achievements were also made by Turing who sketched the essentials of digital computers about 10 years before these were actually in existence (Armer, 1963;Green, 1963;Oettinger, 1964). ...
Article
Full-text available
Some ways of looking at scientific originality are presented. By far "the most usual way for an original concept to start is in exactly 1 brain. The individual inventor is often more important than any corporate research by a team of people." Distinctions between problem solvers and problem finders are presented in the text and summarized in a table; the major dimensions of distinction involve definition, objective, method, and outcome. Major sections are: Problem solver and problem finder. The individual and the institute. Conclusions. "Originality and individuality are badly needed in science… . Intellectual honesty is required in the sense of willingness to take a chance on the tougher problems as well as the easier ones." Make some mistakes and admit them; more promising problems may be found in the failures; naive productivity is more desirable than sophisticated sterility. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
... People who do not address the questions regarding the important abstractions may therefore build simulations which very superficially model biological systems without realising that there are important phenomena to which they are ontologically blind and which they have not modelled. In some cases, attempting to get desired results by trying to replicate physical structures using artificial components may fail, like early attempts to replicate bird flight; whereas replication at a higher level of abstraction may be more successful – as happened in the history of achieving artificial flight (Armer, 1962, p. 334) (reprinted in Chrisley, 2000). ...
Article
Full-text available
This paper is concerned with some methodological and philosophical problems related both to the long-term objective of building human-like robots (like those 'in the movies') and to the short- and medium-term objectives of building robots with the capabilities of more or less intelligent animals. In particular, we claim that organisms are information-processing machines, and thus information-processing concepts will be essential for designing biologically-inspired robots. However, identifying relevant concepts is nontrivial, since what an information processor is doing cannot in general be determined simply by observing it. Having a general framework for describing and comparing agent architectures may help.
Chapter
In this chapter, I want to argue for the end of sex robots. I will problematise the use of the term "sex" that is associated with them and push back against attempts to dilute the meaning of "sex" to include anything. Initially, I considered a better description for these objects to be masturbation robots, but they function as a form of pornography, making them porn robots: pornographic representations of women and girls. As I will argue, it is a mistake to attribute sex to them as objects, and the arguments I set out in this chapter also apply to porn dolls.
Article
This article reconfigures the history of Artificial Intelligence (AI) and its accompanying tradition of criticism by excavating the work of Mortimer Taube, a pioneer in information and library sciences, whose magnum opus, Computers and Common Sense: The Myth of Thinking Machines (1961), has been mostly forgotten. By focusing on his attack on the General Problem Solver (GPS), the second major AI program, it conveys the essence of Taube's distinctive critique. I examine his incisive analysis of the social construction of "thinking machines," and conclude that, despite considerable changes to the underlying technology of AI, much of Taube's criticism remains relevant today. Moreover, his status as an "information processing" insider who criticized AI on behalf of the public good challenges the boundaries and focus of most critiques of AI over the past half-century. In sum, his work offers an alternative model from which contemporary AI workers and critics can learn much.
Chapter
The degree of controllability and auditability of automated bookkeeping systems follows from the ability of the available control and audit concepts to counter the deviation behaviour of automated bookkeeping systems. The degree of controllability of systems can also be measured by the extrema: (1) complete controllability exists in a system if, for every system run state, the corresponding inputs and outputs can be supplied; (2) complete non-controllability exists if the corresponding inputs and outputs can be supplied for no system run state.
Chapter
So far, we have concerned ourselves with the details of logic design, that is, we have developed the skills and the techniques to implement individual computer subunits. But no matter how well we understand the details of these circuits, the overall picture of a computer will be rather vague as long as we do not perceive the systematic organization of its components. If we want not only to recognize the structure of a specific machine, but also attempt to judge the significance of variations in the layout of different machines, it is further necessary to comprehend the philosophy of their design.
Chapter
Full-text available
It is suggested that some limitations of current designs for medical AI systems (be they autonomous or advisory) stem from the failure of those designs to address issues of artificial (or machine) consciousness. Consciousness would appear to play a key role in the expertise, particularly the moral expertise, of human medical agents, including, for example, autonomous weighting of options in (e.g.,) diagnosis; planning treatment; use of imaginative creativity to generate courses of action; sensorimotor flexibility and sensitivity; empathetic and morally appropriate responsiveness; and so on. Thus, it is argued, a plausible design constraint for a successful ethical machine medical or care agent is for it to at least model, if not reproduce, relevant aspects of consciousness and associated abilities. In order to provide theoretical grounding for such an enterprise we examine some key philosophical issues that concern the machine modelling of consciousness and ethics, and we show how questions relating to the first research goal are relevant to medical machine ethics. We believe that this will overcome a blanket skepticism concerning the relevance of understanding consciousness, to the design and construction of artificial ethical agents for medical or care contexts. It is thus argued that it would be prudent for designers of MME agents to reflect on issues to do with consciousness and medical (moral) expertise; to become more aware of relevant research in the field of machine consciousness; and to incorporate insights gained from these efforts into their designs.
Article
The time seems ripe to reflect on the state of AI and re-evaluate its goals and techniques. It is argued that a fundamental oversight has been made in adopting a focus on knowledge and reasoning without simultaneously emphasising understanding and meaning. Despite not being able to define the latter term precisely, we can exploit our limited knowledge of meaning to argue that most AI efforts will simply never achieve artificial intelligence of the kind that many pioneers in the field envisaged, or that most members of the public imagine when they hear this term. The only route to this goal is argued to be via learning, and objections to this approach are argued to be weak or flawed. A number of implications of learning with the specific aim of acquiring meaning and understanding capabilities are discussed.
Article
Full-text available
To build an argument for the supervening importance of agenda, I locate the digital humanities within the context of a central human predicament: the anxiety of identity stemming from the problematic relation of human to non-human, both animal and machine. I identify modelling as the fundamental activity of the digital humanities and draw a parallel between it and our developing confrontation with the not-us. I then go on to argue that the demographics of infrastructure within the digital humanities, therefore in part its emphasis, is historically due to the socially inferior role assigned to those who in the early years found para-academic employment in service to the humanities. I do not specify an agenda, rather conclude that modelling, pursued within its humane context, offers a cornucopia of agenda if only the "mind-forged manacles" of servitude's mind-set can be broken.
Article
When we think about the social roles of the computer we tend to think of a familiar list of administrative, financial, and governmental functions, and of a set of social, political, and legal problems raised by what computers do. In this essay we look at social roles the computer plays, not as a direct result of what computers do, but because of the relationships people form with them: how people think about computers, how they use computers to think and to not think about other things, how they enter deeply subjective private worlds of computation (including preferences for different styles of programming), and how computation becomes a medium in which people work through personal and political concerns that are far from any instrumental use of the computer. In sum, we shall be looking at the computer as a metaphor and as a projective medium, and suggesting that this subjective side of the computer presence is highly relevant to understanding issues concerning computation and public life. There is an extraordinary range of textures, tones, and emotional intensity in the way people relate to computers, from seeming computer addiction to confessed computerphobia. I have recently been conducting an ethnographic investigation of the relationships that people form with computers and with each other in the social worlds that grow up around the machines. In my interviews with people in very different computing environments, I have been impressed by the fact that when people talk about computers they are often using them to talk about other things as well. In the general public, a discourse about computers can carry feelings about public life: anxieties about not feeling safe in a society that is perceived as too complex, a sense of alienation from politics and public institutions. Ideas about computers can also express feelings about more private matters, even reflecting concerns about which the individual does not seem fully aware.
When we turn from the general public to the computer experts, we find similar phenomena in more developed forms. There, too, ideas about computers carry feelings about political and personal issues. But in addition, the expert enters into relationships with computers which can give concreteness and coherence to political and private concerns far removed from the world of computation. In particular, the act of programming can be an expressive activity for working through personal issues relating to control and mastery.
Article
The hardware technology for an intelligent machine is available. We see no contraindication to the construction of the software of such a machine. This paper reviews and lists the functional properties of intelligent machines as seen by many authors, and attempts to formulate them in terms of basic computational methods and a program structure. It is suggested that an interchange between brain scientists and artificial intelligence workers could be more fruitful than before. The question of the validity of comparing brains and computers remains unsettled.
Article
The test Turing proposed for machine intelligence is usually understood to be a test of whether a computer can fool a human into thinking that the computer is a human. This standard interpretation is rejected in favor of a test based on the Imitation Game introduced by Turing at the beginning of "Computing Machinery and Intelligence."
Article
Examined viewpoints concerning the relationships of man and machines, such as the problems of computers as thinking machines and the brain as an intelligent computer. Various classes of machines (e.g., prosthetics, utensils and decorative auxiliaries and complementarities, prime movers, regulators, quantizers, and substitute and replacement aids) are described that are seen as behavioral aids, with their inventions, operations, and programs credited to persons. (12 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Animals and robots perceiving and acting in a world require an ontology that accommodates entities, processes, states of affairs, etc., in their environment. If the perceived environment includes information-processing systems, the ontology should reflect that. Scientists studying such systems need an ontology that includes the first-order ontology characterising physical phenomena, the second-order ontology characterising perceivers of physical phenomena, and a (recursive) third order ontology characterising perceivers of perceivers, including introspectors. We argue that second- and third-order ontologies refer to contents of virtual machines and examine requirements for scientific investigation of combined virtual and physical machines, such as animals and robots. We show how the CogAff architecture schema, combining reactive, deliberative, and meta-management categories, provides a first draft schematic third-order ontology for describing a wide range of natural and artificial agents. Many previously proposed architectures use only a subset of CogAff, including subsumption architectures, contention-scheduling systems, architectures with ‘executive functions’ and a variety of types of ‘Omega’ architectures. Adding a multiply-connected, fast-acting ‘alarm’ mechanism within the CogAff framework accounts for several varieties of emotions. H-CogAff, a special case of CogAff, is postulated as a minimal architecture specification for a human-like system. We illustrate use of the CogAff schema in comparing H-CogAff with Clarion, a well known architecture. One implication is that reliance on concepts tied to observation and experiment can harmfully restrict explanatory theorising, since what an information processor is doing cannot, in general, be determined by using the standard observational techniques of the physical sciences or laboratory experiments. Like theoretical physics, cognitive science needs to be highly speculative to make progress.
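The abstract above describes the CogAff schema as a combination of reactive, deliberative, and meta-management layers plus a fast-acting 'alarm' mechanism. As a hedged toy illustration of that layering (not the authors' implementation; all class and method names here are hypothetical), one can sketch the control flow like this:

```python
class ToyLayeredAgent:
    """Toy sketch of a reactive / deliberative / meta-management layering
    with a fast 'alarm' interrupt, loosely inspired by the CogAff schema
    described above. Illustrative only, not a CogAff implementation."""

    def __init__(self):
        self.log = []  # meta-management keeps a record of the agent's own choices

    def alarm(self, percept):
        # Fast global interrupt: bypasses deliberation for urgent input
        return percept == "collision_imminent"

    def reactive(self, percept):
        # Direct stimulus-response mapping, no lookahead
        return {"obstacle": "turn", "clear": "forward"}.get(percept)

    def deliberative(self, percept):
        # Slower layer: considers explicit alternatives before acting
        options = ["forward", "turn", "wait"]
        return min(options, key=len)  # trivial stand-in for real planning

    def meta_manage(self, action):
        # Monitors and records the agent's own processing
        self.log.append(action)
        return action

    def step(self, percept):
        if self.alarm(percept):
            return self.meta_manage("emergency_stop")
        action = self.reactive(percept) or self.deliberative(percept)
        return self.meta_manage(action)
```

The design point the sketch makes concrete is the one in the abstract: the layers run concurrently in principle, and the alarm path must be able to pre-empt slow deliberation, while meta-management observes all of it.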
Conference Paper
The question of whether machines can think like men, the classic AI paradigm, is academic. The fact is that a new task environment is emerging, and most of the roles will be best suited to, and require the synergism derived from, man-machine interaction. Men will be required to form relationships with analogs of themselves on a level that is uncharacteristic of current man-machine interactions. Conventional models do not capture the virtual dependencies that are likely to evolve in these man-machine systems. The concept of men and machines as joint components of virtual systems is explored, social and psychological implications are examined, and a model of man-machine interaction is developed within that framework.
Article
This two part paper explores management issues raised by expert systems (ES). In the first section, a brief history of ES is presented, and the competitive potential of ES is analyzed from a business policy perspective. The second section, appearing in the next issue, discusses development and implementation issues of ES. Both sections highlight similarities and differences between expert systems and other types of information systems.
Article
This listing is intended as an introduction to the literature on Artificial Intelligence, i.e., to the literature dealing with the problem of making machines behave intelligently. We have divided this area into categories and cross-indexed the references accordingly. Large bibliographies, without some classification facility, are next to useless. This particular field is still young, but there are already many instances in which workers have wasted much time in rediscovering (for better or for worse) schemes already reported. In the last year or two this problem has become worse, and in such a situation just about any information is better than none.
Article
A comprehensive technical survey of Soviet switching theory and its applications to the logical design of digital systems reveals that, despite considerable activity (763 papers and books), the average state of the art in the U.S.S.R. is somewhat behind that in the U.S. However, there are a large number of noteworthy contributions, particularly in those aspects of the field dealing with complexity estimates of switching networks, synthesis of multiterminal circuits, the selection of logical primitives (building blocks), and certain minimization problems. This paper evaluates the Soviet position through June, 1964, compares it with that in the West, and summarizes the significant Soviet technical contributions. Recommendations are offered for initiating research in the United States in several special problem areas in switching theory.
Article
Described in this report are the results of a comprehensive technical survey of all published Soviet literature in coding theory and its applications: over 400 papers and books appearing before March 1967. The purpose of this report is to draw attention to this important collection of technical results, which are not well known in the West, and to summarize the significant contributions. Particular emphasis is placed upon those results that fill gaps in the body of knowledge about coding theory and practice as familiar to non-Soviet specialists. The most noteworthy Soviet contributions have occurred in those areas that deal with codes for the noiseless channel, codes that correct asymmetric errors, decoding for cyclic codes, random-coding bounds on the amount of computation required, and various application criteria, that is, when to use which code and how well it performs. Other important but isolated results have been reported on the construction of optimal low-rate codes, bounds on nonrandom codes, linear (continuous) coding, codes for checking arithmetic operations, properties of code polynomials, linear transformations of codes, multiple-burst-correcting codes, special synchronization codes, and certain broad generalizations of the conventional coding problem. Little or no significant work has been done on pseudorandom sequences, unit-distance codes (with one exception), the application of codes to the design of redundant computers and memories, the search for good cyclic codes, and the physical realization of sequential decoding algorithms. Section II of this report is directed to the nonspecialist; it describes the status of the field of coding theory in the Soviet Union, summarizes the major technical results, and compares these with corresponding work in the West. Section III discusses in detail, for the coding specialist, new theoretical results, details of coding procedures, and analytical tools described in the Soviet literature. A complete bibliography is included.
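For readers unfamiliar with the error-correcting codes the survey above discusses, a minimal concrete instance is the textbook Hamming (7,4) code, which encodes 4 data bits into 7 and corrects any single flipped bit. This is a generic illustration of the concept, not one of the Soviet results surveyed:

```python
def hamming74_encode(data):
    """Encode 4 data bits as a 7-bit Hamming codeword (bit positions 1..7,
    parity bits at positions 1, 2, and 4). Standard textbook construction."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(word):
    """Recompute the parity checks; the syndrome is the 1-based position
    of a single flipped bit (0 means no error detected). Returns the
    recovered 4 data bits."""
    w = word[:]
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        w[syndrome - 1] ^= 1  # flip the erroneous bit back
    return [w[2], w[4], w[5], w[6]]
```

The syndrome construction, in which each parity check covers the positions whose binary representation has a given bit set, is what makes the error position directly readable, and it is the same basic idea generalized by the cyclic and burst-correcting codes the survey covers.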