Article

The lamp and the lighthouse: Joseph Weizenbaum, contextualizing the critic

Abstract

The life and work of the computer scientist Joseph Weizenbaum is a testament to the ways in which the field of artificial intelligence has engendered discontent. In his articles, public talks, and most notably in his 1976 book, Computer Power and Human Reason, Weizenbaum challenged the faith in computerized solutions and argued that the proper question was not whether computers could do certain things, but if they should. As a computer scientist speaking out against computers, Weizenbaum has often been treated as something of a lone Jeremiah howling in the wilderness. However, as this article demonstrates, such a characterization fails to properly contextualize Weizenbaum. Drawing upon his correspondence with Lewis Mumford, this article argues that Weizenbaum needs to be understood as part of a community of criticism, while also exploring how he found the role of discontented critic to be a lonely and challenging one.

... It is crucial to distinguish between deciding and choosing, as planning involves a decision-making process that arises from the presence of various alternatives (Beresford & Sloper, 2008). According to Weizenbaum (1976) and Loeb (2021), "deciding" is a computational activity, something that can ultimately be programmed, but "choice" is the product of human judgment. In an emergency, deciding cannot be programmed, since there are many uncertainties; an effective decision-making procedure therefore benefits from having many choices available that have been produced through human judgment. ...
Article
Full-text available
National Emergencies set the need for quick responses and actions from all institutions that are responsible for national security and/or citizens' health. Planning for addressing national emergencies in general is characterized by uncertainty, risk, and time pressure. Furthermore, given the infrequent occurrence of emergencies, a significant number of stakeholders lack the necessary experience. Consequently, they need more time to assimilate the incoming information into their mission-planning process and finally to propose the appropriate courses of action. An antidote to the above difficulties can be the appropriate training in combination with the enhancement of the stakeholders' creativity. In order to achieve that, we propose a novel methodology of creating mind map templates, based on creativity techniques, for specific operations that can be applied to national emergencies. Our methodology, called "MEMOS", was developed according to the relative theoretical frameworks and then tested with a case study, including hypothetical security scenarios in national emergency situations. The proposed templates can be used as inspiration tools or idea tanks in order to guide and speed up the process of course of action development in mitigation planning for similar future security operations.
... The ability of these avatars to simulate humanity and convey any type of information, true or false, puts the integrity and reliability of online content at risk. In a context where fake news poses a constant threat (Knox, 2019), the ease with which realistic avatars can be generated to disseminate false information compounds existing concerns about veracity and trust in digital media (Loeb, 2021). ...
Article
Full-text available
The present experimental study addresses the impact of the type of information transmitter on the perceived credibility of technology news. Using a 3 × 1 between-subjects design, the research involved 150 university students who watched a video about an innovative medical invention. The primary variable was the news broadcaster: a human presenter, an avatar with a high degree of human realism, and an avatar with a purely fictional appearance. The evaluation focused on the credibility of the information. Through ANOVAs and post hoc tests, a clear hierarchy in perceived credibility was discovered. The data revealed statistically significant differences in credibility between the human and realistic avatar condition in favor of the human. However, no significant differences were found between the human and the fictional avatar. This suggests that the non-human appearance of an avatar does not necessarily diminish credibility vis-à-vis an actual person, although very realistic avatars may cause some rejection, translating into lower perceived credibility.
... During this period, there was a significant growth in the progress of machine learning algorithms dedicated to problem-solving and interpreting human language. The year 1964 stands out when Joseph Weizenbaum introduced the ELIZA program (Loeb, 2021), a chatbot that engaged in dialogues with humans using natural language processing (NLP) and programmed phrases. Shakey, the first mobile robot capable of reasoning about its own decisions, was created in 1966. ...
... The massive uptake of the AI application ChatGPT to generate natural-sounding text from large datasets of human language has centered this debate. For decades, a community of critics has warned that unthinking acceptance of complex technology, such as AI, risks sacrificing human needs and that data may be used unscrupulously in digital economies (Loeb 2021;Knox 2019). AI technology now pervades personal, workplace, and educational environments, even if this is not always apparent to users (Bearman and Luckin 2020;Siemens et al. 2022). ...
Article
Full-text available
In our postdigital world, unseen algorithms and artificial intelligence (AI) underpin most business and educational technologies and systems. Also, the use of educational data to better understand and support teaching and learning is growing in higher education. Other AI technologies such as synthetic media and AI-generated avatars are increasingly used to present video-based content in business and society but are less common in educational content and lectures, as their effectiveness and impact on learning are still being researched and debated. In this study, an AI-generated avatar was implemented in the redesign of business ethics material in a postgraduate course to present videos and online activities and to prompt critical reflection and discussion of the social and ethical implications of algorithms. Using a qualitative research design, we then explored students’ perceptions of teaching and learning with AI-generated avatars. The students interviewed felt AI avatars were suitable, sometimes even preferred, for lecture delivery, with some enhancements. This study contributes insights into the use of AI-generated avatars in education by examining their potential benefits and challenges and generating three key pedagogical principles to consider. Future directions for educational design and research are discussed, particularly the pressing need to engage students creatively and critically with the social and ethical implications of AI avatars.
Article
Full-text available
In 1972, ten members of the machine intelligence research community travelled to Lake Como, Italy, for a conference on the ‘social implications of machine intelligence research’. This paper explores their varied and contradictory approaches to this topic. Researchers, including John McCarthy, Donald Michie and Richard Gregory, raised ‘ethical’ questions surrounding their research and actively predicted risks of machine intelligence. At the same time, they delayed any action to mitigate these risks to an uncertain future where technical capabilities were greater. I argue that conference participants’ claims that 1972 was ‘too early’ to speculate on societal impacts of their research were disingenuous, motivated both by threats to funding and by researchers’ own politically informed speculation on the future.
Article
Investigating the debates around the regulation of AI, we observed that definitional problems lie at the heart of normative conflicts over the means of subjecting AI to "social control," whether technical, ethical, legal, or political. Taking the varied meanings of AI as the guiding thread of the analysis, this article aims to contribute to an understanding of the normative tensions over its control. We propose a mapping of the sites, actors, and approaches that shows how debates around the control of AI are structured across four distinct normative arenas: transhumanist speculation on the dangers of a superintelligence and the problem of controlling its alignment with human values; the self-accountability of researchers developing a science entirely devoted to the technical certification of machines; denunciations of the harmful effects of AI systems on fundamental rights and the control of rebalancings of power; and, finally, European regulation of the market through safety control of AI products and services.
Article
Full-text available
Recent advances in Artificial Intelligence (AI) have led to intense debates about benefits and concerns associated with this powerful technology. These concerns and debates have similarities with developments in other emerging technologies characterized by prominent impacts and uncertainties. Against this background, this paper asks, What can AI governance, policy and ethics learn from other emerging technologies to address concerns and ensure that AI develops in a socially beneficial way? From recent literature on governance, policy and ethics of emerging technologies, six lessons are derived focusing on inclusive governance with balanced and transparent involvement of government, civil society and private sector; diverse roles of the state including mitigating risks, enabling public participation and mediating diverse interests; objectives of technology development prioritizing societal benefits; international collaboration supported by science diplomacy, as well as learning from computing ethics and Responsible Innovation.
Article
This article reconfigures the history of Artificial Intelligence (AI) and its accompanying tradition of criticism by excavating the work of Mortimer Taube, a pioneer in information and library sciences, whose magnum opus, Computers and Common Sense: The Myth of Thinking Machines (1961), has been mostly forgotten. By focusing on his attack on the General Problem Solver (GPS), the second major AI program, it conveys the essence of Taube's distinctive critique. I examine his incisive analysis of the social construction of “thinking machines,” and conclude that, despite considerable changes to the underlying technology of AI, much of Taube's criticism remains relevant today. Moreover, his status as an “information processing” insider who criticized AI on behalf of the public good challenges the boundaries and focus of most critiques of AI over the past half-century. In sum, his work offers an alternative model from which contemporary AI workers and critics can learn much.
Book
Full-text available
A theoretical examination of the surprising emergence of software as a guiding metaphor for our neoliberal world. New media thrives on cycles of obsolescence and renewal: from celebrations of cyber-everything to Y2K, from the dot-com bust to the next big things—mobile mobs, Web 3.0, cloud computing. In Programmed Visions, Wendy Hui Kyong Chun argues that these cycles result in part from the ways in which new media encapsulates a logic of programmability. New media proliferates “programmed visions,” which seek to shape and predict—even embody—a future based on past data. These programmed visions have also made computers, based on metaphor, metaphors for metaphor itself, for a general logic of substitutability. Chun argues that the clarity offered by software as metaphor should make us pause, because software also engenders a profound sense of ignorance: who knows what lurks behind our smiling interfaces, behind the objects we click and manipulate? The combination of what can be seen and not seen, known (knowable) and not known—its separation of interface from algorithm and software from hardware—makes it a powerful metaphor for everything we believe is invisible yet generates visible, logical effects, from genetics to the invisible hand of the market, from ideology to culture.
Article
Full-text available
Disconnection has recently come to the forefront of public discussions as an antidote to an increasing saturation with digital technologies. Yet, experiences with disconnection are often reduced to a form of disengagement that diminishes their political impact. Disconnective practices focused on health and well-being are easily appropriated by big tech corporations, defusing their transformative potential into the very dynamics of digital capitalism. In contrast, a long tradition of critical thought, from Joseph Weizenbaum to Jaron Lanier passing through hacktivism, demonstrates that engagement with digital technologies is instrumental to develop critique and resistance against the paradoxes of digital societies. Drawing from this tradition, this article proposes the concept of ‘Disconnection-through-Engagement’ to illuminate situated practices that mobilize disconnection in order to improve critical engagement with digital technologies and platforms. Hybridity, anonymity and hacking are examined as three forms of Disconnection-through-Engagement, and a call to decommodify disconnection and recast it as a source of collective critique to digital capitalism is put forward.
Article
Full-text available
Software is usually studied in terms of the changes triggered by its operations in the material world. Yet to understand its social and cultural impact, one needs to examine also the different narratives that circulate about it. Software’s opacity, in fact, makes it prone to being translated into a plurality of narratives that help people make sense of its functioning and presence. Drawing from the case of Joseph Weizenbaum’s ELIZA, widely considered the first chatbot ever created, this article proposes a theoretical framework based on the concept of ‘biographies of media’ to illuminate the dynamics and implications of software’s discursive life. The case of ELIZA is particularly relevant in this regard because it became the centre of competing narratives, whose trajectories transcended the actual functioning of this programme and shaped key controversies about the implications of computing and artificial intelligence.
Book
Full-text available
Most users want their Twitter feed, Facebook page, and YouTube comments to be free of harassment and porn. Whether faced with “fake news” or livestreamed violence, “content moderators,” who censor or promote user-posted content, have never been more important. This is especially true when the tools that social media platforms use to curb trolling, ban hate speech, and censor pornography can also silence the speech you need to hear. In this revealing and nuanced exploration, award-winning sociologist and cultural observer Tarleton Gillespie provides an overview of current social media practices and explains the underlying rationales for how, when, and why these policies are enforced. In doing so, Gillespie highlights that content moderation receives too little public scrutiny even as it shapes social norms and creates consequences for public discourse, cultural production, and the fabric of society. Based on interviews with content moderators, creators, and consumers, this accessible, timely book is a must-read for anyone who’s ever clicked “like” or “retweet.”
Article
Full-text available
The notion of computation has changed the world more than any previous expressions of knowledge. However, as know-how in its particular algorithmic embodiment, computation is closed to meaning. Therefore, computer-based data processing can only mimic life’s creative aspects, without being creative itself. AI’s current record of accomplishments shows that it automates tasks associated with intelligence, without being intelligent itself. Mistaking the abstract (computation) for the concrete (computer) has led to the religion of “everything is an output of computation”—even the humankind that conceived the computer. The hypostatized role of computers explains the increased dependence on them. The convergence machine called deep learning is only the most recent form through which the deterministic theology of the machine claims more than what it actually is: extremely effective data processing. A proper understanding of complexity, as well as the need to distinguish between the reactive nature of the artificial and the anticipatory nature of the living are suggested as practical responses to the challenges posed by machine theology.
Article
Full-text available
Purpose – The purpose of this paper is to consider the question of equipping fully autonomous robotic weapons with the capacity to kill. Current ideas concerning the feasibility and advisability of developing and deploying such weapons, including the proposal that they be equipped with a so-called “ethical governor”, are reviewed and critiqued. The perspective adopted for this study includes software engineering practice as well as ethical and legal aspects of the use of lethal autonomous robotic weapons. Design/methodology/approach – In the paper, the author surveys and critiques the applicable literature. Findings – In the current paper, the author argues that fully autonomous robotic weapons with the capacity to kill should neither be developed nor deployed, that research directed toward equipping such weapons with a so-called “ethical governor” is immoral and serves as an “ethical smoke-screen” to legitimize research and development of these weapons and that, as an ethical duty, engineers and scientists should condemn and refuse to participate in their development. Originality/value – This is a new approach to the argument for banning autonomous lethal robotic weapons based on the classical work of Joseph Weizenbaum, Helen Nissenbaum and others.
Article
Full-text available
Introduction
I. Developing a Sense of Place
1. "The Old Order and the New"
2. Defining Regionalism
3. Community and Place
4. Organicism and Planning
II. Undertaking a Vision
5. "Regions--to Live In"
6. Regional Planning as "Exploration"
7. "Dinosaur Cities"
8. Planned Decentralization: The Road Not Taken
9. The RPNY and the "Ideology of Power"
10. Place and Polity in the "Neotechnic" Era
Conclusion: The Relevance of Ecological Regionalism
Book
What counts as knowledge in the age of big data and smart machines? Technologies of datafication renew the long modern promise of turning bodies into facts. They seek to take human intentions, emotions, and behavior and to turn these messy realities into discrete and stable truths. But in pursuing better knowledge, technology is reshaping in its image what counts as knowledge. The push for algorithmic certainty sets loose an expansive array of incomplete archives, speculative judgments, and simulated futures. Too often, data generates speculation as much as it does information. Technologies of Speculation traces this twisted symbiosis of knowledge and uncertainty in emerging state and self-surveillance technologies. It tells the story of vast dragnet systems constructed to predict the next terrorist and of how familiar forms of prejudice seep into the data by the back door. In software placeholders, such as “Mohammed Badguy,” the fantasy of pure data collides with the old specter of national purity. It shows how smart machines for ubiquitous, automated self-tracking manufacture knowledge that paradoxically lies beyond the human senses. This data is increasingly being taken up by employers, insurers, and courts of law, creating imperfect proxies through which my truth can be overruled. This book argues that as datafication transforms what counts as knowledge, it is dismantling the long-standing link between knowledge and human reason, rational publics, and free individuals. If data promises objective knowledge, then we must ask in return, Knowledge by and for whom; enabling what forms of life for the human subject?
Book
A fascinating examination of technological utopianism and its complicated consequences. In The Charisma Machine, Morgan Ames chronicles the life and legacy of the One Laptop per Child project and explains why—despite its failures—the same utopian visions that inspired OLPC still motivate other projects trying to use technology to “disrupt” education and development. Announced in 2005 by MIT Media Lab cofounder Nicholas Negroponte, One Laptop per Child promised to transform the lives of children across the Global South with a small, sturdy, and cheap laptop computer, powered by a hand crank. In reality, the project fell short in many ways—starting with the hand crank, which never materialized. Yet the project remained charismatic to many who were captivated by its claims of access to educational opportunities previously out of reach. Behind its promises, OLPC, like many technology projects that make similarly grand claims, had a fundamentally flawed vision of who the computer was made for and what role technology should play in learning. Drawing on fifty years of history and a seven-month study of a model OLPC project in Paraguay, Ames reveals that the laptops were not only frustrating to use, easy to break, and hard to repair, they were designed for “technically precocious boys”—idealized younger versions of the developers themselves—rather than the children who were actually using them. The Charisma Machine offers a cautionary tale about the allure of technology hype and the problems that result when utopian dreams drive technology development.
Book
https://www.amazon.com/gp/product/0262533480/ref=dbs_a_def_rwt_bibl_vppi_i0
Book
An account of conflicts within engineering in the 1960s that helped shape our dominant contemporary understanding of technological change as the driver of history. In the late 1960s an eclectic group of engineers joined the antiwar and civil rights activists of the time in agitating for change. The engineers were fighting to remake their profession, challenging their fellow engineers to embrace a more humane vision of technology. In Engineers for Change, Matthew Wisnioski offers an account of this conflict within engineering, linking it to deep-seated assumptions about technology and American life. The postwar period in America saw a near-utopian belief in technology's beneficence. Beginning in the mid-1960s, however, society—influenced by the antitechnology writings of such thinkers as Jacques Ellul and Lewis Mumford—began to view technology in a more negative light. Engineers themselves were seen as conformist organization men propping up the military-industrial complex. A dissident minority of engineers offered critiques of their profession that appropriated concepts from technology's critics. These dissidents were criticized in turn by conservatives who regarded them as countercultural Luddites. And yet, as Wisnioski shows, the radical minority spurred the professional elite to promote a new understanding of technology as a rapidly accelerating force that our institutions are ill-equipped to handle. The negative consequences of technology spring from its very nature—and not from engineering's failures. “Sociotechnologists” were recruited to help society adjust to its technology. Wisnioski argues that in responding to the challenges posed by critics within their profession, engineers in the 1960s helped shape our dominant contemporary understanding of technological change as the driver of history.
Book
A guide to understanding the inner workings and outer limits of technology and why we should never assume that computers always get it right. In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work. Broussard, a software developer and journalist, reminds us that there are fundamental limits to what we can (and should) do with technology. With this book, she offers a guide to understanding the inner workings and outer limits of technology—and issues a warning that we should never assume that computers always get things right. Making a case against technochauvinism—the belief that technology is always the solution—Broussard argues that it's just not true that social problems would inevitably retreat before a digitally enabled Utopia. To prove her point, she undertakes a series of adventures in computer programming. She goes for an alarming ride in a driverless car, concluding “the cyborg future is not coming any time soon”; uses artificial intelligence to investigate why students can't pass standardized tests; deploys machine learning to predict which passengers survived the Titanic disaster; and attempts to repair the U.S. campaign finance system by building AI software. If we understand the limits of what we can do with technology, Broussard tells us, we can make better choices about what we should do with it to make the world better for everyone.
Book
For the first time, this book compiles original documents from Science for the People, the most important radical science movement in U.S. history. Between 1969 and 1989, Science for the People mobilized American scientists, teachers, and students to practice a socially and economically just science, rather than one that served militarism and corporate profits. Through research, writing, protest, and organizing, members sought to demystify scientific knowledge and embolden "the people" to take science and technology into their own hands. The movement's numerous publications were crucial to the formation of science and technology studies, challenging mainstream understandings of science as "neutral" and instead showing it as inherently political. Its members, some at prominent universities, became models for politically engaged science and scholarship by using their knowledge to challenge, rather than uphold, the social, political, and economic status quo. Highlighting Science for the People's activism and intellectual interventions in a range of areas -- including militarism, race, gender, medicine, agriculture, energy, and global affairs -- this volume offers vital contributions to today's debates on science, justice, democracy, sustainability, and political power.
Book
As seen in Wired and Time A revealing look at how negative biases against women of color are embedded in search engine results and algorithms Run a Google search for “black girls”—what will you find? “Big Booty” and other sexually explicit terms are likely to come up as top search terms. But, if you type in “white girls,” the results are radically different. The suggested porn sites and un-moderated discussions about “why black women are so sassy” or “why black women are so angry” present a disturbing portrait of black womanhood in modern society. In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color. Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance—operating as a source for email, a major vehicle for primary and secondary school learning, and beyond—understanding and reversing these disquieting trends and discriminatory practices is of utmost importance. An original, surprising and, at times, disturbing account of bias on the internet, Algorithms of Oppression contributes to our understanding of how racism is created, maintained, and disseminated in the 21st century.
Book
Re-Engineering Humanity, by Brett Frischmann (Cambridge University Press).
Book
HOWARD P. SEGAL, FOR THE EDITORS. In November 1979 the Humanities Department of the University of Michigan's College of Engineering sponsored a symposium on "Technology and Pessimism." The symposium included scholars from a variety of fields and carefully balanced critics and defenders of modern technology, broadly defined. Although by this point it was hardly revolutionary to suggest that technology was no longer automatically equated with optimism and in turn with unceasing social advance, the idea of linking technology so explicitly with pessimism was bound to attract attention. Among others, John Noble Wilford, a New York Times science and technology correspondent, not only covered the symposium but also wrote about it at length in the Times the following week. As Wilford observed, "Whatever their disagreements, the participants agreed that a mood of pessimism is overtaking and may have already displaced the old optimistic view of history as a steady and cumulative expansion of human power, the idea of inevitable progress born in the Scientific and Industrial Revolutions and dominant in the 19th century and for at least the first half of this century." Such pessimism, he continued, "is fed by growing doubts about society's ability to rein in the seemingly runaway forces of technology," though the participants conceded that in many instances technology was more the symbol than the substance of the problem.
Book
For over half a century, the biologist Barry Commoner has been one of the most prominent and charismatic defenders of the American environment, appearing on the cover of Time magazine in 1970 as the standard-bearer of "the emerging science of survival." In Barry Commoner and the Science of Survival, Michael Egan examines Commoner's social and scientific activism and charts an important shift in American environmental values since World War II. Throughout his career, Commoner believed that scientists had a social responsibility, and that one of their most important obligations was to provide citizens with accessible scientific information so they could be included in public debates that concerned them. Egan shows how Commoner moved naturally from calling attention to the hazards of nuclear fallout to raising public awareness of the environmental dangers posed by the petrochemical industry. He argues that Commoner's belief in the importance of dissent, the dissemination of scientific information, and the need for citizen empowerment were critical planks in the remaking of American environmentalism. Commoner's activist career can be defined as an attempt to weave together a larger vision of social justice. Since the 1960s, he has called attention to parallels between the environmental, civil rights, labor, and peace movements, and connected environmental decline with poverty, injustice, exploitation, and war, arguing that the root cause of environmental problems was the American economic system and its manifestations. He was instrumental in pointing out that there was a direct association between socioeconomic standing and exposure to environmental pollutants and that economics, not social responsibility, was guiding technological decision making. Egan argues that careful study of Commoner's career could help reinvigorate the contemporary environmental movement at a point when the environmental stakes have never been so high.
Book
In The Second Self, Sherry Turkle looks at the computer not as a "tool," but as part of our social and psychological lives; she looks beyond how we use computer games and spreadsheets to explore how the computer affects our awareness of ourselves, of one another, and of our relationship with the world. "Technology," she writes, "catalyzes changes not only in what we do but in how we think." First published in 1984, The Second Self is still essential reading as a primer in the psychology of computation. This twentieth anniversary edition allows us to reconsider two decades of computer culture—to (re)experience what was and is most novel in our new media culture and to view our own contemporary relationship with technology with fresh eyes. Turkle frames this classic work with a new introduction, a new epilogue, and extensive notes added to the original text. Turkle talks to children, college students, engineers, AI scientists, hackers, and personal computer owners—people confronting machines that seem to think and at the same time suggest a new way for us to think—about human thought, emotion, memory, and understanding. Her interviews reveal that we experience computers as being on the border between inanimate and animate, as both an extension of the self and part of the external world. Their special place betwixt and between traditional categories is part of what makes them compelling and evocative. (In the introduction to this edition, Turkle quotes a PDA user as saying, "When my Palm crashed, it was like a death. I thought I had lost my mind.") Why we think of the workings of a machine in psychological terms—how this happens, and what it means for all of us—is the ever more timely subject of The Second Self.
Article
Much depends on knowing what limits to impose on the application of computers to human affairs and on knowing the impact of the computer on human dignity.