Chapter

Ten Guidelines for Intelligent Systems Futures: Volume 1


Abstract

Intelligent systems – those that leverage the power of Artificial Intelligence (AI) – are set to transform how we live, travel, learn, relate to each other and experience the world. This paper details outcomes of a global study, where a multi-pronged methodology was adopted to identify people’s perceptions, attitudes, thresholds and expectations of intelligent systems and to assess their perspectives toward concepts focused on bringing such systems into the home, car, and workspace. After background details grounding the study’s rationale, the paper first outlines the research approach and then summarizes key findings, including a discussion of how people’s knowledge of intelligent systems impacts their understanding of (and willingness to embrace) such systems; an overview of the domino effect of smart things; an outline of people’s concerns with, flexibility toward, and need to maintain control over intelligent systems; and a discussion of people’s preference for helper usages, as well as insights on how people view Affective Computing. Ten design guidelines informed by the study findings are outlined in the fourth section, while the last part of the paper offers concluding remarks, alongside open questions and a call to action that focuses on designers’ and developers’ moral and ethical responsibility for how intelligent systems futures are being and will be shaped.

... Patients can track much of their own diagnostic material, seek information, and make choices about treatment (Neff and Nafus, 2016). New technical intermediation will be required to facilitate these layers and shifting relationships with their accompanying uncertainties about data quality and consistency, not to mention provider responsibility, attention, and span of control (Vallo Hult et al., 2016, 2019). ...
... Bostrom, 2014; Suchman, 2017; Schneier, 2015; Zuboff, 2018 and many others), the field will require a mass of engaged researchers to make a dent in the forward momentum that technology so easily acquires. Ethnography and the work of anthropologists are ideally suited to this challenge, and their insights need amplification (Blomberg, 2018; Loi, 2019; Nafus, 2018). The 2018 IFIP 8.2 Conference (Schultze et al., 2018) visits this topic. ...
Book
Full-text available
Qualitative and Critical Research in Information Systems and Human Computer Interaction explores the history and adoption of qualitative and critical research in Information Systems (IS) and contrasts it with the growth of similar methods/theories in Human Computer Interaction (HCI) and, to a lesser extent, Computer-Supported Cooperative Work (CSCW). The supposition behind the comparison was that the areas overlap in subject matter and would overlap in methods and authors. However, marked differences were observed in the structure of publications, conferences, and on social media that led to questions about the extent to which the fields shared a common framework. The authors find that the history of each discipline reflects institutional factors that affected the respective timelines for the use of these approaches. This leads them to consider a sociological epistemic framework, which explains the differences quite well. It also supports characterizations of the culture of IS made by members, as having an open paradigm and high collegiality, described as an adhocracy. The authors propose that qualitative and critical research developed interdependently in IS. Aside from institutional factors, a further difference in uptake of methods and critical framework comes from the US/Europe divide in research traditions and the political/epistemic climates affecting research in the respective regions. Research from beyond the transatlantic traditions postdates the developments covered here but is touched on at the end of the monograph. The primary goal of Qualitative and Critical Research in Information Systems and Human Computer Interaction is to better understand the ways the IS research community differentiates itself into diverse constituencies, and how these constituencies interact in the field's complex processes of knowledge creation and dissemination. Another goal is to create cross-disciplinary discussion and build on related work in the fields.
This is important in the era of platforms with global reach, and the concurrent development of powerful AI and analytics capabilities that both intrude on daily life and try to emulate human intelligence.
Article
Full-text available
Information Systems (IS) and Human Computer Interaction (HCI) – including Computer-Supported Cooperative Work (CSCW) – address the development and adoption of computing systems by organizations, individuals, and teams. While each has its own emphasis, the timelines for adopting qualitative and critical research differ dramatically. IS adopted both in the late 1980s, but critical theory appeared in HCI only in 2000. Using a hermeneutic literature review, the paper traces these histories; it applies academic cultures theory as an explanatory framework. Institutional factors include epistemic bases of source disciplines, number and centrality of publication outlets, and political and geographic contexts. Key innovations in IS are covered in detail. The rise of platformization drives the fields toward a common scope of study with an imperative to address societal issues that emerge at scale. See http://ronininstitute.org/research-scholars/eleanor-wynn/
... Accordingly, it is important to also understand what users want and need, and to incorporate this into the development process on an equal footing alongside non-maleficence. An example of how HCAI can focus on understanding and fulfilling the needs of the user while still minimizing negative outcomes is provided by Loi (2019), who sets out ten guidelines for intelligent systems that include 'Make people feel unique and empower their unique goals', 'Do not make systems human, but capable of helping humans', and 'Prioritize usages that matter – helper usages'. Similarly, Bellet et al. (2021) used a user experience methodology when designing an AI algorithm to manage human-machine transitions with vehicle automation, including measures of utility, satisfaction, and safety. ...
Article
Full-text available
Human-centered artificial intelligence (HCAI) seeks to shift the focus in AI development from technology to people. However, it is not clear whether existing HCAI principles and practices adequately accomplish this goal. To explore whether HCAI is sufficiently focused on people, we conducted a qualitative survey of AI developers (N = 75) and users (N = 130) and performed a thematic content analysis on their responses to gain insight into their differing priorities and experiences. Through this, we were able to compare HCAI in principle (guidelines and frameworks) and practice (developer priorities) with user experiences. We found that the social impact of AI was a defining feature of positive user experiences, but this was less of a priority for developers. Furthermore, our results indicated that improving AI functionality from the perspective of the user is an important part of making it human-centered. Indeed, users were more concerned about being understood by AI than about understanding AI. In line with HCAI guidelines, developers were concerned with issues such as ethics, privacy, and security, demonstrating an ‘avoidance of harm’ perspective. However, our results suggest that an increased focus on what people need in their lives is required for HCAI to be truly human-centered.
Chapter
In smart cities, citizens’ lives and data will be increasingly intertwined with the systems used by local, state, and federal governments. Given such a context, this paper focuses on the role that automated decision(-making) systems (ADS) play—and could play—within smart cities and unpacks the challenges of stakeholder participation in determining this role. To address future scenarios and propose a provisional framework for participating in and with ADS-laden cities and government, we begin with the case of New York City’s Automated Decision Systems Task Force as a concrete example of the challenges of regulating, governing, and participating with ADS. As a single example, we explore the particularities surrounding the Task Force, including the mobility of its policy, the difficulty of defining the issue, and finally the proposed framework for overseeing ADS in New York City. We introduce two practices of participating in city-making: participatory governance and participatory design. As we explain, these practices emphasize the role of stakeholders—e.g. citizens, bureaucrats, private industry actors, and community groups—in crafting the policy and technology that they use or are impacted by. We then propose a provisional framework for ADS transparency and accountability based on these participative practices. We recommend enabling citizens to directly help regulate and design automated decision-making systems with the caveats that doing so can be messy, contextual, and frictional. In sum, this paper advocates that all communities should be empowered to understand and impact the systems that determine their destiny.
Chapter
Building on results of a recent global study as well as additional exploratory research focused on Aging in Place, this paper reflects on the role that intelligent systems and ambient computing may play in future homes and cities, with a specific emphasis on populations aged 65 and beyond. This paper is divided into five sections. The first section provides an introductory background, which outlines context, vision, and implications around the development of ambient computing and smart home technologies for the 65+ population. The second part of the paper overviews the methodological approaches adopted during the research activity at the center of this paper. The third section summarizes pertinent findings, and a discussion of the opportunities offered by intelligent, ambient systems for the 65+ population follows. While this fourth section specifically focuses on the smart home, it also provides reflections on opportunities and applications in the context of autonomous vehicles and smart cities. The fifth and last section offers concluding remarks, including implications for developers and designers that are shaping ambient computing usages and technologies for the 65+ population. The paper ultimately advocates for adopting Participatory Design [1] approaches, to ensure that intelligent and ambient technologies are developed with (instead of for) end users.
Conference Paper
We are living in a time of ecological and humanitarian crisis that requires imminent action from the joint fields of HCI and interaction design. In a very palpable way, we seem to be moving towards the "end of the world" (certainly, as we have known it). This workshop addresses three concrete end-of-world challenges - the end of nature, end of culture and end of the human - to contribute to a much-needed design research agenda and to build community in the process. The workshop will explore how the design of technology can support a fairer and more secure set of futures by considering these three end-states and what we, as participants (both contributing to futures and living with the outcomes), can offer to improve the options. Contributions to theory and practice will be welcome.
Chapter
The purpose of this chapter is to provide an introduction to the urbanity concept from a variety of perspectives and in the context of smart cities. A key element characterizing smart cities is the introduction of information and communication technologies and other pervasive and aware technologies providing a space for the exploration of new understandings of urbanity. Aware technologies in combination with aware people enable formulation of an ambient urbanities framework. The chapters that follow provide an exploration of the challenges and opportunities for elements of the ambient urbanities framework, with Chapter 2 focusing on sensing; Chapter 3 on infrastructures; Chapter 4 on culture, economies, and everything as ambient; Chapter 5 on digital literacies; Chapter 6 on smart information architectures; Chapter 7 on the need for new methodologies and theoretical spaces; Chapter 8 on innovating privacy; Chapter 9 on measuring ambient urbanities; Chapter 10, a synthesis; and Chapter 11 on ambient urbanities and beyond.
Chapter
This chapter explores awareness in relation to sensing and smartness in the city enabled through aware people and aware technologies, including the internet of things (IoT), the internet of people (IoP), and the internet of experiences (IoE). The main aim of this chapter is to shed light on where intelligence resides in the city and what constitutes and contributes to sensing and making cities smarter in relation to evolving notions of urbanity. The research literature for awareness, sensing, sensors, the IoT, the IoP, and the IoE is explored in this chapter in the context of urbanity and smart cities, enabling identification of issues, controversies, and problems. Using an exploratory case study approach, solutions and recommendations are advanced. This chapter makes a contribution to 1) research and practice across multiple domains including the IoT, the IoP, and IoE and 2) emerging thinking on human sensing and associated behaviors in smart cities.
Conference Paper
AI-based systems are shifting and will increasingly shift how we relate to content, context and each other. This extended keynote abstract discusses insights from a global study that focused on people's perceptions, attitudes, thresholds and expectations of intelligent systems as well as their perspectives on smart home, autonomous cars, and smart workspace. Insights helped create ten design guidelines to assist intelligent systems designers, technologists and decision makers.
Article
Full-text available
We show that faces contain much more information about sexual orientation than can be perceived or interpreted by the human brain. We used deep neural networks to extract features from 35,326 facial images. These features were entered into a logistic regression aimed at classifying sexual orientation. Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 71% of cases for women. Human judges achieved much lower accuracy: 61% for men and 54% for women. The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person. Facial features employed by the classifier included both fixed (e.g., nose shape) and transient facial features (e.g., grooming style). Consistent with the prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology, expression, and grooming styles. Prediction models aimed at gender alone allowed for detecting gay males with 57% accuracy and gay females with 58% accuracy. Those findings advance our understanding of the origins of sexual orientation and the limits of human perception. Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.
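The two-stage pipeline this abstract describes – fixed features extracted by a deep network, fed into a logistic regression classifier – is a standard transfer-learning pattern. The sketch below is a minimal, generic illustration of that pattern on synthetic data only: the random projection stands in for a pretrained network, and nothing here relates to the study's dataset, task, or claims.

```python
# Generic two-stage pipeline: frozen feature extractor + logistic regression.
# Entirely synthetic and hypothetical; illustrates the pattern, not the study.
import numpy as np

rng = np.random.default_rng(0)

def extract_features(inputs, W):
    """Stand-in for pretrained deep-network features: a fixed nonlinear projection."""
    return np.tanh(inputs @ W)

def train_logistic_regression(X, y, lr=0.5, steps=500):
    """Plain gradient-descent logistic regression on the extracted features."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log-loss
        b -= lr * np.mean(p - y)
    return w, b

# Two synthetic clusters of flattened "image" vectors, one per class.
n, d = 200, 32
inputs = np.vstack([rng.normal(-0.5, 1.0, (n, d)),
                    rng.normal(+0.5, 1.0, (n, d))])
labels = np.concatenate([np.zeros(n), np.ones(n)])

W = rng.normal(size=(d, 16))            # frozen "extractor" weights
X = extract_features(inputs, W)
w, b = train_logistic_regression(X, labels)
acc = np.mean(((X @ w + b) > 0) == labels)
print(f"train accuracy: {acc:.2f}")
```

The key design point the abstract implies is that only the final linear classifier is trained; the feature extractor is fixed, which keeps the learned model small and the extracted representation reusable.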
Book
Full-text available
An investigation into the assignment of moral responsibilities and rights to intelligent and autonomous machines of our own making. One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Much recent attention has been devoted to the "animal question"—consideration of the moral status of nonhuman animals. In this book, David Gunkel takes up the "machine question": whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration. The machine question poses a fundamental challenge to moral thinking, questioning the traditional philosophical conceptualization of technology as a tool or instrument to be used by human agents. Gunkel begins by addressing the question of machine moral agency: whether a machine might be considered a legitimate moral agent that could be held responsible for decisions and actions. He then approaches the machine question from the other side, considering whether a machine might be a moral patient due legitimate moral consideration. Finally, Gunkel considers some recent innovations in moral philosophy and critical theory that complicate the machine question, deconstructing the binary agent–patient opposition itself. Technological advances may prompt us to wonder if the science fiction of computers and robots whose actions affect their human companions (think of HAL in 2001: A Space Odyssey) could become science fact. Gunkel's argument promises to influence future considerations of ethics, ourselves, and the other entities who inhabit this world.
Conference Paper
A deep discussion and reflection on the implications surrounding the design, development, and deployment of what are being described as artificially intelligent systems is urgent. We propose that within this context, Participatory Design, not only has much to offer, it has a responsibility to deeply engage. This workshop brings together practitioners and researchers from diverse disciplinary backgrounds to discuss and reflect on the cultural, ethical, political, and social implications surrounding the design, development, and deployment of intelligent systems and to explore participatory design approaches, tools, and guidelines that should ground the design of intelligent systems.
Article
Seeking more common ground between data scientists and their critics.
Article
Over the last 10 years, Computer-Supported Cooperative Work (CSCW) has identified a base set of findings. These findings are taken almost as assumptions within the field. In summary, they argue that human activity is highly flexible, nuanced, and contextualized and that computational entities such as information transfer, roles, and policies need to be similarly flexible, nuanced, and contextualized. However, current systems cannot fully support the social world uncovered by these findings. This paper argues that there is an inherent gap between the social requirements of CSCW and its technical mechanisms. The social-technical gap is the divide between what we know we must support socially and what we can support technically. Exploring, understanding, and hopefully ameliorating this social-technical gap is the central challenge for CSCW as a field and one of the central problems for HCI. Indeed, merely attesting the continued centrality of this gap could be one of the important intell...
The ethics of artificial intelligence
  • N Bostrom
  • E Yudkowsky
Trolls turned Tay, Microsoft’s fun millennial AI bot, into a genocidal maniac
HRC & GLAAD call on Stanford University & responsible media to debunk dangerous & flawed report claiming to identify LGBTQ people through facial recognition technology
Design for affective intelligence
  • D Loi
  • G Raffa
  • A A Esme