Matthew Sample’s research while affiliated with Montreal Clinical Research Institute and other places


Publications (21)


Critical Contextual Empiricism and the Politics of Knowledge
  • Article
  • Full-text available

November 2023 · 61 Reads · Teorie vědy / Theory of Science

Matthew Sample

What are philosophers doing when they prescribe a particular epistemology for science? According to science and technology studies, the answer to this question implicates both knowledge and politics, even when the latter is hidden. Exploring this dynamic via a specific case, I argue that Longino’s “critical contextual empiricism” ultimately relies on a form of political liberalism. Her choice to nevertheless foreground epistemological concerns can be clarified by considering historical relationships between science and society, as well as the culture of academic philosophy. This example, I conclude, challenges philosophers of science to consider the political ideals and accountability entailed by their prescribed knowledge practices.


Developing Ethical Guidelines for Implantable Neurotechnology: The Importance of Incorporating Stakeholder Input

February 2023 · 26 Reads · 1 Citation

Michelle Pham · Matthew Sample · [...] · Eran Klein

Recent advancements in neuroengineering research have prompted neuroethicists to propose a variety of “ethical guidance” frameworks (e.g., principles, guidelines, framing questions, responsible research innovation frameworks, and ethical priorities) to inform this work. In this chapter, we offer a comparative analysis of five recently proposed ethical guidance frameworks (NIH neuroethics guiding principles, Nuffield Council on Bioethics, Global Neuroethics Summit Delegates, the Center for Neurotechnology’s neuroethical principles and guidelines, and the Neurotechnology Ethics Taskforce’s ethical priorities). We identify some common themes among these frameworks, making explicit significant areas of convergence. We also highlight three areas of ethical consideration that have not received sufficient attention across these frameworks (extended care for research participants, cultural salience, and stakeholder input). Further attention and analysis of these three areas would improve the breadth and scope of ethical considerations for neuroengineering research.


Brain-computer interfaces, disability, and the stigma of refusal: A factorial vignette study

January 2023 · 177 Reads · 8 Citations · Public Understanding of Science

As brain-computer interfaces are promoted as assistive devices, some researchers worry that this promise to "restore" individuals worsens stigma toward disabled people and fosters unrealistic expectations. In three web-based survey experiments with vignettes, we tested how refusing a brain-computer interface in the context of disability affects cognitive (blame), emotional (anger), and behavioral (coercion) stigmatizing attitudes (Experiment 1, N = 222) and whether the effect of a refusal is affected by the level of brain-computer interface functioning (Experiment 2, N = 620) or the risk of malfunctioning (Experiment 3, N = 620). We found that refusing a brain-computer interface increased blame and anger, while the level of brain-computer interface functioning did not change the effect of a refusal. Higher risks of device malfunctioning partially reduced stigmatizing attitudes and moderated the effect of refusal. This suggests that information about disabled people who refuse a technology can increase stigma toward them. This finding has serious implications for brain-computer interface regulation, media coverage, and the prevention of ableism.
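The moderation result reported here (malfunction risk changing the effect of refusal) can be illustrated with a minimal, purely hypothetical Python sketch of a factorial vignette analysis; the variable names, cell means, and simulated responses below are illustrative assumptions, not the study's materials, data, or analysis code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sketch of a 2x2 between-subjects vignette design:
# refusal (accept vs. refuse a BCI) x malfunction risk (low vs. high).
rng = np.random.default_rng(0)
n_per_cell = 50  # illustrative number of respondents per condition

data = pd.DataFrame({
    "refusal":   np.repeat([0, 0, 1, 1], n_per_cell),   # 0 = accepts, 1 = refuses
    "high_risk": np.repeat([0, 1, 0, 1], n_per_cell),   # 0 = low risk, 1 = high risk
})

# Assumed cell means for a 1-5 blame rating, plus random noise.
cell_means = {(0, 0): 2.1, (0, 1): 1.9, (1, 0): 3.4, (1, 1): 2.7}
data["blame"] = [
    cell_means[(r, h)] for r, h in zip(data["refusal"], data["high_risk"])
] + rng.normal(0, 0.5, len(data))

# OLS with an interaction term: the refusal:high_risk coefficient captures
# whether malfunction risk moderates the effect of refusal on blame ratings.
model = smf.ols("blame ~ refusal * high_risk", data=data).fit()
print(model.summary())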


Figure and table previews: Figure 1, using a wearable BCI (image credit: Patrick Bennet) and using an implantable BCI (image credit: UPMC); images and caption provided in the web-based survey (source for both images: US media coverage of BCI research, 2019). Figure 3, worries about the domains of application of BCIs (N = 132 to 135). Figure 4, enthusiasm about the domains of application of BCIs (N = 132 to 135). Table, themes/codes and exemplary quotes of primary factors for recommendation.
Brain-Computer Interfaces, Inclusive Innovation, and the Promise of Restoration: A Mixed-Methods Study with Rehabilitation Professionals

September 2022 · 119 Reads · 7 Citations · Engaging Science Technology and Society

Over the last two decades, researchers have promised “neuroprosthetics” for use in physical rehabilitation and to treat patients with paralysis. Fulfilling this promise is not merely a technical challenge but is accompanied by consequential practical, ethical, and social implications that warrant sociological investigation and careful deliberation. In response, this paper explores how rehabilitation professionals evaluate the development and application of BCIs. It thereby also asks how BCIs come to be seen as desirable or not, and implicitly, what types of persons, rights, and responsibilities are assumed in this discourse. To this end, we conducted a web-based survey (N = 135) and follow-up interviews (N = 15) with Canadian professionals in physical therapy, occupational therapy, and speech-language pathology. We find that rehabilitation professionals, like other publics, express hope and enthusiasm regarding the use of BCIs for assistive purposes. They envision BCI devices as powerful means to reintegrate patients and disabled people into social life but also express practical and ethical reservations about the technology, positioning themselves as uniquely qualified to inform responsible BCI design and implementation. These results further illustrate the nascent “co-production” of neural technologies and social order. More immediately, they also pose a serious challenge for implementing frameworks of responsible innovation; merely prescribing more inclusive technology development may not counteract technocratic processes and widely held ableist views about the need to augment certain bodies using technology.



Science, Responsibility, and the Philosophical Imagination

April 2022 · 79 Reads · 9 Citations · Synthese

If we cannot define science using only analysis or description, then we must rely on imagination to provide us with suitable objects of philosophical inquiry. This process ties our intellectual findings to the particular ways in which we philosophers think about scientific practice and carve out a cognitive space between real-world practice and conceptual abstraction. As an example, I consider Heather Douglas’s work on the responsibilities of scientists and document her implicit ideal of science, defined primarily as an epistemic practice. I then contrast her idealization of science with an alternative: “technoscience,” a heuristic concept used to describe nanotechnology, synthetic biology, and similar “Mode 2” forms of research. This comparison reveals that one’s preferred imaginary of science, even when inspired by real practices, has significant implications for the distribution of responsibility. Douglas’s account attributes moral obligations to scientists, while the imaginaries associated with “technoscience” and “Mode 2 science” spread responsibility across the network of practice. This dynamic between mind and social order, I argue, demands an ethics of imagination in which philosophers of science hold themselves accountable for their imaginaries. Extending analogous challenges from feminist philosophy and Mills’s “‘Ideal Theory’ as Ideology,” I conclude that we ought to reflect on the idiosyncrasy of the philosophical imagination and consider how our idealizations of science, if widely held, would affect our communities and broader society.


Principles for the design of multicellular engineered living systems

March 2022 · 401 Reads · 34 Citations

Remarkable progress in bioengineering over the past two decades has enabled the formulation of fundamental design principles for a variety of medical and non-medical applications. These advancements have laid the foundation for building multicellular engineered living systems (M-CELS) from biological parts, forming functional modules integrated into living machines. These cognizant design principles for living systems encompass novel genetic circuit manipulation, self-assembly, cell–cell/matrix communication, and artificial tissues/organs enabled through systems biology, bioinformatics, computational biology, genetic engineering, and microfluidics. Here, we introduce design principles and a blueprint for forward production of robust and standardized M-CELS, which may undergo variable reiterations through the classic design-build-test-debug cycle. This Review provides practical and theoretical frameworks to forward-design, control, and optimize novel M-CELS. Potential applications include biopharmaceuticals, bioreactor factories, biofuels, environmental bioremediation, cellular computing, biohybrid digital technology, and experimental investigations into mechanisms of multicellular organisms normally hidden inside the “black box” of living cells.


Pragmatism for a Digital Society: The (In)significance of Artificial Intelligence and Neural Technology

March 2021 · 39 Reads · 3 Citations

Headlines in 2019 are inundated with claims about the “digital society,” making sweeping assertions of societal benefits and dangers caused by a range of technologies. This situation would seem an ideal motivation for ethics research, and indeed much research on this topic is published, with more every day. However, ethics researchers may feel a sense of déjà vu, as they recall decades of other heavily promoted technological platforms, from genomics and nanotechnology to machine learning. How should ethics researchers respond to the waves of rhetoric and accompanying academic and policy-oriented research? What makes the digital society significant for ethics research? In this paper, we consider two examples of digital technologies (artificial intelligence and neural technologies), showing the pattern of societal and academic resources dedicated to them. This pattern, we argue, reveals the jointly sociological and ethical character of significance attributed to emerging technologies. By attending to insights from pragmatism and science and technology studies, ethics researchers can better understand how these features of significance affect their work and adjust their methods accordingly. In short, we argue that the significance driving ethics research should be grounded in public engagement, in critical analysis of technology’s “vanguard visions,” and in a personal attitude of reflexivity.


Brain–computer interfaces and personhood: interdisciplinary deliberations on neural technology

November 2019 · 342 Reads · 40 Citations

Objective. Scientists, engineers, and healthcare professionals are currently developing a variety of new devices under the category of brain–computer interfaces (BCIs). Current and future applications are both medical/assistive (e.g. for communication) and non-medical (e.g. for gaming). This array of possibilities has been met with both enthusiasm and ethical concern in various media, with no clear resolution of these conflicting sentiments. Approach. To better understand how BCIs may either harm or help the user, and to investigate whether ethical guidance is required, a meeting entitled ‘BCIs and Personhood: A Deliberative Workshop’ was held in May 2018. Main results. We argue that the hopes and fears associated with BCIs can be productively understood in terms of personhood, specifically the impact of BCIs on what it means to be a person and to be recognized as such by others. Significance. Our findings suggest that the development of neural technologies raises important questions about the concept of personhood and its role in society. Accordingly, we propose recommendations for BCI development and governance.


Do Publics Share Experts’ Concerns about Brain–Computer Interfaces? A Trinational Survey on the Ethics of Neural Technology

October 2019 · 185 Reads · 44 Citations · Science Technology & Human Values

Since the 1960s, scientists, engineers, and healthcare professionals have developed brain–computer interface (BCI) technologies, connecting the user’s brain activity to communication or motor devices. This new technology has also captured the imagination of publics, industry, and ethicists. Academic ethics has highlighted the ethical challenges of BCIs, although these conclusions often rely on speculative or conceptual methods rather than empirical evidence or public engagement. From a social science or empirical ethics perspective, this tendency could be considered problematic and even technocratic because of its disconnect from publics. In response, our trinational survey (Germany, Canada, and Spain) reports public attitudes toward BCIs (N = 1,403) on ethical issues that were carefully derived from academic ethics literature. The results show moderately high levels of concern toward agent-related issues (e.g., changing the user’s self) and consequence-related issues (e.g., new forms of hacking). Both facets of concern were higher among respondents who reported as female or as religious, while education, age, own and peer disability, and country of residence were associated with either agent-related or consequence-related concerns. These findings provide a first look at BCI attitudes across three national contexts, suggesting that the language and content of academic BCI ethics may resonate with some publics and their values.


Citations (18)


... While this framework shares themes with a number of recent neuroethical frameworks [20], several elements distinguish the IEEE Neuroethics Framework. First, it brings together a wide array of stakeholders across many different fields, disciplines, and geographic areas, integrating technical and ethical knowledge so as to be of practical value to engineers and technologists. ...

Reference:

Applying the IEEE BRAIN neuroethics framework to intra-cortical brain-computer interfaces
Developing Ethical Guidelines for Implantable Neurotechnology: The Importance of Incorporating Stakeholder Input
  • Citing Chapter
  • February 2023

... Aside from the legal [9] and technological uncertainties of integrating HA into a defence environment, all new technologies have the potential to raise significant ethical issues. These relate to the psychological, physical, and social damage to health that employing such technologies might have on the battlefield or potentially beyond [10][11][12][13][14]. Research into the ethical aspects of human augmentation is not new [15][16][17][18][19][20], and over the past decade, several attempts to create an ethical framework for military integration of HA have emerged both in the academic [21] and government/state literature [22]. ...

Brain-computer interfaces, disability, and the stigma of refusal: A factorial vignette study

Public Understanding of Science

... While empirical research has focused on the perspectives of BCI users regarding these interactions and their ethical implications [37][38][39], studies exploring expert perspectives, such as those of developers, practitioners, and researchers, remain relatively limited. Although some have investigated developers' concerns about ethical risks related to BCIs [37,40], few have examined how experts perceive the relationship between strong and weak forms of human-machine symbiosis and the ethical issues they entail. To address this gap, this study draws on semi-structured interviews with Chinese experts in the fields of BCI and neuroscience to investigate their professional views on human-machine symbiosis and its ethical implications, with particular attention to distinctions between strong and weak symbiotic relationships. ...

Brain-Computer Interfaces, Inclusive Innovation, and the Promise of Restoration: A Mixed-Methods Study with Rehabilitation Professionals

Engaging Science Technology and Society

... An example of how the 'historical imagination' can be utilized is [5]: Hesse's revised thesis of underdetermination has been discussed by (Ariew, 1984; Laudan, 1990; Longino, 2002; Newton-Smith & Lukes, 1978). Recent critiques and developments of the 'thesis of underdetermination' include (Adeel, 2015; Bonk, 2008; Laudan, 1990). The following book reviews of Science and the Human Imagination do not make any acknowledgment of Hesse's reflections on the role of the human imagination in science; see (O'Connor, 1956; Nicholl, 1955; Johnstone, 1956; Hanson, 1958; Lenzen, 1956; Hardin, 1955). Recent explorations of the role of the imagination in science, such as (McLeish, 2019; Sample, 2022; Toon, 2016), do not refer to Hesse's view. In Hesse's biographical memoir, Jardine (2018) does acknowledge Hesse's reference to the 'creative imagination' but does not explore this concept in any detail; see (Jardine, 2018, p. 22). ...

Science, Responsibility, and the Philosophical Imagination

Synthese

... These results suggest that conservation laws and steady-state network fluxes critical for biological function are insulated from genetic perturbations. We note that this may aid in the design of synthetic organisms that recapitulate these salient metabolic features from wild nonsynthetic organisms [72][73][74][75]. Similarly, we should not be surprised if these principles are applicable to the search for Earth-like planetary atmospheres where specific steady-state fluxes and conservation laws may be critical for sustaining life. ...

Principles for the design of multicellular engineered living systems

... First, Matthew Sample and Eric Racine criticise the tendency of digital ethics to be driven by narratives that dominate public media or policy circles. 47 They argue that these narratives tend to assign significance to technologies and their impacts in ways that reflect the priorities of the groups most influential economically, politically or academically. Instead, ethicists should focus on empowering the publics that emerge in response to the impacts of new technologies, defined as groups that share similar experiences of the impacts of these technologies. ...

Pragmatism for a Digital Society: The (In)significance of Artificial Intelligence and Neural Technology
  • Citing Chapter
  • March 2021

... Some of these technologies are also used in do-it-yourself applications [8][9][10]. Neurotechnologies are currently being further developed with great hopes, including financial revenues for tech companies and opening up other fields of application [11][12][13][14][15]. However, neurotechnology also bears risks and concerns, such as changing the brain in unpredictable ways, stigmatization of users and refusers, patient autonomy, data protection, or responsibility for device failure [14,16,17]. ...

Do Publics Share Experts’ Concerns about Brain–Computer Interfaces? A Trinational Survey on the Ethics of Neural Technology

Science Technology & Human Values

... By modulating pain perception, a BCI might inadvertently affect these interconnected components, potentially altering self-awareness and identity. (53) Given the potential risks associated with stimulation-based BCIs, the next section focuses exclusively on the safer and more controllable applications of feedback-providing BCIs. While both systems are bidirectional, feedback-based BCIs do not stimulate brain activity. ...

Brain–computer interfaces and personhood: interdisciplinary deliberations on neural technology

... The specific issues that the neuroethics discourse addresses are closely intertwined with an understanding of the brain as the primary locus of human identity and agency, as well as the increasing pervasiveness of neuroscience and neurotechnology in all areas of life (Rose & Abi-Rached, 2013), often referred to as the 'neuro-turn'. Connected philosophical, psychological, and legal questions have rekindled ethical debates about neuroessentialism, i.e., whether neuroscience and neurotech indeed raise unique ethical issues (e.g., in comparison to other domains of bioethics) that require unique ethical responses (Racine & Sample, 2019). ...

Do We Need Neuroethics?
  • Citing Article
  • July 2019

AJOB Neuroscience

... Future solutions should incorporate Blockchain and other security protocols to enable safe data transmission [149][150][151]. Another critical challenge is adapting machine learning and deep learning algorithms to work with limited datasets. Research should focus on optimizing AI models, developing efficient end-to-end architectures, and leveraging synthetic data generation techniques, such as generative adversarial networks (GANs), to overcome data scarcity issues [152,153]. Material and Structural Limitations: PENG materials, such as PVDF and PZT, experience mechanical deterioration over time, which adversely impacts their long-term energy conversion efficacy. ...

Healthcare uses of artificial intelligence: Challenges and opportunities for growth
  • Citing Article
  • June 2019

Healthcare Management Forum