André Schmiljun
Adam Mickiewicz University | UAM · Institute of Philosophy

Doctor of Philosophy

About

21
Publications
1,766
Reads
9
Citations
Introduction
André Schmiljun, PhD in Philosophy, studied History and Philosophy at Humboldt University Berlin. In his doctoral thesis (under the supervision of Prof. Dr. Ch. Möckel and Prof. Dr. S. Dietzsch) he analysed the phenomenon of antipolitics in the work of Friedrich W. J. Schelling (1775-1854). His research interests cover Robot Ethics, German Idealism and Philosophy of Mind.
Additional affiliations
September 2017 - present
Adam Mickiewicz University
Position
  • PostDoc Position
Description
  • Since 2017, I have been working on my habilitation at Adam Mickiewicz University in Poznań (under the supervision of Prof. Dr. Ewa Nowak), concerning the possibility of moral competence in Artificial Intelligence.
Education
January 2013 - January 2015
Humboldt-Universität zu Berlin
Field of study
  • Philosophy
January 2007 - January 2013
Humboldt-Universität zu Berlin
Field of study
  • Philosophy and History

Publications

Publications (21)
Article
The discussion around artificial empathy and its ethics is not a new one. This concept can be found in classic science fiction media such as Star Trek and Blade Runner and is also explored in more recent interactive media such as the video game Detroit: Become Human. In most depictions, emotions and empathy are presented as the key to being huma...
Preprint
Full-text available
Searle defends a strong biocentric position, which he himself calls ‘Biological Naturalism’. For Searle, the mind is necessarily a biological process. By contrast, the Polish philosopher Marcin Miłkowski has recently published a very promising approach in “Explaining the Computational Mind” (2013), in which he heads in an utterly different dire...
Article
Full-text available
Philosophizing with children is a valuable integrative method that should be practiced at primary schools, especially within the education of social sciences and technology. Children are confronted with existential questions that they learn to discuss and approach from different perspectives. Additionally, this method helps to foster children’s mor...
Preprint
Full-text available
Philosophizing with children is a valuable integrative method that should be practiced at primary schools, especially within the education of social sciences and technology. Children are confronted with existential questions that they learn to discuss and approach from different perspectives. Additionally, this method helps to foster children’s mor...
Article
Full-text available
Two major strategies (the top-down and bottom-up strategies) are currently discussed in robot ethics for moral integration. I will argue that neither strategy is sufficient. Instead, I agree with Bertram F. Malle and Matthias Scheutz that robots need to be equipped with moral competence if we don’t want them to be a potential risk in society, c...
Article
Full-text available
Machines have always been a tool or technical instrument for human beings to facilitate and to accelerate processes through mechanical power. The same applies to robots nowadays, the next step in the evolution of machines. Over the course of the last few years, robot usage in society has expanded enormously, and robots now carry out a remarkable num...
Article
This special volume of Ethics in Progress addresses the issue of mind and machines. Machines have always been a tool or technical instrument for human beings to facilitate and to accelerate processes through mechanical power. The same applies to robots nowadays, the next step in the evolution of machines. They already clean our houses, mow our lawns...
Article
Two major strategies (the top-down and bottom-up strategies) are currently discussed in robot ethics for moral integration. I will argue that neither strategy is sufficient. Instead, I agree with Bertram F. Malle and Matthias Scheutz that robots need to be equipped with moral competence if we don’t want them to be a potential risk in society, c...
Article
Full-text available
The essay examines the conceptual pair of state and freedom in F. W. J. Schelling. The thesis is: although Schelling rejects the state, sharply criticising and condemning it as a "mechanism", it is, in his view, as a "second nature", the only guarantor of restoring the "lost unity", securing human freedom and en...
Article
Full-text available
With the development of autonomous robots, one day probably capable of speaking, thinking and learning, self-reflecting, sharing emotions, in fact, with the rise of robots becoming artificial moral agents (AMAs), robot scientists like Abney, Veruggio and Petersen are already optimistic that sooner or later we will need to call those robots “people” or r...
Article
Full-text available
Bertram F. Malle is one of the first scientists to combine robotics with moral competence. His theory outlines that moral competence can be understood as a system of five components: moral norms, a moral vocabulary, moral cognition, moral decision making and moral communication. Giving a brief (1) introduction to robot morality, the essay...
Article
State and Freedom in F. W. J. Schelling. In his 31st Munich lecture on the "Grundlegung der positiven Philosophie" from the winter of 1832/1833, Schelling sees the legitimation for the existence of the state in only one function: its task is to guarantee the freedom of individuals (Sandkühler 2005, 197). To this end, Schelling opens a...
Article
Full-text available
With the development of autonomous robots, one day probably capable of speaking, thinking and learning, self-reflecting, sharing emotions, in fact, with the rise of robots becoming artificial moral agents (AMAs), robot scientists like Abney, Veruggio and Petersen are already optimistic that sooner or later we will need to call those robots “people” or r...
Article
Bertram F. Malle is one of the first scientists to combine robotics with moral competence. His theory outlines that moral competence can be understood as a system of five components: moral norms, a moral vocabulary, moral cognition, moral decision making and moral communication. Giving a brief (1) introduction to robot morality, the essay...
Chapter
The question of the compatibility of mind and nature has a long tradition in the history of philosophy and has been decided sometimes in favour of one side (materialism, naturalism, physicalism), sometimes in favour of the other (idealism, phenomenalism). Particularly through neuroscientific research, the discussion has become interdis...
Book
What is antipolitics, how does antipolitics arise, and what is its relationship to parliamentary democracy? Does it perhaps even pose a danger to parliamentary democracy? The present study discusses these questions by means of a historical analysis, namely using the example of the idealist and philosopher Friedrich Wilh...
Chapter
The significance of philosophy in social work, or even a philosophy of social work, has not yet been taken up in the academic landscape. This is where the edited volume "Philosophie in der Sozialen Arbeit" comes in. In the individual contributions, readers learn much about social work, its challenges as a profession, its tas...
Thesis
Full-text available
Was Schelling a political thinker? And if so, how can his political way of thinking be classified? The answers to both questions diverge widely in the research literature. The present dissertation therefore proposes a new reading by interpreting Schelling as an antipolitician. This makes it possible to do justice to the ambivalence and breadth of Schelling's pol...

Network

Cited By

Projects

Projects (6)
Project
Moral competence is regarded as a key feature in robot ethics in order to guarantee security, “safety, acceptance and justified trust” (Malle, 2014, 2016; Malle & Scheutz, 2014, 2019; Scheutz, 2016, 2017; Scheutz, Malle, & Briggs, 2015; Bringsjord et al., 2015). Following the psychologist Georg Lind, we assume that an Artificial Intelligence (AI) proves its moral competence if it is able to solve conflicts and dilemmatic situations on the ground of moral principles through deliberation and discussion (Lind 2019, p. 20).

In the presented paper, we describe a theoretical framework to teach an AI moral competence based on Lind’s Moral Competence Test (MCT), a validated test to measure moral competence. The MCT presents two dilemma stories to the participants and reveals the “participants' moral competence in their manifest pattern of responses”. We agree with current opinions that AI will have to show its ability to cope with our norms and values by learning them as every ordinary child does (Malle, 2016; Irrgang, 2014).

We suggest providing the AI with dynamically learning safety and security components that adaptively increase or decrease over time with its C-score level in the MCT. The C-score normally varies between 0 (no moral quality) and 100 (a hundred percent moral competence). The higher the AI’s C-score turns out to be, the more safety is guaranteed, as the AI corresponds to higher moral orientations. The safety boundaries change dynamically based on system behavior and its corresponding effects on the probability of risks and hazards. Comparable approaches have been published by Adam et al. (2016).

Defining such an adaptive safety/security component requires dynamically developing methods, and the best solution is AI itself. We suggest using methods of Natural Language Processing (NLP) such as voice/text recognition to let the AI understand the test and discuss its results (Khayrallah et al., 2015; Eppe et al., 2016). After every attempt, the AI is given feedback from a human interlocutor who is in direct interaction with it. If the AI improves its score, it is taught a new moral principle or value of a higher stage of moral orientation. The AI’s goal is to learn moral values and principles that are linked with an improvement in its score. However, the adaptivity of these principles allows the safety/security boundaries to be tightened when the score drops during the AI’s life cycle for any reason.

We argue that a learning, dynamic AI is able to increase its moral competence and its safety and security functions. Additionally, we claim that our approach complies (1) with Misselhorn’s concept of moral agency (2013) as well as, in a very rudimentary sense, with (2) Korsgaard’s idea of practical identity (Korsgaard, 2009/2011, p. 42), and it allows us to presuppose some kind of (3) “access consciousness” (Block, 1995). We set aside in this paper all ambiguous, partly metaphysical debates on phenomenal consciousness, freedom of will, or understanding in AI that are lively discussed in robot ethics these days (Floridi, 2005; Taddeo & Floridi, 2005; Chella et al., 2019).
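As a purely illustrative sketch (not the project's own implementation), the adaptive safety/security component described above could be prototyped along the following lines in Python: the system's allowed autonomy level widens or tightens with its current MCT C-score, so a score drop during the life cycle automatically tightens the boundary. All names and the linear mapping are assumptions of this sketch.

```python
# Illustrative sketch only: names and the linear C-score-to-boundary
# mapping are assumptions, not part of the published project.

def safety_boundary(c_score: float,
                    min_autonomy: float = 0.1,
                    max_autonomy: float = 1.0) -> float:
    """Map an MCT C-score (0..100) to an allowed autonomy level.

    A low C-score keeps the system near min_autonomy (tight safety
    boundary); a high C-score widens it toward max_autonomy.
    """
    c = max(0.0, min(100.0, c_score)) / 100.0
    return min_autonomy + c * (max_autonomy - min_autonomy)


class AdaptiveSafetyComponent:
    """Tracks the current C-score and exposes the matching boundary.

    update() is called after each MCT run; if the score drops for any
    reason, the boundary tightens automatically.
    """

    def __init__(self, initial_c_score: float = 0.0):
        self.c_score = initial_c_score

    def update(self, new_c_score: float) -> float:
        self.c_score = new_c_score
        return safety_boundary(self.c_score)


# Example: an improved score widens the boundary, a drop tightens it.
safety = AdaptiveSafetyComponent()
print(safety.update(20.0))  # low moral competence  -> 0.28 (tight)
print(safety.update(75.0))  # higher competence     -> 0.775 (wider)
print(safety.update(40.0))  # score drops           -> 0.46 (tightens)
```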
Project
According to Bertram F. Malle, any robot becomes a “social robot” the moment it starts to “collaborate with, look after, or help humans” (Malle 2014, 189; see also Loh 2019, p. 27; Breazeal 2002; Duffy 2004, 2008; Markowitz 2015; Sequeira et al. 2019; Ayanoğlu and Sequeira 2019). As robots find more and more uses in many different areas of society, such as being employed as assistive robots in health care (Misselhorn 2019; Shibata et al. 2008; Dautenhahn 2003), or as companion robots for children (Elder 2017) and elderly people in institutions or domestic settings (Turkle et al. 2006), these machines are no longer only objects (moral patients) but turn into subjects of interaction and communication and act as moral agents. In the field of human-robot interaction (HRI), researchers have therefore analyzed human expectations of social robots in several studies concerning their design (Dautenhahn et al. 2009; Hashimoto et al. 2006; Bartneck 2002) and function (Goetz et al. 2003; Siegel et al. 2009). Additionally, social robot interactions have been studied with regard to gender (Crowell et al., 2009; Eyssel et al., 2012) and cultural aspects (Lee et al., 2012; Haring et al., 2014).

In this project, we will discuss the integration and acceptance of social robots into society based on moral competence. Our aim is to develop a study containing a questionnaire combined with a Moral Competence Test (MCT). The questionnaire will confront participants with a fictional dilemma of a social humanoid robot that replaces a human assistant in their usual dental practice. Participants will, among other tasks, have to decide whether they would still continue to visit this dentist or would change their dental practice. Participants targeted by this study will be between 20 and 60 years old and situated in Poland and Scotland.

Our research questions are as follows: Does a high stage of moral orientation correlate with acceptance of social robots that interact with us or could replace us? Do we trust social robots more than we trust human beings? Does the “uncanny valley” (MacDorman, 2006) have any effect on moral competence regarding humanoid robots? Are we able to feel empathy for social robots?

Our first hypothesis is that older participants will be more likely to reject the new artificial assistant, although a recent study by Kirsten Thommes et al. (2019) has shown that people between 55 and 64 may have an unexpectedly positive opinion about robots in medical facilities and care institutions. Secondly, we predict that participants with fewer doctor’s visits will have fewer concerns about the change within the dental practice than participants who visit the practice more frequently. Third, the same may apply to participants with a high moral competence score. We assume that these participants “respect the laws and the order of society” or at least try to avoid “disapproval by others” (Lind 2016, 53) and regard the humanoid robot as part of our modern society with equal rights and values.
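As a minimal sketch of how the first research question could eventually be tested once data are collected, the correlation between participants' MCT C-scores and their stated acceptance of the robot assistant could be computed as below. The data values, column meanings, and the 1-5 acceptance scale are hypothetical; only scipy's pearsonr is assumed.

```python
# Hypothetical analysis sketch: the data and the acceptance scale are
# invented for illustration and are not results of the project.

from scipy.stats import pearsonr

# Hypothetical pilot data: (MCT C-score 0..100, acceptance rating 1..5,
# where 5 = "would definitely keep visiting this dental practice").
c_scores =   [12.0, 25.5, 33.0, 41.0, 48.5, 57.0, 63.5, 72.0, 80.0, 88.5]
acceptance = [2,    2,    3,    2,    3,    4,    3,    4,    5,    4]

r, p_value = pearsonr(c_scores, acceptance)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
# A significant positive r would support the hypothesis that higher
# moral competence goes together with acceptance of the social robot.
```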
Project
Searle defends a strong biocentric position, which he himself calls ‘Biological Naturalism’. For Searle, the mind is necessarily a biological process. By contrast, the Polish philosopher Marcin Miłkowski has recently published a very promising approach in “Explaining the Computational Mind” (2013), in which he heads in an utterly contrary direction. According to his theory of Computational Mechanism, the mind can be realized computationally (as it already works computationally) and does not presuppose a biological basis. I will argue that although both concepts, Biological Naturalism and Computational Mechanism, start out from different points of view, they share some common ground.