Article

Précis of From Bacteria to Bach and Back: The Evolution of Minds

... Dennett rejects the Cartesian model and claims that in a naturalistic theory there is no place where consciousness happens, just as there is no one (a subject) who experiences the given states at some location in space and time (Dennett 1991; 2005; 2018a). If the final theory still contains "something" or "someone" in the brain, then we do not have a theory of consciousness, because as soon as we begin to explain the process responsible for consciousness, consciousness and the subject must disappear, since that is precisely what is to be explained. ...
... In several places Dennett urges us to change the traditional philosophical question What is consciousness? into the functional question What does consciousness do? (see Dennett 1991; 2005; 2018a). In my view this follows not only from Dennett's formulation of the Hard Question, but also from the claim that conscious experience is an effect of the brain's cognitive processes and functions. ...
... Dennett's writings on consciousness are extensive, and a single study cannot capture every aspect of his overall theory of consciousness. Most interpretations focus on presenting and analyzing the early version of Dennett's conception (Dennett 1991), so that a closer analysis of some key elements of his later works (Dennett 2005; 2018a) is missing; above all, an analysis of the Hard Question. It was only Frankish (2020) who drew closer attention to the Hard Question, in relation to illusionist positions. ...
Article
Full-text available
The main aim of the study is to analyze key features of Dennett’s naturalistic conception of conscious experience. The paper proceeds from the assumption that Dennett’s primary intention is the naturalization of consciousness through the so-called “Hard Question”: And then what happens? The structure of the text consists of two main levels of Dennett's naturalistic program: a) negative level – rejection of the Cartesian model of consciousness, b) positive level – formulation of the Multiple drafts model. I assume that examining these levels will justify the importance of the Hard Question, thus contributing to a better understanding of Dennett's naturalization of conscious experience. Key words: Consciousness, Dennett, Hard question, Naturalism, Phenomenal character
... Naturally, such a reading may not appeal to Spinoza scholars who emphasize Spinoza's appeal to the sub specie aeternitatis, which can be understood as analogous to the eternal Nagelian view from nowhere. However, something like this is similarly found in Dennett's [2017] controversial idea of free-floating rationales, which exist independently of time and an observer. As I shall show in the second part of this paper, this is a sort of rationalism in both Dennett and Spinoza, but it is of a very anti-Cartesian sort. ...
... In Daniel Dennett's [2017] From Bacteria to Bach and Back, we encounter two strange ideas, that of 'free-floating rationales' and the 'strange inversion of reasoning'. The latter, Huebner [2011] and Schliesser [2018] argue, finds its analogue in Spinoza's argument that 'final causes are but figments of the human imagination' [Spinoza 1677 (1992): 59]. ...
Article
Full-text available
This paper compares Spinoza with Daniel Dennett and uncovers a number of striking parallels. Genevieve Lloyd’s recent work on Spinoza reveals a picture of a philosopher that anticipated many of Dennett’s later ideas. Both share a fervent opposition to Descartes’ conception of mind and body and endorse a strikingly similar naturalist philosophy. It is the goal of this paper to tease out these connections and once again highlight the richness of a Spinozist lens of the world.
... These general learning mechanisms, together with some number of attentional biases, are often used to acquire all manner of culturally inherited, domain-specific packages of skills, e.g. to play chess, to drive a car, to read and write, etc. These may also include skills that allow individuals to imitate others more adeptly, thus allowing them to learn to become better and more selective social learners (Heyes 2018a; also see Dennett 2017). ...
... Many have examined aspects and implications of our ability to smoothly 'couple' our minds with our technology (Clark 2007; Palermos 2014; Carter & Palermos 2016) as well as with our physically and socially constructed environments (Hutchins 2008, 2011; Davidson & Kelly 2020). Others have explored the idea that culture was part of the design problem that selected for minds that are culturally porous, technologically extendable, and deeply socially permeable in exactly these ways (Tomasello 1999; Boyd et al 2011; Dennett 2017; Kelly & Hoburg 2017; Muthukrishna et al 2018). Some of this work focuses on material culture and technology (Jeffares 2010; Malafouris 2013; Sterelny 2018), including how artifacts might scaffold or extend specific psychological faculties like memory (Clark 2005; Heersmink 2020), mathematical cognition (Menary 2015), or economic reasoning (Clark 1997). ...
Article
Full-text available
Human behavior and thought often exhibit a familiar pattern of within group similarity and between group difference. Many of these patterns are attributed to cultural differences. For much of the history of its investigation into behavior and thought, however, cognitive science has been disproportionately focused on uncovering and explaining the more universal features of human minds, or the universal features of minds in general. This entry charts out the ways in which this has changed over recent decades. It sketches the motivation behind the cultural turn in cognitive science, and situates some of its central findings with respect to the questions that animate it and the debates that it has inspired. Woven throughout the entry are examples of how the cognitive science of culture, and especially its elevated concern with different forms of diversity and variation, continues to influence and be influenced by philosophers. One cluster of philosophical work falls within the traditional subject matter of philosophy of science, in this case of the cognitive and social sciences. Philosophers have analyzed and assessed the methods and evidence central to the scientific study of cognition and culture, and have offered conceptual scrutiny, clarification, and synthesis. Research in a second vein sees philosophers themselves contributing more directly to cognitive scientific projects, (co)constructing theories, helping build computational models, even gathering empirical data. A third kind of work is naturalistic philosophy or philosophy of nature, wherein philosophers seek to use results from the cognitive science of culture to inform or transform debates over long-standing philosophical questions, including questions about the nature of philosophy and philosophical methodology itself.
... The mathematical theory of communication, also known as information theory, was projected onto the field of biology, providing the conceptual foundation of communication for memetic theories, that is, the biological hypotheses devoted to the study of culture, or of the cultural transmission of information, notably Dawkins (1976, 1993), Dennett (1995, 2017), Brodie (1996) and Cavalli-Sforza (2000). Just as in genetics, where the gene constitutes the minimal unit of transmission of hereditary information, in the domain of memetics memes are considered units of information replicated from one brain to another via imitation, constituting the minimal unit of cultural evolution. ...
... Although all available evidence points to the (biological, psychological and moral) advantages of adhering to health protocols rather than to "denialism", one must bear in mind that the mind is not a tool with direct access to objective truths, such as those sought by scientific methodologies. According to Mercier and Sperber's (2009/2017) hypothesis, rather than having evolved for direct access to truth or rationality, human rationality most likely evolved under the pressure of strong social demands, acquiring a strong tendency to employ methods of argumentation aimed at justifying systems of beliefs and actions, in an effort to persuade other members of society to adhere to our justifications and arguments. ...
Article
Full-text available
Viral analogy models for the study of information distribution in the scope of human communication have been debated interdisciplinarily in different academic spheres, from naturalistic epistemology (DENNETT, 1995, 2017), through evolutionary biology (DAWKINS, 1976, 1993), population genetics (CAVALLI-SFORZA, 2000), mathematical modeling in evolutionary anthropology (BOYD; RICHERSON, 2005) and cultural epidemiology (SPERBER, 1985, 1994, 1996; WEISS, 2001; MORIN, 2016). In this essay, I propose the idea that just as, in a globalized world, diseases in human populations have the potential to spread on pandemic orders of magnitude (UJVARI, 2011), so the globalized flow of information, under the impact of new cognitive technologies (DASCAL, 2005), also has an ecological distribution potential exceeding epidemiological scales and reaching pandemiological levels. I will therefore seek to articulate the still embryonic notion of pandemiology (CASTIEL, 1995; ISPIR, 2020; AKERMAN; CASTIEL, 2021) and the Epidemiology of Representations (SPERBER, 1985, 1996; LERIQUE, 2017) into what I am proposing as a Pandemiology of Representations. Initially, I will introduce two well-established theories that characterize communication and show how they are directly implicated in viral models for the study of ecological information distribution. Next, I will present the epidemiology of representations in its original formulation, suggesting its expansion towards a pandemiology of representations, in order to monitor/analyze information projected beyond an ecological boundary. Finally, I will seek to typify some of the phenomena that could be more closely studied in the context of the worsening public health crisis plaguing Brazil during the COVID-19 pandemic.
... Such a debate harmonises with the very idea of postanthropocentrism (in the historical sense) or post-anthropocentrism (in the discourse sense), even though these terms are themselves contradictory constructions. One should, for example, read Dennett (2017) and Hayward (1997). Dennett (2017), in general, argues that it is only the human species that has competency with comprehension, and Hayward (1997), particularly in relation to the environment and non-human species, argues that human chauvinism and speciesism are more objectionable than anthropocentrism. ...
... One should, for example, read Dennett (2017) and Hayward (1997). Dennett (2017), in general, argues that it is only the human species that has competency with comprehension, and Hayward (1997), particularly in relation to the environment and non-human species, argues that human chauvinism and speciesism are more objectionable than anthropocentrism. That is, a meagre epistemological and ontological shift from anthropocentrism to postanthropocentrism or post-anthropocentrism does not offer solutions to the currently worsening imbalances of the ecosystem. ...
Article
This article explores ‘post(s)’ perspective understandings for 21st-century social work. Drawing mainly on post-debates, this article argues that human beings and their societies will evolve in the future in ways unimagined in all of their previous historical periods. Social work must therefore re-invent and re-adjust itself over the rest of the 21st century. Such re-invention and re-adjustment, however, will pivot around some complex theoretical narratives concerning the ‘post(s)’ contexts and conditions of the 21st century.
... Morphospaces provide us with a global picture of possible designs and how they relate to each other (whether they are distant or close) in a feature space. By making reasonable assumptions about relationships between features, we can still make some qualitative assessments about our systems of interest [176,179,180,182]. ...
... A most obvious feature of our plot is that living and artificial systems appear separated by a gap that grows bigger as systems become more complex or more socially interactive. The divide reflects a fundamental difference between biological and artificial systems: the pressure of Darwinian selection and evolution that promotes autonomy (as discussed in [185] in terms of selfishness) [182,186]. Composed replicative units are more complex and can thus support their own propagation with enhanced internal computation that enables them to predict ever more complex environments [5]. Due to evolution, this computational prowess must further protect autonomy, thus closing a reinforcing loop that necessarily pushes biological replicators towards the left wall of our morphospace. ...
Article
Full-text available
When computers started to become a dominant part of technology around the 1950s, fundamental questions about reliable designs and robustness were of great relevance. Their development gave rise to the exploration of new questions, such as what made brains reliable (since neurons can die) and how computers could take inspiration from neural systems. In parallel, the first artificial neural networks came to life. Since then, the comparative view between brains and computers has been developed in new, sometimes unexpected directions. With the rise of deep learning and the development of connectomics, an evolutionary look at how both hardware and neural complexity have evolved or been designed is required. In this paper, we argue that important similarities have resulted both from convergent evolution (the inevitable outcome of architectural constraints) and from the inspiration of hardware and software principles guided by toy pictures of neurobiology. Moreover, dissimilarities and gaps originate from major innovations that paved the way to biological computing (including brains) but are completely absent within the artificial domain. As occurs within synthetic biocomputation, we can also ask whether alternative minds can emerge from A.I. designs. Here, we take an evolutionary view of the problem and discuss the remarkable convergences between living and artificial designs and what the preconditions for achieving artificial intelligence are.
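The morphospace idea invoked in the citation contexts above is easy to make concrete: systems become points in a feature space, and qualitative closeness or distance between designs is read off from pairwise distances. The following is a minimal sketch only; the axes, the systems listed, and all scores are invented for illustration and are not taken from the paper.

```python
# Hypothetical morphospace: each system is a point in a feature space and
# design similarity is just geometric distance. All values are assumptions.
import numpy as np

# Invented axes: (computational complexity, autonomy, social interactivity)
systems = {
    "bacterium":    np.array([0.10, 0.80, 0.10]),
    "human brain":  np.array([0.90, 0.90, 0.90]),
    "deep network": np.array([0.80, 0.10, 0.20]),
    "thermostat":   np.array([0.05, 0.10, 0.00]),
}

names = list(systems)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        d = np.linalg.norm(systems[a] - systems[b])
        print(f"{a:>12} <-> {b:<12} distance = {d:.2f}")
```

On these toy numbers, the human brain and the deep network sit far apart mainly along the autonomy axis, which is the kind of living/artificial gap the citation context describes.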
... Computers will no longer be merely tools for creation, as they have been treated until now; the creative agents will now be the computers themselves. This has given rise to a new and very promising area of AI application called Computational Creativity, which has already produced very interesting results (Colton et al., 2009, 2015; López de Mántaras, 2016) in chess, music, the visual arts and narrative, among other creative activities. ...
Article
Full-text available
In recent decades, technological progress has advanced inexorably; the so-called information century has brought connectivity between people from all over the world and, as if that were not enough, we now expect direct connectivity with the objects and machines that are increasingly in everyday use. This demand drives the innovation of such things to the point where each of them can acquire experience and continuous knowledge of its working environment. These characteristics define one of the most studied and anticipated technological advances in the world over the years: artificial intelligence. However, being clear about the benefits it can bring to people's productive lives also directly requires a more detailed analysis of its possible consequences. Although artificial intelligence will bring faster production, more exact risk analyses and sounder decisions, it may also bring vulnerability to the moral and humanistic sense of things: when a machine takes the place of one or more people in a heavily populated world, the result can be a debacle of unimagined proportions in the labour community. Today, organizations seek to be at the forefront of industrial and technological progress in order to generate more profit and growth, and in recent years this progress has occurred far more rapidly than in earlier eras. From this perspective, it is important that these organizations evaluate artificial intelligence in its main approaches. One of these approaches focuses on adapting the computer so that it has an autonomous reasoning capacity as close as possible to that of a human being. In this way the machine will stop following a fixed base algorithm and will instead make decisions based on all the information gathered, generating all possible scenarios for decision-making. As a possible safeguard for this operation, I suggest that the final decision always be supervised and corroborated by a person. Clearly, we must consider many variables in this process of modernity we are living through, in which humans will no longer be the only participants in technological development; very soon we will share opinions and risk analyses ever more closely with the machines and devices created with these technologies. It is therefore important to give these matters a moral, human and/or ethical sense in order to achieve collective and balanced well-being. Keywords: Society of modernity, Ethics, Public Policies
... The intentional stance forcefully shows how quickly we ascribe intentional content to entities that do not share our cognitive architecture. All of this may suggest a deep evolutionary rationale for the usefulness of the intentional stance (Dennett, 1987, 2017; Rosenberg, 2011), but such an instrumentalist justification falls short of vindicating intentionality. We may equally interpret this strategy as undermining the efforts made by Neo-Cartesians and Neo-Behaviorists to naturalize content, instead supporting a much more eliminativist stance. ...
Article
Full-text available
Eliminativism is a position most readily associated with the eliminative materialism of the Churchlands, denying that there are such things as propositional states. This position has created much controversy, despite the fact that intentionality has long been seen as perhaps the core problem for naturalistic philosophy. There is a more radical interpretation of eliminativism, however, denying not only mental states, such as beliefs and desires, but also intentionality (i.e., aboutness) on a global level. This position traces its contemporary origin back to Quine, but has generally been assumed to undermine naturalism or, worse, to be incoherent by the majority of philosophers who maintain that there clearly are things or mental states that are about others. In a recent paper, Hutto and Satne (2015a) offer an update that tries to revive John Haugeland’s baseball analogy from The Intentionality All-Stars, his influential 1990 review paper on the state of the game, to argue that the failure of Neo-Cartesians, Neo-Behaviorists, and Neo-Pragmatists should urge us to make them work together to naturalize content and “win the game.” But Hutto and Satne misunderstand what the game is ultimately about. The goal of the Intentionality All-Stars is not to naturalize content against eliminativism but to defend a naturalist “third-person” view of the problem against first-person phenomenalists. And for this goal, a naturalist defense of global content eliminativism would equally enable them to emerge victorious. Revisiting Haugeland, I will offer my own analysis of the current state of play to argue that global content eliminativism has not received sufficient attention and deserves a more prominent place in the debate than it currently occupies.
... Examples include self-driving cars on the roads, robots performing surgical interventions in hospitals, and combat drones deployed on battlefields to destroy targets. The dominant type of artificial intelligence is responsible for the operation of systems that function without human intervention (Dennett, 2017). ...
Article
Thanks to the rapid spread of infocommunication technologies, sensors and masses of data, artificial intelligence (AI) has become one of the most important technologies of the 21st century. In this study we seek to answer the question of how much chance there is of creating a creative artificial intelligence by modelling the complex thinking of the human brain. Can we imagine a future dominated by beings that think in a human way?
... If you think that qualities or qualia (conceived as a special type of property) are the source of the explanatory gap, but also think that there are no qualities or qualia, then you are naturally led to the view that there is no serious explanatory gap at all, just the illusion of one. This is Dennett/Frankish-brand illusionism (Dennett, 2017; Frankish, 2016). Taken to extremes, this is a view on which there are no conscious experiences in any animal, human or non-human. ...
Article
Full-text available
Peter Godfrey-Smith’s Metazoa and Joseph LeDoux’s The Deep History of Ourselves present radically different big pictures regarding the nature, evolution and distribution of consciousness in animals. In this essay review, I discuss the motivations behind these big pictures and try to steer a course between them.
... For eons, humans have been creating and releasing into the world advanced intelligences, via pregnancy and birth of other humans. This, in Dennett's phrase, has been achieved until now via high levels of "competency without comprehension" [357]; however, we are now moving into a phase in which we create beings with comprehension, with rational control over their structure and cognitive capacities, which brings additional responsibility. A new ethical framework will have to be formed without reliance on folk notions such as "machine", "robot", "evolved", "designed", etc., because these categories are now seen to not be crisp natural kinds. ...
Preprint
Full-text available
Synthetic biology and bioengineering provide the opportunity to create novel embodied cognitive systems (otherwise known as minds) in a very wide variety of chimeric architectures combining evolved and designed material and software. These advances are disrupting familiar concepts in the philosophy of mind, and require new ways of thinking about and comparing truly diverse intelligences, whose composition and origin are not like any of the available natural model species. In this Perspective, I introduce TAME - Technological Approach to Mind Everywhere - a framework for understanding and manipulating cognition in unconventional substrates. TAME formalizes a non-binary (continuous), empirically-based approach to strongly embodied agency. When applied to regenerating/developmental systems, TAME suggests a perspective on morphogenesis as an example of basal cognition. The deep symmetry between problem-solving in anatomical, physiological, transcriptional, and 3D (traditional behavioral) spaces drives specific hypotheses by which cognitive capacities can scale during evolution. An important medium exploited by evolution for joining active subunits into greater agents is developmental bioelectricity, implemented by pre-neural use of ion channels and gap junctions to scale cell-level feedback loops into anatomical homeostasis. This architecture of multi-scale competency of biological systems has important implications for plasticity of bodies and minds, greatly potentiating evolvability. Considering classical and recent data from the perspectives of computational science, evolutionary biology, and basal cognition reveals a rich research program with many implications for cognitive science, evolutionary biology, regenerative medicine, and artificial intelligence.
... A common thread running through neurophenomenology, from Varela (2010) to more contemporary scholars such as Colombetti (2017) and Gallese (2019), is the organismic view of human consciousness. Humans have unprecedented cognitive powers to perceive, make sense of, and articulate through language their ideas about their universe (Dennett, 2017). However, these unique human powers have their origin in, and are scaffolded by, the neurobiology of an organism sensing and interacting with its lifeworld. ...
Chapter
Full-text available
The emerging field of neurophenomenology provides a source of fresh insights into professional practice from an embodied perspective. This chapter draws upon the lifeworld perspectives of master mariners at sea to illustrate the potential benefits of applying a neurophenomenological lens to better understand professional practice and its development. Neurophenomenology aims to integrate the fields of cognition, neurobiology and the phenomenological examination of human experience in order to advance and illuminate understandings of human consciousness. While it remains an emerging interdisciplinary field, it is supported by decades of empirical, neurobiological evidence. As such, it provides an evidence-informed approach to understanding embodied dimensions of practice. This chapter considers what neurophenomenology can bring to embodied perspectives, in professional education, and how neurophenomenology can enlighten educational practices that support professionals’ being, doing and becoming. The chapter draws on relevant examples from master mariners at sea, as well as other professional contexts, and demonstrates that neurophenomenology provides a fruitful and tantalising lens for developing insights into education and professional practice.
... In articulating our specialist research findings, we must therefore also concern ourselves with different "publics" (Pettit & Young, 2017), because the words we use do not speak for themselves (pace Dennett, 2017). ...
Article
Full-text available
What does a name mean in translation? Quine argued, famously, that the meaning of gavagai is indeterminate until you learn the language that uses that word to refer to its object. The case is similar with scientific texts, especially if they are older; historical. Because the meanings of terms can drift over time, so too can the meanings that inform experiments and theory. As can a life’s body of work and its contributions. Surely, these are also the meanings of a name; shortcuts to descriptions of the author who produced them, or of their thought (or maybe their collaborations). We are then led to wonder whether the names of scientists may also mean different things in different languages. Or even in the same language. This problem is examined here by leveraging the insights of historians of psychology who found that the meaning of “Wundt” changed in translation: his experimentalism was retained, and his Völkerpsychologie lost, so that what Wundt meant was altered even as his work—and his name—informed the disciplining of Modern Psychology as an experimental science. Those insights are then turned here into a general argument, regarding meaning-change in translation, but using a quantitative examination of the translations of Piaget’s books from French into English and German. It is therefore Piaget who has the focus here, evidentially, but the goal is broader: understanding and theorizing “the mistaken mirror” that reflects only what you can think to see (with implications for replication and institutional memory).
... Daniel Dennett has suggested that we should doubt the existence of an abstract consciousness, claiming that our choices are conditioned by physical information in our brains (Dennett, 2017). But this notion that we do not make conscious choices misses the point of consciousness. ...
Article
Full-text available
This paper describes semantic communication as an arbitrary loss function. I reject the logical approach to semantic information theory described by Carnap, Bar-Hillel and Floridi, which assumes that semantic information is a logical function of Shannon information mixed with categorical objects. Instead, I follow Hirotugu Akaike’s maximum entropy approach to model semantic communication as a choice of loss. The semantic relationship between a thing and a message about the thing is modelled as the loss of information that results in the impression contained in the message, so that the semantic meaning of a bear’s footprint is the difference between the actual bear and its footprint. Experience has a critical function in semantic meaning because a bear footprint can only be meaningful if we have some experience with an actual bear. The more direct our experience, the more vivid the footprint will appear. In this model, what is important is not the logic of the categories represented by the information but the loss of information that reduces our experiences of reality to functional communication. The hard problem of semantic communication arises because real objects and events do not come with categorical labels attached, so the choice of loss is necessarily imperfect and illogical.
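The paper's central move, modelling semantic meaning as a chosen loss of information, can be illustrated with a toy entropy calculation. In the sketch below the "bear" is a uniform source over many distinguishable states and the "footprint" preserves only a coarse partition of them; both state counts are invented assumptions, not figures from the paper.

```python
# Toy version of "meaning as information loss": the footprint is a lossy
# impression of the bear, and the loss is the entropy gap between them.
import math

def entropy_uniform(n_states: int) -> float:
    """Shannon entropy (in bits) of a uniform source with n_states outcomes."""
    return math.log2(n_states)

bear_states = 2**20      # assumed: distinguishable states of the actual bear
footprint_states = 2**6  # assumed: states a footprint can still discriminate

h_bear = entropy_uniform(bear_states)
h_footprint = entropy_uniform(footprint_states)
loss = h_bear - h_footprint  # the "semantic meaning" on this account

print(f"H(bear)      = {h_bear:.0f} bits")
print(f"H(footprint) = {h_footprint:.0f} bits")
print(f"information lost in the impression = {loss:.0f} bits")
```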
... For eons, humans have been creating and releasing into the world advanced, autonomous intelligences, via pregnancy and birth of other humans. This, in Dennett's phrase, has been achieved until now via high levels of "competency without comprehension" (Dennett, 2017); however, we are now moving into a phase in which we create beings via comprehension, with rational control over their structure and cognitive capacities, which brings additional responsibility. A new ethical framework will have to be formed without reliance on binary folk notions such as "machine," "robot," "evolved," "designed," etc., because these categories are now seen to not be crisp natural kinds. ...
Article
Full-text available
Synthetic biology and bioengineering provide the opportunity to create novel embodied cognitive systems (otherwise known as minds) in a very wide variety of chimeric architectures combining evolved and designed material and software. These advances are disrupting familiar concepts in the philosophy of mind, and require new ways of thinking about and comparing truly diverse intelligences, whose composition and origin are not like any of the available natural model species. In this Perspective, I introduce TAME—Technological Approach to Mind Everywhere—a framework for understanding and manipulating cognition in unconventional substrates. TAME formalizes a non-binary (continuous), empirically-based approach to strongly embodied agency. TAME provides a natural way to think about animal sentience as an instance of collective intelligence of cell groups, arising from dynamics that manifest in similar ways in numerous other substrates. When applied to regenerating/developmental systems, TAME suggests a perspective on morphogenesis as an example of basal cognition. The deep symmetry between problem-solving in anatomical, physiological, transcriptional, and 3D (traditional behavioral) spaces drives specific hypotheses by which cognitive capacities can increase during evolution. An important medium exploited by evolution for joining active subunits into greater agents is developmental bioelectricity, implemented by pre-neural use of ion channels and gap junctions to scale up cell-level feedback loops into anatomical homeostasis. This architecture of multi-scale competency of biological systems has important implications for plasticity of bodies and minds, greatly potentiating evolvability. Considering classical and recent data from the perspectives of computational science, evolutionary biology, and basal cognition reveals a rich research program with many implications for cognitive science, evolutionary biology, regenerative medicine, and artificial intelligence.
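One architectural claim in the abstract, that gap junctions scale cell-level feedback loops into anatomical homeostasis, can be caricatured in a few lines. The sketch below is a hedged toy model, not TAME itself: each simulated cell runs a local feedback loop toward a shared setpoint while a gap-junction-like coupling term shares state with its neighbours, and the ring of cells settles collectively. All dynamics and parameters are invented.

```python
# Toy model: per-cell feedback plus nearest-neighbour coupling on a ring,
# standing in for gap-junction communication. Parameters are assumptions.
import numpy as np

n_cells, steps = 50, 200
target = 1.0                       # shared anatomical setpoint
rng = np.random.default_rng(0)
state = rng.random(n_cells)        # each cell starts in a random state
gain, coupling = 0.1, 0.3          # local feedback gain, coupling strength

for _ in range(steps):
    local_correction = gain * (target - state)                    # cell-level loop
    neighbour_mean = (np.roll(state, 1) + np.roll(state, -1)) / 2
    state = state + local_correction + coupling * (neighbour_mean - state)

print(f"max deviation from setpoint after {steps} steps: "
      f"{np.abs(state - target).max():.4f}")
```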
... First, offering an explanation requires identifying the underlying, most often implicit, question it should answer. It has been shown that an explanation can be defined as an answer to a why-question [4,7] and that it should provide a reason that justifies what happens [8,9]. Moreover, an explanation depends on context, as it must be adapted to the specific user need [10]. ...
Conference Paper
Full-text available
Machine Learning has provided new business opportunities in the insurance industry, but its adoption is for now limited by the difficulty of explaining the rationale behind the predictions provided. In this work, we explore how we can enhance local feature importance explanations for non-expert users. We propose design principles to contextualise these explanations with additional information about the Machine Learning system, the domain and external factors that may influence the prediction. These principles are applied to a car insurance smart pricing interface. We present preliminary observations collected during a pilot study using an online A/B test to measure objective understanding, perceived understanding and perceived usefulness of explanations. The preliminary results are encouraging as they hint that providing contextualisation elements can improve the understanding of ML predictions.
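To make the design principle concrete, here is a minimal sketch of pairing each local feature-importance value with contextual information about the model, the domain, and external factors before presenting it to a non-expert user. The feature names, weights, and context strings are invented for illustration; the paper's actual interface and model are not reproduced here.

```python
# Hypothetical smart-pricing explanation: local contributions of a linear
# score (weight * feature value) annotated with contextual notes.
weights = {"driver_age": -4.0, "car_power": 2.5, "region_risk": 3.0}
customer = {"driver_age": 1.2, "car_power": 0.8, "region_risk": 1.5}

context = {
    "driver_age": "Domain: younger drivers statistically file more claims.",
    "car_power": "Model: trained on 2015-2020 claims data (assumed).",
    "region_risk": "External: hail frequency in this region rose last year.",
}

contributions = {f: weights[f] * customer[f] for f in weights}

for feat, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    direction = "raises" if contrib > 0 else "lowers"
    print(f"{feat}: {direction} the price by {abs(contrib):.1f} units. "
          f"{context[feat]}")
```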
... The differences in the image patterns that individuals would associate with a single symbol might be small when they belong to a single culture. This is one of the conditions for words to work as memes [37], where symbols used in a culture are associated with shared image patterns that guarantee reality. In other words, when the same word is used in communication between individuals from different cultures, there is no guarantee that the shared image will correspond with reality. ...
Conference Paper
Full-text available
Ideas are created in one's mind through cognitive processes after obtaining perceptual stimuli, either by hearing or reading words or by seeing images. They should have different representations depending on the origin of the information, i.e., words or images, and the cognitive processes for dealing with them. The comparison between these processes is often labeled by the terms "word and wordless thought", and there is a strong argument that favors wordless thought. The purpose of this paper is to compare the two cognitive processes for words and images by applying the state-of-the-art cognitive architecture, the Model Human Processor with Realtime Constraints (MHP/RT), proposed by Kitajima and Toyota and developed for understanding the behavioral ecology of human beings. This study shows that the perceived dimensionality of images is larger than that of words, which leads to the conclusion that the number of discriminable states for images is an order of magnitude larger than that of words; due to this, image-based processing can store information about absolute times in memory but word-based processing cannot. This should lend significantly larger expressive power to image-based processing. It is argued that the loss of reality in word-based processing has significant implications for the development of globalization and the illusion of mutual understanding in word-level communications.
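The dimensionality argument is essentially combinatorial: if each perceptual dimension supports k discriminable levels, a d-dimensional representation supports k^d distinct states, so even a modest gap in effective dimensionality produces a vast gap in discriminable states. The values of k and d below are assumptions chosen only to show the shape of the argument, not figures from the paper.

```python
# Back-of-envelope version of the discriminable-states claim. The level
# count k and the dimensionalities are illustrative assumptions.
k = 10                      # discriminable levels per dimension (assumed)
d_words, d_images = 3, 6    # hypothetical effective dimensionalities

states_words = k**d_words
states_images = k**d_images

print(f"word-based states : {states_words:,}")
print(f"image-based states: {states_images:,}")
print(f"ratio: {states_images // states_words:,}x")
```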
... accessed on 12 May 2022. 16 Daniel Dennett's abbreviation for "Very much more than ASTronomical", used to refer to "finite but almost unimaginably larger numbers than such merely astronomical quantities as the number of microseconds since the Big Bang times the number of electrons in the visible universe" [59] (ch.6 endnote 36). 17 Again this undefined "we." 18 Note that Kraut does not restrict to realistic life spans. ...
Article
Full-text available
The present article critiques standard attempts to make philosophy appear relevant to the scientific study of well-being, drawing examples in particular from works that argue for fundamental differences between different forms of wellbeing (by Besser-Jones, Kristjánsson, and Kraut, for example), and claims concerning the supposedly inherent normativity of wellbeing research (e.g., Prinzing, Alexandrova, and Nussbaum). Specifically, it is argued that philosophers in at least some relevant cases fail to apply what is often claimed to be among their core competences: conceptual rigor—not only in dealing with the psychological construct of flow, but also in relation to apparently philosophical concepts such as normativity, objectivity, or eudaimonia. Furthermore, the uncritical use of so-called thought experiments in philosophy is shown to be inappropriate for the scientific study of wellbeing. As an alternative to such philosophy-as-usual, proper attention to other philosophical traditions is argued to be promising. In particular, the philosophy of ZhuangZi (a contemporary of Aristotle and one of the most important figures in Chinese intellectual history) appears to concord well with today’s psychological knowledge, and to contain valuable ideas for the future development of positive psychology.
... The AHA modelling framework can tackle elementary computational mechanisms thought to underlie subjective experience in different animals. We argue that our common evolutionary history suggests a continuity in neural, computational and evolutionary mechanisms that underlie subjective phenomena [91,134,204]. We are therefore concerned with functionally defined concepts in the same way that founders of ethology [205] and comparative psychology [206] used various human-derived terms without anthropomorphizing them. ...
Article
To understand animal wellbeing, we need to consider subjective phenomena and sentience. This is challenging, since these properties are private and cannot be observed directly. Certain motivations, emotions and related internal states can be inferred in animals through experiments that involve choice, learning, generalization and decision-making. Yet, even though there is significant progress in elucidating the neurobiology of human consciousness, animal consciousness is still a mystery. We propose that computational animal welfare science emerges at the intersection of animal behaviour, welfare and computational cognition. By using ideas from cognitive science, we develop a functional and generic definition of subjective phenomena as any process or state of the organism that exists from the first-person perspective and cannot be isolated from the animal subject. We then outline a general cognitive architecture to model simple forms of subjective processes and sentience. This includes an evolutionary adaptation combining top-down attention modulation, predictive processing and subjective simulation by re-entrant (recursive) computations. Thereafter, we show how this approach captures major characteristics of subjective experience: elementary self-awareness, a global workspace, and qualia with unity and continuity. This provides a formal framework for process-based modelling of animal needs, subjective states, sentience and wellbeing.
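One ingredient of the proposed architecture, re-entrant computation under top-down attention, can be sketched as a loop in which a belief generates a prediction, the prediction error is weighted by an attention gain, and the corrected belief re-enters as the next prediction. This is a hedged toy sketch of that single ingredient, not the paper's architecture; the function name, gain, and values are invented.

```python
# Minimal re-entrant predictive loop: belief -> prediction -> weighted
# error -> updated belief, which re-enters the loop. Values are toys.
def reentrant_loop(observation: float, belief: float,
                   attention_gain: float = 0.5, steps: int = 20) -> float:
    for _ in range(steps):
        prediction = belief                  # belief re-enters as prediction
        error = observation - prediction     # bottom-up prediction error
        belief += attention_gain * error     # top-down weighted correction
    return belief

# With a high attention gain the belief converges quickly toward the
# observation; lowering the gain attenuates (slows) the update.
print(reentrant_loop(observation=3.0, belief=0.0))
```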
... J. Friston et al., 2021). We play out different models against each other (largely implicitly), and we let our models or hypotheses die in our stead (Dennett, 2017). We choose our actions such that they minimize the divergence (i.e., prediction errors, under some assumptions) between two probability distributions (models): One describing our expected outcomes if we were to follow a course of action, and another describing our desired future (our prior or preferred outcomes). ...
Preprint
Full-text available
More than 40 years ago, pioneering social psychologist Robert Zajonc (1980) published his seminal work titled “Preferences need no inferences”, in which he argued for the primacy of affect over cognition. Affective evaluation (the preference) comes first, he claimed, and only then do cognitive processes (the inferences) kick in. The view is untenable in light of recent predictive processing accounts of the mind, which hold that all mental functioning is built from (approximate) Bayesian inference. Predictive processing casts perception, action, and learning as inference but, perhaps counterintuitively, valuation too. We discuss how valuation—understood as the process of how we come to value, prefer or like things—emerges as a function of learning and inference, and how this conception may help us resolve traditional conundrums in the science of aesthetic experience, such as the nature of the "beholder's share", the link between curiosity and appreciation, Keats' "negative capability" and the tension between the mere exposure principle and the goldilocks (optimal level) principle.
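The citation context above describes action selection as minimizing the divergence between the outcomes expected under a course of action and a distribution encoding the desired future. A minimal sketch of that idea over a small discrete outcome space follows; the action names and all probabilities are invented assumptions.

```python
# Toy divergence-minimizing choice: pick the action whose expected-outcome
# distribution is closest (in KL divergence) to the preferred outcomes.
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    """KL divergence D(p || q) in nats; assumes strictly positive entries."""
    return float(np.sum(p * np.log(p / q)))

preferred = np.array([0.7, 0.2, 0.1])   # prior over desired outcomes

policies = {
    "stay": np.array([0.2, 0.5, 0.3]),  # expected outcomes if we stay
    "move": np.array([0.6, 0.3, 0.1]),  # expected outcomes if we move
}

for action, expected in policies.items():
    print(f"{action}: D(expected || preferred) = {kl(expected, preferred):.3f}")

best = min(policies, key=lambda a: kl(policies[a], preferred))
print(f"chosen action: {best}")
```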
... In other words, consciousness and memory formation must be intimately linked [88,89]. Memory after consciousness requires the end of consciousness and the beginning of memory, suggesting an illusion [90]. However, the sequence of pre-consciousness to post-consciousness is not an illusion when building the neurobiology of semantics that links to syntactical rules. ...
Article
Full-text available
This paper proposes biophysical principles for why geometric holonomic effects through the geometric vector potential are sentient when harmonized by the quantized magnetic vector potential in phase space. These biophysical principles are based on molecular-level electromagnetic resonances in partially holistic molecules, where nonintegrated information acts as the conduit of the consciousness process, using the informational structure of physical feelings as a transition into subjectivity. The transformation of internal energies from potential to kinetic as 'concealed' motion may measure the causal capacity required to bridge causality for conscious experience. Conformational transitions produce bond-breaking, resulting in boundary conditions and limiting the molecular wavefunction to a partially holistic molecular environment with molecular holonomic effects. The van der Waals energy increases protein conformational activity (re-arrangement of bonds), causing energy and information transfer in protein-protein interactions across the cerebral cortex through the energy transduction process. Energy transitions predetermine molecular-level electromagnetic resonances in aromatic residues of amino acids. The energy sharing between various nested molecular-level electromagnetic resonances, interacting with the intermolecular adhesion of London forces at the nexus between phospholipids and the lipophilic proteins, plays a key role in constraining the release of energy, resulting in a vast array of information-based action through negentropic entanglement. Such an information structure, passing from the objectivity of holonomic effects stemming from molecular-level electromagnetic resonances, has an inherent ambiguity, since meaning cannot be related to context, which constitutes preconscious experienceability. In the transition from potentiality to actuality, the Coulombic force is expressed as a smear of possible experiences in which carriers of evanescent meanings instantly actualize as conscious experiences through intermittent dispersion interactions and return to potentiality as preconscious experienceabilities.
... In particular, they are populational, and can still realise a form of mindless variation-introduction, in which properties of the system that are observable at the population level need not be instantiated at the individual level (Richerson and Boyd, 2005; Claidière et al., 2014). That is, even without a strict Darwinian framework, cultural evolution can still produce a 'collective brain' that outsmarts single individual brains (Muthukrishna and Henrich, 2016; Dennett, 2017). In addition, convergent transformation allows in principle for a very large suite of factors that can influence the population-level outcomes. ...
Article
Full-text available
Typical examples of cultural phenomena all exhibit a degree of similarity across time and space at the level of the population. As such, a fundamental question for any science of culture is, what ensures this stability in the first place? Here we focus on the evolutionary and stabilising role of ‘convergent transformation’, in which one item causes the production of another item whose form tends to deviate from the original in a directed, non-random way. We present a series of stochastic models of cultural evolution investigating its effects. The results show that cultural stability can emerge and be maintained by virtue of convergent transformation alone, in the absence of any form of copying or selection process. We show how high-fidelity copying and convergent transformation need not be opposing forces, and can jointly contribute to cultural stability. We finally analyse how non-random transformation and high-fidelity copying can have different evolutionary signatures at population level, and hence how their distinct effects can be distinguished in empirical records. Collectively, these results supplement existing approaches to cultural evolution based on the Darwinian analogy, while also providing formal support for other frameworks – such as Cultural Attraction Theory – that entail its further loosening. Social media summary: Culture can be produced and maintained by convergent transformation, without copying or selection involved.
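The paper's core claim, that stability can emerge from directed transformation alone, is easy to reproduce in a toy stochastic model. In the sketch below each cultural item is re-produced with noise plus a directed pull toward an attractor; there is no copying between individuals and no selection, yet the population converges and stays stable. The attractor value, bias, and noise level are arbitrary assumptions, not the paper's parameters.

```python
# Toy convergent-transformation model: no copying, no selection, only a
# noisy, directed transformation of each item. Parameters are arbitrary.
import random

attractor = 5.0          # form that productions tend to deviate toward
bias, noise = 0.2, 0.5   # strength of the pull, size of random error
population = [random.uniform(0, 10) for _ in range(1000)]

for generation in range(100):
    population = [
        x + bias * (attractor - x) + random.gauss(0, noise)  # transform only
        for x in population
    ]

mean = sum(population) / len(population)
print(f"population mean after 100 generations: {mean:.2f} "
      f"(attractor = {attractor})")
```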
... The present-day philosopher Daniel Dennett distinguished four grades of umwelt, or organismic experience (Dennett, 2017). Dennett's first two grades are instinctive 'Darwinian creatures', capable of no adaptive behaviour, and 'Skinnerian creatures', who '… adjust their behavior in reaction to "reinforcement"', with adaptive but random behaviours being reinforced. ...
... In fact, they often possess very little control over teleological entities while still partly determining them. And here it is important to distinguish between control and determination, an insight we thank Dennett (1984a, 2017) for. It is the absence of control by the field, and the partial independence that teleological entities have from the field, which tends to lead to the common belief that they simply cannot be deterministic. ...
Article
This paper argues that the account of teleology previously proposed by the authors is consistent with the physical determinism that is implicit across many of the sciences. We suggest that much of the current aversion to teleological thinking found in the sciences is rooted in debates that can be traced back to ancient natural science, which pitted mechanistic and deterministic theories against teleological ones. These debates saw a deterministic world as one where freedom and agency is impossible. And, because teleological entities seem to be free to either reach their ends or not, it was assumed that they could not be deterministic. Mayr’s modern account of teleonomy adheres to this basic assumption. Yet, the seeming tension between teleology and determinism is illusory because freedom and agency do not, in fact, conflict with a deterministic world. To show this, we present a taxonomy of different types of freedom that we see as inherent in teleological systems. Then we show that our taxonomy of freedom, which is crucial to understanding teleology, shares many of the features of a philosophical position regarding free will that is known in the contemporary literature as ‘compatibilism’. This position maintains that an agent is free when the sources of its actions are internal, when the agent itself is the deterministic cause of those actions. Our view shows that freedom is not only indispensable to teleology, but also that, contrary to common intuitions, there is no conflict between teleology and causal determinism.
... Reflected in the framework of info-computational nature, living organisms are cognitive agents, from single cells to humans (Dennett 2017; Dodig-Crnkovic and von Haugwitz 2017). Cognitive artifacts can also be seen as natural physical systems with various degrees of cognitive capacities (Dodig-Crnkovic 2014b). ...
Preprint
Full-text available
A recent comprehensive overview of 40 years of research in cognitive architectures (Kotseruba and Tsotsos 2020) evaluates modelling of the core cognitive abilities in humans, but only marginally addresses biologically plausible approaches based on natural computation. This mini review presents a set of perspectives and approaches which have shaped the development of biologically inspired computational models in the recent past and which can lead to the development of biologically more realistic cognitive architectures. For describing the continuum of natural cognitive architectures, from basal cellular to human-level cognition, we use an evolutionary info-computational framework, where natural/physical/morphological computation leads to the evolution of increasingly complex cognitive systems. Forty years ago, when the first cognitive architectures were proposed, the understanding of cognition, embodiment and evolution was different. So was the state of the art of information physics, bioinformatics, information chemistry, computational neuroscience, complexity theory, self-organization, theory of evolution, information and computation. Novel developments support a constructive interdisciplinary framework for cognitive architectures in the context of computing nature, where interactions between constituents at different levels of organization lead to complexification of agency and increased cognitive capacities. We identify several important research questions for further investigation that can increase understanding of cognition in nature and inspire new developments of cognitive technologies. Recently, basal cell cognition has attracted a lot of interest for its possible applications in medicine, new computing technologies, as well as micro- and nanorobotics.
Chapter
The author’s aim of exploring neuroscience research in relation to learning and teaching, using an eclectic phenomenological approach, is explained. This is a relatively new field in psychology. Although it is impossible to cover every aspect of the subject, it is suggested that we now have a great deal of information which should be considered. There are opportunities to achieve some understanding of neuroplasticity and to manage ourselves better as we attempt to survive the physical, social and environmental challenges of our age. Brain plasticity is introduced and stories of key discoveries in neuroscience are told. These narratives illustrate the implications of ‘firing and wiring,’ neurodiversity, neurodivergence and disability. Psychological theories about consciousness, memory, brain regeneration, useful life-extension and lifelong learning are explored. Relevant and current backup data references are included in the reference list for topics discussed in this chapter.
Chapter
Research by current educational psychologists and learning theorists is discussed. The brain has proved much more complex than traditionally understood. Psychological evidence about the development of babies and young children in the early years is considered. The psychological problems of teenagers and young adults growing up in a normative assessment culture are evidenced. There is a discussion of the discovery of neuroplasticity by psychologists, its cultural implications, neurodiversity and how behaviour can be better understood. Formal scientific practices traditionally ignored this area, creating difficulties and misunderstandings in people’s lives and confusion about learning processes. Measuring intelligence is shown to be about assessing actions, not just retrieving facts. All of this information can enable learning and be researched in teaching. Relevant and current backup data references are included in the reference list for topics discussed in this chapter.
Chapter
Fresh information gained from research updates the theory presented in the author’s previous book. Neuroscience backs up clinical research into the functions of feelings and emotions, proving they are integral to brain plasticity. They inform our thoughts by stimulating and creating awareness, meaning and inference. Emotional memory models are explained, demonstrating their importance for information and motivation, as they act independently of the stream of conscious worded dialogue we call ‘thinking.’ Neuroscience and clinical research show how feelings and ‘emotional intelligence’ processes contribute to cognition. Stimulation through the senses is discussed, along with some of the ways concept and emotion models are framed and adjusted. The implications of how proprioception can create affect, inform emotion, aid learning and promote recovery are described. Empathy is explored both as a restorative therapy and as an essential thinking strategy. Relevant and current backup data references are included in the reference list for topics discussed in this chapter.
Chapter
Full-text available
This chapter challenges the predominant assumption that humans shape technology using top-down, intelligent design, suggesting that technology should instead be viewed as the result of a Darwinian evolutionary process where humans are the agents of mutation. Consequently, we humans have much less control than we think over the outcomes of technology development.
Article
Full-text available
Realism is back. After several decades of denying there was anything beyond interpretation, thinkers in the postmodern tradition are returning to reality. A new cluster of Continental thinkers—including Maurizio Ferraris, Graham Harman, and Markus Gabriel—argue that realism was unjustly, and unwisely, aban- doned. While part of their motivation is purely philosophical, they also see realism as a defense against a crude, Nietzschean style of politics exemplified by a crop of world leaders who act as though the truth is whatever they say it is. Even in sociology, the thin, metaphysics-free theorizing of rational actor theory has been joined by “critical realism,” a metaphysically heavyweight view that accepts that things have objective natures that make them what they are, and powers that enable real causal interactions between things. تعود الواقعية، بعد عدة عقود من إنكار وجود أي شيء يتجاوز التفسير. ويرجع المفكرون في تقاليد ما بعد الحداثة إلى الواقع. تجادل مجموعة جديدة من المفكرين القاريين – بما في ذلك ماوريتسيو فيراريس، غراهام هارمان، وماركوس غابرييل- بأن الواقعية هُجرت بشكل غير عادل وغير حكيم. في حين أن جزءًا من دوافعهم فلسفية بحتة، فإنهم يرون أيضًا الواقعية كدفاع ضد نمط سياسي نيتشوي فظ يتجسد في مجموعة من قادة العالم الذين يتصرفون وكأن الحقيقة هي كل ما يقولونه. حتى في علم الاجتماع، فإن التنظير الهش، الخالي من الميتافيزيقا، لنظرية الفاعل العقلاني قد انضمت إليها “الواقعية النقدية”، وهي نظرة ميتافيزيقية ثقيلة تقبل أن الأشياء لها طبيعة موضوعية تجعلها ما هي عليه، وقوى تمكن التفاعلات السببية الحقيقية بين الأشياء. Realizm geri döndü. Yorumun ötesinde hiçbir şeyin olmadığına itikat edilen onlarca yıldan sonra Postmodern gelenekteki düşünürler realiteye (gerçekliğe) geri dönüyorlar. Aralarında Maurizio Ferraris, Graham Harman ve Markus Gabriel’in de bulunduğu kıta filozoflarından oluşan yeni bir grup, realizmin haksız yere ve pervasızca terkedildiğini iddia ediyor. Onları harekete geçiren şeylerden biri sırf felsefi olsa da realizmi, her ne olursa olsun söylemlerinin yegâne hakikat olduğunu düşünerek davranan bir dünya liderleri güruhu tarafından temsil edilen kaba, Nietzscheci politik tavra karşı bir savunma olarak görüyorlar. Sosyolojide bile, rasyonel fail teorisinin cılız, metafizikten azade teorileştirmesine, “şeylerin onları ne ise o yapan nesnel doğaları ve aralarında gerçek nedensel etkileşimleri mümkün kılan güçleri olduğunu kabul eden ağır sıklet bir metafizik görüş” olan “eleştirel gerçeklik” eşlik etti. Bu arada, Analitik felsefe başlangıcından bu yana ağırlıklı olarak realist kalmasına rağmen aynı zamanda zihnin dışında bir dünyanın bilgisini ya da herhangi bir bilgiyi talep etmekte hakikaten haklı olup olmayacağımız sorusuyla da boğuşmuştur.3 Ancak, son zamanlarda analitik muhitten düşünürler, bizim dünyaya dair fikirlerimizin bizatihi dış dünyadan radikal bir şekilde farklı olduğuna dair açıkça kusurlu ama kökleşmiş varsayımlara işaret ederek Kartezyen mirastan devşirilen yinelenen şüpheciliğe kafa tuttular. Ayrıca takipçilerine bizim bilinçli düşünceden bile önce gerçek bir dünyada vücuda gelen düşünürler olduğumuzu hatırlattılar.4 Hiç kimse hayata bir anti-realist olarak başlamaz. Gerçekten de tarihsel açıdan realizm, bizim varsayılan pozisyonumuz olarak görünüyor. 
Why, then, did realism until recently lose so much of its persuasive force?5 One answer may be that anti-realism is among the unintended consequences of the Enlightenment.6

THE UNINTENDED ENLIGHTENMENT

A certain number of Enlightenment thinkers, if not most, believed that the triumph of reason and the birth of the new science could improve humanity's prospects and material conditions.7 In this sense they held that knowledge emancipates humanity by breaking the chains that hold it back.8 Yet this transformative knowledge, put into practice through technology and medicine, undoubtedly improved human life while also bringing unimaginable social and institutional transformations in its wake. Increasingly, transportation came to seem mechanical, communication mass-scale, and human society abstract and impersonal. People lived largely cut off from nature, entering into highly specialized interactions with designed environments. The industrialization, urbanization, and other sociocultural effects that accompanied the application of Enlightenment science blurred the boundary between the natural (or real) world of much human experience and the artificial one. For some, this sense of “de-realization” began to undermine the plausibility of access to a mind-independent reality.9 Writing at the end of the nineteenth century, Friedrich Nietzsche concluded that the worldview offered by Enlightenment science left no room for objective morality. Philosophizing with a hammer, Nietzsche declared that the great transformative ideas were nothing more than the elaborated perspectives of influential thinkers and therefore had little or no claim to represent reality objectively. The idea of truth, he argued, was in fact an instrument of power used to manipulate and dominate others. The result, as the Italian philosopher Maurizio Ferraris puts it, is “the knowledge-power fallacy: the demand that every form of knowledge be counted as a kind of expression of power and regarded with suspicion.”10 This distrust of knowledge embodied in norms, institutions, and cultural forms, treated as nothing but an expression of power, was taken up and passed on by twentieth-century thinkers such as Michel Foucault, Jacques Derrida, and Richard Rorty. These “postmodern” thinkers saw what the Enlightenment philosophers had counted as humanity's liberation as a potential or actual instrument of oppression. For the more radical defenders of this position, the only way to avoid the possible abuses of any claim to knowledge of reality is to deconstruct such claims and, ultimately, to deny that any such knowledge is possible.

FROM REALISM TO REALITY

While postmodern thought can bear only so much responsibility for a politics that destabilizes the concepts of reality and truth, Vladimir Putin, Silvio Berlusconi, and Donald Trump have all profited from the collapse of a broad cultural consensus about what is plausibly true and what is “fake news”11, a collapse to which popularized postmodern skepticism contributed.12 Having witnessed Berlusconi's reckless exploitation of reality during his volatile career as media magnate and Italian prime minister, Ferraris argues that unless we accept that some things are exactly what they are, regardless of what anyone thinks about them, it is unclear how the claims of those who hold power can be contested.
He concluded that, “contrary to what most postmodern thinkers believe, and above all on the basis of the lessons of history, there are reasonable grounds for thinking that reality and truth have always served to protect the oppressed against the oppressor's tyranny.”13 If the postmodern suspicion of realism is itself now in doubt, what about that entrenched Cartesian skepticism toward reality? One answer, developed by phenomenologically inclined thinkers such as Hubert Dreyfus and Charles Taylor, is to criticize Descartes's picture of ideas as isolated in the mind and sealed off from the world they are supposed to represent. Once this “inner/outer” structure is granted, it will always be a problem how one is to get outside one's own head to compare one's ideas with the real world. Dreyfus and Taylor propose an alternative approach, on which we involuntarily take on an entire lifeworld through the very terms in which we understand the objects of everyday experience. Without presupposing this prior, unreflective, holistic framework, we could not even identify most of our thoughts or the things we talk about. Our navigation of the world proceeds largely through cognitive interaction with our physical environment that runs below the level of conscious awareness. Since we are in constant contact with this environment, we should accept that it is real. These observations about how we interpret and interact with the world are plausible. A committed Cartesian, however, will point out that the lifeworld we necessarily presuppose, though never deliberately chosen, could still be an illusion. And of course we do sometimes go wrong! Moreover, we can derive the reality of the external world from our unconscious interaction with it only if we already presuppose that this interaction is with an external world. The appearance of unconscious interaction with an external world might, in the end, be just another dimension of our deceptive inner representations, a delusion.
Article
Memes are proposed as cultural equivalents to genes, and meme-based research (memetics) has been undertaken to examine cultural aspects of management and organization studies (MOS). However, variable operationalization of the meme concept for a fragmented range of research topics has hampered the development of a coherent memetic MOS discipline. In particular, there is a largely unrecognized dilemma regarding the ontological status of the meme because it is unclear if the concept represents a real cultural gene-like entity or a gene metaphor. This paper provides a fresh view of the applications of the meme in MOS and the degree to which fundamental meme theory supports the memetic endeavour for the field. The paper aims to improve the accessibility of memetics to MOS scholars, whose interests involve cultural phenomena, by summarizing the heterogeneity in the extant research and providing the basis for the next stage of the memetic MOS research programme. A conceptualization is provided to show how applications of the meme can be made, either as a real gene-like entity or a gene metaphor. Ideas are provided for how research can be conducted that will contribute to MOS and support evaluation of the ontological status of the meme.
Conference Paper
Full-text available
It is conventional to identify the beginning of modern science with the scientific activity of Galileo Galilei. Nevertheless, as copious studies of Renaissance mathematics have shown, many of the Pisan ‘scientist's’ intuitions grew out of a lively scientific debate and the cultural milieu that marked the sixteenth century. Among the characteristics of modern science, the use of instruments to test a theory was surely one of the most important. The protagonists of the sixteenth century, however, had already gained a certain awareness of the usefulness of instruments for doing science and as good arguments in defense of their own theses. In this paper, I show how, in the controversy over the equilibrium conditions of a balance, a debate that involved the leading mathematicians of the time, Guidobaldo dal Monte, Galileo's patron, often used experiments and instruments to demonstrate indifferent equilibrium. This approach is especially evident in Le mechaniche dell‘illustriss. sig. Guido Ubaldo de‘ Marchesi del Monte: Tradotte in volgare dal sig. Filippo Pigafetta (1581), the Italian translation of Mechanicorum Liber (1577), the first printed text entirely devoted to mechanics.
Article
The question of whether cultural transmission is faithful has attracted significant debate over the last 30 years. The degree of fidelity with which an object is transmitted depends on 1) the features chosen as relevant, and 2) the quantity of detail given about those features. Once these choices have been made, an object is described at a particular grain. In the absence of conventions, between researchers and across fields, about which grain to use, transmission fidelity cannot be evaluated, because it is relative to the choice of grain. In biology, because a genotype-to-phenotype mapping exists and transmission occurs from genotype to genotype, a privileged grain of description exists that circumvents this ‘grain problem.’ In cultural evolution, by contrast, the genotype–phenotype distinction cannot be drawn, rendering claims about fidelity dependent on researchers’ choices. Thus, for lack of unified conventions, claims about transmission fidelity are difficult to evaluate.
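The grain-relativity described above can be made concrete with a small sketch. Everything in it is illustrative rather than drawn from the paper: the toy stories, the feature sets, and the use of Jaccard overlap as a fidelity measure are all assumptions. The point is only that the same retelling scores as a perfect copy at a coarse grain and as a poor copy at a fine grain.

    # Hypothetical illustration of the 'grain problem' (not the paper's code).
    def describe(story, grain):
        """Return the feature set of a story at a chosen grain of description."""
        if grain == "coarse":   # only the plot outline counts as relevant
            return {story["plot"]}
        return {story["plot"], story["hero"], story["setting"]}  # fine grain

    original = {"plot": "trickster outwits giant", "hero": "Jack", "setting": "beanstalk"}
    retold   = {"plot": "trickster outwits giant", "hero": "Hans", "setting": "mountain"}

    for grain in ("coarse", "fine"):
        a, b = describe(original, grain), describe(retold, grain)
        fidelity = len(a & b) / len(a | b)  # Jaccard overlap of feature sets
        print(grain, fidelity)              # coarse -> 1.0, fine -> 0.2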
Article
The goal of this synthetic paper is to break down the dimensions of consciousness, attempt to reverse-engineer their evolutionary function, and make sense of the origins of consciousness by breaking off those dimensions that are more likely to have arisen later. A Darwinian approach will allow us to revise the philosopher’s concept of consciousness away from a single ‘thing’, an all-or-nothing quality, and towards a concept of phenomenological complexity that arose out of simple valenced states. Finally, I will offer support for an evaluation-first view of consciousness by drawing on recent work in experimental philosophy of mind.
Chapter
This chapter is an in-depth examination of the convoluted philosophical issues that arise from the interconnection between the ideas of mind and matter. The chapter first critically examines the state of the art in the philosophy of mind, surveying an array of theories about the relationship between these two notions. While some accounts interpret this relationship reductively, whatever the specific direction of the reduction (from eliminative materialism to physicalism to panpsychism), other approaches have, very controversially, insisted on the impossibility of explaining away the so-called “hard problem” of consciousness. Still other doctrines understand the connection between mind and matter in a more systemic fashion, employing the ideas of emergence and complexity. While there is no denying that all these positions have philosophical merits to various degrees, the chapter argues that they all come with insurmountable conceptual difficulties as well. The second part of the chapter advances an alternative framework inspired by the ideas of the Spanish philosopher Gustavo Bueno that, while avoiding the limitations of the approaches considered, highlights the ways in which the notions of mind and physical matter can be accorded their proper roles without mutual reductionisms and without recourse to the very problematic idea of emergence.
Chapter
Full-text available
Some philosophers have argued that, owing to our humble evolutionary origins, some mysteries of the universe will forever remain beyond our ken. But what exactly does it mean to say that humans are ‘cognitively closed’ to some parts of the universe, or that some problems will forever remain ‘mysteries’? First, we distinguish between representational access (the ability to develop accurate scientific representations of reality) and imaginative understanding (immediate, intuitive comprehension of those representations), as well as between different modalities of cognitive limitation. Next, we look at tried-and-tested strategies for overcoming our innate cognitive limitations. In particular, we consider how metaphors and analogies can extend the reach of the human mind, by allowing us to make sense of bizarre and counterintuitive things in terms of more familiar things. Finally, we argue that this collection of mind-extension devices is combinatorial and open-ended, and that therefore pronouncements about cognitive closure and about the limits of human inquiry are premature.
Article
Full-text available
In this article I address the concept of SI (superintelligence) and the control problem associated with it. According to a certain group of artificial intelligence theorists, we stand on the threshold of an event that could radically change the nature of technological progress and of human society in general. This event is the so-called technological singularity, which is often associated with the emergence of the first greater-than-human intelligence. People such as Nick Bostrom warn of the danger that the emergence of SI poses to us and insist that we must find methods for controlling such an intelligence as soon as possible. According to Bostrom and others, the danger of SI follows from its very nature. I assess, on the one hand, how SI might come about and, on the other, whether the control problem is meaningful. Since the emergence of SI presupposes the emergence of artificial intelligence, I devote a substantial part of the text to the arguments for the latter. I show how Bostrom and others let themselves be carried away by one problematic argument, and I then subject their position to the classic critiques of artificial intelligence, showing how they still cling to the problematic assumptions of their predecessors. In particular, I direct my criticism at the claim that an SI will have a single final goal that it will interpret; I argue that this claim is antithetical to the idea that an SI would be a general intelligence. In conclusion, I infer from my argument that the control problem conflates two different problems of “control.”
Article
Full-text available
Research in core brain physiology shows that understanding with long-term memory/recall (learning) is processed in the brain through the ubiquitous behaviors of pattern generalizing, visualization, and, most importantly, making and using connections. Against this background, there are four areas of concern in the current developmental algebra curriculum: the belief that practice yields understanding, stand-alone lessons, no daily use of connections, and inappropriate or absent use of visualizations. In this paper, the author proposes replacing equation solving as the driving force of the curriculum with the daily use of function behaviors and function representations as the common theme. Function behaviors and function representations readily connect symbolic algebra concepts and skills to previously taught ones, and they promote the use of real-world contexts while integrating pattern generalizing, visualization, and making and using connections.
Article
Full-text available
This meta-study of animal semantics is anchored in two claims that together appear to create a puzzling mismatch: animal utterances generally appear simple in structure and in variation of content, while animals’ communicative understanding seems disproportionately more advanced. A set of excerpted recent studies is chosen as the basis for discussing whether the semantics of animal uttering and understanding can be fused into one. Studies were prioritized for their relatively complex designs, giving weight to the dynamics between syntax, semantics, and pragmatics, and between utterers and receivers in context. A communicational framework based on utterance theory is applied as a lens for inspecting how these aspects relate to the assumed mismatch. Inspection and discussion of the studies bring several features to the surface, five of which are stressed here. First, both syntactic structures and possible semantic content are seen as lean, although richer than earlier believed, and research continues to reveal new complexities in utterances. Second, there is a clear willingness to broaden the perception of animals’ semantic capacity to comprehend communication, both by theoretical argument and by generating empirical research in new contexts. Third, the ambition to make sense of these tendencies is still often motivated by an evolutionary search for the early building blocks of verbal language, with the pros and cons such a position carries. Fourth, the ‘allowed’ scientific frame for studying semantic capacity in animals is being extended to new fields and contexts, challenging the only-in-the-wild norm. Fifth, the dilemma of integrating uttering and understanding as aspects of what is, after all, a functional communicational system calls for new epistemological concepts to make sense of the claimed mismatch; affordances, abduction, life-genre, and lifeworld are suggested.
Preprint
Full-text available
Four issues in the remedial algebra curriculum and pedagogy are analyzed in the course of implementing core brain behaviors.
Article
Communication among living structures is certainly one of the most important parameters in biology, underlying evolution and social and complex behavior. It did not escape our attention, and our surprise, that important philosophers of science such as Daniel Dennett explicitly deny bacterial communication, while eminent molecular biophysicists present explicit theoretical models of it. Communication is a loose concept, and this may be the key source of the disagreement. In view of the fundamental importance of the problem, we designed and performed a clean, dedicated experiment, keeping the technical jargon of microbiology to a minimum, in order to seek the correct answer under a Popperian falsification test of the problem. For this purpose, we use a set of Bacillus subtilis colonies prepared as follows: in short, two independent colonies are identically prepared, but one receives false external information and the other does not. We then compare the evolution of their sporulation and, using Shannon’s concepts from information theory, conclude that some sort of bacterial communication exists in Bacillus subtilis.
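Since the abstract invokes Shannon's information theory to compare the two colonies, a minimal sketch may help show what such a measure could look like. The data below are invented, and the use of mutual information between experimental condition and sporulation outcome is my assumption about the general approach, not the authors' actual analysis; a positive value simply indicates that the outcome carries information about the condition.

    # Hedged sketch: mutual information between condition and sporulation.
    from math import log2
    from collections import Counter

    # Hypothetical per-sample observations: (condition, sporulated?).
    data = [("false_info", 1)] * 70 + [("false_info", 0)] * 30 \
         + [("no_info", 1)] * 40 + [("no_info", 0)] * 60

    n = len(data)
    joint = Counter(data)             # counts of (condition, outcome) pairs
    px = Counter(c for c, _ in data)  # marginal counts of the condition
    py = Counter(s for _, s in data)  # marginal counts of the outcome

    mi = sum((k / n) * log2((k / n) / ((px[c] / n) * (py[s] / n)))
             for (c, s), k in joint.items())
    print(f"I(condition; sporulation) = {mi:.3f} bits")  # > 0 for these numbers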
Chapter
In a world challenged by increasingly complex crises, a sound understanding of reality and high-quality learning become crucial elements for strengthening children and making societies more resilient and fit for the future. This chapter argues that outdoor learning, even given the fact that quite a few aspects of it are under-researched, can play an important role in contributing to the kind of learning the twenty-first century needs. Outdoor learning enables cumulative, fundamental fostering of learning in multiple dimensions, such as academic learning, social interaction, personal development and well-being, mental, physical, and social health, creativity, and much more. It is an add-in approach, easy to integrate into normal schooling at very low cost. It therefore should be very high on the agenda of any decision maker concerned with the future of our education systems. The chapter elaborates why the remainder of the book is a toolbox for just such decision makers in education authorities, teacher-training universities, schools, and research institutions, helping them systemically embed outdoor learning in their respective practices.
Article
The dramatic nature and irregular frequency of solar eclipses may have helped trigger the development of human curiosity. If the kind of solar eclipse we experience on Earth is rare within the Universe, human-like curiosity may also be rare.
Article
Full-text available
A basic deep neural network (DNN) is trained to exhibit a large set of input–output dispositions. While such a network is a good model of the way humans perform some tasks automatically, without deliberative reasoning, more is needed to approach human-like artificial intelligence. Analysing recent additions brings to light a distinction between two fundamentally different styles of computation: content-specific and non-content-specific computation (as first defined here). Deep episodic RL networks, for example, draw on both, as does human conceptual reasoning. Combining the two takes advantage of the complementary costs and benefits of each, and it offers a better model of human cognitive competence.
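The contrast between content-specific and non-content-specific computation can be gestured at in code. The toy below is my own illustration, not the paper's: the fixed weight matrix stands in for a trained network whose parameters are useful only for the particular dispositions they encode, while the nearest-neighbour lookup is a generic procedure that works unchanged whatever episodic contents happen to be stored (loosely in the spirit of deep episodic RL).

    # Illustrative contrast (assumed example, not from the paper).
    import numpy as np

    # Content-specific: weights encode one learned input-output mapping.
    W = np.array([[2.0, -1.0]])        # stands in for trained weights
    def trained_net(x):
        return W @ x                   # useful only for the task W was fit to

    # Non-content-specific: the same retrieval rule works for any stored contents.
    def episodic_lookup(query, keys, values):
        d = np.linalg.norm(keys - query, axis=1)  # distance to each stored key
        return values[np.argmin(d)]               # recall the nearest episode

    keys = np.array([[0.0, 0.0], [1.0, 1.0]])
    values = np.array(["rest", "act"])
    print(trained_net(np.array([1.0, 1.0])))                    # -> [1.]
    print(episodic_lookup(np.array([0.9, 1.2]), keys, values))  # -> act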