An Enquiry Concerning Human Understanding
Abstract
'Commit it then to the flames: for it can contain nothing but sophistry and illusion.' Thus ends David Hume's Enquiry concerning Human Understanding, the definitive statement of the greatest philosopher in the English language. His arguments in support of reasoning from experience, and against the 'sophistry and illusion' of religiously inspired philosophical fantasies, caused controversy in the eighteenth century and are strikingly relevant today, when faith and science continue to clash. The Enquiry considers the origin and processes of human thought, reaching the stark conclusion that we can have no ultimate understanding of the physical world, or indeed our own minds. In either sphere we must depend on instinctive learning from experience, recognizing our animal nature and the limits of reason. Hume's calm and open-minded scepticism thus aims to provide a new basis for science, liberating us from the 'superstition' of false metaphysics and religion. His Enquiry remains one of the best introductions to the study of philosophy, and this edition places it in its historical and philosophical context.
... When the theories to be developed are analytical, rather than explanatory or predictive, causal reasoning is not necessary for developing the theory; nevertheless, some authors [18] use causal reasoning despite the problems associated with it. Indeed, the validity of causality has been questioned by many philosophers of science, including Hume and Russell [14,24]. The latter argues that causality can be established only in fully closed systems (such as mathematics), and empirical software engineering is not such a system. ...
... • The validity of causality has been questioned by many philosophers of science. Hume [14] argued that it is impossible to ever demonstrate that changes in one variable produce changes in another. Russell [24] argued that causality can be established unambiguously only in a completely isolated system. ...
Context: This work is part of a research project whose ultimate goal is to systematize theory building in qualitative research in the field of software engineering. The proposed methodology involves four phases: conceptualization, operationalization, testing, and application. In previous work, we performed the conceptualization of a theory that investigates the structure of IT departments and teams when software-intensive organizations adopt a culture called DevOps. Objective: This paper presents a set of procedures to systematize the operationalization phase in theory building and their application in the context of DevOps team structures. Method: We operationalize the concepts and propositions that make up our theory to generate constructs and empirically testable hypotheses. Instead of using causal relations to operationalize the propositions, we adopt logical implication, which avoids the problems associated with causal reasoning. Strategies are proposed to ensure that the resulting theory aligns with the criterion of parsimony. Results: The operationalization phase is described from three perspectives: specification, implementation, and practical application. First, the operationalization process is formally defined. Second, a set of procedures for operating both concepts and propositions is described. Finally, the usefulness of the proposed procedures is demonstrated in a case study. Conclusions: This paper is a pioneering contribution in offering comprehensive guidelines for theory operationalization using logical implication. By following established procedures and using concrete examples, researchers can better ensure the success of their theory-building efforts through careful operationalization.
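To make the contrast concrete, here is a minimal sketch (our own illustration, with hypothetical constructs C1 and C2 such as "DevOps culture adopted" and "cross-functional team structure present") of a proposition operationalized as a logical implication rather than a causal claim:

$$\forall o \in \text{Organizations}:\; C_1(o) \rightarrow C_2(o)$$

Read this way, the hypothesis asserts only that whenever $C_1$ is observed in an organization, $C_2$ is also observed; it is empirically testable (a single organization exhibiting $C_1$ but not $C_2$ falsifies it) without claiming that $C_1$ produces $C_2$.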
... These findings support the view that Black-White racial disparities in COVID-19 vaccination rates during Phase 1 of Pennsylvania's vaccination plan coincided with inequities in COVID-19 vaccine allocation [47]. Given that Black and White populations were the only significant population factors in our best-fitting regression models, we limit our discussion here to these two populations. ...
... Our results concerning inequities in vaccine allocation may understate inequities in vaccine access (though not necessarily vaccine uptake) [47]. Black Pennsylvanians may have been less able to obtain vaccines outside their own neighborhoods, given racial disparities in transportation access [50], work commute time [51], and internet access [52]. ...
Early racial disparities in COVID-19 vaccination rates have been attributed primarily to personal vaccine attitudes and behavior. Little attention has been paid to the possibility that inequitable vaccine distribution may have contributed to racial disparities in vaccine uptake when supplies were most scarce. We test the hypothesis that scarce vaccines were distributed inequitably using the shipping addresses of 385,930 COVID-19 vaccine doses distributed in the first 17 weeks of Pennsylvania’s Phase 1 rollout (December 14, 2020 through April 12, 2021). All shipments we analyze were allocated via the Federal Retail Pharmacy Program, a public-private partnership coordinated by the Centers for Disease Control and Prevention.
Overall, White people had an average of 81.4% more retail pharmacy program doses shipped to their neighborhoods than did Black people. Regression models reveal that weekly vaccine allocations determined by pharmacy chains—rather than initial shipment and administration site decisions requiring state and federal approval—drove these effects. All findings remained consistent after controlling for neighborhood differences in income, population density, insurance coverage, number of pharmacies, and other social determinants of health.
Our findings suggest that the private distribution of scarce public resources should be assessed for racial impact, regulated as public resources, and monitored continuously.
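To illustrate the kind of analysis described above, the following is a minimal sketch (not the authors' code; the file name, column names, and the simple OLS specification are assumptions made for illustration) of regressing neighborhood-level dose allocation on racial composition while controlling for the covariates named in the abstract:

```python
# Hypothetical sketch: neighborhood-level regression of vaccine doses on
# racial composition with the controls listed in the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("neighborhood_doses.csv")  # hypothetical data file

model = smf.ols(
    "doses_per_capita ~ pct_black + median_income + pop_density"
    " + pct_insured + n_pharmacies",
    data=df,
).fit()

# A significant negative coefficient on pct_black, robust to the controls,
# would be the pattern the abstract describes.
print(model.summary())
```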
... Conversely, given that the laws of nature are inviolable, the occurrence of an anomalous event warrants the conclusion that an external intervention has taken place. This invalidates Hume's contention (Hume 1999: essay X) that an understanding of the laws of nature precludes belief in miracles. ...
The objective of this contribution is to present a counterargument to the view that religious faith is inherently incompatible with reason due to its lack of scientific evidence. To this end, it will draw upon the insights of Immanuel Kant's Critique of Pure Reason. It will be demonstrated that the distinction between the meta-empirical sphere and that of the scientific understanding does not imply the irrationality of the former for Kant. This is because, while reason encompasses understanding, it is not constrained by it. On the contrary, defining the boundaries of the sphere accessible to scientific investigation implies recognising the space outside those boundaries, the definition of which is made possible by the operational instrument of noumenon. While this result does not contradict scientific reason, it does allow the boundaries of the two spheres to be defined in a non-conflicting way and implies that the metaempirical sphere is a legitimate area of endeavor. The result of these considerations is to demonstrate that any stance which, in the name of a misconceived scientificity, denies in principle any possible metascientific or religious perspective on reality, is ultimately unreasonable. Rather, such a stance is based on implicit metaphysical assumptions.
... Causality is crucial in human reasoning and knowledge. Defining and formalizing causality has been a significant area of research in philosophy and formal methods [12,21,24,11]. In recent years, with the rise of machine learning and AI, there has been growing interest in formalizing causal reasoning. ...
This work extends Halpern and Pearl's causal models for actual causality to a possible world semantics environment. Using this framework we introduce a logic of actual causality with modal operators, which allows for reasoning about causality in scenarios involving multiple possibilities, temporality, knowledge and uncertainty. We illustrate this with a number of examples, and conclude by discussing some future directions for research.
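As a rough illustration of the ingredients involved (our notation, not necessarily the paper's), a Halpern-Pearl causal model is a triple $M = (\mathcal{U}, \mathcal{V}, \mathcal{F})$ of exogenous variables, endogenous variables, and structural equations, and interventions are expressed with an operator $[X \leftarrow x]\varphi$ ("after setting $X$ to $x$, $\varphi$ holds"). A possible-world extension then allows mixed formulas such as

$$\Diamond\, [X \leftarrow x](Y = y),$$

read as "in some accessible world, intervening to set $X$ to $x$ brings about $Y = y$", combining modal and interventionist reasoning in a single language.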
... Innumerable such deduction rules can be inductively inferred from the given samples. In other words, induction involves arbitrariness (Hume, 1748; Goodman, 1954; Quine, 1969). ...
Large language models (LLMs) are capable of solving a wide range of tasks, yet they have struggled with reasoning. To address this, we propose ALT, which aims to enhance LLMs' reasoning capabilities with program-generated logical reasoning samples. We first establish principles for designing high-quality samples by integrating symbolic logic theory and previous empirical insights. Then, based on these principles, we construct a synthetic corpus named FLD, comprising numerous samples of multi-step deduction with unknown facts, diverse reasoning rules, diverse linguistic expressions, and challenging distractors. Finally, we empirically show that ALT on FLD substantially enhances the reasoning capabilities of state-of-the-art LLMs, including LLaMA-3.1-70B. Improvements include gains of up to 30 points on logical reasoning benchmarks, up to 10 points on math and coding benchmarks, and 5 points on the benchmark suite BBH.
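As a toy illustration of what "program-generated logical reasoning samples" might look like (a sketch under our own assumptions about the sample format, not the actual corpus-generation code), the snippet below assembles a single multi-step deduction instance from a randomly instantiated rule chain:

```python
import random

# Toy generator: a starting fact, a chain of implication rules, and the
# hypothesis that follows from applying the rules step by step.
PREDICATES = ["is_red", "is_round", "is_heavy", "is_soft", "bounces"]

def make_sample(n_steps: int = 3) -> dict:
    entity = "obj1"
    chain = random.sample(PREDICATES, n_steps + 1)
    facts = [f"{chain[0]}({entity})"]
    rules = [f"forall x: {a}(x) -> {b}(x)" for a, b in zip(chain, chain[1:])]
    proof = [
        f"{chain[i]}({entity}) & rule{i} => {chain[i + 1]}({entity})"
        for i in range(n_steps)
    ]
    return {"facts": facts, "rules": rules,
            "hypothesis": f"{chain[-1]}({entity})",
            "proof": proof, "label": "PROVED"}

if __name__ == "__main__":
    print(make_sample())
```

Real corpora of this kind additionally vary the linguistic surface forms and add distractor facts and rules, as the abstract notes.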
... The Scottish philosopher David Hume is best known for his fundamental investigations into causation. He explored the concept of causality in his philosophical works "A Treatise of Human Nature" (Hume, 1896) and "An Enquiry Concerning Human Understanding" (Hume, 2007). Hume's ideas laid the philosophical foundation for subsequent debates on causality. ...
... Beneath this universal concept lies a deep philosophical debate between 18th-century thinkers David Hume and Immanuel Kant. Hume, an empiricist, viewed causality as a mental habit formed by repeated experiences [1], while Kant, a transcendental idealist, saw it as an inherent concept imposed by the mind on sensory experience [2]. The Hume-Kant debate echoes in the modern discussion of causal inference from data versus text. ...
Causal networks are widely used in many fields to model the complex relationships between variables. A recent approach has sought to construct causal networks by leveraging the wisdom of crowds through the collective participation of humans. While this can yield detailed causal networks that model the underlying phenomena quite well, it requires a large number of individuals with domain understanding. We adopt a different approach: leveraging the causal knowledge that large language models, such as OpenAI's GPT-4, have learned by ingesting massive amounts of literature. Within a dedicated visual analytics interface, called CausalChat, users explore single variables or variable pairs recursively to identify causal relations, latent variables, confounders, and mediators, constructing detailed causal networks through conversation. Each probing interaction is translated into a tailored GPT-4 prompt and the response is conveyed through visual representations which are linked to the generated text for explanations. We demonstrate the functionality of CausalChat across diverse data contexts and conduct user studies involving both domain experts and laypersons.
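To make the interaction loop concrete, here is a hypothetical sketch (the prompt wording, JSON schema, and the send_to_llm helper are illustrative assumptions, not CausalChat's actual implementation) of turning a variable-pair probe into a model prompt and parsing the reply into candidate causal edges:

```python
import json

def build_probe_prompt(var_a: str, var_b: str) -> str:
    # Ask the model about a single variable pair, plus confounders/mediators.
    return (
        f"Consider the variables '{var_a}' and '{var_b}'. "
        "Does one plausibly cause the other? List any likely confounders or mediators. "
        'Answer as JSON: {"edges": [{"from": "...", "to": "...", "role": "..."}]}'
    )

def parse_edges(reply: str) -> list:
    # Each edge could then be added to the growing causal network visualization.
    return json.loads(reply).get("edges", [])

prompt = build_probe_prompt("smoking", "lung cancer")
# reply = send_to_llm(prompt)   # hypothetical call to GPT-4
# edges = parse_edges(reply)
print(prompt)
```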
... It gives credit where credit is due and respects the observer for the formless entity that it is. Knowledge/understanding can only go as far as forms go, since they themselves are form [15]. Beyond that, it is the realm of the formless, that fundamentally will always remain a mystery. ...
This paper presents an introduction to self-reference. The definition of self-reference will be presented, namely the entity with the property of looking-back-at-itself, and from this definition it will be shown how the entire world is obtained. Through repeated look-backs-at-itself, self-reference starts from the first self-identification, “I am,” which is experienced as the sensation of being alive, and continues to more complex self-identification, ending up with the entire world being self-reference itself. In this process, it is shown how consciousness is the direct consequence of self-reference and how qualia present inclusion and transcendence. It will be shown that the definition of self-reference implies an interplay between form and formless, making it in the end an entity that cannot be spoken about, though at the same time responsible for the creation of the entire world. Parallels with set theory will be made.
... It takes up Hume's assumptions on the connection among ideas (Hume 1748/1955): "Resemblance, Contiguity in time or place, and Cause or Effect". Kehler's class of Resemblance Relations includes Parallel, Contrast, Exemplification, Generalisation, Exception, and Elaboration. ...
This paper deals with three interrelated topics, linguistic anaphora, multi-modal anaphora and the top-down broadcasting of information using gestural post-holds in multimodal dialogue. Initially, a new solution for definite, pronominal and pro-adverbial anaphora is given based on the idea that an existentially quantified general term may output a definite reference. This approach is extended to multimodal anaphora, where part or all of an anaphor’s meaning is contributed by some sequence of iconic or deictic gestures. Anaphora exploit the semantic potential of their antecedents; they work, as tradition has it, “bottom-up”. An inverse relation, more general than cataphora, and investigated here for the first time, is “broadcasting”, where information is freely distributed top down and input to receiving sites (ports). Anaphora are modelled with the same top-down mechanism, and the same applies to coherence relations in dialogue, which generally show an anaphora-like behaviour. “Broadcasting” can be used in the context of anaphors, for example, to provide their gestural meaning parts but also for a verb’s multi-modal arguments for referring to a location, a direction or an area. As for multi-modal data, broadcasting is shown to be frequently tied up with gestural post-holds, the holding of a gesture’s stroke information independently of semantically alignable speech. This leads to considering post-holds from a new perspective, stressing their speech-independent function and their relevance for indicating topic-continuity. We show that multi-modal anaphora and especially broadcasting cross single contributions and turns. The data that led us to develop these perspectives come from the SaGA (Speech and Gesture Alignment) corpus, a set of route-description dialogues generated in a VR-setting incorporating marker-based eye-tracking facilities. The calculus used to model the anaphora and broadcasting dynamics is the concurrent λΨ-calculus, a recently developed two-tiered machinery using a Ψ-calculus for input-output, data transport and broadcasting. The data transported are in a typed λ-calculus format incorporating Neo-Davidsonian representations; these data can be linguistic, gestural only or multi-modal. Multi-modal informational chunks are modelled as communicating agents sending and receiving information via input-output-channels. They are introduced incrementally on an empirically motivated construction or gesture-plus-construction or gesture only basis. The λΨ-calculus is also used for the multi-modal fusion component unifying gestural and linguistic information; hence, the paper is also a contribution to multi-modal fusion of linguistic and gestural input. Finally, it is shown how the presented algorithm can capture multi-modal coherence relations or a multi-modal anaphora resolution based on PTT ideas.
... This parallels the ideas of David Hume, who said we get our ethics from our feelings more than our reasoning [35,36]. Such a connection would be difficult to prove, especially since there is little literature on critical thinking learning outcomes alone. The significance of the work starts with the value placed on critical thinking and the value placed on the transfer of critical thinking skills to practice. ...
Introduction
Little literature exists on how graduates apply in practice the explicit critical thinking skills learned in dental school.
Purposes
Discern the (1) degree to which graduates apply explicit critical thinking skillsets in practice; (2) degree of adaptation of critical thinking skillsets to practice; (3) frequency of use of critical thinking skillsets in practice; and (4) perceptions of how to improve critical thinking learning guidance in dental school.
Methods
Five critical thinking exercises/skillsets were selected that had been in place for over 5 years, each with at least one published paper: geriatrics, treatment planning, technology decision making, ethics, and evidence‐based dentistry; each followed concepts from an emulation model of critical thinking. An electronic survey was administered in 2023/2024 to alumni who had graduated in the last 5 years.
Results
Of the 98 surveys returned (from 320 distributed), 56 were completed in full. Dental school experiences positively influenced the use of critical thinking skills in practice. On a five‐point scale, mostly 4s and 5s were reported for “…benefit your thinking.” Fifty‐three percent reported “using ideas from the exercise and developed my own thought processes,” 35% reported “using the thought process largely as offered in the college,” and 5% reported that they “do not use the exercise.” Sixty percent reported using the skillsets hourly or daily. With minor variations, all skillsets were reported positively for use in practice.
Conclusions
The college experience had a positive influence on the use of critical thinking skills, with an explicit positive impact for each of the five critical thinking experiences. The questions may serve as a model for future follow‐up studies of explicit dental school critical thinking exercises.
Proceedings of the working groups (GTs) Kant and Criticism and Semantics, with papers presented at the 20th Meeting of the Associação Nacional de Pós-Graduação em Filosofia (ANPOF), held in Recife, Pernambuco, from September 30 to October 4, 2024.
This paper critically addresses the pervasive neglect of indigenous approaches to social transformation within the field of international development cooperation. It shows how commonly used evaluation frameworks—shaped by Western assumptions about evidence, measurement, and progress—tend to exclude non-Western knowledge systems. Focusing on African Initiated Churches (AICs) as exemplars of development actors with transformational approaches that incorporate the spiritual, this study explores the possible reforms required in mainstream evaluation practices to recognise and include development alternatives. An analysis of AIC evaluation practices reveals the potential for decolonised frameworks rooted in African and Indigenous epistemologies, including relational, communal, and spiritual ways of generating evidence. This paper argues that fostering mutual learning and dialogue in the field of development evaluation is fundamental to driving more inclusive and sustainable social change.
This paper explores the theme of gaining the ability to speak in Russian-Israeli literature, which is written by immigrant writers who continue to use the Russian language in Israel. Often caught between Israeli and Russian, Eastern and Western cultures, these writers create characters who grapple with identity issues. Frequently, Jewishness becomes for them not just a recourse to memory or their own roots but takes on the scale of a miracle: the miracle of speech acquisition after a long period of silence. The works of Julius Margolin, Yulia Shmukler, and Efrem Baukh illustrate how openly speaking about their Jewish identity can lead to sudden and miraculous salvation in catastrophic circumstances. In contrast, Linor Goralik radically reinterprets this motif through a postmodernist lens, offering a new perspective on the problem of miracles and catastrophes. By applying philosopher Alexei Losev's theory of miracle to the analysis, this paper aims to elucidate the significance of the speech acquisition motif for Russian-Israeli authors. It explores the functions of this motif and investigates its various interpretations in the context of Russian-Israeli literature. Through analyzing selected works, this paper demonstrates the evolution and transformation of this motif over time. Additionally, it examines how the motif interacts with broader themes such as identity, memory, trauma, and the quest for meaning.
The post and/or modern humanity conceptualise their epistemic ability as providing the singular reliable interlocutor of knowing. However, implicit in this epistemology is modern humanity’s turning away from being qua being to a self-referential, relative and reductive modality of understanding. The ever-growing scientific corpus aids this view on post and/or modern epistemology, but by its success, this knowing has lessened other modes of knowing, e.g., metaphysics, indigenous knowledge systems, etc. In the post and/or modern milieu, the exploration of being is consequently not done on being’s own complex and irreducible terms but only on those construed by the subject. Inspired by the decolonial turn in the African academy and utilising the paradigm of African Philosophy and the Ratzingerian critique, the case is made that the influence of the seemingly opposing – but significantly coupled – reductive epistemological movements of modern empiricism and postmodern relativism have collaborated to disenchant the human experience. By ‘acceptable’ knowledge’s limitation to knowing relative to and/or measurable by the thinking subject, the post and/or modern subject is detached from being and becomes disenchanted in the divestment of wonder. It is contended that for the human to encounter being, wonder before the cosmos as-it-is, that is, the experience of enchantment, must be reclaimed for the sake of non-reductive and non-self-referential knowledge. By appealing to African decolonised epistemology and Aristotelian-Thomism, a more liberated conceptualisation of ‘science’ beyond the constraints of post and/or modern epistemology, incorporating the wondrous, is argued for. Intradisciplinary and/or interdisciplinary implications: Utilising a multimodal approach as developed from the African context, this research touches on fundamental themes in both theology and philosophy, as the argument is made that for being to be apprehended, the experience of wonder and awe needs to be reclaimed. In this sense, the study touches on psychological dimensions of the human experience, modes of knowing and reframing how ‘science’ is defined.
An extremely popular view among faithless persons is that persons of faith are not legitimate philosophical opponents. After all, one would be so if and only if one met a strong condition: avoiding appeal to emotions or Scriptures, suspending judgment or seeking to convince others without using propositions of faith, and respecting Pyrrhonist epistemic standards. The essay challenges this condition; it supports a weak condition according to which one is a legitimate philosophical opponent if and only if one recognizes one’s difficulty of distinguishing emotions and reasons for taking propositions to be true, is aware of some of one’s propositions of faith, and acknowledges one’s argumentative limits. While criticizing the strong condition and backing up the weak one, the essay tackles two philosophical personas: Faithless Descartes, who purports to respect the strong condition but disrespects it; and Faithful Descartes, who illustrates a person of faith who meets the weak condition. This is not, however, an exegetical essay on Descartes. Hence, though based on his works, the personas presented are not exactly identical to Descartes’ own stance.
Philosophical understanding of the problem of skepticism and its sources has grown in recent years, but important questions remain about the contribution of underdetermination and closure principles to skeptical arguments. My aim here is to improve upon this situation. Sections 1, 2 compare a closure principle I call Weakening to a principle I call Underdetermination. It appears that the former doesn’t follow from, and is less plausible than, the latter. Section 3 examines Dretske’s Zebra Case as a putative counterexample to Weakening. The New Zebra Case, a variant of the original, indicates that Weakening is preserved after all. Sections 4, 5 establish that Weakening and Underdetermination converge in a canonical skeptical argument, the Misleading Evidence Argument. Such convergence is significant, because it is widely thought that abandoning Weakening allows us to escape skepticism. Section 6 takes up Martin Smith’s claim that underdetermination principles don’t underwrite a skeptical argument of any interest. I argue that this assessment is mistaken.
The theory of evolutionary ethics suggests that the biological process of natural selection can supply a foundation for morality. This paper considers the philosophical groundings and implications of such a theory, with reference to common defenses against the counterarguments of the theory. This paper finds that—in spite of recent defenses—the theory of evolutionary ethics remains philosophically indefensible.
How can we know that we are not dreaming? In this essay, I address this and other related questions from a transcendental standpoint, constructing a philosophical narrative centered on three "giants": Descartes, Kant, and Putnam. From each of them I take some ideas and discard others, in order to develop a historically informed yet original transcendental approach to dream scepticism. I argue that dreams can be distinguished from objective cognitions, since they do not usually satisfy the transcendental conditions of such cognitions, for example, the conditions of linguistic reference. Indeed, drawing on some ideas from G. E. Moore and Wittgenstein, I further argue that formulations of dream scepticism turn out to be devoid of sense: they cannot be understood linguistically. Nevertheless, reflection on these sceptical formulations can lead us to a clear aesthetic grasp of the transcendental conditions of sense, as well as of the meaning of philosophically problematic words such as "dream," "perception," and "reality."
My aim in this article is to argue that distinct senses can be attributed to the terms "will" and "volition" in Hume's philosophy. Contrary to traditional interpretations, I maintain that Hume does not identify will with volition. I first present arguments by Hobbes and Locke against the scholastic conception of the production of voluntary actions and argue that Hume aligns himself with these two philosophers. I then present the arguments of the traditional interpretation, which identifies will and volition in Hume's philosophy, as well as some objections raised against those arguments. Finally, in opposition to the traditional interpretation, I argue that for Hume the will can be understood as the faculty by which we produce voluntary actions, and that volitions are motivational passions in exercise. The motivational passions that produce actions are volitions, the perception by which we produce voluntary actions.
In order to highlight the place that the reading of Hume occupies within Deleuze's work, notably during his philosophical production of the 1950s and 1960s, we describe the main conceptions of synthesis that the French philosopher proposes in "Empirismo e subjetividade" (1953). One of the questions motivating this research is the empirical origin of the self. This is what Deleuze calls the incomprehensible or empirical synthesis. In addition, we identify the synthesis of time, which describes the condition governing the operation of habit. To these syntheses, at least two more identified in that work are added: on the one hand, the synthesis that indicates the agreement of the faculties or, more precisely, the articulation between the principles of association and those of the passions; on the other, the synthesis of judgments, which arises from the correlation between these faculties.
Gettier cases reveal the paradoxes within the universally applied, but therefore misunderstood, framework of Plato's "justified true belief" (JTB). By identifying and addressing five challenges, this analysis highlights the limitations of JTB in dynamic contexts. The resulting instabilities and contradictions necessitate a shift toward a dualistic model of knowledge, distinguishing between static knowledge (SK), which is timeless and unchanging, and dynamic knowledge (DK), which can adapt and evolve with changing circumstances. In this framework, Gettier cases will be explained as conceptual coincidences. In dynamic environments, knowledge claims demand methodologies that transcend the limitations of JTB in order to adapt to evolving information. Assertions in this regard, with their critical moments, procedurally lead to more useful conceptualizations. Consequently, DK offers tools for epistemological analysis with both an idealistic and a pragmatic approach, the latter defined as "justified true crisis" (JTC).
Philosophical theories of knowledge need an interdisciplinary approach in order to make progress on many of their questions. In this vein, this article seeks to offer a theoretical framework drawn from cognitive psychology and the cognitive sciences for understanding how the acquisition and transformation of categories takes place during early childhood. Studies are presented showing that categorization relies on mechanisms (such as attention) and cognitive faculties (such as language) to create new categories. Perspectives on the processing that may be occurring at the moment of categorization are discussed. Different theoretical strands on language and attention are thus laid out, and it is proposed that temperament is an innate faculty that may be influencing the way we categorize and, therefore, the way we come to know the world.
For Hegel, scepticism is one of the greatest forces in philosophical thought. He makes a sharp distinction between the scepticism of Ancient Greece and the scepticism of modern thinkers from Descartes to (Hegel’s contemporary) Schulze. These two forms of scepticism appear to have a similar foundation, but according to Hegel, their nature is substantially different. Hegel subsequently attempts to incorporate the fundamentals of ancient scepticism into the dialectics of consciousness, his primary subject in the Phenomenology of Spirit, transforming its role in the process. Hegel reinterprets scepticism as a force of constant, self-affecting movement that is immanent to consciousness itself.
This article examines the viability of the "God Hypothesis" as a framework for applying scientific methodologies to theological claims, considering the historical context, philosophical foundations, and contemporary challenges of this endeavor. By evaluating the potential for empirical verification of theological assertions through case studies on miracles, near-death experiences, and quantum mechanics, the paper addresses the epistemological boundaries between science and theology. It also considers the implications of emerging technologies and interdisciplinary approaches for the future of the God Hypothesis in the 21st century.
According to our folk theory, we can reliably detect when people are lying through observing behavioral cues. The cues occur because people are motivated to lie but afraid to get caught. Lying is also commonplace according to our folk theory, so it is a good thing that we have this capacity to tell when people are deceiving. Social epistemologists have agreed, elevating our folk theory into epistemological wisdom. Elizabeth Fricker's epistemology of testimony leads the field in transforming folk thoughts into philosophical theories. Extensive research in communication studies, however, shows that our folk theory is mistaken. The social epistemology of communication should understand and incorporate lessons from this research when explaining why we acquire knowledge and justification through conversation and when making recommendations for how to do better. Starting with the facts provides a surer foundation. This chapter elaborates this research, with special attention to its ecological validity, concluding with a detailed discussion of Fricker's argument for a monitoring requirement on justified testimony-based belief and her argument that we frequently possess a reliably true quasi-perceptual belief that our speakers are trustworthy.
This essay examines the issue of criminal liability from the viewpoint of the character theory of responsibility, with particular attention being paid to the role of excuse-based defences. Two different versions of the character theory are examined and compared: the traditional character theory and the utilitarian motivational theory of responsibility. Following a brief overview of the distinction between justification and excuse in common law jurisprudence, the two versions of the character theory are discussed and their implications are highlighted. The essay concludes that the traditional character theory, with its emphasis on moral blameworthiness, offers a better basis for understanding the nature of criminal responsibility in relation to offences which also constitute moral wrongs. The utilitarian motivational theory, on the other hand, may be given priority when considering the question of responsibility in relation to offences in which the element of moral blame is absent, minimal or questionable.
This paper investigates the topic of epistemic authority from the perspective of ordinary people facing expert testimony. In particular, two central questions are discussed: how one should respond to expert testimony, and what one should do in the face of expert disagreement.
Friedrich Waismann regards the definition of analytical judgments, which is very similar to the predication of the essential of the essence, as contradictory. His view is based on denying the unified combination of subject and predicate and on the constructiveness and nonexistence of the whole. This paper focuses mainly on the problem of predication, because Waismann's criticism rests on the difficulty that the components are not predicated of the whole, and also on considering the whole exclusively as the constructive one. Comparatively studying the problem in light of the opinions of Muslim philosophers and logicians, we show that such a problem has long been prominent in their works, especially in Ibn Sīnā, and has received a satisfactory answer as well. Apart from the denominative-predication resolution of a few Peripatetics, they, by distinguishing the real from the constructive whole, hold that when components are combined, another object is also generated, in such a way that predication is possible in some cases and not in others. Waismann's problem is related only to the second. Also, the predication discussions of Muslim thinkers are related to the discussion of unified and adjacent (inḍimāmī) combinations. Studying the historical roots of the problem, we show that the content of Ibn Sīnā's words is precisely the unified and adjacent combinations, which are expressed by the conditioning-no (bisharṭ lā) and unconditioned (lā bisharṭ) considerations. Waismann's problem applies to the first, not to the second.
Quantum physics is usually defined as a theory that affirms a primary role for randomness and probability. Eleven well-known quantum experiments are examined, and the result is the coexistence of both random and causal types of behaviour: we need both to describe experiments. Quantum mechanics asserts a general overcoming of causality, and this statement constitutes an unlimited generalization not supported by experience. Determinism and indeterminism are philosophical systems that universalize causality or chance. The crucial point is the difference between epistemic and intrinsic randomness. On the first view, randomness does not have a fundamental meaning; on the second, the randomness of the individual event is explained, but the detection of highly stable regularities remains to be explained. The article addresses the question of entanglement and various aspects of probability theory, including the law of large numbers, arriving at the thesis that many relevant questions remain unresolved. A causal description is impossible in principle in quantum mechanics, given its assumptions. In order not to contradict experience, both points of view, causal and random, are necessary. This is the state of the research, to which we must return and from which we must start again.
How does the Church in Ghana address the growing concerns of health care delivery, especially among the Ghana-Eυe, from the perspective of Christianity? This is the question that this study sought to answer. Using literature research, this paper argues that holistic Christian health care is nothing short of ᶑagbε in the Eυe context. ᶑagbε is holistic well-being. Holistic well-being can hardly be separated from the salvation that Jesus offers. At his incarnation, Jesus brought the gospel of the kingdom of God/kingdom of heaven as holistic well-being for human benefit. While preaching his holistic gospel, Jesus also healed and taught people. His spectacular healing ministry and other deeds did not prove his divine sonship or messiahship but attested to the fact that in his life and ministry, the kingdom of God/heaven had come in power and glory. In Jesus, then, one finds the full realisation of God’s ᶑagbε to heal the human body, soul and spirit. The study contributes to scholarship in helping to indigenize the meaning of salvation in the Eυe context. This wades into the debate on inculturation theology in Africa.
Keywords: ᶑagbε, health, Ghana
Ever since John Leslie Mackie’s ‘popularization’ of moral error theories in meta-ethics, increasing attention has been focused on how to escape the force of nihilism. For many opponents of the moral error theory, ‘moral nihilism’ is used as a derogatory synonym associated with immorality and selfishness, but such a defamatory usage of the label is obviously not very helpful for a serious philosophical examination of the view. The goal of this paper is to draw on insights by David Hume and other Humean philosophers such as J.L. Mackie, Richard Joyce, and Richard Garner, in order to turn ‘moral nihilism’ from a term of abuse to a ‘badge of honour’.
In this paper, we use the recent appearance of LLMs and GPT-equipped robotics to raise questions about the nature of semantic meaning and how this relates to issues concerning artificially-conscious machines. To do so, we explore how a phenomenology constructed out of the association of qualia (defined as somatically-experienced sense data) and situated within a 4e enactivist program gives rise to intentional behavior. We argue that a robot without such a phenomenology is semantically empty and, thus, cannot be conscious in any way resembling human consciousness. Finally, we use this platform to address and supplement widely-discussed concerns regarding the dangers of attempting to produce artificially-conscious machines.
This dissertation investigates the roots of African Christian Spirituality and Traditions and challenges the presupposition that Europe is the source and origin of African Christian spiritual practices. It clarifies the foundational role of early African Traditional Spirituality and practices as well as the influence of African Christianity from the pre-colonial era. The research is preoccupied with the question of the foundations of African Christianity and what constitutes African Christianity. The colonial and missionary encounters with Africa profoundly influenced the understanding of the roots of African Christian spirituality and practices. These recent historical developments of African Christianity often overshadow the pre-colonial foundations of African Christian spirituality, traditions, and belief systems.
The study asserts the existence of a long and cumulative wealth of spiritual traditions and practices that lay the foundation of African Christian spirituality. It lays the foundation for the African Christian traditions, belief systems, and practices. Thus, this research establishes the claim that the solid foundation of African Christian spirituality traces its roots from the cumulative pre-colonial foundations without denying the contribution of the Western colonial and missionary inculturation of Christianity.
Determining whether repetitive head impacts (RHI) cause the development of chronic traumatic encephalopathy (CTE)-neuropathological change (NC) and whether pathological changes cause clinical syndromes are topics of considerable interest to the global sports medicine community. In 2022, an article was published that used the Bradford Hill criteria to evaluate the claim that RHI cause CTE. The publication garnered international media attention and has since been promoted as definitive proof that causality has been established. Our counterpoint presents an appraisal of the published article in terms of the claims made and the scientific literature used in developing those claims. We conclude that the evidence provided does not justify the causal claims. We discuss how causes are conceptualised in modern epidemiology and highlight shortcomings in the current definitions and measurement of exposures (RHI) and outcomes (CTE). We address the Bradford Hill arguments that are used as evidence in the original review and conclude that assertions of causality having been established are premature. Members of the scientific community must be cautious of making causal claims until the proposed exposures and outcomes are well defined and consistently measured, and findings from appropriately designed studies have been published. Evaluating and reflecting on the quality of research is a crucial step in providing accurate evidence-based information to the public.
Experimental methods from psycholinguistics allow experimental philosophers to study important automatic inferences, with a view to explaining and assessing philosophically relevant intuitions and arguments. Philosophical thought is shaped by verbal reasoning in natural language. Such reasoning is driven by automatic comprehension inferences. Such inferences shape, e.g., intuitions about verbally described cases, in philosophical thought experiments; more generally, they shape moves from premises to conclusions in philosophical arguments. These inferences can be examined with questionnaire-based and eye-tracking methods from psycholinguistics. We explain how these methods can be adapted for use in experimental philosophy. We demonstrate their application by presenting a new eye-tracking study that helps assess the influential philosophical "argument from illusion." The study examines whether stereotypical inferences from polysemous words (viz., appearance verbs) are automatically triggered even when prefaced by contexts that defeat the inferences. We use this worked example to explain the key conceptual steps involved in designing behavioural experiments, step by step. Going beyond the worked example, we also explain methods that require no laboratory facilities.
The idea of a unified foundation of all reality has long been core to many attempts at a fundamental ontology, as well as to many arguments for the divine. In medieval India, a cluster of arguments for metaphysical inheritance, causal entanglement, the impossibility of fundamental relations, and more were advanced together to show that there must be an ultimate and unified ground. But foundationalism has been under attack in both recent metaphysics and Buddhist philosophy. This article unpacks Vedānta's defense of divine foundationalism against Madhyamaka Buddhism's metaphysical nihilism. First, we look at how inheritance arguments for an ultimate ground aimed to circumvent the possibility of infinite regress. Second, we assess three arguments that this ground is a unified modal anchor, with entangled causal power, providing a connective medium for all phenomena. We address some caveats and limitations, but go on to argue that if these arguments are right, they circumvent the Buddhists' 'dualistic' assumption that if the empirical world is mere imagined convention, it needs no explanation. Monists and nihilists are allies against excessively realist ontologies, but these arguments make a compelling case for some unified fundamental nature from which, as the Upaniṣads put it, all things emerge like sparks from a fire.
The objective of the present work is to understand and elucidate Kant's notion of category and how he derived the categories from a single transcendental principle. Kant did not put forward any definition of categories. He believed that categories cannot be defined without perpetrating a circle. Thus, he began his discourse with certain features of categories in his work Critique of Pure Reason. We have discussed the characteristic features of Kantian categories. An important point to be noted here is that the categories, in the fullest Kantian sense of the term, must have a distinct property, namely that they should necessarily be applicable to all objects of knowledge. However, we are not concerned with the necessary applicability of concepts to all objects of knowledge in this paper. The analysis of the Kantian notion of categories, more importantly, necessitates a discussion of how he derived them from a single transcendental principle. Kant referred to the single principle which guides the search for the categories as "the clue to the discovery of the categories." The specific and clear formulation of the principle which served as the transcendental clue to the discovery of the categories for Kant is that to every form of judgment there corresponds a pure and basic concept of the understanding. The forms of judgments and the categories both originate from the same source, namely, the function of the understanding, i.e., thinking. It may be noted here that the understanding is the power or faculty of knowing, and thinking or judging is the function of the understanding. Kant argued that the twelve logical forms of judgments provided the clue to the origin of twelve corresponding a priori concepts or categories. Two arguments provided by Kant in support of the principle serving as a transcendental clue to the discovery of the categories are analysed. An orthodox view held by some philosophers, that for Kant the forms of judgment are forms of analytic judgment, has been critically analysed and is interpreted as erroneous.
Keywords: category, judgment, understanding, categorematic, syncategorematic
The philosophical tension between reductionism and holism has shaped our pursuit of knowledge for centuries. From Aristotle's assertion that "the whole is greater than the sum of its parts" to the mechanistic precision of Enlightenment science, these opposing views have guided our understanding of complex systems. Reductionism, which seeks to explain the world by dissecting it into its smallest components, has driven monumental scientific advances, from Newtonian physics to molecular biology. In contrast, holism emphasizes the interconnections between parts, revealing that emergent properties arise from the relationships within a system. This paper traces the historical evolution of these philosophies, leading to their reconciliation in the modern era through systems theory and the development of artificial intelligence (AI). By exploring how AI exhibits emergent phenomena, we see the reemergence of holistic thinking, where intelligence and complex behaviors arise from the interaction of simple components. This synthesis of reductionism and holism provides a new framework for understanding both the natural and artificial worlds.
Keywords: reductionism, holism, Aristotle, emergent phenomena, AI, artificial intelligence, systems theory, complexity, neural networks, deep learning, philosophy of science, cybernetics, history of science, emergent behavior, complexity science.
According to a prevailing view, conceptual engineering introduces a revolutionary philosophical methodology, challenging traditional conceptual analysis. However, in our paper, we argue that closer scrutiny reveals not only the falsity but also the inherent ambiguity of this narrative. We explore four interpretations of the 'Anti-Novelty Claim', the claim that conceptual engineering is not a new way of doing philosophy. Discussing the Anti-Novelty Claim from the perspective of a text's producer, the text's consumers, and the exegetical potential of the text, we examine each perspective's metaphilosophical implications and demonstrate that taking each perspective requires different methods. Adopting these different methods, we argue that the different interpretations of the Anti-Novelty Claim range from nearly trivially true to unlikely but untested. Importantly, we emphasize that each interpretation offers unique philosophical insights, yet addressing them requires diverse types of evidence, preventing a singular, straightforward answer to whether conceptual engineering is new.
The discovery of the motives behind human actions became a controversial issue among philosophers during the seventeenth and eighteenth centuries. While supporters of libertarianism maintained that human actions stemmed from an inner will, the Necessitarians believed that human actions were the effect of previous external causes. David Hume, however, came up with a middle-way solution to this dispute, which he named his Reconciling Project. This comparative study aims to discover the concept of the Reconciling Project in the works of Alexander Pope. To achieve this, the article elaborates upon Hume's thoughts on this theme as expressed in the eighth section of his An Enquiry Concerning Human Understanding, entitled "Of Liberty and Necessity", and traces it in the works of Pope by analysing Pope's views in the Essay on Man and his notion of the Ruling Passion. It is argued and concluded, through a comparative method, that Hume's thoughts had found their way into the works of Pope and that both thinkers identified human instincts as the real motive behind human actions. Through an intertextual analysis, it is also concluded that this was either the direct effect of Hume's thoughts on Pope or that both figures were influenced by the same intellectual currents of the eighteenth century, Enlightenment and secularisation.
The brain may have evolved a modular architecture for daily tasks, with circuits featuring functionally specialized modules that match the task structure. We hypothesize that this architecture enables better learning and generalization than architectures with less specialized modules. To test this, we trained reinforcement learning agents with various neural architectures on a naturalistic navigation task. We found that the modular agent, with an architecture that segregates computations of state representation, value, and action into specialized modules, achieved better learning and generalization. Its learned state representation combines prediction and observation, weighted by their relative uncertainty, akin to recursive Bayesian estimation. This agent’s behavior also resembles macaques’ behavior more closely. Our results shed light on the possible rationale for the brain’s modularity and suggest that artificial systems can use this insight from neuroscience to improve learning and generalization in natural tasks.
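For readers unfamiliar with the analogy, recursive Bayesian (Kalman-style) estimation combines a prediction with an observation, weighting each by its relative reliability; a standard scalar textbook form (an illustration of the analogy, not the agent's literal equations) is

$$\hat{s}_t = \hat{s}_t^{\,-} + K_t\,(o_t - \hat{s}_t^{\,-}), \qquad K_t = \frac{\sigma_{\text{pred}}^2}{\sigma_{\text{pred}}^2 + \sigma_{\text{obs}}^2},$$

where $\hat{s}_t^{\,-}$ is the predicted state, $o_t$ the observation, and $K_t$ the gain: the less reliable the prediction relative to the observation, the more weight the observation receives.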