Published by Springer Nature
Online ISSN: 1574-9274
Print ISSN: 0048-3893
Recent publications
The aim of this paper is to show that Curry’s recent defence of the interpretivist approach to beliefs is unsuccessful. Curry tries to argue that his version of interpretivism, which is based on the model-theoretic approach to folk-psychological attributions, is well-suited to resisting the epistemological argument that is directed at interpretivism. In this paper, I argue that even if Curry’s defence is successful in this case, his theory does not have enough resources to solve the metaphysical problems of interpretivism. In particular, I argue that the model-theoretic version of interpretivism that Curry espouses does not explain the claim that beliefs are constituted by the process of attribution, which is central to the interpretivist project. In the final parts of the paper, I discuss the issue of the relation between interpretivism and other forms of the broadly superficial/deflationary approach to beliefs, especially dispositionalism. I contend that if one wants to adopt a superficial/deflationary approach, it is best not to adopt interpretivism as it is an unnecessarily complex and problematic version of this broad view.
The objective of the current paper is to provide a critical analysis of Dretske’s defense of the naturalistic version of the privileged accessibility thesis. Dretske contends that the justificatory condition of privileged accessibility relies neither on an appeal to the perspectival ontology of phenomenal subjectivity nor on a functionalist notion of accessibility. In defense of naturalistic privileged accessibility, he reformulates introspection (which, on an internalist view, justifies the non-inferentiality of knowledge of one’s own mental facts) as a form of displaced perception. Both internalists and externalists accept the plausibility of the first-person authority argument via privileged accessibility; their disagreement lies in the justificatory condition of privileged accessibility. Internalists hold that the justificatory warrant for privileged accessibility is grounded in phenomenal subjectivity. Externalists, by contrast, hold that the justificatory condition for privileged accessibility lies outside the domain of phenomenal subjectivity. As a proponent of naturalistic content externalism, Dretske defends the view that a subject’s privileged accessibility is due not to access to a particular representational state (and hence to the privilege of receiving sensory representational information) or to awareness of a mental fact, but rather to awareness of the whole representational mechanism. Knowledge of a particular representational state through privileged access is not a sufficient condition for the accuracy of knowledge about one’s own mental facts; the justificatory warrant lies external to the subject.
Although Dretske’s naturalistic representationalism is not fully plausible in dealing with the reduction of the phenomenal qualities of experience, it nevertheless provides a new roadmap for compatibilists in defending privileged accessibility, and it has had a major impact on transparency theorists.
In this paper, I assess Duncan Pritchard’s defense of the “orthodox” view on epistemic normativity. On this view, termed “epistemic value T-monism” (EVTM), only true belief has final value. Pritchard discusses three influential objections to EVTM: the swamping problem, the goal of inquiry problem, and the trivial truths problem. I primarily focus on Pritchard’s response to the trivial truths problem: truth cannot be the only final epistemic value because we value “trivial” truths less than “significant” truths. In response, Pritchard appeals to epistemic virtue: the virtuous agent desires “substantial” truths, where “substantiality” is a matter of the non-luckiness of the true belief. Thus, what has substantial value, for Pritchard, is well-grounded true belief. Yet, I argue, this moves away from “orthodox” EVTM: this view fails to attribute final epistemic value to non-grounded, accidentally true belief. Aside from giving up EVTM altogether, I suggest this leads to either a revision or a revolution. On the revisionary interpretation, EVTM was always imprecise: what orthodoxy always really cared about was epistemically grounded truth rather than truth simpliciter. On the revolutionary reading, what we really care about, contra orthodoxy, is the formation of belief in line with the virtues rather than truth simpliciter.
Rawls famously argued against meritocratic conceptions of distributive justice on the grounds that the accumulation of merit is an unavoidably lucky process, both because of differences in early environment and because of differences in innate talents. Thomas Mulligan (2018a) has recently provided a novel defense of meritocracy against the “luck objection”, arguing that both sources of luck would be mostly eliminated in a meritocracy. While a system of fair equality of opportunity ensures that differences in social class or early environment do not lead to differences in the accumulation of merit, Kripke’s essentiality of origin thesis means that our genetic endowments, and thus our innate talents, could not have been any other way. But if we could not fail to have our innate talents, Mulligan argues, then it is not a matter of luck that we have them, and so the merits we accumulate on their basis are not so luck-dependent. This paper argues that Mulligan’s appeal to the essentiality of origin thesis fails to rescue meritocratic conceptions of distributive justice from the luck objection, for two reasons. First, even granting essentiality of origin and fair equality of opportunity, the contingencies of the market and the social environment mean that having some innate talents is far luckier than having others. And second, the appeal to essentiality of origin misses the underlying motivation for the luck objection and ignores the intimate connection between desert and responsibility.
The tension that many early scientists experienced between a reliance on religious tradition as a source of truth and scientific methodology as a guide to truth eventually led to a clash between theists, who claimed that the existence of the universe required a creator, and non-theists, who insisted that recourse to a creator to explain why there is something perverts scientific methodology. The present paper defends the position that physics and its foreseeable cosmological extensions neither require nor exclude either opposed contention. Each has the status of an inference to the best explanation. This is developed in three stages. The first uses historical analysis to support the claim that the advancement of physics and cosmology does not rely on an appeal to supernatural forces. The second explains inference to the best explanation. The third shows how this accommodates these conflicting claims. An appendix examines an influential argument that the intelligibility of the universe requires a creator.
Many observers argue that in its very beginning, Zionism was an instance of wrongful settler colonialism. Are they right? I will address this question by examining the vision of Egalitarian Zionism in light of various theories of the wrongfulness of colonialism. I will argue that no theory decisively supports a positive answer.
This text discusses aspects of Human-Machine Interaction from the perspectives of an engineer and a philosopher. Machines are viewed not so much as tools, but more as complex socio-technical systems. On this basis, the relation between the production and use of such systems and its influence on the interaction between human and machine is examined. The concept of intentionality serves as a common thread, and its close connection with the concepts of usefulness, purpose, and functionality, as well as with the more socio-culturally shaped concept of destination, is shown. We also discuss the connection of these considerations to current philosophical debates around Floridi’s “Onlife Manifesto”. The core of our argumentation is the significance of the unintended in the interplay between justified expectations, the intended, and the experienced results of real-world cooperative action. We extend the meaning of this term up to the negatively intended as harmful effect, in order to highlight the prominent position of problem solving in the socio-cultural development of humankind. We focus on information processes in Human-Machine Interaction and on the concept of unintended information. With the advancing digital transformation and the growing possibilities of digital monitoring, the importance of information processes as forms of description is growing, too. We show how the concept of non-intended information is linked to concepts of the control of development in which the non-intended is marginalised, and thus an essential driver of socio-cultural development is deactivated. We compare this with Floridi’s reflections on “fears and risks in a hyperconnected era”.
In a recent article in this journal, David Rondel argues that symbolic (or semiotic) objections to markets hold significant argumentative force. Rondel distinguishes between Incidental markets and Pervasive markets, where Incidental markets describe individual instances of exchange and Pervasive markets comprise the social management of goods by an institutional market arrangement. In this reply, I specify a key insight that buttresses Rondel’s distinction. The distinction as currently characterized fails to identify when Incidental markets become Pervasive. This opaqueness allows scholars who defend markets without limits to question the analytical distinctiveness of Incidental and Pervasive markets. I show that by incorporating the market’s price mechanism as an indicator of a properly Pervasive market, Rondel’s distinction is not only able to tackle the aforementioned retort, but also allows for important reflections on what types of institutions should be considered markets at all.
Wollen (2022) is a critique of deontological libertarianism, the version of this philosophy predicated upon private property rights and the non-aggression principle. The launching pad for Wollen’s article is the difficulty faced by conjoined twins who diverge sharply in their views of their desirable futures. The present rejoinder maintains that this critique fails; further, that it really has little or nothing to do with conjoined twins per se, but rather aims at an entirely different challenge, that of the tie or dead heat. The latter is a challenge that all politico-economic philosophies face, without exception, and libertarianism does no worse on it than any other, the critique of Wollen to the contrary notwithstanding.
According to Kant, the division of the categories “is not the result of a search after pure concepts undertaken at haphazard,” but is derived from the “complete” classification of judgments developed by traditional logic. However, the sorts of judgments that he enumerates in his table of judgments are not all ones that traditional logic dealt with; consequently, we must say that he chose the sorts of judgments in question with a certain intention. Moreover, we know that his choice of judgments and categories is strongly influenced by certain views of natural science that he fully accepts. For this reason, his arguments are sometimes seriously inconsistent. Many problems with Kant’s arguments for the categories have already been pointed out, but in this paper I take up the categories of quantity and quality once more, and make clear the hidden logic of his arguments and its distortion, from the point of view of the history of logic and natural science. First, I confirm that there are non-negligible problems in his explanation to the effect that his derivation of the categories of quantity and quality is based on the quantity and quality of judgments. Next, I reconsider the meaning of his treating the categories of quantity and quality as pure concepts of the understanding. Finally, I conclude that, by having recourse to the categories of quantity and quality, Kant tried unjustly to apriorize the distinction between “extensive magnitude” and “intensive magnitude”, a distinction with a long formational history going back to Aristotle.
Set theory faces two difficulties: formal definitions of sets/subsets are incapable of assessing biophysical issues; formal axiomatic systems are complete/inconsistent or incomplete/consistent. To overcome these problems, reminiscent of the old-fashioned principle of individuation, we provide a formal treatment/validation/operationalization of a methodological weapon termed the "outer approach" (OA). The observer's attention shifts from the system under evaluation to its surroundings, so that objects are investigated from outside. Subsets become just "holes" devoid of information inside larger sets. Sets are no longer passive containers, but rather active structures enabling the examination of their content. Consequences/applications of OA include: a) operationalization of paraconsistent logics, anticipated by unexpected forerunners, in terms of advanced truth theories of natural language; b) assessment of embryonic craniocaudal migration in terms of Turing's spots; c) evaluation of hominids' social behaviors in terms of evolutionary modifications of the musculature of facial expression; d) treatment of cortical action potentials in terms of collective movements of extracellular currents, setting aside what happens inside the neurons; e) a critique of Shannon's information in terms of the Arabic thinkers' active/potential intellects. Also, OA provides an outer view of a) humanistic issues such as the enigmatic letter of Celestino of Verona, Dante Alighieri's "Hell" and the puzzling Voynich manuscript; b) historical issues such as Aldo Moro's death. Summarizing, we suggest that the safest methodology to quantify phenomena is to remove them from our observation and take an outer view, since mathematical/logical devices such as selective information deletion and the set complement rescue the incompleteness/inconsistency of biophysical systems.
In this paper, I argue that perdurantism is incompatible with priority monism: the view that the universal mereological fusion, U, is fundamental. For the monist’s fundamental object can neither persist by being a trans-temporal object (i.e., a space-time ‘worm’) nor by being an instantaneous stage. If U persisted via being a worm, it would be grounded in its temporal parts, meaning that it would not be fundamental as it would not be ungrounded. If U were a stage, on the other hand, it would face a problem from the possibility of ‘temporal gunk’. But if U persisted by neither being a worm nor a stage, then U could not persist via having temporal parts, and thus perdurantism would be false. Given that a similar combination of perdurantism and priority pluralism also faces a problem from temporal gunk, I conclude that perdurantism does not sit well with mereologically based accounts of fundamentality.
The Uzumaki effect
The argument from appearance for the content view, or intentionalism, has attracted a lot of attention recently. In this paper, I follow Charles Travis in arguing against its key premise that representational content can be ‘read off’ from a certain way that a thing looks to a subject. My arguments are built upon Travis’s original objection and a reinterpretation of Roderick Chisholm’s comparative and noncomparative uses of appearance words. Byrne, Schellenberg and others interpret Travis’s ‘visual looks’ as Chisholm’s comparative use, and appeal to the noncomparative use as an alternative to avoid Travis’s objection. I demonstrate that they misunderstand both Chisholm and Travis. Both the comparative use and the noncomparative use are semantic notions, while ‘visual looks’ is a metaphysical one. Although Chisholm’s appearance objectivism –– the view that appearance expressions attribute appearances to ordinary objects –– is close to ‘visual looks’, appearance objectivism is not exclusive to the noncomparative use, as Byrne interprets it. Finally, I also show that Byrne’s conception of a distinctive visual gestalt cannot exclude contrary representational contents, because a distinctive visual gestalt can be shared by different kinds of things. Besides, Byrne and others do not explain why a distinctive visual gestalt should be presented as ‘being instantiated’. I therefore conclude that representational content cannot be read off from a certain way that a thing looks to a subject; the argument from appearance thus fails.
In The Axiological Status of Theism and Other Worldviews (2020), I defend the Complete Understanding Argument for anti-theism, which says that God’s existence makes the world worse with respect to our ability to understand it. In a recent article, Roberto Di Ceglie offers three objections to my argument. I seek to rescue my argument by showing (1) that understanding can come in degrees; (2) that I’m not a consequentialist about the value of understanding; and (3) that my argument is consistent with God providing us with sufficient knowledge of important spiritual matters. Di Ceglie’s objections point to future areas for fruitful exploration but do not defeat my argument.
In this short comment on Just, Reasonable Multiculturalism, I concentrate on the permissible extent of interference by a liberal state in a community within that state when such interference aims to protect individuals within that community from it. Cohen-Almagor and I both value individuals and want them protected, of course. This shared value, however, leads us to different conclusions. On any liberal view, individuals must be allowed to act as they wish subject only to specific sorts of justified limitations. In the mainstream approach that Cohen-Almagor accepts, these will include limitations necessary not merely to protect but also to promote autonomy. On my own view, by contrast, it is protection alone that justifies interference. I thus spell out Cohen-Almagor’s view about the need for interference with non-liberal groups within a liberal state and indicate my disagreement.
The paper examines Heidegger's notions of truth and knowledge in the context of Locke's theories of the same. It argues that when Heidegger's expositions of "primordial truth" and of knowledge as a "retainment of assertion" are analyzed in their Beings, new and improved definitions emerge which support Locke's ideas of truth and knowledge. It shows that Heidegger's primordial truth is the process which uncovers Locke's propositional truth and on which any knowledge must be based. It follows that, to solve the problem of what knowledge is and how one defines it, one must start with Heidegger's conceptions of truth and knowledge.
The term “cyborg” is being used in a surprising variety of ways. Some authors argue that the human being as such is—and has always been—a cyborg (Clark, Sorgner). Others see the term as describing what is peculiar about humanity in the present era (Haraway, Case). Still others reserve it for some current forms of human existence (Moe and Sandler, Warwick). Lastly, Clynes and Kline, who originally introduced the term, use it as referring to possibilities of the future. In the present paper, I examine what is at stake in this disagreement. I highlight that the different uses of the term “cyborg” can be seen as being based on one and the same conception of the human being and its relationship to technology, namely, the idea that human-machine hybridization is a gradual, longstanding and ongoing process. I explain how, arising from this common idea, the existing uses of “cyborg” diverge. I then raise the question of which of these uses is the most plausible or useful.
In this paper, I aim to reconstruct a charitable interpretation of Durkheimian utilitarianism, a normative theory of public morality proposed by the well-recognised American moral psychologist Jonathan Haidt, which might provide reasons to justify particular legal regulations and public policies. The reconstruction yields a coherent theory that includes elements of rule-utilitarianism, value pluralism, objective list theory and perfectionism, as well as references to Émile Durkheim's views on human nature. I also compare Durkheimian utilitarianism with two similar theories: Brad Hooker's rule consequentialism and Krzysztof Saja's institutional function consequentialism.
This paper discusses an analogical argument for the compatibility of the evidential argument from evil and skeptical theism. The argument is based on an alleged parallel between the paradox of the preface and the case of apparently pointless evil. I argue that the analogical argument fails, and that the compatibility claim is undermined by the epistemic possibility of inaccessible reasons for permitting apparently pointless evils. The analogical argument fails, because there are two crucial differences between the case of apparently pointless evil and the case of the preface. First, in the preface case, our non-cumulative evidence supports a claim of error, whereas the analogical argument is based on the idea that our non-cumulative evidence supports the success claim that there’s at least one instance of truly pointless evil. Second, our non-cumulative evidence of error in the case of the preface rests on a track record establishing author fallibility; in the case of apparently pointless evils, there is no relevant track record to support the claim that some apparently pointless evils are truly pointless. These differences, together with certain plausible assumptions about our fallibility and reliability with respect to propositions about evil, also indicate that inaccessible reasons for permitting apparently pointless evil are epistemically possible. Given the epistemic possibility of such reasons, we’re in no position to judge whether such reasons are present in every case no matter how large the set of such evils may be.
The experience of one’s body as one’s own is normally referred to as one’s “bodily sense of ownership” (BSO). Despite its centrality and importance in our lives, BSO is highly elusive and complex. Different psychopathologies demonstrate that a BSO is unnecessary and that it is possible to develop a limited BSO that extends beyond the borders of one’s biological body. Therefore, it is worth asking: what grounds one’s BSO? The purpose of this paper is to sketch a preliminary answer to the ‘grounding question.’ Thus, I begin by briefly presenting some contemporary competing hypotheses concerning the ‘grounding question’ and explain why they seem unsatisfying. Second, I discuss the “dual-aspect” of bodily awareness, which is manifest in every normal tactile experience and consists in a subject-object structure of awareness. I then argue that the “dual-aspect” of bodily awareness has the potential of explaining BSO and can, therefore, be considered its grounds. Taking the “dual-aspect” of bodily awareness as the grounds of BSO manages to escape difficulties faced by contemporary hypotheses concerning BSO, fulfills certain necessary demands upon any account of BSO, and explains relevant empirical findings and psychopathologies. Consequently, I argue that it is a hypothesis worth pursuing.
For the regularity of human conduct, as well as for peaceful coexistence among human subjects in society, the notions of equal rights and justice must be properly articulated and adapted. It is against this backdrop that this research embarks on an overview of the late English jurist Jeremy Bentham's arguments on equal rights and justice. Who is Jeremy Bentham? What arguments has he put forward for the praxis of what would constitute equal rights and justice? Is there any connection between his utilitarian ideals and any of these? These questions form the background to this research. In this essay, we argue that Jeremy Bentham's utilitarian basis for the comprehension of rights and justice is not only inadequate but also grossly misleading. It is therefore the submission of this essay that a utilitarian basis for the subject matter is not worthwhile in terms of practical utility for the maintenance of peace and harmony among members of society.
There has been a tendency to mistake logical positivism and logical atomism for one another, or to treat the two as synonymous. The thrust of this study is to dislodge this view, which is gradually becoming widespread and accentuated. Upon a critical assessment of the main thrust of logical positivism, we find that it is a movement which laid emphasis on language and on the elimination of metaphysics. Logical atomism, however, is a principle that derives from one of the members of the logical positivist movement, Bertrand Russell. In this essay, the main occupation is to recapitulate the main kernel of logical positivism, its emphasis on language and meaning, and how logical atomism crept into its discursive fray. At the end, we submit that logical atomism is a consequence of logical positivism, the converse being impossible, historically speaking.
Theories of different and independent types of intelligence constitute a Lakatosian research program, as they all claim that human intelligence has a multidimensional structure, consisting of independent cognitive abilities, and that human intelligence is not characterized by any general ability that is of greater practical importance, or that has greater predictive validity, than other, more specialized cognitive abilities. This paper argues that the independent intelligences research program is degenerating, since it has not led to novel, empirically corroborated predictions. However, despite its flaws, the program provides an illustrative example of some of the philosophical problems that inhere in Lakatos's so-called "methodology". Indeed, Lakatos's conceptions of the negative heuristic, the positive heuristic, and the relationship between scientific appraisal and advice are all vulnerable to objections. The upshot is that theories of independent intelligences indeed teach us more about philosophy of science than about the nature of human intelligence.
Distributive justice is generally important to persons in society. This was widely recognized by early Confucian thinkers, particularly Confucius, Mencius, and Xunzi, in ancient China. Confucius, Mencius, and Xunzi had developed, in varying degrees and with different emphases, their respective conceptions of distributive justice to address the relevant social problems in their times. These conceptions not only are intrinsically valuable political thoughts, but may prove useful in dealing with current or future social issues. Thus in this essay, first I provide a detailed interpretation of each of those thinkers’ conceptions of distributive justice, then I combine some essential elements of those conceptions to form a general and coherent conception, which is called a complex Confucian conception of distributive justice, and finally I evaluate this conception by considering some of its implications and limitations, in theory and in practice. This aims mainly at exploring the meaning and practical bearing of basic Confucian conceptions of distributive justice.
The harshness objection is the most important challenge to luck egalitarianism. Very recently, Andreas Albertsen and Lasse Nielsen provided a scrupulous analysis of the harshness objection and claimed that only the inconsistency objection — the objection that luck egalitarianism is incompatible with the ideal of basic moral equality — has real bite. I argue that the relevantly construed inconsistency objection is not as strong as Albertsen and Nielsen believe. In doing so, first, I show that the deontological luck egalitarian conception of equal treatment does not endorse harsh policies, such as excessively responsibility-sensitive healthcare, that would be disrespectful to the imprudent. Second, I demonstrate that deontological luck egalitarianism is not troubled by the case involving a lack of respect for the prudent, which, as Kasper Lippert-Rasmussen's argument highlights, vexes Anderson's relational egalitarianism. I thus claim that the harshness objection is not a truly decisive objection against the luck egalitarian project.
In the first part of the paper, I discuss Benatar’s asymmetry argument for the claim that it would have been better for each of us to have never lived at all. In contrast to other commentators, I will argue that there is a way of interpreting the premises of his argument which makes all of them come out true. (This will require one departure from Benatar’s own presentation.) Once we see why the premises are true, we will, however, also realise that the argument trades on an ambiguity that renders it invalid. In the second part of the paper, I consider whether discussions of how best to implement the anti-natalist conclusion cross a moral barrier. I ask whether we can, independently of any philosophical argument, raise a legitimate moral objection to discussions of how best to end all life on earth. I discuss three views concerning the role of our pre-philosophical views and attitudes in philosophical debates: the external view, according to which these attitudes set moral barriers to the content of philosophical debate whilst themselves standing outside this debate; the internal view, according to which our intuitions are part of the material for philosophical reflection and play no further role; and the intermediate view, according to which our pre-reflective views and attitudes, without themselves requiring philosophical validation, can play an important role when it comes to issues regarding the implementation of philosophical claims.
Constructivism as a distinct metaethical position has garnered significant interest in recent years due in part to Sharon Street’s theory, Humean metaethical constructivism. According to Street’s account, practical reasons are constructed by individual valuing entities. On this view, then, whether a particular reason applies to an individual is completely contingent upon what that individual actually values. In this article I argue for the recognition of multiple sources of practical reasons and values, including both individuals and communities. The resulting view, which I call layered constructivism, strengthens the constructivist project and begins to resolve some of the common critiques leveled against Street’s Humean constructivism. To begin, layered constructivism retains many of the benefits of Street’s approach, such as providing a naturalistic picture of normativity and maintaining a close tie between practical reasons and individual motivation. Moreover, the inclusion of collective sources of normativity and the importance of the resulting values for individuals is supported by recent empirical research on norms. Layered constructivism can also respond to the common concerns that Humean constructivism fails to adequately account for the immense influence our social lives have on our normative reasons and values, and that it entails an objectionable level of contingency. Finally, acknowledging the existence of differently constructed reasons helps us make sense of the pervasive human experience of navigating a variety of seemingly incommensurable normative reasons.
The current (still limited) use of the notion of informativeness in the domain of information system ontologies seems to indicate that such ontologies are informative if and only if they are understandable for their final recipients. This paper aims at discussing some theoretical issues emerging from that use which, as we will see, connects the informativeness of information system ontologies to their representational primitives, domains of knowledge, and final recipients. Firstly, we maintain that informativeness interacts not only with the actual representational primitives, but also with their variability over time. Secondly, we discuss the correspondence between representational primitives and domains of knowledge of those ontologies. Finally, we explore the possibility of an epistemological discrepancy between human beings and software systems on the understanding of ontological contents.
This article explains why I decided to write the book Just, Reasonable Multiculturalism (CUP, 2021), my appreciation of multiculturalism, and my puzzlement when I heard growing attacks on multiculturalism, describing it as one of the causes of extremist ideology and radicalization. Those attacks brought me to write this book. I discuss some of the main themes of my book: male circumcision, which was the most difficult chapter for me to write; Amish education that brought to my attention a new concern, child abuse, of which I was not aware at first; legal precedents concerning women and minorities and, finally, the two countries I examined in which security considerations underlie and trigger discrimination against minorities: France and Israel.
The traditional view in epistemology has it that knowledge is insensitive to the practical stakes. More recently, some philosophers have argued that knowledge is sufficient for rational action: if you know p, then p is a reason you have (epistemically speaking). Many epistemologists contend that these two claims stand in tension with one another. In support of this, they ask us to start with a low stakes case where, intuitively, a subject knows that p and appropriately acts on p. Then, they ask us to consider a high stakes version of the case where, intuitively, this subject does not know that p and could not appropriately act on p without double-checking. Finally, they suggest that the best explanation for our shifty intuitions is that p is a reason the subject has in the low stakes case but not in the high stakes case. In short, according to this explanation, having a reason (in the epistemic sense) is sensitive to the stakes. If so, either knowledge is sensitive to the stakes or else you can know that p even if p is not a reason you have (in the epistemic sense). In this paper, I consider more closely the relation between having a reason (in the epistemic sense) and having a reason to check. I argue that the supposition that if one has p as a reason (in the epistemic sense) then one has a reason not to check whether p, or no reason to check whether p, is highly doubtful. On the contrary, I suggest, it is plausible that, given our fallibility, one always has an epistemic reason to check whether p, whether or not p is a reason one has (in the epistemic sense). On the basis of this observation, I show that one can offer a new way of explaining the cases in question, allowing us to reconcile the traditional view about knowledge and the sufficiency of knowledge for rational action.
In this paper, I argue that a modified version of well-being subjectivism can avoid the standard, yet unintuitive, conclusion that morally horrible acts may contribute to an agent’s well-being. To make my case, I argue that “Modified Subjectivists” need not accept such conclusions about well-being so long as they accept the following three theoretical addenda: 1) there is a plurality of values pertaining to well-being, 2) there are some objective goods, even if they do not directly contribute to well-being, and 3) some of these values and goods (from 1 and 2) are bound up with one another.
What drives bodies together? What inclines them towards one another? What keeps these bodies inclined towards each other as the world around them continues to fall apart? In this article, I argue that the circulation of grief and anger produces a choral inclination, a relationality forged through our emotional responses to loss. Coming together through this choral inclination allows us to acknowledge loss, confront its conditions, and enact a collective response to it. I engage with feminist philosopher Adriana Cavarero’s concept of inclination and its further development by political theorist Bonnie Honig to theorize the posture and politics of togetherness. Both thinkers turn to prefabricated relationships – between mother and child or sisters – and in proposing an alternative, I offer choral inclination to help theorize the dynamics that bring people together when neither these prefigured relationships nor access to care is readily available. I turn to grief and anger as emotions that circulate in the context of loss, and pluralize the affective responses through which our bodies incline and thus politicize one another. I develop these claims through a novel reading of Euripides’ Hecuba. In the final section, I briefly explore the motivations and leadership structure found within the Movement for Black Lives to develop the implications of choral inclination for feminist and democratic politics. In contrast to commonplace frameworks that consider tragedy primarily in its historical context, I mobilize ancient tragedy to help theorize and enact feminist responses to the contemporary context.
Panentheism is a theism with great potential. Whereas pantheism takes God to be equivalent to the world, panentheism entertains as much while still asserting God’s transcendence of the mere world. There is much beauty in this idea that God is both “in the world” and “above” it. But there is also much subtlety and confusion. Panentheism is notoriously tricky to demarcate from the other theisms, and there is plenty of nuance left to be explored. The core problem of panentheism is this—what exactly does it mean for the world to “include God” and for God to “contain the world?” Numerous answers have been given, but it seems there is still something left to be desired. In this paper, we endeavor to give panentheism a firm and rigorous footing. Utilizing basic category theory, we provide a precise answer to the daunting problem of “world inclusion.” In the process, we also offer a new variety of metaphysical grounding, “morphic grounding.”
Figure: Macrostructure of the argument of the Sumptum (dashed arrows indicate further material premises)
Anselm of Canterbury’s so-called ontological proofs in the Proslogion have puzzled philosophers for centuries. The famous description “something / that than which nothing greater can be conceived” is part and parcel of his argument. Most commentators have interpreted this description as a definition of God. We argue that this view, which we refer to as “definitionism”, is a misrepresentation. In addition to textual evidence, the key point of our argument is that taking the putative definition as what Anselm intended it to be – namely a description of a content of faith – allows us to get a clear view of the discursive status and argumentative structure of Proslogion 2–4, as well as to make sense of an often neglected part of the argument.
In a recent article in this journal, Calum Miller skillfully and creatively argues for the counterintuitive view that there aren’t any good reasons to believe that non-human animals feel pain in a morally relevant sense. By Miller’s lights, such reasons are either weak in their own right or they also favor the view that non-human animals don’t feel morally relevant pain. In this paper, I explain why Miller’s view is mistaken. In particular, I sketch a very reasonable abductive argument for the conclusion that non-human animals feel morally relevant pain. This argument shows that, even in the face of Miller’s moderate skepticism about whether non-human animals feel pain in a morally relevant sense, it’s still more epistemically reasonable to believe that non-human animals feel pain in a morally relevant sense than not. I therefore conclude that Miller has failed to show that there aren’t any good reasons to believe that non-human animals feel pain in a morally relevant sense that don’t also count in favor of the view that non-human animals don’t feel morally relevant pain.
This paper focuses on Schellenberg’s Capacitism about Phenomenal Evidence, according to which if one is in a phenomenal state constituted by employing perceptual capacities, then one is in a phenomenal state that provides phenomenal evidence. This view offers an attractive explanation of why perceptual experience provides phenomenal evidence, and avoids difficulties faced by its contemporary alternatives. However, in spite of the attractions of this view, it is subject to what I call “the alien experience problem”: some alien experiences (e.g. clairvoyant experience) are constituted by employing perceptual capacities, but they do not provide phenomenal evidence. This point is illustrated by a counterexample which is similar to, but also different in some important respects from, Bonjour’s famous clairvoyant Norman example. At the end of the paper, I sketch a restricted version of Capacitism about Phenomenal Evidence by putting some etiological constraint on the perceptual capacities employed.
Reliabilism says that knowledge must be produced by reliable abilities. Abilism disagrees and allows that knowledge is produced by unreliable abilities. Previous research strongly supports the conclusion that abilism better describes how knowledge is actually defined in commonsense and science. In this paper, I provide a novel argument that abilism is ethically superior to reliabilism. Whereas reliabilism unethically discriminates against agents by excluding them from knowing, abilism virtuously includes them.
Nelson Goodman’s theory of symbol systems expounded in his Languages of Art has been frequently criticized on many counts (cf. list of secondary literature in the entry “Goodman’s Aesthetics” of Stanford Encyclopedia of Philosophy and Sect. 3 below). Yet it exerts a strong influence and is treated as one of the major twentieth-century theories on the subject. While many of Goodman’s controversial theses are criticized, the technical notions he used to formulate them seem to have been treated as neutral tools. One such technical notion is that of the density of symbol systems. This serves to distinguish linguistic symbols from pictorial representations (after Goodman entirely rejected resemblance in that role) and is a crucial part of Goodman’s explanation of what constitutes aesthetic experience (and so indirectly what is art). Thus its significance for Goodman’s theory is fundamental. The aim of this paper is a detailed, logical analysis of this notion. It turns out that Goodman’s definition is highly problematic and cannot be applied to symbol systems in the way Goodman envisaged. To conclude, Goodman’s theory is problematic not just because of its controversial theses but also because of logical problems with the technical notions used at its very core. Hence the controversial claims are not simply contestable, but inaccurately expressed.
McGowan argues “that ordinary utterances routinely enact norms without the speaker having or exercising any special authority” and thereby not “merely cause” but “constitute” harm if harm results from adherence to the enacted norms. The discovery of this “previously overlooked mechanism,” she claims, provides a potential justification for “further speech regulation.” Her argument is unsuccessful. She merely redefines concepts like “harm constitution” and “norm enactment” and fails to explain why speech that “constitutes” harm is legally or morally problematic and thus an initially more plausible target for speech regulation than speech that “merely causes” harm. Even if she could explain that, however, her account would still be incapable of identifying cases where utterances “constitute harm.” This is so for two reasons. First, she provides neither analytical nor empirical criteria for deciding which (if any) so-called “s-norms” have been enacted by an “ordinary utterance.” Second, even if such criteria could be provided, there is no epistemically available means to distinguish whether harm has ensued due to adherence to the enacted s-norms or through other mechanisms (like “mere causation”). Given this lack of criteria and practical applicability, there is no way that this account could serve as a principled basis for speech regulation – it could only serve as a pretext for arbitrary censorship.
Block’s (The Philosophical Review, 90(1), 5–43, 1981) anti-behaviourist attack on the Turing Test not only illustrates that the test is not a sufficient criterion for attributing thought; I suggest that it also exemplifies the limiting case of the more general concern that a machine with access to enormous amounts of data can pass the Turing Test by simple symbol-manipulation techniques. If the answers to a human interrogator are entailed by the machine’s data, the Turing Test offers no clear criterion to distinguish between a thinking machine and a machine that merely manipulates representations of words and sentences, as is found in contemporary Natural Language Processing models. This paper argues that properties about vagueness are accessible to any human-like thinker but do not normally display themselves in ordinary language use. Therefore, a machine that merely performs simple symbol manipulation from large amounts of previously acquired data – where this body of data does not contain facts about vagueness – will not be able to report on these properties. Conversely, a machine that has the capacity to think would be able to report on these properties. I argue that we can exploit this fact to establish a sufficient criterion of thought. The criterion is a specification of some of the questions that, as I explain, should be asked by the interrogator in a Turing Test situation.
This paper describes semantic communication as an arbitrary loss function. I reject the logical approach to semantic information theory described by Carnap, Bar-Hillel and Floridi, which assumes that semantic information is a logical function of Shannon information mixed with categorical objects. Instead, I follow Hirotugu Akaike’s maximum entropy approach to model semantic communication as a choice of loss. The semantic relationship between a thing and a message about the thing is modelled as the loss of information that results in the impression contained in the message, so that the semantic meaning of a bear’s footprint is the difference between the actual bear and its footprint. Experience has a critical function in semantic meaning because a bear footprint can only be meaningful if we have some experience with an actual bear. The more direct our experience, the more vivid the footprint will appear. In this model, what is important is not the logic of the categories represented by the information but the loss of information that reduces our experiences of reality to functional communication. The hard problem of semantic communication arises because real objects and events do not come with categorical labels attached, so the choice of loss is necessarily imperfect and illogical.
In “Justice as Fairness: Political not Metaphysical,” John Rawls suggests an approach to a public conception of justice that eschews any dependence on metaphysical conceptions of justice in favor of a political conception of justice. This means that if there is a metaphysical conception of justice that actually obtains, then Rawls’ theory would not (and could not) be sensitive to it. Rawls himself admitted in Political Liberalism that “the political conception does without the truth.” Similarly, in Law of Peoples, Rawls endorses a political conception of justice to govern the society of peoples that is not concerned with truth, but instead concerned with being sufficiently neutral so as to avoid conflict with any reasonable comprehensive doctrines. The odd result is that this neutrality excludes any conception of truth at all. Therefore, in times of crisis that demand incisive decision making based on scientific, economic or moral considerations, public reason will stall because it can contain no coherent conception of truth.
The author reviews the research literature of the last thirty years analyzing the sphere of informal philosophizing in Russia. He discusses the genesis and content of the idea of non-institutional creative philosophizing in the works of well-known contemporary Russian philosophers. He notes that philosophical self-identity in today’s Russia is characterized by uncertainty and mobility. This is directly related to such phenomena as the rapid collapse of book culture, the change of media of communication, a clear decrease in the abstract-reflective component of thinking, and the audiovisual turn. The author surveys both generalized studies of intellectual life and non-institutional philosophizing in modern Russia and works examining the theoretical platforms of informal philosophical groups.
Dialetheism is the view that some contradictions are true. Resorting to either metaphysical dialetheism or semantic dialetheism may seem like an appropriate way to resolve certain theological contradictions, at least for those who concede that there are theological contradictions and take dialetheism seriously. However, I demonstrate that neither of these types of dialetheism can resolve an Islamic theological contradiction, one that I refer to as ‘the paradox of an unknowable and ineffable God’. In response, I propose an alternative type of dialetheism that aims to resolve this paradox. I call this type of dialetheism ‘mystical dialetheism’.
In a recent book devoted to the axiology of theism, Kirk Lougheed has argued that the ‘complete understanding’ argument should be numbered among the arguments for anti-theism. According to this argument, God’s existence is detrimental to us because, if a supernatural and never completely understandable God exists, then human beings are fated to never achieve complete understanding. In this article, I argue that the complete understanding argument for anti-theism fails for three reasons. First, complete understanding is simply impossible to achieve. Second, even if achieving complete understanding were possible, it would not be beneficial. Third, the only type of complete understanding that is possible to achieve and is beneficial to human beings is the understanding of that which is of primary importance to us, and not the understanding of everything, as Lougheed seems to assume. God can grant us complete understanding of that which is of primary importance to us. As a consequence, God’s existence ends up being beneficial and not detrimental to us.
In Machiavelli’s Prince there appears to be a link between Chapter IX on the civil principality and the hope for a unification of Italy by a new prince – a theme presented in the final Exhortation. In both sections, Machiavelli’s unusual lack of historical illustrations suggests the hypothesis that the civil principality and the new prince serve a symbolic function (not one of practical advice). The reading proposed here argues that there is an ideal relation between Machiavelli’s Prince and the Discourses on Livy regarding the opportunity for a regime change from popular principality to Republic. The possibility of switching from one form of government to the other rests on Machiavelli’s idealizations of the civil principality and the new prince. Accordingly, it is argued that Machiavelli’s method follows a strategy of mythologizing realism. In the civil principality the tension between the new prince and his people is not resolved, nor is that between the plebs and the nobles. The paper concludes that Machiavelli’s emphasis on ‘the solitude’ of the prince is what ultimately justifies a transition from the civil principality to the Republic.
Top-cited authors
Daniel Hutto
  • University of Wollongong
Glenda Satne
  • University of Wollongong
Erik Rietveld
  • University of Amsterdam
Julian Kiverstein
  • Academisch Medisch Centrum Universiteit van Amsterdam
Giovanna Colombetti
  • University of Exeter