Philosophical Studies

Published by Springer Nature
Online ISSN: 1573-0883
Print ISSN: 0031-8116
Recent publications
  • Miles Tucker
I develop and defend a maximizing theory of moral motivation: I claim that consequentialists should recommend only those desires, emotions, and dispositions that will make the outcome best. I advance a conservative account of the motives that are possible for us; I say that a motive is an alternative if and only if it is in our psychological control. The resulting theory is less demanding than its competitors. It also permits us to maintain many of the motivations that we value most, including our love for those most important to us. I conclude that we are closer to meeting morality’s demands on our character than has been appreciated.
The primary goal of this paper is to provide substantial motivation for exploring an Acquaintance account of phenomenal consciousness, on which what fundamentally explains phenomenal consciousness is the relation of acquaintance. Its secondary goal is to take a few steps towards such an account. Roughly, my argument proceeds as follows. Motivated by prioritizing naturalization, the debate about the nature of phenomenal consciousness has been almost monopolized by representational theories (first-order and meta-representational). Among them, Self-Representationalism is by far the most antecedently promising (or so I argue). However, on thorough inspection, Self-Representationalism turns out not to be explanatorily or theoretically better than the Acquaintance account. Indeed, the latter seems to be superior in at least some important respects. Therefore, at the very least, there are good reasons to take the Acquaintance account into serious consideration as an alternative to representational theories. The positive contribution of this paper is a sketch of an account of consciousness on which phenomenal consciousness is explained partly in representationalist terms, but where a crucial role is played by the relation of acquaintance.
In her wonderful book, Approaches to the Theory of Freedom in Sor Juana Inés de la Cruz, Virginia Aspe produces a groundbreaking presentation of Sor Juana’s theory of freedom with productive scholarship on the Coimbran Jesuit tradition and Renaissance Humanism in Latin America. In this paper, I center on Aspe’s interpretation of two of Sor Juana’s major works, First Dream, in which a disembodied dreaming soul rises into the nighttime sky in an attempt to take in and understand the universe, and Critique of a Sermon, in which Sor Juana lays out counter-arguments to the Jesuit preacher Antonio Vieira’s designation of the greatest demonstration of Christ’s love and its implications for human freedom and God’s grace. My goal is to supplement and sometimes critique the interpretations endorsed by Aspe so as to ultimately enrich Aspe’s key insights into Sor Juana’s philosophical views. I first argue that Aspe’s Aristotelian interpretation of First Dream is necessary but not sufficient for understanding Sor Juana’s poem, which requires the inclusion of 15th- and 16th-century Neoplatonism. Second, I argue that in her interpretation of Critique of a Sermon Aspe misattributes a view to Sor Juana that belongs to Antonio Vieira, which requires a small tweak to her theory of Sor Juana’s views on freedom and human moral psychology.
This paper first demonstrates that recognition of the diversity of ways that emotional responses modulate ongoing attention generates what I call the puzzle of emotional attention, which turns on the fact that distinct emotions (e.g., fear, happiness, disgust, admiration) have different attentional profiles. The puzzle concerns why this is the case, such that a solution consists in explaining why distinct emotions have the distinct attentional profiles they do. It then provides an account of the functional roles of different emotions, as tied to their evaluative themes, which explains and further elucidates the distinctive attentional profiles of different emotions, thereby solving the puzzle of emotional attention. Following that, it outlines how such attentional profiles are reflected in the character of emotional experience and its attentional phenomenology. The resulting picture is a more detailed account of the connections between emotion and attention than is currently on offer in the philosophical literature.
According to a familiar modern view, color and other so-called secondary qualities reside only in consciousness, not in the external physical world. Many have argued that this “Galilean” view is the source of the mind-body problem in its current form. This paper critically examines a radical alternative to the Galilean view, which has recently been defended or sympathetically discussed by several philosophers, a view I call “anti-modernism.” Anti-modernism holds, roughly, that the modern Galilean scientific image is incomplete – in particular, it leaves out certain irreducible qualitative properties, such as colors – and that we can solve (or dissolve) the hard problem of consciousness by accepting these properties as objective features of the external physical world. I argue, first, that anti-modernism cannot fulfill its promise. Even if the outer world is resplendent with primitive colors, color experience remains a mystery. Second, I argue that the theoretical costs of accepting irreducible colors in the world are enormous. Even if irreducible colors in the world could dispel the mysteries surrounding consciousness, the theoretical benefit would not be worth the cost. If the problems of consciousness and color require that we posit irreducible properties somewhere, it would be far more plausible to accept irreducible phenomenal properties on the side of the subject, and to reject irreducible colors on the side of the object.
What is it to be able to intend to do something? At the end of her ground-breaking book, Agents’ Abilities, Romy Jaster identifies this question as a topic for future research. This article tackles the question from within the framework Jaster assembled for understanding abilities. The discussion takes place in two different spheres: intentions formed in acts of deciding, and intentions not so formed. The gradability of abilities has an important place in Jaster’s framework, and it is explained how abilities to acquire intentions of these two kinds (including both general and specific abilities) can come in degrees, as she conceives of degrees of ability. Although Jaster “sympathize[s] with the idea that having an ability to intend to [A] is a matter of intending to [A] in a sufficient proportion of the relevant possible situations in which there is an overriding reason to intend to [A],” an alternative to this idea is developed.
Existing work on gaslighting ties it constitutively to facts about the intentions or prejudices of the gaslighter and/or his victim’s prior experience of epistemic injustice. I argue that the concept of gaslighting is more broadly applicable than has been appreciated: what is distinctive about gaslighting, on my account, is simply that a gaslighter confronts his victim with a certain kind of choice between rejecting his testimony and doubting her own basic epistemic competence in some domain. I thus hold that gaslighting is a purely epistemic phenomenon—not requiring any particular set of intentions or prejudices on the part of the gaslighter—and also that it can occur even in the absence of any prior experience of epistemic injustice. Appreciating the dilemmatic character of gaslighting allows us to understand its connection with a characteristic sort of epistemic harm, makes it easier to apply the concept of gaslighting in practice, and raises the possibility that we might discover its structure and the associated harm in surprising places.
Assertion is widely regarded as an act associated with an epistemic position. To assert is to represent oneself as occupying this position and/or to be required to occupy this position. Within this approach, the most common view is that assertion is strong: the associated position is knowledge or certainty. But recent challenges to this common view present new data that are argued to be better explained by assertion being weak. Old data widely taken to support assertion being strong has also been challenged. This paper examines such challenges and finds them wanting. Far from diminishing the case for strong assertion, carefully considering new and old data reveals that assertion is as strong as ever.
Why is it good to be less, rather than more, incoherent? Julia Staffel, in her excellent book “Unsettled Thoughts,” answers this question by showing that if your credences are incoherent, then there is some way of nudging them toward coherence that is guaranteed to make them more accurate and reduce the extent to which they are Dutch-bookable. This seems to show that such a nudge toward coherence makes them better fit to play their key epistemic and practical roles: representing the world and guiding action. In this paper, I argue that Staffel’s strategy needs a small tweak. While she identifies appropriate measures of epistemic value, she does not identify appropriate measures of practical value. Staffel measures practical value using Dutch-bookability scores. But credences have practical value in virtue of recommending actions that produce as much utility as possible. And while susceptibility to a Dutch book is a surefire sign that one’s credences are needlessly bad at this task, one’s degree of Dutch-bookability is not itself a good measure of how well they recommend practically valuable actions. Strictly proper scoring rules, I argue, are the right tools for measuring both epistemic and practical value. I show that we can rerun Staffel’s strategy swapping in strictly proper scoring rules for Dutch-bookability measures. So long as one’s epistemic scoring rule and practical scoring rule are “sufficiently similar,” there is some way of nudging incoherent credences toward coherence that is guaranteed to yield more of both types of value.
This paper explores the question of what makes an action morally worthy. I start with a popular theory of moral worth which roughly states that a right action is morally praiseworthy if and only if it is performed in response to the reasons which make the action right. While I think the account provides promising foundations for determining praiseworthiness, I argue that the view lacks the resources to adequately satisfy important desiderata associated with theories of moral worth. Firstly, the view does not adequately capture the degree to which an action has moral worth, and secondly, the view does not identify if right actions produced from overdetermined motives have moral worth. However, all is not lost; I also argue that the account can satisfy the desiderata when it attends to the agent’s counterfactual motives in addition to their actual motives. By considering counterfactual motives, we can measure the robustness of the actual praiseworthy motive, and attending to motivational robustness allows the new proposal to fully satisfy the two desiderata. At the end of this paper, I respond to some criticisms typically brought against a counterfactual view of moral worth.
Many college teachers believe that teaching can promote justice. Meanwhile, many in the broader American public disparage college classrooms as spaces of left-wing partisanship. This paper engages with that charge of partisanship. Section 1 introduces the charge. Then, in Section 2, I consider what teaching for justice should aim to do. I argue that selective institutions of higher education impose positional costs on members of a generation who do not attend them, and that those positional costs accrue not only in terms of distributive equality but also in terms of civic equality. Teaching for justice, I argue, should be understood as an attempt to lessen those costs. But the civic equality costs that selective higher education imposes can be meaningfully lessened only by a radical version of teaching for justice: educational consciousness raising for institutional reform. This sets up a high hurdle for any defense against the partisanship charge, because the kind of teaching for justice we have most justice-related reason to engage in seems especially susceptible to that charge. Section 3 gives a public reasons case in favor of teaching for justice so understood. Because civic equality is a commitment we all ought to share as free and equal citizens, teaching for justice aimed at restoring civic equality enjoys a public reasons justification. Still, an all-things-considered assessment of our permissions or obligations to engage in this teaching project awaits a careful thinking through of the case against it.
This paper presents a puzzle about the logic of real definition. I demonstrate that five principles concerning definition—that it is coextensional and irreflexive, that it applies to its cases, that it permits expansion, and that it is itself defined—are logically incompatible. I then explore the advantages and disadvantages of each principle—one of which must be rejected to restore consistency.
Causal pluralists hold that there is not just one determinate kind of causation. Some causal pluralists hold that ‘cause’ is ambiguous among these different kinds. For example, Hall argues that ‘cause’ is ambiguous between two causal relations, which he labels dependence and production. The view that ‘cause’ is ambiguous, however, wrongly predicts zeugmatic conjunction reduction, and wrongly predicts the behaviour of ellipsis in causal discourse. So ‘cause’ is not ambiguous. If we are to disentangle causal pluralism from the ambiguity claim, we need to consider what other linguistic approaches are available to the causal pluralist. I consider and reject proposals that ‘cause’ is a general term, that the term is an indexical, and that the term conveys different kinds of causation through implicature or presupposition. Finally, I argue that causal pluralism is better handled by treating ‘cause’ as a univocal term within a dynamic interpretation framework.
The fundamentality square
What grounds facts of ground? Some metaphysicians invoke fundamental grounding laws to answer this question. These are general principles that link grounded facts to their grounds. The main business of this paper is to advance the debate about the metaphysics of grounding laws by exploring the prospects of a plausible yet underexplored minimalist account, one which is structurally analogous to a familiar Humean conception of natural laws. In the positive part of this paper, I articulate such a novel view and argue for its merits. The minimalist account shuns essences and takes laws to be unmysterious elite regularities. Therefore, it is a promising alternative for theorists of ground who spurn the acceptance of essentialism about the grounding laws but think that these are needed in our theorizing. In the negative part, I argue that widely accepted principles of ground, coupled with the tenets of minimalism, jeopardize the fundamentality of the grounding laws. I discuss two immediately available and prima facie appealing strategies to evade this threat. However, I show that both have undesirable theoretical costs. I conclude by casting doubts on whether the benefits of a minimalist account of fundamental grounding laws outweigh such costs.
Recent work on empathy has focused on the phenomenon of feeling on behalf of, or for, others, and on determining the role it ought to play in our moral lives. Much less attention, however, has been paid to ‘feeling-with.’ In this paper, I distinguish ‘feeling-with’ from ‘feeling-for.’ I identify three distinguishing features of ‘feeling-with,’ all of which serve to make it distinct from empathy. Then, drawing on work in feminist moral psychology and feminist ethics, I argue that ‘feeling-with’ has unique moral value over and above ‘feeling-for.’ I end by rebutting some likely objections to the claim that ‘feeling-with’ is morally valuable.
I argue that there are some situations in which it is praiseworthy to be motivated only by moral rightness de dicto, even if this results in wrongdoing. I consider a set of cases that are challenging for views that dispute this, prioritising concern for what is morally important (de re, and not de dicto) in moral evaluation (for example, Arpaly, 2003; Arpaly & Schroeder, 2013; Harman 2015; Weatherson, 2019). In these cases, the agent is not concerned about what is morally important (de re), does the wrong thing, but nevertheless seems praiseworthy rather than blameworthy. I argue that the views under discussion cannot accommodate this, and should be amended to recognise that it is often praiseworthy to be motivated to do what is right (de dicto).
This paper will examine a novel argument in favour of entity grounding over fact-only grounding. The idea of this argument, roughly speaking, is that the proponents of fact-only grounding cannot provide a unified account of grounds of identity, whereas the proponents of entity grounding can. In this paper, I will give a response to this argument. Specifically, I will argue that the problem which this argument raises to the proponents of fact-only grounding is also a problem with which the proponents of entity grounding are faced. Therefore, this argument fails to show that entity grounding is superior to fact-only grounding. Moreover, I will suggest that the failure of this argument points to a general lesson about the issue of grounds of identity facts.
Philosophers tend to assume a close logical connection between seeing-as reports and seeing-that reports. But the proposals they have made have one striking feature in common: they are demonstrably false. Going against the trend, I suggest we stop trying to lump together seeing-as and seeing-that. Instead, we need to realize that there is a deep logical kinship between seeing-as reports and seeing-objects reports.
Illusionism is the thesis that phenomenal consciousness does not exist, but merely seems to exist. Many opponents of the thesis take it to be obviously false. They think that they can reject illusionism even if they concede that it is coherent and supported by strong arguments. David Chalmers has articulated this reaction to illusionism in terms of a “Moorean” argument against illusionism. This argument contends that illusionism is false, because it is obviously true that we have phenomenal experiences. I argue that this argument fails (or is dialectically irrelevant) by showing that its defenders cannot maintain that its crucial premise (properly understood) has the kind of support needed for the argument to work, without begging the question against illusionism.
People give surprising weight to others' expectations about their behaviour. I argue that the practice of conforming to others' expectations is ethically well-grounded. A special class of 'reasonable expectations' can create prima facie obligations even in cases where the expectations arise from contingent pre-existing practices, and the duty-bearer has neither created them nor directly benefited from them. The obligation arises because of the substantial goods that follow from such conformity, goods capable of being endorsed from many different ethical perspectives and implicating key moral factors such as consent, fairness, respect, autonomy, and reciprocity. Given the innumerable situations where such expectations can arise, their ethical significance is critical both practically and philosophically.
According to the so-called ‘proportionality principle’, causes should be proportional to their effects: they should be both enough and not too much for the occurrence of their effects. This principle is the subject of an ongoing debate. On the one hand, many maintain that it is required to address the problem of causal exclusion and take it to capture a crucial aspect of causation. On the other hand, many object that it renders accounts of causation implausibly restrictive and often reject the principle wholesale. I argue that there is exaggeration on both sides. While one half of the principle is overly demanding, the other half is unobjectionable. And while the unobjectionable half does not block exclusion arguments on its own, it provides a nuanced picture of higher-level causation, fits with recent developments in philosophy of causation, and motivates adjustments to standard difference-making accounts of causation. I conclude that at least half of the proportionality principle is worth taking seriously.
Prior’s puzzle is a puzzle about the substitution of certain putatively synonymous or coreferential expressions in sentences. Prior’s puzzle is important, because a satisfactory solution to it should constitute a crucial part of an adequate semantic theory for both proposition-embedding expressions and attitudinal verbs. I argue that two recent solutions to this puzzle are unsatisfactory: they focus either on the meaning of attitudinal verbs or on that of content nouns. I propose a solution relying on a recent analysis of that-clauses in linguistics. This solution is superior, as it not only avoids the problems faced by previous solutions but also brings developments in linguistics to bear on an old puzzle in philosophy.
In this paper I argue that if it is metaphysically possible for it to have been the case that nothing existed, then it follows that the right modal logic cannot extend D, ruling out the popular modal logics S4 and S5. I provisionally defend the claim that it is possible for nothing to have existed. I then consider the various ways of resisting the conclusion that the right modal logic is weaker than D.
I consider the plausibility of discounting for kinship, the view that a positive rate of pure intergenerational time preference is justifiable in terms of agent-relative moral reasons relating to partiality between generations. I respond to Parfit's objections to discounting for kinship, but then highlight a number of apparent limitations of this approach. I show that these limitations largely fall away when we reflect on social discounting in the context of decisions that concern the global community as a whole, such as those related to global climate change.
A.J. Cotnoir has argued that we should distinguish between two notions of proper parthood: outstripped part and non-identical part. Outstripped parthood is an asymmetric relation, but non-identical parthood is not. We argue, first, that the intuitions Cotnoir uses to motivate these notions do not always give the right verdict; and, second, that systematic reasons for distinguishing these two notions of parthood have further counter-intuitive consequences. This means the distinction between two notions of proper parthood currently lacks adequate motivation.
In various of my writings, both in Philosophical Studies and elsewhere, I have argued that an account of trying sentences is available that does not require quantification over alleged attempts or tryings. In particular, adverbial modification in such sentences can be dealt with, without quantification over any such particulars. In ‘Attempts’, Jonathan D. Payton (Payton, 2021) has sought to dispute my claim. In this paper, I consider his claims and reply to them. I believe that my account withstands such scrutiny. In what follows, I refer to my book as ‘MA’, in giving page numbers to guide the reader. ‘Payton’ always refers to ‘Payton 2021’.
One well-known objection to supersubstantivalism is that it is inconsistent with the contingency of location. This paper presents a new objection to supersubstantivalism: it is inconsistent with the vagueness of location. Though contingency and vagueness are formally similar, there are important philosophical differences between the two. As a result, the objection from vague location will be structurally different than the objection from contingent location. The paper explores these differences and then defends the argument that supersubstantivalism is inconsistent with the plausible thesis that it is vague where I am located.
In this paper, I defend a version of the medical model of disability, which defines disability as an enduring biological dysfunction that causes its bearer a significant degree of impairment. We should accept the medical model, I argue, because it succeeds in capturing our judgments about what conditions do and do not qualify as disabilities, because it offers a compelling explanation for what makes a condition count as a disability, and because it justifies why the federal government should spend hundreds of billions of dollars, annually, on aid and accommodations for disabled people. After responding to a pair of objections Elizabeth Barnes has raised against the medical model, I contrast it with Guy Kahane and Julian Savulescu's welfarist account of disability, and with Barnes's own mere-difference view. Both of these accounts face serious challenges, although elements of Barnes's view can—and, in my opinion, should—be adopted by proponents of the medical model.
A growing consensus in the literature on agentive modals has it that ability modals like ‘can’ or ‘able to’ have a dual, i.e. interpretations of ‘must’ or ‘cannot but’ which stand to necessity as ability stands to possibility. We argue that this thesis (which we call ‘Agentive Duality’) is much more controversial than meets the eye. While Agentive Duality follows from the orthodox possibility analysis of ability given natural assumptions, it sits uneasily with a wide range of alternative proposals which are unified by the idea that ability requires control. In particular, we show that against the background of a control requirement on ability, Agentive Duality can be used to derive absurd predictions featuring this dual. Far from being a purely definitional thesis, Agentive Duality thus affords a new lens through which to assess the long-standing debate between possibility analyses of ability and their discontents.
The notion of individualised evidence holds the key to solving the puzzle of statistical evidence, but there’s still no consensus on how exactly to define it. To make progress on the problem, epistemologists have proposed various accounts of individualised evidence in terms of causal or modal anti-luck conditions on knowledge like appropriate causation (Thomson 1986), sensitivity (Enoch et al. 2012) and safety (Pritchard 2018). In this paper, I show that each of these fails as a satisfactory anti-luck condition, and that such failure lends abductive support to the following conclusion: once the familiar anti-luck intuition on knowledge is extended to individualised evidence, no single causal or modal anti-luck condition on knowledge can succeed as the right anti-luck condition on individualised evidence. This conclusion casts serious doubt on the fruitfulness of the move from anti-luck conditions on knowledge to anti-luck conditions on individualised evidence. I expand on these doubts and point out further aspects where epistemology and the law come apart: epistemic anti-luck conditions on knowledge do not adequately characterise the legal notion of individualised evidence.
When I am looking at an apple, I perceptually attribute certain properties to certain entities. Two questions arise: what are these entities (what is it that I perceptually represent as having properties) and what are these properties (what properties I perceive this entity as having)? This paper is about the former, less widely explored, question: what does our perceptual system attribute properties to? In other words, what are these ‘sensory individuals’? There have been important debates in philosophy of perception about what sensory individuals would be the most plausible candidates for which sense modalities. The aim of this paper is to ask a related question about picture perception: what is the sensory individual of picture perception? When we look at a picture and see an apple depicted in it, what kind of entity do we see? What do we perceptually attribute properties to? I argue that the most straightforward candidates (ordinary objects, sui generis sensory individuals, no sensory individuals) are all problematic and that the most plausible candidate for the sensory individuals of picture perception are spatiotemporal regions.
A good chunk of the recent discussion of hypocrisy has concerned the hypocritical “moral address” where, in the simplest case, a person criticises another for φ-ing having engaged in φ-ing himself, and where the critic’s reasons are overtly moral. The debate has conceptual and normative sides to it. We ask both what hypocrisy is, and why it is wrong. In this paper I focus on the conceptual explication of hypocrisy by examining the pragmatic features of the situation where accusations of hypocrisy are made. After rejecting several extant views, I defend the idea that moral criticisms are best understood as moves in an agonistic or hostile conversation, and that charges of hypocrisy are attempts to prevent the hypocrite from gaining an upper hand in a situation of conflict. I finish by linking this idea to frame-theoretic analysis and evolutionary psychology.
Alexander Pruss has given a quick argument against the claim that consistency is possibility using Gödel’s second incompleteness theorem. The argument does not distinguish metalanguage claims of consistency from object-language ones, rendering it unsound.
I argue that fictionalism about grounding is unmotivated, focusing on Naomi Thompson’s (2022) recent proposal on which the utility of the grounding fiction lies in its facilitating communication about what metaphysically explains what. I show that, despite its apparent dialectical kinship with other metaphysical debates in which fictionalism has a healthy tradition, the grounding debate is different in two key respects. Firstly, grounding talk is not indispensable, nor even particularly convenient as a means of communicating about metaphysical explanation. This undermines the revolutionary proposal. Secondly, talk of grounding primarily occurs within metaphysics, which means the usual options for motivating a non-literal interpretation are ineffective. This undermines the hermeneutic proposal.
Where does normativity come from? Or alternatively, in virtue of what do facts about what an agent has reason to do obtain? On one class of views, reason facts obtain in virtue of agents’ motivations. It might seem like a truism that at least some of our reasons depend on what we desire or care about. However, some philosophers, notably Derek Parfit, have convincingly argued that no reasons are grounded in this way. Typically, this latter, externalist view of reasons has been thought to enjoy the advantage of extensional adequacy—that is, the ability to account for all the reasons we intuitively think people have. This paper provides a novel argument against this assumption by considering a type of case wherein the relative strengths of the agent’s reasons can only be adequately explained by reference to what she cares about. Adding some further assumptions yields that there are at least some internally sourced reasons.
Constitutivism holds that an account of what a thing is yields those normative standards to which that thing is by nature subject. We articulate a minimal form of constitutivism that we call formal, non-epistemological constitutivism, which diverges from orthodox versions of constitutivism in two main respects. First: whereas orthodox versions of constitutivism hold that those ethical norms to which people are by nature subject are sui generis because of their special capacity to motivate action and legitimate criticism, we argue that these features are compatible with treating these norms as of a piece with those ‘formal’ natural-historical norms which can be used to assess living things. Second: unlike orthodox versions of constitutivism, our version does not seek to use a non-normative account of that kind of being which we are as a means of identifying those normative claims to which we are by nature subject. We then indicate how our position can afford us the resources to address some of the familiar difficulties that face cognitivism in ethics.
Perhaps the biggest disconnect between philosophers and non-philosophers on the question of gun rights is over the relevance of arms to our dignitary interests. This essay attempts to address this gap by arguing that we have a strong prima facie moral right to resist with dignity and that violence is sometimes our most or only dignified method of resistance. Thus, we have a strong prima facie right to guns when they are necessary often enough for effective dignified resistance. This approach is distinctively non-libertarian: it doesn’t justify gun rights on the basis of (mere) liberty or security. Nonetheless it is compatible with libertarian defenses of gun rights based on a liberty right to guns, and, if sound, in fact lowers the bar for gun rights in some ways, as it justifies access to guns even when nonviolent means would better achieve the liberty or security aims of potential victims. And although this defense of gun rights is most readily categorized as “conservative” or rightist, it relies upon principles and intuitions about dignity popular among progressives in other domains, such as in disability, women’s, or LGBT rights debates.
Moral judgments about harming 10 individuals of a species to save 100 of the same species, rated on a 7-point scale from 1 (absolutely morally wrong) through 4 (neither right nor wrong) to 7 (absolutely morally right)
Robert Nozick famously raised the possibility that there is a sense in which both deontology and utilitarianism are true: deontology applies to humans while utilitarianism applies to animals. In recent years, there has been increasing interest in such hybrid views of ethics. Discussions of this Nozickian Hybrid View, and similar approaches to animal ethics, often assume that such an approach reflects the commonsense view, and best captures common moral intuitions. However, recent psychological work challenges this empirical assumption. We review evidence suggesting that the folk is deontological all the way down—it is just that the moral side constraints that protect animals from harm are much weaker than those that protect humans. In fact, it appears that people even attribute some deontological protections, albeit extremely weak ones, to inanimate objects. We call this view Multi-level Weighted Deontology. While such empirical findings cannot show that the Nozickian Hybrid View is false, or that it is unjustified, they do remove its core intuitive support. That support belongs to Multi-level Weighted Deontology, a view that is also in line with the view that Nozick himself seemed to favour. To complicate things, however, we also review evidence that our intuitions about the moral status of humans are, at least in significant part, shaped by factors relating to mere species membership that seem morally irrelevant. We end by considering the potential debunking upshot of such findings about the sources of common moral intuitions about the moral status of animals.
On a simple and neat view, sometimes called the Relational Analysis of Attitude Ascriptions, a belief ascription of the form ‘S believes that x is F’ is correct if, and only if, S stands in the belief-relation to the proposition designated by ‘that x is F’, i.e., the proposition that x is F. It follows from this view that, for a person to believe, say, that x is a boat, there is one unique proposition that she has to believe. This paper argues against this view. It fails, I contend, to make sense of peripheral concept variation. As we attribute and individuate concepts, two people’s concepts C1 and C2 count as, e.g., concepts of boats even if their concepts have different extensions in peripheral, or borderline, cases of boats. Thus, A and B can believe that x is a boat through believing peripherally different propositions. It follows that there is no unique proposition that a person has to believe in order to believe, e.g., that x is a boat.
The Kanizsa triangle
Amodal completion
Amodal completion is usually characterized as the representation of those parts of the perceived object from which we get no sensory stimulation. In the case of the visual sense modality, for example, amodal completion is the representation of occluded parts of objects we see. I argue that relationalism about perception, the view that perceptual experience is constituted by the relation to the perceived object, cannot give a coherent account of amodal completion. The relationalist has two options: construe the perceptual relation as the relation to the entire perceived object or as the relation to the unoccluded parts of the perceived object. I argue that neither of these options is viable.
Here, I put forward a new account of how experience gives rise to the belief that time passes. While there is considerable disagreement amongst metaphysicians as to whether time really does pass, it has struck many as a default, ‘common sense’ way of thinking about the world. A popular way of explaining how such a belief arises is to say that it seems perceptually as though time passes. Here I outline some difficulties for this approach, and propose instead that the belief in time passing is elicited by a particular feature of agentive experience. When we deliberately move our bodies, bring something to mind, or focus our attention, we experience ourselves as the sources of these actions. Sensing oneself as a source, I argue, is a unique type of change experience, one which leads us to a belief that time is passing.
Nomic realists have traditionally put laws to work within a theory of natural modality, in order to provide a metaphysical source for causal necessitation, counterfactuals, and dispositions. However, laws are well-suited to perform other work as well. Necessitation is a widespread phenomenon and includes (for example) cases of categorial, conceptual, grounding, mathematical and normative necessitation. A permissive theory of universals allows us to extend nomic realism into these other domains. With a particular focus on grounding necessitation, it is argued that the sorts of reasons for positing laws in the natural causal domain also apply in other domains. Laws might well be the source of all first-order modality.
The New Evil Demon Problem presents a serious challenge to externalist theories of epistemic justification. In recent years, externalists have developed a number of strategies for responding to the problem. A popular line of response involves distinguishing between a belief’s being epistemically justified and a subject’s being epistemically blameless for holding it. The apparently problematic intuitions the New Evil Demon Problem elicits, proponents of this response claim, track the fact that the deceived subject is epistemically blameless for believing as she does, not that she is justified for so believing. This general strategy—which I call the “unjustified-but-blameless maneuver”—is motivated, in part, by the assumption that the distinction between epistemic justification and blamelessness is merely an extension of the familiar distinction between moral justification and blamelessness. In this paper, I consider three ways of drawing the distinction between justification and blamelessness familiar from the moral domain: the first in terms of a connection with reactive attitudes, the second in terms of the distinction between wrongness and wronging, and the third in terms of reasons-responsiveness. All three ways of drawing the distinction, I argue, make it difficult to see how an analogous distinction in the epistemic domain could help externalists explain away the intuitions which underwrite the New Evil Demon Problem. Motivating the unjustified-but-blameless maneuver, I conclude, is a much less straightforward task than its proponents tend to assume.
According to what may be called PERMANENT, blameworthiness is forever: once you are blameworthy for something, you are always blameworthy for it. Here a prima facie case for this view is set out, and the view is defended from two lines of attack. On one, you are no longer blameworthy for a past offense if, despite being the person who committed it, you no longer have any of the pertinent psychological states you had at the time of the misdeed. On the other, you can cease to be blameworthy if you experience sufficient guilt or remorse, suffer enough punishment, or are forgiven for your misdeed. Although several points made in support of the second challenge are accepted, they are entirely consistent with PERMANENT. Neither line of attack, as so far presented, undermines the plausibility of this view stemming from the prima facie case.
What is the source of aesthetic knowledge? Empirical knowledge, it is generally held, bottoms out in perception. Such knowledge can be transmitted to others through testimony, preserved by memory, and amplified via inference. But perception is where the rubber hits the road. What about aesthetic knowledge? Does it too bottom out in perception? Most say “yes”. But this is wrong. When it comes to aesthetic knowledge, it is appreciation, not perception, where the rubber hits the road. The ultimate source of aesthetic knowledge is feeling. In this essay, we articulate and defend the very idea of affective knowledge and reveal aesthetic knowledge to be a species of the genus. We then show how the view resolves a thorny problem that has bedeviled aesthetic epistemologists: how to reconcile the seemingly direct character of aesthetic knowledge with the possibility of acquiring such knowledge from criticism. One learns from criticism, we argue, when it guides one’s engagement with an object so that one can appreciate it in virtue of those of its features that render it worthy of appreciation.
Results of Experiment 1
Results of Experiment 2
We present experimental evidence that supports the thesis (advanced recently by Stefánsson and Bradley in Philos Sci 82(4):602–625, 2015, Br J Philos Sci 70(1):77–102, 2019; Bradley in Decision theory with a human face, Cambridge University Press, Cambridge, 2017; Goldschmidt and Nissan-Rozen in Synthese 198:7553–7575, 2021) that people might positively or negatively desire risky prospects conditional on only some of the prospects’ outcomes obtaining. We argue that this evidence has important normative implications for the central debate in normative decision theory between two general approaches to rationalizing several common patterns of preference that orthodox decision theory rules out as irrational, namely the re-individuation approach and the non-expected utility approach.
Top-cited authors
Sandra Harding
  • University of California, Los Angeles
Lara Buchak
  • University of California, Berkeley
Ellen Fridland
  • King's College London
Lucy Allais
  • University of California, San Diego
Agustin Vicente
  • Universidad del País Vasco / Euskal Herriko Unibertsitatea