Synthese

Published by Springer Nature
Online ISSN: 1573-0964
Recent publications
Article
  • Miloud Belkoniene
It is natural to regard understanding as having a rational dimension, in the sense that understanding seems to require having justification for holding certain beliefs about the world. Some philosophers, however, argue that justification is not required to gain understanding of phenomena. In the present paper, my intention is to provide a critical examination of the arguments that have been offered against the view that understanding requires justification in order to show that, contrary to what they purport to establish, justification remains a plausible requirement on understanding.
 
Article
  • Bryce Dalbey
  • Bradford Saad
We motivate five constraints on theorizing about sensory experience. We then propose a novel form of naturalistic intentionalism that succeeds where other theories fail by satisfying all of these constraints. On the proposed theory, which we call structure matching tracking intentionalism, brain states track determinables. Internal structural features of those states select determinates of those determinables for presentation in experience. We argue that this theory is distinctively well-positioned to both explain internal-phenomenal structural correlations and accord external features a role in fixing phenomenology. In addition, we use the theory to shed light on how one comes to experience “missing shades”.
 
Article
  • Phillip Hintikka Kieval
According to doxastic involuntarism, we cannot believe at will. In this paper, I argue that permissivism, the view that, at times, there is more than one way to respond rationally to a given body of evidence, is consistent with doxastic involuntarism. Roeber (Mind 128(511):837–859, 2019a, Philos Phenom Res 1–17, 2019b) argues that, since permissive situations are possible, cognitively healthy agents can believe at will. However, Roeber (Philos Phenom Res 1–17, 2019b) fails to distinguish between two different arguments for voluntarism, both of which can be shown to fail by proper attention to different accounts of permissivism. Roeber considers a generic treatment of permissivism, but key premises in both arguments depend on different, more particular notions of permissivism. Attending to the distinction between single-agent and inter-subjective versions of permissivism reveals that the inference from permissivism to voluntarism is unwarranted.
 
Article
  • Richard Teague
The problem of closure for the traditional unstructured possible worlds model of attitudinal content is that it treats belief and other cognitive states as closed under entailment, despite apparent counterexamples showing that this is not a necessary property of such states. One solution to this problem, which has been proposed recently by several authors (Schaffer in: Hawthorne and Gendler (eds) Oxford studies in epistemology, Oxford University Press, Oxford, pp 235–271, 2005; Yalcin in Philos Phenomenol Res 97(1):23–47, 2018; Hoek in: Kindermann (ed) Unstructured content, Oxford University Press, Oxford, forthcoming), is to restrict closure in an unstructured setting by treating propositional attitudes as question-sensitive. Here I argue that this line of response is unsatisfying as it stands because the problem of closure is more general than is typically discussed. A version of the problem recurs for attitudes like wondering, entertaining, considering, and so on, which are directed at questions rather than propositions. For such questioning attitudes, the appeal to question-sensitivity is much less convincing as a solution to the problem of closure.
 
Article
While ΛCDM has emerged as the standard model of cosmology, a small group of physicists defend Modified Newtonian Dynamics (MOND) as an alternative view on cosmology. Exponents of MOND have employed a broad, at times explicitly philosophical, conceptual perspective in arguing their case. This paper offers reasons why this MONDian defense has been ineffective. First, we argue that the defense is ineffective according to Popperian or Lakatosian views, ostensibly the preferred philosophical views on theory assessment of proponents of MOND. Second, we argue that the defense of MOND can instead best be reconstructed as an instance of meta-empirical theory assessment. The formal employment of meta-empirical assessment by MONDians is unconvincing, however, because it lacks a sufficient epistemic foundation. Specifically, the MONDian No Alternatives Argument relies on falsifiability or explanation conditions that lack epistemic relevance, while the argument from Unexpected Explanatory Success fails since there is a known alternative to MOND. In the last part of the paper, we draw some lessons for applications of meta-empirical assessment more generally.
 
Article
An efficient and eco-friendly green methodology is developed for the synthesis of novel sulfonamide derivatives from sulfanilamide and phthalic anhydride in ethanol as solvent, using ultrasound irradiation. High yield, short reaction time, green conditions and optimization with the design of experiment are the major advantages of this method. The structures of the synthesized compounds were carefully characterized by ¹H and ¹³C NMR as well as IR spectroscopy.
 
Article
Three Approaches and Two Questions
Why would decision makers (DMs) adopt heuristics, priors, or in short “habits” that prevent them from optimally using pertinent information, even when such information is freely available? One answer, Herbert Simon’s “procedural rationality”, regards the question as invalid: DMs do not, and in fact cannot, process information in an optimal fashion. For Simon, habits are the primitives, where humans are ready to replace them only when they no longer sustain a pregiven “satisficing” goal. An alternative answer, Daniel Kahneman’s “mental economy”, regards the question as valid: DMs make decisions based on optimization. Kahneman understands optimization no differently from the standard economist’s “bounded rationality.” This might surprise some researchers, given that the early Kahneman, along with Tversky, uncovered biases that appear to suggest that choices depart greatly from rational choices. However, once we consider cognitive cost as part of the constraints, such biases turn out to be occasional failures of habits that are otherwise optimal on average. They are optimal as they save us the cognitive cost of case-by-case deliberation. While Kahneman’s bounded rationality situates him in the neoclassical economics camp, Simon’s procedural rationality echoes Bourdieu’s “habitus” camp. To bridge the fault line between the two camps, this paper proposes a “two problem areas hypothesis.” In the neoclassical camp, habits satisfy wellbeing, what this paper calls “substantive satisfaction.” In the Bourdieu camp, habits satisfy belonging, love, and bonding with one’s environment, what this paper calls “transcendental satisfaction.”
 
Article
According to conceptual reductive accounts, if properties of one domain can be conceptually reduced to properties of another domain, then the former properties are ontologically reduced to the latter properties. I will argue that conceptual reductive accounts face problems: either they do not recognise that many higher-level properties are correlated with multiple physical properties, or they do not clarify how we can discover new truthmakers of sentences about a higher-level property. Still, there is another way to motivate ontological reduction, the truthmaker reductive explanations (TRE). TRE can be given by using resources from John Heil’s truthmaker theory and the a priori entailment view or the a posteriori entailment view. I will argue that we can give these truthmaker reductive explanations if there are various less-than-perfectly similar physical properties that can be the truthmakers of sentences about higher-level properties and the physical similarity between them can explain why an irreducible higher-level property is not needed.
 
Article
Open texture is a kind of semantic indeterminacy first systematically studied by Waismann. In this paper, extant definitions of open texture will be compared and contrasted, with a view towards the consequences of open-textured concepts in mathematics. It has been suggested that these would threaten the traditional virtues of proof, primarily the certainty bestowed by proof-possession, and this suggestion will be critically investigated using recent work on informal proof. It will be argued that informal proofs have virtues that mitigate the danger posed by open texture. Moreover, it will be argued that while rigor in the guise of formalisation and axiomatisation might banish open texture from mathematical theories through implicit definition, it can do so only at the cost of restricting the tamed concepts in certain ways.
 
Article
There are many accounts of representation in the philosophical literature. However, regarding olfaction, Burge’s (2010) account is widely endorsed. According to his account, perceptual representation is always of an objective reality, that is, perception represents objects as such. Many authors presuppose this account of representation and attempt to show that the olfactory system itself issues in representations of that sort. The present paper argues that this myopia is a mistake and, moreover, that the various arguments in favor of olfactory objects fail. Yet, by taking seriously a minimal notion of representation, adopted from Shea (2018), we can see that the olfactory system is representational after all even if it doesn’t represent objects as such. That is, olfaction issues in minimal representations. Crucially, however, this paper will conclude with an argument to the effect that olfactory object files (objectual representations of olfactory objectual properties) are constructed by interactions between various mental systems. The claim to be defended is that objectual representations of olfactory objects are constructed when minimal olfactory content is embedded in object-files that contain other non-olfactory properties that meet Burge’s criteria for representation. Some extant work on feature-binding, attention, and object-files will be introduced to support the suggestion.
 
Article
In recent years, the philosophy and psychology of reasoning have made a ‘social turn’: in both disciplines it is now common to reject the traditional picture of reasoning as a solitary intellectual exercise in favour of the idea that reasoning is a social activity driven by social aims. According to the most prominent social account, Mercier and Sperber’s interactionist theory, this implies that reasoning is not a normative activity. As they argue, in producing reasons we are not trying to ‘get things right’; instead our aims are to justify ourselves and persuade others to accept our views. I will argue that even if interactionism has played a crucial role in bringing about the ‘social turn’ in our thinking about reasoning, it does not convince in its claim that reasoning is not a normative activity. Moreover, I argue that it is in fact perfectly possible to understand reasoning as a social tool that is also aimed at getting things right. I will propose that Gilbert Ryle’s conceptualization of reasoning as ‘didactic discourse’ offers one possible way to understand reasoning as both a social and a normative activity, and that as such his ideas could be of great value for the social turn in our thinking about reasoning.
 
Article
Inspired by Williamson’s knowledge-first epistemology, I propose a position on practical knowledge that can be called the ‘know-how-first view’; yet whereas Williamson is one of the pioneers of the new intellectualism about know-how, I employ the know-how-first view to argue against intellectualism and instead develop a know-how-first version of anti-intellectualism. Williamson argues that propositional knowledge is a sui generis unanalyzable mental state that comes first in the epistemic realm; in parallel, I propose that know-how is a sui generis unanalyzable power that comes first in the practical realm. To motivate this suggestion, I put forward two arguments: (1) drawing on dispositionalist ideas, I argue that the practical component of know-how is unanalyzable; (2) based on an investigation of the natures of intentionality and intelligence, I argue that know-how is prior to intentional and intelligent abilities in the order of explanation of agential action. Deploying this know-how-first anti-intellectualism, I then set out know-how-first solutions to two challenging problems for anti-intellectualism: the sufficiency problem and the necessary condition problem.
 
Article
It is sometimes argued that, given its detachment from our current most successful science, analytic metaphysics has no epistemic value because it contributes nothing to our knowledge of reality. Relatedly, it is also argued that metaphysics properly constrained by science can avoid that problem. In this paper we argue, however, that given the current understanding of the relation between science and metaphysics, metaphysics allegedly constrained by science suffers the same fate as its unconstrained sister; that is, what is currently thought of as scientifically respectful metaphysics may end up also being without epistemic value. The core of our claim is that although much emphasis is put on the supposed difference between unconstrained analytic metaphysics, in opposition to scientifically constrained metaphysics, it is largely forgotten that no clear constraining relation of metaphysics by science is yet available.
 
Article
Rédei and Gyenis recently displayed strong constraints on Bayesian learning (in the form of Jeffrey conditioning). However, they also presented a positive result for Bayesianism. Despite the limited significance of this positive result, I find it useful to discuss its two possible strengthenings to present new results and open new questions about the limits of Bayesianism. First, I will show that one cannot strengthen the positive result by restricting the evidence to so-called “certain evidence”. Secondly, strengthening the result by restricting the partitions, as parts of one’s evidence, to Jeffrey-independent partitions requires additional constraints on one’s evidence to preserve its commutativity. So, my results provide additional grounds for caution and support for the limitations of Bayesian learning.
 
Article
Indecision and Buridan's Principle (forthcoming in Synthese)
The problem known as Buridan’s Ass says that a hungry donkey equipoised between two identical bales of hay will starve to death. Indecision kills the ass. Some philosophers worry about human analogs. Computer scientists have known about the computer versions of such cases since the 1960s. From what Leslie Lamport calls ‘Buridan’s Principle’ – a discrete decision based on a continuous range of input-values cannot be made in a bounded time – it follows that the possibilities for human analogs of Buridan’s Ass are far more wide-ranging and securely provable than has been acknowledged in philosophy. We are never necessarily decisive. This is mathematically provable. I explore four consequences: first, a heightened interest in the literature’s solutions to Buridan’s Ass; second, a new asymmetry between responsibility for omissions and responsibility for actions; third, clarification of the standard account of akrasia; and, fourth, clarification of the role of credences in normative decision-theory.
 
Article
Supporters of conceptual engineering often use Haslanger’s ameliorative project as a key example of their methodology. However, at face value, Haslanger’s project is no cause for optimism about conceptual engineering. If we interpret Haslanger as seeking to revise how people in general use and understand words such as ‘woman’, ‘man’, etc., then her project has been unsuccessful. And if we interpret her as seeking to reveal the meaning of those words, then her project does not involve conceptual engineering. I develop and defend an alternative interpretation of Haslanger’s project and argue that, so interpreted, it is a successful conceptual engineering project after all. In so doing, I develop what I call a particularist account of the success conditions for conceptual engineering.
 
Article
Proponents of the extended mind have suggested that phenomenal transparency may be important to the way we evaluate putative cases of cognitive extension. In particular, it has been suggested that in order for a bio-external resource to count as part of the machinery of the mind, it must qualify as a form of transparent equipment or transparent technology. The present paper challenges this claim. It also challenges the idea that phenomenological properties can be used to settle disputes regarding the constitutional (versus merely causal) status of bio-external resources in episodes of extended cognizing. Rather than regard phenomenal transparency as a criterion for cognitive extension, we suggest that transparency is a feature of situations that support the ascription of certain cognitive/mental dispositional properties to both ourselves and others. By directing attention to the forces and factors that motivate disposition ascriptions, we arrive at a clearer picture of the role of transparency in arguments for extended cognition and the extended mind. As it turns out, transparency is neither necessary nor sufficient for cognitive extension, but this does not mean that it is entirely irrelevant to our understanding of the circumstances in which episodes of extended cognizing are apt to arise.
 
Article
This paper purports to disprove an orthodox view in contemporary epistemology that I call ‘the epistemic conception of memory’, which sees remembering as a kind of epistemic success, in particular, a kind of knowing. This conception is embodied in a cluster of platitudes in epistemology, including ‘remembering entails knowing’, ‘remembering is a way of knowing’, and ‘remembering is sufficiently analogous to knowing’. I will argue that this epistemic conception of memory, as a whole, should be rejected insofar as we take into account some putative necessary conditions for knowledge. It will be illustrated that while many maintain that knowing must be (1) anti-luck and (2) an achievement, the two conditions do not apply to remembering. I will provide cases where the subject successfully remembers that p but lacks knowledge that p for failing to meet the two putative conditions for knowledge. Therefore, remembering is not a kind of knowing but a sui generis cognitive activity.
 
Article
In (Avigad, 2020), Jeremy Avigad makes a novel and insightful argument, which he presents as part of a defence of the ‘Standard View’ about the relationship between informal mathematical proofs (that is, the proofs that mathematicians write for each other and publish in mathematics journals, which may in spite of their ‘informal’ label be rather more formal than other kinds of scientific communication) and their corresponding formal derivations (‘formal’ in the sense of computer science and mathematical logic). His argument considers the various strategies by means of which mathematicians can write informal proofs that meet mathematical standards of rigour, in spite of the prodigious length, complexity and conceptual difficulty that some proofs exhibit. He takes it that showing that and how such strategies work is a necessary part of any defence of the Standard View. In this paper, I argue for two claims. The first is that Avigad’s list of strategies is no threat to critics of the Standard View. On the contrary, this observational core of heuristic advice in Avigad’s paper is agnostic between rival accounts of mathematical correctness. The second is that Avigad’s project of accounting for the relation between formal and informal proofs requires an answer to a prior question: what sort of thing is an informal proof? His paper wavers between two answers. One is that informal proofs are ultimately syntactic items that differ from formal derivations only in completeness and use of abbreviations. The other is that informal proofs are not purely syntactic items, and therefore the translation of an informal proof into a derivation is not a routine procedure but rather a creative act. Since the ‘syntactic’ reading of informal proofs reduces the Standard View to triviality, makes a mystery of the valuable observational core of his paper, and underestimates the value of the achievements of mathematical logic, he should choose some version of the second option.
 
Article
This paper argues that ethical propositions can legitimately be used as evidence for and against empirical conclusions. Specifically, I argue that this thesis is entailed by several uncontroversial assumptions about ethical metaphysics and epistemology. I also outline several examples of ethical-to-empirical inferences where it is extremely plausible that one can rationally rely upon their ethical evidence in order to gain a justified belief in an empirical conclusion. The main upshot is that ethical propositions can, under perfectly standard conditions, play both direct and indirect evidential roles in (social) scientific inquiry.
 
Article
Dog whistling—speech that seems ordinary but sends a hidden, often derogatory message to a subset of the audience—is troubling not just for our political ideals, but also for our theories of communication. On the one hand, it seems possible to dog whistle unintentionally, merely by uttering certain expressions. On the other hand, the intention is typically assumed or even inferred from the act, and perhaps for good reason, for dog whistles seem misleading by design, not just by chance. In this paper, I argue that, to understand when and why it’s possible to dog-whistle unintentionally (and indeed, intentionally), we’ll need to recognize the structure of our linguistic practices. For dog whistles and for covertly coded speech more generally, this structure is a pair of practices, one shared by all competent speakers and the other known only to some, but deployable in the same contexts. In trying to identify these enabling conditions, we’ll discover what existing theories of communicated content overlook by focusing on particular utterances in isolation, or on individual speakers’ mental states. The remedy, I argue, lies in attending to the ways in which what is said is shaped by the temporally extended, socio-politically structured linguistic practices that utterances instantiate.
 
Article
I argue that an influential strategy for understanding conspiracy theories stands in need of radical revision. According to this approach, called ‘generalism’, conspiracy theories are epistemically defective by their very nature. Generalists are typically opposed by particularists, who argue that conspiracy theories should be judged case-by-case, rather than definitionally indicted. Here I take a novel approach to criticizing generalism. I introduce a distinction between ‘Dominant Institution Conspiracy Theories and Theorists’ and ‘Non-Dominant Institution Conspiracy Theories and Theorists’. Generalists uncritically center the latter in their analysis, but I show why the former must be centered by generalists’ own lights: they are the clearest representatives of their views, and they are by far the most harmful. Once we make this change in paradigm cases, however, various typical generalist theses turn out to be false or in need of radical revision. Conspiracy theories are not primarily produced by extremist ideologies, as generalists typically claim, since mainstream, purportedly non-extremist political ideologies turn out to be just as responsible for such theories, if not more so. Conspiracy theories are also, we find, not the province of amateurs: they are often created and pushed by individuals widely viewed as experts, who have the backing of our most prestigious intellectual institutions. While generalists may be able to take this novel distinction and shift in paradigm cases on board, this remains to be seen. Subsequent generalist accounts that do absorb this distinction and shift will look radically different from previous incarnations of the view.
 
Article
The aim of this collection is to show how work in the analytic philosophical tradition can shed light on the nature, value, and experience of anxiety. Contrary to widespread assumptions, anxiety is not best understood as a mental disorder, or an intrinsically debilitating state, but rather as an often valuable affective state which heightens our sensitivity to potential threats and challenges. As the contributions in this volume demonstrate, learning about anxiety can be relevant for debates, not only in the philosophy of emotion, but also in epistemology, value theory, and the philosophy of psychopathology. In this introductory article, we also show that there is still much to discover about the relevance that anxiety may have for moral action, self-understanding, and mental health.
 
Article
The shooting bias hypothesis aims to explain the disproportionate number of minorities killed by police. We present the evidence mounting in support of the existence of shooting bias and then focus on two dissenting studies. We examine these studies in light of Biddle and Leuschner’s (2015) “inductive risk account of epistemically detrimental dissent” and conclude that, although they meet this account only partially, the studies are in fact epistemically and socially detrimental as they contribute to racism in society and to a social atmosphere that is hostile to science as scholars working on issues of racism come under attack. We emphasize this final point via recourse to Kitcher’s “Millian argument against the freedom of research.”
 
Article
In this paper, I provide an account of epistemic anxiety as an emotional response to epistemic risk: the risk of believing in error. The motivation for this account is threefold. First, it makes epistemic anxiety a species of anxiety, thus rendering psychologically respectable a notion that has heretofore been taken seriously only by epistemologists. Second, it illuminates the relationship between anxiety and risk. It is standard in psychology to conceive of anxiety as a response to risk, but psychologists – very reasonably – have little to say about risk itself, as opposed to risk judgement. In this paper, I specify what risk must be like to be the kind of thing to which anxiety can be a response. Third, my account improves on extant accounts of epistemic anxiety in the literature. It is more fleshed out than Jennifer Nagel’s (2010a), which is largely agnostic about the nature of epistemic anxiety, focusing instead on what work it does in our epistemic lives. In offering an account of epistemic anxiety as an emotion, my account explains how it is able to do the epistemological work to which Nagel puts it. My account is also more plausible than Juliette Vazard’s (2018, 2021), on which epistemic anxiety is an emotional response to potential threat to one’s practical interests. Vazard’s account cannot distinguish epistemic anxiety from anxiety in general, and also fails to capture all instances of what we want to call epistemic anxiety. My account does better on both counts.
 
Article
Some revisionary ontologies are highly parsimonious: they posit far fewer entities than what we quantify over in ordinary discourse. The most radical examples are minimal ontologies, on which physical simples are the only things that exist. Highly parsimonious ontologies, and especially minimal ones, face the challenge of either accounting for the truth of our ordinary quantificational discourse, or paraphrasing such discourse away. Common strategies for addressing this challenge include classical reduction (by means of formal derivation and postulates), paraphrase nihilism, and a distinction between ontological and existence commitments. I argue, however, that these strategies are either implausible or fail to provide truth conditions consistent with minimal or parsimonious ontologies. I then discuss, defend, and suggest ways to strengthen an alternative framework for reduction, on which the sentences of reducing theories ground those of reducible theories. Relative to the other options for defending minimal ontology, a strengthened grounding-reductive approach can (in principle) provide more defensible truth conditions for minimal ontology, better preserve scientific realist intuitions, set a more attainable standard for reduction, and allow our existence commitments to be more responsive to empirical evidence and scientific expertise. As a result, I argue that minimal ontology becomes more defensible—though not certain—on a grounding-reductive framework. But even if minimal ontology were wrong, the grounding-reductive framework makes other parsimonious but non-minimal ontologies more plausible.
 
Article
In recent philosophy of science there has been much discussion of both pluralism, which embraces scientific terms with multiple meanings, and eliminativism, which rejects such terms. Some recent work focuses on the conditions that legitimize pluralism over eliminativism – the conditions under which such terms are acceptable. Often, this is understood as a matter of encouraging effective communication – the danger of these terms is thought to be equivocation, while the advantage is thought to be the fulfilment of ‘bridging roles’ that facilitate communication between different scientists and specialisms. These theories are geared towards regulating communication between scientists qua scientists. However, this overlooks an important class of harmful equivocation that involves miscommunication between scientists and nonscientists, such as the public or policymakers. To make my case, I use the example of theory of mind, also known as ‘mindreading’ and ‘mentalizing’, and broadly defined as the capacity to attribute mental states to oneself and others. I begin by showing that ‘theory of mind’ has multiple meanings, before showing that this has resulted in harmful equivocations of a sort and in a way not accounted for by previous theories of pluralism and eliminativism.
 
Article
How should one understand comparisons in which neither of two alternatives is at least as good as the other? Much recent literature on comparability problems focuses on what the appropriate explanation of the phenomenon is. Is it due to vagueness or the possibility of non-conventional comparative relations such as parity? This paper argues that the discussion of how best to explain comparability problems has reached an impasse at which it is hard to make any progress. To advance the discussion, we suggest a new classification of comparability problems that focuses on the problems they cause for practical reasoning.
 
Article
Human freedom is often characterised as a unique power of self-determination. Accordingly, free human action is often thought to be determined by the agent in some distinctive manner. What is more, this determination is widely assumed to be a kind of efficient-causal determination. In reaction to this efficient-causal-deterministic conception of free human action, this paper argues that if one takes up the understanding of determination and causality that is offered by Anscombe in ‘Causality and Determination’, and moreover takes up an understanding of free human action that is constrained by Anscombe’s account of intentional action in Intention, then an account of free human action as distinctively caused or determined by the agent is untenable. However, the notion of necessitation that Anscombe presents in ‘Causality and Determination’, which implies neither causality nor determination, offers an attractive alternative account. This alternative account pushes us to reconsider the sense in which human freedom is a power of self-determination, and to acknowledge the limits of our control in free action.
 
Article
Social scientists often draw on a variety of evidence for their causal inferences. There is also a call to use a greater variety of evidence in social science research. This topical collection examines the philosophical foundations and implications of evidential diversity in the social sciences. It assesses the application of Evidential Pluralism in the context of the social sciences, especially its application to economics and political science. It also discusses the concept of causation in cognitive science and the implications of evidential diversity for the social sciences.
 
Article
It is sometimes argued that, given its detachment from our current most successful science, analytic metaphysics has no epistemic value because it contributes nothing to our knowledge of reality. Relatedly, it is also argued that metaphysics properly constrained by science can avoid that problem. In this paper we argue, however, that given the current understanding of the relation between science and metaphysics, metaphysics allegedly constrained by science suffers the same fate as its unconstrained sister; that is, what is currently thought of as scientifically respectful metaphysics may end up also being without epistemic value. The core of our claim is that although much emphasis is put on the supposed difference between unconstrained analytic metaphysics and scientifically constrained metaphysics, it is largely forgotten that no clear constraining relation of metaphysics by science is yet available.
 
Article
According to the Best System Account (BSA) of lawhood, laws of nature are theorems of the deductive systems that best balance simplicity and strength. In this paper, I advocate a different account of lawhood which is related, in spirit, to the BSA: according to my account, laws are theorems of deductive systems that best balance simplicity, strength, and also calculational tractability. I discuss two problems that the BSA faces, and I show that my account solves them. I also use my account to illuminate the nomological character of special science laws.
 
Article
The fast development of synthetic media, commonly known as deepfakes, has cast new light on an old problem, namely—to what extent do people have a moral claim to their likeness, including personally distinguishing features such as their voice or face? That people have at least some such claim seems uncontroversial. In fact, several jurisdictions already combat deepfakes by appealing to a “right to identity.” Yet, an individual’s disapproval of appearing in a piece of synthetic media is sensible only insofar as the replication is successful. There has to be some form of (qualitative) identity between the content and the natural person. The question, therefore, is how this identity can be established. How can we know whether the face or voice featured in a piece of synthetic content belongs to a person who makes claim to it? On a trivial level, this may seem an easy task—the person in the video is A insofar as he or she is recognised as being A. Providing more rigorous criteria, however, poses a serious challenge. In this paper, I draw on Turing’s imitation game, and Floridi’s method of levels of abstraction, to propose a heuristic to this end. I call it the identification game. Using this heuristic, I show that identity cannot be established independently of the purpose of the inquiry. More specifically, I argue that whether a person has a moral claim to content that allegedly uses their identity depends on the type of harm under consideration.
 
Article
The explanatory/pragmatic-trial distinction enjoys a burgeoning philosophical and medical literature and a significant contingent of support among philosophers and healthcare stakeholders as an important way to assess the design and results of randomized controlled trials. A major motivation has been the need to provide relevant, generalizable data to drive healthcare decisions. While talk of pragmatic and explanatory trials could be seen as convenient shorthand, the distinction can also be seen as harboring deeper issues related to inferential strategies used to evaluate causal claims regarding medical treatments. A comprehensive, critical analysis of the distinction and underlying epistemological framework upon which the distinction is based, particularly with respect to treatment effectiveness, has yet to be forthcoming. I provide this, analyzing the distinction’s relationship to generalizability and cognate distinctions between ideal conditions and real-world practice, internal and external validity, and efficacy and effectiveness. I also analyze recent philosophical work that relies on the explanatory/pragmatic-trial distinction and that advocates for more pragmatic trials. I conclude that as an organizing principle for trial-design decisions and trial evaluation, the explanatory/pragmatic-trial distinction is conceptually problematic and not as useful as its proponents seem to think. Since some pragmatic-trial features can be inimical to establishing treatment effectiveness, pragmatic-trial features should not be conflated with pragmatic trials’ avowed goals. If the distinction is to be useful, it and some associated concepts, including generalizability, should be reformulated, lest they continue to underlie a medical epistemology that could contribute to methodologically flawed and potentially unethical advice for the design and interpretation of trials.
 
Article
We often need to have beliefs about things on which we are not experts. Luckily, we often have access to expert judgments on such topics. But how should we form our beliefs on the basis of expert opinion when experts conflict in their judgments? This is the core of the novice/2-expert problem in social epistemology. A closely related question is important in the context of policy making: how should a policy maker use expert judgments when making policy in domains in which she is not herself an expert? This question is more complex, given the messy and strategic nature of politics. In this paper we argue that the prediction with expert advice (PWEA) framework from machine learning provides helpful tools for addressing these problems. We outline conditions under which we should expect PWEA to be helpful and those under which we should not expect these methods to perform well.
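The PWEA framework the abstract appeals to can be made concrete with a minimal sketch of the classic multiplicative-weights (weighted majority) scheme, one standard PWEA algorithm; the binary experts, the 0/1 outcomes, and the learning rate `eta` here are illustrative assumptions, not details from the paper.

```python
import math

def pwea_multiplicative_weights(expert_preds, outcomes, eta=0.5):
    """Aggregate binary expert predictions with multiplicative weights.

    expert_preds: per-round lists; expert_preds[t][i] is expert i's
    0/1 prediction in round t. outcomes[t] is the true 0/1 label.
    Returns the aggregated predictions and the final expert weights.
    """
    n = len(expert_preds[0])
    weights = [1.0] * n
    aggregated = []
    for preds, outcome in zip(expert_preds, outcomes):
        # Weighted vote: predict 1 if the weight backing 1 dominates.
        w1 = sum(w for w, p in zip(weights, preds) if p == 1)
        w0 = sum(weights) - w1
        aggregated.append(1 if w1 >= w0 else 0)
        # Exponentially penalize every expert that erred this round.
        weights = [w * math.exp(-eta) if p != outcome else w
                   for w, p in zip(weights, preds)]
    return aggregated, weights
```

The design choice that matters for the novice/2-expert problem is visible in the update rule: the novice needs no domain expertise, only the running record of which experts have erred, and reliable experts come to dominate the vote over time.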
 
Article
Theorists of oppression commonly accept that unfair social power disparities result in a variety of harms. In particular, oppression is characterized by a loss of open-mindedness in the oppressors, and negative internalization in the oppressed. That is, while oppressors are often unable or unwilling to consider the points of view of the oppressed, the oppressed often come to internalize conditions of oppression by experiencing them as indicative of their own alleged shortcomings. Nevertheless, the psychological mechanisms behind these phenomena have remained underexplored. This is unfortunate, since understanding the psychological processes behind these phenomena could help us understand how they could be reversed. In this work, I aim to fill this lacuna by extending debates on mechanisms of mindreading (simulation-based or theorizing-based mechanisms responsible for interpreting and manipulating one's own and others' mental states via attribution of propositional attitudes) to show how closed-mindedness and negative internalization come about. I synthesize empirical findings to show that while theorizing fosters emotional insulation by "reframing" affective cues from a third-person point of view, simulation fosters feelings of emotional vulnerability and psychological continuity. As a result, while theorizing allows oppressors to take a somewhat detached attitude during self and other interpretation, involuntary simulation fosters negative internalization on the part of the oppressed.
 
Article
Recent historical studies have investigated the first proponents of methodological structuralism in late nineteenth-century mathematics. In this paper, I shall attempt to answer the question of whether Peano can be counted amongst the early structuralists. I shall focus on Peano’s understanding of the primitive notions and axioms of geometry and arithmetic. First, I shall argue that the undefinability of the primitive notions of geometry and arithmetic led Peano to the study of the relational features of the systems of objects that compose these theories. Second, I shall claim that, in the context of independence arguments, Peano developed a schematic understanding of the axioms which, despite diverging in some respects from Dedekind’s construction of arithmetic, should be considered structuralist. From this stance I shall argue that this schematic understanding of the axioms anticipates the basic components of a formal language.
 
Article
According to panpsychists, physical phenomena are, at bottom, nothing but experiential phenomena. One argument for this view proceeds from an alleged need for physical phenomena to have features beyond what physics attributes to them; another starts by arguing that consciousness is ubiquitous, and proposes an identification of physical and experiential phenomena as the best explanation of this alleged fact. The first argument assumes that physical phenomena have categorical natures, and the second that the world’s experience-causing powers or potentials underdetermine its physical features. I argue that panpsychists are not entitled to these assumptions.
 
Article
According to the standard view, a belief is based on a reason and doxastically justified—i.e., permissibly held—only if a causal relation obtains between a reason and the belief. In this paper, I argue that a belief can be doxastically justified by a reason’s mere disposition to sustain it. Such a disposition, however, wouldn’t establish a causal connection unless it were manifested. My argument is that, in the cases I have in mind, the manifestation of this disposition would add no positive epistemic feature to the belief: a belief that is justified after the manifestation of a reason’s causal powers must have already been justified before their manifestation. As a result, those who adhere to the standard causal view of the basing relation face a hard choice: they should either abandon the enormously popular view that doxastic justification has a basing requirement or modify their view of the basing relation.
 
Article
The existing literature on the rational underdetermination problem often construes it as one resulting from the ubiquity of objective values. It is therefore sometimes argued that subjectivists need not be troubled by the underdetermination problem. But on closer examination, it turns out, they should. Or so I will argue. The task of the first half of this paper is explaining why. The task of the second half is finding a subjectivist solution to the rational underdetermination problem. The basic problem, I argue, is as follows. Idealizing subjectivism generates too many ideal selves to deliver determinate or commensurable options regarding what non-ideal deliberating agents ought to do. My solution: these idealized options should be assessed from the only perspective we can, in fact, occupy, namely, that of our non-ideal, actual selves. Deciding what to do therefore becomes, in part, an exercise in deciding who to be. But one might now worry that this just moves the arbitrariness bump in the rug. Privileging the perspective of our actual self seems contrary to the rationale for idealizing in the first place. I consider two solutions to the problem, one democratic, the other modelled on trusteeship. In the end, I argue, our actual self has complete freedom to choose the ideal self it grants rational authority. In the final part of the paper, I present my positive proposal as a solution to the underdetermination problem confronting the idealizing subjectivist and then argue that, so understood, this account vindicates a tidied-up version of how some reflective people already do deliberate in their everyday lives. This, in turn, suggests that a decision-procedure closely connected to the account is both possible (because actual) and attractive.
 
Article
I compare two different arguments for the importance of bringing new voices into science: arguments for increasing the representation of women, and arguments for the inclusion of the public, or for “citizen science”. I suggest that in each case, diversifying science can improve the quality of scientific results in three distinct ways: epistemically, ethically, and politically. In the first two respects, the mechanisms are essentially the same. In the third respect, the mechanisms are importantly different. Though this might appear to suggest a broad similarity between the cases, I show that the analysis reveals an important respect in which efforts to include the public are more complex. With citizen science programs, unlike with efforts to bring more women into science, the three types of improvement are often in conflict with one another: improvements along one dimension may come at a cost on another dimension, suggesting difficult trade-offs may need to be made.
 
Article
The metalinguistic approach to conceptual engineering construes disputes between (what I shall call) linguistic reformers and linguistic conservatives as metalinguistic disagreements on how best to use particular expressions. As the present paper argues, this approach has various merits. However, it was recently criticised in Cappelen’s seminal Fixing Language (2018). Cappelen raises an important objection against the metalinguistic picture. According to this objection – the Babel objection, as I shall call it – the metalinguistic account cannot accommodate the intuition of disagreement between linguistic conservatives and reformers who are speaking different languages. The objection generalises to metalinguistic approaches to e.g. moral disagreements. This paper discusses the Babel objection and shows how to dispel it.
 
Article
Among the philosophical accounts of reference, Quine’s (1974) The Roots of Reference stands out in offering an integrated account of the acquisition of linguistic reference and object individuation. Based on a non-referential ability to distinguish bodies, the acquisition of sortals and quantification are crucial steps in learning to refer to objects. In this article, we critically re-assess Quine’s account of reference. Our critique will proceed in three steps with the aim of showing that Quine effectively presupposes what he sets out to explain, namely, reference to objects. We are going to argue (i) that sortals do not individuate, (ii) that bodies are already objects, and (iii) that the acquisition of variables presupposes a notion of identity. The result is diagnostic of a central desideratum for any theory of reference: an explanation of spatiotemporal object individuation.
 
Article
The basic kinds of physical causality that are foundational for other kinds of causality involve objects and the causal relations between them. These interactions do not involve events. If events were ontologically significant entities for causality in general, then they would play a role in simple mechanical interactions. But arguments about simple collisions looked at from different frames of reference show that events cannot play a role in simple mechanical interactions, and neither can the entirely hypothetical causal relations between events. These arguments show that physics, which should be authoritative when it comes to the metaphysics of causality, gives no reasons to believe that events are causal agents. Force relations and some cases of energy-momentum transfer are examples of causal relations, with forces being paradigmatic in the macroscopic world, though it is conceivable that there are other kinds of causal relation. A relation between two objects is a causal relation if and only if when it is instantiated by the two objects there is a possibility that the objects that are the terms of the relation could change. The basic metaphysics of causality is about objects, causal relations, changes in objects, and a causal primitive. The paper also includes a discussion of the metaphysics of forces and a discussion of the metaphysics of energy and momentum exchanges.
 
Article
There is currently no consensus about a general account of hallucination and its object. The problem of hallucination has de facto generated contrasting accounts of perception, led to opposing epistemic and metaphysical positions, and, most significantly, exposed a manifold of diverging views concerning the intentionality of experience, in general, and perceptual intentionality, in particular. In this article, I aim to clarify the controversial status, experiential possibility, and intentional structure of hallucination qua distinctive phenomenon. The analysis will first detect a phenomenological, Husserlian-informed concept of hallucination in its irreducibility to other kinds and modes of sensory experience. This will set the theoretical basis to develop an account of hallucination by means of a morphological description of those diversified structures of intentional consciousness that lend themselves to generate hallucinatory appearances. I will then describe both the turning of certain kinds of intentional experience into hallucinatory perceptions and the status of hallucinatory objects. This will support the possibility of hallucination in a strict and rigorous sense, elucidate the enigmatic claim that 'in hallucination we are conscious of something while nothing truly appears,' and offer a seminal perspective concerning the alleged problem that hallucinations pose for perceptual intentionality. With the aid of some crucial distinctions, I will then argue that hallucinations do not affect perceptual intentionality as a dyadic, relational structure.
 
Article
We can distinguish two senses of the Given, the nonconceptual and the non-doxastic. The idea of the nonconceptual Given is the target of Sellars’s severe attack on the Myth of the Given, which paves the way for McDowell’s conceptualism, while the idea of the non-doxastic Given is largely neglected. The main target of the present paper is the non-doxastic Given. I first reject the idea of the nonconceptual Given by debunking the false assumption that there is a systematic relation between the conceptual and the nonconceptual. I then propose a constitutive understanding of experience and concept, which at once challenges the idea of the non-doxastic Given. Unlike the more familiar Davidsonian challenge, which questions the transition from the non-doxastic to the doxastic, the constitutive understanding implies that the idea of the non-doxastic Given endangers the very possibility of having thought about the world. I urge an exorcism of the Myth of the Given by proposing doxasticism, the view that experience is essentially a doxastic attitude towards that which is experienced.
 
Article
The prediction error minimization framework (PEM) denotes a family of views that aim at providing a unified theory of perception, cognition, and action. In this paper, I discuss some of the theoretical limitations of PEM. It appears that PEM cannot provide a satisfactory explanation of motivated reasoning, as instantiated in phenomena such as self-deception, because its cognitive ontology does not have a separate category for motivational states such as desires. However, it might be thought that this objection confuses levels of explanation. Self-deception is a personal level phenomenon, while PEM offers subpersonal explanations of psychological abilities. Thus, the paper examines how subpersonal explanations couched in the PEM framework can be thought of as related to personal level explanations underlying self-deception. In this regard, three views on the relation between personal and subpersonal explanations are investigated: the autonomist, the functionalist, and the co-evolutionary perspective. I argue that, depending on which view of the relation between the personal and subpersonal is adopted, the PEM paradigm faces a dilemma: either its explanatory ambitions should be reduced to the subpersonal domain, or it cannot provide a satisfactory account of motivated reasoning as instantiated in self-deception.
 
Top-cited authors
Erik Rietveld
  • University of Amsterdam
Julian Kiverstein
  • Academisch Medisch Centrum Universiteit van Amsterdam
Karl J Friston
  • University College London
Jelle Bruineberg
  • University of Amsterdam
Shaun Gallagher
  • The University of Memphis