Recent publications
We examine trends in drug overdose deaths by race, gender, and geography in the United States during the period 2013-2020. Race- and gender-specific crude rates were extracted from the final National Vital Statistics System multiple cause-of-death mortality files for several jurisdictions and used to calculate the male-to-female ratios of crude rates between 2013 and 2020. We established 2013-2019 temporal trends for four major drug types: psychostimulants with addiction potential (T43.6, such as methamphetamines); heroin (T40.1); natural and semi-synthetic opioids (T40.2, such as those contained in prescription painkillers); and synthetic opioids (T40.4, such as fentanyl and its derivatives) through a quadratic regression, and determined whether changes in the pandemic year 2020 were statistically significant. We also identified which races, genders, and states were most impacted by drug overdose deaths. Nationwide, the year 2020 saw statistically significant increases in overdose deaths from all drug categories except heroin, surpassing predictions based on 2013-2019 trends. Crude rates for Black individuals of both genders surpassed those for White individuals for fentanyl and psychostimulants in 2018, creating a gap that widened through 2020. In some regions, mortality among White persons decreased while overdose deaths for Black persons kept rising. The largest 2020 mortality statistic is for Black males in the District of Columbia, with a record 134 overdose deaths per 100,000 due to fentanyl, 9.4 times the fatality rate among White males. Male overdose crude rates in 2020 remained larger than those of females for all drug categories except in Idaho, Utah, and Arkansas, where crude rates of overdose deaths by natural and semisynthetic opioids for females exceeded those of males. Drug prevention, mitigation, and harm-reduction strategies should include racial, geographical, and gender-specific efforts to better identify and serve at-risk groups.
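The trend analysis described above (fitting a quadratic to the 2013-2019 crude rates and checking whether the observed 2020 rate exceeds the extrapolated trend) can be sketched as follows; all rate values below are illustrative placeholders, not figures from the study.

```python
import numpy as np

# Hypothetical crude rates (deaths per 100,000) for one drug category,
# 2013-2019; the values are illustrative, not taken from the study.
years = np.arange(2013, 2020)
rates = np.array([2.1, 2.6, 3.4, 4.5, 5.9, 7.6, 9.6])

# Fit the 2013-2019 quadratic trend.
coeffs = np.polyfit(years - 2013, rates, deg=2)
trend = np.poly1d(coeffs)

# Extrapolate the pre-pandemic trend to 2020 and compare it with an
# (illustrative) observed 2020 rate to gauge the excess beyond trend.
predicted_2020 = trend(2020 - 2013)
observed_2020 = 14.2
excess = observed_2020 - predicted_2020
print(f"predicted 2020 rate: {predicted_2020:.1f}, excess beyond trend: {excess:.1f}")
```

In the study itself, the significance of such a 2020 excess is assessed statistically; the sketch only shows the fit-and-extrapolate structure of the comparison.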
The growing interconnections among societies have facilitated the emergence of systemic crises, i.e., shocks that rapidly spread around the world and cause major disruptions. Advances in the interdisciplinary field of complexity can help understand the mechanisms underpinning systemic crises. This article reviews the most important concepts and findings from the pertinent literature. It demonstrates that an understanding of the nature of disruptions of globally interconnected systems and their implications is critical to prevent, react to, and recover from systemic crises. The resulting analytical framework is applied to two prominent examples of global systemic crises: the 2008 global financial crisis and the COVID-19 pandemic. The article provides evidence that relying on reactive and recovery capacities to face systemic crises is not sustainable because of the extraordinary costs they impose on societies. Efforts are needed to develop a multipronged strategy to strengthen our capacities to face systemic crises and address fundamental mismatches between the nature of global challenges and the collective action necessary to address them.
Social reference points have been identified as important determinants of individuals’ welfare. We investigate the consequences of social reference points for risk taking in a laboratory experiment. In the main treatments, risk-taking subjects observe the predetermined earnings of peer subjects when making a risky choice. We exogenously manipulate peers’ earnings and find a significant treatment effect: decision makers make less risk-averse choices when peers’ earnings are larger. The treatment effect is consistent with an application of prospect theory to social reference points and cannot be explained by reference points based on counterfactual information, anchoring, or experimenter demand effects. In additional analyses, we show that diminishing sensitivity seems to play an important role in subjects’ risky choices. We also explore whether inequity aversion and expectations-based reference points can account for our findings and conclude that they do not provide plausible alternative explanations.
This paper was accepted by Yan Chen, behavioral economics and decision analysis.
Funding: This research was financially supported by the Bonn Graduate School of Economics and the Center for Economics and Neuroscience in Bonn.
Supplemental Material: The online appendix and data are available at https://doi.org/10.1287/mnsc.2023.4698.
A mid-century net zero target creates a challenge for reducing the emissions of emissions-intensive, trade-exposed sectors with high-cost mitigation options. These sectors include aluminium, cement, chemicals, iron and steel, lime, pulp and paper, and petroleum refining. Available studies agree that decarbonization of these sectors is possible by mid-century if more ambitious policies are implemented soon. Existing carbon pricing policies have had limited impact on the emissions of these sectors because their marginal abatement costs almost always exceed the tax rate or allowance price. But emissions trading systems with free allowance allocations to emissions-intensive, trade-exposed sectors have minimized the adverse economic impacts and associated leakage. Internationally coordinated policies are unlikely, so implementing more ambitious policies creates a risk of leakage. This paper presents policy packages a country can implement to accelerate emission reduction by these sectors with minimal risk of leakage. To comply with international trade law, the policy packages differ for producers whose goods compete with imports in the domestic market and producers whose goods are exported. Carbon pricing is a critical component of each package due to its ability to minimize the risk of adverse economic impacts on domestic industry, support innovation and generate revenue. The revenue can be used to assist groups adversely impacted by the domestic price and production changes due to carbon pricing and to build public support for the policies.
Key policy insights
• A country with a mid-century net zero GHG emission target will likely need to implement more ambitious mitigation policies soon for emissions-intensive sectors such as aluminium, cement, chemicals, iron and steel, lime, pulp and paper, and petroleum refining.
• More ambitious mitigation policies are likely to vary by country and be implemented at different times, creating a risk of leakage due to industrial production shifts to other jurisdictions.
• More ambitious mitigation policy packages, compatible with international trade law, that a country can implement to reduce emissions from these sectors with minimal risk of leakage are available but differ for producers whose goods compete with imports in the domestic market and those whose goods are exported.
• Carbon pricing is a critical component of each package due to its ability to minimize the risk of adverse economic impacts on domestic producers, support innovation and generate revenue.
One major insight derived from the moral twin earth debate is that evaluative and descriptive terms possess different levels of semantic stability, in that the meanings of the former but not the latter tend to remain constant over significant counterfactual variance in patterns of application. At the same time, it is common in metanormative debate to divide evaluative terms into those that are thin and those that are thick. In this paper, I combine debates about semantic stability and the distinction between the thin and the thick by presenting a new seamless inferentialist account of thin and thick evaluative terms which, despite subsuming them under the same metasemantic analysis, can nevertheless account for their varying levels of semantic stability. According to this position of ‘seamless metaconceptualism’, thin and thick evaluative terms do not belong to different categories, but are both understood as metaconceptual devices which do not differ in kind, but in scope. By providing the same analysis for both thin and thick terms, seamless metaconceptualism not only entails that the latter cannot shoulder the philosophical work that some have attributed to them, but also removes much of their surrounding intrigue.
1. Introduction
Non-philosophers could be forgiven for thinking that philosophers are a cautious bunch. For philosophers are becoming increasingly preoccupied with prudence. Naturally, however, philosophers have something different in mind than the ordinary sense of ‘prudence’. Rather than denoting the quality of cautiousness, philosophers typically take ‘prudence’ to denote an evaluative or normative standpoint, one whose evaluations are in some sense determined by facts about what is good and bad for us; or, to use some more terminology that is apt to mislead the lay reader, facts about well-being, welfare or self-interest.
Two recent examples of this trend are Guy Fletcher’s Dear Prudence: The Nature and Normativity of Prudential Discourse and Dale Dorsey’s A Theory of Prudence. Each book covers a lot of ground, incorporating previously published work together with new material. Fletcher’s primary focus is the meta-prudential, the philosophy of well-being’s answer to meta-ethics. His book covers such topics as the nature of prudential judgment, the semantics of prudential language, the normativity of prudence, its implications for traditional meta-ethical views such as realism, anti-realism and error theory, and much else besides. While Fletcher defends various views in relation to these issues, the primary aim of the book is to argue that these debates, which he thinks have been largely neglected, deserve much more attention. To adapt a well-worn platitude from recent political discourse, the idea is that whatever you think about the issues, it would be good if we were having a robust debate about them.
In response to the COVID-19 pandemic, large parts of the economy were locked down and, as a result, households’ income risk rose sharply. At the same time, policy makers put forward the largest stimulus package in history. In the U.S., it amounted to $2 trillion, a quarter of which represented transfer payments to households. To the extent that such transfers were i) announced in advance and ii) conditional on recipients being unemployed, they mitigated income risk associated with the lockdown—in contrast to unconditional transfers. We develop a baseline scenario for a COVID-19 recession in a medium-scale HANK model and use counterfactuals to quantify the impact of transfers. For the short run, we find large differences in the transfer multiplier: it is negligible for unconditional transfers and about unity for conditional transfers. Overall, we find that the transfers reduced the output loss due to the pandemic by some 2 percentage points at its trough.
We consider a model of product differentiation where consumers are uncertain about the qualities and prices of firms' products. They can inspect all products at zero cost. A share of consumers is expectation-based loss averse. For these consumers, buying products of varying quality and price creates disutility from gain-loss sensations. Even at modest degrees of loss aversion they may refrain from inspecting all products and choose an individual default that is strictly dominated in terms of surplus. Firms' strategic behavior exacerbates the scope for this effect. The model generates “scale-dependent psychological switching costs” that increase in the value of the transaction. They imply that making switching easier or costless for consumers would not motivate more switching.
Through a survey and analyses of observational data, we provide systematic evidence that institutional investors value and demand climate risk disclosures. The survey reveals that investors have a strong demand for climate risk disclosures, and many actively engage their portfolio firms for improvements. Empirical analyses of holdings data corroborate this evidence by showing a significantly positive association between climate-conscious institutional ownership and better firm-level climate risk disclosure. We establish further evidence of institutional investors’ influence on firms’ climate risk disclosures by examining a shock to the climate risk disclosure demand of French institutional investors (French Article 173).
This work contributes to architectural and economic understanding of a largely unexplored field of blockchain-based applications referred to as Decentralised Finance (DeFi). Merging technical and economic perspectives, we define DeFi as an ecosystem of decentralised applications which is built on top of permissionless smart contract platforms to mimic and extend traditional financial services. We frame DeFi as a research object in information systems (IS) and management-related platform literature. Further, we conceptualise DeFi as a platform economy within software ecosystems and in interdependence with exogenous factors. This enables us to identify three promising avenues for future research: (i) base layer blockchains as a platform for developers, (ii) the dApp economy of platform modules, and (iii) the diffusion and adoption of DeFi.
Decision makers weight small probabilities differently when sampling them and when seeing them stated. We disentangle to what extent the gap is due to how decision makers receive information (through description or experience), the literature’s prevailing focus, and what information they receive (population probabilities or sample frequencies), our novel explanation. The latter determines statistical confidence, the extent to which one can know that a choice is superior in expectation. Two lab studies, as well as a review of prior work, reveal that sample-based decisions respond to statistical confidence, in fact more strongly than decisions based on population probabilities, leading to higher payoffs in expectation. Our research thus not only offers a more robust method for identifying description-experience gaps. It also reveals how probability weighting in decisions based on samples (the typical format of real-world decisions) may actually come closer to an unbiased ideal than decisions based on fully specified probabilities (the format frequently used in decision science).
Highly curved Persian swords, known as shamshir (šamšir), reached their maximum curvature and popularity in the 16th-17th centuries and remained the preferred type of battlefield sword of Persian armies. Although extensive research has been conducted on their materials and forging methods, to the best of our knowledge there is no scientific investigation of their overall mechanical performance, nor any scientific analysis of their exceptional shape. The following article offers a comprehensive analysis of the high curvature of Persian swords and demonstrates that this high curvature both maximized the sword's cutting force and enabled certain thrusting techniques that passed over and under an adversary's shield on the battlefield.
Background and aim
The first SARS-CoV-2 pandemic wave in Germany involved a tradeoff between saving the lives of COVID-19 patients by providing sufficient intensive care unit (ICU) capacity and foregoing the health benefits of elective procedures. This study aims to quantify this tradeoff.
Methods
The analysis is conducted at both the individual and population levels. It calculates quality-adjusted life years (QALYs) to compare the health gains from saving the lives of COVID-19 patients in the ICU with the health losses associated with postponing operative procedures. The QALYs gained from saving the lives of COVID-19 patients are calculated based on both real-world ICU admissions and the deaths averted by flattening the first wave. Scenario analysis was used to account for variation in input factors.
Results
At the individual level, the resource-adjusted QALY gain of saving one COVID-19 life is predicted to be 3 to 15 times larger than the QALY loss of deferring one operation (the average multiplier is 9). The real-world QALY gain at the population level is estimated to fall within the range of the QALY loss due to delayed procedures. The modeled QALY gain by flattening the first wave is 3 to 31 times larger than the QALY loss due to delayed procedures (the average multiplier is 17).
Conclusion
During the first wave of the pandemic, the resource-adjusted health gain from treating one COVID-19 patient in the ICU was found to be much larger than the health loss from deferring one operation. At the population level, flattening the first wave led to a much larger health gain than the health loss from delaying operative procedures.
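As a rough illustration of the resource-adjusted QALY comparison described above, the sketch below discounts the life-years gained per ICU survivor and divides by an assumed number of operations displaced per ICU stay. Every input (utility weights, remaining life expectancy, survival probability, displacement factor, discount rate) is an assumption for exposition, not a parameter from the study.

```python
# Back-of-envelope version of the resource-adjusted QALY comparison.
# All inputs below are invented for illustration.

def discounted_qalys(life_years, utility, discount_rate=0.03):
    """Discounted quality-adjusted life years at a constant utility weight."""
    return sum(utility / (1 + discount_rate) ** t for t in range(life_years))

# Saving one COVID-19 ICU patient: assume 10 remaining life years at a
# post-ICU utility of 0.7, and a 50% chance of surviving the ICU stay.
qaly_gain_per_icu_patient = 0.5 * discounted_qalys(10, 0.7)

# Resource adjustment: assume one COVID-19 ICU stay displaces the
# capacity needed for 4 elective operations.
operations_displaced_per_stay = 4
gain_per_displaced_operation = qaly_gain_per_icu_patient / operations_displaced_per_stay

# Deferring one operation: assume a 0.1 utility loss for one year of waiting.
qaly_loss_per_deferred_operation = 0.1

multiplier = gain_per_displaced_operation / qaly_loss_per_deferred_operation
print(f"resource-adjusted multiplier: {multiplier:.1f}")
```

With these invented inputs the multiplier lands in the single digits, the order of magnitude the study reports at the individual level; the point of the sketch is only the structure of the calculation, not its numbers.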
This paper addresses the question of whether the recent rise in inflation can be explained by financial dominance. It is motivated by the fact that the monetary policy response has been slow and timid, which might have reflected concerns that a proper response would have triggered substantial financial risks. We find that a misjudgement of aggregate supply conditions provides a better explanation than financial dominance arguments. Still, recent policy moves such as the Transmission Protection Instrument by the European Central Bank and asset purchases by the Bank of England indicate that financial dominance concerns might become more pressing with further monetary tightening.
Using current performance to set future targets can discourage effort and reduce performance. Our study examines whether this ratchet effect also undermines incentives of high-level managers and executives. We use a dynamic model to show that empirical tests used in prior literature can falsely reject the null hypothesis of no ratchet effect. We also motivate a new test that can better detect the adverse incentive effects of target setting. Specifically, we show that the ratchet effect can be identified as the effect of past performance on changes in perceived target difficulty. We use panel data from nine annual 2011–2019 surveys to implement this test. Similar to prior studies, we find strong evidence that targets are revised upward following good performance. Nevertheless, we reject the ratchet effect hypothesis because we further find that good performance in one period is associated with a decrease in perceived target difficulty in the next period. This finding is more pronounced in settings where well-performing managers have more private information about future performance and where long-term commitments are more credible.
This paper was accepted by Suraj Srinivasan, accounting.
Supplemental Material: The data files and online appendices are available at https://doi.org/10.1287/mnsc.2022.4641.
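The identification proposed above (past performance predicting the change in perceived target difficulty) can be sketched as a simple regression on simulated data; the data-generating process, sample size, and coefficient below are invented for illustration, and the simulated slope is made negative to match the direction of the paper's finding.

```python
import numpy as np

# Simulated data consistent with the paper's finding: good performance
# in period t predicts a *decrease* in perceived target difficulty in
# period t+1 (a negative slope). All numbers are invented.
rng = np.random.default_rng(0)
n = 500
performance = rng.normal(size=n)
true_beta = -0.3
change_in_perceived_difficulty = true_beta * performance + rng.normal(scale=0.5, size=n)

# OLS of the change in perceived difficulty on past performance,
# with an intercept column.
X = np.column_stack([np.ones(n), performance])
(intercept, beta_hat), *_ = np.linalg.lstsq(X, change_in_perceived_difficulty, rcond=None)
print(f"estimated slope on past performance: {beta_hat:.2f}")
```

A significantly negative slope in this design is the pattern the study interprets as evidence against the ratchet effect hypothesis for these managers.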
Extant research on the gender pay gap suggests that men and women who do the same work for the same employer receive similar pay, so that processes sorting people into jobs are thought to account for the vast majority of the pay gap. Data that can identify women and men who do the same work for the same employer are rare, and research informing this crucial aspect of gender differences in pay is several decades old and from a limited number of countries. Here, using recent linked employer–employee data from 15 countries, we show that the processes sorting people into different jobs account for substantially less of the gender pay differences than was previously believed and that within-job pay differences remain consequential.
Aim
The European Union (EU) has received criticism for being slow to secure coronavirus disease (COVID-19) vaccine contracts in 2020 before the approval of the first COVID-19 vaccine. This study aimed to retrospectively analyze the EU’s COVID-19 vaccine procurement strategy. To this end, it determined the minimum vaccine efficacy that would have made vaccination cost-effective from a societal perspective in Germany before the clinical trial announcements in late 2020, and compared the results with the vaccine efficacy expected before those announcements.
Methods
Two strategies were analyzed: vaccination followed by the complete lifting of mitigation measures and a long-term mitigation strategy. A decision model was constructed using, for example, information on age-specific fatality rates, intensive care unit costs and outcomes, and herd protection thresholds. The base-case time horizon was 5 years. Cost-effectiveness of vaccination was determined in terms of the costs per life-year gained. The value of an additional life-year was borrowed from new, innovative oncological drugs, as cancer is a condition with a perceived threat similar to that of COVID-19.
Results
A vaccine with 50% efficacy against death due to COVID-19 was not clearly cost-effective compared with a long-term mitigation strategy if mitigation measures were planned to be lifted after vaccine rollout. The minimum vaccine efficacy required to achieve cost-effectiveness was 40% in the base case. The sensitivity analysis showed considerable variation around the minimum vaccine efficacy, extending above 50% for some of the input variables.
Conclusions
This study showed that vaccine efficacy levels expected before clinical trial announcements did not clearly justify lifting mitigation measures from a cost-effectiveness standpoint. Hence, the EU’s sluggish procurement strategy still appeared to be rational at the time of decision making.
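The break-even logic behind a minimum cost-effective vaccine efficacy can be sketched as follows; every parameter (willingness to pay per life-year, life-years per death averted, incremental net cost, baseline deaths) is an illustrative assumption, not a value estimated in the study.

```python
# Stylized break-even calculation for the minimum cost-effective
# vaccine efficacy. All parameter values are invented for exposition.

wtp_per_life_year = 100_000          # EUR, oncology-style benchmark (assumed)
life_years_per_death_averted = 8.0   # discounted life years (assumed)
baseline_deaths = 400_000            # deaths if measures are lifted without protection (assumed)
incremental_net_cost = 120e9         # extra net cost of "vaccinate and lift" vs.
                                     # long-term mitigation, before mortality benefit (EUR, assumed)

def net_monetary_benefit(efficacy):
    """Monetized value of averted deaths minus the incremental net cost."""
    deaths_averted = efficacy * baseline_deaths
    return deaths_averted * life_years_per_death_averted * wtp_per_life_year - incremental_net_cost

# "Vaccinate, then lift mitigation" breaks even where the net monetary
# benefit is zero; solving for efficacy gives the minimum threshold.
min_efficacy = incremental_net_cost / (
    baseline_deaths * life_years_per_death_averted * wtp_per_life_year
)
print(f"minimum cost-effective vaccine efficacy: {min_efficacy:.0%}")
```

The study's actual decision model is far richer (age-specific fatality rates, ICU costs, herd protection thresholds), but the same zero-crossing logic determines its reported minimum efficacy.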
Address: Frankfurt am Main, Germany