January 2025 · 2 Reads
June 2024 · 2 Reads
Jerusalem Review of Legal Studies
August 2023 · 366 Reads · 6 Citations
The Journal of Legal Analysis
Regulatory and sociological resistance to new market-driven technologies, particularly to those that rely on collection and analysis of personal data, is prevalent even in cases where the technology creates large social value and saves lives. This article is a case study of such tragic technology resistance, focusing on tracking devices in cars which allow auto insurers to monitor how policyholders drive and adjust the premiums accordingly. Growing empirical work reveals that such “usage-based insurance” induces safer driving, reducing fatal accidents by almost one third, and resulting in more affordable and fair premiums. Yet, California prohibits this technology and other states limit its effectiveness, largely in the interest of privacy protection. The article evaluates the justifications fueling the restrictive regulation vis-à-vis the loss of lives resulting from this regulation. It concludes that the social benefits of the tracking technology dramatically outweigh the privacy and related costs.
June 2023 · 4 Reads
The Journal of Legal Studies
January 2023 · 19 Reads
SSRN Electronic Journal
March 2022 · 58 Reads · 4 Citations
Russian Journal of Economics and Law
Objective: to develop and substantiate the theory of data pollution, which makes it possible to recognize and assess the harms that the big data economy creates. Methods: a dialectical approach to the cognition of social phenomena, which allows analyzing them in their historical development and functioning in the context of objective and subjective factors; this determined the research methods used: formal-logical and sociological. Results: the article develops a novel framework – data pollution – to rethink the harms the data economy creates and the way they should be regulated. The author argues that social intervention should focus on the external harms arising from the collection and misuse of personal data. The article challenges the prevailing view that the injuries from the digital data enterprise are exclusively private. That view has led lawmakers to focus solely on privacy protection as the regulatory objective. The article claims, instead, that a central problem in the digital economy has been largely ignored: how the information people give up affects others, and how it undermines and degrades public goods and interests. Scientific novelty: the data pollution concept offers a novel perspective on why existing regulatory tools – torts, contracts, and disclosure law – are ineffective, mirroring their historical futility in curbing the harms from industrial pollution. The data pollution framework also opens up a rich roadmap for new regulatory devices – “an environmental law for data protection” – focused on controlling these external effects. The article examines how the tools used to control industrial pollution – production restrictions, carbon taxes, and emissions liability – could be adapted to govern data pollution. Practical significance: the main provisions and conclusions of the article can be used in scientific, pedagogical, and law enforcement activities when considering issues related to the theory of data pollution.
January 2022 · 9 Reads · 2 Citations
SSRN Electronic Journal
July 2021 · 15 Reads
This chapter examines how equality in the eyes of the law would survive if legal commands are personalized and result in different rules for different people. It argues that nothing in the framework of personalized law violates equality before the law. On the contrary, personalized treatment provides tools to distribute rights and burdens in a manner that conforms to egalitarian views and to notions of desert and need. If desert and need are determined by relevant attributes in a proportional manner, a just system should treat people differently. The chapter examines how personalized law, when designed to promote goals other than equality, could be bolstered (or constrained) by various notions of distributive justice. It recognizes that the use of Big Data and artificial intelligence could itself be a source of injustice, perpetuating historical biases. The chapter discusses ways to resolve this concern. Finally, it compares the deliberate differentiation of commands under personalized law with unintended forms of differential treatment pervasive under uniform laws. It concludes that the use of a multitude of relevant factors to personalize commands, derived from transparent statistical methods, offers novel opportunities to promote distributive justice goals under the law.
July 2021 · 16 Reads
This chapter examines problems of coordination that could arise under personalized law, where different people are subject to different rules guiding their behavior. While earlier chapters of the book focused on individual actors and their personal legal environments, this chapter shifts to look at joint activity—how the interaction between people could be affected by personalized law. Would a personalized law regime that optimizes the atomistic parts result in more social harm, by neglecting the composite whole? It is often noted, correctly, that standardization and uniformity are needed for well-coordinated activity, to help people anticipate what others will do and synchronize their own actions. The chapter argues, however, that personalized law has a surprising potential to advance new forms of coordination. The chapter examines rules of traffic, methods of contracting, procedure in litigation, and the forms of property rights (numerus clausus), and argues that a properly designed personalized law regime could potentially achieve coordination in these spheres despite the greater variation and the non-uniformity of rules and of individual actions.
July 2021 · 7 Reads
This chapter concludes the book by offering some preliminary reflections on the robotic aspects of personalized law. It begins by identifying some early experiments with the use of algorithms and machine learning in law, noting the immense potential they unveil. It confronts the “see” versus “scan” methodologies for individualized treatment—judges “looking people in the eye” versus algorithms analyzing the numerous personal aspects they are permitted to scan. The chapter highlights the critical roles of humans in algorithmic personalized law, primarily in setting the goals that the algorithms will be coded to optimize, in choosing the data by which algorithms are trained and people are subsequently screened, and in scrutinizing and repairing undesirable patterns. The chapter argues that the need to set specific goals and priorities for each law would transform the common law method of legal refinements, and would offer greater transparency for legislative accords. The book ends by pointing to areas of the law most ripe for phase-one personalized law.
... We should also note that, in foreign legal literature, researchers have expressed the view that it is necessary and expedient to establish criminal liability of robots in situations where their cognitive qualities are sufficiently similar to those of humans [62,63]. At the same time, intelligence, including the ability to self-learn, is not determined solely by biological factors [64]. ...
March 2022
Russian Journal of Economics and Law
... At present, the public service underestimates the managerial challenges posed by increasing autonomy, diversity, and pluralism, despite the popular debates over diversity and identity politics. The real challenge, by contrast, lies in designing fair, rule-based organizational systems under individualized conditions that at the same time avoid the most varied (often unintended) forms of discrimination (Ben-Shahar & Porat, 2021; Nordell, 2021; Peters & Handschin, 2012). The entire field of ethics policy "is a culture rooted in mistrust" (Mackenzie, 2002). ...
July 2021
... We expand on existing studies focusing on assessing the quantity and quality of information disclosure (e.g., Christensen, 2016) by exploring how information is generated and delivered in the disclosure process. While the quality and accuracy of information are undoubtedly important, we argue that it is also important to examine the manner in which information is generated, presented, and delivered to the intended audience (Ben-Shahar & Schneider, 2014; Jin, Luca, & Martin, 2022). We fill the void by shifting the focus from the mere substance of information to the broader context of smart disclosure, which allows information to be delivered in a readily accessible, programmable, interactive, and standardized manner. ...
April 2014
... Intelligent digital technologies are an effective tool for solving multi-criteria tasks of water distribution planning. The use of AI makes it possible to work with geographically and temporally distributed information arrays, providing the decision maker (DM) with a higher level of informational and analytical support [9,15,17,19]. ...
September 2019
The Journal of Legal Analysis
... Sunstein (2002) discussed the potential influence of default rules on peopleʼs preferences and behaviors, analyzing the effects of changes in default rules. Bar-Gill and Ben-Shahar (2021) analyzed the role of information costs in opting out of default rules against the backdrop of the increasing importance of default rules as policy tools. Morse and Birnhack (2022) argue that the so-called "privacy paradox" (a gap between usersʼ expressed preferences and their actual behavior when it comes to protecting privacy) persists in a changed form in the data people leave behind after death. ...
January 2020
SSRN Electronic Journal
... This definition puts forth two aspects of unconscionability: the content of the contract on one hand and the context in which the contract was signed on the other hand. Leff (1967) summarizes this dichotomy as follows: "In order to distinguish the two interests, I shall often refer to bargaining naughtiness as 'procedural unconscionability', and to evils in the resulting contract as 'substantive' unconscionability." The American Law Institute recently issued a draft of the Restatement of the Law of Consumer Contracts (see Bar-Gill et al. 2019). The current version states that "the doctrine of unconscionability has the primary goal of protecting contracting parties against fundamentally unfair and unreasonably one-sided terms. ...
June 2019
European Review of Contract Law
... The second case is that massive personal information databases are systematically concentrated in industries with public-goods or infrastructure aspects, including public utilities, finance, healthcare, and telecommunications, and possibly credit reporting agencies. A massive leak of such a database can undermine the "public good aspects" of data aggregation and the public's trust in it, causing "data pollution" (Ben-Shahar, 2017). In both cases, the security investment made by multiple businesses can be rendered futile by a single incident at the sources of externalities in the same security chain. ...
January 2017
SSRN Electronic Journal
... Porat also proposes a change to the law, with the adoption of an Expanded Duty of Restitution ("EDR"), under which, when certain conditions are met, recipients would compensate benefactors for unrequested benefits. (8) Omri Ben-Shahar & Ariel Porat, "The Restoration Remedy in Private Law," 119 Columbia Law Review 1901 (2018). One of the most perplexing problems in private law is when and how to compensate victims for emotional harm. ...
Reference:
Laudatio: Ariel Porat
January 2017
SSRN Electronic Journal
... Regulators often base their decisions on the assumption that consumers in the financial marketplace are "econs" rather than "humans" (Thaler and Sunstein, 2008) and will "review and understand the disclosed information and then take it into account when making decisions" (Behavioral Insights Team and Ontario Securities Commission, 2019, p. 4). However, studies in the laboratory and field have shown that many legislated disclosures are demonstrably ineffective despite conforming to regulatory statutes, a point that has been raised and repeated by scholars and supported by behavioral data for almost half of a century (e.g., Ben-Shahar and Schneider, 2011; Cude, 2006; Day, 1976; Loewenstein, Sunstein and Golman, 2014; Lunn, ...
July 2017
Actual Problems of Economics and Law
... If the user doesn't log in, query processing is consistent with Apple Intelligence's privacy policies: OpenAI is required to process queries ephemerally and must not use data from requests to train its models [16]. However, without logging into their OpenAI account, the user would not have access to any premium features linked to their OpenAI account [116]. If, on the other hand, the user chooses to log in to their OpenAI account, query processing is governed by ChatGPT's privacy policies and the user's OpenAI account settings [16], not Apple's policies. ...
January 2017
The University of Chicago Law Review