Article

Empathy in the Digital Administrative State

Authors:
Sofia Ranchordás

Abstract

Humans make mistakes. Humans make mistakes especially when filling out tax returns, benefit applications, and other government forms, which are often riddled with complex language, onerous requirements, and short deadlines. However, the unique human feature of forgiving these mistakes is disappearing with the digitalization of government services and the automation of government decision-making. While the role of empathy has long been controversial in law, empathic measures have helped public authorities balance administrative values with citizens’ needs and deliver fair and legitimate decisions. The empathy of public servants has been particularly important for vulnerable citizens (for example, disabled individuals, seniors, and underrepresented minorities). When empathy is threatened in the digital administrative state, vulnerable citizens are at risk of not being able to exercise their rights because they cannot engage with digital bureaucracy.


... There has been increasing research into the digital welfare state in recent years, with special attention paid to the risks of surveillance technologies and automated decisions in the enforcement of social security benefits (Alston, 2019;Van Toorn et al., 2024). This has been paired with, and is in part motivated by, malpractices such as Robodebt (Australia) (Whiteford, 2021), MiDAS (USA) (Ranchordás, 2022) and the childcare benefits scandal (the Netherlands) (Bouwmeester, 2023). 1 These cases have shown how data-driven social security surveillance and enforcement can result in false allegations of fraud, excessive targeting of vulnerable minorities, disregard for proportionality and empathy (Ranchordás, 2022), and fundamental rights violations (Alston, 2019). ...
... Data-driven anti-fraud policies form an especially risky legal frontier or 'grey zone' in which long-established constitutional principles are prone to violations, especially when AI techniques and risk scores are used to single out suspicious cases (Damen, 2023: 534). More generally, the shift from street-level to screen- and system-level bureaucracy (Zouridis et al., 2020) has weakened the capacity to 'bend the rules' to ensure proportionate rule enforcement (Van Lancker and Van Hoyweghen, 2021), diminishing administrative empathy and increasing the risk that vulnerable citizens may be left behind in unfair and incomprehensible bureaucratic procedures (Ranchordás, 2022;Spijkstra, 2024). ...
Article
Full-text available
Recent digital welfare malpractices have shown how rule of law systems may fail to protect citizens against abuses of power by social security administrations. Despite growing attention to the problems of the digital welfare state, the macro-level of the rule of law and ‘checks on government’ has remained largely overlooked. This article examines how the effectiveness of rule of law control mechanisms is put under pressure in the welfare state, particularly in the context of (semi-)automated systems of surveillance and enforcement. It explains how the convergence of automation and welfare conditionality jeopardises the safeguarding function of legislation, parliamentary and external scrutiny, and administrative adjudication. The first part brings together the literature on automated decision-making, welfare conditionality and the rule of law, and the second part explores these perspectives in a comparative analysis of Robodebt (Australia) and the childcare benefits scandal (the Netherlands). The main findings are threefold. First, rule-of-law breakdowns in the digital welfare state stem primarily from excessive welfare conditionality (intensified criteria and obligations, intrusive surveillance and rigid enforcement), rather than from automation as such. Second, the politico-ideological appeal of budgetary savings and tough enforcement drives executive branches to disregard constitutionally prescribed controls. Third, conventional control mechanisms fail to curb or counteract this threat due to various institutional weaknesses, from the ex ante stage of legislation to the ex post stage of scrutiny and judicial review. The article concludes by reflecting on the importance of stronger legal safeguards and control mechanisms to improve the protection of citizens across welfare states.
... Other theoretical approaches have tried to lay the groundwork for analysing and resolving the problems of unequal access to the Administration and its services and benefits, whether through the right to good administration (notably Ponce Solé (2013)) or through formulations with behavioural or moral connotations, such as empathy (Ranchordás, 2022). However, these approaches respectively leave aspects that interest us unresolved or present weaknesses. ...
... Indeed, the case law that has gradually incorporated it generally uses it as support for, rather than as the exclusive ratio decidendi of, other well-established procedural legal obligations: the duty to state reasons, congruence, compliance with deadlines. Ranchordás (2022, p. 1342) attempts to build around the concept of empathy the tools for correcting the failings of digital bureaucracy. That empathic response aimed at humanising digital environments, which the author herself describes as partial, would have two strands. ...
... 1380 et seq.) with a reservation of humanity, which would assess whether certain services should be provided in person, with a more accessible and inclusive design, or with a more anti-formalist approach that would soften the legal consequences of non-compliance by vulnerable persons, chiefly regarding deadlines, does not seem to contribute anything legally new. It is an umbrella concept with a certain descriptive appeal, but with limitations that Ranchordás (2022, pp. 1388-1389) implicitly acknowledges: it is not something emotional, it is not an end in itself, and it does not take priority over efficiency. ...
Article
Full-text available
This article addresses the problems that afflict public administrations in their service to citizens, particularly the most vulnerable. To this end, it diagnoses the factors that make it difficult for citizens, regardless of their qualifications, contacts, or level of digital competence, to access administrative resources, services, and procedures on equal terms. It also critically analyses the various doctrinal approaches through which scholars have sought to improve administrative service, which make it possible to assess whether the problem is structural or incidental and whether a slow, closed, or inefficient Administration can guarantee the principle of equality. Finally, it makes a series of regulatory, organisational, and procedural proposals for an administrative reform that guarantees the rights of the 99%.
... Analyzing digital bureaucratic encounters through the lens of the administrative burden framework allows us to identify the potential downsides of digital government initiatives that are often surrounded by techno-optimism and economic narratives of efficiency (Løberg 2021;Vydra and Klievink 2019). Moreover, conceptualizing digital administrative burdens as a specific form of administrative burden is a useful analytical instrument to identify several of its distinctive features, especially regarding origins, which are often hidden in the design of information systems and algorithms (Peeters and Widlak 2018), and possible remedies, such as carving out spaces for empathy and the human factor in impersonal interactions and decision-making procedures (Ranchordás 2022). Furthermore, an analytical focus on the impact of both the presence and absence of street-level bureaucrats contributes to a relatively unexplored area in the study of the causes of administrative burdens (Bell et al. 2020;Bell and Smith 2022). ...
... The automation of decision-making procedures impacts discretionary spaces at the operational level (Bovens and Zouridis 2002; De Boer and Raaphorst 2023) and transforms face-to-face bureaucratic encounters into digital or digitally assisted ones (Breit et al. 2021;Lindgren et al. 2019), whereas the use of algorithms for data analysis and risk assessments is associated with transparency and explainability issues (Bullock 2019; Busuioc 2021) and with discriminatory effects of data bias (Hannah-Moffat and Maurutto 2010). Additionally, increasingly complex forms of data governance are informing these and other practices (Janssen et al. 2020;Yeung 2018), raising further concerns regarding fairness and transparency in decision making (Widlak, Van Eck, and Peeters 2021) and the "human factor" in bureaucratic encounters (Ranchordás 2022). Through these and other mechanisms, citizens may face barriers and difficulties in their encounters with digital government and, moreover, potential exclusion from benefit applications or fair treatment in governmental rule enforcement activities. ...
... The narrow focus on efficiency in digital government innovations risks an "automation hubris," or overconfidence by political and bureaucratic decision-makers in automated decision-making, data sharing, and impersonal interactions at the expense of the human factor in bureaucratic encounters (cf. Ranchordás 2022;Young et al. 2021). ...
Article
Innovations in digital government are changing state–citizen interactions. While often seen as means to increase government efficiency and reduce compliance costs for citizens, a growing body of literature suggests citizens may also experience administrative burdens in such interactions. This article aims to provide some cohesion to the existing research and makes three specific contributions. First, it carves out a conceptual common ground by identifying digital administrative burdens and digital bureaucratic encounters as specific objects of study. Second, automated administrative decision making, digital interactions, and data-assisted decision making are identified as contemporary practices of particular relevance for future studies on the intersection of digital government and administrative burden. Studies suggest learning costs and psychological costs may be especially prevalent in digital bureaucratic encounters and that they often have distributive effects. Third, the article concludes with the formulation of several research themes for the further development of the field.
... for a tax or contribution (Korpič-Horvat, 2021) (cf. decision of the Constitutional Court of the Republic of Slovenia U-I-24/07 of 4 October 2007) or the repayment of social benefits, when beneficiaries wish to assert their entitlements, an appropriate procedure is an all the more pronounced factor of the rule-of-law and welfare state. If the procedure is not predictable and friendly to beneficiaries, it hinders or prevents parties from exercising their statutory rights, especially given the not infrequent ignorance and/or lack of resources of vulnerable groups (Ranchordas, 2022). In socially conditioned matters, these elements should therefore, at the levels of both legal regulation and implementation, be reflected as the intersection of the rule-of-law and welfare state, rather than treating constitutional guarantees as administrative obstacles, or procedural rules as excuses for bureaucratism (Kovač, 2021). ...
... člena ("ZUPJS", 2010) ali 11. člena ("ZPIZ-2", 2012). To je pomembno, ker gre v upravnih postopkih po definiciji za enostransko, asimetrično razmerje, kjer organ in stranka nista enaka (Ranchordas, 2022). Enako poudarja Vrhovno sodišče RS v več sodbah v socialnih ali (drugih) upravnih zadevah. ...
... There are more such examples; most of them merely involve simplifying the ordinary means of filing applications and serving documents, e.g. during the coronavirus pandemic, when the ZUP was amended for precisely that reason (in 2020), including through the temporary statutory arrangement under the ZZUSUDJZ in the first wave (Kovač & Kerševan, 2020) (Ranchordas, 2022). ...
Chapter
Due to the fast-growing complexity of relations and their limited substantive determinism, globalisation, digitalisation and other changes in society, procedural law plays an increasingly important role in administrative relations, as it does (or should) ensure the effective implementation of sectoral public policies while preventing abuse of power. Therefore, in socially conditioned matters, formal elements at the levels of both legal regulation and the implementation of rules should be reflected as a factor of the rule-of-law and welfare state, linked to a common concept, rather than perceiving the rules on declarations of will and constitutional guarantees as administrative hurdles. The social rights of individuals, usually members of various vulnerable social groups, are predominantly determined in administrative procedures, especially within the competence of social institutions. This field is extremely heterogeneous; with several million decisions issued in Slovenia per year, it concerns every member of the community, either as a net payer into or a recipient of the social system. With the aim of defining key guidelines for understanding procedural law for social goals, the article also analyses selected good and bad examples of sectoral regulation and administrative case law, e.g. under the umbrella of debureaucratisation.
... In addition, it is important to acknowledge participatory rights throughout the various steps of the decision-making process, i.e. before the main regulatory or administrative procedure of adopting an act as well as after an administrative regulation or act has been enacted. If the procedure is not predictable and user-friendly, it makes it difficult or impossible for the parties to exercise their legal rights, especially for vulnerable groups who are likely ill-informed and/or ill-equipped (Ranchordas, 2022). ...
... Godec, 1993 as regards (political) accountability for running public tasks, which are also established through empirical studies. For instance, when bureaucrats and citizens held differing views on public decisions, it was the bureaucrats who influenced government, owing to their asymmetrical position in the network (Young & Tanner, 2022) and/or the lack of involvement of all, especially socially marginalised, stakeholders (Ranchordas, 2022; Clark, 2017). ...
... This is relevant above all in the context of digitalization, where the digital divide holds back certain social groups that have less access to appropriate tools and fewer skills to use them. They can even be discriminated against, e.g. through artificial intelligence algorithms (Ranchordas, 2022). ...
Conference Paper
Full-text available
Participation, together with other principles of good public governance such as transparency, inclusiveness or accountability, is a key feature of contemporary administrative systems and democratic societies. It is exercised through the participatory rights of various stakeholders (citizens, businesses, NGOs, and others), provided by the respective supra- and national regulations and rules. Systemic efforts toward participation contribute to a more democratic society and better decision-making by enabling the parties to have a say in the procedure. However, a holistic approach to participation seems to be missing. This deficiency could be overcome through a joint regulation of the basic principles and rights, regardless of the type of administrative act adopted (either general or individual administrative acts, i.e. administrative regulations or single-case decisions) and the stage of the procedure (planning, formal procedure, and ex post evaluation). The proposed approach relies on several methods of research, above all literature overview, comparative insights, normative analysis, statistical data, and expert interviews. Due to the growing volume of administrative activity, administrative participation and transparency are also gaining importance. With Slovenia as a case study, the paper incorporates the above elements into an overall model of participatory rights, presumably to be enforced through extended Administrative Procedure Act(s) at both the national and EU levels.
... Most studies on digital government focus on specific applications at the organizational level and, thereby, overlook the role of data-exchange in altering accountability mechanisms, organizational control, and the way citizens are affected. Several critical studies have identified problems regarding citizens' capacity to understand the reasoning underlying administrative decisions (Peeters & Widlak, 2023), to seek redress in case of wrongdoing by the state (Busuioc, 2021), and to use administrative law to defend their rights (Ranchordás, 2022;Van Eck, 2018). However, this is rarely done from the vantage point of data-exchange across government organizations (Davidson et al., 2023;Van Donge, Bharosa, & Janssen, 2020), thereby failing to analyze "how authority and accountability are redistributed throughout information flows" (Pelizza, 2016: 304). ...
... Scholta, Mertens, Kowalkiewicz, & Becker, 2019), the use of administrative law in digital citizen-state interactions (e.g. Ranchordás, 2022), and digital service provision and the citizen experience thereof (e.g. Larsson, 2021;Madsen et al., 2022;Peeters, 2023). ...
Article
Open Access
The interconnectedness of government organizations through data-exchange is proliferating. This is relevant for many debates in public administration today since all applications of data-driven government rest on a foundation of data. In this article, rather than focusing on specific applications, we analyze the way supra-organizational data-exchange shapes such applications and specifically automated administrative decision-making (AADM). We argue that the whole of bureaucracy that is connected through data-exchange implies the organizational separation of the collection or gathering of government data from the exchange, modification, combination and/or analysis and subsequently its (re)use in decision-making processes. To analyze the consequences of this new division of labor we further develop the concept of the infrastructure-level bureaucracy and formulate hypotheses on its consequences for data itself, organizations, and citizens. Ultimately, we argue infrastructural information flows pose challenges for democratic control and for procedural lawfulness in the constitutional state.
... For example, in the Netherlands, an algorithm was developed to assess the risk of social fraud, and here the algorithm turned out to be discriminatory, as it automatically identified citizens with either dual nationality or low income as being at high risk (Ranchordas, 2021). ...
... If the algorithm in the model is incorrectly trained, it can lead to algorithmic bias, so we need to better understand how AI models are not objective, but instead dependent on the coexistence of multiple logics and constructions (Ratner & Elmholdt, 2023). Ranchordas (2021) points out that a move away from human discretion to AI models is a threat to empathy, on which particularly vulnerable citizens are dependent to be able to exercise their rights. ...
... Even the coding and implementation of algorithmic systems can distort the bureaucratic rules that they are designed to implement, potentially erasing procedural and other safeguards along the way (Citron, 2008). Automatic processing then eliminates the point at which an empathetic person could intervene and take account of exceptional circumstances (Ranchordás, 2022). The possibility of access to a person is also important because automation places disproportionate burdens on vulnerable populations that lack sufficient technical means or literacy (Lerman, 2013). ...
... Sometimes opacity is a deliberate decision, as with those targeted by "do not fly" lists, who are not allowed to know the basis for adverse decisions, even in the face of widespread suspicion that they were targeted because of their minority status, such as having an Arabic name (Citron and Pasquale, 2014). At other times, it results from incompetence bordering on malevolence, as when a Michigan welfare fraud detection system misidentified thousands of people and then failed to notify them until after the period to appeal had passed (Ranchordás, 2022, pp. 1377-1378). Similarly, when a facial recognition system misidentified Robert Williams (who is Black), "law enforcement officers … manipulated existing policies and used the technology to route around the analog protections that should have controlled their investigative process" (Solow-Niederman, 2023, p. 132). ...
Article
Full-text available
Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice because their target variables reflect inaccurate and unjust proxies for what they claim to measure; (2) classification systems, such as facial recognition, commit hermeneutic injustice because their classification taxonomies, almost no matter how they are derived, reflect and perpetuate racial and other stereotypes; and (3) epistemic injustice better explains what is going wrong in these types of situations than does the more common focus on procedural (un)fairness.
... While the value ideals presented by Rose et al. (2015) provide a retrospective picture of the broader changes within public administration values, recent studies have pointed to a deepening interest in the interactional and personal dimensions of digital bureaucracy. Ranchordas (2021), for example, suggests that administrative empathy recognizes the diversity of citizens, placing the emphasis on the skills of citizens and the possibilities of alleviating the shortcomings of automation. ...
... Responsible public administration has a specific duty to protect people in vulnerable positions, drawing attention to the constitutional right to non-discrimination and equality. The imaginary problematizes the premises of technological design, reflecting the discussions on administrative empathy, as the imaginary focuses on the human as the contact point for administrative processes (Ranchordas, 2021). ...
Article
Full-text available
This article investigates future visions of digital public administration as they appear within a particular regulatory process that aims to enable automated decision-making (ADM) in public administration in Finland. By drawing on science and technology studies, public administration studies, and socio-legal studies, we analyze law in the making and identify four imaginaries of public digital administration: understandable administration, self-monitoring administration, adaptive administration, and responsible administration. We argue that digital administration is seen from the perspective of public authorities serving their current needs of legitimizing existing automation practices. While technology is pictured as unproblematic, the citizen perspective is missing. We conclude that the absence of an in-depth understanding of the diverse needs of citizens raises the question of whether the relationship between public power and citizens is becoming a one-way street despite the public administration ideals that express values of citizen engagement.
... In addition, techno-pessimists often stress the problem of the 'dehumanisation' of social policy. Such an argument of 'digital rigidity' concerns the lack of empathy between the state and the citizen (Ranchordás, 2022). Paradoxically, this feature is seen by techno-optimists as the main advantage of digital surveillance. ...
Book
Full-text available
It is with great pleasure that we present the first of three monographs describing the research results on combating corruption and fraud. Combating corruption begins with awareness of the problem. Sharing the knowledge of the Visegrad Group countries and Ukraine in counteracting corrupt behaviour has a significant cognitive and educational aspect. Additionally, the inclusion of technological tools allows for reducing the negative consequences of such behaviour.
... The arrangements of individual welfare states are thus largely path dependent (Bertram, 2011), sharing the common characteristic of significant administrative rigidity. This rigidity poses challenges to vulnerable populations in accessing social rights (Ranchordás, 2021). To align democratic social systems with citizens' needs, these systems undergo reforms, primarily in substantive, procedural, legal, and organisational terms. ...
Article
Full-text available
Social services within predominantly service-based public administration are undergoing constant reforms in response to evolving social circumstances. These reforms should align with international and constitutional principles, including the rule of law, the welfare state, and public governance, and rely on the assessment of the gap between the current state of affairs and the desired situation. Reforms of the competences and organisation of social work institutions should be analysed in comparison with related systems, taking into account national and sector-specific characteristics. The objective of this research was to examine, from a legal-administrative perspective, the reforms of social services in three systems building on the Rechtsstaat tradition: Slovenia, Austria, and Croatia. A qualitative approach was employed as the most suitable for evaluative research. The methodology included a literature review, semi-structured interviews, relational content analysis of reform policy documents and public administration performance reports, and a systemic analysis of the codification of administrative procedure as the formal legal framework for providing social services. The results show that the reforms of social services in Slovenia and Croatia share similarities as they focus on the Weberian context, with all the challenges associated with such formalised reforms. In contrast, Austrian reforms, influenced by a hybrid public management, are more results-oriented and thus able to provide user-oriented services.
... Eubanks, 2018;Whiteford, 2021;Geiger, 2021), initial optimistic assessments as to the potential of automated tools to improve citizen access to governmental services are increasingly giving way to concerns about negative impacts. Notably, scholars have flagged their potential to introduce (or further compound) administrative burdens (Peeters & Widlak, 2023;Madsen, Lindgren & Melin, 2022;Larsson, 2021) and to negatively impact disadvantaged citizens (Ranchordás 2022;Alon-Barkat & Busuioc, 2023). Importantly, such burdens, which we term here "algorithmic burdens" when resulting from algorithmic automation, stem not only from the digitalization of citizens' encounters with government (Peeters, 2023;Peeters & Widlak, 2018) but also specifically from the implications of the incorporation of AI algorithmic tools in the decision-making process. ...
Article
Full-text available
Our contribution aims to propose a novel research agenda for behavioural public administration (BPA) regarding one of the most important developments in the public sector nowadays: the incorporation of artificial intelligence into public sector decision-making. We argue that this raises the prospect of a distinct set of biases and challenges for decision-makers and citizens that arise in the human-algorithm interaction and that thus far remain under-investigated in a bureaucratic context. While BPA scholars have focused on human biases and data scientists on ‘machine bias’, algorithmic decision-making arises at the intersection between the two. In light of the growing reliance on algorithmic systems in the public sector, fundamentally shaping the way governments make and implement policy decisions, and given the high-stakes nature of their application in these settings, it becomes pressing to remedy this oversight. We argue that behavioural public administration is well-positioned to contribute to critical aspects of this debate. Accordingly, we identify concrete avenues for future research and develop theoretical propositions.
... At the same time, there are also concerns about the capacity of government organisations to account for decisions made on the basis of data collected elsewhere. It also proves difficult for citizens to use administrative law to defend themselves against decisions to their disadvantage, to correct errors in registrations, and to argue for exceptions to standardised decision-making (Van Eck, 2018; Widlak & Peeters, 2020; Ranchordás, 2022). In addition, there are problems concerning the proper treatment of citizens who, for whatever reason (usually because of an unusual personal biography; Madsen et al., 2023), fall out of the digital 'happy flow' (Schou & Pors, 2019; Larsson, 2021). ...
Article
Full-text available
Beyond the system-level bureaucracy: On data flows, algorithms and inclusive AI in the data-bureaucracy
Around the turn of the century, government digitalization mostly consisted of the automation of relatively simple bureaucratic procedures and the digitalization of organizations’ client databases. In this article, we argue that the ‘system-level bureaucracy’ has been surpassed by the emergence of information infrastructures, in which (big) data is shared among a wide variety of organizations, and the increased use of machine learning algorithms to assist administrative decision-making. We call this the ‘data-bureaucracy’. Rather than mere technical innovations developed to improve government efficiency, these developments have profound consequences for the way government organizations use public and private data, organize decision-making processes, and can be held accountable for their actions and decisions by citizens. We speak of the emergence of a ‘coding elite’ – data professionals that design concrete AI-applications and, in doing so, make (implicit) trade-offs between relevant public values beyond political and public scrutiny. In order to recover public value deliberation in algorithmic governance, we argue for the importance of inclusive design processes of AI-applications and develop a concrete framework for realizing such processes (‘inclusive AI’).
... Eubanks, 2018;Whiteford, 2021;Geiger, 2021), initial optimistic assessments as to the potential of automated tools to improve citizen access to governmental services are increasingly giving way to concerns about negative impacts. Notably, scholars have flagged their potential to introduce (or further compound) administrative burdens (Peeters & Widlak, 2023;Madsen, Lindgren & Melin, 2022;Larsson, 2021) and to negatively impact disadvantaged citizens (Ranchordás 2022;Alon-Barkat & Busuioc, 2023). Importantly, such burdens, which we term here "algorithmic burdens" when resulting from algorithmic automation, stem not only from the digitalization of citizens' encounters with government (Peeters, 2023;Peeters & Widlak, 2018) but also specifically from the implications of the incorporation of AI algorithmic tools in the decision-making process. ...
Article
Full-text available
We propose a novel research agenda for public administration regarding one of the most important developments in the public sector nowadays: the incorporation of artificial intelligence into public sector decision-making. These developments and their implications are gaining increasing attention from public administration scholars. We argue that public administration research in the behavioral strand is well-positioned to contribute to important aspects of this debate, by shedding light on the psychological processes underlying human-AI interactions and the distinct set of biases that arise for decision-makers and citizens alike in these algorithmic encounters. These aspects thus far remain considerably under-investigated in bureaucratic contexts. We develop theoretical propositions on two core behavioral aspects pertaining to algorithmic interactions in bureaucratic contexts. The first theme refers to the interaction between public sector decision-makers and algorithms and the related cognitive biases arising in decision-making. The second theme pertains to the interaction between citizens and algorithms and how this affects citizen experience of, and responses to, government decision-making in its algorithmic iteration. A meaningful behavioral research agenda on the incorporation of AI in government requires taking seriously the study of algorithm-related cognitive biases in this context, as well as the organizational and institutional challenges that can arise as a result.
... It also includes lessening the burden of proof for applicants (and shifting it back to the state; see, e.g., Avanzo, 2018) and fostering more trust and 'administrative empathy', that is, acknowledging that citizens are not always able to complete the application process on their own and have different needs regarding formal support (Ranchordas, 2022). Parties involved in the policy design of postal addresses need to be consistently reminded that securing an address is not (at all) the end; it is merely an administrative means towards being included and recognised. ...
Thesis
Full-text available
The main purpose of this thesis is to gain a better understanding of access to social rights for people experiencing homelessness (PEH). In many welfare states, the principle of 'no address, no rights' persists, making social rights contingent on having a registered address. This creates a 'Postal Paradox' (Byrne, 2018) where the absence of an address restricts access to rights, perpetuating homelessness. Several European countries have introduced alternative registration systems for address-less persons, and there is growing recognition of the 'right to an effective postal address' for PEH, as formulated in the Homeless Bill of Rights by FEANTSA and the Housing Rights Watch (2017). The thesis primarily investigates the reference address at a Public Centre for Social Welfare (PCSW) in Belgium, using quantitative and qualitative research methods. The analysis of an integrated administrative dataset reveals the severe complexity of chronic homelessness and shows the significant challenges involved in making visible those who are 'administratively invisible'. The qualitative analysis made use of different perspectives to examine address-lessness, from both a non-take-up and an administrative burdens perspective. Findings reveal a disconnect between policy design and implementation, impacting a vulnerable population group. Despite its intention to include the most excluded, the policy's implementation is hindered by conditionality, administrative burdens, and conflicting goals. This inconsistency results in the exclusion of non-compliant individuals, leading to severe repercussions such as a loss of access to fundamental rights and further administrative and social exclusion.
... Customers in the digital age expect personal service and empathy, and request flexible digital shopping experiences they can complete from home, along with attentive customer support [55][56][57]. Customers continue to explore these digital experiences and exploit tools to enact and realise desired changes, which they can appreciate and enjoy as improved service quality obtained from their e-transactions. Given the constraints imposed by digital conversion, delivering e-services continues to emerge as a challenge that requires more specific guidelines for meeting the explosive changes in e-customer expectations [58][59][60]. ...
Preprint
Full-text available
Service quality has been one of the most significant concepts in the marketing literature over the last two decades, empirically linked with customer satisfaction and long-term sustainability for businesses. The purpose of this paper is to provide an extensive review of the evolution of marketing-related articles on service quality and its development. The review synthesis has shown that the Grönroos model and SERVQUAL were among the earliest service quality models to broaden the treatment of intangibles within services and to homogenise user experience outcomes between industries. Service quality research has evolved with technological evolution and digitalisation, which have changed customer experience parameters and created new dimensions. New-age digital trends dictate high-quality services based on personalisation, customisation habits, and the advent of artificial intelligence (AI) and machine learning. Customer preferences and loyalty have more to do with sustainability and ethical considerations. The implications of the service quality concept include the drivers of value co-creation and the question of which emerging technologies could inform how services are delivered to customers. The review also highlights some challenges and opportunities for service quality research, including new frontiers that consider the interaction between humans and machines and the social infrastructure and physical environment contexts in which services are delivered. To address the lack of an integrated perspective, this study utilises findings from the literature, particularly those emphasising human-AI interactions. The study's implication is to give a more complete theoretical framework of service quality and to combine findings from different points of view. This will give practitioners more leverage to implement service-level strategies in changing market conditions.
... These organizations are responsible for paying various benefits to Danish citizens and for investigating potential fraud involving them (Zajko, 2023). This makes these sites both central to the tenets of the welfare state (Esping-Andersen, 1990) and sites fraught with moral conundrums for citizens and caseworkers alike (Ranchordás, 2022). ...
Conference Paper
Full-text available
accepted for the 2024 European Communication Researchers Association Conference, Ljubljana, Slovenia. Abstract: "Data, data, data! … I can't make bricks without clay."
... In recent decades, there has been a notable increase in interdisciplinary research examining the risks and opportunities associated with public sector initiatives in machine learning concerning citizens and vulnerable populations (Plesner and Justesen 2022;Ranchordas 2021;Ratner and Elmholdt 2023). Much of this scholarly inquiry has concentrated on the outcomes of data assemblages, such as enhanced efficiency, governmental paradigms, and algorithmic discrimination. ...
Article
Full-text available
Public sector adoption of AI techniques in welfare systems recasts historic national data as a resource for machine learning. In this paper, we examine how the use of register data for the development of predictive models produces new ‘afterlives’ for citizen data. First, we document a Danish research project’s practical efforts to develop an algorithmic decision-support model for social workers to classify children’s risk of maltreatment. Second, we outline the tensions emerging from project members’ negotiations about which datasets to include. Third, we identify three types of afterlives for citizen data in machine learning projects: (1) data afterlives for training and testing the algorithm, acting as ‘ground truth’ for inferring futures, (2) data afterlives for validating the algorithmic model, acting as markers of robustness, and (3) data afterlives for improving the model’s fairness, valuated for reasons of data ethics. We conclude by discussing how, on the one hand, these afterlives engender new ethical relations between state and citizens; and how, on the other hand, they also articulate an alternative view on the value of datasets, posing interesting contrasts between machine learning projects developed within the context of the Danish welfare state and mainstream corporate AI discourses of the bigger, the better.
... APR conflates the law and the application of the law (Hildebrandt and Koops 2010), with repercussions at both the individual and the societal level. At the individual level, there is a psychological role at play when humans have the opportunity to be heard by another human being; emotions and empathy are a part of legal proceedings (Ranchordàs 2022). At the societal level, the application of the law has been a time-consuming endeavor, notably to ensure a level of scrutiny and care. ...
Article
Full-text available
Driven by the increasing availability and deployment of ubiquitous computing technologies across our private and professional lives, implementations of automatically processable regulation (APR) have evolved over the past decade from academic projects to real-world implementations by states and companies. There are now pressing issues that such encoded regulation brings about for citizens and society, and strategies to mitigate these issues are required. However, comprehensive yet practically operationalizable frameworks to navigate the complex interactions and evaluate the risks of projects that implement APR are not available today. In this paper, and based on related work as well as our own experiences, we propose a framework to support the conceptualization, implementation, and application of responsible APR. Our contribution is twofold: we provide a holistic characterization of what responsible APR means; and we provide support to operationalize this in concrete projects, in the form of leading questions, examples, and mitigation strategies. We thereby provide a scientifically backed yet practically applicable way to guide researchers, sponsors, implementers, and regulators toward better outcomes of APR for users and society.
... Crucially, new modes of digital public governance not only jeopardise individual rights but also threaten collective rights and interests in the proper functioning of the public sector as a crucial driver of the legitimacy of administrative action (Smuha 2021;Ranchordás 2022;Coglianese 2023;Kouroutakis 2023;Carney 2023). It has been stressed that there is a need to rethink administrative procedural fairness and to move beyond current individualistic approaches (Tomlinson et al. 2023). ...
Article
Full-text available
Public sector digitalisation is transforming public governance at an accelerating rate. Digitalisation is outpacing the evolution of the legal framework. Despite several strands of international efforts to adjust good administration guarantees to new modes of digital public governance, progress has so far been slow and tepid. The increasing automation of decision-making processes puts significant pressure on traditional good administration guarantees, jeopardises individual due process rights, and risks eroding public trust. Automated decision-making has, so far, attracted the bulk of scholarly attention, especially in the European context. However, most analyses seek to reconcile existing duties towards individuals under the right to good administration with the challenges arising from digitalisation. Taking a critical and technology-centred doctrinal approach to developments under the law of the European Union and the Council of Europe, this paper goes beyond current debates to challenge the sufficiency of existing good administration duties. By stressing the mass effects that can derive from automated decision-making by the public sector, the paper advances the need to adapt good administration guarantees to a collective dimension through an extension and a broadening of the public sector’s good administration duties: that is, through an extended ex ante control of organisational risk-taking, and a broader ex post duty of automated redress. These legal modifications should be urgently implemented.
... Recent scholarship has begun to examine the knowledge practices of street-level bureaucrats (Lipsky 2010) who use algorithmic decision tools (Snow 2021), including how they exercise (or withhold) individual judgement. Ranchordás (2022) argues that empathy ought to be seen as a key value within administrative law and, accordingly, that the digitalized state ought to consider multiple viewpoints as well as individual circumstances (2022: 45). She argues that there ought to be a duty to forgive errors in some cases: a duty not well served by automation and data-driven decision support. ...
... An additional challenge for AI technology in the courtroom is its inability to replicate human empathy. [19] Apart from that, the two concepts of AI as explained previously are also very limited in their decision trees or databases. The data that can be compiled by AI creators is limited to past data and data on events that have already occurred, while the cases handled by criminal courts are dynamic and develop over time. ...
... Despite documentation of adverse effects, public authorities in digitalized welfare states are committed to intensifying the use of data (Mann, 2020;Ranchordás, 2022). My studies show that to consider data as the foundation of managing and governing welfare delivery is a risky strategy, since data can then be used promiscuously as 'fact' or as 'in need of context'. ...
Article
Full-text available
This paper examines how the intense focus on data in political digitalization strategies takes effect in practice in a Danish municipality. Building on an ethnographic study of data-driven management, the paper argues that one of the effects of making data a driver for organizational decision-making is uncertainty as to what data are and can be taken to mean. While in political discourse and strategies, data are considered as a resource for collaboration across organizational units as well as for optimization of their performance, in practice, data are not this straightforward entity. The paper presents a kind of data work that identifies data as part of different worlds (ontologies). The management task that results from this is nurturing organizational spaces that articulate data as relational. The paper argues that being attentive to the troublesome experiences public sector employees have when encountering data may help mitigate some of the risks of seeing data merely as a resource. The paper concludes that as public sector managers learn to nurture spaces where differences in data can be articulated, they also protect core values of welfare bureaucracies. Acknowledging that data work is about what we take to be real and what not (ontological work) is a first step in this direction.
... 89 In addition to being an issue about privacy, a focus on human dignity also engages the question of why these groups, already marginalized and vulnerable, were targeted and flagged in the first place, instead of focusing upon extending assistance and empathy. 90 The sociotechnical background behind the deployment of such systems, premised upon optimization logics of machine learning alongside the politically driven ambition of sifting out 'welfare cheats', reduced the vulnerability framing of social welfare recipients into one of efficiency and computational tractability. 91 In turn, the deployment of AI systems can reflect biases, including through problematic associations of fraud with certain ethnicities or other protected grounds. ...
Article
Full-text available
The ubiquitous use of artificial intelligence (AI) can bring about both positive and negative consequences for individuals and societies. On the negative side, there is a concern not only about the impact of AI on first-order discrete individual rights (such as the right to privacy, non-discrimination and freedom of opinion and expression) but also about whether the human rights framework is fit for purpose relative to second-order challenges that cannot be effectively addressed by discrete legal rights focused upon the individual. The purpose of this article is to map the contours and utility of the concept of human dignity in addressing the second-order challenges presented by AI. Four key interpretations of human dignity are identified, namely: non-instrumentalization of the human person; the protection of certain vulnerable classes of persons; the recognition and exercise of inherent self-worth (including the exercise of individual autonomy); and a wider notion of the protection of humanity. Applying these interpretations to AI affordances, the paper argues that human dignity should foreground three second-order challenges, namely: the disembodiment of empiric self-representation and contextual sense-making; the choice architectures for the exercise of cognitive autonomy; and, the experiential context of lived experiences using the normative framework of human vulnerability.
... In particular, implementation, unlike analysis, might be perceived as an inherently human process, requiring capabilities that are simply irreplaceable by a computer (see, e.g., Kasy and Abebe 2021). This is particularly true if implementation involves emotions (Yalcin et al. 2022;Xu 2022;Ranchordas 2022), e.g., allowing a human judge to incorporate equity concerns or compassion. Recall, however, that individuals with a legal profession in our study have an even stronger belief that automation in implementation is likely to yield unfair outcomes. ...
Article
Full-text available
Artificial intelligence plays an increasingly important role in legal disputes, influencing not only the reality outside the court but also the judicial decision-making process itself. While it is clear why judges may generally benefit from technology as a tool for reducing effort costs or increasing accuracy, the presence of technology in the judicial process may also affect the public perception of the courts. In particular, if individuals are averse to adjudication that involves a high degree of automation, particularly given fairness concerns, then judicial technology may yield lower benefits than expected. However, the degree of aversion may well depend on how technology is used, i.e., on the timing and strength of judicial reliance on algorithms. Using an exploratory survey, we investigate whether the stage in which judges turn to algorithms for assistance matters for individual beliefs about the fairness of case outcomes. Specifically, we elicit beliefs about the use of algorithms in four different stages of adjudication: (i) information acquisition, (ii) information analysis, (iii) decision selection, and (iv) decision implementation. Our analysis indicates that individuals generally perceive the use of algorithms as fairer in the information acquisition stage than in other stages. However, individuals with a legal profession also perceive automation in the decision implementation stage as less fair compared to other individuals. Our findings, hence, suggest that individuals do care about how and when algorithms are used in the courts.
... From an individual perspective, the human touch of the application of the law, the ideal of being heard and judged by another human who can show emotions and empathy, gets lost. And yet we need empathy in legal proceedings, which the "digital administrative state" threatens (Ranchordàs 2022). This is true as much for behind-the-counter processing of benefit applications as for more complex situations heard in courts. ...
Article
Full-text available
The field of computational law has increasingly moved into the focus of the scientific community, with recent research analysing its issues and risks. In this article, we seek to draw a structured and comprehensive list of societal issues that the deployment of automatically processable regulation could entail. We do this by systematically exploring attributes of the law that are being challenged through its encoding and by taking stock of what issues current projects in this field raise. This article adds to the current literature not only by providing a needed framework to structure arising issues of computational law but also by bridging the gap between theoretical literature and practical implementation. Key findings of this article are: (1) The primary benefit (efficiency vs. accessibility) sought after when encoding law matters with respect to the issues such an endeavor triggers; (2) Specific characteristics of a project—project type, degree of mediation by computers, and potential for divergence of interests—each impact the overall number of societal issues arising from the implementation of automatically processable regulation.
Article
Social rights have developed in response to 19th-century laissez-faire capitalism. They have given rise to an interventionist welfare state that purports to liberate people from the whims of the market, making use of a plethora of social transfers. Legal doctrine on the right to social security reflects this ‘rationale of decommodification’ by focusing on the promotion of a benefit system which is ‘available, accessible and adequate’. However, with such a system, the welfare state accumulates more and more powers. These may turn against the very people whom it is designed to protect, sometimes with devastating effects for individual claimants. Recent scandals in a number of countries have shown how the ideal of the welfare state may turn into a ‘dystopia’. Legal doctrine pertaining to the right to social security will have to face this challenge. It must be accompanied by stronger qualitative guarantees that protect individuals from social bureaucracy. This contribution proposes three qualitative guarantees of individual treatment to enhance interpretation of the right to social security: compensation, elevation and participation. These standards are not alien to the right to social security, but in mainstream thinking are often given only secondary importance. Now they need to be dusted off and placed centre stage.
Article
While public sector organizations continue to implement information and communication technologies (ICT), it is unclear to what extent digitalization efforts influence the discretion of street‐level bureaucrats. Understanding the relationship between public sector ICT and human discretion is an important consideration for public managers overseeing technology implementation processes and evaluating outcomes. This systematic literature review examines studies that explore the relationship between street‐level discretion and ICT use. The results show there are several mechanisms through which ICTs change human discretion, such as removing face‐to‐face interactions, standardizing processes, enhancing top‐down monitoring, increasing information exchanges, and providing decision supports. Street‐level bureaucrats adopt various strategies to resist ICT‐driven standardization, suggesting that technology implementation may alter workplace dynamics and the professional identities of frontline workers. Revealing the processes of changing discretion has implications for research and practice as more agencies expand their reliance on ICT to deliver public services, thereby incorporating new actors and institutional arrangements.
Chapter
Compared to the private sector, public actors face tougher justification requirements for their decisions as a matter of administrative law; subjects of algorithmic decision systems must in that context be able to scrutinise whether a decision about them complies with all legal protection requirements vis-à-vis the State. More particularly, individuals need to know or understand the reasoning behind an automated decision. In the EU, decision-making in individual cases by public administrations is framed by EU law, EU general principles, and administrative procedures that are essential for administrative justice. The procedural dimension of the rule of law provides, in this sense, natural and legal persons with individual guarantees ensuring, among other rights, a clear understanding of the reasons behind an administrative decision. That individual guarantee stems from the fundamental right to good administration, which is a general principle of EU law, enshrined in Article 41 of the EU Charter of Fundamental Rights. The challenging question arising from the use of AI systems in public administrations is how the requirements imposed by Article 41 of the Charter fit the case of machine learning algorithms, more particularly in terms of the duty of care and the duty to give reasons imposed as a matter of EU law. This chapter highlights the challenges that machine learning algorithms used in fully automated systems and recommender systems raise regarding the fundamental right to good administration. It then offers potential “by design” solutions as well as additional safeguards to tackle these challenges.
Chapter
Constitutional law and administrative law are inseparable partners in protecting fundamental rights. Yet, in the digital state, where rights are increasingly mediated by digital technology, the role of administrative law has become invisible. This invisibility manifests at two levels: first, with digitalisation, government has become invisible to citizens due to the gradual disappearance of human assistance; second, the public sector no longer sees citizens as individuals with rights, perceiving them instead as data points. This invisibility is problematic because when citizens do not see what administrative law entails (and vice versa), many citizens become excluded from exercising their rights on equal terms. This chapter focuses on the interplay between the constitutional granting of rights and the administrative law procedures that operationalise them. It does so by drawing on digital constitutionalism, a scholarly trend which seeks to understand how the values of contemporary constitutionalism are being reshaped in the digital context, namely by the interaction of public and private interests. This chapter seeks to understand the ‘invisibility problem’ within the digital transformation of the public sector by employing a holistic approach that intertwines digital constitutionalism with administrative law.
Article
Full-text available
This paper investigates the adoption of artificial intelligence (AI) in public governance and its impact on socioeconomic welfare, focusing on Slovenian Social Work Centres (SWCs). The objectives are to assess how AI applications align with governance models such as (Neo)Weberian Bureaucracy, New Public Management (NPM), and Good Governance, and to evaluate their effectiveness in promoting socioeconomic welfare. Furthermore, the study aims to identify opportunities and risks associated with AI in public governance and to provide policy recommendations for the ethical and effective integration of AI. A mixed-methods approach is adopted, comprising a comprehensive literature review to develop a theoretical framework, a cross-tabulation analysis of the European Commission's dataset of 686 AI use cases in 27 EU Member States, and a case study of AI implementation in Slovenian SWCs. This includes the analysis of administrative data from 2018–2022 on the e-Welfare platform and analysis of reports from Slovenian oversight bodies such as the Court of Audit, the Administrative Inspection, and the Human Rights Ombudsman. The results show that AI significantly improves administrative efficiency, particularly in the areas of resource management, cost-effectiveness, and service quality, which closely align with NPM principles. However, challenges remain in terms of transparency and accountability, as AI systems are often not transparent, making oversight difficult and jeopardising public trust, especially in the area of social welfare. The study concludes that while AI has significant potential to improve public governance, appropriate regulation and human oversight are essential to mitigate risks and ensure compliance with governance principles. The study provides valuable insights into the role of AI in administrative efficiency and is therefore relevant to policymakers, public officials, and researchers aiming to leverage AI's benefits while ensuring ethical governance and equitable socioeconomic outcomes.
Article
The article presents a systematic review of scholarly articles (n = 168) focused on digital discretion: how digital technologies affect the discretionary autonomy of street-level bureaucrats. Two analytical tasks are performed concerning how digital transformation has affected (1) the relations between top-down governing and the work of street-level bureaucrats and (2) street-level bureaucrats in the encounter with individuals. The documentation shows an expanding research area, and most articles showcase various subjectification effects when street-level bureaucrats become screen- or system-level bureaucrats. Street-level bureaucrats are under increasing pressure from changing political rationalities consisting of streams of scientific knowledge, societal discourses, and the need for changes in governmental practices. The transformation carries political rationalities that fundamentally reshape the value structures, organizing, and working methods of street-level work. Still, the research area needs further development and consolidation. The reviewed research analyzes policies without considering the political content developed during policy initiation, formulation, and decision-making. Instead, the research is characterized by a short-sighted analytical perspective on how individual street-level bureaucrats handle ongoing technological change.
Article
Full-text available
Automated, administrative decision-making (AADM) is a key component in digital government reforms. It represents an aspiration for a better and more efficient administration but also presents challenges to the values of public administration. We systematically review the emerging literature on the use of AADM from the perspective of good governance. Recognizing the inherent tensions among values of public administration, the broad review identifies key synergies, trade-offs and limits of AADM and good governance associated with nine values: accountability, efficiency, equality, fairness, resilience, responsiveness, right to privacy, rule of law, and transparency. While synergies represent "low-hanging fruits", trade-offs and limits are "hard cases" representing challenges to good governance. Taking the specific decision-making context into account, practitioners and scholars should attempt to nurture the "fruits" and lessen the tensions of the "hard cases", thereby increasing the desirable societal outcomes of the use of AADM. Evidence for practice: public authorities employing automated, administrative decision-making, as well as relevant regulators, must be aware of the inherent synergies, trade-offs, and limits vis-à-vis good governance.
Article
Full-text available
The front‐line of the welfare state is increasingly not a letter, phone call or face‐to‐face visit, but an online user‐interface. This ‘interface first’ bureaucracy is a fundamental reshaping of social security administration, but the design and operation of these interfaces is poorly understood. Drawing on interview data from senior civil servants, welfare benefits advisors and claimants on the UK's flagship Universal Credit working‐age benefit, this paper is a detailed analysis of the role played by interfaces in the modern welfare state. Providing examples from across the Universal Credit system, it sets out a five‐fold typology of user‐interface design elements in the social security context: (i) structuring data input, (ii) interaction architecture, (iii) operative controls, (iv) prompting and priming, and (v) integrations. The paper concludes by considering the implications of an ‘interface first’ welfare bureaucracy for future research.
Article
Full-text available
This article focuses on the possibilities and limits of fully automated discretionary administrative decision-making, the role of human oversight, and the scope and intensity of judicial review of such decisions. The study briefly explains some essential concepts linked to AI in order to clarify their use in this analysis. It also briefly examines the traditional concept of administrative discretion in Spain, highlighting its lack of depth and the need to rethink it in light of good administration. On that basis, it addresses the legal limits of AI and argues that, given the current state of the art, these limits should preclude its use for the full automation of the exercise of discretionary powers. That limitation leads to what is termed a ‘reserve of humanity’, which is then developed. The analysis concludes by discussing how judicial review techniques can be adapted to the exercise of administrative powers with AI, the possible problems of indifference and insufficiency that such judicial review may present, and the level of intensity of judicial review that should be developed.
Article
Full-text available
Learning from control deficits in the childcare benefits scandal: A plea for multi-level analysis in law and policy research. The Dutch childcare benefits scandal has attracted much attention from law and policy scholars in recent years. The scandal stands out not only because of its disastrous outcomes, but also because of a toxic combination of errors on multiple levels of government. In this essay, the authors make a plea for a reappraisal of multi-level analysis in law and policy research. The importance of such an approach is demonstrated through an examination of ‘control deficits’ in the childcare benefits scandal, with attention to the state, organizational and individual levels of government. This yields conclusions about control deficits in the digital welfare state, as well as a number of broader reflections on the value of multi-level analysis in the study and practice of government.
Article
Religious symbols, such as the hijab, are often deemed undesirable or banned in public employment. We test whether clients’ perceptions and their performance are influenced by a hijab-wearing public servant, and further test whether clients’ reflections on the public servant’s empathy or professionalism mitigate potential negative effects. We preregistered and conducted a two-step 2 × 3 between-subjects experiment (n = 2,680; representative sample in Austria). We find no evidence that the wearing of a hijab by a public servant negatively influences clients’ perceptions, nor their performance during a public service process. Clients’ reflections with respect to professionalism or empathy, however, are related to their performance: clients’ positive reflection on public servants’ empathy or professionalism (independent of whether the public servant wears a hijab or not) positively relates to their performance in terms of task correctness. We discuss the relevance of these results regarding religious stereotyping and public employment policies.
Article
Full-text available
Social and other administrative procedures are gaining importance because of the increasing complexity of administrative relationships brought about by the Covid-19 pandemic, digitalisation, and other societal changes. When exercising social rights, procedural elements should be seen – both at the level of regulation and enforcement of the rules – as factors contributing to the welfare state, the rule of law, and good administration, and not as an excuse for a bureaucratic attitude. In view of the multifunctionality of social procedures, including their causal-functional role in social relationships and their potential for a critical value-based evaluation of the current regulation, the rationale for this study is to assess the impact of the Covid-19 pandemic on special administrative procedures conducted by the 16 social work centres (SWCs) in Slovenia. A special emphasis is placed on the informational calculation of social assistance payments, such as child benefits, kindergarten subsidies or state scholarships – by far the most numerous procedures involving social rights in Slovenia, with over one million cases annually. Drawing upon a normative analysis, available statistics, semi-structured interviews with SWC managers and surveys among employees, the findings reveal that the response of SWCs to the crisis has improved. However, largely due to the lack of coordination on the part of the line ministry, the simplifications introduced mainly benefit the public administration rather than particularly vulnerable parties to the procedure. Consequently, there is a need to pay greater attention to providing the parties with adequate protection of their constitutional rights and other elements of good public governance. Points for practitioners: In addition to analysing the direct practical implications of the legislative, organisational, and IT adaptations to the Covid-19 pandemic, the article provides a broader study of the multifunctionality of social procedures and their role in ensuring citizens’ fundamental rights in times of socially unstable conditions. The findings are thus directly applicable for practitioners deciding on social procedures in the broader European setting, and for policymakers and legislators in the respective fields. As the conclusions are grounded in a strong methodological framework, this should contribute to advocating the much-needed change in ensuring the protection of basic constitutional rights in social procedures in times of crisis in Central Europe and beyond.
Article
Full-text available
To err is human, and as such, administrative errors are an inevitable component of current and future welfare state bureaucracy. Hitherto, while studies on administrative burden have shown us that routine interactions with welfare bureaucracy are often burdensome, very little is known about the nature of these interactions when something goes wrong. Most social policy and public administration scholarship focuses on ex-ante analysis of administrative errors, with only scant research devoted to ex-post analysis of how claimants experience such errors once they occur, and the types of costs they may incur. This article contributes to the growing field of administrative burden research by examining welfare claimants' experiences of administrative errors. Analysis of 19 interviews with Israeli benefit recipients uncovered two themes. The first related to the process of correcting errors, including identifying and communicating them to the system. The second theme addressed the consequences of errors: on the one hand, economic and emotional costs including loss of trust in the system, and on the other, the acquisition of bureaucratic skills. These findings highlight bureaucratic errors as a critical and unique site of learning burden, as well as the need for human contact to help claimants better deal with the consequences of such errors.
Article
Full-text available
System failure in the digital welfare state: Exploring parliamentary and judicial control in the Dutch childcare benefits scandal. In recent years, there have been multiple policy fiascos (large-scale policy failures) in the digitalized enforcement of social security across welfare states. Besides similarities in outcomes, these cases have exposed far-reaching consequences of ‘rule of law system weaknesses’ in the form of deficits in parliamentary and judicial control. While there has been increasing attention to the vulnerability of benefit recipients in the digital welfare state, the importance of parliamentary and judicial control and the problematic nature of deficits in these control mechanisms has remained largely overlooked. This article takes a first step towards an understanding of this issue. It (a) examines the linkages between parliamentary and judicial control and policy failure and (b) analyses the role of deficits in these control mechanisms in the Dutch childcare benefits scandal. The results demonstrate the combined relevance of a variety of control deficits related to the legislative process, parliamentary scrutiny and judicial review. Moreover, interdependencies between parliamentary and judicial control highlight the importance of holistically analysing their combined influence. All in all, the article facilitates a deeper understanding of system failure in the digital welfare state and provides recommendations for future research.
Article
Full-text available
Digital constitutionalism is a strand of scholarship that focuses on the relationship between constitutional law and the socio-legal challenges posed by the digital revolution. However, such scholarship often builds uncritically on the tenets of liberal, state-centred constitutional theory, giving rise to contradictions between analytical starting points and normative aspirations. Against this background, with an approach inspired by societal constitutionalism, this article engages with digital constitutionalism as both an object and a means of critique. As an object, digital constitutionalism is assessed in the light of its contradictions. As a means, digital constitutionalism is used to assess the limits of traditional, liberal, state-centred constitutional theory. In other words, societal constitutionalism is the theoretical lens used to both deconstruct and reconstruct digital constitutionalism according to its normative aspirations. The article has three main goals: first, linking different discourses within digital constitutionalism, highlighting its critical potential; second, advancing some proposals based on such reflections; and third, bringing digital constitutionalism closer to the broader global constitutionalism discourse. After an overview of societal constitutionalism, the article focuses on digital constitutionalism’s definition and three functionally differentiated systems: politics, economy and law. For each of them, it highlights analytical and normative gains deriving from the societal constitutionalism-based approach as well as policy proposals to be developed further.
Chapter
Machine learning is increasingly an important factor in sociotechnical change. At the same time, it is itself a product of the realities whose reproduction it participates in, both through practical applications and as an object of speculation. The contributions in this volume address current manifestations of machine learning as phenomena that generate epistemic uncertainty and reconfigure the conditions of sociality. They meet this challenge by analysing concrete methods in their societal embedding and by critically reflecting on existing theoretical characterisations of so-called artificial intelligence.
Preprint
Full-text available
Appropriately regulating artificial intelligence is an increasingly urgent policy challenge. Legislatures and regulators lack the specialized knowledge required to best translate public demands into legal requirements. Overreliance on industry self-regulation fails to hold producers and users of AI systems accountable to democratic demands. Regulatory markets, in which governments require the targets of regulation to purchase regulatory services from a private regulator, are proposed. This approach to AI regulation could overcome the limitations of both command-and-control regulation and self-regulation. Regulatory markets could enable governments to establish policy priorities for the regulation of AI, whilst relying on market forces and industry R&D efforts to pioneer the methods of regulation that best achieve policymakers' stated objectives.
Chapter
Many countries experience waning public trust in government, while digitisation of society is on the rise and public authorities continue to further digitise their services. If governments do not pay enough attention to the perspective of citizens and fail to adopt a citizen-centric approach when designing digital services, the accessibility of government services may decrease and public trust may erode further. Against this background, this chapter explores the digitisation of government services from a citizen’s perspective, with a special focus on the role of the law and legal professionals. The chapter is based on the authors’ experiences over the past years with using digitisation to enhance citizens’ access to the law and to improve the way in which (internal and external) processes of public legal departments work. In current practice, the logic of the law often requires that citizens split up their problems or requests for help and approach different government bodies for different aspects of their problem. Well-designed digital government services put the real-life problems that citizens experience first and combine all regulations that offer possible solutions, thus reducing complexity for the citizen and increasing their capacity to act. The limited capacity for personal contact can then be reserved for those citizens who need it. From a legal perspective, it is essential that government services are based on a traceable and transparent connection to the underlying regulations, using external knowledge models. Current government practices leave a lot to be desired in this respect. Methodologies are available to create these connections and are an important element for designing legally sound and citizen-centric government services.
Chapter
Developments over the last decade in the use of AI in tax administration have been nothing short of remarkable. Not only are taxpayers increasingly making use of automated systems in tax compliance, but perhaps more importantly, tax enforcement is increasingly reliant on new technologies as compliance-enhancing and fraud-prevention tools. However, whilst the use of AI brings very significant advantages to both the efficiency and the equity of tax systems, it also carries important risks. This paper identifies the development of a new AI fallacy within the tax policy sphere, namely that of unconstrained success: the belief that the use of AI in tax compliance and enforcement can compensate for the deficiencies of the tax law. This paper considers the rationale behind the development of this fallacy, in particular the political and institutional dynamics involved in the approval of new tax legislation. It concludes that maximising the benefits of the use of AI in tax compliance and enforcement requires a departure from this fallacy, and the recognition of the wider dynamics of the tax policy-administration symbiosis.
Article
Full-text available
This paper gives an overview of e-government development in Estonia. The analysis incorporates both public sector initiatives and private sector developments which have contributed to the evolution of e-government. Private sector agents are seen as endogenous, not exogenous, in explaining e-government performance. Ultimately, the development of Internet banking by the private sector was fundamental in enabling the government to launch interactive online services. The findings reveal that the implementation of Estonian e-government is considerably more heterogeneous than previous studies have indicated. Basic service delivery and platforms for participation vary significantly across functional areas. Some ministries have provided innovative online services for the last ten years, while others still struggle to make basic information available online. The availability of innovative platforms for online political participation delivered remarkable outcomes in the last elections, yet these platforms have consistently failed to engage the public in the legislative process.
Article
In 2020, a Dutch court passed judgment in a case about a digital welfare fraud detection system called Systeem Risico Indicatie (SyRI). The court ruled that the SyRI legislation is unlawful because it does not comply with the right to privacy under the European Convention on Human Rights. In this article we analyse the judgment and its implications. This ruling is one of the first in which a court has invalidated a welfare fraud detection system for breaching the right to privacy. We show that the immediate effects of the judgment are limited. The judgment does not say much about automated fraud detection systems in general, because it is limited to the circumstances of the case. Still, the judgment is important. It reminds policymakers that fraud detection must happen in a way that respects data protection principles and the right to privacy. It also confirms the importance of transparency if personal data are used.