Article

Beyond the bubble that is Robodebt: How governments that lose integrity threaten democracy

Author: Valerie Braithwaite

Abstract

Robodebt describes the automated process of matching the Australian Taxation Office's income data with social welfare recipients' reports of income to Centrelink. Discrepancies signalling benefit overpayment trigger debt notices. The scheme has been criticised for inaccurate assessments, illegality, shifting the onus of proof of debt onto welfare recipients, poor support and communication, and coercive debt collection. Beyond immediate concerns of citizen harm, Robodebt harms democratic governance. Through persisting with Robodebt, the government is launching a regulatory assault on its own integrity. Two Senate inquiries reveal government endorsing (1) incoherence and inconsistency in public engagement, (2) unsound purposes and processes and (3) disregard for citizens. Such actions destroy trustworthiness. Citizens keep their distance and, as a result, cooperation falters. At particular risk is the tax system. Citizens harmed by government turn to alternative authorities for help and opportunity, not always along legitimate pathways. The underground economy provides one such opportunity for fearful welfare recipients.


... The OCI used a data cross-matching algorithm to compare earnings recorded on a customer's Centrelink record with historical employer-reported income data from the Australian Taxation Office and issued automated debt raising and recovery notifications whenever debts were detected. The system replaced a prior process in which departmental officials evaluated discrepancies, chased down employer records, and assessed accuracy before issuing debt notifications (Braithwaite, 2020). This fully automated system became known colloquially as "Robodebt," and its error-prone nature made it so controversial that it soon became the subject of a report by the Commonwealth Ombudsman (Glenn, 2017). ...
... The data was not capable of reflecting the variations in loading or entitlements that increasingly occur in the modern workplace (O'Donovan, 2020, p. 36). Braithwaite (2020) highlighted that "stigma surrounding social welfare recipients and public outrage around welfare fraud have meant that the government has been able to claim social licence to run its Robodebt programme without being held accountable" (p. 244). Furthermore, Braithwaite (2020) argued that Robodebt not only inflicted harms on individual citizens but that stubborn persistence with a failing system undermined the government's integrity and threatened democracy by eroding the trust of citizens. ...
... Braithwaite (2020) highlighted that "stigma surrounding social welfare recipients and public outrage around welfare fraud have meant that the government has been able to claim social licence to run its Robodebt programme without being held accountable" (p. 244). Furthermore, Braithwaite (2020) argued that Robodebt not only inflicted harms on individual citizens but that stubborn persistence with a failing system undermined the government's integrity and threatened democracy by eroding the trust of citizens. Referring to the idea of procedural justice, Braithwaite (2020) contended that: ...
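The income-averaging logic widely reported to underlie the OCI cross-match can be sketched in a few lines. This is a minimal illustration of why the approach was error-prone for workers with variable earnings; the function names, threshold, and figures are illustrative assumptions, not the actual government implementation.

```python
# Hypothetical sketch of an income-averaging cross-match of the kind
# widely reported to underlie Robodebt. Illustrative only.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_ato_income: float) -> float:
    """Spread annual employer-reported income evenly across fortnights."""
    return annual_ato_income / FORTNIGHTS_PER_YEAR

def flag_discrepancies(annual_ato_income, reported_fortnightly, threshold=0.0):
    """Flag each fortnight where the averaged annual figure exceeds
    the amount the recipient actually reported to Centrelink."""
    avg = averaged_fortnightly_income(annual_ato_income)
    return [i for i, r in enumerate(reported_fortnightly) if avg - r > threshold]

# A casual worker who earned $13,000 in the first half of the year and
# nothing in the second half, and who reported every fortnight accurately:
reported = [1000.0] * 13 + [0.0] * 13
flagged = flag_discrepancies(13000.0, reported)
print(len(flagged))  # averaging still flags the 13 zero-earning fortnights
```

The averaged figure ($500 per fortnight) matches neither half of the year, so an accurate reporter is flagged as having under-reported in every fortnight with no earnings, which is the core accuracy criticism raised in the excerpts above.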
Book
Social Work in an Online World addresses the shift from analog to digital practice in social work with varied client systems and at varied system levels (micro, mezzo, and macro). Going beyond online mental health services, which are largely individually focused and synchronously delivered, the authors offer a map of digital social work practice that can be expanded to include support, identity, community action, education, and psychoeducation. In addition, the book places special emphasis on digital equity and data justice, highlighting the core social work value of social justice. Social Work in an Online World demonstrates that the shift to hybrid and digital practice is moving forward, largely positively, for social workers and for those they seek to serve. Readers wishing to adopt digital practices will be inspired by this ground-breaking guide to apply these standards in their own practice and applications. Contents, sample chapter, and pre-orders are available through NASW Press: https://www.naswpress.org/product/53673/available-for-pre-order-social-work-in-an-online-world
Chapter
Full-text available
This chapter begins with an introduction to datafication and its societal implications, and then explores, using three short case studies, issues with the use of algorithms in government services. The chapter concludes by considering the strategies that social and community workers might employ to achieve data justice.
... While misconceptions about personal data in the commercial sector might lead to lost customers and reduced revenue due to wrong inferences, within government initiatives they can lead to severe and costly mismanagement of programs, for example in the public health or social security domains, as well as a more general loss of public trust in government [14,49,54]. In the context of research, assumptions and misconceptions about population data and how these are used for research studies, potentially after being linked with other data, can lead to wrong outcomes that result in conclusions with severe negative impact in the real world [12,34]. ...
... In more recent times such techniques have seen much wider use in domains ranging from business intelligence to social science research. Within governments, record linkage is, for example, being used to find welfare fraudsters [14,21], and in national security to identify terrorism suspects [18]. Modern record linkage techniques are based on sophisticated statistical or machine learning based approaches [25,45] and are capable of producing high quality linked data sets. ...
... Unless relevant metadata describing such changes are available, it can be challenging to identify any changed definitions because the change might only have subtle effects on the characteristics of the population of interest for a research study. (14) Temporal data aspects do not matter. Given the dynamic nature of personal details, the time and date when population data are measured and included in a database can be crucial, because differences in data lag can lead to inconsistent data, making them unsuitable for research studies [13,34]. ...
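As a toy illustration of the record-linkage techniques discussed in the excerpts above, the sketch below matches records across two databases using a character-bigram similarity score with a threshold. It is a deliberately simplified stand-in for the sophisticated statistical and machine-learning approaches the literature describes, not any production linkage system; all names and thresholds are invented.

```python
# Toy record linkage: match records by name similarity. Illustrative only;
# real systems use blocking, multiple fields, and trained classifiers.

def bigrams(s: str) -> set:
    """Character bigrams of a lowercased, space-stripped string."""
    s = s.lower().replace(" ", "")
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice_similarity(a: str, b: str) -> float:
    """Sorensen-Dice coefficient over character bigrams, in [0, 1]."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

def link(records_a, records_b, threshold=0.8):
    """Pair each record in A with its best match in B above the threshold."""
    links = []
    for ra in records_a:
        best = max(records_b, key=lambda rb: dice_similarity(ra["name"], rb["name"]))
        if dice_similarity(ra["name"], best["name"]) >= threshold:
            links.append((ra["id"], best["id"]))
    return links

a = [{"id": 1, "name": "Jane Citizen"}]
b = [{"id": 10, "name": "Jane Citzen"}, {"id": 11, "name": "John Smith"}]
print(link(a, b))  # tolerates the misspelling and links 1 -> 10
```

Even this toy version shows why linkage quality matters for the misconceptions discussed above: the threshold trades false links against missed links, and neither error is visible in the linked data set itself.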
Preprint
Full-text available
Databases covering all individuals of a population are increasingly used for research studies in domains ranging from public health to the social sciences. There is also growing interest by governments and businesses to use population data to support data-driven decision making. The massive size of such databases is often mistaken as a guarantee for valid inferences on the population of interest. However, population data have characteristics that make them challenging to use, including various assumptions being made how such data were collected and what types of processing have been applied to them. Furthermore, the full potential of population data can often only be unlocked when such data are linked to other databases, a process that adds fresh challenges. This article discusses a diverse range of misconceptions about population data that we believe anybody who works with such data needs to be aware of. Many of these misconceptions are not well documented in scientific publications but only discussed anecdotally among researchers and practitioners. We conclude with a set of recommendations for inference when using population data.
... Aimed at eliminating fraudulent welfare payments and recovering any benefit overpayments, the scheme generated thousands of inappropriate debt notifications to people who were incorrectly identified as owing debts for previously overpaid unemployment benefits. The welfare system then dealt aggressively with the collection of the identified debts (Butler, 2018) via the use of debt collection agents (Alcorn, 2017) and poorly communicated messages to affected welfare recipients (Braithwaite, 2020). ...
... Two Australian Senate inquiries found that the government had endorsed unintelligibility and irregularity in engagement with the public, as well as fallacious aims and procedures and an indifference to the rights of citizens (Braithwaite, 2020). The scheme was described by the first inquiry as entailing procedural unfairness and as disempowering people, causing distress, emotional trauma, and shame. ...
... Other forms of policy can constrain the ways that people can use their financial capability, based upon specific personal factors. The recent 'Robodebt' policy, which matched financial records from the Australian Taxation Office to Centrelink data, resulted in some people having false debts created and then having to deal with debt collection agencies, in some cases leading to a loss of money for people, many of whom were in low-income households [227]. Many individuals who were not familiar with debt collection processes were unsure of how to handle their debt notice, which resulted in falsely raised debts being paid [227]. ...
Research
Full-text available
This report brings together current research to provide a theoretical model of the current financial wellbeing environment in the Australian context. We describe how financial wellbeing is related to people, how the pandemic has depleted financial wellbeing for many, how it relates to policy and how various programs have responded to financial wellbeing issues. The complex interactions between the different dimensions that drive financial wellbeing demonstrate the need for a more nuanced approach to financial wellbeing that can integrate these different areas into an overall model of financial wellbeing. We argue that there needs to be more attention given to structural drivers of financial wellbeing, and that adopting a systems approach to financial wellbeing is the best way to do this. While there are a number of actors in the Australian ecosystem who work to drive structural change, and who already are employing systems-based approaches, there is scope for greater coordination in these efforts.
... Robodebt has been criticised for inaccurate assessments, illegality, shifting the burden of proof of debt onto welfare recipients, poor support and communication, and coercive debt collection (Braithwaite, 2020). Thus, in 2017, within a year of its implementation, it came under scrutiny by public bodies, while at the same time, activists from non-governmental organisations began to raise awareness of the system's shortcomings. ...
Book
Full-text available
It is with great pleasure that we present the first of three monographs describing the research results on combating corruption and fraud. Combating corruption begins with awareness of the problem. Sharing the knowledge of the Visegrad Group countries and Ukraine in counteracting corrupt behaviour has a significant cognitive and educational aspect. Additionally, the inclusion of technological tools allows for reducing the negative consequences of such behaviour.
... As a result, the program was heavily condemned by the public, deemed unlawful in 2019, and eventually shut down. Additionally, after an inquiry by Australia's Royal Commission, the government later paid out $1.8 billion in refunds and compensation as a result of a class-action lawsuit (Braithwaite 2020). ...
Article
Full-text available
In an era defined by the global surge in the adoption of AI-enabled technologies within public administration, the promises of efficiency and progress are being overshadowed by instances of deepening social inequality, particularly among vulnerable populations. To address this issue, we argue that democratizing AI is a pivotal step toward fostering trust, equity, and fairness within our societies. This article navigates the existing debates surrounding AI democratization but also endeavors to revive and adapt the historical social justice framework, maximum feasible participation, for contemporary participatory applications in deploying AI-enabled technologies in public administration. In our exploration of the multifaceted dimensions of AI’s impact on public administration, we provide a roadmap that can lead beyond rhetoric to practical solutions in the integration of AI in public administration.
... Otherwise, why would they exist? Such logic can lead to disasters such as the Australian Robodebt scandal (Braithwaite, 2020) or the British Post Office's pursuit of sub-postmasters erroneously thought to be fraudulent (Christie, 2020). In both cases, misplaced faith in technical rationality was the fundamental error. ...
Book
This captivating book delves into the complex realm of management and organizational dynamics, focusing on the significance of paradoxes. From the intricate interplay between social obligations and business missions to the tension between stability and change, this textbook unveils the essence of these enduring contradictions. As organizations evolve, paradoxes become defining features, and illustrative cases of this are included throughout the textbook. Organizational Paradoxes equips students of organization management, organization change and strategy with an understanding of paradoxes, their philosophical underpinnings and their management in practice.
... One of the most egregious examples of data harms is the widely publicised case of robodebt, an automated debt assessment and recovery scheme implemented by Services Australia as part of its income compliance programme (Carney, 2018a, 2018b). The impact of robodebt, in terms of the personal and societal trauma inflicted upon millions of Australians wrongly accused of welfare fraud, has been extensively documented (Braithwaite, 2020; Graycar & Masters, 2022; Nikidehaghani et al., 2023; O'Donovan, 2019), thanks in part to a nation-wide Royal Commission (Robodebt Royal Commission, 2023). These incidents have raised awareness both within the Australian government and among the general public about the diverse risks, harms and injustices arising from the government use of automated decision-making (ADM) systems. ...
Article
Full-text available
In recent years, Australia has embarked on a digital transformation of its social services, with the primary goal of creating user‐centric services that are more attentive to the needs of citizens. This article examines operational and technological changes within Australia's National Disability Insurance Scheme (NDIS) as a result of this comprehensive government digital transformation strategy. It discusses the effectiveness of these changes in enhancing outcomes for users of the scheme. Specifically, the focus is on the National Disability Insurance Agency's (NDIA) use of algorithmic decision support systems to aid in the development of personalised support plans. This administrative process, we show, incorporates several automated elements that raise concerns about substantive fairness, accountability, transparency and participation in decision making. The conclusion drawn is that algorithmic systems exercise various forms of state power, but in this case, their subterranean administrative character positions them as “algorithmic grey holes”—spaces effectively beyond recourse to legal remedies and more suited to redress by holistic and systemic accountability reforms advocated by algorithmic justice scholarship.
... These practices are thought to infringe such values as equality (justice), integrity (nonmaleficence), and choice (freedom) and require further investigation as well as society and state involvement. The purposeful design of Robodebt, the Australian government's machine learning mechanism for debt recovery, was deemed to produce false or incorrectly calculated debt notices, eventually equating to government extortion (Braithwaite, 2020; Martin, 2018). As observed by Carney (2019), such legal errors of the Robodebt programme were due to the government's rushed design, which did not follow the legal and ethical standards on machine learning provided by the Administrative Review Council in 2004, including breaches of solidarity, dignity, transparency, and trust, and which eventually led to the programme being shut down and unfair recoveries being repaid. ...
Article
Full-text available
Ethical conduct of artificial intelligence (AI) is undoubtedly becoming an ever more pressing issue considering the inevitable integration of these technologies into our lives. The literature so far discussed the responsibility domains of AI; this study asks the question of how to instil ethicality into AI technologies. Through a three‐step review of the AI ethics literature, we find that (i) the literature is weak in identifying solutions in ensuring ethical conduct of AI, (ii) the role of professional conduct is underexplored, and (iii) based on the values extracted from studies about AI ethical breaches, we thus propose a conceptual framework that offers professionalism as a solution in ensuring ethical AI. The framework stipulates fairness, nonmaleficence, responsibility, freedom, and trust as values necessary for developers and operators, as well as transparency, privacy, fairness, trust, solidarity, and sustainability as organizational values to ensure sustainability in ethical development and operation of AI.
... Failed cases of AI-based public services have provided vital lessons about using automation in public services and the need for greater scrutiny and better practices. Outside the EU, the so-called "Robodebt" case in Australia showed that a data-driven service in the public sector, introduced under the pretence of "better management of the welfare system" [3], could have the opposite effect of instrumentalizing accounting techniques. This could then intensify inequalities by generating unlawful debt schemes affecting already vulnerable people [23]. ...
... The case concerns a major policy controversy of the type that Schön and Rein (1994) saw as requiring frame reflection if a productive resolution were to be attained. Significantly for our purposes, this Netherlands case has parallels with recent instances in other jurisdictions where the application of artificial intelligence to support governmental processes has produced serious unintended negative effects, such as the 'Robodebt' scandal in Australia (Braithwaite, 2020) or the exam grading fiasco in the United Kingdom (Kippin & Cairney, 2022). Our case concerns the Netherlands government's System Risk Indication (SyRI) program, an artificial intelligence-based system that used a self-learning algorithm to detect suspected fraudulent behaviour. ...
Article
Full-text available
Scholarship on evidence-based policy, a subset of the policy analysis literature, largely assumes information is produced and consumed by humans. However, due to the expansion of artificial intelligence in the public sector, debates no longer capture the full range of concerns. Here, we derive a typology of arguments on evidence-based policy that performs two functions: taken separately, the categories serve as directions in which debates may proceed, in light of advances in technology; taken together, the categories act as a set of frames through which the use of evidence in policy making might be understood. Using a case of welfare fraud detection in the Netherlands, we show how the acknowledgement of divergent frames can enable a holistic analysis of evidence use in policy making that considers the ethical issues inherent in automated data processing. We argue that such an analysis will enhance the real-world relevance of the evidence-based policy paradigm.
... Recent research in Australia by the Human Rights Commission has shown that while 46% of Australians were unaware that they were being processed by algorithmic tools in their interactions with government, a vast majority (88%) wanted to know the purpose behind the use of such tools, with 87% requiring the ability to appeal decisions made through algorithmic technologies (Australian Human Rights Commission, 2021). This percentage may be higher than that in other countries, however, because Australia has experienced recent high-profile failures of algorithmic technologies through both the Robodebt scheme (Braithwaite, 2020) and covert biometric surveillance, such as federal police use of Clearview AI facial recognition without judicial oversight (Ryan, 2020). When algorithms allocate access to public services, social sorting can result in severe access limitations, leading to staunch and highly public critique of the policy and implementation failures, further diminishing public faith in algorithmic governance. ...
Chapter
Full-text available
... While suitable help was arranged to assist the individual, it is a reminder of the profound responsibility researchers have during the design phase of diverse digital solutions for addressing mental health problems, and specifically, towards the end-users of our innovations. Others have identified the potential for harms to arise when using AI as a factor for consideration when developing technological innovations for people who have pre-existing or emerging mental illness and/or suicidality [2]. Additionally, the release of the World Health Organization's (WHO) Ethics and Governance of Artificial Intelligence in Health Care suggests that future innovations should be iatrogenically sound to ensure the safety of at-risk populations [3]. ...
Article
Full-text available
Background. The prominence of technology in modern life cannot be understated. However, for some people, these innovations or their related plausible advancements can be associated with perceptual misinterpretation and/or incorporation into delusional concepts. Objective. This paper aims to explore the intersection of technological advancement and experiencing psychosis. We present a discussion about the explanation seeking that incorporates the concept, that for some people, of technological innovation becoming intertwined with delusional symptoms over the past 100 years. Methods. A longitudinal review of the literature was conducted to synthesize and draw these concepts together, mapping them to a timeline that aligns computing science and healthcare expertise and presents the significant technological changes of the modern era charted against mental health milestones and reports of technology-related delusions. Results. It is possible for technology to be incorporated into the content of delusions with evidence supporting a link between the rate of technological change, the content of delusions, and the use of technology as a way of seeking an explanation. Moreover, analysis suggests a need to better understand how innovations may impact the mental health of people at risk of psychosis and other mental health conditions. Conclusions. Clinical experts and lived experience experts need to be informed about and collaborate with future research and development of technology, specifically artificial intelligence and machine learning, early in the development cycle. This concurs with other artificial intelligence research recommendations calling for design attention to the development and implementation of technological innovation applied in a mental health context.
... Such over-expectations might cause costly mismanagement in areas such as public health or in government decision-making. Furthermore, failing population data projects, such as census operations or health surveillance, might even result in the loss of trust in governments and science by the public [8,11]. In the context of research, myths and misconceptions about population data can lead to wrong outcomes of research studies that can result in conclusions with severe negative impact [12,13]. ...
Article
Full-text available
Databases covering all individuals of a population are increasingly used for research and decision-making. The massive size of such databases is often mistaken as a guarantee for valid inferences. However, population data have characteristics that make them challenging to use. Various assumptions on population coverage and data quality are commonly made, including how such data were captured and what types of processing have been applied to them. Furthermore, the full potential of population data can often only be unlocked when such data are linked to other databases. Record linkage often implies subtle technical problems, which are easily missed. We discuss a diverse range of myths and misconceptions relevant for anybody capturing, processing, linking, or analysing population data. Remarkably, many of these myths and misconceptions are due to the social nature of data collections and are therefore missed by purely technical accounts of data processing. Many are also not well documented in scientific publications. We conclude with a set of recommendations for using population data.
... The scandal could occur because the administration rushed the implementation and had not developed sufficiently robust ethical requirements, combined with the absence of suitable democratic or legal oversight mechanisms and a lack of transparency and of emphasis on rule-of-law and legislative requirements (Carney, 2018). It has been pointed out that the scandal is of such a scale that it is liable to undermine trust in the Australian state (Braithwaite, 2020). ...
Article
Full-text available
Artificial intelligence is increasingly used to streamline the Norwegian welfare state. The use of artificial intelligence challenges the current mechanisms to ensure that the welfare state is kept within the rule of law. The capacity and digitization of the welfare services, and, in particular, the use of artificial intelligence, entails a considerable risk of systemic error. There are already manifold examples of things that can go spectacularly wrong. The Norwegian rule of law principles must be rethought, and there is a need to articulate a new rule of law principle – systematic work to prevent systematic errors. Only with such principles integrated in digitalization processes can artificial intelligence serve to strengthen the welfare state.
... Braithwaite 2020. ...
Preprint
Full-text available
Thomas R. Dye’s much cited definition of public policy as whatever governments choose to do or not do, i.e., government action and inaction, helps us understand the parameters of what policy is but says very little about the dynamics that produce government policy choice. Critical policy studies offers one way to understand these dynamics, the power relations that produce them, and a means to evaluate policy against democratic and social justice values. Critical policy studies is different to more rationalist forms of policy analysis, in that it rejects the notion that policy can be designed and implemented in a neutral and scientific fashion, free from interests, values and ideologies. This claim, and scholarly focus, is important to note as it underpins critical policy studies’ research themes – the analysis of the social construction of policies to unpack common knowledge, perceptions, values, ideologies and power relations, and evaluate them against social justice and democratic ideals and values. The chapter proceeds in three main sections. Firstly, the origins of critical policy studies are examined and critical policy studies is defined. Critical policy studies’ relation, and reaction, to the work of Harold Laswell and the policy sciences is especially examined. Secondly, the relation of critical theory to critical policy studies is unpacked, sketching the links between Marxist theory to present day critical theory. In the third section, three common critical policy studies themes are analysed: technocratic policy, power and democracy; social construction in the policy process; and policy discourses. A case study in Australian politics and policy is provided for each theme: Robodebt; sexual and gender based violence; and COVID governance of Indigenous communities. The chapter concludes by drawing out key themes for students of critical policy studies to use in their own analyses and evaluations of policy.
... Organizations should indeed uphold their own moral fortitude amidst any possible problems with employees' personal values or any flaws in character that could motivate them to flout institutional rules. Therefore, there may be a disagreement between these different sorts of reliability and conflict between promoting personal integrity and developing incorruptible institutions and processes (Braithwaite, 2020; Seibel, 2020). Moreover, to ensure the employees act ethically and with integrity in the private sector. ...
Article
Full-text available
The study examines the mediation effect of employee accountability on the relationship between working conditions and organizational health. The data were collected using a survey questionnaire on a sample of 311 elementary school teachers from public schools in North and South District of Kiblawan, Davao del Sur. The study employed a correlational and causal approach using Path Analysis to determine the relationships between working conditions, organizational health, and employee accountability. Findings revealed that working conditions and organizational health are positively and significantly related. Moreover, there is also a significant and positive relationship between working conditions and employee accountability. Results also indicated that employee accountability and organizational health are significantly and positively related. Using the Path Analysis, the mediation model suggested that employee accountability partially mediates the positive relationship between working conditions and organizational health. Specifically, the total effect of working conditions on organizational health is mediated by or passes through employee accountability. The remaining is attributed to the direct impact of working conditions or indirect effect through the mediation of other variables that are not considered in the study.
... The resulting bureaucratic inhumanity is evident in millions of unanswered phone calls every year (Dingwall, 2018; Whyte, 2020b) and the 'Robodebt' scandal, an automated debt-recovery program aimed at income-support beneficiaries that was ultimately ruled illegal by the Federal Court (Medhora, 2019). Valerie Braithwaite (2020) argues the harms of Robodebt go beyond the immediate harm to citizens, to harming trust in government and threatening democracy. ...
... This finding shows that the replacement of people-centered services with robots and machines is a real fear for consumers. This may be attributed to people obtaining much of their understanding from popular media (ie, films [52]) or past negative experiences with common automated services such as banking (which was a comparison noted by many participants) or the very poorly received Australian debt recovery program, Robodebt [53]. Such preconceptions about automation clearly had a major impact on the reasons community and help-seeker participants provided for not using Lifeline's services if technology enhancements were introduced, which would need to be carefully addressed if AI is to be used effectively to support human decision-making processes in crisis support contexts. ...
Article
Full-text available
Background Emerging technologies, such as artificial intelligence (AI), have the potential to enhance service responsiveness and quality, improve reach to underserved groups, and help address the lack of workforce capacity in health and mental health care. However, little research has been conducted on the acceptability of AI, particularly in mental health and crisis support, and how this may inform the development of responsible and responsive innovation in the area. Objective This study aims to explore the level of support for the use of technology and automation, such as AI, in Lifeline’s crisis support services in Australia; the likelihood of service use if technology and automation were implemented; the impact of demographic characteristics on the level of support and likelihood of service use; and reasons for not using Lifeline’s crisis support services if technology and automation were implemented in the future. Methods A mixed methods study involving a computer-assisted telephone interview and a web-based survey was undertaken from 2019 to 2020 to explore expectations and anticipated outcomes of Lifeline’s crisis support services in a nationally representative community sample (n=1300) and a Lifeline help-seeker sample (n=553). Participants were aged between 18 and 93 years. Quantitative descriptive analysis, binary logistic regression models, and qualitative thematic analysis were conducted to address the research objectives. Results One-third of the community and help-seeker participants did not support the collection of information about service users through technology and automation (ie, via AI), and approximately half of the participants reported that they would be less likely to use the service if automation was introduced. Significant demographic differences were observed between the community and help-seeker samples. 
Of the demographics, only older age predicted being less likely to endorse technology and automation to tailor Lifeline’s crisis support service and use such services (odds ratio 1.48-1.66, 99% CI 1.03-2.38; P<.001 to P=.005). The most common reason for reluctance, reported by both samples, was that respondents wanted to speak to a real person, assuming that human counselors would be replaced by automated robots or machine services. Conclusions Although Lifeline plans to always have a real person providing crisis support, help-seekers automatically fear this will not be the case if new technology and automation such as AI are introduced. Consequently, incorporating innovative use of technology to improve help-seeker outcomes in such services will require careful messaging and assurance that the human connection will continue.
... How and for what purpose AI is implemented partly determines whether benefits or harms are generated from its use. For example, an algorithm autonomously tasked with determining welfare payments, without meaningful human oversight, and ultimately making inaccurate calculations is a deployment context that can generate harms (Braithwaite, 2020). Or AI tasked with assessing employee performance to input into, and potentially communicate, termination decisions raises questions regarding the transparency of data collection and the appropriateness of deploying the technology for such purposes (Obedkov, 2021). ...
Article
Full-text available
Artificial intelligence (AI) is increasingly inputting into various human resource management (HRM) functions, such as sourcing job applicants and selecting staff, allocating work, and offering personalized career coaching. While the use of AI for such tasks can offer many benefits, evidence suggests that without careful and deliberate implementation its use also has the potential to generate significant harms. This raises several ethical concerns regarding the appropriateness of AI deployment to domains such as HRM, which directly deal with managing sometimes sensitive aspects of individuals’ employment lifecycles. However, research at the intersection of HRM and technology continues to largely center on examining what AI can be used for, rather than focusing on the salient factors relevant to its ethical use and examining how to effectively engage human workers in its use. Conversely, the ethical AI literature offers excellent guiding principles for AI implementation broadly, but there remains much scope to explore how these principles can be enacted in specific contexts-of-use. By drawing on ethical AI and task-technology fit literature, this paper constructs a decision-making framework to support the ethical deployment of AI for HRM and guide determinations of the optimal mix of human and machine involvement for different HRM tasks. Doing so supports the deployment of AI for the betterment of work and workers and generates both scholarly and practical outcomes.
Article
Australia's Robodebt scheme is now internationally infamous for how not to use automation in government. Belying heightened concern with artificial intelligence, Robodebt involved traditional, relatively simple computer algorithms to automate the identification and pursuit of alleged historical welfare debts. Yet, it was based on a false legal premise. This paper argues that rather than a technical failure, Robodebt was intentional, motivated by political imperatives, aided and abetted by an overly responsive public service culture. Engaging with the literature of organisational wilful ignorance, it is argued that these political and public service cultures were constituted by useful idiots, who acted as foils to keep the illegal scheme running for 4 years. Drawing on the Report of the Royal Commission into the Robodebt Scheme, the paper identifies a range of practices through which strategic wilful ignorance of Robodebt's unlawfulness was actively constituted. Such practices, particularly well-versed in the senior echelons of the public service, include feigning ignorance, studious nonresponse and not asking questions, telling untruths and giving ambiguous information, advising informally, not sharing and withholding information, and individual and institutional bullying. Documenting the concrete mechanisms of wilful organisational ignorance contributes to better understanding of this phenomenon and helps inform remedial action.
Chapter
This chapter proposes an analytical lens to comprehensively address the role of Artificial Intelligence (AI) applications in mediating arbitrary exercise of power in public administration and the citizen harms that result from such conduct. It provides a timely and urgent account to fill gaps in conventional Rule of Law thought. AI systems are socio-technical by nature and, therefore, differ from the text-driven social constructs that the legal professions dealing with Rule of Law issues concentrate on. Put to work in public administration contexts with consequential decision-making, technical artefacts can contribute to a variety of hazardous situations that provide opportunities for arbitrary conduct. A comprehensive lens to understand and address the role of technology in Rule of Law violations has largely been missing in literature. We propose to combine a socio-legal perspective on the Rule of Law with central insights from system safety—a safety engineering tradition with a strong scientific as well as real-world practice—that considers safety from a technological, systemic, and institutional perspective. The combination results in a lexicon and analytical approach that enables public organisations to identify possibilities for arbitrary conduct in public AI systems. Following on the analysis, interventions can be designed to prevent, mitigate, or correct system hazards and, thereby, protect citizens against arbitrary exercise of power.
Article
Behavioural insights and the use of nudge have attracted a lot of interest among governments across the globe since the introduction of the UK's Behavioural Insights Unit in 2010. One of the key challenges since these early days has been the concern that behavioural policy design, in particular the use of nudges, could be misused to manipulate citizens. When the Robodebt Royal Commission released its report in 2023, these concerns were renewed in Australia. It revealed that the Department of Human Services had used behavioural insights to inform the design of letters informing citizens of a debt in such a way as to minimise the impact on call centres while shifting that impact onto citizens. Did this use finally reveal what many had feared? Could government not be trusted with behavioural insights? This article will first explore the ethical concerns that have surrounded the implementation of nudges and behavioural policy. Following this, the paper will go beyond the debate over the ethics of implementing behavioural policies and argue instead that a focus on the theoretical opportunities and risks of nudge and behavioural policy fails to capture the significant risks inherent in implementation. When all proposed protections—the use of ethical frameworks, publication and testing, and in‐depth research—remain optional in practice, a commitment to ‘ideology‐free’ evidence can obscure more than it enlightens. The paper concludes by pointing to critical steps the Australian public sector can take to ensure future accountability and transparency for policy design, for nudges but also beyond. Points for practitioners Nudging is neither ethically neutral nor inherently problematic. The context in which policy is designed is critical. 
Robodebt highlighted several flaws in the context in which policy is designed in Australian federal policymaking, including a public service which appeared more comfortable with debates over technical delivery concerns than the content of policy. Robodebt revealed that parts of the public sector had become overly focused on ‘what works’, rather than providing advice on social desirability, acceptability, human rights, and equity. This experience should therefore lead to greater apprehension about the use of nudging, as there is a risk that ethical issues will go uninterrogated.
Article
Proponents of legal automation believe that translating the law into code can improve the legal system. However, research and reporting suggest that legal software systems often contain flawed translations of the law, resulting in serious harms such as terminating children's healthcare and charging innocent people with fraud. Efforts to identify and contest these mistranslations after they arise treat the symptoms of the problem, but fail to prevent them from emerging. Meanwhile, existing recommendations to improve the development of legal software remain untested, as there is little empirical evidence about the translation process itself. In this paper, we investigate the behavior of fifteen teams---nine composed of only computer scientists and six of computer scientists and legal experts---as they attempt to translate a bankruptcy statute into software. Through an interpretative qualitative analysis, we characterize a significant epistemic divide between computer science and law and demonstrate that this divide contributes to errors, misunderstandings, and policy distortions in the development of legal software. Even when development teams included legal experts, communication breakdowns meant that the resulting tools predominantly presented incorrect legal advice and adopted inappropriately harsh legal standards. Study participants did not recognize the errors in the tools they created. We encourage policymakers and researchers to approach legal software with greater skepticism, as the disciplinary divide between computer science and law creates an endemic source of error and mistranslation in the production of legal software.
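The "inappropriately harsh legal standards" the study describes can be illustrated with a minimal sketch (the statute, threshold, and function names below are hypothetical, invented purely for illustration, not drawn from the bankruptcy statute used in the study): a rule that makes people with income *at or below* a threshold eligible, translated into code with a strict comparison, silently denies everyone sitting exactly at the boundary.

```python
# Hypothetical illustration of a legal "mistranslation": the statute grants
# eligibility when income does not exceed a threshold, but the coded version
# uses a strict comparison and so adopts a harsher standard than the law states.
THRESHOLD = 30_000  # hypothetical eligibility threshold, not from any real statute


def eligible_as_written(income: float) -> bool:
    """Statute: eligible if income does not exceed the threshold."""
    return income <= THRESHOLD


def eligible_as_coded(income: float) -> bool:
    """Buggy translation: a strict '<' excludes applicants exactly at the threshold."""
    return income < THRESHOLD


# Someone earning exactly the threshold is entitled under the statute
# but denied by the software.
print(eligible_as_written(30_000), eligible_as_coded(30_000))  # True False
```

The two functions differ by a single character, which is precisely why such distortions survive review: neither developers nor legal experts in the study recognised errors of this kind in the tools they built.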
Article
Full-text available
Large-scale public sector information systems (PSIS) that administer social welfare payments face considerable challenges. Between 2014 and 2023, an Australian government agency conceived and implemented the Online Compliance Intervention (OCI) scheme, widely referred to as Robodebt. The scheme's primary purpose was to apply digital transformation in order to reduce labour costs and increase recovery of overpayments. Among its key features were a simplified, but inherently erroneous, estimation method called income averaging, and a new requirement that welfare recipients produce documentation for income earned years earlier. Failure by welfare recipients to comply with mandates resulted in the agency recovering what it asserted to be overpayments. This article presents a case study of Robodebt and its effects on over 1 million of its clients. The detailed case study relies on primary data through Senate and other government hearings and commissions, and secondary data, such as media reports, supplemented by academic sources. Relevant technical features include (1) the reliance on the digital persona that the agency maintains for each client, (2) computer-performed inferencing from client data, and (3) automated decision-making and subsequent action. This article employs a socio-technical systems approach to understanding the factors underlying a major PSIS project failure, by focusing on the system's political and public service sponsors; its participants (users); the people affected by it (usees); and the broader economic, social, and political context. Practical and theoretical insights are presented, with the intention of highlighting major practical lessons for PSIS, and the relevance of an articulated socio-technical frame for PSIS.
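The "inherently erroneous" income-averaging method described above can be sketched in a few lines (all figures and function names are hypothetical, for illustration only; the real scheme's payment rules and thresholds were more complex): annual employer-reported income is smeared evenly across 26 fortnights and compared against what the recipient actually reported each fortnight, so anyone with uneven earnings appears to have under-reported even when their annual totals match exactly.

```python
# Illustrative sketch of why "income averaging" produces phantom discrepancies.
# All figures are hypothetical; this is not the agency's actual calculation.

def average_fortnightly(annual_ato_income: float) -> float:
    """Smear a whole year's employer-reported income evenly over 26 fortnights."""
    return annual_ato_income / 26


def fortnightly_gaps(annual_ato_income: float, reported: list[float]) -> list[float]:
    """Per-fortnight gap between the averaged figure and what was reported.
    Positive values were treated as evidence of under-reported income."""
    avg = average_fortnightly(annual_ato_income)
    return [avg - actual for actual in reported]


# A recipient who worked only half the year: 13 fortnights at $1,000, then 13 at $0
# (when benefits were legitimately received).
reported = [1000.0] * 13 + [0.0] * 13
gaps = fortnightly_gaps(sum(reported), reported)

# Averaging spreads the $13,000 as $500 per fortnight, so each of the 13 workless
# fortnights shows a $500 "under-report" even though the annual totals agree.
print(gaps[:2], gaps[-2:])  # [-500.0, -500.0] [500.0, 500.0]
```

Because the gaps sum to zero over the year, the apparent fortnightly "debts" are an artefact of the averaging alone; recovering them required recipients to disprove the inference with payslips from years earlier, the reversal of onus the article describes.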
Article
Robodebt was an administratively harmful policy created by bureaucrats incrementally extending existing welfare compliance policies in Australia. This article analyses the long history that created the malign institutional state in which Robodebt was able to occur. It argues the fertile ground for this policy was laid through the historical interplay of three institutional processes: the rules of Commonwealth budget making, the fractured relationship between policy and service delivery in Australian social security, and the structure of the fraud and compliance framework of the Department of Human Services. This created a pattern of institutional change in which compliance policies were added in incremental layers over decades before Robodebt as part of an ongoing drive for savings and operational efficiency. The article concludes by arguing the recommendations of the Royal Commission, which focus on improved legal processes and oversight, are insufficient to resolve the institutional problems at the root of Robodebt. Points for practitioners Robodebt occurred as a result of bureaucrats incrementally extending existing welfare compliance policies, which was a standard annual practice that had been occurring for about the preceding 30 years. The expansion of compliance programs was one of the only ways for the Department of Human Services, as the service delivery arm of social security, to meet annual demands from central agencies and politicians to cut expenditure and provide offsets for new spending. For long‐term change, the government and the Australian Public Service will need to go further than the recommendations of the Robodebt Royal Commission by addressing the offsetting mechanisms of Commonwealth Budget processes and the structure of the social services portfolio that separates policy and service delivery.
Article
The 'Robodebt' scheme was an initiative pursued by the Australian Department of Human Services between 2016 and 2019 to increase the amount of money recovered from supposed 'overpayments' to recipients of welfare benefits. Drawing on the rich body of empirical material generated by the Royal Commission into the Robodebt Scheme as well as journalists and academic observers, this paper develops an understanding of the affair from the perspective of the sociology of organizations. Particular use is made of a growing body of research in the organizational sociology of ignorance. Following an outline of the main features of Robodebt, the paper explains the significance of the conception of ignorance as more complex than the mere absence of knowledge in organizational life. It then examines the specifics of the way in which Robodebt casts light on the role played by systemic, wilful ignorance in the relationship between law, bureaucracy and politics. The paper concludes with some reflections on the senses in which Robodebt was a manifestation not only of a crisis, fiasco or scandal, but also of the normal operation of the 'will to ignorance' (Nietzsche) in organizational life. The question is, then, whether there are circumstances that change the relationship between knowing and ignorance, perhaps to a point in which ignorance becomes the most important resource of action. (Luhmann 1998: 94)
Article
Focusing on holistic wellbeing rather than solely economic prosperity is becoming ever more popular among policy makers, both in Australia and New Zealand, and elsewhere. And yet, turning a complex set of system‐level indicators of wellbeing into actionable policy requires us to rethink how we develop, implement, and evaluate policy. In this article, I review the current trends in wellbeing, including developments in the measurement and tracking of wellbeing, and offer practical steps for integrating actionable wellbeing outcomes into future policymaking processes. Points for practitioners Focusing on wellbeing as part of the policy making process is becoming more popular among governments, including in Australia and New Zealand. The New Zealand Government has been doing wellbeing budgets since 2019 while the Australian Government released a new wellbeing framework in 2023. Wellbeing policy represents an approach to policy making that aims to maximize the general health and happiness of a target population on both subjective and objective measures of wellbeing. This includes both economic and non‐economic measures of prosperity and wellbeing. There are many ways of tracking the wellbeing effect of policy and so choosing the right framework is important for effective wellbeing policy making. This starts with a wellbeing purpose for the policy and a clear and concise definition of wellbeing. Doing wellbeing policy requires a good understanding of what wellbeing represents and how it is measured. You need relevant and measurable indicators of wellbeing, an evaluation strategy, and the ability to reflect and innovate as part of an iterative policy making process.
Chapter
Full-text available
Relentless civil society activism is a remedy to the ritualism of states promising big and delivering poorly on crisis amelioration. Regulation must be a human, relational craft. Centralized bureaucracies are a risk when they over-prioritize desk audits and risk measurement that dates quickly as it feeds into algorithmic regulation. The detective skills and relational skills of street-level inspectors must be re-prioritized.
Chapter
In this ambitious collection, Zofia Bednarz and Monika Zalnieriute bring together leading experts to shed light on how artificial intelligence (AI) and automated decision-making (ADM) create new sources of profits and power for financial firms and governments. Chapter authors—which include public and private lawyers, social scientists, and public officials working on various aspects of AI and automation across jurisdictions—identify mechanisms, motivations, and actors behind technology used by Automated Banks and Automated States, and argue for new rules, frameworks, and approaches to prevent harms that result from the increasingly common deployment of AI and ADM tools. Responding to the opacity of financial firms and governments enabled by AI, Money, Power and AI advances the debate on scrutiny of power and accountability of actors who use this technology. This title is available as Open Access on Cambridge Core.
Article
Full-text available
Given the tremendous potential and influence of artificial intelligence (AI) and algorithmic decision-making (DM), these systems have found wide-ranging applications across diverse fields, including education, business, healthcare industries, government, and justice sectors. While AI and DM offer significant benefits, they also carry the risk of unfavourable outcomes for users and society. As a result, ensuring the safety, reliability, and trustworthiness of these systems becomes crucial. This article aims to provide a comprehensive review of the synergy between AI and DM, focussing on the importance of trustworthiness. The review addresses the following four key questions, guiding readers towards a deeper understanding of this topic: (i) why do we need trustworthy AI? (ii) what are the requirements for trustworthy AI? In line with this second question, the key requirements that establish the trustworthiness of these systems have been explained, including explainability, accountability, robustness, fairness, acceptance of AI, privacy, accuracy, reproducibility, and human agency and oversight. (iii) how can we have trustworthy data? and (iv) what are the priorities in terms of trustworthy requirements for challenging applications? Regarding this last question, six different applications have been discussed, including trustworthy AI in education, environmental science, 5G-based IoT networks, robotics for architecture, engineering and construction, financial technology, and healthcare. The review emphasises the need to address trustworthiness in AI systems before their deployment in order to achieve the AI goal for good. An example is provided that demonstrates how trustworthy AI can be employed to eliminate bias in human resources management systems. The insights and recommendations presented in this paper will serve as a valuable guide for AI researchers seeking to achieve trustworthiness in their applications.
Article
Although new hepatitis C treatments are a vast improvement on older, interferon‐based regimens, there are those who have not taken up treatment, as well as those who have begun but not completed treatment. In this article, we analyse 50 interviews conducted for an Australian research project on treatment uptake. We draw on Berlant’s (2007, Critical Inquiry , 33) work on ‘slow death’ to analyse so‐called ‘non‐compliant’ cases, that is, those who begin but do not complete treatment or who do not take antiviral treatment as directed. Approached from a biomedical perspective, such activity does not align with the neoliberal values of progress, self‐improvement and rational accumulation that pervade health discourses. However, we argue that it is more illuminating to understand them as cases in which sovereignty and agency are neither simplistically individualised nor denied, and where ‘modes of incoherence, distractedness, and habituation’ are understood to co‐exist alongside ‘deliberate and deliberative activity […] in the reproduction of predictable life’ (Berlant, 2007, p. 754). The analysed accounts highlight multiple direct and indirect forces of attrition and powerfully demonstrate the socially produced character of agency, a capacity that takes shape through the constraining and exhausting dynamics of life in conditions of significant disadvantage.
Article
Full-text available
The use of algorithms and automation of public services is not new, but in recent years there has been a step change in processing power and a decrease in the price of these technologies, which means we are seeing more widespread use. These advances are reframing our perception of what matters in ways that impact the ethical dimensions of day-to-day life. In turn, these changes challenge long-standing assumptions about public service ethics and how it is taught. In this multidisciplinary authored paper, we argue that public service leaders must be attentive to ethical questions that converge around adopting “data-driven” techniques, including algorithmic decision-making (ADM). Algorithmic and technology-focused ethics question assumptions about the current deficits within public service ethics pedagogy in public service programs and university programs and the future direction of the discipline. To do so raises longstanding but neglected questions about the public services’ role in the state and recovering what Rohr refers to as the ‘ethics of the office.’ This, we argue, will have implications for teaching public service ethics.
Chapter
Full-text available
Thomas R. Dye’s much cited definition of public policy as whatever governments choose to do or not do – that is, government action and inaction – helps us to understand the parameters of what policy is but says very little about the dynamics that produce government policy choice. The field of critical policy studies offers one way to understand these dynamics, the power relations that produce them and a means to evaluate policy against democratic and social justice values. Critical policy studies is different from more rationalist forms of policy analysis in that it rejects the notion that policy can be designed and implemented in a neutral and scientific fashion, free from interests, values and ideologies. This claim, and scholarly focus, is important to note as it underpins the research themes of critical policy studies – the analysis of the social construction of policies to unpack common knowledge, perceptions, values, ideologies and power relations, and evaluate them against social justice and democratic ideals and values. The chapter proceeds in three main sections. Firstly, the origins of critical policy studies are examined and critical policy studies is defined. The relation, and reaction, of critical policy studies to the work of Harold Lasswell and the policy sciences is especially examined. Secondly, the relation of critical theory to critical policy studies is unpacked, sketching the links from Marxist theory to present-day critical theory. In the third section, three common critical policy studies themes are analysed: technocratic policy, power and democracy; social construction in the policy process; and policy discourses. The chapter concludes by drawing out key themes for students of critical policy studies to use in their own analyses and evaluations of policy.
Article
Artificial intelligence (AI) and algorithmic decision making are having a profound impact on our daily lives. These systems are vastly used in different high-stakes applications like healthcare, business, government, education, and justice, moving us toward a more algorithmic society. However, despite so many advantages of these systems, they sometimes directly or indirectly cause harm to the users and society. Therefore, it has become essential to make these systems safe, reliable, and trustworthy. Several requirements, such as fairness, explainability, accountability, reliability, and acceptance, have been proposed in this direction to make these systems trustworthy. This survey analyzes all of these different requirements through the lens of the literature. It provides an overview of different approaches that can help mitigate AI risks and increase trust in and acceptance of the systems by users and society. It also discusses existing strategies for validating and verifying these systems and the current standardization efforts for trustworthy AI. Finally, we present a holistic view of the recent advancements in trustworthy AI to help the interested researchers grasp the crucial facets of the topic efficiently and offer possible future research directions.
Article
With the advent of highly effective antiviral treatment for hepatitis C, many people have undergone treatment and been cured. Others, however, have not undergone treatment, even where it is free and readily available. Australia's aim of eliminating the disease by 2030 means this group is of concern to researchers, health professionals and policymakers. This article draws on 50 interviews conducted for a research project on treatment experiences to examine treatment non-uptake in Australia. Informed by Berlant's (2007) work on ‘slow death’, it analyses experiences of non-uptake to explain the dynamics at work in such outcomes. The analysis is divided into three parts. First, participant Cal describes a lifetime in which hepatitis C, homelessness and prison have shaped his outlook and opportunities. Second, Evan describes intergenerational drug consumption, family contact with the prison system and an equally long history with hepatitis C. Finally, Rose also describes a long history of hepatitis C, complex struggles to improve life and contact with the prison system. All three accounts illuminate the dynamics shaping treatment decisions, calling to mind Berlant's slow death as a process of being ‘worn out by the activity of reproducing life’ under conditions that both demand self-management, and work against it. In concluding, the article points to Berlant's distinction between ‘epidemics’ and ‘endemics’, arguing that its politics apply directly to hepatitis C. In doing so, it highlights the need to address the criminalising, pathologising, capitalist context of ‘attrition’ (Berlant) that wears out lives even as it fetishises autonomy, responsibility and choice.
Article
The Australian National Disability Insurance Scheme (NDIS) allocates funds to participants for the purchase of services. Only one percent of the 89,299 participants spent all of their allocated funds, and 85 participants spent none, meaning that most participants were left with unspent funds. The gap between the allocated budget and realised expenditure reflects a misallocation of funds. We therefore employ alternative machine learning techniques to estimate budgets and close the gap while maintaining the aggregate level of spending. Three experiments are conducted: to test the machine learning models in estimating the budget, the expenditure and the resulting gap; to compare the learning rate between machines and humans; and to identify the significant explanatory variables. Results show that machines learn "faster" than humans; machine learning models can improve the efficiency of the NDIS implementation; and the significant explanatory variables identified by decision tree models and regression analysis are similar.
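The budget-versus-expenditure gap analysis described in this abstract can be sketched with an ordinary least squares regression on synthetic data. This is a minimal illustration only: the explanatory variables, coefficients and data below are invented for the sketch and are not the NDIS models, features or data.

```python
# Hedged sketch of a gap analysis: fit a regression of realised expenditure
# on hypothetical explanatory variables, and treat the fitted values as the
# estimated budget. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical explanatory variables (age, severity score, region code)
age = rng.integers(18, 65, n)
severity = rng.uniform(0, 10, n)
region = rng.integers(0, 5, n)
X = np.column_stack([np.ones(n), age, severity, region])  # with intercept

# Synthetic realised expenditure, driven mainly by the severity score
expenditure = 5000 + 3000 * severity + rng.normal(0, 2000, n)

# Ordinary least squares: the fitted expenditure serves as the estimated budget
coef, *_ = np.linalg.lstsq(X, expenditure, rcond=None)
budget = X @ coef

# Per-participant gap between estimated budget and realised expenditure
gap = budget - expenditure
print(f"mean absolute gap: {np.abs(gap).mean():.0f}")

# With an intercept, OLS residuals sum to ~0, so aggregate spending is preserved
print(np.isclose(budget.sum(), expenditure.sum()))
```

Because the regression includes an intercept, the residuals sum to zero, which is one simple way to satisfy the abstract's constraint of closing individual gaps "while maintaining the aggregate level of spending". A decision tree regressor could be swapped in for the least-squares step to mirror the tree-based models the abstract mentions.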
Article
This paper explores the force of automation and its contradictions and resistances within (and beyond) the financial sector, with a specific focus on computational practices of credit-scoring and lending. It examines the operations and promotional discourses of fintech start-ups LendUp.com and Elevate.com, which offer small loans to sub-prime consumers in exchange for access to their online social media and mobile data, and Zest AI and LenddoEFL, which sell automated decision-making tools to verify identity and assess risk. Reviewing their disciplinary reputational demands and impacts on users and communities, especially women and people of colour, the paper argues that the automated reimagination of credit and creditability disavows the formative design of its AI and redefines moral imperatives about character to align with the interests of digital capitalism. The economic, social and cultural crises precipitated by the Covid-19 pandemic have only underscored the internal contradictions of these developments, and a variety of debt resistance initiatives have emerged, aligned with broader movements for social, economic, and climate justice around the globe. Cooperative lending circles such as the Mission Asset Fund, activist groups like #NotMyDebt, and Debt Collective, a radical debt abolition movement, are examples of collective attempts to rehumanize credit and debt and resist the appropriative practices of contemporary digital finance capitalism in general. Running the gamut from accommodationist to entirely radical, these experiments in mutual aid, debt refusal, and community-building provide us with roadmaps for challenging capitalism and re-thinking credit, debt, power, and personhood within and beyond the current crises.
Article
Full-text available
We undoubtedly live in a digitally infused world. From government administrative processes to financial transactions and social media posts, digital technologies automatically collect, collate, combine and circulate digital traces of our actions and thoughts, which are in turn used to construct digital personas of us. More significantly, government decisions are increasingly automated with real-world effect; companies subordinate human workers to automated processes; while social media algorithms prioritise outrage and ‘fake news’ with destabilizing and devastating effects for public trust in social institutions. Accordingly, what it means to be a person, a citizen, and a consumer, and what constitutes society and the economy in the 21st century is profoundly different to that in the 20th century.
Book
Taking a multi-disciplinary perspective (including public health, sociology, criminology, and political science amongst others) and using examples from across the globe, this book provides a detailed understanding of the complex and highly contested nature of drug policy, drug policy making, and the theoretical perspectives that inform the study of drug policy. It draws on four different theoretical perspectives: evidence-informed policy, policy process theories, democratic theory, and post-structural policy analysis. The use of and trade in illegal drugs are a global phenomenon. It is viewed by governments as a significant social, legal, and health problem that shows no signs of abating. The key questions explored throughout this book are what governments and other bodies of social regulation should do about illicit drugs, including drug policies aimed at improving health and reducing harm, drug laws and regulation, and the role of research and values in policy development. Seeing policy formation as dynamic iterative interactions between actors, ideas, institutions, and networks of policy advocates, the book explores how policy problems are constructed and policy solutions selected, and how these processes intersect with research evidence and values. This then animates the call to democratise drug policy and bring about inclusive meaningful participation in policy development in order to provide the opportunity for better, more effective, and value-aligned drug policies. This book will be of great interest to students and scholars of drug policy from a number of disciplines, including public health, sociology, criminology, and political science.
Article
Full-text available
In light of the growing need to pay attention to general public opinions and sentiments toward AI, this paper examines the levels of understanding amongst the Australian public toward the increased societal use of AI technologies. Drawing on a nationally representative survey of 2,019 adults across Australia, the paper examines how aware people consider themselves to be of recent developments in AI; variations in popular conceptions of what AI is; and the extent to which levels of support for AI are liable to alter with additional exposure to information about AI. While a majority of respondents consider themselves to have little knowledge of and familiarity with the topic of AI, the survey nevertheless finds a considerable range of relatively ‘plausible’ basic understandings of what AI is. Significantly, repeated questioning highlights a willingness among many people to reassess their opinions after receiving further information about AI, and being asked to think through issues relating to AI and society. These patterns remain relatively consistent, regardless of respondents’ political orientation, income, social class and other demographic characteristics. As such, the paper concludes by considering how these findings provide support for the development of public education efforts to further enhance what might be termed ‘public understanding of AI’.
Article
This Special Issue addresses the use of linked data for research purposes and to carry out government functions such as child protection, allocation of resources, and debt recovery. Government investment in big data has the potential to change citizens' experience of the welfare state in a broad range of areas in both positive and negative ways. It is therefore important that the Australian social policy community understands and engages with the potential benefits and risks involved in the linkage and analysis of government datasets. Papers in this Special Issue discuss the technical challenges and institutional barriers involved in the construction and governance of linked government data assets and showcase the promise of big data for generating policy relevant insights. This Special Issue also features papers critically interrogating the potential for big data to produce social harms. We contextualise this collection of papers with a brief history of recent policy developments in regards to access to government held data. We also discuss ways of improving public trust and social licence for the use of big data and argue that the voices of First Nations and disadvantaged Australians must be given greater weight in discussions of how their data will be used.
Chapter
Australia has a comprehensive system of social security but, as Solomon explains, it adopts a welfare or charity-based rather than a rights-based approach referencing its obligations under international law. The country’s institutional framework gives no constitutional protection, no bill of rights and no right to social security at common law, with social security decisions made solely within the political and policy space. The country’s majoritarian political impulses serve to socially exclude welfare recipients, and its embrace of neoliberalism has given this an economic rather than a social/poverty-reduction policy focus, with the individual rather than society taking the risk. A ‘workfare’ focus and some privatisation of the social security system have resulted in accessibility to social security support being highly constrained, even if formally available.
Article
The regulatory welfare state illuminates path dependencies and tendencies to mutual growth in markets, welfare, and regulation. This article uses two specific welfare-to-work programs, one in Korea and one in Australia, to illustrate the institutional interconnections that are in play within the regulatory welfare state. Governance of these programs is hampered by lack of discursive capacity to identify where problems exist and how they can be fixed. When faced with new programs, implementers look to higher authorities to make sense of and to solve the problems on the ground, but authorities are blinded by old institutional categories that pit market mentalities against welfare mentalities with regulation as an ideological tool, rather than an integral part of solutions. Transparency and cross-boundary listening are necessary to create the bridging capital to make these programs work and reconnect democratically elected governments with their citizens.
Article
Full-text available
As smart technologies such as artificial intelligence (AI), automation and Internet of Things (IoT) are increasingly embedded into commercial and government services, we are faced with new challenges in digital inclusion to ensure that existing inequalities are not reinforced and new gaps that are created can be addressed. Digital exclusion is often compounded by existing social disadvantage, and new systems run the risk of creating new barriers and harms. Adopting a case study approach, this paper examines the exclusionary practices embedded in the design and implementation of social welfare services in Australia. We examined Centrelink’s automated Online Compliance Intervention system (‘Robodebt’) and the National Disability Insurance Agency’s intelligent avatar interface ‘Nadia’. The two cases show how the introduction of automated systems can reinforce the punitive policies of an existing service regime at the design stage and how innovative AI systems that have the potential to enhance user participation and inclusion can be hindered at implementation so that digital benefits are left unrealised.
Article
Full-text available
This article asks how rule of law institutions failed to ‘bell the cat’ on the illegality of Centrelink's robo-debt programme and its unethical character. It identifies serious structural deficiencies in the design of accountability and remedial avenues at seven different levels. It argues for adherence to Administrative Review Council guidelines on machine learning, Parliamentary accounting of Ombudsman and Audit agencies on rule of law foundations and model litigant protocols, attention to ethical administration, redacted publication of selected first tier Administrative Appeals Tribunal rulings, contractual guarantees of independence in legal aid/advocacy funding, building of pro bono advocacy partnerships, and cultural change designed to counter stigmatisation of the vulnerable.
Article
Full-text available
This article reviews Australia’s social security online compliance initiative (‘OCI’) to determine whether the Senate Community Affairs References Committee was right to recommend that its administering agency (Centrelink) resume responsibility for obtaining all information necessary for calculating working age payment debts based on verifiable actual fortnightly earnings rather than on the basis of assumed averages, or whether responsibility has always remained with Centrelink when the person is unable to easily provide records. It argues that legal responsibility ultimately has always rested with Centrelink in such cases and outlines distributional justice and best practice reasons why the OCI system should be brought into compliance with the law.
Article
Full-text available
Australia's national system of social security reached its centenary in June 2008. This article provides a broad overview of how social security has developed in Australia over the last 100 years or so and reflects on how the system has come to be as it is now. Although much has changed in that time, there are strong elements of continuity as well, particularly the prevalence of means tests, the use of funding from general revenue, and the strong emphasis on participation. It is noted that the Australian model of social security differs markedly from the international norm. Nevertheless, it has proven to be remarkably resilient since its inception a century ago, such that arrangements akin to social insurance (the usual model elsewhere) have, as a result, mainly developed in the private sector. Maximising economic and social participation has also been a cornerstone of Australia's system. The authors speculate that, given the relative stability demonstrated by the system so far, 100 years from now the essential elements of Australia's social security system may well remain intact.
Book
Full-text available
Every day, we make decisions on topics ranging from personal investments to schools for our children to the meals we eat to the causes we champion. Unfortunately, we often choose poorly. The reason, the authors explain, is that, being human, we all are susceptible to various biases that can lead us to blunder. Our mistakes make us poorer and less healthy; we often make bad decisions involving education, personal finance, health care, mortgages and credit cards, the family, and even the planet itself. Thaler and Sunstein invite us to enter an alternative world, one that takes our humanness as a given. They show that by knowing how people think, we can design choice environments that make it easier for people to choose what is best for themselves, their families, and their society. Using colorful examples from the most important aspects of life, Thaler and Sunstein demonstrate how thoughtful "choice architecture" can be established to nudge us in beneficial directions without restricting freedom of choice. Nudge offers a unique new take, from neither the left nor the right, on many hot-button issues, for individuals and governments alike. This is one of the most engaging and provocative books to come along in many years.
Article
Full-text available
Why an institution's rules and regulations are obeyed or disobeyed is an important question for regulatory agencies. This paper discusses the findings of an empirical study that shows that the use of threat and legal coercion as a regulatory tool (in addition to being more expensive to implement) can sometimes be ineffective in gaining compliance. Using survey data collected from 2,292 taxpayers accused of tax avoidance, it will be demonstrated that variables such as trust need to be considered when managing noncompliance. If regulators are seen to be acting fairly, people will trust the motives of that authority, and will defer to their decisions voluntarily. This paper therefore argues that to shape desired behavior, regulators will need to move beyond motivation linked purely to deterrence. Strategies directed at reducing levels of distrust between the two sides may prove particularly effective in gaining voluntary compliance with an organization's rules and regulations.
Article
Neoliberal reforms and right-wing ideologies have seen the ideal of the social security ‘safety net’ take a hammering in the UK, USA and Australia. While the gap between rich and poor has widened, and demand for welfare payments increased, politicians, certainly in Australia, have generally neglected low income families, preferring to twiddle the economic dials affecting middle and upper income earners instead. Of course, tussling over who pays tax, how much, what constitutes useful expenditure, and who receives welfare services and benefits is not new – these questions have attended the modern welfare state from its inception. But the welfare safety net that most of us, grudgingly or otherwise, concede to be necessary for collective social harmony is no longer proving as effective as we would wish. Even with the battered and frayed, but still ostensibly functional, system of welfare payment and support offered in Australia, the number of people experiencing perpetual disadvantage is rising, with intergenerational poverty – its increase and impacts on children – of particular concern.
Article
Although it is part of core government business to collect information about its citizens, ‘big data’ has increased the scale, speed and complexity of data collection and use to such an extent that it is arguably qualitatively different from the record-keeping that has gone before it. Big data represents a radical shift in the balance of power between State and citizen. This article argues that embedding big data in government operations masks its deployment as enhancing government power, rather than simply facilitating execution of government activities. In other words, big data is ‘disruptive’ technology that calls for the examination of the limits of government power. To illustrate this argument, this article examines a selection of recent case studies of attempts by the Australian government to deploy big data as a tool of governance. It identifies the risk to the citizen inherent in the use of big data, to justify review of the bounds of government power in the face of rapid technological change.
Book
An effective democratic society depends on the confidence citizens place in their government. Payment of taxes, acceptance of legislative and judicial decisions, compliance with social service programs, and support of military objectives are but some examples of the need for public cooperation with state demands. At the same time, voters expect their officials to behave ethically and responsibly. To those seeking to understand-and to improve-this mutual responsiveness, Trust and Governance provides a wide-ranging inquiry into the role of trust in civic life. Trust and Governance asks several important questions: Is trust really essential to good governance, or are strong laws more important? What leads people either to trust or to distrust government, and what makes officials decide to be trustworthy? Can too much trust render the public vulnerable to government corruption, and if so what safeguards are necessary? In approaching these questions, the contributors draw upon an abundance of historical and current resources to offer a variety of perspectives on the role of trust in government. For some, trust between citizens and government is a rational compact based on a fair exchange of information and the public's ability to evaluate government performance. Levi and Daunton each examine how the establishment of clear goals and accountability procedures within government agencies facilitates greater public commitment, evidence that a strong government can itself be a source of trust. Conversely, Jennings and Peel offer two cases in which loss of citizen confidence resulted from the administration of seemingly unresponsive, punitive social service programs. Other contributors to Trust and Governance view trust as a social bonding, wherein the public's emotional investment in government becomes more important than their ability to measure its performance. 
The sense of being trusted by voters can itself be a powerful incentive for elected officials to behave ethically, as Blackburn, Brennan, and Pettit each demonstrate. Other authors explore how a sense of communal identity and shared values make citizens more likely to eschew their own self-interest and favor the government as a source of collective good. Underlying many of these essays is the assumption that regulatory institutions are necessary to protect citizens from the worst effects of misplaced trust. Trust and Governance offers evidence that the jurisdictional level at which people and government interact-be it federal, state, or local-is fundamental to whether trust is rationally or socially based. Although social trust is more prevalent at the local level, both forms of trust may be essential to a healthy society. Enriched by perspectives from political science, sociology, psychology, economics, history, and philosophy, Trust and Governance opens a new dialogue on the role of trust in the vital relationship between citizenry and government.
Article
'[Valerie] Braithwaite merges her considerable knowledge of a wide range of disciplines to produce an exemplar of interdisciplinary research. The use of the taxation system as the basis for analysis of how people manage their relationship with authority is effective and produces a much-needed addition to the behavioural literature. While the book is primarily about defiance in taxation, many instances of non-taxation related defiance are included, which provides excellent support and extension of the tax-based arguments. Braithwaite has produced an excellent example of a book that is grounded in the extant literature, while expanding our understanding of the importance of understanding the behaviours that drive defiance. The aim of the book is to "show how authorities can live symbiotically with defiance" and she achieves this superbly, illustrating how improved satisfaction with "the process" can minimise defiance.'
Article
What happens when a person's commonsense view of justice diverges from the sense of justice he or she sees enshrined in particular laws? Does the perception of one particular law as unjust make an individual less likely to comply with unrelated laws? This Article advances the Flouting Thesis (the idea that the perceived legitimacy of one law or legal outcome can influence one's willingness to comply with unrelated laws) and provides original experimental evidence to support this thesis. The results suggest that willingness to disobey the law can extend far beyond the particular unjust law in question, to willingness to flout unrelated laws commonly encountered in everyday life (such as traffic violations, petty theft, and copyright restrictions), as well as willingness of mock jurors to engage in juror nullification. Finally, this Article explores the relationship between perceived injustice and flouting and offers several possible explanations, including the role of law in American popular culture and the expressive function of the law in producing compliance.
Article
This paper investigates the relationship between making additional payments to the state for student loan (via the Higher Education Contribution Scheme) and child support (via the Child Support Scheme) and compliance with tax law. Data are taken from the Community Hopes, Fears, and Actions Survey based on a random sample of 2040 individuals. Additional payments were found to pose a compliance problem for tax authorities. At the same time, this study demonstrated that perceived deterrence, moral obligation and possible trustworthiness play significant roles in reducing tax evasion. An important finding to emerge from this study is that tax evasion is more likely to accompany additional payments when personal income and belief in trust norms are low. The finding of greater tax evasion among economically marginalized groups has been demonstrated in other contexts, but the adverse effects of becoming irreconcilably socially marginalized from legal authority has tended to be both undervalued and under-theorized in the taxation compliance literature.
Cash economy: Summary of CTSI research findings and questions for future research. Centre for Tax System Integrity Research Note 6, Australian National University
  • V. Braithwaite
Trust in electoral management bodies (forthcoming). Australian National University
  • T. P. Laanela
Submission No. 15 to the Senate Community Affairs References Committee Inquiry into Centrelink's compliance program
  • D. O'Donovan
The Guardian Australia
  • P. Karp
The Guardian Australia
  • K. Murphy
Trends and Issues in Crime and Criminal Justice
  • T. Prenzler
Report of the United Nations Special Rapporteur on extreme poverty and human rights: digital technology, social protection and human rights
  • P. Alston
The Guardian Australia
  • C. Knaus