Administrative Errors and Race: Can Technology Mitigate Inequitable Administrative
Outcomes?
Mallory E. Compton1
Matthew M. Young2
Justin B. Bullock3
Robert Greer4
1Corresponding Author
Public Service & Administration Department
Bush School of Government & Public Service
mallory.compton@tamu.edu
2Institute of Public Administration
Faculty of Governance and Global Affairs
Leiden University
m.m.young@fgga.leidenuniv.nl
3Evans School of Public Policy & Governance
University of Washington
jbull14@uw.edu
4Public Service & Administration Department
Bush School of Government & Public Service
rgreer1@tamu.edu
This version of this paper has been accepted for publication at Journal of Public Administration
Research and Theory. Please cite using this recommended citation:
Compton, Mallory E, Matthew M Young, Justin B Bullock, and Robert Greer. 2022.
“Administrative Errors and Race: Can Technology Mitigate Inequitable Administrative
Outcomes?” Journal of Public Administration Research and Theory: muac036.
https://doi.org/10.1093/jopart/muac036
Scholars have long recognized the role of race and ethnicity in shaping the development
and design of policy institutions in the United States, including social welfare policy. Beyond
influencing the design of policy institutions, administrative discretion can disadvantage
marginalized clientele in policy implementation. Building on previous work on street-level
bureaucracy, administrative discretion, and administrative burden, we offer a theory of racialized
administrative errors and we examine whether automation mitigates the adverse administrative
outcomes experienced by clientele of color. We build on recent work examining the role of
technological and administrative complexity in shaping the incidence of administrative errors,
and test our theory of racialized administrative errors with claim-level administrative data from
53 US unemployment insurance programs, from 2002-2018. Using logistic regression, we find
evidence of systematic differences by claimant race and ethnicity in the odds of a state workforce
agency making an error when processing Unemployment Insurance claims. Our analysis
suggests that non-white claimants are more likely than white claimants to be affected by agency
errors that result in underpayment of benefits. We also find that automated state-client
interactions, compared to face-to-face interactions, reduce the likelihood of administrative errors
for all groups, including Black and Hispanic clientele, but some disparities persist.
Keywords: administrative errors, administrative burden, administrative data, race,
unemployment insurance
Brief overview: How can technology and context mitigate inequities in administrative
outcomes experienced by clientele of color in the US unemployment insurance system?
“To err is human.” Errors are part of the human condition, and so are their consequences. In
public administration, errors in processes or decision-making may be thought of as
administrative errors, defined as “any deviation from an intended outcome that is mandated by
either law or organizational rules” (Bullock 2014). Such deviations may be intentional or
accidental. Some errors may have little or no substantive consequence for the agency or client.
Some go unnoticed, while others are identified and corrected. When the consequences of
administrative errors are neither immediate nor severe (at least to the organization), they are
likely to reoccur over time. This poses a problem for successful policy implementation, which
requires administrative processes and outcomes to be effective, efficient, and equitable
(Compton and ’t Hart 2019). Procedural accuracy is critical for non-arbitrary decision-making.
And, in a very real sense, this problem is inescapable: the “wickedness” of public problems and
the need for political acceptability leads to complex policy solutions that require implementers to
exercise administrative discretion, and as a consequence, make errors.
Public administration as a discipline has largely overlooked the costs of administrative
errors borne by clientele (with exceptions, like Ryu, Wenger, and Wilkins 2012 and Wenger and
Wilkins 2009). This is an unfortunate oversight because administrative errors can burden both
the organization and claimant. At the extreme, administrative errors result in the wrongful denial
of services or erroneous under-provision. In less extreme cases, these errors may simply require
time and effort to correct. We examine the sources and consequences of administrative errors in
Unemployment Insurance (UI) programs in the United States, focusing on the relative likelihood
of an administrative error occurring in claims filed by white and non-white claimants.
Because UI is a procedural program with a relatively narrow aim and routinized processes, we
might expect its outcomes to be predictable and low in variance. With observable monetary outputs
(claims paid and claims denied), management of UI focuses on standardized procedures to
reduce the influence of discretion on outcomes (Wilson 1989). As a result, we would not expect
errors to be common or shaped by discretion, because accuracy in such decisions is likely
targeted by managerial strategies. Yet, UI is a complex federal program with a notable error rate:
more than 10% of claims are paid improperly each year. It is also a program criticized for being
riddled with red tape, burdensome processes, and unreasonable wait times that discourage or
exclude applicants from receiving benefits (Chang 2020; McDermott and Cowan 2020;
Friedman 2020; Rosenberg 2020). Further, because the program is implemented by states, there
is substantial cross-state variation in policies, processes, and technologies. This context allows
for analysis of the extent to which historically marginalized clientele are more likely to bear the
burden of administrative errors in a complex service delivery system, and whether technology
mitigates this burden.
The first question we address in this paper is whether historically disadvantaged clientele
disproportionately experience administrative errors in the process of claiming unemployment
insurance benefits. Following Widlak and Peeters (2020) and Peeters (2020), we argue that
administrative errors are a cause of administrative burden. State actions that deviate from
intended outcomes impose administrative burdens to the extent that correcting the error requires
clients to navigate a formal appeal process (Moynihan and Herd 2010; Herd and Moynihan 2018;
Baekgaard and Tankink 2021). As an example, there are cases in which a state workforce agency
incorrectly determines that a claimant was fired from their job for cause, and thus disqualified
from receiving unemployment insurance, but the claimant was actually made redundant or
furloughed, and thus eligible for benefits. Such errors may be resolved, but will likely delay
payment of benefits and require the claimant to provide additional documentation (a compliance
burden) while also inducing anxiety (a psychological burden). In this situation, an administrative
mistake in misclassifying the reason for job separation could easily snowball into a substantial
administrative burden for the claimant to navigate in order to receive their entitled benefits. With
this view, administrative errors impose a significant and unjustified burden on claimants in
accessing the services to which they are entitled. Understanding the distribution of administrative
errors across clientele groups is important not only because equity in policy outputs is a
democratic value, but also because individuals’ interactions with public bureaucracy can have
powerful effects on political outcomes like trust in government, civic engagement, and public
opinion (Soss 1999; Soss and Schram 2007; Campbell 2003).
When the state provides qualitatively different services to disempowered clientele,
messages are sent about values of citizenship, belongingness, and deservingness (Ernst, Nguyen,
and Taylor 2013). And when experiences with policy implementation differ systematically
across racial groups, racialized feedback can produce or perpetuate political inequality (Michener
2018; Johnson, Meier & Carroll 2018; Maltby 2017). If some groups experience a higher quality,
more accurate, and less burdensome determination process than others, then “administrative
errors” are more than idiosyncratic mistakes: they represent a systematic failure of administration
and/or policy to meet the requirement of equal treatment under the law. The distribution of
administrative errors should thus be understood and studied as an output of governmental
decision-making processes with very real programmatic and political consequences. We explore
this phenomenon and offer a theory of racialized administrative errors.
The second question we address is whether technological tools affect the odds of
claimants’ experiencing an administrative error, and in particular a racialized administrative
error. Discretion is unavoidable in the implementation of public policy, and is influenced by
bureaucrats’ personal beliefs and normative views (Maynard-Moody and Musheno 2003; Lipsky
1980; Scott 1997; Simon 1997). Studies of active bureaucratic representation show that non-
white and female bureaucrats use discretion to correct for unequal treatment of female and non-
white clients, though these effects are also context-sensitive and contingent (Nicholson-Crotty et
al 2016; Watkins-Hayes 2011; Van Ryzin, Riccucci & Li 2017; Zwicky and Kübler 2019; Andersen
and Guul 2019). At the same time, absent active intervention, bureaucrats may either passively
or actively perpetuate inequalities in public services through their (in)actions (Lipsky 1980). An
argument in favor of replacing street-level bureaucratic discretion with technological automation
is that a properly designed system should, all else equal, produce uniform outcomes irrespective
of clients’ race/ethnicity or gender. In other words, the possibility of a street-level bureaucrat
“disentitling” a client based on their protected characteristics, whether deliberately or
unconsciously, is removed (Wenger and Wilkins 2009; Miller and Keiser 2020; Young et al
2019). Expectations of “technical neutrality” in practice are, however, contested (Broussard
2018; Eubanks 2018). Examining whether and how technological tools mitigate inequities in
public services is therefore an important step in understanding and addressing disparities in
administrative processes and burdens.
We find evidence of systematic differences by claimant race and ethnicity in the odds of
a state workforce agency making an error in processing unemployment insurance claims.
Compared to white claimants, Black and Latinx claimants are more likely to experience an
agency-responsible error in their claim process. Furthermore, those errors are more likely to
disadvantage these groups. Black and Latinx claimants are more likely than white claimants to
be affected by agency errors that result in underpayment or wrongful denial of benefits.
Although the odds of a claim containing an agency-caused error are lower when clients file for
benefits using an automated process compared to in-person filing, disparities across white and
non-white clientele persist. In sum, we find support for our theory of racialized administrative
errors in unemployment insurance, and we provide evidence that automation (specifically the
automation of claims filing) improves outcomes, but is not a panacea. Unemployment insurance
plays a critical role in buffering Americans from economic insecurity, and our results show that
not all Americans receive the full benefits of insurance against joblessness to which they are
entitled.
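As a sketch, the claim-level analysis summarized above corresponds to a logit specification of the following general form (the variable names here are illustrative, not the exact specification estimated):

```latex
% Illustrative logit specification (variable names are hypothetical)
\ln\frac{\Pr(\mathrm{Error}_i = 1)}{1 - \Pr(\mathrm{Error}_i = 1)}
  = \beta_0
  + \beta_1\,\mathrm{Black}_i
  + \beta_2\,\mathrm{Latinx}_i
  + \beta_3\,\mathrm{Automated}_i
  + \mathbf{X}_i'\boldsymbol{\gamma}
```

where Error_i indicates an agency-responsible error in claim i; Black_i and Latinx_i are claimant race and ethnicity indicators (white claimants as the reference group); Automated_i indicates an automated rather than face-to-face filing; and X_i collects claim-, claimant-, and state-level controls. Positive estimates of β1 and β2 would correspond to higher odds of an agency error for Black and Latinx claimants, and a negative β3 to lower odds of error under automated filing.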
Administrative Errors
Despite the long-held focus in public administration on organizational performance and
service quality, we know surprisingly little about the nature, causes, and consequences of
administrative errors. Errors in medicine (e.g., Reason 2000), aviation (e.g., Xu, Wickens, and
Rantanen 2007), and other fields with low or zero tolerance for failure have attracted some
attention. But administrative errors also occur in routine or mundane processes. When errors do
receive attention, it is often out of an interest in organizational efficiency, waste, or abuse. The
2002 Improper Payments Information Act was passed with this intention. It requires federal
agencies, in accordance with Office of Management and Budget guidance, to review all
programs and activities annually, identify those that may be susceptible to significant improper
payments, and estimate the annual amount of improper payment.¹ The purpose of this law, and
the subsequent amendment in 2010, is to monitor and reduce erroneously overpaid benefits.
From a managerial and organizational perspective, however, administrative errors are an
outcome of interest, with micro-level causes and consequences that may be maldistributed.
¹ Improper Payments Information Act. 31 U.S.C. § 3321 (2002).
In the context of unemployment insurance, Wenger, O’Toole & Meier (2008) find that
errors in the processing of benefit claims occur more frequently in organizations that process
claims faster, suggesting that errors reflect performance quality. Ryu, Wenger, and Wilkins
(2012) show that prior error rates predict current organizational performance, with some offices
contributing disproportionately to state-level error rates. These findings suggest that
administrative errors are an indicator of performance and that they can be managed through
organizational strategy. The motivation to manage administrative errors may vary, however.
Compton (2021) finds that states with comparatively more generous UI benefit and eligibility
rules have lower rates of administrative errors, with the reduction being driven by those states
making fewer underpayments. More generous programs are less likely to wrongfully underpay
beneficiaries, which suggests that errors reflect larger political priorities that permeate
organizational processes. Further, Wenger and Wilkins (2009) show that greater state use
of automation in processing unemployment insurance claims leads to more women receiving benefits—
suggesting that street-level bureaucrats in the absence of automation can and do use discretion to
disentitle (women) clientele. Together these works show that administrative errors are a
performance indicator, that they can be managed, and that priorities or values contribute to the
incidence of these errors. We still, however, know little about the consequences of these errors.
When administrative errors lead to reduced benefits or disentitlement, the monetary cost
of lost program benefits is borne by the claimant. Yet, even when administrative errors do not
result in loss of benefits, those errors may still impose a burden on the claimant. Especially when
automation and data sharing can rapidly disseminate information across offices and programs,
even a minor administrative error can quickly spread. Widlak and Peeters (2020) document the
plight of a Dutch man navigating a Kafkaesque administrative maze in his attempt to correct an
erroneous criminal charge. Once entered into a computer system, this error was promptly copied
to other organizations’ systems with no capability of recall. The cost of this error and the burden
of correcting it was borne by the citizen, even though the error was not their fault. Widlak and
Peeters (2020) go on to argue that administrative inaccuracies may be committed by the citizen
or the bureaucracy. If committed by the bureaucracy, these inaccuracies may be either
inadvertent errors or intentional maladministration. Whether deliberate or accidental, errors that
are the fault of the administrator are an important, and overlooked, cause of administrative
burden.
Administrative Burden
In recent years, questions of citizen access and uptake of public programs have been
advanced as theoretical extensions of administrative burden research. Administrative burden may
be defined either in terms of the experience by an individual in their interaction with policy
implementation or as the imposition of onerous demands by a government or policy agent
(Madsen et al. 2020). Administrative burden occurs when an individual’s interaction with policy
implementation is onerous (Burden et al. 2012, 741) due to “interactions with government that
impose (or lessen) burdens on individuals and organizations” (Heinrich 2018, 216). Building on
Burden et al.’s (2012) definition, administrative burdens can be understood as the subjective
“learning, psychological, and compliance costs that citizens experience in their interactions with
government” (Herd and Moynihan 2018, 22). Such burdens may be unintentionally imposed
through benign neglect or deliberately designed through “hidden politics” (Moynihan, Herd, &
Harvey 2015; Herd and Moynihan 2018), and they may be produced through formal procedures
and policy or through informal policy and individual bureaucrats’ coping behaviors (Peeters
2020; Widlak and Peeters 2020).
The consequences of burdensome administrative experiences are multiple, including (1)
direct first-order compliance, learning, or psychological costs borne by clientele; (2) losses
resulting from limited or denied access to benefits or exercise of rights; and (3) feedback effects
on individuals’ attitudes, preferences, or behavior. Scholars have identified organizational,
procedural, and policy-design factors that create burdensome experiences and result in lower
levels of program access and take-up, especially in the context of social program administration.
Herd et al. (2013) discuss examples of administrative changes that increased social welfare
program uptake including simplifying reporting procedures (Klerman and Danielson 2009) and
eliminating face-to-face interview requirements (Wolfe and Scivner 2005). Lee (2021) finds that the
adoption of simplified reporting procedures in the US Supplemental Nutrition Assistance
Program led clients to commit fewer unintended errors, lowering the
underpayment rate. Conversely, the introduction of additional documentation requirements
increases burden and reduces take-up (Herd et al. 2013).
Within the literature on administrative burden, scholars have explored the interaction of
administrative burden and discretionary authority. Bell et al. (2021) find that bureaucrats’
ideological beliefs shape their views of administrative burden. Politically conservative
bureaucrats revealed more support for burdensome policies and a belief that such policies serve
to reduce fraud and demonstrate client deservingness. Politically liberal bureaucrats, on the other
hand, express more opposition to administrative burden with the belief that such burdens threaten
social equity. Further, Bell and Smith (2021) show that bureaucrats’ perceptions of their roles
shape whether they use discretion to alleviate or exacerbate administrative burden for clientele,
resulting in differential client access to public services. These findings highlight a critical
contingency in the imposition and experience of administrative burden: the perspectives and
beliefs of the bureaucrats themselves. Street-level bureaucrats utilize discretion in their day-to-
day tasks to shape clientele experiences with policy implementation, and they do so in ways that
distribute administrative burdens according to their own ideologies and beliefs.
Recognizing that discretion may alleviate or exacerbate administrative burden for
clientele links to research on how discretion can reinforce racial, gendered, classed, and moral
biases (Maynard-Moody and Musheno 2000, 2003; Masood and Nisar 2021). Recent work on
racialized burdens examines how administrative burdens have historically been used to
normalize and facilitate racially disparate outcomes from public organizations that promise —
and are legally obligated to provide — fair and equal treatment (Ray, Herd, & Moynihan 2020).
Racialized Disparities in Public Administration
In their critical review of public administration, Alexander and Stivers (2020) discuss
three mechanisms through which race and racism shape administrative outputs in the United
States: racialized outcomes are produced through organizational, legal, and individual decision
processes. Hence, racist practices and outcomes may be perpetuated even without consciously
racist attitudes or beliefs of staff members. Individual malice or pernicious intent is not necessary
for racial discrimination to systematically disadvantage clientele through policy implementation,
as is the case with administrative evil more broadly (Young et al 2021). Understanding the
multiple sources of racialized outcomes in public administration is key to understanding the role
of administrative errors in producing those outcomes.
The first mechanism through which race and racism shape outcomes of policy
implementation is individual bias. Street-level bureaucrats’ exercise of discretion is powerfully
shaped by their beliefs and values (Lipsky 1980; Maynard-Moody and Musheno 2000, 2003),
and discretion is omnipresent in policy implementation. Davis (1969, 4) offers that “A public
officer has discretion wherever the effective limits on his power leave him free to make a choice
among possible courses of action and inaction.” Put differently, discretion is the latitude afforded
individuals (public administrators in particular in this context) with delegated responsibilities to
use their judgment when making a decision (Bullock 2019; Bullock et al 2020). Formal rules and
norms set the boundaries of administrative discretion, but, perhaps unsurprisingly, Scott (1997)
finds that characteristics extraneous to client needs contribute to the judgment and treatment of
clientele, including organizational control and client characteristics. When such a characteristic is
protected, it forms the basis for direct discrimination. Street-level bureaucrats may treat non-
white clientele differently due to their internally held (implicit or explicit) racist attitudes or
racial stereotyping (Schram et al 2009; Andersen and Guul 2019). Discriminatory beliefs held
by individual bureaucrats are an important, but certainly not the only, source of race-based biases
that shape outcomes for clientele.
The second mechanism through which race and racism shape policy outcomes is legal
bias — the legacies of racial bias persist in federal, state, and local law. Legislation and
regulation can be explicitly or implicitly biased against some groups in ways that perpetuate
power and resource inequities, and these laws and their consequences continue to shape
administrative outcomes in contemporary American policy and administration. The
consequences of statutory racial bias on social policy have been well documented by scholars
including Hero (1998), Quadagno (1996), and Lieberman (1998). Examples of legal
discrimination can be found throughout the US. Jenkins (2021) demonstrates how racial
inequalities in San Francisco shaped municipal finance arrangements, which were in turn central
in determining the distribution of resources in the city. Rothstein (2017) documents how laws
and policies enacted by localities, states, and federal governments promoted discriminatory racial
zoning and continue to shape social and economic outcomes today. Debate over the design of
unemployment insurance itself, in the lead-up to the passage of the Social Security Act, was
shaped by race and racism (Davies and Derthick 1997, Poole 2006, Quadagno 1996, Norton and
Linder 1995).
The third mechanism through which race and racism can pervert policy implementation is
through organizational structures and procedures (Alexander and Stivers 2020). Recent work on
racialized organizations focuses on how race and beliefs about race become entrenched in norms
and processes which exist outside of any individual employee’s preferences, and thus also shape
resource distribution (Ray 2019). Administrative structures and practices have been influenced
by historical events and narratives, especially those involving people of color. These structures
channel resources unevenly and inequitably across racial groups (Michener 2018). Watkins-
Hayes (2011) finds that racial diversity within street-level bureaucracies can positively impact
administration as expected by representative bureaucracy theory, but that “organizational context
and intragroup politics within minority communities greatly inform how race is mediated within
these institutions” (i233). This suggests that organizational context, processes, and practices
shape administrative discretion (Ray, Herd, and Moynihan 2020). The influence of individual,
legal, or organizational biases may combine in ways that wholly exclude non-white clients from
accessing the service to which they are entitled (Brodkin and Majmundar 2010), or they may
layer in a manner that imposes costs on the clients seeking to access benefits by making the
administrative process more onerous (Herd and Moynihan 2018). What we do not know, yet, is
how these influences shape the incidence and distribution of administrative errors.
Theory of Racialized Administrative Errors as Burdens
We offer a theory of racialized administrative errors, building on previous work on street-
level bureaucracy, administrative discretion, and administrative burden. We expect that the
likelihood of administrative errors will differ for clientele from historically marginalized racial
and ethnic groups. This expectation is grounded in theory and evidence that clientele of color
experience harsher sanctioning in social welfare policy implementation (e.g., Soss et al. 2001;
Schram et al 2009; Keiser et al 2004). Having discretion in where to devote time and attention,
street-level bureaucrats can apply greater scrutiny to some claims than to others. This discretion
will be shaped by their beliefs and perspectives (Lipsky 1980), including racial and ethnic
prejudice which persists in contemporary American society and administration (Alexander and
Stivers 2020). Thus, if heightened scrutiny, time, and attention are given to claims filed by
historically marginalized groups, we would expect to observe fewer errors, and greater accuracy, in
procedural decisions in cases filed by claimants of color. Alternatively, street-level bureaucrats
may perceive marginalized clientele as more deserving of help (Jilke and Tummers 2018), in
which case discretion in street-level bureaucrats’ use of time and attention would instead be used
to prioritize and assist claimants of color to better serve their needs. Thus, we expect claimants of
color to experience administrative errors in the processing of their UI claims at different rates
than white claimants.
Hypothesis 1: White claimants experience administrative errors at different rates
than non-white claimants.
If supported, Hypothesis 1 would suggest that a difference exists in the services provided
to white and non-white clientele, but we cannot infer the cause or consequences of this
difference. If this difference is driven by street-level bureaucrats’ discretion, and that discretion
is influenced by racial bias, we would expect racialized administrative errors to disadvantage
clientele of color. Empirically, this would be evidenced by non-white clientele being erroneously
underpaid or wrongfully denied benefits at higher rates than white clientele. Alternatively,
organizational procedures, norms, and assumptions are all too often (consciously or
unconsciously) designed in ways that systematically disadvantage clientele of color. Less severe
forms of this organizational bias may be seen in non-white clientele experiencing more administrative
burden to receive their entitled benefits (Herd and Moynihan 2018). At the extreme,
“administrative exclusion” occurs when burdensome organizational processes preclude receipt of
benefits, regardless of claimant preferences or eligibility status (Brodkin & Majmundar 2010).
Either due to individual bureaucrats’ prejudice or through organizational-level rules or processes,
we expect that administrative errors are more likely to disadvantage non-white claimants
compared to white claimants by wrongfully denying or underpaying claims.
Hypothesis 2: Administrative errors are more likely to wrongfully deny or
withhold benefits or services from non-white clientele than from white clientele.
If Hypothesis 2 is supported, we can infer that administrative errors disproportionately
disadvantage non-white clientele, which could be due to either racialized organizational
processes or individual-level biases. To uncover the role of individual bureaucrats in racialized
administrative errors, our third expectation focuses on the role of discretion. Where street-level
bureaucrats have discretion in implementation, bias creeps into decision-making and negatively
impacts non-white clientele (e.g., Lipsky 1980; Einstein & Glick 2017; Wenger & Wilkins 2009;
Soss, Fording & Schram 2011; Brodkin 1997). Discriminatory scrutiny and punitive judgement
in institutionalized practices or in individual staff discretion can disadvantage non-white
clientele. Bureaucrats’ political beliefs and perceptions shape their view of administrative burden
as well as their willingness to use discretion in their jobs to alleviate or exacerbate those burdens
for clientele (Bell and Smith 2021; Bell et al. 2021). Thus, bureaucrats can and will use
discretion to shape administrative burden according to their personal views. Combining this
recognition with the overwhelming evidence that racial and ethnic prejudice continues to
permeate individual bureaucrats’ decision-making, it follows that discretion in bureaucratic
decision-making may provide an opportunity for racial bias to influence outcomes. The impact of
racial discrimination should be greatest in decisions requiring greater bureaucratic discretion —
where more subjective judgement is necessary.
Hypothesis 3: White clientele are less likely than non-white clientele to experience
administrative errors in decisions requiring greater discretionary judgement.
If Hypothesis 3 were supported, it would mean that non-white clientele experience more
administrative errors than white clientele in decisions over which individual agents have greater
discretion. This evidence would suggest that race is a factor shaping those agents’ decision-
making and would support the argument that individuals’ racial bias affects the incidence and
distribution of administrative errors.
Technology’s Effect on Racialized Administrative Errors
If the exercise of discretion introduces the risk of administrative errors as well as
disentitlement and discrimination against marginalized populations, then one solution would be
to sharply curtail or eliminate administrative discretion. The advancement and proliferation of
computers and related Information and Communication Technologies (hereafter “ICT”) is central
to attempts to reshape administrative processes and discretion in service of multiple normative
goals such as efficiency, effectiveness, and equity.
We argue that introducing ICT into administrative processes is likely to directly influence
whether and how racialized administrative errors occur. We further argue that this relationship is
likely to be multifaceted, owing to the different sources of both administrative errors and
racialized disparities identified earlier. For example, pressure to increase the timeliness of
administrative decisions on benefits claims without corresponding increases in labor resources is
a known source of administrative errors (Peeters 2020; Wenger, O’Toole & Meier 2008). ICT-
enabled automation of claims processing in this context could, if implemented properly,
simultaneously increase efficiency and effectiveness while also reducing error rates. Thus, more
claims are processed per unit cost, at a faster rate, and with fewer errors.
This example highlights an important distinction for considering ICT’s effect on
administrative discretion and, by extension, both discrimination and errors: ICT automation of
administrative processes exists on a spectrum. The spectrum runs from the absence of ICT
automation at one end to complete workflow automation at the other. Between these ends lies a
range of cases where ICT augments bureaucrats' workflows and, as a result, shapes whether and
how they exercise discretion. ICT has long been seen as a way for senior
management to impose discretion-limiting processes on street-level bureaucrats (Garcon 1981).
On the other hand, ICT's effect on biased decision-making may be muted, or bias may even be
exacerbated. ICT can also enable new pathways for bureaucrats to exercise discretion, even as it
limits previous pathways (Buffat 2015; Bullock, Young and Wang 2020; Busch and Henriksen
2018; Hansen, Lundberg, and Syltevik 2018). For example, de Boer and Raaphorst (2021) use
the case of newly introduced ICT workflow automation in Dutch food and consumer product
safety inspection to demonstrate that ICT changed the inspectors’ style of discretion, but this
change was perceived by inspectors as a loss of overall discretion.
While we acknowledge the contested claims of ICT’s effect on discretion when used to
augment street-level bureaucrats’ workflows, we focus on the use of ICT to remove face-to-face
interactions between claimant and street-level bureaucrat by automating claim filing processes.
We focus on this context for two reasons. First, in Western societies, and particularly the
Anglosphere, political priorities commonly drive budget cuts in social service agencies and
incentivize automation of application processes to save on labor costs. Second, the
complete automation of client-state interactions during the application process allows us to test
competing theories of the source of racialized disparities and administrative errors: individual
prejudice vs. organizational/procedural biases.
If ICT eliminates direct interaction between clients and street-level bureaucrats, there
should be fewer opportunities for individual-level biases and discretion to shape outcomes
(Bovens & Zouridis 2002; Busch & Henriksen 2018). This comports with prior research on the
effect of introducing telephone-based filing of unemployment insurance claims in the United
States on gender-based discrimination (Wenger and Wilkins 2009). But automating away
interactions between clients and street-level bureaucrats also eliminates opportunities for the
ameliorating effects of active representation by street-level bureaucrats who share demographic
characteristics with their clients (Zwicky and Kubler 2019). Removing this pathway for active
representation is not a substantive problem if the only source of racialized discrimination and/or
errors clients are likely to face is from street-level bureaucrats who are demographically different
from them. But if systemic organizational/procedural factors persist, street-level automation via
ICT may expose clients to the same, or greater, risks of experiencing a racialized
administrative error and the corresponding imposed administrative burdens (Obermeyer, Powers,
and Mullainathan 2019; Benjamin 2019; Ledford 2019).
Evidence exists to support both the argument that automating client-state interactions
reduces racialized administrative errors and that automation simply strengthens the effect of
system-level biases. At a perceptual level, clients' beliefs about the fairness with which they are
treated by the state are shown to be higher when the interaction is mediated by ICT (Miller and
Keiser 2020). This effect is also demonstrable on an operational level, as Wenger and Wilkins
(2009) showed that telephonic automation of unemployment claims resulted in fewer
administrative errors in benefits determinations for female clients. On the other hand, complete
workflow automation has been shown to allow decision makers to embed negative social
constructions of client populations in complicated eligibility rules and logic that are then
expressed in error-prone automated application processes (Brodkin and
Majmundar 2010; Eubanks 2018; Ray, Herd, and Moynihan 2020). Furthermore, active
representation via street-level bureaucrats is one of the few empirically demonstrated ways of
correcting structural risks to vulnerable client populations. Thus, we expect that ICT-enabled
automation may reduce administrative errors in aggregate, but is insufficient to overcome
structural contributors to racialized administrative errors. We therefore expect that the racialized
difference in the likelihood of administrative errors occurring in a claim will not be mitigated
when claimants file their claims using automated methods.
Hypothesis 4: The likelihood of an administrative error is not equivalent and
statistically different across racial and ethnic groups when client-state interactions
involve ICT-enabled automation.
If our fourth hypothesis were supported, we would infer that organizations' use of ICT-enabled
automation does not produce equitable service quality across all racial and ethnic groups. Put
differently, if non-white clientele experience administrative errors at a different rate than white
clientele when claims are filed using ICT-enabled automation, then racialized administrative
errors occur in the absence of face-to-face client-state interactions. Such a result
would imply that racialized administrative errors cannot be explained solely by individual
bureaucrats’ racial bias or discretionary judgment. In the following sections, we introduce our
empirical context and research design to test these expectations, and present our results. We
conclude with future directions for research.
Empirical Context and Research Design
We use audit data from the United States unemployment insurance (UI) program to test
our hypotheses on racialized administrative errors. UI was created by the Social Security Act in
1935.²
Congressional legislation and federal regulations set minimum guidelines for program
rules and eligibility requirements, which are implemented by state workforce agencies (SWAs).
In the 1980s, the Department of Labor (DOL) implemented an improper payment detection
system named Benefit Accuracy Measurement (BAM). States’ BAM programs are federally
mandated audits of UI claim determinations by state workforce agencies. States are responsible
for staffing BAM units within their workforce agencies; these organizational units are required to
² Interestingly, UI "provoked the most extended discussions and the widest differences of opinion" of all
the programs considered in the development and writing of the Social Security Act (Witte 1945, 30). One point of
debate was the exclusion of agricultural and domestic workers from UI, a decision which effectively deprived most
Black Americans of insurance against joblessness and allowed Southern states to perpetuate racial discrimination
(Davies and Derthick 1997; Quadagno 1996).
be independent of and not accountable to any of the workforce agency divisions whose work
products are subject to BAM audits (Department of Labor, 2009). We use data from the DOL
collected from states’ BAM programs from 2002-2018 to test our hypotheses.³
Two audit programs exist within the BAM system. First, Paid Claims Accuracy (PCA)
samples UI claims that have been approved by state agencies to assess whether the dollar amount
of benefits paid was proper. Second, Denied Claims Accuracy (DCA) assesses whether the
decision to deny the claim was proper. Selected claims are then assigned to BAM auditors for
review. The auditors’ investigatory processes differ between paid and denied claims. DCA
claims are “narrowly” audited; BAM auditors investigate only whether the State agency’s
specific reason for the denial decision was applied correctly. In PCA audits, however, “all
prior determinations affecting claimant eligibility for the compensated week are evaluated”
during the investigation, with auditors required to fully investigate every discrepancy that might
affect whether the benefit payment was properly made (Department of Labor, 2009, VIII-1).
One consequence of this difference is that audits of benefit payments review a larger set
of claimant and administrative inputs and decisions, which, all else equal, ought to increase the
baseline rate of error detection compared to DCA audits. The equity implications of this policy
design choice are beyond the scope of this article, but for our purposes it is important to note that
our analysis of DCA audits in testing hypotheses 2 and 4 is likely to be conservative relative to
the true, unobserved population-level error rate in UI claim denials.
³ 2002 is the first year for which BAM data are publicly available from the Department of Labor.
These data are publicly available upon request from the Unemployment Insurance Office of the Employment &
Training Administration in the United States Department of Labor.
States are required to randomly sample UI claims data every week for auditing, according
to established quarterly case audit thresholds, to ensure that BAM data are robust to seasonal
labor market effects. DCA-specific samples are stratified by denial determination classification:
monetary, separation, and non-monetary non-separation reasons. All samples are validated for
randomization according to claimant gender, ethnic group, and age, as well as by UI program
type (Department of Labor 2009). Publicly available BAM data include information on claimant
demographics, including age, gender, racial and ethnic identity, educational attainment, and
citizenship.
Outcome Variables
We are interested in estimating the likelihood of administrative errors in UI claims
processing, conditional on claimant characteristics. We are also interested in estimating the
disparate impact of agency errors that lead to (1) underpayment of benefits and (2) wrongful
denials, particularly in cases where agents have the broadest discretion to approve or deny
claims.
Two factors affecting these measures deserve attention. First, the BAM audit process
accounts for joint fault across multiple parties involved with the claim. Second, auditors may
find multiple errors within a claim; each discrete error may have different responsible parties.
We consider any claim with at least one error for which the state workforce agency is responsible
(an agency-responsible error) to be a claim with an administrative error. We also include all
observations in which the agency is included in a multi-party responsibility determination as an
administrative error. Thus, we operationalize “administrative error” as any decision
that the audit process determines to be (1) erroneous or inaccurate and (2) the responsibility of
the state workforce agency. All errors identified to be the sole responsibility of claimants,
employers, or third parties are excluded from our analysis.⁴
Our first outcome of interest is whether the BAM audit identified an error that is the
responsibility of the State Workforce Agency (hereafter, agency) in a sampled claim. The BAM
auditor assigned to the claim determines the fault for any identified error; these decisions are
reviewed and approved by BAM managers. Our second outcome of interest is whether an
agency-responsible error resulted in a claimant receiving fewer benefits than they were entitled
to, either through underpayment or wrongful denial of benefits.⁵ Our third and final outcome of
interest is the specific circumstance where a claim was wrongfully denied for reasons other than
(a) wages, earnings, or other monetary reasons, or (b) reasons related to the nature of the
separation between claimant and previous employer.
Eligibility rules consist of two types: monetary and non-monetary. Monetary eligibility
criteria include qualifying wage levels or employment duration, whereas non-monetary
eligibility is determined by criteria such as reason for separation, work-search efforts, or denial
of suitable work. In recent decades, the process of determining monetary eligibility has shifted
mostly to electronic verification and processing. Wage and other employment details reported by
an applicant to the state workforce agency can be verified using national databases of new hires
and payroll details. Non-monetary eligibility rules allow for greater discretion on the part of
agents (Rubin 1983), and require subjective judgements about the claimant’s work-search efforts
⁴ According to the Department of Labor Employment & Training Administration Office of Unemployment
Insurance (2022), agent responsibility for an error is determined through the BAM audit processes and includes any
error “for which the [state workforce agency] was either solely responsible or shared responsibility with claimants,
employers, or third parties, such as labor unions or private employment referral agencies. [This]…includes fraud,
nonfraud recoverable overpayments, nonfraud nonrecoverable overpayments, official action taken to reduce future
benefits, and payments that are technically proper due to finality or other rules”.
⁵ The BAM data do not specify an order for multiple errors within claims.
or work-separation behavior. Thus, it is in the agents’ non-monetary eligibility determinations
that we expect greater subjective discretion and therefore greater room for racial bias to shape
outcomes.
Independent variables
Our primary independent variable of interest is the claimant’s self-identified race and
ethnicity. The technology claimants use to file their claim is our second independent variable of
interest. Claimants may use a range of methods to file a claim for unemployment insurance:
telephone, in person, mail, internet, by employer, or some other electronic method. We include
an indicator variable for the technology that the claimant used to file their claim. We reduce the
categories to telephone (referent), in person, internet, and ‘other’ for parsimony. We further
control for claimant gender, education, US citizenship, and age. We include a squared term for
claimant age to allow for the possibility that the youngest and oldest claimants face uncommon
employment and eligibility conditions that could affect the baseline risk.
We include an indicator for whether the claim was made as part of the standard UI
program or if it was filed through a supplementary or alternative program. These alternative
programs have additional and different eligibility and benefit level rules, and are less common,
so they are likely to have a different baseline risk of agency error. We also include information
on the method used to contact claimants during the audit process to account for the effect of
different communication technologies on the auditor’s judgment of the claimant’s culpability and
recollection of the filing process and claim conditions.
Finally, we include indicators for the state (including Puerto Rico and the District of
Columbia), the year and the fiscal quarter in which the claim was filed (e.g., Q1, January-
March), and the North American Industry Classification System code for the claimant’s last
employer as fixed effects to control for unobserved systematic variation from State law,
workforce agency characteristics, and temporal- and industry-specific effects, respectively. Table
1 provides summary statistics for all independent variables.
[Table 1 about here]
Estimation Strategy
We test hypothesis 1 using a logistic regression where the dependent variable is equal to
1 if the claim has an agency-responsible error (an administrative error), and 0 otherwise. We
report these results in Table 2. We test hypotheses 2 and 3 using multinomial logistic regression.
In this model, the dependent variable is an indicator where agency errors resulting in wrongful
denial are separated from underpayments, and we further separate wrongful denials for reasons
other than monetary or separation considerations from other wrongful denials. Thus, this
measure has five outcome categories: (1) no error attributable to the agency; (2) an agency-
caused overpayment; (3) an agency-caused underpayment; (4) an agency-caused wrongful denial
for monetary- and/or separation-based factors; and (5) an agency-caused wrongful denial for
neither monetary- nor separation-based factors. The combination of categories 3, 4, and 5
corresponds to our second outcome of interest and to hypothesis 2. The fifth category represents
the outcome of interest specified in our third hypothesis.
We embed a test of hypothesis 4 in both models by including an interaction effect of
claimant race and ethnicity with the technology they used to submit their claim. In this way,
submission technology acts as a moderating variable between a claimant’s racial and ethnic
identity and agency bureaucrats’ propensity to make administrative errors that are systematically
associated with those identities. All models use robust standard errors clustered at the State level.
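As a concrete illustration of this estimation strategy, the specification for hypotheses 1 and 4 can be sketched as follows. This is a minimal sketch on simulated data, not the authors' code; every variable name (agency_error, race, filing, state) is our own placeholder.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated claim-level data standing in for the BAM audit records.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "agency_error": rng.integers(0, 2, n),  # 1 = agency-responsible error found
    "race": rng.choice(["white", "black", "hispanic"], n),
    "filing": rng.choice(["telephone", "in_person", "internet", "other"], n),
    "age": rng.integers(18, 70, n),
    "state": rng.choice([f"S{i}" for i in range(10)], n),
})

# Logit of any agency-responsible error on race, filing technology, their
# interaction (the hypothesis 4 moderator), a quadratic in age, and state
# fixed effects, with standard errors clustered at the state level.
m1 = smf.logit(
    "agency_error ~ C(race, Treatment('white'))"
    " * C(filing, Treatment('telephone'))"
    " + age + I(age**2) + C(state)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]}, disp=False)

# Coefficients exponentiated to odds ratios, as reported in Tables 2 and 3.
odds_ratios = np.exp(m1.params)
```

The multinomial models for hypotheses 2 and 3 would substitute smf.mnlogit with the five-category error-type outcome on the left-hand side; the right-hand side is unchanged.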
Our analytic sample for model 1 in Table 2 includes all observations of key week claims
audited through the BAM program from 2002 through 2018 with complete covariates (N =
521,664). For model 2’s estimate of specific types of agency errors in Table 3, we restrict our
analytic sample to only include key week claims that were eligible to receive one or more benefit
payments and were audited between 2002 and 2018 with complete covariates (N = 361,709).
This excludes audited claims which were properly denied and approved claims that were later
identified as fraudulent, as these claims have idiosyncratic characteristics that make them
fundamentally distinct from eligible claims. Differences in the information about claims
contained in PCA vs. DCA data also affect our initial model specifications. BAM data on paid
claims contain significantly more information about claim characteristics than the denied claims
data, in part because of the difference in auditing processes discussed earlier in this section.
One potential source of bias in our estimates would be persistent and systematic
differences in the choice of claim filing technology used by different racial/ethnic groups. Figure
1 presents the proportion of claims filed by UI clients by race/ethnicity over technology across
our sample period. There is an appreciable lag in the use of automated filing technologies by
claimants of color relative to whites, but each racial/ethnic group appears to follow similar
adoption/usage trends across different technologies.
[Figure 1 about here]
Our analysis is subject to the familiar limitations of cross-sectional estimation without an
instrumental variable. Future research on racialized administrative errors should consider
employing an appropriate instrumental variable approach. Additionally, including State-level
fixed effects prohibits us from estimating the effect of variation in State-level policy and
institutional characteristics that are likely to affect the underlying risk of agency-caused errors.
Estimates of these effects could hold significant policy implications, and are another avenue for
future research on this topic.
Findings & Discussion
Estimates from our models are reported in Tables 2 and 3. Coefficients are reported as
odds ratios for ease of interpretation and to center our analysis on the relative deviation in
administrative error rates faced by minorities compared to white non-Hispanic claimants.⁶ The
results support rejecting the null hypothesis in favor of hypothesis 1 that white claimants
experience administrative errors at different rates than non-white claimants. Across every
specification and sampling framework, Black claimants were consistently more likely to
experience an administrative error. For example, in model 2’s estimation of the odds of any
claim having any agency error across all claims, Black claimants were 14% more likely to have
an agency-responsible error when controlling for the differential effect of key week claim filing
technology by race and ethnicity as well as all other covariates (p < 0.001). The multinomial
logistic estimates show Black claimants are more likely to experience an agency error across all
subtypes of error, all of which are statistically significant at the p < 0.05 level or higher. Model 2
also shows higher odds of agency errors in wrongful denials for monetary or job separation
issues for Hispanic claimants (11% increased odds, p < 0.05).
[Table 2 about here]
[Table 3 about here]
⁶ The effect of a coefficient (x) on the relative odds, expressed in percentages, of experiencing a
given outcome is found by (x - 1) * 100. Thus, for example, an odds ratio of 1.14 corresponds to
14% increased odds, while an odds ratio of 0.13 corresponds to 87% lower odds.
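The conversion in this footnote is easy to verify directly; a trivial sketch (the function name is ours):

```python
def odds_ratio_to_pct_change(odds_ratio):
    """Percent change in the odds of the outcome implied by an odds ratio."""
    return (odds_ratio - 1) * 100

# The two examples from the footnote:
print(round(odds_ratio_to_pct_change(1.14)))  # 14, i.e., 14% increased odds
print(round(odds_ratio_to_pct_change(0.13)))  # -87, i.e., 87% lower odds
```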
Our results also support rejecting the null in favor of our second hypothesis that
administrative errors are more likely to wrongfully deny or withhold benefits from non-white
clientele. The estimates from our multinomial logistic regression specification in model 2 shows
that both Black and Hispanic claimants are more likely to have agency-responsible errors in their
claims that reduce the benefits they receive, both when controlling for key week claim
submission technology in general and with the technology as a mediator of agency error by race
and ethnicity. For Black claimants in particular the effect sizes for agency-responsible wrongful
denials are high: compared to white non-Hispanic claimants, Black claimants are 37% more
likely to have their claim wrongfully denied for monetary or job separation reasons, conditional
on other covariates (p < 0.001).
Similarly, the results support our third hypothesis that white clientele are less likely to
experience administrative errors in decisions requiring greater discretionary judgement than non-
white clientele as shown in model 2. Conditional on an agent making a wrongful denial for non-
monetary, non-separation related reasons, as compared to making no error, an underpayment, an
overpayment, or any other type of wrongful denial, and controlling for all other factors, Black
claimants are 19% more likely to have their claims erroneously rejected than white non-Hispanic
claimants (p < 0.001).
Finally, our estimates paint a nuanced picture of the effect of automating claims filing via
ICT on observed racialized administrative errors. The results support hypothesis 4: differences in
likelihood of racialized administrative errors across groups exist even when ICT-enabled
automation is used in client-state interactions. When compared to the referent category of white
clients, Black and Latinx clients’ conditional odds of experiencing an administrative error in the
processing of their unemployment claim are not substantively different, whether examined in
aggregate (model 1, Table 2) or according to error type (model 2, Table 3). However, when
examined in absolute rather than relative terms, Black and Latinx claimants are less likely to
experience an administrative error in their unemployment claim when using automation
compared to working with a street-level bureaucrat in several contexts. Black clients filing a key
week claim through the internet have 14% decreased odds of experiencing a wrongful denial for
non-monetary, non-separation related reasons (p < 0.05). Hispanic claimants are 208% more
likely to experience an agency-caused wrongful denial for monetary or separation reasons when
filing in person (p < 0.05), and Asian and Pacific Islander claimants are 300% more likely to
have an agency-caused underpayment error when filing their key week claim in person (p <
0.05). Figures 2 and 3 present these absolute changes from filing technology by race and
ethnicity as average marginal effects on the predicted probability of a client experiencing an
administrative error in their claim. We report 83% confidence intervals in both figures to account
for the interaction effects between variables (Knol et al. 2011).
[Figure 2 about here]
[Figure 3 about here]
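Why 83% rather than 95%? For two independent estimates with similar standard errors, non-overlap of roughly 83% intervals corresponds to a two-sided test of their difference at about the 5% level, which motivates the convention cited above. A minimal sketch of the arithmetic:

```python
from scipy.stats import norm

z83 = norm.ppf(1 - (1 - 0.83) / 2)  # half-width multiplier for an 83% CI
z95 = norm.ppf(0.975)               # the usual 95% multiplier

# Two independent estimates with equal standard error se differ significantly
# at the 5% level when |b1 - b2| > z95 * sqrt(2) * se (about 2.77 * se). Their
# 83% intervals stop overlapping at 2 * z83 * se (about 2.74 * se), so visual
# non-overlap approximates the formal test.
print(round(2 * z83, 2), round(z95 * 2 ** 0.5, 2))
```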
As shown in figure 2, the predicted probability of Black or Hispanic/Latinx clients
experiencing an administrative error is greater for claims filed in-person than via any of the ICT-
enabled automated systems. This effect persists for high-discretion claims in the form of denial-
based errors in Figure 3, and is also true in this context for American Indian & Alaskan Native
clients. That these effects are not, as a whole, statistically different from white client errors
suggests that any structural factors contributing to racialized administrative errors are likely to
persist. Yet both Figures 2 and 3 suggest that ICT automation of UI claims applications also
reduces the probability of white clients experiencing an administrative error, both in general
(Figure 2) and for denial-based errors (Figure 3).
These results provide consistent evidence that agency-caused errors in claim processing
disproportionately affect and disadvantage non-white claimants. When technologies are
employed that limit clients’ exposure to direct interactions with potentially biased program
administrators, disparities in service quality are smaller, but Black and Hispanic claimants still
experience more agency-responsible administrative errors than white claimants. Structural biases
may explain this result. Biases may be embedded in different technology-mediated processes,
or automated or digitally submitted claims data may retain racialized information (e.g., claimant
names) that elicits patterns of bias in agency decision-making. At the same time,
the corresponding decrease in the predicted probability of white clients experiencing
administrative errors when using ICT-enabled automation for filing UI claims, compared to in-
person filing with face-to-face interactions, suggests that technology reduces the risk of
administrative errors and the resulting administrative burdens for clients irrespective of client
demographics. Overall, these results suggest that ICT-enabled automation reduces the likelihood
of administrative errors for all groups, including Black and Hispanic clientele.
Several of our control variables were also significant across our models, and deserve
note. In particular, the contact method used by the BAM auditor to collect information from the
claimant is one avenue through which biases may further exert influence on the ultimate
determination of whether an error is attributed to the claimant or the agency. We also find
evidence of increased rates of wrongful denial for non-monetary or separation reasons for female
claimants (18% increased odds, p < 0.001), consistent with prior work on gender-based
discrimination in UI programs (Smith et al. 2003; Wenger and Wilkins 2009). Along with State-
level policy variation, these factors will be explored in more detail in future research.
It is also worth noting that while we operationalize underpayments as a particularly
pernicious harm that arises from administrative errors in this context, benefit overpayments are
also burdensome for claimants. Those who are unemployed face severe financial constraints and
often must make particularly difficult choices about how to budget the limited money they have
on hand and receive through social insurance programs. When those programs erroneously
overpay claimants for some period of time and then, at some point in the future, demand
repayment of these funds, claimants may find themselves facing a budget shock that is at least as
disruptive as their initial loss of employment.
Conclusion
Our contribution builds on four existing lines of research. First, administrative errors are
an often overlooked dimension of public organizational performance with real consequences for
clientele’s experience with government. Second, as a field, public administration has come to
focus on administrative burdens as intentionally or unintentionally designed limits on citizens’
access to benefits or exercise of rights. Third, public administration has extensively documented
how race and racism shape public administration and public service outputs through multiple
mechanisms. Scholars have further begun to consider inequities in the administrative burden
(Chudnovsky and Peeters 2020) and in particular how racial disparities may be perpetuated by
the maldistribution of administrative burden (Ray, Herd, and Moynihan 2020). Fourth, a line of
research in public administration on digital discretion (Busch & Henriksen 2018) examines the
ways in which public organizations choose to employ automation and information
communication tools, and the consequences of those choices for public service outcomes.
Bringing these lines of work together is necessary to contextualize our theory of racialized
administrative errors.
Recent work on administrative burdens has focused on the costs citizens experience in
their interactions with government programs. One focus of this literature has been on the racial
dimensions of administrative burden, hurdles in access to social programs, and disparities in
service quality that disempowered individuals disproportionately face. We contribute to this
literature by examining the U.S. Unemployment Insurance program and the costs imposed on
clientele through both the wrongful denial of UI benefits and other payment errors. We also test
the hypothesis that technology can be used to mitigate these errors. Overall, we find strong
evidence that historically marginalized groups, particularly Black and Hispanic claimants, are
more likely to experience agency-caused administrative UI errors compared to white non-
Hispanic claimants. Also, Black and Hispanic claimants are more likely to have their claims
erroneously denied compared to white non-Hispanic claimants. We find that advances in
information communication technology can make a difference by reducing the baseline rate of
administrative errors compared to traditional face-to-face filing. We also find, however, that
differences in the incidence of administrative errors across racial groups persist, casting doubt on
the claim that technology will wholly mitigate the racialized administrative errors in the UI
system. A summary of our hypotheses and operationalization of key variables are presented in
Table 4.
Table 4: Summary of Results

Hypothesis 1: White claimants experience administrative errors at different rates than non-white
claimants.
Operationalization: DV: any agency-responsible error in a claim; IV: self-identified claimant race
and ethnicity.
Test: Table 2.
Result: Supported; Black and Latinx clients are significantly more likely to experience an
administrative error.

Hypothesis 2: Administrative errors are more likely to wrongfully deny or withhold benefits or
services from non-white clientele than from white clientele.
Operationalization: DV: an agency-responsible error resulting in wrongful denial or
underpayment; IV: self-identified claimant race and ethnicity.
Test: Table 3.
Result: Supported; Black claimants are significantly more likely to be wrongfully underpaid or
wrongfully denied.

Hypothesis 3: White clientele are less likely to experience administrative errors in decisions
requiring greater discretionary judgement than non-white clientele.
Operationalization: DV: an agency-responsible error in a non-monetary, non-separation
continuing eligibility determination; IV: self-identified claimant race and ethnicity.
Test: Table 3.
Result: Supported; Black and Latinx claimants are significantly more likely to experience an
administrative error in discretionary determinations.

Hypothesis 4: The likelihood of administrative errors is not equivalent and statistically different
across racial and ethnic groups when client-state interactions involve ICT-enabled automation.
Operationalization: DV: any agency-responsible error in a claim; IV: self-identified claimant race
and ethnicity, filing method of claim, and multiplicative interactions between race and filing
method.
Test: Tables 2 & 3, Figures 2 & 3.
Result: Supported; automation reduces the likelihood of errors for all groups, but Black claimants
are still more likely than white claimants to experience an administrative error, even when filing
using ICT-enabled automation.

Note: DV = dependent variable; IV = independent variable.
There are, of course, limitations to our study that offer opportunities for future theoretical
development and empirical examination. First, given the political-historical context in the United
States, we expect that our theory of racialized administrative errors is likely limited to the
implementation of transfer programs in the US. Our conclusion that administrative errors are
shaped by both discretion and structural factors, and that these errors disproportionately
disadvantage marginalized clientele, may travel beyond this program type or this national
context. The empirical implications of these conclusions, however, will look quite different in
places with different socio-political histories.
Second, UI is a small program in terms of both clientele and monetary expenditures,
when compared to other social insurance programs like Social Security or Medicare. Testing our
expectations of racialized administrative errors in the implementation of programs that vary in
politicization, decentralization, discretionary authority, and size will be important. We might
expect more centralized programs to be implemented with less discriminatory bias, and thus to
produce more equitable (less racialized) outcomes (see, for example, Soss, Fording, and Schram
2008 or Fording, Soss, and Schram 2011), but this will depend on other factors, including the
use of technology in claims processing.
Third, implementation is more difficult when programs are technically complex and
require costly information, and such difficulties likely increase administrative errors (Hill and
Hupe 2002). Programs with greater complexity may experience more administrative errors
overall, but whether complexity contributes to racialized administrative errors and how
technology mediates these errors will be an interesting path for future research.
Another limitation of our research design is our inability to examine the street-level
bureaucrat and client interaction as a dyadic relationship. Work on representative bureaucracy
notes the importance of bureaucrats’ identities in shaping administrative outcomes for clientele,
and historically marginalized clientele in particular. Unfortunately, information on the gender,
race, ethnicity, or other characteristics of the employees processing these claims (at the
individual or aggregate level) is not available. We are thus unable to examine how bureaucrats'
race or ethnicity contributes to or mitigates racialized administrative errors.
Relatedly, we have treated state workforce agencies as black boxes, with little
information on internal processes or procedures (technological or otherwise) because data on
internal operations are not available to examine. This is an area that should be addressed in
future research, and will require a notable data collection effort. Also, it will be important to
examine changes in organizational processes and ICT use over time and to theorize on
explanations for between-state variation more thoroughly.
Finally, our analysis does not generalize to intersectional considerations with respect to
race, ethnicity, gender, and education. Intersectionality is an important consideration in public
administration generally, and in representative bureaucracy theory specifically (see Fay, Fryar,
Meier, and Wilkins 2021). Future work should seek to add nuance to our theory of racialized
administrative errors by taking intersectionality into account.
Nevertheless, our focus on racialized administrative errors in the context of
unemployment insurance contributes to the growing body of work on administrative burdens in
social programs and offers a number of advantages. First, by examining UI claim audits of filed
claims, we can examine whether biases creep into the routine decisions that street-level
bureaucrats make. Quality control audits offer a window into everyday decisions, from
mundane mistakes to significant procedural violations, by detecting and documenting the
responsibility for and source of errors. These rich data can reveal how discretion is employed by
state workforce agency staff, whether eligibility or benefit rules are regularly followed or
ignored, and, crucially, the protected characteristics of clients who regularly benefit or lose from
those decisions.
Second, our theoretical approach, which bridges extant research on racial disparities in public
service provision with research on the role of technology in shaping administrative disparities,
is also an advance. Disparities in service provision across clientele groups by race or ethnicity are
well documented and well studied in social welfare policy and implementation. Less is known
about disparities in service provision in the administration of unemployment insurance. We go
one step further than identifying disparities, with a preliminary search for equity-promoting
practices in the organizations processing unemployment insurance claims.
Acknowledging these issues and gaining a baseline understanding of racialized
administrative errors allows us to explore future research questions around improving UI
accuracy and equity. For example, in response to the COVID-19 pandemic, new federal
unemployment insurance relief programs were introduced: the Families First Coronavirus
Response Act (FFCRA) and the Coronavirus Aid, Relief, and Economic Security (CARES)
Act. The speed with which these programs were designed and implemented came with a
number of administrative challenges that can be explored further to assess how racialized
administrative errors may have changed as states struggled to implement these new programs
during historically high volumes of unemployment claims. Additionally, the technology used to
submit and process claims has changed dramatically over the last several years, and these
findings provide a framework for evaluating those new technologies and their effects on
administrative errors.
Data Availability
Replication data are available in the Harvard Dataverse at
https://doi.org/10.7910/DVN/QGLNFE. Deidentified administrative data used in this study were
provided to the research team directly by the Department of Labor Employment & Training
Administration Office of Unemployment Insurance.
The authors thank Thomas Stuart for research assistance.
References
Alexander, Jennifer, and Camilla Stivers. 2020. “Racial Bias: A Buried Cornerstone of the
Administrative State.” Administration and Society.
Andersen, S. C., and T. S. Guul. 2019. “Reducing Minority Discrimination at the Front Line—
Combined Survey and Field Experimental Evidence.” Journal of Public Administration
Research and Theory. https://doi.org/10.1093/jopart/muy083.
Baekgaard, Martin, and Tara Tankink. 2021. “Administrative Burden: Untangling a Bowl of
Conceptual Spaghetti.” Perspectives on Public Management and Governance, no.
January: 0–6.
Bell, Elizabeth, and Kylie Smith. 2021. “Working Within a System of Administrative Burden:
How Street-Level Bureaucrats’ Role Perceptions Shape Access to the Promise of Higher
Education.” Administration and Society.
Bell, Elizabeth, Ani Ter-Mkrtchyan, Wesley Wehde, and Kylie Smith. 2021. “Just or Unjust?
How Ideological Beliefs Shape Street-Level Bureaucrats’ Perceptions of Administrative
Burden.” Public Administration Review, 81 (4): 610–24.
Bovens, Mark, and Stavros Zouridis. 2002. “From street-level to system-level bureaucracies:
how information and communication technology is transforming administrative discretion
and constitutional control.” Public Administration Review 62 (2): 174–84.
Brodkin, Evelyn Z. 1997. “Inside the Welfare Contract: Discretion and Accountability in State
Welfare Administration.” Social Service Review 71(1): 1–33.
Brodkin, Evelyn Z., and Malay Majmundar. 2010. “Administrative Exclusion: Organizations and
the Hidden Costs of Welfare Claiming.” Journal of Public Administration Research and
Theory 20(4): 827–48.
Broussard, M. 2018. Artificial unintelligence: How computers misunderstand the world.
Cambridge, MA: MIT Press.
Buffat, Aurélien. 2015. “Street-level bureaucracy and e-government.” Public Management
Review 17 (1): 149–61.
Bullock, Justin B. 2014. “Theory of Bureaucratic Error.” In Academy of Management
Proceedings, 2014:17469. Briarcliff Manor, NY: Academy of Management.
Bullock, Justin B., Matthew M. Young, and Yi-Fan Wang. 2020. “Artificial Intelligence,
Bureaucratic Form, and Discretion in Public Service.” Information Polity 25 (4): 491–506.
Burden, Barry C., David T. Canon, Kenneth R. Mayer, and Donald P. Moynihan. 2012. “The
Effect of Administrative Burden on Bureaucratic Perception of Policies: Evidence from
Election Administration.” Public Administration Review 72(5): 741–51.
Busch, Peter André, and Helle Zinner Henriksen. 2018. “Digital Discretion: A Systematic
Literature Review of ICT and Street-Level Discretion.” Information Polity 23 (1): 3–28.
Campbell, Andrea L. 2003. How Politics Make Citizens: Senior Political Activism and the
American Welfare State. Princeton, NJ: Princeton University Press.
Chang, Yu-Ling. 2020. “Unequal Social Protection under the Federalist System: Three
Unemployment Insurance Approaches in the United States, 2007–2015.” Journal of
Social Policy 49 (1): 189–211. https://doi.org/10.1017/S0047279419000217.
Chudnovsky, Mariana, and Rik Peeters. 2020. “The Unequal Distribution of Administrative
Burden: A Framework and an Illustrative Case Study for Understanding Variation in
People’s Experience of Burdens.” Social Policy and Administration (July): 1–16.
Compton, Mallory E. 2021. “Serving the Unemployed: Do More Generous Social Insurance
Programs Provide Better Quality Service?” Journal of Policy Studies 36 (3): 1–11.
Compton, Mallory E., and Paul ’t Hart, eds. 2019. Great Policy Successes. Oxford: Oxford
University Press.
Davies, Gareth, and Martha Derthick. 1997. “Race and Social Welfare Policy: The Social
Security Act of 1935.” Political Science Quarterly 112 (2): 217.
Davis, Kenneth Culp. 1969. Discretionary Justice: A Preliminary Inquiry. Baton Rouge:
Louisiana State University Press.
de Boer, Noortje, and Nadine Raaphorst. 2021. “Automation and discretion: Explaining the
effect of automation on how street-level bureaucrats enforce.” Public Management
Review, 1–21.
Department of Labor. 2009. “Benefit Accuracy Measurement State Operations Handbook No.
395.” 5th ed. Washington, D.C.
DeSante, Christopher D. 2013. “Working Twice as Hard to Get Half as Far: Race, Work Ethic,
and America’s Deserving Poor.” American Journal of Political Science 57(2): 342–56.
Einstein, Katherine Levine, and David M. Glick. 2017. “Does Race Affect Access to
Government Services? An Experiment Exploring Street-Level Bureaucrats and Access to
Public Housing.” American Journal of Political Science 61(1): 100–116.
Ernst, Rose, Linda Nguyen, and Kamilah C. Taylor. 2013. “Citizen Control: Race at the Welfare
Office.” Social Science Quarterly 94(5): 1283–1307.
Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and
Punish the Poor. New York: Picador, St Martin’s Press.
Fay, Daniel L., Alisa Hicklin Fryar, Kenneth J. Meier, and Vicky Wilkins. 2021.
“Intersectionality and Equity: Dynamic Bureaucratic Representation in Higher
Education.” Public Administration 99 (2): 335–52.
Fording, Richard C., Joe Soss, and Sanford F. Schram. 2011. “Race and the Local Politics of
Punishment in the New World of Welfare.” American Journal of Sociology 116(5):
1610–57.
Friedman, Gillian. 2020. “States Overpaid Unemployment Benefits and Want Money Back.” The
New York Times, December 11, 2020.
https://www.nytimes.com/2020/12/11/business/economy/unemployment-benefit-
payback.html.
Hansen, H. T., K. Lundberg, and L. J. Syltevik. 2018. “Digitalization, street-level bureaucracy
and welfare users' experiences.” Social Policy & Administration 52 (1): 67–90.
Heinrich, Carolyn J. 2018. “Presidential Address: ‘A Thousand Petty Fortresses’: Administrative
Burden in U.S. Immigration Policies and Its Consequences.” Journal of Policy Analysis
and Management 37 (2): 211–39.
Hero, Rodney E. 1998. The Faces of Inequality: Social Diversity in America. New York:
Oxford University Press.
Hill, Michael, and Peter L. Hupe. 2002. Implementing Public Policy. London: SAGE
Publications.
Jilke, Sebastian, and Lars Tummers. 2018. “Which Clients Are Deserving of Help? A
Theoretical Model and Experimental Test.” Journal of Public Administration Research
and Theory 28(2): 226–38.
Johnson, Austin P., Kenneth J. Meier, and Kristen M. Carroll. 2018. “Forty Acres and a Mule:
Housing Programs and Policy Feedback for African-Americans.” Politics, Groups, and
Identities 6(4): 612–30.
Keiser, Lael R., Peter R. Mueser, and Seung Whan Choi. 2004. “Race, Bureaucratic Discretion,
and the Implementation of Welfare Reform.” American Journal of Political Science
48(2): 314–27.
Keiser, Lael R., and Susan M. Miller. 2020. “Does Administrative Burden Influence Public
Support for Government Programs? Evidence from a Survey Experiment.” Public
Administration Review 80 (1): 137–50.
Klerman, Jacob, and Caroline Danielson. 2009. “Determinants of the Food Stamp Program
Caseload.” Contractor and Cooperator Report no. 50, U.S. Department of Agriculture,
Economic Research Service and Food and Nutrition Assistance Research Program.
http://ddr.nal.usda.gov/bitstream/10113/32849/1/CAT31024406.pdf [accessed July 16, 2013].
Ledford, Heidi. 2019. "Millions of black people affected by racial bias in health-care
algorithms." Nature 574(7780): 608-610.
Lieberman, Robert C. 1998. Shifting the Color Line: Race and the American Welfare State.
Cambridge: Harvard University Press.
Lipsky, Michael. 1980. Street-Level Bureaucracy: Dilemmas of the Individual in Public
Services. New York: Russell Sage Foundation.
Madsen, Jonas K., Kim S. Mikkelsen, and Donald P. Moynihan. 2020. “Burdens, Sludge,
Ordeals, Red Tape, Oh My!: A User’s Guide to the Study of Frictions.” Public
Administration, no. December 2020: 1–19.
Maltby, Elizabeth. 2017. “The Political Origins of Racial Inequality.” Political Research
Quarterly 70(3): 535–48.
Masood, Ayesha, and Muhammad Nisar. 2021. “Administrative Capital and Citizens’ Responses
to Administrative Burden.” Journal of Public Administration Research and Theory 31
(1): 56–72.
Maynard-Moody, Steven, and Michael Musheno. 2000. “State Agent or Citizen Agent: Two
Narratives of Discretion.” Journal of Public Administration Research and Theory 10 (2):
329–58.
Maynard-Moody, Steven, and Michael Musheno. 2003. Cops, Teachers, Counselors: Stories
from the Front Lines of Public Service. Ann Arbor: The University of Michigan Press.
McDermott, Marie Tae, and Jill Cowan. 2020. “Where Are Your Jobless Benefits?” The New
York Times, 2020. https://www.nytimes.com/2020/05/07/us/unemployment-benefits-edd-
california.html.
Michener, Jamila. 2018. Fragmented Democracy: Medicaid, Federalism, and Unequal Politics.
Cambridge: Cambridge University Press.
Moynihan, Donald, Pamela Herd, and Hope Harvey. 2015. “Administrative Burden: Learning,
Psychological, and Compliance Costs in Citizen-State Interactions.” Journal of Public
Administration Research and Theory 25(1): 43–69.
Nicholson-Crotty, S., J. A. Grissom, J. Nicholson-Crotty, and C. Redding. 2016. “Disentangling
the Causal Mechanisms of Representative Bureaucracy: Evidence From Assignment of
Students to Gifted Programs.” Journal of Public Administration Research and Theory
26 (4): 745–57.
Obermeyer, Z., B. Powers, C. Vogeli, and S. Mullainathan. 2019. “Dissecting racial bias in an
algorithm used to manage the health of populations.” Science 366: 447–53.
Office of Unemployment Insurance. 2022. Unemployment Insurance Payment Integrity.
Washington, D.C.: The U.S. Department of Labor. https://oui.doleta.gov/unemploy
Peeters, Rik. 2020. “The Political Economy of Administrative Burdens: A Theoretical
Framework for Analyzing the Organizational Origins of Administrative Burdens.”
Administration and Society 52(4): 566–92.
Quadagno, Jill. 1996. The Color of Welfare: How Racism Undermined the War on Poverty.
2nd ed. Oxford University Press.
Ray, Victor, Pamela Herd, and Donald Moynihan. (2020, December 9). “Racialized Burdens:
Applying Racialized Organization Theory to the Administrative State.” Unpublished
manuscript. https://doi.org/10.31235/osf.io/q3xb8.
Ray, Victor. 2019. “A Theory of Racialized Organizations.” American Sociological Review
84(1): 26–53.
Reason, J. 2000. “Human Error: Models and Management.” British Medical Journal 320 (7237):
768–70. https://doi.org/10.1136/bmj.320.7237.768.
Rosenberg, Eli. 2020. “Workers Are Pushed to the Brink as They Continue to Wait for Delayed
Unemployment Payments.” The Washington Post, July 13, 2020.
https://www.washingtonpost.com/business/2020/07/13/unemployment-payment-delays/.
Rothstein, Richard. 2017. The Color of Law: A Forgotten History of How Our Government
Segregated America. New York, NY: Liveright Publishing Corporation.
Rubin, Murray. 1983. Federal-State Relations in Unemployment Insurance. Kalamazoo: W.E.
Upjohn Institute for Employment Research.
Benjamin, Ruha. 2019. "Assessing risk, automating racism." Science 366(6464): 421-422.
Ryu, Sangyub, Jeffrey B. Wenger, and Vicky M. Wilkins. 2012. “When Claimant Characteristics
and Prior Performance Predict Bureaucratic Error.” The American Review of Public
Administration 42(6): 695–714.
Van Ryzin, G. G., N. M. Riccucci, and H. Li. 2017. “Representative bureaucracy and its
symbolic effect on citizens: A conceptual replication.” Public Management Review 19 (9):
1365–79. https://doi.org/10.1080/14719037.2016.1195009.
Schram, Sanford F., Joe Soss, Richard C. Fording, and Linda Houser. 2009. “Deciding to
Discipline: Race, Choice, and Punishment at the Frontlines of Welfare Reform.”
American Sociological Review 74(3): 398–422.
Scott, Patrick G. 1997. “Assessing Determinants of Bureaucratic Discretion: An Experiment in
Street-Level Decision Making.” Journal of Public Administration Research and Theory
7(1): 35–57.
Smith, R., R. McHugh, and A. Stettner. 2003. “Between a Rock and a Hard Place: Confronting
the Failure of State UI Systems to Serve Women and Working Families.” National
Employment Law Project.
Soss, Joe, and Sanford F. Schram. 2007. “A Public Transformed? Welfare Reform as Policy
Feedback.” The American Political Science Review 101(1): 111–27.
Soss, Joe, Richard C. Fording, and Sanford F. Schram. 2008. “The Color of Devolution: Race,
Federalism, and the Politics of Social Control.” American Journal of Political Science
52(3): 536–53.
Soss, Joe, Sanford F. Schram, Thomas P. Vartanian, and Erin O’Brien. 2001. “Setting the Terms
of Relief: Explaining State Policy Choices in the Devolution Revolution.” American
Journal of Political Science 45(2): 378–95.
Soss, Joe. 1999. “Welfare Application Encounters: Subordination, Satisfaction, and the Puzzle of
Client Evaluations.” Administration and Society 31(1): 50–94.
Watkins-Hayes, Celeste. 2009. The New Welfare Bureaucrats: Entanglements of Race, Class,
and Policy Reform. Chicago: University of Chicago Press.
Watkins-Hayes, Celeste. 2011. “Race, Respect, and Red Tape: Inside the Black Box of Racially
Representative Bureaucracies.” Journal of Public Administration Research and Theory
21(SUPPL. 2): 233–51.
Wenger, Jeffrey B., and Vicky M. Wilkins. 2009. “At the Discretion of Rogue Agents: How
Automation Improves Women’s Outcomes in Unemployment Insurance.” Journal of
Public Administration Research and Theory 19(2): 313–33.
Widlak, Arjan, and Rik Peeters. 2020. “Administrative Errors and the Burden of Correction and
Consequence: How Information Technology Exacerbates the Consequences of
Bureaucratic Mistakes for Citizens.” International Journal of Electronic Governance
12(1): 40–56.
Wilson, James Q. 1989. Bureaucracy: What Government Agencies Do and Why They Do It. 2nd
ed. New York: Basic Books.
Witte, Edwin E. 1945. “Development of Unemployment Compensation.” The Yale Law Journal
55 (1): 21–52.
Wolfe, Barbara, and Scott Scrivner. 2005. “The Devil May Be in the Details: How the
Characteristics of SCHIP Programs Affect Take-Up.” Journal of Policy Analysis and
Management 24(3): 499–522.
Xu, X., C.D. Wickens, and E.M. Rantanen. 2007. “Effects of Conflict Alerting System
Reliability and Task Difficulty on Pilots’ Conflict Detection with Cockpit Display of
Traffic Information.” Ergonomics 50 (1): 112–30.
Young, Matthew M., Johannes Himmelreich, Justin B Bullock, Kyoung-Cheol Kim. 2021.
“Artificial Intelligence and Administrative Evil.” Perspectives on Public Management
and Governance.
Young, Matthew M., Mallory E. Compton, Justin B. Bullock, and Robert Greer. 2020.
“Administrative Errors and Improper Payments in Unemployment Insurance.” Presented
to the 43rd Annual Fall Research Conference of the Association for Public Policy
Analysis & Management, Washington, D.C.
Zwicky, R., and D. Kübler. 2019. “Microfoundations of Active Representation in Public
Bureaucracies: Evidence From a Survey of Personnel Recruitment in the Swiss Federal
Civil Service.” Journal of Public Administration Research and Theory 29 (1): 50–66.
Table 1. Descriptive Statistics for All Independent Variables

Variable                             Freq.    Percent
Race/Ethnicity
  White non-Hispanic               329,178      63.1
  Black non-Hispanic                95,423      18.3
  Asian/Pacific Islander            14,234       2.7
  American Indian/Alaskan Native    11,452       2.2
  Hispanic                          63,479      12.2
  Other                              7,898       1.5
Education
  No High School/GED                67,620      13.0
  High School/GED                  215,789      41.4
  Some College                     123,293      23.6
  AA                                35,482       6.8
  BA/BS                             59,522      11.4
  Graduate                          19,958       3.8
Key Week Claim Filing Method
  Telephone                        267,380      51.3
  In-person                          7,966       1.5
  Internet                         197,575      37.9
  Other                             48,743       9.3
Citizenship
  US Citizen                       498,644      95.6
  Non-citizen eligible              22,507       4.3
  Non-citizen ineligible               513       0.1
Audit Interview Method
  In-person                         72,503      13.9
  Telephone                        263,182      50.5
  Mail or other                    185,979      35.7
UI Program
  UI                               510,386      97.8
  Other (e.g., UI-UCX)              11,278       2.2
Gender
  Female                           237,961      45.6
  Male                             283,703      54.4

Note: claimant age included as a continuous variable with mean = 41.8 and s.d. = 13.1 (n = 521,664)
Table 2. Odds ratios from logistic regressions for agency errors in unemployment
insurance claims audited between 2002 and 2018.

Model 1
                                                     OR               SE
Race/Ethnicity, ref. White
  Black                                              1.136 (0.000)    .028
  Hispanic                                           1.052            .038
  Asian or Pacific Islander                           .876            .127
  American Indian or Alaskan Native                   .940            .100
  Other                                              1.012            .113
Key Week Filing Method, ref. Telephone
  In-person filing                                   1.449 (0.004)    .187
  Internet filing                                     .962            .031
  Other filing method                                 .914            .107
Audit Interview Method, ref. Mail, Email, or Fax
  In-person interview                                 .651 (0.000)    .079
  Telephone interview                                1.226 (0.004)    .088
Gender, ref. Male
  Female                                             1.020            .015
Age                                                   .987 (0.000)    .003
Age squared                                          1.000 (0.003)    .000
Education, ref. High School or GED
  No HS/GED                                          1.063 (0.001)    .019
  Some College                                       1.079 (0.000)    .017
  AA                                                 1.012            .028
  BA/BS                                               .993            .022
  Graduate                                           1.048            .034
Citizenship, ref. US Citizen
  Alien Eligible                                     1.103 (0.005)    .039
  Alien Ineligible                                   2.054 (0.000)    .380
Program Type, ref. Standard Unemployment Insurance
  Nonstandard UI Program                             1.160            .089
Interactions Between Race/Ethnicity and Filing Method
  Black # In-person                                   .822 (0.046)    .081
  Black # Internet                                    .958            .040
  Black # Other                                       .939            .072
  Hispanic # In-person                               1.060            .178
  Hispanic # Internet                                1.011            .054
  Hispanic # Other                                   1.301 (0.029)    .157
  Asian or Pacific Islander # In-person               .841            .214
  Asian or Pacific Islander # Internet               1.214            .235
  Asian or Pacific Islander # Other                  1.343 (0.011)    .156
  American Indian or Alaskan Native # In-person      1.392            .536
  American Indian or Alaskan Native # Internet        .974            .100
  American Indian or Alaskan Native # Other          1.198            .191
  Other # In-person                                   .826            .362
  Other # Internet                                   1.035            .148
  Other # Other                                      1.220            .214

N = 521,664
Two-sided p-values < 0.05 reported in parentheses
Note: Model includes fixed effects for prior employment industry, state, year, and
quarter of filing. Reported standard errors are robust, clustered on State.
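The interaction terms in Table 2 are multiplicative on the odds scale, which can be hard to read directly. The sketch below (plain Python, no dependencies) is an illustrative aid, not the paper's estimation code: it applies the reported odds ratios for Black claimants and in-person filing to a hypothetical 10% baseline error probability, which is an assumption for illustration rather than a quantity estimated in the paper.

```python
def prob_from_odds_ratio(p_ref, odds_ratio):
    """Predicted probability implied by applying an odds ratio to a reference probability."""
    odds = p_ref / (1.0 - p_ref) * odds_ratio
    return odds / (1.0 + odds)

# Estimates from Model 1 (Table 2).
OR_BLACK = 1.136              # Black vs. white non-Hispanic, at the telephone-filing reference
OR_INPERSON = 1.449           # in-person vs. telephone filing, for the reference racial group
OR_BLACK_X_INPERSON = 0.822   # multiplicative interaction term

# In a logit model with multiplicative interactions, odds ratios combine by multiplication.
or_black_inperson = OR_BLACK * OR_INPERSON * OR_BLACK_X_INPERSON

# Hypothetical baseline: a white non-Hispanic claimant filing by telephone
# with a 10% probability of an agency-caused error.
p_white_phone = 0.10
p_black_phone = prob_from_odds_ratio(p_white_phone, OR_BLACK)
p_black_inperson = prob_from_odds_ratio(p_white_phone, or_black_inperson)

print(round(p_black_phone, 3))     # error probability, Black claimant, telephone filing
print(round(p_black_inperson, 3))  # error probability, Black claimant, in-person filing
```

Because odds ratios are nonlinear in probabilities, the same set of odds ratios implies different probability gaps at different baselines, which is why the marginal-effects plots in Figures 1 and 2 are reported on the probability scale.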
Table 3. Odds ratios from multinomial logistic regression for agency overpayments, underpayments, and wrongful denials by
type in eligible unemployment insurance claims audited between 2002 and 2018.

Model 2
Columns report OR (p), SE for each outcome:
(1) Overpayment; (2) Underpayment; (3) Wrongful Denial for Monetary or Job Separation;
(4) Wrongful Denial for Non-Monetary and Non-Separation.

                                     (1)                   (2)                   (3)                    (4)
Race/Ethnicity, ref. White
  Black                              1.203 (0.000) .049    1.222 (0.017) .103    1.370 (0.000) .059     1.191 (0.005) .074
  Hispanic                            .996 .068            1.242 .171            1.112 (0.044) .059     1.087 .081
  Asian or Pacific Islander          1.025 .183             .710 .252            1.064 .142              .788 .106
  AIAN                                .876 .152             .970 .136            1.071 .169              .901 .083
  Other                               .889 .150             .617 .203            1.199 .136             1.246 .257
Key Week Filing Method, ref. Telephone
  In-person filing                   1.215 .294            1.540 (0.043) .328    2.225 (0.001) .528     4.979 (0.000) 1.515
  Internet filing                     .896 (0.019) .042     .863 (0.006) .047     .899 (0.047) .048      .923 .071
  Other filing method                 .693 (0.045) .127     .963 .116            1.564 (0.001) .205     1.177 .227
Audit Interview Method, ref. Mail, Email, or Fax
  In-person interview                 .916 .160            1.274 (0.004) .107     .129 (0.000) .052      .088 (0.000) .029
  Telephone interview                 .987 .107            1.234 (0.003) .088    2.075 (0.003) .261     2.195 (0.000) .286
Gender, ref. Male
  Female                              .987 .028            1.037 .053            1.019 .026             1.175 (0.000) .027
Age                                   .971 (0.000) .005     .961 (0.002) .012     .983 (0.002) .006      .966 (0.000) .006
Age squared                          1.000 (0.000) .000    1.000 (0.007) .000    1.000 .000             1.000 (0.000) .000
Education, ref. High School or GED
  No HS/GED                          1.130 (0.001) .040    1.239 (0.000) .076    1.082 (0.003) .029     1.013 .038
  Some College                       1.053 (0.046) .027    1.057 .051            1.025 .026             1.182 (0.000) .037
  AA                                  .928 .048            1.035 .084             .950 .036             1.168 (0.001) .056
  BA/BS                               .927 .040             .970 .068             .766 (0.000) .031     1.242 (0.000) .048
  Graduate                           1.065 .066             .727 (0.024) .103     .802 (0.004) .062     1.308 (0.000) .076
Citizenship, ref. US Citizen
  Alien Eligible                     1.110 .066            1.060 .102             .939 .050             1.135 .085
  Alien Ineligible                   8.984 (0.000) 2.584    .000 (0.000) .000    1.551 .562             4.892 (0.000) 2.019
Program Type, ref. Standard Unemployment Insurance
  Nonstandard UI Program             1.159 .092            1.613 (0.001) .228    1.008 .149              .873 .094
Interactions Between Race/Ethnicity and Filing Method
  Black # In-person                  1.153 .199             .522 .330            1.037 .219              .700 .137
  Black # Internet                    .965 .067             .912 .115             .975 .070              .855 (0.047) .067
  Black # Other                       .869 .122             .923 .213            1.053 .190              .822 .153
  Hispanic # In-person               1.199 .264             .375 (0.000) .086    3.074 (0.024) 1.534    1.067 .653
  Hispanic # Internet                1.187 .119            1.015 .154             .969 .094              .914 .097
  Hispanic # Other                   1.582 (0.006) .264     .780 (0.011) .076    1.102 .116             1.097 .177
  API # In-person                     .926 .342            4.013 (0.034) ###      .518 .301              .697 .305
  API # Internet                     1.080 .205            1.275 .456            1.033 .190             1.420 .336
  API # Other                        1.327 .322            1.762 .691             .926 .147             1.484 (0.036) .280
  AIAN # In-person                   2.749 1.667            .000 (0.000) .000    1.621 .497              .351 .392
  AIAN # Internet                     .838 .129            1.134 .404            1.123 .240             1.034 .232
  AIAN # Other                        .897 .486             .891 .571             .910 .290             1.170 .496
  Other # In-person                  1.222 1.012           3.822 ###              .553 .421              .397 .266
  Other # Internet                   1.279 .314            1.437 .533             .900 .140              .853 .210
  Other # Other                      1.294 .391            2.488 (0.041) ###     1.050 .457             1.351 .353

N = 361,709
Two-sided p-values < 0.05 reported in parentheses
AIAN = American Indian or Alaskan Native; API = Asian or Pacific Islander
Note: Model includes fixed effects for prior employment industry, state, year, and quarter of filing.
Reported standard errors are robust, clustered on State.
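Table 3's multinomial logit reports odds ratios relative to a base outcome (a claim with no agency error). As a hedged illustration of the mechanics only, with made-up coefficient values rather than the fitted model, the sketch below shows how a multinomial logit maps one linear index per error category into predicted category probabilities:

```python
import math

def multinomial_probs(linear_indices):
    """Predicted category probabilities in a multinomial logit.

    The base outcome (an error-free claim) has its linear index fixed at 0,
    as in a multinomial logit identified relative to a base category.
    """
    exps = [1.0] + [math.exp(z) for z in linear_indices]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical linear indices for one claimant, one per error category:
# overpayment, underpayment, wrongful denial (monetary/separation),
# and wrongful denial (non-monetary/non-separation).
z = [-2.5, -3.5, -3.0, -3.2]
probs = multinomial_probs(z)
labels = ["no error", "overpayment", "underpayment",
          "wrongful denial (monetary/separation)",
          "wrongful denial (non-monetary/non-separation)"]
for label, p in zip(labels, probs):
    print(f"{label}: {p:.3f}")
```

An odds ratio in Table 3 (e.g., 1.370 for Black claimants on monetary/separation denials) scales the odds of that error category relative to an error-free claim, holding the other covariates fixed.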
Figure 1. Method of claim submission by race/ethnicity over time.
Figure 2. Average marginal effects of claim submission technology by claimant race and
ethnicity on the predicted probability of a claim having any agency-caused error.
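The average marginal effects in Figures 2 and 3 can be sketched as follows: predict each claimant's error probability under internet filing and under telephone filing, then average the difference. The log-odds coefficients and toy sample below are hypothetical values chosen only to mirror the direction of the reported estimates, not quantities from the paper:

```python
import math

def sigmoid(z):
    """Inverse logit: converts a log-odds index into a probability."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical log-odds coefficients: intercept, Black indicator,
# internet-filing indicator, and their interaction.
B0, B_BLACK, B_NET, B_BLACK_NET = -2.2, 0.13, -0.04, -0.04

def p_error(black, internet):
    """Predicted probability of an agency-caused error for one claimant."""
    z = B0 + B_BLACK * black + B_NET * internet + B_BLACK_NET * black * internet
    return sigmoid(z)

# A toy sample of claimants (1 = Black, 0 = white non-Hispanic).
claimants = [0, 0, 0, 1, 1]

# Average marginal effect of internet filing: the mean change in predicted
# error probability when each claimant switches from telephone to internet.
ame_internet = sum(p_error(b, 1) - p_error(b, 0) for b in claimants) / len(claimants)
print(round(ame_internet, 4))
```

Computing the same average within each racial or ethnic group, rather than over the pooled sample, yields the group-specific marginal effects plotted in the figures.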
Figure 3. Average marginal effects of claim submission technology by claimant race and
ethnicity on the predicted probability of a claim having overpayment, underpayment, non-
monetary- and non-separation-based denial, and all other claim denial errors.