Content uploaded by Stuart William Shulman
Author content
All content in this area was uploaded by Stuart William Shulman on Mar 05, 2022
Content may be subject to copyright.
‘To Submit a Form or Not to Submit a Form, That is the (Real) Question’:
Deliberation and Mass Participation in U.S. Regulatory Rulemaking*
David Schlosberg
Department of Political Science, Northern Arizona University
Stephen Zavestoski
Department of Sociology, University of San Francisco
Stuart Shulman
School of Information Sciences, University of Pittsburgh
ABSTRACT
In this paper we report data collected through a survey of 1,553 recent participants in regulatory
rulemaking public comment processes. Our analysis focuses on the differences between those
who used newly available electronic tools and those who mailed or faxed letters on paper and
also between those who submitted original letters and those who submitted a version of a mass-
mailed form letter. We first discuss current research and theory developing around the issue of
electronic rulemaking and online policy deliberation. Next we provide background on the
particular rulemakings from which our sample of survey respondents was drawn. After
describing the survey methodology, we focus on three types of findings: 1) the absence of a
significant difference in deliberative practices between electronic and paper commenters, 2) the
presence of unexpectedly high levels of deliberative engagement across all survey respondents,
and 3) the significant differences between respondents who submitted original comments and
those who submitted form letters. Finally, we conclude with discussion of the implications of our
findings, and specific suggestions for both agencies and interest groups.
* The authors wish to acknowledge Cary Coglianese and Vincent Price for their valuable
comments on earlier drafts of this paper.
To Submit a Form or Not to Submit a Form, That is the (Real) Question:
Deliberation and Mass Participation in U.S. Regulatory Rulemaking1
Introduction
The United States federal government is, more uniformly than ever, facilitating the
electronic submission of citizen comments during federal regulatory rulemaking comment
periods.2 Concurrently, citizens of many stripes (but particularly environmentalists) are taking
advantage of newly developed web-based tools for generating large numbers of public
comments. The confluence of these two trends–the pull of an increasingly accessible and
searchable federal system for collecting public comments and the push of advocacy coalitions
and their electronic tools–has created a hybrid eRulemaking environment. Interest group-initiated mass-mailed postcards, familiar from past activism, have been modestly enhanced as
customizable form letters, often by expensive for-profit intermediaries.3 This Internet-enabled
participation will likely be the dominant form of mass political communication between average
citizens and decision-makers in controversial rulemakings.
As a result of these and other trends, a growing research community is looking closely at
electronic rulemaking and the possibilities for online political deliberation in general.4 A range
of scholarly activities spanning elaborate conceptual specifications for deliberation5 to research
centers and interdisciplinary conferences6 and online deliberative polls7 now dot the intellectual
landscape. This new scholarship begins to more systematically articulate and test theories about
the role of deliberation8, information9, communications technology10, architecture11, design12, as
well as a host of other factors linked to theories of democratic governance.13
The fledgling interdisciplinary research community is generally long on theory, hopes,
and predictions while too often short on empirical data. For many indicators of online
deliberative political activity we have no baseline data or agreed upon metrics. In this paper we
offer an attempt at generating precisely such baseline data, in this instance collected through a
survey of 1,553 participants in regulatory public comment processes. Our analysis focuses on
the differences between those who used newly available electronic tools and those who mailed or
faxed letters on paper. We also examine differences between those who submitted original
letters and those who submitted a version of a mass-mailed form letter.
Our initial research question asked whether new electronic forms of participation
introduce a degree of public deliberation absent in the traditional mailing or faxing of letters that
dominated pre-Internet era public comment periods.14 Contrary to much research and
development in this field, we did not seek to develop new forms of online interaction that
optimize deliberative behavior; rather, we set out to evaluate the deliberative nature of existing
forms of electronic citizen participation.
Overall, our data failed to reveal evidence of deliberative differences between electronic
and paper commenters, but we did find some support that the comment process does induce
some deliberative behavior generally. We also discovered that some fundamental attitudinal
differences exist between citizens who submit original comments and those who submit form
letters. The differences exist not just in terms of their self-described deliberative practices, but
also in terms of their overall trust in government and feelings of efficacy as participants in the
rulemaking process. Stated bluntly, form letter writers, whether using paper or the Internet, are
simplistic, cynical, and less inclined to deliberative behavior, whereas the writers of original
comments report personal practices that embody many of the characteristics of deliberative
democracy.
In what follows, we first discuss current research and theory developing around the issue
of online policy deliberation. Next we provide background on the particular rulemakings from
which our sample of survey respondents was drawn. After describing the survey methodology,
we focus on three types of findings: 1) the absence of a significant difference in self-reported
practices between electronic and paper commenters, 2) the presence of unexpectedly high levels
of deliberative engagement across all survey respondents, and 3) the significant differences
between respondents who submitted original comments and those who submitted form letters.
Finally, we conclude with discussion of the implications of our findings and suggestions for
further research.
Online Deliberation and the Focus on Rulemaking
Citizen access to rulemaking information is quite different from what it was when the
Administrative Procedure Act (APA) was initially adopted in 1946. The framers of the APA
could not have imagined the ways that new media and tools built on information and communications technologies (ICTs) have created a complex and teeming digital landscape. It is
a democratic and deliberative environment unlike anything ever encountered by modern
representative forms of government. Though many observers continue to note the potential to
use these technologies to fulfill the transparency and public participation goals of the APA, these
new tools also pose many challenges. The once reasonably straightforward processes of
democratic participation found in the classic works of political science15 are now largely
antiquated in the age of web logs (blogs), listservs, mass email campaigns, and a proliferating
array of web services. Deliberation today, aside from still crucial questions of digital
inequality,16 is open to anyone who cares to participate. A pressing question in this context is:
exactly how do they deliberate?
With the 2004 publication of Democracy Online: The Prospects for Political Renewal
Through the Internet, we can glimpse the spirit animating the short history of online deliberation
research. Editor Peter Shane signals the tempered hopes of a growing body of cautious
“cyberrealist” reaction to earlier scholarship, reminding readers “we cannot really know the
promise or limitations of ICTs until people can actually experience them”.17 One of the problems
with this research is that there are so many avenues for such an experience–websites, Usenet groups,
bulletin boards, chats, blogs, podcasts–making it difficult to systematically track and measure the
impact of online deliberation. As Froomkin notes, “the Internet can be seen as a giant electronic
talkfest, a medium that is discourse-mad”.18 Our focus, however, is on just one particular
element in that talkfest: public participation in regulatory rulemaking.
Why focus on rulemaking in an examination of electronic deliberation? First, the
development of new rulemaking technology has embodied a democratic direction. Many
agencies now use open electronic dockets, which allow citizens to see and comment on the rules
proposed by agencies, supporting documentation, and the comments of other citizens. In an
early benchmark case of mass deliberation online, personnel managing the National Organic
Program rulemaking at the United States Department of Agriculture (USDA) allowed citizens to
read comments as they were posted, whether they came via fax, paper, or online. The
Environmental Protection Agency (EPA) and the Department of Transportation (DOT) were
path-breaking agencies that developed and deployed open-docket systems that were agency-
wide. Electronic rulemaking in the U.S. is therefore an ICT testbed in which large numbers of
actual citizens have begun to experience concretely the limits and opportunities afforded by the
online environment.19
Second, electronic rulemaking systems are highly structured, hence quite different from
other web-based discourse that is one-way, isolated, or homogenous. Sunstein argues that the
web enables people to pay attention to other, like-minded people, and ignore those who are
unlike them or disagree with their positions on issues.20 The web, for Sunstein, diminishes
exposure to heterogeneity and is far from the ideal of a real public forum. Yet the argument here
is that the structure of e-rulemaking, in particular the open docket system, may enable and indeed
encourage citizens to engage the positions of others, including those with whom they disagree.
The open docket architecture of e-rulemaking may mitigate some of the anti-deliberative dangers
lurking elsewhere on the web.
Other reasons to examine rulemaking are more specifically political. For example, on
environmental issues, the big political battles have moved out of the legislative arena and into the
realm of regulatory rulemaking. “Perhaps the most significant administrative law development
during the last two decades,” notes Jeffrey Lubbers, “has been the increased presidential
involvement in federal agency rulemaking”.21 While one of the reasons for this move has
certainly been to try to avoid controversy, recent administration decisions and proposals have
drawn considerable attention to the rulemaking process itself, in turn increasing the likelihood of
large numbers of public comments.22
Rulemaking also goes somewhere; simply put, the process frequently leads to actual
changes to agency-enforced rules. Here, a focus on rulemaking differs from other examinations
of web-based discourse. A common critique of online deliberative polling, cyberjuries, or web-
based policy discussions is that the deliberative work often produces few if any tangible or
pragmatic results. People spend time and energy working toward consensus, only to see it
ignored or rejected politically. This is a problem of implementation deficit and it can deplete
citizen energy devoted to discourse. Under the APA, rulemaking requires agencies to respond
to, and incorporate, substantive public comments. It may be the only form of online deliberation
that regularly ends in some form of actual implementation.
Finally, and interestingly, from the point of view of democratic discourse, one of the
intents of the APA was to increase the gathering of substantive information from the public
before agencies were to implement decisions. In its focus on substance, rather than aggregative
opinion, the rulemaking process is a ripe area of study for deliberative, rather than aggregative,
democrats.
Electronic Deliberation: Recent Research and Theory
Public participation and citizen deliberation continue to be hallmarks of democratic
theory. Over the past decade, there has been renewed and expanded interest in deliberation as
a crucial aspect of democratic practice; the role of discussion, reasoning, and engagement across
lines of difference has become a central focus of democratic theorists. Some deliberative
democrats make the argument that deliberation already occurs in current liberal democratic
governments, legislatures, and/or courts.23 Most in the field, however, call for expanding public
discourse and deliberation on policy issues.24 As Dryzek notes, “the essence of democracy itself
is now widely taken to be deliberation.”25 Our central aim in this project is to evaluate the move
to web-based public participation in rulemaking against various criteria established by theorists
of deliberative democracy.
A central challenge in this research is the search for valid inferences about the impact of
deliberation on an individual’s decision process, or observable indicators of deliberative behavior
in cyberspace. Developing widely agreed upon metrics poses stiff conceptual and operational
challenges.26 Many in this field identify deliberative attributes (such as autonomy from power,
reflexivity, heterogeneity, inclusion, equality, etc.) as conducive to better decisions and
democratic legitimacy.27 These attributes are drawn from various theories of reflexive
democratic discourse. Yet major differences exist across such theories of deliberation and
discursive democracy, making a specific focus on deliberative attributes rather difficult.28
Research ranges from the specific aspects of speech to the larger effect of deliberative processes
on the public sphere.
In this study, we focus on a few key attributes of deliberation noted across the spectrum
of deliberative democratic theory. For example, one of the basic concepts in the field is that
deliberation is reflective rather than simply reactive. We assume reflection is based on collecting
diverse information and forming an understanding of various positions on an issue. A second
central concept in deliberative theory is that such engagement with other positions will bring
recognition of others in the process. Participants in democratic deliberation ideally listen to
others, treat them with respect, and make an effort to understand them. Third, deliberative
theory examines the relation between discourse and the transformation of individual preferences.
The ideal of deliberation is that of communication that actually changes the preferences of
participants as they engage the positions of others. The perceived authenticity of the process and
citizen efficacy are also central to deliberative democracy, as deliberation is offered as a more
authentic form of political participation. Our questionnaire, which we describe shortly, included
items intended to measure each of these dimensions of deliberation. While we do not claim to
cover the full range of concerns of every deliberative theorist, our measures capture the concepts
central to recent developments in democratic theory, and will give a reasonable indication of the
level of deliberative activity present in the rulemaking process.
Research Design, Case Selection, and Sampling Frame Construction
Our interest in the deliberative characteristics of regulatory rulemaking public comment
processes originated with two cases characterized by a large volume of public comments and
much political controversy: the USDA’s National Organic Standard (300,000+ comments in two
rounds); and the U.S. Forest Service’s Roadless Area Conservation rulemaking (which received
over 1.5 million comments spanning several rounds). Though these two cases were excluded
due to data quality issues, the choice to focus the study not just on large comment-receiving
regulatory actions, but ones focused on environmental issues, was based on several factors.29
First, environmental rules, especially over the last few years, have been highly
controversial, attracting large numbers of comments. More comments potentially could mean
more discourse and increasingly diverse participants. We also sought to ensure a chance for
deliberation, which meant restricting ourselves to rules in which the lead agency posted citizen
comments to its website so that visitors could see the comments of others. Both the EPA and
DOT implemented such “open docket” systems.
Second, much of the environmental politics literature claims high levels of democratic
involvement in environmental policy-making. “One of the most distinctive features of modern
U.S. environmental protection policy,” writes Andrews, is the “broad right of access to the
regulatory process, which extends not only to affected businesses but to citizens advocating
environmental protection”.30 Paehlke argues that the environmental arena has led all others in
the scope and extent of innovations in public participation, including public inquiries, right-to-
know legislation, alternative dispute resolution, advisory committees, and policy dialogues.31
Hence a leading edge of democratic public participation in the US is in the environmental field;
this seems to have continued into web-based participation processes.
Overview of the Regulatory Actions
Given our interest in controversial environmental regulations that elicited large numbers
of public comments, we settled on the following cases (with the colloquial designations shown in
bold):
1) EPA’s advance notice of proposed rulemaking (ANPRM) on the Clean Water Act
regulatory definition of the "Waters of the United States" (Waters)
2) EPA’s proposed National Emissions Standards for Hazardous Air Pollutants (Mercury)
3) DOT’s advance notice of proposed rulemaking (ANPRM) on the Corporate Average
Fuel Economy Standards (CAFE).
We summarize the events leading up to each of the regulatory actions that we selected as cases
below.
The Waters ANPR
On January 15, 2003, the EPA published an Advance Notice of Proposed Rulemaking
(ANPR) on the Clean Water Act regulatory definition of the “Waters of the United States” and in
response the EPA received approximately 133,000 public comments.32 An EPA press release
dated December 16, 2003 announced that the EPA would not issue a new rule clarifying the
extent of federal jurisdiction over so-called “isolated” wetlands.33 Critics of the Bush
administration’s environmental policies were “surprised and delighted”34 by the unexpected
decision to forgo a rulemaking in the wake of the confusion created by the 2001 Supreme
Court’s controversial ruling in Solid Waste Agency of Northern Cook County v. Army Corps of
Engineers (SWANCC). Whereas development lobbies saw the prospect of a Bush
administration rulemaking as an opportunity to free up considerable chunks of land that had been
protected for 30 years, environmentalists feared the potential rollback of federal regulatory
powers would undermine core principles articulated in the landmark 1972 Clean Water Act.
The Mercury Rulemaking
On January 20, 2004, the EPA published a proposed rule titled “Proposed National Emission Standards for Hazardous Air Pollutants” and in response the “Mercury” rulemaking received
approximately 500,000 public comments.35 An EPA press release dated December 15, 2003
quoted Administrator Leavitt claiming the proposed actions represented “the largest air pollution
reductions of any kind not specifically mandated by Congress.”36 Like many significant federal
regulatory actions, the EPA’s mercury rule resulted from a drawn out mix of congressional,
administrative, and legal proceedings stretching back to the Clean Air Act (CAA) Amendments
of 1990. A suit by the Natural Resources Defense Council filed in 1992 and another by the Sierra
Club in 1994 were settled later in 1994 and ultimately resulted in a “Mercury Study Report to
Congress” (RTC), which was released in December of 1997.37 On December 14, 2000, one day
after Al Gore conceded the 2000 election, EPA Administrator Carol Browner announced an EPA
“finding” that it was “appropriate and necessary” to regulate coal- and oil-fired electric utilities
under section 112 of the CAA. This proposed rule was the Bush administration’s response.
After the rulemaking process, the EPA issued a final rule on March 15, 2005, a court appointed
deadline, and was met with promises of lawsuits by a number of states and non-governmental
actors.
The CAFE ANPR
On December 29, 2003, the DOT published an Advance Notice of Proposed Rulemaking
on reforming the automobile fuel economy standards program and in response the “CAFE”
ANPR received 66,786 public comments.38 Congress enacted the Energy Policy and
Conservation Act (EPCA) in 1975 as a response to the 1973-1974 oil embargo, thereby creating
an automotive fuel economy regulatory program. The Corporate Average Fuel Economy
(CAFE) program set requirements for manufacturers’ fleets of 19 mpg for 1978 and 27.5 mpg for 1985. Congress froze these requirements for most of the 1990s, and in 2001
DOT Secretary Mineta successfully asked the Senate appropriations committee to lift the
restriction on improvements to the CAFE standard. In late 2002, the DOT issued new proposed rules, taking effect in late 2004, that increased the CAFE standard by 1.5 mpg over the model years
2005-2007. The ANPR published in 2003, however, sought public comments on revising the
CAFE program’s structure to address continuing criticism of the program related to
energy security, traffic safety, economic practicability, and the definition of the separate category
for light trucks.
Sampling Frame Construction
Having selected our cases, we worked with the Social Research Laboratory (SRL) at
Northern Arizona University to construct a sampling frame that would be used to complete the
telephone survey portion of the study. Submitted comments become part of the public record, so
we were able to rely on relatively open access to the comment sets on each rule. Our challenge
was to find commenters who left a phone number, or at least a full name and address so that we
could locate their phone numbers using a reverse phone-number look-up.
Because our research design did not require comparisons across the cases, we did not
attempt to sample proportionally from the three cases. Instead, we were interested in making two
types of comparisons. First, we wanted to compare attitudes of people who submitted
electronically as opposed to on paper. Second, we were interested in comparisons between those
who submitted original letters as opposed to form letters. The goal was to complete 375 surveys
for each of the following four types of commenters: 1) electronic submission of form letters
(E/F); 2) electronic submission of originals (E/O); 3) paper submission of form letters (P/F); and
4) paper submission of originals (P/O). Table 1 lists the number of completed surveys for each
of the four types of commenters we were looking for. Table 2 describes the total number of
comments on each rule, the number of comments to which we had access, the limitations with
respect to the way in which the accessible comments had been selected by the agencies, and the
approach we took to sampling for each rule.
TABLE 1 ABOUT HERE
As Table 2 illustrates, we had to employ a number of different approaches to reach our
sample size goals. In each case, graduate research assistants trained as sample collectors located
the comments on the Federal agency web-based docket systems (EPA’s “EDOCKET” or DOT’s
“Docket Management System”).39 Comments were available from these websites as either
Adobe Acrobat (.pdf) or text (.txt) files. In the case of the mercury rule, EPA also provided us
with a large number of .txt files containing roughly 536,000 e-mailed comments.40
Determination of submission type was based on the content and/or appearance of the submitted
comment. Form letters contained identical content and were submitted by multiple participants who filled in their contact information. Determination of an original comment was based on
whether the letter contained a unique opinion, authored by the commenter.
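The classification rule described here can be sketched in code: bodies shared verbatim by multiple submitters (apart from filled-in contact details) count as form letters, while unique text counts as original. This is an illustrative reconstruction, not the authors' actual procedure; the `normalize` heuristic and the comment texts are hypothetical.

```python
# Sketch: flag form letters by grouping comments whose bodies are
# identical once salutations and whitespace are stripped away.
import re
from collections import defaultdict

def normalize(body: str) -> str:
    """Collapse whitespace and drop a leading salutation line, so
    copies of the same form letter map to the same key."""
    lines = [ln.strip() for ln in body.strip().splitlines() if ln.strip()]
    if lines and lines[0].lower().startswith(("dear", "to whom")):
        lines = lines[1:]  # salutations often carry signer-specific edits
    return re.sub(r"\s+", " ", " ".join(lines)).lower()

def classify(comments: dict[str, str]) -> dict[str, str]:
    """Label a comment 'form' if two or more submissions share its
    normalized body, otherwise 'original'."""
    groups = defaultdict(list)
    for cid, body in comments.items():
        groups[normalize(body)].append(cid)
    labels = {}
    for ids in groups.values():
        label = "form" if len(ids) > 1 else "original"
        for cid in ids:
            labels[cid] = label
    return labels
```

A real coding effort would also have to handle customizable form letters, where senders edit a sentence or two, so exact-match grouping is only a first pass.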
TABLE 2 ABOUT HERE
As sampling progressed, it became apparent that we lacked access to a sufficient number
of form comments on the EPA rules to ensure a balance of comment types across all three rules.
This was due to the EPA’s practice of putting one example of each form letter, rather than every
single submission, into the EDOCKET system. As noted above, comparing across rules was not
integral to the research design, so we relied on access to a greater number of form submissions in
the CAFE set of comments to complete the sample.
Since potential respondents were to be contacted by telephone, we obtained telephone
numbers either from the actual comment or by looking them up using a web-based phone
number database.41 Because we were using a systematic random sampling method, when we
could not locate a phone number, we moved to the next “nth” comment. Due to the range of
difficulties faced—from agencies failing to provide access to the entire set of submitted
comments, to difficulty obtaining phone numbers for individuals—the results of the survey are not
generalizable to the whole population of citizen commenters on these regulatory actions.
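The systematic sampling rule described above (take every nth comment, falling through to the next comment when no phone number can be located) might look like the following sketch; the function name and the `lookup_phone` helper are hypothetical stand-ins, not the SRL's actual procedure.

```python
# Sketch of systematic sampling with fall-through on missing phone
# numbers: sample every nth comment, and when a selected comment has
# no locatable phone number, advance to the next comment.
def systematic_sample(comments, n, lookup_phone, start=0):
    """comments: ordered list of comment records; n: sampling interval;
    lookup_phone: callable returning a phone number or None."""
    sampled = []
    i = start
    while i < len(comments):
        j = i
        # walk forward until a comment with a reachable phone number
        while j < len(comments) and lookup_phone(comments[j]) is None:
            j += 1
        if j >= len(comments):
            break
        sampled.append(comments[j])
        i = max(i + n, j + 1)  # resume the nth-comment cadence
    return sampled
```

The fall-through step is also one source of the non-generalizability noted above: commenters whose phone numbers cannot be located can never enter the sample.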
Administering the Survey
A telephone survey instrument, developed in collaboration with the SRL, was administered using a computer-assisted telephone interviewing (CATI) system. Thirty trained interviewers conducted the surveys, which took an average of 14 minutes to complete. Respondents qualified for the interview if they recalled submitting a
comment to a Federal public agency and if they were 18 years of age or older. The survey was
completed by 1,553 respondents between August 30 and November 24, 2004. This
represented a cooperation rate of 48%, with a margin of error of +/- 2.5%.42
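As a consistency check, the reported +/- 2.5% margin of error matches the standard 95% confidence half-width for a sample proportion at n = 1,553 under the conservative worst-case assumption p = 0.5:

```python
# Half-width of a 95% z-interval for a proportion; z = 1.96 and
# p = 0.5 give the conservative (maximum) margin for a given n.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Return z * sqrt(p(1-p)/n), the interval half-width."""
    return z * math.sqrt(p * (1 - p) / n)

# For the survey's 1,553 completed interviews this is roughly 0.0249,
# i.e. about +/- 2.5 percentage points.
moe = margin_of_error(1553)
```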
The survey asked questions regarding the respondents’ general commenting practices
such as the number of times that they had commented, how much information they obtained
before commenting, how they typically submit a comment, whether they refer to other citizens’
comments and, if so, the effect this has on their comments, and the reasons that they commented.
Respondents were also asked questions pertaining to the results of the final rule-making process
such as whether they thought their comments were reviewed by a government employee,
whether they heard about the final agency decision, and if so, were they satisfied with the final
decision. Respondents were also asked questions about Federal agency websites that include the
frequency of the visits, the type of information they accessed, whether they used these websites
to submit a comment, their general perceptions of the effect Federal agency websites have on
commenting, and if they would be likely to submit a comment on an agency rule in the future.
Finally, respondents were asked if they believe submitting comments individually, or as a group,
has the ability to change the outcome of the final rule. Demographic variables include age,
gender, education, income, political ideology, voting behavior, race, ethnicity and weekly
internet use (in hours).43
Survey Findings
We organize the discussion of our findings around three important discoveries. First,
electronic commenters do not appear to be any more deliberatively engaged than paper
commenters. We observed a significant difference on only one measure, and on that measure it was paper commenters who scored higher. Second, despite failing to find
that electronic commenters are more deliberative, we observed greater levels of self-reported
deliberative activity across all types of commenters than expected. A surprisingly large number
of respondents reported that they read other individuals’ comments, acquire increased
understanding of other people’s positions as a result, and even occasionally change their own
positions. Third, rather than significant differences between electronic and paper commenters,
the main differences we found were between individuals who submitted original comments and
those who posted form letters.
Differences Between Paper and Electronic Commenters
The main goal of the survey was to look for differences between those who submitted
comments on paper, either through postal mail or fax, and those who used agency web-based
forms, interest group websites, or email to comment. The survey suggests that those differences
simply do not exist. There was a significant difference between electronic and paper
commenters on only one question. Paper commenters were more likely than web-based commenters (74.6% versus 67.1%; df=9, p<.01) to refer to the “arguments, studies, statements, or
positions made by agencies or individual organizations.” Since paper submitters are more likely
to say that they reference other people’s work, an essential practice for creating quality
discourse, our hypothesis that electronic commenters would demonstrate greater deliberative
activity than paper commenters is not supported. We suspect this may be due to the fact that
many submitters of original paper comments also use the Internet, and web-based agency
dockets, extensively as a resource to collect information in crafting their comments. While there
is a distinction between the means citizens use to comment, all types of commenters used
electronic means to gather information in the commenting process. As for the lack of discursive
indicators by electronic commenters, it may be that the technology, which makes commenting
easier than ever before, encourages the rapid submission of comments, which is antithetical to
more thoughtful and carefully reasoned arguments.
The Prevalence of Deliberative Indicators
While differences between electronic and paper commenters are practically nonexistent,
there are indicators that all types of commenters practice, or benefit from, certain types of
deliberative activity. In this section we report on four indicators of such deliberative discourse:
the frequency with which commenters seek out a variety of information, the tendency to review
other citizens’ comments, gaining an understanding of the positions of others, and changing
one’s own position after being exposed to the arguments of others. The findings are summarized
in Table 3.
Commenters are information-seekers
The use of information in developing a public comment is quite high. Overall,
commenters, regardless of medium, are information-seekers. When asked how much
information they receive on rules before submitting a comment, 45.2% said they get a lot of
information, and a full 90% say they get a lot or some information. Those who write original
paper comments claim the most; nearly 51% say they get a lot of information before submitting a
comment. Over 71% of those surveyed said that they referred to the arguments, studies,
statements or positions of agencies or independent organizations before submitting a comment;
again, those who submitted original paper comments were at the top with 76.7%. Agency websites are important sources of information for commenters; a full 50% of those surveyed said they
used these sites in developing their comment. Again, a large majority of commenters are seeking
out information, even those who submit form letters. Few commenters, at least from what they
report, simply submit comments without trying to understand the issue.
Commenters review others’ comments
Surprisingly, 68.0% of those surveyed said that they had read the comments of others at
some point. As these comments are only available either in person in the agency docket rooms
in DC or on the newly developed agency websites, it may be that all types of commenters are
using the agency websites to examine the docket, when such comments are available.44 For those
that specifically reported using the agency websites, 69.4% said that the site helped them review
other citizens’ comments. Again, and counter to our original hypotheses, such access to
information was reported highest (75.5%) by those who ultimately submitted original paper
comments. Still, overall reporting of the review of others’ comments is high regardless of
submission type, illustrating attention to the positions of others in the rulemaking process.
Commenters gain an understanding of other positions
Reading other citizens’ comments is not just for information; commenters report that
they gain an understanding of the positions of others as well. Overall, nearly three-quarters
(73.2%) say they get a better understanding of the positions of other citizens by reading their
comments, and 41.5% say that they have found the comments of other citizens persuasive. Of
the commenters who said that they visited and used agency websites, a very large percentage
(71.7%) said that they somewhat or strongly agreed with the statement that the agency websites
helped them to understand the positions of others. As the difference across types of commenters
is insignificant, this finding suggests that commenters in general are gaining an understanding of
the positions of other citizens commenting on a rule. Agency websites seem to have added to
this particular indicator of democratic deliberation.
Commenters change their own positions
Finally, over one-third (36.3%) of those surveyed report that their position on an issue
actually changed after reading others’ comments. That is less than the 47% who report no
change in their position, but the percentage that acknowledges such change is substantial, and is
yet another indicator that the limited discourse made possible by access to others’ comments is
having an impact on the reasoning of citizen commenters. However, this is a question that needs
further research. It may be that people are not changing their positions from “opposed” to “for”
or vice versa, but instead changing one or more reasons for being opposed, or for, the proposed
rule. Commenters might change their reasons due to information or arguments learned from
commenters with whom they agree.
TABLE THREE ABOUT HERE
Differences Between Original and Form Commenters
The interesting and significant differences in this study are between those who submit
original comments and those who submit form-based comments (see Tables 4 and 5). A better
understanding of these differences may impact how agencies respond to public comment and
how interest groups refine their campaigns. Numerous civil servants have reported at
workshops, focus groups, and interviews over the last four years that agencies are required to
respond to substantive comments, but not to sheer numbers. Notice and comment rulemaking
was designed to bring diverse information into the rulemaking process, not to serve as a referendum.45
Agency officials and rulewriters are consistently adamant on this point. Many interest groups, in
addition to drawing on their legal and scientific staff to draft detailed comments, respond to the
rulemaking process with an aggregative approach, soliciting mass numbers of identical or near-
duplicate comments from their members and other interested citizens. By all accounts, new ICTs
have enabled the number of comments to grow well beyond what agencies can process without
hiring expensive outside consulting firms to report on the content of citizen comments.
A key question is whether this technology improves or degrades the overall efficacy of
citizen discourse.46
Form versus original differences in information-seeking
In the survey findings, the differences between original and form commenters start with
the use of information. Over half (54.2%) of original commenters report having used an agency
website to read information on a proposed rule. This compares to only 44.2% of the form
commenters, suggesting a significant difference (df=2; p<.01). Both form and original
submitters, however, claim they gather information on rules before submitting a comment.
48.1% of original submitters claim to receive “a lot” of information, compared to 42.4% of form
submitters (df=4; ns). Similarly, there is not a great difference in the rate at which the two types
of commenters report referring to other arguments in their comments. Overall, however, whether
a comment is original or form-based is a stronger indicator of the use of information before
commenting than is the method of submission.
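As a check on the kind of test behind these figures, the reported agency-website difference (df=2; p<.01) can be reproduced with a standard chi-square test of independence. The sketch below is illustrative only: the cell counts are approximate, reconstructed from the percentages and group Ns reported in Table 4, and the critical value (9.21 for df=2 at the .01 level) comes from a standard chi-square table.

```python
# Chi-square test of independence: do original and form commenters differ
# on "Have you ever used a federal agency's website to read information
# on a proposed rule?"  Counts are approximate, reconstructed from the
# percentages and Ns reported in Table 4.
observed = [
    [410, 334, 12],  # originals: YES, NO, OTHER (N = 756)
    [353, 426, 20],  # forms:     YES, NO, OTHER (N = 799)
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Under independence, expected cell count = (row total * column total) / n.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)  # (2-1) * (3-1) = 2

# Critical value for df=2 at the .01 level is 9.21 (standard table);
# chi2 here is roughly 16, consistent with the reported p < .01.
print(f"chi2 = {chi2:.2f}, df = {df}, significant at .01: {chi2 > 9.21}")
```

The two degrees of freedom match the df=2 reported in Table 4 for this 2 (comment type) by 3 (response) cross-tabulation.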
Form versus original differences in viewing of others’ comments
While there is no significant difference between original and form commenters on their
reading of others’ comments, their perceptions of others’ comments as persuasive, or their
having changed their mind as a result of reading another comment, original commenters are
significantly more likely to report (76.7% vs. 69.8%) gaining “a greater understanding of the
positions or arguments of other citizens by reading their comments” (df=3; p<.05). While both
sets of commenters read the positions of others, original submitters are more likely to report
having a better understanding of those positions. It may be that original commenters see others’
comments as part of a larger discourse and so pay attention to all sides of the issue. Yet this
difference may also be a function of original commenters having greater faith that their
comments could actually change an outcome. If form commenters are more pessimistic about
their ability to affect outcomes, they may not read others’ comments with any intentions of
actually understanding others’ positions. The differences, as well as some similarities, are
summarized in Table 4.
TABLE 4 ABOUT HERE
Form versus original differences in trust
In addition to the modest differences between original and form commenters on the
deliberative indicators described above, there are significant differences between the two on a
number of indicators of trust in the process and the agency involved. For example, 62.7% of
original commenters (both paper and electronic) believe their comments were actually read by a
government employee, compared to only 45.6% of form commenters (df=3; p<.01). This is one
of the most significant differences we found between form and original commenters. Electronic
form commenters appear to be the most cynical about whether their participation will have an
impact and are the least satisfied with the final rule. Conversely, those who sent paper
original comments are the most satisfied with both their participation and the outcome. Not only are
form submitters more cynical about having their comments read and making a difference, they
are also more likely to say that their participation led to a negative view of the agency running
the rulemaking (45.4% for form commenters, vs. 36.0% of original commenters). Original
commenters are about six percentage points more likely (19.7% vs. 13.4%) to report a positive view of the agency
(df=5; p<.01). Original commenters report being slightly more satisfied than form commenters
with agency decisions on issues they have commented on (57.7% of originals are unsatisfied vs.
65.7% of form submitters) (df=3; p<.05).
Finally, users of form letters are simply more negative about the government in general.
Form commenters are more likely than original commenters (41.3% vs. 27.8%) to say they “rarely”
trust the government to do what is right (df=4; p<.01). When “rarely” and “never” are combined,
49.5% of form commenters report this distrust, while 32.5% of original commenters do. Simply
put, original submitters have significantly higher levels of trust in the government to do what is
right. These differences are reported below in Table 5.
TABLE FIVE ABOUT HERE
Overall, the survey illustrates that people believe that form letters are less likely to be
read by government employees and to have an actual impact. It may be that a negative
view of the agency and government in general was one of the reasons for commenting in the first
place. A central question here is whether a lack of faith in the agency has led to some citizens’
refusal to take the time to write an original letter. On the other hand, it may be that
original commenters understand the rulemaking process more thoroughly, and have more
knowledge of (and maybe sympathy for) the agency involved.
On the Value of Electronic Comment and Mass E-Mail Campaigns
There is one other key finding regarding the difference between form and original
commenters. Though it contradicts the lack of trust in government noted above, form
commenters are more likely than original commenters to think that groups that organize mass
mail campaigns have the ability to change proposed rules (86.7% vs. 81.7%). This may partly
explain why form commenters submit comments more often than original commenters.
Sixty-two percent of form commenters report submitting comments more
than ten times, while only 44% of original commenters report that level of participation. This
difference, however, can also be explained by the expertise and time involved in many original
comments.
This faith in the impact of mass email campaigns has contributed to the tactic’s growing
popularity. Nearly 50% of those surveyed said they submitted their last comment
through an interest group website, and almost 40% reported that this method will also be how
they comment next time. Only those who had submitted paper original comments said that they
would continue with that route over all others. While agencies such as the EPA and DOT have
worked to improve the information on their web-based docket systems, and the federal
government continues to develop a Federal Docket Management System as a single web-based
public comment portal, very few commenters plan to use such systems: only 12%, versus the
nearly 40% who plan to use interest group websites. Mass-mailed form letter comments
originating from various environmental and other interest groups make up the vast majority of
comments submitted on rules, and will continue to do so for the near future.
This practice should be worrisome for those interested in the potential of the web to
increase discourse on important issues in the rulemaking process. Commenters who submitted
using form emails via interest group websites were the least likely to look at other information
and the least likely to report that their positions have changed as a result of reading others’
comments. In other words, electronic form commenters show the lowest scores on many
deliberative indicators. Mass email campaigns, as they are currently designed, are only useful in
an aggregative form of democracy; such an approach is better suited to pressuring legislators
than to influencing agencies required to abide by the APA.
In addition, there is little evidence to support the belief that mass email campaigns
actually do change proposed rules. While the proposed “Waters” rulemaking was dropped, other
highly controversial rulemakings went forward even as tens of thousands, and sometimes hundreds
of thousands, of comments came in against them. Interviews with agency rulewriters and
officials show that agencies do not value (and often openly resent) form letters; they simply do
not meet the minimum requirement of a “substantive” comment. The EPA, in fact, simply prints
and stores an inaccessible hard copy of all but one example of each identical or similar mass
e-mail after taking note of how many it received of each specific version. Importantly,
however, our interviews and focus groups show that these same officials would welcome more
substantive and original comments, as they could return the rulemaking process to that designed
by the APA: a process based on the collection of information and substantive input from interested
parties outside the government.
Conclusion: Building Political Capability Among New Commenters
The distinction between paper and electronic commenters, which was the basis of our
original set of hypotheses, simply does not exist as we imagined it might. A majority of
commenters, regardless of the medium of submission, are using electronic means of researching
an issue, with paper commenters reporting a greater use of web-based agency docket systems.
Comparing paper and electronic commenters on recent rules does not help us understand whether
the new electronic systems are more deliberative than past paper-based notice and comment
processes. One could try to explore differences between current rulemaking processes and past,
pre-internet processes, but given the frailty of human memory, a retrospective survey would be an
inappropriate method.
That said, the issue of the difference between original and form-based participation is
clearly central to questions about the deliberative potential of the rulemaking process.
Original commenters embody many of the deliberative qualities we
hypothesized given the move to an accessible open-docket system. The range of significant
differences between original letter writers and form letter submitters might be partially explained
by the introduction of a large number of commenters (mostly form users) who are new to the
rulemaking process. The ease with which interest groups can spread information to constituents
about proposed rules open for public comment, and the sophistication of email action alert
systems that allow individuals to “participate” by doing little more than clicking the “send”
button on an interest group’s website, mean that agencies are getting more comments, especially
from people who have not participated in the process in the past. Though many of these
participants, even electronic form submitters, reported to us that they seek out information before
sending in their comments, form submitters are nevertheless much more cynical about the
process, and much less deliberative in their engagement. This leads us to conclude that there is a
certain amount of political capability that must be acquired before these new participants have a
level of efficacy and trust in the process that will justify the effort required to become more
deliberative participants.
Perhaps as the very technology that has brought more participants into the process is
better utilized to handle increased levels of participation, all types of participants—from paper
original letter writers to electronic form submitters—will feel their participation is meaningful.
In turn, theoretically, these participants will invest time in becoming more educated, thoughtful,
and deliberative commenters. This is important since 91% of respondents said they are very or
somewhat likely to submit comments again in the future. Whereas 39.7% report they will go
through an interest group website to submit their comments (and, presumably, the majority of
these will send the “click to send” variety of form letter), only 12% plan to use an agency
website. Federal agencies do not necessarily need to figure out how to get more people to
comment through their websites, but they do need to figure out how to get more commenters to
trust the process and invest time in enhancing the discourse surrounding a proposed rule. In the
meantime, the deliberative divide between citizens who invest time to write original letters and
those who merely submit form letters has the potential to increase feelings of powerlessness or
disenfranchisement.
So we conclude by noting the potential of electronic rulemaking to enhance democratic
deliberation on key issues in the American polity. Certainly, we see that some citizens are
interested in rules, in information surrounding various issues, and in what other citizens have to say
in the comment process; many citizens are also willing to have their own positions challenged
and possibly transformed in the engagement with others. We also see that technology exists both
to enhance the deliberative process (the open dockets and access to information on agency
websites) and to degrade discourse (the easy click-to-send web pages on interest group websites).
Obviously, the technology will not stand still; we only hope that research like this will push
agencies and interest groups alike to develop systems that meet the ideals of both the APA notice
and comment process and deliberative democracy, increasing the amount of information and the
exchange of views in the development of better policy.
Works Cited
Andrews, Richard N.L. 1999. Managing the Environment, Managing Ourselves: A History of
American Environmental Policy. New Haven: Yale University Press.
Barber, Benjamin. 1984. Strong Democracy: Participatory Politics for a New Age. Berkeley:
University of California Press.
Beierle, Thomas C. 2004. Digital Deliberation: Engaging the Public Through Online Policy
Dialogues. In Democracy Online: The Prospects for Political Renewal Through the
Internet, ed. Peter Shane, 155-166. New York: Routledge.
Berkman Center for Internet & Society. 2005. Online Deliberative Discourse Project.
http://cyber.law.harvard.edu/projects/deliberation/ [Last accessed February 28, 2005].
Bessette, Joseph M. 1994. The Mild Voice of Reason: Deliberative Democracy and American
National Government. Chicago: University of Chicago Press.
Bimber, Bruce. 2003. Information and American Democracy: Technology in the Evolution of
Political Power. New York: Cambridge University Press.
Bimber, Bruce. 2000. The Study of Information Technology and Civic Engagement, Political
Communication. 17 (4): 329-333.
Bohman, James. 1996. Public Deliberation: Pluralism, Complexity and Democracy. Cambridge,
MA: MIT Press.
Coglianese, Cary. 2004. E-Rulemaking: Information Technology and Regulatory Policy.
Regulatory Policy Program Report No. RPP-05.
Coglianese, Cary. 2003. The Internet and Public Participation in Rulemaking. Paper prepared for
conference on Democracy in the Digital Age, Yale Law School, April 2003.
Coleman, S., & Gøtze, J. 2001. Bowling Together: Online Public Engagement in Policy
Deliberation. http://www.bowlingtogether.net/ [Last accessed January 31, 2003].
Dahl, Robert. 1961. Who Governs? New Haven: Yale University Press.
Dahlberg, Lincoln. 2001. Computer-Mediated Communication and the Public Sphere: A
Critical Analysis. Journal of Computer-Mediated Communication. 7 (1)
http://www.ascusc.org/jcmc/vol7/issue1/dahlberg.html [Last accessed March 1, 2005].
Dryzek, John S. 2000. Deliberative Democracy and Beyond: Liberals, Critics, Contestations
New York: Oxford University Press.
Froomkin, A. Michael. 2004. Technologies for Democracy. In Democracy Online: The
Prospects for Political Renewal Through the Internet, ed. Peter Shane, 3-20. New York:
Routledge.
Froomkin, A. Michael. 2003. Habermas@Discourse.Net: Toward a Critical Theory of
Cyberspace. Harvard Law Review 116 (3): 751-873.
Habermas, Jurgen. 1996. Between Facts and Norms: Contributions to a Discourse Theory of
Law and Democracy. Cambridge, MA: MIT Press.
Janssen, Davy and Raphaël Kies. 2004. Online Forums and Deliberative Democracy:
Hypotheses, Variables and Methodologies. Prepared for the Conference on Empirical
Approaches to Deliberative Politics, European University Institute, Florence (May 22-23,
2004) http://edc.unige.ch/publications/e-workingpapers/onlineforums.pdf [Last accessed
February 28, 2005].
Lessig, Lawrence. 1999. Code and Other Laws of Cyberspace. New York: Basic Books.
Lubbers, Jeffrey S. 2002. The Future of Electronic Rulemaking: A Research Agenda.
Administrative and Regulatory Law News 27 (4): 6-7, 22-23.
Lubbers, Jeffrey S. 1998. A Guide to Federal Agency Rulemaking. Third Edition. Chicago:
ABA.
Muhlberger, Peter. 2004. Polarization of Political Attitudes and Values on the Internet. Paper
presented at the 2004 International Communication Association meeting, New Orleans,
LA http://communityconnections.heinz.cmu.edu/papers/index.jsp [last accessed March 1,
2005].
Noveck, Beth S. 2004. The Electronic Revolution in Rulemaking. http://snipurl.com/88jf [Last
accessed February 28, 2005].
Paehlke, Robert. 1989. Environmentalism and the Future of Progressive Politics. New Haven:
Yale University Press.
Rawls, John. 1996. Political Liberalism. New York: Columbia University Press.
Schlosberg, David, Stuart Shulman, and Stephen Zavestoski. 2005. Virtual Environmental
Citizenship: Web-Based Public Participation on Environmental Rulemaking in the U.S.
In Environmental Citizenship: Getting from Here to There, eds. Andy Dobson and Derek
Bell. Cambridge, MA: MIT Press.
Schlosberg, David, and John S. Dryzek. 2002. Digital Democracy: Authentic or Virtual?
Organization & Environment 15 (3): 332-335.
Shane, Peter. 2004. Introduction: The Prospects for Electronic Democracy. In Democracy
Online: The Prospects for Political Renewal Through the Internet, ed. Peter Shane, xi-xx.
New York: Routledge.
Shulman, Stuart W. 2004a. The Internet Still Might (but Probably Won’t) Change Everything:
Stakeholder Views on the Future of Electronic Rulemaking.
http://erulemaking.ucsur.pitt.edu/doc/reports/e-rulemaking_final.pdf [Last accessed April
11, 2005].
Shulman, Stuart W. 2004b. Whither Deliberation? Mass e-Mail Campaigns and U.S. Regulatory
Rulemaking. http://erulemaking.ucsur.pitt.edu/doc/papers/Smarttape12.04.pdf [Last
accessed April 11, 2005].
Shulman, Stuart W. 2003. An Experiment in Digital Government at the United States National
Organic Program. Agriculture and Human Values 20, (3): 253-265.
Shulman, Stuart W., David Schlosberg, Stephen Zavestoski, and David Courard-Hauri. 2003.
Electronic Rulemaking: New Frontiers in Public Participation. Social Science Computer
Review 21 (2): 162-178.
Sunstein, Cass R. 2001. Republic.com. Princeton, N.J.: Princeton University Press.
Truman, David. 1960. The Governmental Process: Political Interests and Public Opinion. New
York: Knopf.
Witschge, Tamara. 2004. Online Deliberation: Possibilities of the Internet for Deliberative
Democracy. In Democracy Online: The Prospects for Political Renewal Through the
Internet, ed. Peter Shane, 109-122. New York: Routledge.
Young, Iris Marion. 2000. Inclusion and Democracy. Oxford: Oxford University Press.
Zavestoski, Stephen, Stuart Shulman and David Schlosberg. 2006. Democracy and the
Environment on the Internet: Electronic Citizen Participation in Regulatory Rulemaking.
Science, Technology & Human Values, forthcoming.
Zavestoski, Stephen and Stuart W. Shulman. 2002. The Internet and Environmental Decision-
Making. Organization & Environment 15 (3): 323-327.
Table 1. Summary of Completed Surveys

                     Electronic Submission   Paper Submission   Total
Form Comment                  376                  421            797
Original Comment              381                  375            756
Total                         757                  796          1,553
Table 2. Case Characteristics and Data Access.

Waters
  Estimated total number of comments: ~135,000
  Comments in sampling frame: 3,223
  Access to comments: EPA’s “eDocket” web-based docket management system.
  Limitations: EPA places only unique original comments in the eDocket system, plus one
  example of each type of form letter received; our ability to include submitters of form
  letters on this rule was therefore limited. EPA reports having deleted over 125,000 emails.
  Sampling approach: We collected contact info from every single electronic and paper form
  letter that existed in eDocket (therefore no sampling actually took place); for original
  letters, we collected info from eDocket using systematic random sampling until the target
  of 125 paper and 125 electronic originals was reached.

Mercury
  Estimated total number of comments: ~490,000
  Comments in sampling frame: 4,264, plus ~536,000 emails
  Access to comments: EPA’s “eDocket” web-based docket management system; EPA also supplied
  .txt files containing ~536,000 emails.
  Limitations: Limited access to form letters in the eDocket system was offset by .txt files
  containing all ~536,000 emails submitted, the vast majority of which were form submissions.
  Sampling approach: We collected contact info from every single electronic and paper form
  letter that existed in eDocket, and then used text searching to acquire phone numbers from
  the ~536,000 email submitters; for original letters, we collected info from eDocket using
  systematic random sampling until the target of 125 paper and 125 electronic originals was
  reached.

CAFE
  Estimated total number of comments: 66,786
  Comments in sampling frame: 66,786
  Access to comments: DOT’s web-based “Docket Management System” (DMS).
  Limitations: DOT did not enumerate all 66,786 comments in the DMS, but rather created
  “records” of form comments containing anywhere from 2 to 25,432 versions of a form letter.
  Sampling approach: Systematic random sampling was used to select records from the DMS;
  when a record contained no contact info, the next record was selected; when a record
  contained multiple versions of a form letter, systematic random sampling was used within
  the batch of form letters.
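The systematic random sampling described in the table can be sketched as follows. This is an illustration of the general technique only (choose a sampling interval, pick a random start within it, then take every k-th record); the function name and the toy docket are hypothetical, not the authors’ actual instrument.

```python
import random

def systematic_sample(records, target):
    """Systematic random sample: a random start, then every k-th record."""
    k = max(1, len(records) // target)  # sampling interval
    start = random.randrange(k)         # random start within the first interval
    return records[start::k][:target]

# Illustrative use: draw 125 original letters from a docket of 3,223 comments,
# mirroring the 125-paper / 125-electronic targets described in Table 2.
docket = [f"comment-{i:04d}" for i in range(3223)]
sample = systematic_sample(docket, 125)
print(len(sample))  # 125
```

Because every k-th record is taken, the sample is spread evenly across the docket, which is the property that makes systematic sampling a practical stand-in for simple random sampling on an ordered list.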
Table 3. Summary of Deliberation Measures
(Cell values are percentages for Paper Original / Paper Form / Electronic Original /
Electronic Form / Total; total Ns in parentheses; significance in brackets)

Commenters are information-seekers

In general, how much information do you receive on rules before submitting a comment?
[p=.284, df=12]
  “A lot”          50.8 / 43.3 / 45.4 / 41.4 / 45.2 (700)
  “Some”           40.4 / 47.6 / 44.4 / 47.3 / 45.0 (697)
  “A little”        7.0 /  6.2 /  7.6 /  9.9 /  7.6 (118)
  “None at all”     1.1 /  1.9 /  1.3 /  0.8 /  1.3 (20)
  “Don’t Know”      0.8 /  1.0 /  1.3 /  0.5 /  0.9 (14)

When preparing your comments, do you refer to arguments, studies, statements or positions
made by agencies or independent organizations? [p=.009, df=9]
  YES              76.7 / 72.6 / 67.5 / 66.7 / 70.9 (1099)
  NO                7.0 / 11.4 / 12.6 / 16.3 / 11.8 (183)
  OTHER            16.3 / 16.0 / 19.9 / 17.1 / 17.3 (268)

Have you ever used a federal agency’s website to read information on a proposed rule?
[p=.000, df=6]
  YES              50.0 / 42.7 / 58.4 / 45.9 / 49.1 (763)
  NO               48.9 / 55.0 / 39.5 / 51.5 / 48.9 (760)
  OTHER             1.1 /  2.4 /  2.1 /  2.7 /  2.1 (32)

Commenters review other comments

Have you ever read other citizens’ comments before sending in a comment? [p=.605, df=6]
  YES              70.1 / 72.6 / 67.9 / 71.6 / 70.6 (806)
  NO               27.3 / 23.6 / 27.6 / 26.2 / 26.2 (299)
  OTHER             2.5 /  3.7 /  4.4 /  2.2 /  3.2 (37)

Commenters gain an understanding of other positions

Among those who reported reading others’ comments: do you gain a greater understanding of
the positions or arguments of other citizens by reading their comments? [p=.196, df=9]
  YES              79.5 / 69.0 / 74.1 / 70.6 / 73.2 (824)
  NO                9.5 / 13.4 /  9.7 / 11.0 / 10.9 (123)
  OTHER            11.0 / 17.6 / 16.2 / 18.3 / 15.8 (178)

Have you found that other citizens’ comments are persuasive? [p=.488, df=9]
  YES              37.5 / 44.3 / 45.4 / 38.1 / 41.5 (459)
  NO               22.7 / 23.5 / 19.4 / 27.3 / 22.0 (243)
  OTHER            39.8 / 32.1 / 35.2 / 39.6 / 36.6 (405)

Commenters change their own positions

Has your own position on issues EVER changed at all as a result of reading other citizens’
comments? [p=.939, df=9]
  YES              36.9 / 37.2 / 37.4 / 33.6 / 36.3 (408)
  NO               46.9 / 45.2 / 45.7 / 51.5 / 47.2 (531)
  OTHER            16.2 / 17.6 / 16.9 / 15.0 / 16.4 (185)
Table 4. Summary of Form vs. Original Differences in Deliberation Measures
(Cell values are percentages for Originals / Forms / Total; Ns in parentheses;
significance in brackets)

Information Seeking

In general, how much information do you receive on rules before submitting a comment?
[p=.225, df=4; N = 755 originals, 794 forms, 1549 total]
  “A lot”          48.1 / 42.4 / 45.2 (700)
  “Some”           42.4 / 47.5 / 45.0 (697)
  “A little”        7.3 /  7.9 /  7.6 (118)
  “None at all”     1.2 /  1.4 /  1.3 (20)
  “Don’t Know”      1.1 /  0.8 /  0.9 (14)

When preparing your comments, do you refer to arguments, studies, statements or positions
made by agencies or independent organizations? [p=.110, df=3; N = 755 originals, 795 forms,
1550 total]
  YES              72.0 / 69.8 / 70.9 (1099)
  NO                9.8 / 13.7 / 11.8 (183)
  OTHER            18.1 / 16.5 / 17.3 (268)

Have you ever used a federal agency’s website to read information on a proposed rule?
[p=.000, df=2; N = 756 originals, 799 forms, 1555 total]
  YES              54.2 / 44.2 / 49.1 (763)
  NO               44.2 / 53.3 / 48.9 (760)
  OTHER             1.6 /  2.5 /  2.1 (32)

Viewing Others’ Comments

Have you ever read other citizens’ comments before sending in a comment? [p=.497, df=2;
N = 571 originals, 571 forms, 1142 total]
  YES              69.0 / 72.2 / 70.6 (481)
  NO               27.5 / 24.9 / 26.2 (299)
  OTHER             3.5 /  2.9 /  3.2 (37)

Among those who reported reading others’ comments: do you gain a greater understanding of
the positions or arguments of other citizens by reading their comments? [p=.041, df=3;
N = 563 originals, 562 forms, 1125 total]
  YES              76.7 / 69.8 / 73.2 (824)
  NO                9.6 / 12.2 / 10.9 (123)
  OTHER            13.7 / 18.0 / 15.8 (178)
Table 5. Original vs. Form Differences in Trust and Satisfaction
(Cell values are percentages for Originals / Forms / Total; Ns in parentheses;
significance in brackets)

Do you think the comments you have submitted were viewed by a government employee?
[p=.000, df=3; N = 756 originals, 796 forms, 1552 total]
  YES              62.7 / 45.6 / 54.0 (837)
  NO               11.0 / 17.8 / 14.5 (225)
  OTHER            26.3 / 36.6 / 31.5 (490)

Does your participation generally lead you to have a positive or negative view of the
agency involved? [p=.003, df=5; N = 512 originals, 536 forms, 1048 total]
  POSITIVE         19.7 / 13.4 / 16.5 (173)
  NEGATIVE         36.0 / 45.4 / 40.7 (427)
  OTHER            44.3 / 41.2 / 42.8 (448)

Have you been satisfied with the agencies’ decisions on issues that you have commented on?
[p=.022, df=3; N = 643 originals, 677 forms, 1320 total]
  YES              13.7 / 10.9 / 12.3 (162)
  NO               57.7 / 65.7 / 61.8 (816)
  OTHER            28.6 / 23.4 / 25.4 (342)

How often do you trust the federal government to do what is right? [p=.000, df=4;
N = 740 originals, 781 forms, 1521 total]
  All or some of the time   66.1 / 49.1 / 57.3 (872)
  Rarely or Never           32.5 / 49.5 / 41.2 (627)
  Don’t Know                 1.4 /  1.5 /  1.4 (22)
1 This project was funded by a grant (SES-0322622) from the National Science Foundation,
Social and Economic Sciences, Program on Social Dimensions of Engineering, Science and
Technology (SDEST), Ethics and Values Studies. Any opinions, findings, conclusions, or
recommendations expressed in this material are those of the authors and do not necessarily
reflect those of the National Science Foundation.
2 The federal eRulemaking Initiative (http://www.regulations.gov/eRuleMaking.cfm) is one of 24
E-Government efforts at the federal level (http://www.whitehouse.gov/omb/egov/). On the
progress of the President’s Management Agenda to date, see the GAO report “Electronic
Government: Initiatives Sponsored by the Office of Management and Budget Have Made Mixed
Progress” GAO-04-561T available at: http://www.gao.gov/new.items/d04561t.pdf.
3 See http://www.getactive.com/ or http://capitoladvantage.com/ for examples of firms that sell e-
advocacy services to groups by highlighting millions of constituent messages delivered and
dollars raised for their customer organizations. The pitch is backed by claims that tools such as an
“Email Relationship Manager,” for example, will improve communication not only between
citizens and their government, but also between group leaders and members. The data mining
that goes on, however, tends to be for the purpose of building and strengthening the organization
itself through targeted electronic mail appeals to commenters. Some activists privately report that
limits on staff time and resources mean that organizations rarely mine their own members’
comments for good ideas.
4 On electronic rulemaking see Shulman et al. 2003; Shulman 2004a; Coglianese 2004; Lubbers
2002, on online political deliberation see Shane 2004.
5 Berkman Center 2005
6 See: http://www.online-deliberation.net/conf2005/.
7 See: http://cdd.stanford.edu/polls/docs/2004/onlinedp-release.pdf.
8 Beierle 2004; Schlosberg & Dryzek 2002; Sunstein 2001.
9 Bimber 2003 & 2000.
10 Froomkin 2004; Coleman and Gøtze 2001.
11 Lessig 1999.
12 Noveck 2004.
13 Zavestoski & Shulman 2002.
14 Shulman, et al. 2003; Schlosberg, et al. 2005.
15 For example Dahl 1961; Truman 1960.
16 A recent report from the American Political Science Association Task Force on Inequality and
American Democracy stated “the Internet may ‘activate the active’ and widen disparities
between participants and the politically disengaged by making it easier for the already politically
engaged to gain political information.” See “American Democracy in an Age of Rising
Inequality” at http://snipurl.com/7egy.
17 Shane 2004: xx; see also Coglianese 2003; Muhlberger 2004.
18 Froomkin 2003, 777.
19 One of the central challenges for research in this field is that most cases are exceptional; the
business practice of an institution or the architecture of an electronic interface is often novel,
experimental, or entirely idiosyncratic (Beierle 2004; Shulman 2003). Our own research looks
only at those environmentally driven regulatory actions where the architecture of the online
notice and comment process permitted commenters to view other comments before writing their
own comments, and where the total number of public comments received numbered in the tens
or hundreds of thousands. Our survey respondents are therefore drawn from exceptional rather
than ordinary rulemakings.
20 Sunstein 2001.
21 Lubbers 1998: 19.
22 See Amy Goldstein and Sarah Cohen, “Bush Forces a Shift in Regulatory Thrust,” Washington
Post (August 15, 2004), A1, the first of a series of three articles in the Post on recent regulatory
politics, which appeared at about the same time as Joel Brinkley, “Out of the Spotlight, Bush
Overhauls U.S. Regulations,” New York Times (August 14, 2004).
23 See, for example, Bessette 1994; Rawls 1996.
24 Barber 1984; Bohman 1996; Dryzek 2000; Habermas 1996; Young 2000.
25 Dryzek 2000: 1.
26 Janssen and Kies 2004.
27 Froomkin 2004; Witschge 2004; Dahlberg 2001.
28 See Dryzek 2000.
29 Zavestoski et al. 2006.
30 Andrews 1999: 240.
31 Paehlke 1989.
32 See Federal Register Vol. 68, No. 10, pp. 1991-1998 (available at: http://snipurl.com/dac3). At
a June 10, 2003 hearing before a Senate Subcommittee, G. Tracy Mehan, Assistant
Administrator for Water, noted that most of the comments received were “the result of e-mail or
write-in campaigns,” whereas about 500 were “substantive” comments. See S. HRG. 108-352, p.
16.
33 See http://snipurl.com/dace.
34 See http://www.nrdc.org/bushrecord/water_wetlands.asp.
35 See Federal Register Vol. 69, No. 20, pp. 4652-4752 (available at: http://snipurl.com/dab9).
“As of February 2005, EPA E-Docket shows an actual count of more than ~490,000 public
comments and close to 4,500 unique comments received. The initial count of 680,000 and 5,000
included duplicate and triplicate e-mails and comments related to other rules.” See:
http://snipurl.com/dabd.
36 See http://snipurl.com/dabh.
37 See: http://www.epa.gov/mercury/report.htm.
38 See Federal Register Vol. 68, No. 248, pp. 74908-74931 (available at:
http://snipurl.com/daw6).
39 Thanks go to Michael Aquino, Tina Eyraud, Meg Inokumu, Jonathan Nez, Suzuki Susumu,
Paul Vaughn, and Baohua Yen.
40 Between the time we received the 536,000+ text files and this writing, the EPA determined that
nearly 50,000 of the e-mails were exact duplicates, triplicates, spam, or submissions for other
rulemakings; hence there is a discrepancy between the estimated total number of comments
received and the number of comments in our sample frame.
41 We used www.whitepages.com, and found that we were able to obtain phone numbers for
slightly more than 60% of the names and addresses we entered.
42 While we are discussing “citizen” commenters, we should make clear that a small percentage
of our respondents were involved in the rulemaking process in roles other than that of a private
citizen. Of those surveyed, 86.4% reported that they generally commented as a private citizen, 7.1% as a
paid employee, 3.4% as an unpaid volunteer, and 3.2% as something else (though mostly as a
representative of an interest group).
43 Obviously, there are problems with operationalizing our questions within the methodology of
survey research. Participants may understand the questions in ways other than those we intended,
self-reporting may exaggerate discursive indicators, and citizens may simply be mistaken about
what they actually did during the rulemaking process. Still, we think it is central in an
examination of these issues to get direct input from a large number of citizen participants in the
rulemaking process, and are confident that our methods meet the standards of survey research.
44 Then again, as only 50% say they visited agency websites, and it seems unlikely that 18%
physically visited a docket room, this number needs further explanation. It may be that some
who report reading others’ comments saw samples on interest group websites.
45 Shulman 2004a.
46 Shulman 2004b.