“Security by Obscurity”: Journalists’ Mental Models
of Information Security
Susan E. McGregor and Elizabeth Anne Watkins
Despite wide-ranging threats and tangible risks, journalists have not done much to change their
information or communications security practices in recent years. Through in-depth interviews,
we provide insight into how journalists conceptualize security risk. By applying a mental models
framework, we identify a model of “security by obscurity”—one that persists across participants
despite varying levels of investigative experience, information security expertise and job
responsibilities. We find that the prevalence of this model is attributable at least in part to poor
understandings of technological communication systems, and recommend future research
directions in developing educational materials focused on these concepts.
Introduction
Among the first and most shocking of the Snowden revelations of 2013 was the public
disclosure of the U.S. government’s large-scale collection of communications metadata,
including the information of U.S. citizens (Greenwald, 2013). While there was nothing
to suggest that journalists were particular targets of this effort, the revelations were
nonetheless a shock to the U.S. journalism community in particular, which for decades
had operated with the understanding that their communications with sources were
effectively protected from government interference by the network of so-called “shield
laws” that prevented law enforcement from using the legal system to compel journalists
to reveal their sources. That the Snowden documents also implied government infiltration
of the systems of companies (such as Google) (Greenwald & MacAskill, 2013) to which
many journalistic organizations had recently turned over their own email services, was
even more unsettling (Wagstaff, 2014).
Moreover, the Snowden revelations came at a time when the journalism industry was
feeling particularly sensitive about the government’s collection and use of metadata: only
several weeks prior, the U.S. Department of Justice had notified the Associated Press
that it had been secretly monitoring both the office and mobile phone lines of several
AP journalists as part of a leak investigation (Horowitz, 2013). Though outcry from the
industry over this activity eventually resulted in promises from the attorney general that
orders for journalists’ information would be reviewed more closely (Savage, 2013), these
were only good-faith assurances. Should they be contravened, a news organization
could not, as Senior Vice President and General Counsel of Hearst Corporation Eve
Burton put it, “march into court and sue the DOJ” (McGregor, 2013, para. 7).
At the same time, the U.S. government’s willingness to treat metadata as legally
dispositive was playing a role in multiple high-profile journalistic leak investigations.
Though another 18 months remained in the standoff between the Department of Justice
and The New York Times reporter James Risen over Risen’s refusal to identify the
source of classified information included in his 2006 book State of War, District Judge
Leonie Brinkema had quashed the subpoena for Risen’s testimony on the basis of the
“numerous telephone records, e-mail messages, computer files and testimony that
strongly indicates that Sterling was Risen’s source” (Brinkema, 2011, p. 23). In 2015,
Jeffrey Sterling was convicted and sentenced under the Espionage Act, though Risen
never testified (Maass, 2015). Similarly, just a few weeks before the Snowden revelations,
The Washington Post reported on the DOJ’s use of Fox News reporter James Rosen’s
telephone and other metadata to build a case against Stephen Jin-Woo Kim in 2010
(Marimow, 2013).
The security risks to journalists and journalistic organizations in recent years have
not been confined to legal mechanisms and leak prosecutions, however. During 2013
alone a host of major news organizations—including The New York Times, The Wall
Street Journal, Bloomberg, and The Washington Post—revealed that their digital
communications systems had been the target of state-sponsored digital attacks
(Perlroth, 2013), a pattern that was corroborated by independent security researchers in
the spring of 2014 (Marquis-Boire & Huntley, 2014). In at least some cases, the objective
of the attacks seemed to be the identification of journalists’ sources. In the case of The
New York Times, for example, the timing and pattern of the attack suggested that the
motivation was to uncover the identities of sources for a range of embarrassing stories
about Chinese government officials (Perlroth, 2013). In other cases, hacking efforts
appeared more ad-hoc and retaliatory, as when the Syrian Electronic Army (SEA)
defaced the VICE website following a story that allegedly revealed the real identity
of SEA member “Th3 Pr0” (Greenberg, 2013), or when the Associated Press’ Twitter
account was hacked, leading to false reports of a bomb detonating near the White House
(Blake, 2013).
Despite the wide range of threats and tangible consequences of these events (for
example, both Sterling and Kim were convicted and sentenced to prison time as a
result of their implication as journalistic sources), research shows that in the roughly 30
months since the Snowden revelations, even investigative journalists have not done
much to change their practices with respect to information or communications security.
For example, a Pew Research Survey of investigative journalists conducted in late 2014
found that fully half of these practitioners did not report using information security tools
in their work, and less than 40% reported changing their methods of communicating
with sources since the Snowden revelations (Mitchell, Holcomb & Purcell, 2015a). Yet
the same research indicates that the majority of investigative journalists believe that
the government has collected data about their communications (Mitchell, Holcomb &
Purcell, 2015a). And while the Pew survey found that fully 88% of respondents reported
“decreasing resources in newsrooms” as the top challenge facing journalists today, more
than half (56%) named legal action against journalists as the second.
On the surface, these results present an apparent contradiction: roughly the same majority
proportion of investigative journalists (62%) had not changed the way they communicate
with sources in the 18 months after the Snowden revelations, despite the belief that the
government is collecting data about their communications (Mitchell, Holcomb & Purcell,
2015a), and that legal action against journalists is the second-biggest challenge faced
in the profession today. And, as noted above, these concerns are well founded given
the significant reliance by law enforcement on communications metadata to prosecute
journalistic sources.
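To make concrete why metadata alone can carry such weight in a leak investigation, consider a minimal, entirely hypothetical sketch in Python: given subpoenaed call records (caller, recipient, timestamp) and the small set of officials cleared to see a leaked document, a simple set intersection is enough to surface a candidate source, with no message content required. All numbers and records below are invented for illustration.

# Hypothetical sketch: how communications metadata alone can narrow a leak
# investigation without any message content. All data here is invented.

from datetime import datetime

# (caller, recipient, timestamp) records, e.g. from a telecom subpoena
call_records = [
    ("+1-202-555-0117", "+1-212-555-0143", datetime(2013, 4, 2, 9, 15)),
    ("+1-202-555-0190", "+1-212-555-0143", datetime(2013, 4, 3, 18, 40)),
    ("+1-202-555-0117", "+1-202-555-0166", datetime(2013, 4, 4, 11, 5)),
]

reporter_number = "+1-212-555-0143"

# Phone numbers of the small set of officials cleared to see the leaked material
cleared_officials = {"+1-202-555-0117", "+1-202-555-0172"}

# Anyone cleared to see the material who also called the reporter becomes a suspect
contacted_reporter = {caller for caller, recipient, _ in call_records
                      if recipient == reporter_number}
suspects = cleared_officials & contacted_reporter

print(suspects)  # {'+1-202-555-0117'} -- a single candidate, no content needed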
Literature Review
Mental Models and Journalists’ Security Practices
Discrepancies between belief and practice are hardly unique to journalists, and a range
of frameworks is used in the behavioral sciences both to describe these gaps and to
design mechanisms for change (Gaston & Gerjo, 1996; Festinger, 1962). Of these,
however, only a mental models framework captures both the systemic and technological
nature of journalists’ information security space.
While there are many definitions of the term mental model across fields (Doyle & Ford,
1998), one useful definition comes from Norman, who characterizes a mental model as a
construct that a person or group uses to represent a system and make decisions about it
(1983, p. 7). Based on our research and the fact that journalists’ security understandings
and practices exist at the intersection of multiple technological and human systems
of which journalists themselves may have varying levels of understanding (Mitchell,
Holcomb & Purcell, 2015a), we find that exploring and characterizing journalists’ mental
models of information security helps illuminate how and why journalists make the
information security choices that they do.
Growing Digital Risk
The majority of both legal and technological security risks to journalists and sources
in recent years have centered on digital communications technology. In the United
States, the most high-profile of these were leak prosecutions that relied on digital
communications metadata (Horowitz, 2013; Brinkema, 2011), and technical attacks by
state actors on U.S. news organizations (Perlroth, 2013a; 2013b).
While such incidents are becoming unsettlingly common, however, this does not mean
that they constitute an appropriate proxy for the breadth of security risk actually faced
by journalists and journalistic organizations, even in a solely U.S. context. While by
2013 the Obama administration had brought a total of seven cases against journalists’
sources under the Espionage Act (Currier, 2013), more than twice that of all previous
administrations combined, this record is not the result of a particular policy decision or a greater
absolute number of leaks, but of more general policies and the greater feasibility of
tracking disclosures (Shane & Savage, 2012). As one department official put it:
As a general matter, prosecutions of those who leaked classified information to
reporters have been rare, due, in part, to the inherent challenges involved in
identifying the person responsible for the illegal disclosure and in compiling the
evidence necessary to prove it beyond a reasonable doubt. (Liptak, 2012, p. 1)
In other words, the recent flurry of leak prosecutions is not the result of the administration
working harder, but of the process getting easier, including “a proliferation of
e-mail and computer audit trails that increasingly can pinpoint reporters’ sources” (Shane
& Savage, 2012, para. 3).
Similarly, while sophisticated technical attacks by nation-states like China (Perlroth,
2013a) and North Korea (Grisham, 2015) have been prominently reported, more
commonplace attacks have also become more frequent. For example, more generalized
phishing attacks (Greenberg, 2014) and exploitation attacks (Mattise, 2014) have also
been on the rise.
Thus, while the industry’s consciousness has been focused on leak prosecutions and
technical attacks relating to national-security beats, the reality is that the general security
risk for journalists has been growing in recent years across the board. From SEC
investigations (Coronel, 2014; Hurtado, 2016) to phishing attacks (Associated Press,
2013; Greenberg, 2014), evidence suggests that while the consequences of
national-security-related threats have thus far been more severe, the risks faced by journalists
are far more widespread.
Despite both the severity and pervasiveness of these attacks, however, research
indicates that journalists see information security “as a serious concern mainly
for journalists who cover national security, foreign affairs or the federal government”
(Mitchell, Holcomb & Purcell, 2015a, p. 13). Reflecting this attitude, more than 60% of
investigative journalists had never participated in any type of information security training
(Mitchell, Holcomb & Purcell, 2015a).
Mental Models
Journalists’ failure to engage with information security topics and tools can be explained
in a number of ways; indeed, the failure to adopt information security tools and practices has
been the subject of substantial research within the security community, especially since
Alma Whitten and J. D. Tygar’s seminal paper on the topic, “Why Johnny can’t encrypt”
(1999). Like Whitten and Tygar, computer security researchers have tended either to focus
on the usability of the available security tools (Renaud, Volkamer & Renkema-
Padmos, 2014) or to uncritically label information security failures as user errors (as
discussed in Sasse et al., 2001). Even if accurate, however, these explanations do little
to explain why journalists may not see information security practices as essential in the
first place.
By contrast, understanding journalists’ mental models of information security can provide
valuable insight into how they interact with security-related systems and processes.
Because mental models comprise “what people really have in their heads and guide their
use of things” (Norman, 1983, p. 12), they can offer both “explanatory and predictive
power” (Rook & Donnell, 1993, p. 1650) for journalists’ decisions about systems and
situations like digital communications and information security.
A complete mental model is usually composed of one or more system models along with
related knowledge and concepts about how that system behaves in particular domains
(Brandt & Uden, 2003). For example, a mental model of using a search engine to locate
information on the Internet might be composed of a system model of how the search
engine retrieves and ranks information, along with conceptual models about what types
of search terms will yield the preferred results. Taken together, these models would
constitute the particular user’s mental model of Internet searching.
Importantly, however, the system models that help make up a given model are not
always complete or accurate; while this may reduce the efficacy of the mental model,
it does not necessarily render it completely useless. For example, many of us are able
to employ sufficiently useful mental models of searching with Google that we can use it
to find the Web information we are looking for; given that its search algorithm is both
complex and proprietary, however, we do not have a complete system model of how the
search engine actually functions. As such, it is possible for users to have mental models
based upon inaccurate or missing system models that are still sufficient for use.
Moreover, experience with a system does not necessarily translate to an accurate
system or mental model of it. For example, early research on users’ mental models of
the Internet found that only a small number of the users surveyed—many of whom used
it quite extensively and effectively for their desired purposes—possessed a complete
and detailed mental model of how the Internet functioned. This led the researchers to
conclude that “frequent use of the Internet appears to be more of a necessary than a
sufcient condition for detailed and complete mental models of the Internet” (Thatcher
& Greyling, 1998, 304). This nding has been echoed in related ndings about users’
mental models of search engines (Brandt & Uden, 2003), email (Renaud et al., 2014)
and credential management (Wastlund, Angulo, & Fischer-Hubner, 2012). In the case
of encrypted email in particular, even a computer-science background, which might
presumably affect participants’ understandings of technical systems, had no apparent
impact on the completeness or accuracy of participants’ mental models of email
communication (Renaud et al., 2014). These smaller experimental results are also
supported by broader, more recent findings. For example, a significant percentage of
global social network users are unaware that services like Facebook are on the Internet
(Mirani, 2015).
Methods
In order to learn more about how journalists’ mental models of information security
might be influencing their related attitudes and behaviors, we conducted in-depth,
semi-structured interviews with journalists (N = 15) and editors (N = 7) about their
security preferences, practices and concerns. Although there is no single methodology
for working with or identifying mental models (Stevens & Gentner, 1983; Renaud et al.,
2014), we determined that in-depth interviews would offer us the most comprehensive
view of “what people really have in their heads and guide their use of things” (Norman,
1983, p. 12). To help understand how the interplay between journalists’ individual work with
sources and other professional responsibilities—such as editing for and organizing other
reporters—shaped their needs and practices with respect to information security, the
interview script varied according to each participant’s primary role as a reporter or editor.
Thus, while both sets of interview questions focused on security attitudes and behaviors,
the “reporter” script focused on questions around individual attitudes and practices while
the “editor” script included broader policy questions. We made this distinction based on
our understanding of the differing scope of responsibility and awareness between these
two roles in journalistic organizations, differences that had some impact on our findings,
as discussed below.
Participants
All of the interview subjects were full-time employees at well-respected media
organizations, ranging in size and focus from small, U.S.- or issue-focused news outlets
to large, international media services with bureaus around the world. While the majority
of the participants were located in the United States, some (n = 8) were based in Europe
and were interviewed in their native language, with their responses translated to English
during transcription. Ten participants were men
and 12 were women.
Ethical Considerations
The entire protocol for this research was conducted under the auspices of the Columbia
University IRB, and special care was taken to limit the creation or exposure of any
sensitive information during the course of the research process. To this end, participants
were often recruited through existing professional networks via person-to-person
conversations; as such, the identity of particular interview subjects was often unknown to
the researcher prior to the interview itself.
Similarly, we were careful during the interviews to discourage participants from sharing
identifying information or sensitive details about particular sources, stories or incidents,
in order to limit the risk of compromising any individuals or the efficacy of particular
practices.
Participants were also given the option to decline recording of the interview, and to
decline to answer any individual questions, though all participants agreed to recording
and responded fully. All audio recordings were kept encrypted and labeled only in coded
form, both in storage and in transit.
Grounded theory
Once all interviews were complete, the audio recordings were translated where necessary,
transcribed in English, and coded by the researchers using a grounded theory
approach (Glaser & Strauss, 1967). The grounded theory method is designed to help
identify authentic themes from qualitative interview material through successive iterations
of coding and synthesis. By beginning with an initial coding process that relies heavily
on the actual language used by participants, a grounded theory method helps minimize
the influence of researcher expectation and bias when evaluating qualitative results by
drawing topic classifications directly from the participants’ interview material, rather than
by bucketing responses according to a predetermined rubric. Once a set of themes is
identified via the initial coding, these are then synthesized and refined—a process known
as “focused coding”—for application across the wider data set.
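As a rough illustration of the two coding passes described above, the short Python sketch below shows how initial codes drawn from participants’ language might later be grouped and tallied under focused codes. The excerpts, labels, and groupings are invented for illustration only; in practice both passes are interpretive and iterative rather than mechanical.

# Illustrative sketch of initial ("open") coding followed by focused coding.
# Excerpts and code labels are invented; real coding is interpretive, not a lookup.

from collections import Counter

# Initial coding: short labels drawn from the participants' own language
initial_codes = {
    "excerpt_01": ["not everyone has sensitive information"],
    "excerpt_02": ["depends on the beat", "meet in person"],
    "excerpt_03": ["someone actively spying on you"],
    "excerpt_04": ["meet in person", "don't put it in writing"],
}

# Focused coding: initial codes synthesized into broader themes
focused_codes = {
    "not everyone has sensitive information": "sensitivity as proxy for risk",
    "depends on the beat": "sensitivity as proxy for risk",
    "someone actively spying on you": "sensitivity as proxy for risk",
    "meet in person": "face-to-face as risk mitigation",
    "don't put it in writing": "face-to-face as risk mitigation",
}

# Tally how often each focused theme appears across the (invented) excerpts
theme_counts = Counter(
    focused_codes[code]
    for codes in initial_codes.values()
    for code in codes
)
print(theme_counts.most_common())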
Participant roles and expertise
In addition to the themes identified through our grounded theory analysis, we also
evaluated our results in the context of each participant’s primary role as a reporter or
editor, and in light of our own assessment of their emergent expertise in information
security. As we discuss below, however, none of these factors had a significant interaction with participants’
mental models of security.
Results
Overall, our results indicate that journalists’ mental model of information security can
best be characterized as a type of “security by obscurity”: the belief that one need not take
particular security precautions unless one is involved in work that is sensitive enough to
attract the attention of government actors. While we are intentionally using this term in a
way that deviates from the typical computer-security definition (Anderson, 2001; Mercuri
& Neumann, 2003), we do so in part to acknowledge the tangible security benefits that
obscure solutions can offer to organizations in terms of slowing down or reducing the
severity of an attack. As we discuss below, however, we find that there is little actual
“obscurity” available to journalists, making this conceptually attractive characterization of
security risk of little practical value.
“Sensitivity” as a proxy for risk exposure
In line with previous findings (Mitchell, Holcomb & Purcell, 2015a), a recurring theme in
our work was participants’ use of the “sensitivity” of particular stories, subjects, sources,
or geography as a proxy for security risk exposure, with more than half of our subjects
indicating that the need for security precautions was dependent on the presence of one of
these features. As one participant put it:
It depends on the sector, but not everyone has sensitive information. We have
many open sources that don’t require any particular protection...It’s just in
certain cases that one really needs to be careful.
This characterization of security risk applied to participants on both sides of the issue:
both journalists on national-security beats, for example, and those on other beats
suggested that the need for security was dependent on one’s coverage area. As another
participant commented:
If you were on the national security beat [security technology] would be really
useful. But I write about domestic social problems, education, crime, poverty.
When asked specifically about the need for information security-related practices, one
participant put it even more simply:
I feel like it depends on how much you think someone is actively spying on you.
Overall, these comments indicate that participants perceived security risk to be primarily
related to how sensitive or visible one’s subject of reporting may be to powerful actors,
rather than the particular vulnerabilities of the collaboration, sharing, recording and
transcribing mechanisms through which that reporting is done. Participants who did not
consider their coverage areas controversial, then, tended to minimize or dismiss the
existence of information security risks to themselves and their sources. Participants who
did cover “sensitive” beats, likewise, distinguished their own needs from those of other
colleagues who did not do this type of work.
This pattern was pervasive across both reporters and editors, despite the fact that editors
knew details of specific security incidents that did not necessarily support a relationship
between particular beats and security risk. While both groups adhered to this model of
security risk, our research suggests that the two groups rationalized it differently. Many
reporters expressed a lack of first-hand experience with security incidents or concerns.
As one reporter described it:
I haven’t really dealt with something that was life or death. An extra level of
security just didn’t seem necessary.
For editors, however, information security was beat-dependent enough that other, more
universal newsroom concerns were a higher priority. As one editor said:
[Information security is] handled kind of on an ad-hoc basis by different
reporters and teams depending on the sensitivity of the kind of stories they’re
working on…it’s just not a big enough priority for the kind of journalism we do
for it to be anywhere near the top of my tech wish list.
In addition to the above, the researchers also evaluated results for an interaction
between information-security expertise, investigative experience, and the use of subject
“sensitivity” as a proxy for security risk, but found no effect for these characteristics. In
other words, participants described security risk in terms of subject sensitivity regardless
of their information-security expertise or investigative experience.
Face-to-face conversation as risk mitigation
In keeping with their view of security risk as contingent on the sensitivity of coverage,
our participants reported using a wide variety of security-enhancing tools and techniques
in particular situations, some of which will be discussed below. One strategy
referenced by the vast majority of participants, however, was the use of face-to-face
conversation. One participant described this in the context of
working with a sensitive source:
If something is sensitive, I say to that person, I’ll come and see you.
However, this strategy also extended to communications with colleagues when dealing
with sensitive sources or topics. As another participant explained:
We don’t put anonymous sources in the emails, we don’t memorialize them in
the reporter’s notes—it’s all done verbally.
This strategy of avoiding the use of technology as a privacy or security measure has
been previously categorized as a privacy-enhancing avoidance behavior (Caine, 2009,
p. 3146). In this framework, individuals make behavioral choices explicitly intended to avoid
situations where privacy could be compromised or violated.
As in previous research (Mitchell, Holcomb & Purcell, 2015a), the majority of our
participants spoke of in-person conversations as a go-to security strategy. This was
true irrespective of participants’ role, information-security expertise, or experience with
investigative journalism. As we will discuss in more detail below, this may be at least
in part because this method is guaranteed to be understood by and accessible to all
parties. As one editor described it:
I tried to send an encrypted email to a manager, and she doesn’t have
[encrypted] email. So, it’s available to our company…but it hasn’t been a priority
for that manager. So I sent a note to her reporter…who was encrypted but was
not in the office. So I said, “I’ll walk over and have a conversation with you,
because I can’t send you what I would like to send you. I don’t want to put this
in writing.”
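This anecdote also reflects a technical constraint of end-to-end encrypted email: a message can be encrypted only for a recipient who has already generated and shared a public key. The short Python sketch below, which assumes GnuPG and the third-party python-gnupg wrapper are installed and uses an invented address, shows how encryption simply fails when no key for the recipient is available, which is precisely the situation the editor describes.

# Minimal sketch of why encrypted email requires setup on both ends.
# Assumes GnuPG and the python-gnupg package are installed; the address is invented.

import gnupg

gpg = gnupg.GPG()  # uses the default local GnuPG keyring

message = "Draft notes on a sensitive story -- not to be sent in plaintext."

# Encryption is only possible if the recipient's public key is in the keyring
result = gpg.encrypt(message, recipients=["manager@example-newsroom.org"])

if result.ok:
    print("Encrypted and ready to send:\n", str(result)[:64], "...")
else:
    # No public key for the manager: the editor's only fallback is to walk over
    print("Could not encrypt:", result.status)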
Discussion
Though technically a misappropriation of the computer-science term, we describe
journalists’ mental models of information security as “security by obscurity” to reflect
the two most salient and common features of journalists’ thinking about security risk
and avoidance in relation to digital communications technology. Specifically, this mental
model treats as “secure” any type of journalism that is sufficiently “obscure” not to be
of interest to powerful actors, such as nation-states. We also note, however, that while
“security by obscurity” is largely dismissed in the computer science community as a false
promise (Anderson, 2001; Mercuri & Neumann, 2003), it has been argued that in real-world
applications, “obscure” solutions can help delay the onset or mitigate the severity of an
actual attack (Stuttard, 2005). Given the large proportion of our participants and those in
previous studies whose mental model of security appears to fit this characterization,
we examine the ways in which this mental model both fits and fails journalists’ actual
information-security needs.
The appropriateness of “security by obscurity” as a mental model for journalists’
information-security risk depends on its ability to reflect or predict actual information security
risk. Accepting this model as accurate would require two things: first, an indication that
being “obscure” as a journalist or journalistic organization is possible, and second, that
being lower profile in this way offers a measure of security. If both hold, then it may
be that “security by obscurity” is a sensible, if imperfect, mental model of journalists’
information-security risk.
If not, however, it is worth looking deeper into the possible reasons why journalists
continue to use this mental model, to appreciate what might replace it, and how.
Are journalists “obscure”?
While research confirms that large news organizations are under regular attack
(Marquis-Boire & Huntley, 2014), it is difficult to ascertain the extent to which smaller
news organizations may face similar threats. That said, at least one type of attack is
known to affect media organizations in general: third-party malvertising. Small
and large news organizations alike tend to rely on third-party platforms to serve ads,
and the organizations affected when an ad platform is breached often number in the
hundreds (Brandom, 2014; Cox, 2015; Whitwam, 2016). Since employees of a news
organization are also likely to constitute its “readers,” their potential exposure to such
risks is arguably higher than that of the average reader.
Are “obscure” journalists more secure?
Given that all of our participants came from well-recognized media organizations,
their assessment of security risk tended to relate to individual topics, beats, regions or
stories, rather than applying to the media organization as a whole. As noted above, the
vast majority of our participants felt that security was a concern primarily for reporters
covering national security-related beats, rather than those covering local or social topics.
Under this rubric, do non-national security journalists face fewer security risks?
In this case, the evidence is less equivocal: many high-profile breaches and
hacks are actually perpetrated through spearphishing campaigns, in which “targets”
receive emails written to look as though they came from a friend or colleague, often addressed
directly to the target by name with a personal-sounding salutation. Virtually anyone with
an organizational email address is an equally likely “target”; one need not even be a
journalist. Such campaigns have been a documented or posited part of several high-
profile media breaches, including the Associated Press’ Twitter account hack (Oremus,
2013), and hacks of VICE (Greenberg, 2013) and Forbes (Greenberg, 2014).
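One reason such campaigns succeed is that the visible “From” display name and the underlying sending address need not match: a message signed as a colleague or the IT help desk can originate from an arbitrary external domain. The Python sketch below illustrates one simple heuristic of the kind mail filters commonly apply, flagging messages whose display name mimics an internal role while the address is external. The domain, role names, and sample headers are invented, and real attacks routinely evade checks this simple; the point is only that the trigger is an organizational address, not a particular beat.

# Simple illustrative heuristic for one spearphishing pattern: a trusted-looking
# display name paired with an external sending address. Sample data is invented.

from email.utils import parseaddr

ORG_DOMAIN = "example-newsroom.org"
KNOWN_COLLEAGUES = {"managing editor", "it help desk", "corporate communications"}

def looks_like_spearphish(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    # Display name imitates an internal role, but the mail comes from outside
    return (display_name.lower() in KNOWN_COLLEAGUES
            and domain != ORG_DOMAIN)

print(looks_like_spearphish('"IT Help Desk" <support@secure-login-update.com>'))  # True
print(looks_like_spearphish('"IT Help Desk" <helpdesk@example-newsroom.org>'))    # False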
Understanding the “security by obscurity” mental model
Given the mechanisms through which security breaches at journalistic institutions
have been enacted—as well as the more general targeting of such institutions—“security
by obscurity” appears to be a poor fit for journalists’ actual level of
information security risk. Yet while all of the above-cited evidence was publicly reported
(much of it before this study began), this mental model of information security risk still
persists across both our study population and those of other researchers. To understand
the potential sources of this incongruence, we examined our results for themes that
might illuminate why this mental model persists in the face of such limitations.
Insufficient system models
As we noted above, mental models are typically composed of one or more “system
models” along with domain-specific knowledge and concepts (Brandt & Uden, 2003).
There is, however, no requirement that a given system model be complete or even
accurate in order to serve as part of a useful mental model. Of the 22 participants in
this study, only a handful demonstrated what could be described as coherent
and complete system models of digital communications (this assessment was reached
based on comments made throughout the interviews regarding both the ownership and
operation of various systems, as well as their specific functions).
Beyond this handful, even participants who expressed an interest in greater information security
were aware of the challenge presented by their own limited understanding of the
systems with which they were dealing. As one participant put it:
I’ve been trying to reduce my Dropbox usage, and so I’ve been using just a
USB stick or something. Which, I actually have no idea how safe that is. It
seems more safe.
Another participant described information security risk as being about as predictable (and,
presumably, about as comprehensible) as a natural disaster:
It’s one of those things, like worrying about earthquakes or hurricanes … It’s
the sort of thing where a terrible incident could be catastrophic, and that’s
something that you worry about. However, there are lots of other fires to put out
every day.
Comments like these also illuminate another aspect of our findings: that the most
common security measure mentioned by participants was meeting in person. When
contrasted with the opacity and uncertainty of technological systems, meeting face-to-
face offers clarity and assurance.
This tendency to rely on well-understood security strategies was underscored by
one participant, who shared that where salient explanations for security measures were
provided, they were readily accepted and understood:
There’s many ways to roll out security tweaks, and doing them where you
make a clear and lucid case for what you’re doing and why—there was just no
pushback whatsoever. Everyone was just like, “Okay, great. We’ll do that.”
“Good enough” is good enough
Particularly in complex or ill-defined subject areas, such as information security, it is
typical for individuals to build mental models around simple explanations that capture the
features of a system or situation that are most readily apparent (Feltovich et al., 1996).
While these models can be useful insofar as they provide initial support for reasoning
about complex situations, they can also hinder more complete understandings (Feltovich
et al., 1996). Once established, moreover, a given mental model is rarely amended.
Instead, contradictory evidence is either dismissed or interpreted in a way that is
congruent with the existing mental model.
It is possible, then, to appreciate journalists’ “security by obscurity” mental model as a
way to reason about information security risk that is congruent with the most salient and
accessible features of high-profile security incidents. For example, while there have been
repeated reports of aggressive leak investigations by the SEC (Coronel, 2014; Hurtado,
2016), most recent leak prosecutions were related to national security reporting (e.g.,
those of Jeffrey Sterling and Stephen Jin-Woo Kim). Moreover, such cases are often reported
in great detail. By contrast, only rarely do news organizations share details of technical
or spearphishing attacks, making such events far less memorable. For most journalists,
then, there is a naturally dominant association between national security and other
“sensitive” beats and security risk, despite the greater frequency and, arguably, greater
threat posed by, for example, simple phishing campaigns.
Conclusions
By applying a mental models framework to journalists’ information security attitudes
and behaviors, we identify an approach to information security risk that can best be
described as “security by obscurity”: the belief that journalists do not need to concern
themselves with information security unless they are working on topics of perceived
interest to nation-state actors. Although this model is a demonstrably poor fit for the
actual security risk faced by our participants (who are all part of well-recognized media
organizations), this “security by obscurity” model may persist because it is congruent
with the most high-profile security incidents of recent years, and because journalists
have poor system models of digital communications technology.
At the same time, given that one’s actual security risk is more likely to be related
to one’s work as a journalist in any capacity than to any particular beat, the question remains of how
journalists’ mental models of information security risk can be updated to reflect their
actual threat landscape. Based on our findings, we recommend further study with a
focus on developing training modules and educational interventions designed to improve
journalists’ system models of digital communications and their understanding of threats.
References
Anderson, R. (2001). Why information security is hard: An economic perspective. Proceedings of
the 17th Annual Computer Security Applications Conference, 358. Retrieved from
http://dl.acm.org/citation.cfm?id=872016.872155
Associated Press (2013, April 23). Hackers compromise AP Twitter account. Associated Press.
Retrieved from http://bigstory.ap.org/article/hackers-compromise-ap-twitter-account
Blake, A. (2013, April 23). AP Twitter account hacked; hacker tweets of ‘explosions in the White
House’. The Washington Post. Retrieved from https://www.washingtonpost.com/news/post-politics/
wp/2013/04/23/ap-twitter-account-hacked-hacker-tweets-of-explosions-in-the-white-house/
Brandom, R. (2014, September 19). Google’s doubleclick ad servers exposed millions of computers
to malware. The Verge. Retrieved from http://www.theverge.com/2014/9/19/6537511/google-ad-
network-exposed-millions-of-computers-to-malware
Brandt, D. S., & Uden, L. (2003, July). Insight into the mental models of novice Internet searchers.
Communications of the ACM (7), 133-136.
Brinkema, J. L. (2011). U.S. v. Sterling, Fourth Circuit. Retrieved from http://www.documentcloud.
org/documents/229733-judge-leonie-brinkemas-ruling-quashing-subpoena.html
Caine, K. E. (2009). Supporting privacy by preventing misclosure. Extended abstracts of the ACM
conference on human factors in computing systems. (Doctoral Consortium).
Coronel, S. S. (2014, August 13). SEC aggressively investigates media leaks. Columbia Journalism
Review. Retrieved from http://www.cjr.org/the_kicker/sec_investigation_media_leaks_reuters.php
Cox, J. (2015, October 13). Malvertising hits ‘The Daily Mail,’ one of the biggest news sites on the
Web. Motherboard. Retrieved from http://motherboard.vice.com/read/malvertising-hits-the-daily-
mail-one-of-the-biggest-news-sites-on-the-web
Currier, C. (2013, July 30). Charting Obama’s crackdown on national security leaks. ProPublica.
Retrieved from https://www.propublica.org/special/sealing-loose-lips-charting-obamas-crackdown-
on-national-security-leaks
Doyle, J. K., & Ford, D. N. (1998). Mental models concepts for system dynamics research. System
Dynamics Review, 14. 3-29.
Feltovich, P. J., Spiro, R.J., Coulson, R.L. & Feltovich, J. (1996). Collaboration within and among
minds: Mastering complexity, within and among groups. In T. Koschmann (Ed.) CSCL: Theory and
Practice of an Emerging Paradigm. (pp. 27-34). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Festinger, L. (1962). Cognitive dissonance. Scientific American, 207(4), 93-107. doi: http://dx.doi.
org/10.1038/scientificamerican1062-93
Gaston, G. & Gerjo, K. (1996). The theory of planned behavior: A review of its applications to
health-related behaviors. American Journal of Health Promotion, 11(2), 87-98 doi: http://dx.doi.
org/10.4278/0890-1171-11.2.87
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative
research. Chicago, IL: Aldine Publishing Company.
Greenberg, A. (2014, February 20). How the Syrian electronic army hacked us: A detailed timeline.
Forbes. Retrieved from http://www.forbes.com/sites/andygreenberg/2014/02/20/how-the-syrian-
electronic-army-hacked-us-a-detailed-timeline/
Greenberg, A. (2013, November 11). Vice.com hacked by Syrian Electronic Army. SCMagazine.
Retrieved from http://www.scmagazine.com/vicecom-hacked-by-syrian-electronic-army/
article/320466/
Greenwald, G. (2013, June 6). NSA collecting phone records of millions of Verizon customers daily.
The Guardian. Retrieved from http://www.theguardian.com/world/2013/jun/06/nsa-phone-records-
verizon-court-order
Greenwald, G. & MacAskill, E. (2013, June 7). NSA Prism program taps in to user data of Apple,
Google and others. The Guardian. Retrieved from http://www.theguardian.com/world/2013/jun/06/
us-tech-giants-nsa-data
Grisham, L. (2015, January 5). Timeline: North Korea and the Sony Pictures hack. USA Today.
Retrieved from http://www.usatoday.com/story/news/nationnow/2014/12/18/sony-hack-timeline-
interview-north-korea/20601645/
Gross, J. B., & Rosson, M. B. (2007). Looking for trouble: Understanding end-user security
management. Proceedings of the 2007 Symposium on Computer Human Interaction for the
Management of Information Technology. doi: 10.1145/1234772.1234786
Holmes, H., & McGregor, S. E. (2015, February 5). Making online chats really ‘off the record’. Tow
Center. Retrieved from http://towcenter.org/making-online-chats-really-off-the-record/
Horowitz, S. (2013, May 13). Under sweeping subpoenas, Justice Department obtained AP phone
records in leak investigation. The Washington Post. Retrieved from https://www.washingtonpost.
com/world/national-security/under-sweeping-subpoenas-justice-department-obtained-ap-phone-
records-in-leak-investigation/2013/05/13/11d1bb82-bc11-11e2-89c9-3be8095fe767_story.html
Hurtado, P. (2016, February 23). The London whale. Bloomberg. Retrieved from http://www.
bloombergview.com/quicktake/the-london-whale
Kerr, J. C. (2013, June 19). AP president Pruitt accuses DOJ of rule violations in phone records
case; source intimidation. The Associated Press. Retrieved from http://www.ap.org/Content/AP-In-
The-News/2013/AP-President-Pruitt-accuses-DOJ-of-rule-violations-in-phone-records-case-source-
intimidation
Kulwin, N. (2015). Encrypting your email: What is PGP? Why is it important? And how do I use it?
re/code. Retrieved from http://recode.net/2015/05/13/encrypting-your-email-what-is-pgp-why-is-it-
important-and-how-do-i-use-it/
Liptak, A. (2012, February 11). A high-tech war on leaks. The New York Times. Retrieved from:
http://www.nytimes.com/2012/02/12/sunday-review/a-high-tech-war-on-leaks.html
Marimow, A.E. (2013, May 20). Justice Department’s scrutiny of Fox News reporter James Rosen in
leak case draws fire. The Washington Post. Retrieved from https://www.washingtonpost.com/local/
justice-departments-scrutiny-of-fox-news-reporter-james-rosen-in-leak-case-draws-fire/2013/05/20/
c6289eba-c162-11e2-8bd8-2788030e6b44_story.html
Marquis-Boire, M., & Huntley, S. (2014, March). Tomorrow’s news is today’s Intel: Journalists as
targets and compromise vectors. Black Hat Asia 2014. Retrieved from https://www.blackhat.com/
docs/asia-14/materials/Huntley/BH_Asia_2014_Boire_Huntley.pdf
Maass, P. (2015, May 11). CIA’s Jeffrey Sterling sentenced to 42 months for leaking to New York
Times journalist. The Intercept. Retrieved from https://theintercept.com/2015/05/11/sterling-
sentenced-for-cia-leak-to-nyt/
Mattise, N. (2014, June 22). Syrian electronic army targets Reuters again but ad network provided
the leak. ArsTechnica. Retrieved from http://arstechnica.com/security/2014/06/syrian-electronic-
army-targets-reuters-again-but-ad-network-provided-the-leak/
McGregor, S. (2013, May 15). AP phone records seizure reveals telecoms risks for journalists.
Columbia Journalism Review. Retrieved from http://www.cjr.org/cloud_control/ap_phone_records_
seizure_revea.php
McGregor, S. T. H., Charters, P. & Roesner, F. (2015). Investigating the security needs and practices
of journalists. Proceedings of the 24th USENIX Security Symposium.
Mercuri, R. T. & Neumann, P. G. (2003) Security by obscurity. Communications of the ACM, 46(1).
Mirani, L. (2015, February 9). Millions of Facebook users have no idea they’re using the Internet.
Quartz. Retrieved from http://qz.com/333313/millions-of-facebook-users-have-no-idea-theyre-using-
the-internet/
Mitchell, A., Holcomb, J., & Purcell, K. (2015a, February). Investigative journalists and digital
security: Perceptions of vulnerability and changes in behavior. Pew Research Center. Retrieved
from http://www.journalism.org/files/2015/02/PJ_InvestigativeJournalists_0205152.pdf
Mitchell, A., Holcomb, J., & Purcell, K. (2015b, February). Journalist training and knowledge about
digital security. Pew Research Center. Retrieved from http://www.journalism.org/2015/02/05/
journalist-training-and-knowledge-about-digital-security/
Norman, D. A. (1983). Some observations on mental models. In A. L. Stevens & D. Gentner (Eds.),
Mental models (pp. 7-14). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Oremus, W. (2013, April 23). Would you click the link in this email that apparently tricked the AP?
Slate. Retrieved from http://www.slate.com/blogs/future_tense/2013/04/23/ap_twitter_hack_would_
you_click_the_link_in_this_phishing_email.html
Perlroth, N. (2013, January 31). Hackers in China attacked The Times for last 4 months. The New
York Times. Retrieved from http://www.nytimes.com/2013/01/31/technology/chinese-hackers-
infiltrate-new-york-times-computers.html
Perlroth, N. (2013, July 12). Washington Post joins list of news media hacked by the Chinese. The
New York Times. Retrieved from http://www.nytimes.com/2013/02/02/technology/washington-posts-
joins-list-of-media-hacked-by-the-chinese.html?_r=0
Renaud, K., Volkamer, M., & Renkema-Padmos, A. (2014). Why doesn’t Jane protect her privacy?
Proceedings of the 2014 Privacy Enhancing Technology Symposium. (Amsterdam, Netherlands).
Rook, F. W., & Donnell, M. L. (1993). Human cognition and the expert system interface: Mental
models and inference explanations. IEEE Transactions on Systems, Man, and Cybernetics (6),
1649-1661.
Ruane, K. A. (2011). Journalists’ privilege: Overview of the law and legislation in recent Congresses.
Congressional Research Service.
Sasse, M. A., Brostoff, S., & Weirich, D. (2001). Transforming the ‘weakest link’: A human/computer
interaction approach to usable and effective security. BT Technology Journal, 19(3), 122-131.
Savage, C. (2013, July 12). Holder tightens rules on getting reporters’ data. The New York Times.
Retrieved from http://www.nytimes.com/2013/07/13/us/holder-to-tighten-rules-for-obtaining-
reporters-data.html
Shane, S. & Savage, C. (2012, June 19). Administration took accidental path to setting record for
leak cases. The New York Times. Retrieved from http://www.nytimes.com/2012/06/20/us/politics/
accidental-path-to-record-leak-cases-under-obama.html?_r=0
Staggers, N., & Norcio, A. F. (1993). Mental models: Concepts for human-computer interaction
research. International Journal of Man-Machine Studies 38(4) 587-605.
doi:10.1006/imms.1993.1028
Stevens, A. L., & Gentner, D. (1983). Introduction. In A. L. Stevens & D. Gentner (Eds.), Mental
models (pp. 1-6). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Stuttard, D. (2005). Security & obscurity. Network Security, 2005 (7), 10-12. doi:10.1016/S1353-
4858(05)70259-2
Thatcher, A., & Greyling, M. (1998). Mental models of the Internet. International Journal of Industrial
Ergonomics, 22(4-5), 299-305. doi:10.1016/S0169-8141(97)00081-4
Wagstaff, J. (2014, March 28). Journalists, media under attack from hackers: Google
researchers. Reuters. Retrieved from http://www.reuters.com/article/us-media-cybercrime-
idUSBREA2R0EU20140328
Wastlund, E., Angulo, J., & Fischer-Hubner, S. (2012). Evoking comprehensive mental models of
anonymous credentials. iNetSec 2011, 1-14. doi: 10.1007/978-3-642-27585-2_1
Whitten, A., & Tygar, J. D. (1999). Why Johnny can’t encrypt: A usability evaluation of PGP 5.0.
Proceedings of the 8th USENIX Security Symposium.
Whitwam, R. (2016, January 10). Forbes forced readers to disable ad-blocking, then served them
malware ads. Geek.com. Retrieved from http://www.geek.com/news/forbes-forced-readers-to-
disable-ad-blocking-then-served-them-malware-ads-1644231/
Susan E. McGregor is assistant director of the Tow Center for Digital Journalism and
assistant professor at Columbia Journalism School, where she helps supervise the
dual-degree program in Journalism & Computer Science. She teaches primarily in areas
of data journalism & information visualization, with research interests in information
security, knowledge management and alternative forms of digital distribution. McGregor
was the Senior Programmer on the News Graphics team at the Wall Street Journal
Online for four years before joining Columbia Journalism School in 2011. In 2012,
McGregor received a Magic Grant from the Brown Institute for Media Innovation for her
work on Dispatch, a mobile app for secure source communication. In June of 2014 she
published the Tow/Knight report “Digital Security and Source Protection for Journalists,”
which explores the legal and technical underpinnings of the challenges journalists face
in protecting sources while using digital tools to report. In the fall of 2015, the National
Science Foundation funded McGregor and collaborators Drs. Kelly Caine and Franzi
Roesner to research and develop secure, usable communications tools for journalists
and others. She conducts regular trainings with journalists and academics on practical
strategies for protecting sources and research subjects.
Elizabeth Anne Watkins is a maker, writer, and researcher interested in the future of
collaborative, meaningful work in digital ecosystems. Using a mixed-methods approach,
she stitches research in knowledge management and organizational behavior together
with insights gleaned from innovative community practices in art-making and storytelling.
Her written case studies have been published by Harvard Business School, where she
also worked with startups at the Harvard Innovation Lab and the Berkman Center for
Internet and Society. She studied video art at the University of California at Irvine and
received a Master of Science degree in Art, Culture, and Technology at MIT. She’s
currently pursuing a PhD in Communications at Columbia University in the city of
New York, where she’s a Research Assistant affiliated with the Tow Center for Digital
Journalism contributing to studies in information security.
... In general, what journalists conjure up of listening about infosec, is studied by Susan McGregor and Elizabeth Watkins [57] to come up with their mental model. ...
Article
The purpose of this review paper is to garner knowledge about the information security and cryptography encryption practices implementation for journalistic work and its effectiveness in thwarting software security breaches in the wake of ‘Journalism After Snowden’. Systematic literature review for the ‘information security and cryptography encryption in journalism’ employed with an eye to synthesize existing practices in this field. For this, at first the existing approachable research article databases and search engines employed to download or get the abstract of relevant scientific articles which are then used for citation and summarization works in a systematic rigorous anatomization. Contingent upon them their analysis and synthesis employed to arrive at the findings. Research papers collated for the purpose of writing this review paper lighted up the vital issues related to investigative journalists’ safety practices promulgation inadequacies even after the UNESCO 2017 and 2022 guidelines for urgent instrumentalization needs of journalists on the part of its’ member States.
... Additionally, journalistic perceptions of security and their particular beat inform what, if any, digital security precautions they might take. McGregor and Watkins (2016) revealed that journalists consider security risks through a mental model of "security by obscurity," or the belief that they do not need to concern themselves with security risks unless they are working on particularly sensitive beats. Crete-Nishihata et al. (2020) have argued that investigative reporters have mental models about digital security that are distinct from those of non-investigative colleagues and are more likely to cite surveillance, harassment, and legal actions against them as primary concerns. ...
Article
Mob censorship, which “expresses the will of ordinary citizens to exert power over journalists through discursive violence” is traditionally considered a grassroots phenomenon. However, within technically mediated systems, who is behind the mob is sometimes unclear. We therefore ask how the technical affordances of the Internet and telecommunications networks complicate the identification of attackers and their motivations and multiply the forms of retaliation that attackers level against journalists. We conducted 18 semistructured interviews with seven current or former journalists, as well as 11 professionals with experience defending news organizations, including security specialists, press freedom advocates, and newsroom infrastructure support staff. Through a constructivist grounded theory approach and in conversation with Lewis and Westlund’s (2015) 4A framework, we found that journalists and those defending news organizations do not reliably identify sources and motivations behind attacks, which may be grassroots in nature but may also be instigated by corporate or government actors. Journalists nonetheless infer attribution and motivation from the context surrounding attacks. Systemic issues related to the lack of diversity, ongoing financial constraints, and journalistic norms of engagement, alongside a lack of internal and platform support, exacerbate repercussions from these attacks and harm journalism’s role in a democracy.
Chapter
Threats associated with the consumer Internet of Things (IoT) may particularly inhibit the work and wellbeing of journalists, especially because of the danger of technological surveillance and the imperative to protect confidential sources. These issues may have knock-on effects on societal stability and democratic processes if press freedom is eroded. Still, journalists remain unaware of potential IoT threats, and so are unable to incorporate them into risk assessments or to advise their sources. This shows a clear gap in the literature, requiring immediate attention. This article therefore identifies and organises distinctive and novel threats to journalism from the consumer IoT. The article presents a novel conceptualisation of threats to the press in six categories: regulatory gaps, legal threats, profiling threats, tracking threats, data and device modification threats and networked device threats. Each of the threats in these categories includes a description and hypothetical consequences that include real-life ways in which IoT devices can be used to inhibit journalistic work, building on interdisciplinary literature analysis and expert interviews. In so doing, this article synthesises technical information about IoT device capabilities with human security and privacy requirements tailored to a specific at-risk population: journalists. It is therefore important for cyber science scholarship to address the contemporary and emerging risks associated with IoT devices to vulnerable groups such as journalists. This exploratory conceptualisation enables the evidence-based conceptual evolution of understandings of cyber security risks to journalists.
Chapter
Mental models are essential to learning how to adapt to new and evolving circumstances. Best practice in cybersecurity is a constantly changing area, as the list of recommended measures evolves in response to the increasing complexity and scope of threats. In response, users have adapted to these threats and their corresponding countermeasures with mental models that simplify the complex networked environments they inhabit. This paper presents an overview of more than a decade of research on users’ mental models of cybersecurity threats and security measures across different kinds of environments. The lessons from this body of work offer valuable insights into how users learn and adapt, and into how their backgrounds and situational awareness play a critical role in shaping their mental models of cybersecurity.
Article
Journalists are increasingly attacked in response to their work, yet they often lack the necessary support and training to protect themselves, their sources, and their communications. Despite this, limited scholarly attention has been paid to how journalism schools approach digital security education. This paper draws on an analysis of 106 US programs and 23 semi-structured interviews with journalism students and professors to examine how the next generation of journalists learns about digital security practices. Our findings show that most programs (88.7%) do not offer formal digital security programming and that digital security skills are often deprioritized in favor of skills seen as more significant contributors to post-graduate hiring, a key priority of journalism programs. Additional barriers include a lack of space and time in existing curricula for added digital security coursework, a perception that students are not interested, and few professors with related knowledge. When security education is introduced, it is often done in informal and ad hoc ways, largely led by “security champions,” both within and outside of journalism, who advocate for its legitimacy. These findings have important implications for journalism education and for journalists’ capacity to carry out their work amidst a deteriorating safety environment in the United States.
Article
Investigative journalism, like other sectors of social life, has undergone significant changes due to globalization, technological progress, and the Western world’s turn to neoliberalism. This context has facilitated the emergence of new practices within the profession, particularly new modes of communication between journalists and their confidential sources. This qualitative study focuses on the meaning journalists attribute to the use of these technologies in their relationships with their sources. Anonymity tools are being used to build the professional identity of investigative journalism (both collectively and individually) and therefore constitute a resource in the power relationship between journalists and their sources, a relationship that is not fundamentally changed by their use.
Article
Information security (infosec) has become a field of primary interest for journalism, especially in the wake of the 2013 Edward Snowden revelations about the ramifications of Internet mass surveillance. Following the increasing dangers posed by digital threats—and surveillance in particular—to the safety of journalists and their sources, newsrooms and reporters have shown increased interest in technological solutions for better protecting their work and sources. In particular, the adoption of strong encryption tools for communication has become an urgent matter for journalists worldwide and a growing niche of research in journalism studies as well. By reviewing the existing literature in the field, this article examines how journalism studies has approached the use of encryption and information security tools for journalistic purposes. Based on a review of the major journalism studies journals and other publications, the article offers an overview of research advancements, highlighting the current major trends and research areas.
Article
Internet surveillance has become a crucial issue for journalism. The “Snowden moment” has shed light on the risks that journalists and their sources face while communicating online and has shown how journalists themselves can be targets of surveillance operations or other forms of malicious digital attacks from different actors. More recent revelations, such as those coming from the “Pegasus Project”, have underlined even more dangerous threats posed to the safety of journalists, increasingly targeted with spyware technology. Due to the sensitivity of their work and sources, and given their strong “watchdog” role in democracies, investigative reporters are in a particularly dangerous position when it comes to the potential chilling effects of surveillance on their journalistic work. This paper analyzes investigative journalists’ views and self-reflections on the impacts of Internet surveillance on their work by means of in-depth qualitative interviews with reporters affiliated with the International Consortium of Investigative Journalists (ICIJ) and working in Italy, Germany, Hungary, Spain, Switzerland, and the UK. The paper touches on different angles of the Internet surveillance issue by analyzing journalists’ concerns about national and international surveillance players and the overall impact of surveillance on news work.
Article
This paper utilizes concepts from new institutionalism to help explain journalists’ and news organizations’ resistance to implementing security-related practices despite a deteriorating safety and security environment for journalists in the United States. Through 30 interviews with journalists, technologists, and media lawyers, I identify three main variables behind the resistance to the development of newsroom security cultures, as well as a new social actor necessary for that development: the “security champion.” The emergence of this new institutional entrepreneur highlights an intriguing tension: although news organizations have slowly adopted the anonymous whistleblowing platform SecureDrop, they have not necessarily institutionalized security practices throughout the newsroom. The decoupling of these two factors represents an attempt by news organizations to claim institutional legitimacy while not changing core practices. In conjunction with this phenomenon, inspired individuals in newsrooms across the country are becoming ad hoc “security champions” in order to build security cultures from the ground up.
Article
Information security tools have gained prominence and importance in the journalism field and are now being adopted more frequently by newsrooms and investigative journalists. SecureDrop, open-source software for operating whistleblowing platforms, is now a common component of the toolboxes of journalists willing to work with stronger levels of security, especially in regard to source protection. By means of a content analysis of publicly available documents and semi-structured interviews with journalists using the software, this article looks at news organizations’ uses of SecureDrop, journalists’ perceptions of the software’s strengths and limitations, and the accountability practices adopted by news organizations in regard to their use of SecureDrop. Overall, this article contributes to the understanding of how SecureDrop, and information security in general, are entering the journalistic field and becoming accepted journalistic practices.
Conference Paper
End-to-end encryption has been heralded by privacy and security researchers as an effective defence against dragnet surveillance, but there is no evidence of widespread end-user uptake. We argue that the non-adoption of end-to-end encryption might not be entirely due to the usability issues identified by Whitten and Tygar in their seminal paper “Why Johnny Can’t Encrypt”. Our investigation revealed a number of fundamental issues such as incomplete threat models, misaligned incentives, and a general absence of understanding of the email architecture. From our data and the related research literature we found evidence of a number of potential explanations for the low uptake of end-to-end encryption. This suggests that merely increasing the availability and usability of encryption functionality in email clients will not automatically encourage increased deployment by email users. We shall have to focus, first, on building comprehensive end-user mental models related to email and email security. We conclude by suggesting directions for future research.
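The “end-to-end” property at issue in this line of work can be made concrete with a short sketch. The Python fragment below is a minimal illustration only, not drawn from the cited study; it assumes the PyNaCl library and invented names (reporter_key, source_key) to show that a message encrypted to a recipient’s public key travels through intermediaries, such as mail servers, only as ciphertext, and can be recovered only by the holder of the matching private key.

    # Minimal end-to-end encryption sketch (assumes: pip install pynacl).
    from nacl.public import PrivateKey, Box

    # Each endpoint generates its own key pair; private keys never leave the device.
    reporter_key = PrivateKey.generate()
    source_key = PrivateKey.generate()

    # The source encrypts a message to the reporter's public key.
    sending_box = Box(source_key, reporter_key.public_key)
    ciphertext = sending_box.encrypt(b"Meet at the usual place, 9pm.")

    # Any relaying server sees only ciphertext bytes.
    print(ciphertext.hex()[:40], "...")

    # Only the reporter's private key (paired with the source's public key)
    # can recover the plaintext.
    receiving_box = Box(reporter_key, source_key.public_key)
    assert receiving_box.decrypt(ciphertext) == b"Meet at the usual place, 9pm."

Deployed tools such as PGP-based email plugins wrap this exchange in key management and verification steps, which is precisely where the incomplete mental models described above tend to arise.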
Article
As the amount of knowledge and the number of users connected to the Internet rapidly expand, the need to understand how users conceptualize this giant network becomes more important. Through the medium of sophisticated software interfaces, users must navigate through cyberspace and access relevant information. Access to users’ mental models of the Internet will enable designers and information technologists to better understand and structure the knowledge deposits of the future. This study explores the use of drawings to access users’ mental models of the Internet among a group of South Africans with varying experience with the Internet and computers. Mental models from 51 university respondents were categorized by three independent raters and arranged into six categories. Analyses of these categories suggest that the mental model categories may be hierarchically ordered according to respondents’ experience with the Internet. These results are discussed in terms of the organization of knowledge on the Internet and the design of Internet interfaces. Relevance to industry: The results of this paper suggest that current windows-based Internet interfaces are not facilitating a deeper understanding of the underlying structure of the Internet, which may lead to users experiencing increasing navigation problems. Designers should therefore consider embedding cues that facilitate accurate mental model formation into software interfaces.
Conference Paper
Anonymous credentials are a fundamental technology for preserving end users’ privacy by enforcing data minimization for online applications. However, the design of user-friendly interfaces that convey their privacy benefits to users is still a major challenge. Users are still unfamiliar with the new and rather complex concept of anonymous credentials, since no obvious real-world analogies exist that can help them form correct mental models. In this paper we explore different ways in which suitable mental models of the data minimization property of anonymous credentials can be evoked in end users. To achieve this, we investigate three approaches in the context of an e-shopping scenario: a card-based approach, an attribute-based approach, and an adapted card-based approach. Results show that the adapted card-based approach is a promising way to evoke the right mental models for anonymous credential applications. However, better design paradigms are still needed to make users understand that attributes can be used to satisfy conditions without revealing the values of the attributes themselves.
Article
Although “mental models” are of central importance to system dynamics research and practice, the field has yet to develop an unambiguous and agreed upon definition of them. To begin to address this problem, existing definitions and descriptions of mental models in system dynamics and several literatures related to cognitive science were reviewed and compared. Available definitions were found to be overly brief, general, and vague, and different authors were found to markedly disagree on the basic characteristics of mental models. Based on this review, we concluded that in order to reduce the amount of confusion in the literature, the mental models concept should be “unbundled” and the term “mental models” should be used more narrowly. To initiate a dialogue through which the system dynamics community might achieve a shared understanding of mental models, we propose a new definition of “mental models of dynamic systems” accompanied by an extended annotation that explains the definitional choices made and suggests terms for other cognitive structures left undefined by narrowing the mental model concept. Suggestions for future research that could improve the field's ability to further define mental models are discussed. © 1998 John Wiley & Sons, Ltd.
Article
In Branzburg v. Hayes, 408 U.S. 665, 679-680 (1972), the Supreme Court wrote that journalists “claim that to gather news it is often necessary to agree either not to identify the source of information published or to publish only part of the facts revealed, or both; that if the reporter is nevertheless forced to reveal these confidences to a grand jury the source so identified and other confidential sources of other reporters will be measurably deterred from furnishing publishable information, all to the detriment of the free flow of information protected by the First Amendment.” The Court held, nonetheless, that the First Amendment did not provide even a qualified privilege for journalists to refuse to appear and testify before state or federal grand juries. The only situation it mentioned in which the First Amendment would allow a reporter to refuse to testify was in the case of grand jury investigations “... instituted or conducted other than in good faith.... Official harassment of the press undertaken not for purposes of law enforcement but to disrupt a reporter’s relationship with his news sources would have no justification.” Though the Supreme Court concluded that the First Amendment does not provide a journalists’ privilege in grand jury proceedings, 49 states have adopted a journalists’ privilege in various types of proceedings; 33 have done so by statute, and 16 by court decision. Journalists have no such privilege in federal proceedings. On July 6, 2005, a federal district court in Washington, DC, found Judith Miller of the New York Times in contempt of court for refusing to cooperate in a grand jury investigation relating to the leak of the identity of an undercover CIA agent. The court ordered Ms. Miller to serve time in jail, and she spent 85 days there. She secured her release only after her informant, I. Lewis Libby, gave her permission to reveal his identity.
Conference Paper
End users are often cast as the weak link in computer security; they fall victim to social engineering and tend to know very little about security technology and policies. This paper challenges that view as derogatory and unconstructive, arguing that users, as agents of their organizations, often have sophisticated strategies for handling sensitive data and are quite cautious. Existing work on user security practice has failed to consider how users themselves view security; this paper provides an account and analysis of end-user perspectives on security management. We suggest that properly designed systems would bridge the knowledge gap (where necessary) and mask levels of detail (where possible), allowing users to manage their security needs in synchrony with the needs of the organization. The evidence for our arguments comes from a set of in-depth interviews with users who had no special training in, knowledge of, or interest in computer security. We conclude with guidelines for security and privacy tools that better leverage users’ existing knowledge.
Conference Paper
Despite extensive concerns about privacy and the multiple potential consequences of revealing personal information, many users still experience invasions of privacy when interacting with technology. For this reason, privacy is an important and complex issue in HCI. This thesis focuses on specific psychological issues of privacy in HCI, primarily the accidental disclosure of information, or “misclosure.” Using multiple methods, including focus groups, a diary study, and an experimental manipulation, this thesis seeks to catalog the incidence of such errors, identify the interface issues associated with each type of error, and provide design recommendations for preventing each type of disclosure error.