
Open Source Intelligence and Privacy Dilemmas: Is it Time to Reassess State Accountability?


Abstract

Open source intelligence (OSINT) is increasingly used for security and safety purposes. Even though security and intelligence agencies and the police are using messages on social networking sites, weblogs, blogs or apps, state accountability mechanisms have found it difficult to adapt to the online culture. Consider for instance the dilemma that open source information (OSINF) is frequently collected, processed, mined and stored by private companies. From a human rights perspective, this gathering of OSINT demands proper checks and balances. Even though laws, regulations and policies may recognise this, it is important to review whether the gathering of OSINF online leads to new dilemmas. We conclude that state accountability should at least entail that the actual process and outcome of data collection, processing, mining and sharing is subjected to review and/or sanctions. Furthermore, it should become transparent which entity or who carries responsibility for the use of OSINT.
Quirine Eijkman 1 and Daan Weggemans 2
Introduction
Providing for safety and security is a core task of the state. The rapid development
of technology has, in many ways, affected the dynamics of this responsibility.
Intelligence and security agencies and the police increasingly rely on information technology that facilitates the collection of Open Source Information (OSINF). OSINF forms the basis of Open Source Intelligence (OSINT), which is gathered from publicly available, unclassified sources ranging from (foreign) newspapers, governmental reports, public data, maps and academic sites to blogs, social networking sites, apps and web-based communities.
With the evolution of the internet, a vast array of information has become
retrievable with the click of a mouse. In addition to this accumulation of valuable
data, the internet also contains large quantities of personal information, often posted
online by people themselves through social networking sites, blogging or apps.
Individuals regularly share personal information online, which is stored as digital
data in databases or in the cloud. This, in turn, has led to new perceptions about
how this personal data may be used for security and safety purposes. In many areas the use of OSINF, e.g. the monitoring of social networking sites, blogs or apps, is growing significantly. Several research centres and think tanks, both public and private, have been established solely with the aim of studying, coordinating or developing new approaches to (the gathering of) OSINF and the acquiring of OSINT. To quote one of the main themes of the 2010 'Naked Intelligence' conference, 'the gathering of knowledge in a see through world' has become a prominent aspect of the security and intelligence industry.3
New strategies for using OSINF are also designed to anticipate national
security threats such as international terrorism. Although the chances are slim that a
terrorist will post his or her selected target location online, these measures are
helpful in monitoring violent extremist views. In 2012 this was confirmed by the
Dutch General Intelligence and Security Service (AIVD) ‘Jihadism on the Web’
report, in which the internet was labelled as the ‘most important medium for the
1 Quirine Eijkman (PhD) is a senior researcher / lecturer at the Centre for Terrorism and
Counterterrorism (CTC), Leiden University-Campus the Hague. Email:
q.a.m.eijkman@cdh.leidenuniv.nl.
2 Daan Weggemans (MSc.) is a researcher / lecturer at the Centre for Terrorism and
Counterterrorism (CTC), Leiden University-Campus the Hague. Email:
d.j.weggemans@cdh.leidenuniv.nl
3 The conference Naked Intelligence, 'Gathering Knowledge in a See Through World', originates from the collaboration of two private corporations in the field of open source intelligence production: Sandstone s.a. and Infosphere AB, 2010. Retrieved 27 January 2013,
http://www.telestrategies.com/ni_10/index.htm.
dissemination of these (jihadist) ideologies’.4 Returning to the state’s core task of
providing safety and security for its citizens, we argue that gathering Open Source
Information is a legitimate tool for security governance. As Ben Hayes stated, ‘(…)
security services would be negligent if they didn’t utilize information in the public
domain to inform their work’.5
However, the legitimacy of the growing use of OSINF cannot be derived
solely from the pursuit of security or safety concerns. (Side) effects for human
rights should also be considered. In this article we therefore discuss challenges
associated with the use of OSINF, mainly in relation to the freedom of the internet and the rights to privacy and data protection. From a human rights perspective, the gathering of OSINT demands proper checks and balances. This is especially important when security and intelligence agencies as well as private companies
use and exchange information. In this article, we (re)assess the checks and balances
for the use and sharing of open source intelligence by, and between, security and
intelligence agencies and law enforcement agencies. Even though laws, regulations
and policies in relation to OSINF may recognise the need for checks and balances
including the value of the right to privacy, data protection or a fair trial, it is
nevertheless important to review whether the gathering of OSINF online requires
more (state) accountability. Hence, in this article we focus on the dilemmas that arise with the collection of open source information by security and intelligence agencies and, to a lesser extent, the police. We thus address the following questions:
Have state accountability mechanisms been able to keep up with the rapidly
increasing practice of open source information gathering and exchange? Are
sufficient safeguards provided to protect human rights?
Open source information in context
After the 9/11 attacks the National Commission on Terrorist Attacks upon the
United States recommended a greater role for OSINT within security agencies.6 The
subsequent institutionalization of open source collection has been called 'one of the most high profile reforms (…) aimed at the preventing of terrorist attacks and
avoiding intelligence failures’.7 Meanwhile it is often stated that ‘ninety percent of
intelligence comes from open sources’.8
The definition of open source information, which lies at the basis of this form of intelligence, is ambiguous. In this section we review some of the definitions and
4 General Dutch Intelligence and Security Service, ‘Jihadism on the Web’, 2012, p.3. Retrieved
27 July 2012, https://www.aivd.nl/@2872/jihadistisch.
5 B. Hayes, ‘Spying in a see through world: The ‘Open Source’ Intelligence Industry’, in
Statewatch Bulletin, 2010, no. 1, p. 2.
6 National Commission on Terrorist Attacks Upon the United States, The 9/11 Commission
Report, 2004. Retrieved 25 August 2012, http://govinfo.library.unt.edu/911/report/index.htm
7 H. Bean, No More Secrets: Open Source Information and the Reshaping of U.S.
Intelligence, Praeger, Santa Monica, CA, 2011, p. 12.
8 R.A. Best and A. Cumming, ‘Open Source Intelligence (OSINT): Issues for Congress’,
Congressional Research Service, 2007, p. 7. Retrieved 2 July 2012,
http://www.au.af.mil/au/awc/awcgate/crs/rl34270.pdf.
developments concerning the use of open sources for national and public
security purposes. Open source information is information that is publicly available.
In other words, it is what is not 'confidential' and is out there in the (digital) public domain. It is the information that anyone can 'lawfully obtain by request, purchase,
or observation’.9 Examples of open information sources include the media (e.g.
radio, television, newspapers, websites, blogs), official (governmental) reports,
academic sources (papers, conferences, seminars), commercial data and so-called 'gray literature' such as working papers, unofficial government documents and
surveys.10 In this article we focus on the increased availability of personal open
source information on the World Wide Web (‘www’). Not only online news pages
but also ‘weblogs’, ‘chat rooms’, ‘social networking sites’ including Facebook,
Twitter or Skype and 'apps' such as WhatsApp or WeChat, are perceived as potentially valuable sources for intelligence and security services and the police. Here, one can, through information technology, find unique information about the lives of millions of (world) citizens.11 Open Source Centre director Doug Naquin once said 'We're looking at YouTube, which carries some unique and honest-to-
goodness intelligence’.12
To acquire this Open Source Intelligence, OSINF (raw data) in the form of an
interview, a photograph, a tweet, etc. is ‘analyzed, edited, filtered and validated’.
Furthermore, the data is linked with different media sources (e.g. the internet,
academic journals, official reports, newspapers, radio and television), in order to
verify, complement and contextualize the collected information.13 As is described
above, an advantage of this data is that it has become widely available nowadays.
Especially with the ‘information explosion’, which is the result of the rapid
development of the internet, obtaining OSINF has become significantly cheaper.
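To make these processing steps more concrete, the following minimal sketch (our own illustration in Python; the item structure, function names and corroboration rule are assumptions, not a description of any agency's actual tooling) shows how raw OSINF items might be filtered, cross-referenced with other media sources and retained only when corroborated.

# Hypothetical sketch of the OSINF-to-OSINT steps described above:
# collect raw items, filter and validate them, and cross-reference them
# with other media sources to add context. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class OsinfItem:
    source: str        # e.g. a tweet, a blog post or a news article
    text: str
    corroborations: list = field(default_factory=list)

def validate(item: OsinfItem, other_sources: list) -> OsinfItem:
    """Cross-reference a raw item with other media sources (verification step)."""
    for other in other_sources:
        if other.source != item.source and item.text.lower() in other.text.lower():
            item.corroborations.append(other.source)
    return item

def to_osint(items: list, other_sources: list) -> list:
    """Filter out empty items and keep only those corroborated by a second source."""
    validated = [validate(i, other_sources) for i in items if i.text.strip()]
    return [i for i in validated if i.corroborations]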
Simultaneously, the frequent use of open sources by security and
intelligence and law enforcement agencies has been facilitated by legislation. The
Dutch Intelligence and Security Services Act, for instance, states that open sources first need to be checked before any other methods can be applied.14 Other
countries, though sometimes implicitly, often demand similar approaches to
9 National Open Source Enterprise, Intelligence Community Directive 301, July 2006.
10 A. Sands, ‘Integrating Open Sources into Transnational Threat Assessments’, in J. Sims and
B. Gerber (eds.), Transforming US Intelligence, Washington, D.C., 2005, pp.64-65.
11 For instance, at the end of March 2012 Facebook had 901 million active users. In the first quarter of 2012 an average of 3.2 billion likes and comments were generated by its users each
day. See: Facebook, ‘Key Facts’, 2012. Retrieved 2 July 2012,
http://newsroom.fb.com/content/default.aspx?NewsAreaId=22.
12 D. Naquin, ‘CIRA Newsletter, Remarks by Doug Naquin’, in CIRA Luncheon, 2007, p.7.
Retrieved 2 July 2012, http://www.fas.org/irp/eprint/naquin.pdf.
13 NATO, NATO Open Source Intelligence Handbook, 2001, p.2. Retrieved 4 July 2012,
http://www.oss.net/dynamaster/file_archive/030201/ca5fb66734f540fbb4f8f6ef759b258c/NA
TO%20OSINT%20Handbook%20v1.2%20-%20Jan%202002.pdf.
14 Wet op inlichtingen en Veiligheidsdiensten 2002 (The Intelligence and Security Services Act
2002), 2002. Retrieved 3 July 2012, http://www.st-ab.nl/wetten/0662_Wet_op_de_
inlichtingen -_en_veiligheidsdiensten_2002.htm.
information gathering.15 In addition to the growth of available information, the number of public and private organizations concerned with OSINF and OSINT has increased substantially. In the United States, the National Open Source Center (NOSC) was opened on 1 November 2005 with the goal of effectively collecting available open source information. In Europe, the EUROSINT Forum was established in 2007 to exchange knowledge and experience between different professional OSINF analysts. Moreover, initiatives by universities, including the Open Source Intelligence Exchange at Fairmont State University, USA, or think tanks, such as the RAND Corporation, respond to the constant demand for new knowledge in this area.16 Not only have new public institutes, training courses and areas of expertise been created; the increased outsourcing of information gathering, data mining and analysis to private companies has also been a major development within the security industry. This also affects intelligence gathering.17 Various private
entities are becoming more involved in national and personal security. For
instance, the privately funded company, the OSINT-Group was founded in 2007 and
‘utilizes in each case the best and most relevant sources to respond to established
client needs with sensitive yet unique and important ‘open source
intelligence’ rather than just ‘information’.18 Several other private companies like
Stratfor, Infosphere AB and Sandstone AB have gained substantial OSINF-market
shares as well.19 Simultaneously, the rise of websites like Wikileaks or OpenLeaks, which facilitate the mass publication of classified information by whistleblowers, further reflects this development.
Open source intelligence dilemmas
The benefits of OSINF are emphasized by security consultants, scientists, the media
as well as the intelligence community. OSINF is cheap and more widely available
than the information traditionally acquired by clandestine services. Moreover,
it also provides extra information which sometimes cannot be gained by other
intelligence sources (e.g. human intelligence). In addition, as a result of the wide
availability of (local) news coverage throughout the internet, the use of online open
sources enables security and intelligence agencies to be more up-to-date.20
Simultaneously, online open sources may, in times of crisis such as a war, be a more reliable and safer way of acquiring intelligence than polarized human
15 See for instance: The United Kingdom Intelligence Services Act 1994 or the Australian
Intelligence Services Act 2001.
16 Hayes, p. 5 (see note 5 above)
17 S. Chesterman, ‘We can’t Spy… If we can’t Buy!: The Privatization of Intelligence and the
Limits of Outsourcing ‘Inherently Governmental Functions’’, in The European Journal of
International Law, 2008, no. 5, p. 1057.
18 OSINT-Group, Overview, 2012. Retrieved 2 July 2012, http://www.theosintgroup.com/
overview.html.
19 Hayes, p. 3-5 (see note 5 above)
20 L. Pouchard, J. Dobson and J. Trien, ‘A Framework for the Systematic Collection of Open
Source Intelligence’, 2009, p.1. Retrieved 29 July 2012,
http://info.ornl.gov/sites/publications/files/Pub13152.pdf.
intelligence. The large scale usage of (online) open sources has created new
contexts and perspectives that assist intelligence and security agencies to better
understand the complexity of certain security developments within local or national
contexts. It enables intelligence and security agencies to verify (classified)
information with various open media sources and data. Finally, it has been argued
that, because online information is more widely available and less secretive than other sources, the use of OSINF for intelligence purposes has lowered
the threshold for sharing information between intelligence and security
agencies.21
Apart from information security, the (side) effects of the use of OSINF for
security and safety purposes have received little attention in the literature. In this section we therefore introduce some dilemmas regarding the collection, processing, mining and sharing of open source information. Of course there
are many other dilemmas in relation to the use of OSINF by security and
intelligence agencies or the police. These include the construction of virtual
personal identities by others22 or the facilitation of more social control by the
state.23 In this article, however, we focus primarily on the collection, processing, mining and sharing of open source information retrieved from social networking
sites, blogs or apps.
Collection
Whether the wider range of available digital information will be considered a
blessing or a curse remains to be seen. As a result of information overload, the need for intelligence and security agencies and the police to be more critical about information has become more evident. As Hamilton Bean states, 'the amount of available, and potentially useful, information for analysts to consider (…) (is) increasing to nearly unmanageable levels'.24 Searching the information jungle that is the internet has become more difficult for intelligence analysts and requires new skills from open source analysts. This kind of expertise needs to be developed
by intelligence and security agencies and the police.25
A second obstacle is the multiplication of individual sources. This so-called 'echo' effect occurs when a news item that appears in one source (e.g. a website) is
21 M.D. Cross, ‘EU Intelligence Sharing & The Joint Situation Centre: A Glass Half-Full’,
Meeting of the European Union Studies Association, 2011, p.10. Retrieved 25 July 2012,
http://www.euce.org/eusa/2011/papers/3a_cross.pdf.
22 E. Morozov, The Net Revolution: How not to liberate the world, London, 2010.
23 M. Hildebrandt, B. Koops and K. De Vries, ‘Where Idem-Identity meets Ipse-Identity.
Conceptual Explorations’, in Future of Identity in the Information Society, 2008. Retrieved 26
July 2012, http://www.fidis.net/fileadmin/fidis/deliverables/fidis-WP7-del7.14a-
idem_meets_ipse_conceptual_explorations.pdf.
24 Bean, p.7 (see note 7 above)
25 See e.g. H. Minas, 'Can the open source intelligence emerge as an indispensible discipline for
the intelligence community in the 21st century?’, Research Institute for European and
American Studies, 2010.
spread among a considerable number of other media sources.26 The risk arises when a story is framed in differing ways by a variety of sources, many of which reflect only a certain part of the story. This may mislead OSINF analysts. Consider, for example, a two-minute YouTube film of one man hitting another man on a street. A second film of the incident, which for the sake of the argument lasted five minutes, had not been posted online but was recorded by someone else on a mobile phone; it showed that the man who was hit on the head had kicked his girlfriend numerous times and that she had subsequently begged for help. If journalists were to write about this incident, they would probably find the YouTube film and several references on social networking sites and might be tempted to conclude that the first man was at fault. In turn this creates a reality of its own, and the echo effect could lead to the first man being labelled as the 'villain', whereas in reality the second man carries primary responsibility for the turn of events.
Personal data: Processing, sharing, mining and storage
Further dilemmas arise with the collection, processing and storage of intelligence
by security and intelligence agencies and the police. In 2009 US intelligence services invested in a private company that specialized in monitoring social networking sites.27 Similar developments have been reported about Europol.28 It is not widely known that information from social networking sites is being gathered and monitored by intelligence and security agencies and shared among national and international actors, but there are many incidents that suggest this is the case. For example, when in 2012 two British tourists were detained and deported for tweeting that they were going to 'destroy America' during their holiday,29 it became clear that Twitter accounts were monitored.30 In
the same year Saudi journalist Hamza Kashgaru31 was deported from Malaysia.
With the alleged support of Interpol he had been located there after he fled Saudi
26 Best and Cumming, p.7 (see note 8 above)
27 N. Shachtman, ‘Exclusive: U.S. spies buy stake in firm that monitors blogs, tweets’, 2009.
Retrieved 29 July 2012, http://www.wired.com/dangerroom/2009/10/exclusive-us-spies-buy-
stake-in-twitter-blog-monitoring-firm.
28 European Parliament, ‘Parliamentary Questions, Subject: Wikileaks Global Intelligence files,
Generalised data mining by the US and EU, profiling EU citizens’, 2012. Retrieved 25 July
2012, http://www.europarl.europa.eu/sides/getDoc.do?type=WQ&reference=E-2012-002428
&format=XML&language=EN#def4.
29 Huffingtonpost, ‘British Tourists Detained, Deported For Tweeting ‘Destroy America’’,
2012. Retrieved 2 July 2012, http://www.huffingtonpost.com/2012/01/30/british-tourists-
deported-for-tweeting_n_1242073.html.
30 M. Hosenball, ‘Homeland Security Watches Twitter, social media’, 2012. Retrieved 10
August 2012, http://www.reuters.com/article/2012/01/11/us-usa-homelandsecurity-websites-
idUSTRE80A1RC20120111; For other examples please see Huffingtonpost (2010),’Arrested
over Twitter: 8 Tweets that got people BUSTED’, 25 august 2010. Retrieved on 20 August
2010 via http://www.huffingtonpost.com/2010/08/25/arrested-over-twitter-8-t_n_693866
.html#s130765&title=Man_Arrested_Fined
31 Not his real name.
Arabia due to an 'insulting' tweet about the Prophet Muhammad.32 A further expansion of this monitoring of social networking sites was indicated by an article in February 2012 that proclaimed the 'US seeks to mine social media to predict future'. More specifically, it discussed the development of new software which enables security and intelligence agencies to mine information from social networking sites.33 In response, Open Source Centre director Patrick O'Neill stated that 'we need to see social media as intelligence gathering very similar to spying'.34
An important dilemma with the processing of the information that is collected from social media relates to the storage of large datasets that contain vast quantities of digital personal information. Subsequently, 'data analysis tools (are used) to discover previously unknown, valid patterns and relationships'.35 Data mining tools applied to information collected from, for example, social networking sites can be used by law enforcement and security and intelligence agencies to develop risk profiles and label individuals as potential security risks. In most cases this profiling takes place without the data subject even knowing that he or she is being profiled.36 This development has led to significant concerns about privacy and data protection as well as the right to a fair trial. What if an angry ex-girlfriend posts a Facebook message that her former boyfriend is a 'hi-tech terrorist' and he is subsequently barred from entering the USA? Will he ever find out why?
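The kind of profiling described above can be illustrated with a deliberately crude sketch. This is not the method of any particular agency; the keyword list, weights and threshold are invented for illustration, and the point is precisely that such naive mining of social media text produces false positives of the 'hi-tech terrorist' kind.

# Hypothetical keyword-based risk scoring over social media posts.
# Deliberately naive: it cannot distinguish sarcasm, quotation or context,
# which is exactly why such profiling can mislabel people.
RISK_TERMS = {"terrorist": 3, "destroy": 2, "attack": 2}   # assumed, illustrative
THRESHOLD = 3                                              # assumed, illustrative

def risk_score(post: str) -> int:
    """Sum the weights of risk terms that appear in a single post."""
    words = post.lower().split()
    return sum(weight for term, weight in RISK_TERMS.items() if term in words)

def flag_users(posts_by_user: dict) -> list:
    """Label users whose combined score reaches the threshold as 'potential risks'."""
    return [user for user, posts in posts_by_user.items()
            if sum(risk_score(p) for p in posts) >= THRESHOLD]

# An angry ex-girlfriend's post is enough to get someone flagged:
print(flag_users({"ex_boyfriend": ["my ex is a hi-tech terrorist"]}))  # ['ex_boyfriend']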
By stating 'just because data is accessible doesn't mean that using it is ethical', Danah Boyd raises one of the major concerns regarding large scale personal data storage.37 Consider, for example, a youngster who has expressed radical views about animal rights online and is confronted with this information ten years later during a job interview for an administrative position in a toy store. Would it be considered ethical if the employer asked her about it? Points of view may differ on this: what is in 'the public domain' has been redefined as 'accessible and available for any purpose under any circumstance'. According to Boyd we have 'stripped content out of context, labelled it data, and justified our actions by the fact that we had access to it in the first place'.38
People who are concerned with privacy and data protection in relation to
data mining of open source information are not afraid, at least initially, of the loss
of ownership over their digital personal data. They are primarily concerned
32 Al Jazeera, Malaysia arrests Saudi blogger over tweets, 2012. Retrieved 5 February 2012,
http://www.aljazeera.com/news/asia-pacific/2012/02/20122105349670993.html.
33 M. Wohlsen, ‘US seeks to mine social media to predict future’, Associated Press, 2012.
Retrieved 1 December 2012, http://www.news.com.au/technology/us-seeks-to-mine-social-
media-to-predict-future/story-e6frfro0-1226269477144.
34 P. O’Neill, ‘Spies give way to ‘sexy’ social media’, Federal News Radio, 2012.
35 J.W. Seifert, 'Data mining and Homeland Security: An overview', CRS Report for Congress, Congressional Research Service, 2007.
36 Cf. Hildebrandt et al., p.24 (see note 23 above); Pouchard et al., p.2 (see note 20 above)
37 D. Boyd, ‘Privacy and Publicity in the Context of Big Data’, 2010. Retrieved 27 July 2012,
http://www.danah.org/papers/talks/2010/WWW2010.html.
38 Boyd (see note 37 above)
that their data is disconnected from the context in which they intended it to be shared. To quote Helen Nissenbaum,39 'privacy is about expectations about the environment, and the norms that accompany this environment, in which your information is shared'. When people share information (e.g. about their health), they share it with a certain audience (a doctor), within a certain environment (the hospital) where certain norms apply (e.g. doctor-patient confidentiality). Different from this example is the environment of social networking sites, which deceptively appear to be for a selected audience. In reality sites like these are often fully transparent, with many people listing and reproducing your pursuits. We want to update our statuses and 'like' certain pictures on Facebook so others can see who we are, but this information is usually only intended for a specific audience. Boyd therefore rightly stresses that 'Making content publicly accessible is not equal to asking for it to be distributed, aggregated, or otherwise scaled'.40 The data mining of social
networking sites for security and intelligence purposes is therefore a violation of
privacy.
In addition, the Council of Europe highlighted the risks of profiling based on automatic data processing.41 When data about someone is produced based on the data of others, the data subject 'a priori cannot suspect the existence of correlation processes that might result in certain characteristics of other individuals being attributed to him or her on the basis of a probability calculation'.42 Consider, for instance, a person who has been detained at an airport and deported because of his alleged violent radical views. Other people who have updated their Facebook statuses with words that relate to the profile of the deported radical may subsequently be treated differently by customs and border control, or be blacklisted as risk passengers, even though this does not correspond to their actual 'threat level'. In other words, in addition to its benefits, profiling also dilutes valuable information concerning implicit personal characteristics that may be crucial for detecting potential violent extremists. Stories and lives that take place outside the realm of social networking sites are not taken into account, and this may lead to the profiling of persons who in reality pose no threat (false positives), e.g. the case of the British tourists, or to a failure to identify those who actually pose a threat (false negatives).
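A simple worked example, with entirely assumed figures, illustrates why probabilistic profiling of a large population tends to produce far more false positives than true positives when the behaviour searched for is rare.

# Assumed, illustrative figures for a base-rate calculation (not real data):
population = 10_000_000      # people whose posts are mined
actual_threats = 100         # genuinely dangerous individuals among them
true_positive_rate = 0.9     # the profile correctly flags 90% of real threats
false_positive_rate = 0.01   # the profile wrongly flags 1% of harmless people

true_positives = actual_threats * true_positive_rate                    # 90
false_positives = (population - actual_threats) * false_positive_rate   # 99,999
share_real = true_positives / (true_positives + false_positives)
print(f"{false_positives:,.0f} innocent people flagged; only {share_real:.2%} of flags are real threats")

Even with this apparently accurate hypothetical profile, fewer than one in a thousand of the people flagged would actually pose a threat.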
Hildebrandt, Koops and De Vries43 also focus on the side effects of
profiling. They state that the process of identity construction might be affected by
profiling based on data mining techniques. Accordingly, profiling may lead to differential treatment that could affect real-life opportunities. When people are
39 H. Nissenbaum, ‘Privacy as Contextual Integrity’, in Washington Law Review, 2004, no. 1,
pp. 101-139.
40 Boyd (see note 37 above)
41 Council of Europe, The Protection of Individuals with Regard to Automatic Processing of
Personal data in the Context of Profiling, Recommendation CM/REC(2010)13 and explanatory
memorandum, 2010, pp.28-32.
42 Ibid.
43 Hildebrandt et al., p.8 (see note 23 above)
unaware of being profiled or where their personal data is stored, they may change
their behaviour after experiencing the negative consequences of this profiling. This
can be considered a privacy violation, at least when privacy is defined as the
‘freedom from unreasonable constraints on the construction of one’s identity’.44
Open source accountability
Open source information has increased the range of security tools at the disposal of
security and intelligence officials or police officers. Nonetheless, the side effects
of this new method of intelligence gathering should be balanced by a form of
accountability, which is sufficient both in theory and in practice. To illustrate, in most Western societies there is strict legislation for phone or internet tapping; for social networking sites or apps this is less evident. Furthermore, many security officials probably do not see the need for more accountability for OSINT. They may argue, for example, that social networking sites are part of the public domain and that therefore anyone is able to access them. Hence, what is the issue with monitoring by public or private security analysts? And why should they keep track of what kind of information they collect? The difference, however, between just anyone and a security official is that OSINT can directly or indirectly affect someone's private life or future opportunities. As mentioned before, the use of OSINF for intelligence purposes has real-life consequences. From a human rights perspective these side effects should be balanced. Hence, state accountability for the use of OSINF is reviewed in this section.
As a concept accountability has a normative aspect intertwined with notions
of justice, responsibility, integrity, fairness and democracy.45 Simultaneously
accountability is concrete and ‘value free’ and focuses on the ‘obligations to
evidence management or performance imposed by law, agreement or regulation’.46
Blind distinguishes between ‘accountability as the philosophy of government’ and
accountability as the 'means' of government.47 In this article we recognize this difference and distinguish between, on the one hand, the process through which politicians or the heads of security and intelligence agencies or the police inform society about their plans and actions in terms of open source collection and justify the need to do so and, on the other hand, the process through which the actual behaviour of officials and the outcome analyses of security and intelligence agencies or the police are subjected to review or sanctions.48
44 P.E. Agre and M. Rotenberg (eds.), Technology and Privacy: The New Landscape,
Cambridge, Massachusetts, 2001, p.7.
45 P.K. Blind, ‘Accountability in Public Service Delivery: A multidisciplinary review of the
concept’, Expert Group Meeting Engaging Citizens to Enhance Public Sector Accountability
and Prevent Corruption in the Delivery of Public Services, Vienna, 2011. Retrieved 21
August 2012, http://unpan1.un.org/intradoc/groups/public/documents/un-dpadm/unpan0463
63.pdf.
46 E.L. Kohler, A Dictionary for Accountants, New Jersey, 1975.
47 Blind, p.4 (see note 45 above)
48 J.M. Ackerman, ‘Social Accountability in the Public Sector: A conceptual discussion’, in
Social Development Papers: Participation and Civic Engagement, 2005, no. 82, p.6.
Retrieved 26 August 2012, http://siteresources.worldbank.org/INTPCENG/214574-
This aforementioned form of accountability is characterised by its focus on the rule
of law and good governance, as well as the inclusion of civil society or ordinary
people.49 It could imply that the head of a security and intelligence agency, or those politically responsible, not only publicly announce the purpose of the collection, processing, mining or sharing of OSINF, but also limit its use to predefined threats to national security (e.g. cyber espionage or international terrorism). Furthermore, international and/or national lawmakers should determine what the boundaries are (the rule of law) and how data subjects can seek redress (internal or external accountability mechanisms). Finally, software that enables OSINT should be designed to emphasize accountability (data protection by design). This includes privacy-enhancing technologies (PETs) or transparency-enhancing technologies (TETs).50
These measures may be considered as a form of good governance in relation to
balancing the security officials’ use of OSINT.
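As an illustration of what 'data protection by design' might mean for OSINT software, the sketch below, which is our own assumption rather than a description of any existing PET or TET product, pseudonymizes account identifiers before storage and keeps an access log that an oversight body could later review.

# Hypothetical sketch of privacy- and transparency-enhancing measures in
# OSINT tooling: pseudonymize identifiers before storage (PET) and log every
# access for later review by an oversight body (TET). Illustrative only.
import datetime
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-protection-officer"   # assumed key management

def pseudonymize(account_id: str) -> str:
    """Replace a real account identifier with a keyed hash before storage."""
    return hmac.new(SECRET_KEY, account_id.encode(), hashlib.sha256).hexdigest()

ACCESS_LOG = []

def record_access(analyst: str, pseudonym: str, purpose: str) -> None:
    """Append an auditable entry recording who accessed which subject and why."""
    ACCESS_LOG.append({
        "when": datetime.datetime.utcnow().isoformat(),
        "analyst": analyst,
        "subject": pseudonym,
        "purpose": purpose,
    })

subject = pseudonymize("@some_user")
record_access("analyst_007", subject, "counter-terrorism assessment")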
Apart from the necessity to protect national security interests, which may
interfere with transparency efforts, there are other accountability dilemmas in
relation to OSINF. Ensuring accountability is more complex if the information is not
collected by the security agency itself, but by other public or private entities. It is
not uncommon for intelligence and security agencies to share information on an
(inter-)national level and since 9/11 it has become more common for law
enforcement agencies to do so as well. This is a recent development because
‘traditionally a distinction exists between collecting intelligence for national
security purposes and gathering evidence for criminal investigations, as they serve
different purposes’.51 Security and intelligence agencies prefer to keep their
sources confidential, whereas law enforcement agencies ultimately have to share
the case file with the defence counsel. Nonetheless, OSINT used by intelligence
and security agencies is also collected and processed by other state agencies and
sometimes shared with international partners, and this has affected accountability in practice. Consider for instance whether security analysts know what the original
source of a piece of information containing personal data was, let alone if those
affected may ever have the opportunity to access or correct it. What if the
personal data is used by the security analyst’s agency for a security clearance
investigation of refugees?
Likewise OSINF and OSINT are also collected by private companies. As is
discussed above, there has been a major growth in private companies involved in
the collection of OSINF and the acquiring of OSINT. The Wikileaks Global
1116506074750/20542263/FINALAckerman.pdf.
49 Blind (see note 45 above); Ackerman (see note 48 above)
50 M. Hildebrandt, ‘Privacy Enhancing Technologies’, Hide Project, Pets 2nd Focus Group,
2011. Retrieved 26 August 2012, http://ebookbrowse.com/hide-fg-privacy-enhancing-
technologies-minutes-20091016-pdf-d113363089; J.J.F.M. Borking, ‘Privacyrecht is een
Code: Over het gebruik van Privacy Enhancing Technologies’ (Privacy is a Code: About the
use of Privacy Enhancing Technologies), Deventer, 2010.
51 Q.A.M. Eijkman and B. van Ginkel, ‘Compatible or Incompatible: Intelligence and human
rights in terrorist trials’, in Amsterdam Law Forum, 2011, no. 4, p.4.
Intelligence Files, for example, reflect how the company Stratfor provides
intelligence to public and private entities including the US Defense Intelligence
Agency.52 One of the 5.5 million released hacked Stratfor emails reveals the existence of TrapWire, a predictive software system of TrapWire Inc. that combines CCTV images and automatic number plate recognition (ANPR) data collected in the public domain of two American cities.53 Another multinational security company, Raytheon, has developed a program called Riot that mines social networking sites and on the basis of the outcome is able to track people's locations.54 Evidently, private entities are
able to sell or share the software or the open source information with security
and intelligence agencies or the police, who probably use it for investigation
purposes.55 These developments take place while questions of accountability for the use of the personal data stored on social networking sites remain unaddressed.
As Ben Hayes concludes ‘we must (then) develop the tools and communities
needed to bring them under democratic control’.56
In response to these developments, civil society is in the best position to
hold the state accountable. Regarding the use of open social networking sites, one of the challenges in terms of holding security officials or their agencies accountable is that in most cases the data subject has no idea that their online behaviour has been monitored. When this monitoring leads to differential treatment, accountability issues become more realistic. But who will file a complaint? Would you complain if you were not aware that your 'denial of an opportunity' was the result of a private company or an intelligence agency keeping track of your digital pursuits? To some extent civil society can hold security agencies accountable by blogging, writing reports, informing the public or petitioning public authorities. Nonetheless, OSINT is a growing business and challenging for outsiders to monitor. Consequently, some take more drastic steps such as hacking (governmental) websites or developing encryption software programs to communicate 'without anyone watching'.
52 Wikileaks, ‘Stratfor Emails: Wikileaks impact is Stratfor’s bottom line’, The Global
Intelligence Files, 2012a. Retrieved 1 March 2012, http://wikileaks.org/WikiLeaks-Impact-is-
Stratfor-s.html.
53 Parent company: Cubic Corporation. The system was previously owned by Abraxas, which created it under Abraxas Applications Inc. It is a private company that employs several former public officials of the Central Intelligence Agency (CIA) and other agencies. See: RT,
‘TrapWire Tied to Anti-Occupy Internet-spy-program’, 2012. Retrieved 28 August 2012,
http://rt.com/usa/news/trapwire-abraxas-cubic-surveillance-251/; C. Arthur, ‘Trapwire
surveillance system exposed in document leak’, in The Guardian Online, 2012. Retrieved 13
August 2012, http://www.guardian.co.uk/world/2012/aug/13/trapwire-surveillance-system-
exposed-leak
54 R. Gallagher, ‘Software that tracks people on social media created by defense firm’, in The
Guardian Online, 2012. Retrieved 10 February 2013, http://m.guardian.co.uk/world/2013/feb
/10/software-tracks-social-media-defence
55 Public Intelligence, ‘Unraveling TrapWire: The CIA-connected global suspicious activity
surveillance system’, 2012. Retrieved 29 August 2012, http://publicintelligence.net/
unravelling-trapwire; Wikileaks, ‘Stratfor Emails’, Wikileaks: The Global Intelligence Files,
2012b. Retrieved 9 August 2012, http://www.wikileaks .org/gifiles/releasedate/2012-08-
09.html.
56 Hayes, p.8 (see note 5 above)
Reflections
In this article we argue that the increased use of Open Source Information (OSINF)
and Open Source Intelligence (OSINT) for safety and security purposes needs to be balanced by assessing what state accountability in a digital world should entail. Even though security and intelligence agencies and the police are tempted to increase their use of social networking sites, tweets, blogs or apps, state accountability mechanisms have struggled to adapt to the online open source culture. Consider for instance the dilemma that intelligence is frequently collected, processed, mined and stored by external entities including foreign security and intelligence agencies or private companies. Consequently, it is reasonable to assume that the security officials or analysts who are the end-users are oblivious to the original source or its specific context. Even though ordinary people are usually unaware of being profiled, or of where or how their digital personal data is stored in databases or 'the cloud', the use of OSINT requires that the public at large begins to ask more questions. The most important reason is that, despite the safety and security benefits, this information may have real-life consequences.
State accountability for OSINT should at least require that it is used on the
basis of a law. Furthermore politicians, heads of security agencies or security
officials should proactively inform society about their plans and actions in relation
to OSINF and justify the need to acquire OSINT. This should preferably happen
online and also, if possible, at the public entity where the data subjects are
confronted with the outcome. Even though the security official, who collects
information about a person who potentially poses a threat, may be aware that this
particular information needs to be verified, it is reasonable to assume that in real
life no risks are taken.
Furthermore, state accountability should entail that the actual process and
outcome of data collection, processing, mining and sharing is subjected to
review or sanctions. In practice, however, this is challenging: which entity or who carries the responsibility? And what about accountability? On an individual level: is it the analyst, the risk profiler, the executive security official, etc.? Is individual accountability in this context realistic, or is it simply the case that when everybody, the whole chain of security officials who collect, mine, process and store data, is responsible, nobody is accountable? Formally those entities that collect and store OSINT are accountable, but in practice it is unlikely that a data subject will ever have access to the information or be able to correct it (probably for security reasons). Therefore before any
form of new state accountability mechanism is considered, an informed public
and political debate about the desirability of OSINF accountability by the state
should take place. The use of OSINT for safety and security purposes is a reality, but
(re)assessing state accountability is necessary for its legitimate use by security
and intelligence agencies and the police.
... The availability of user-generated photos, audio recordings, and videos is a salient feature of the ongoing process of datafication (Mejias and Couldry, 2019). While these media objects can be understood as eyewitness accounts of human conflict, which are especially valuable for staying informed on areas that are hard to reach (Dubberley et al., 2020), they also risk harming their creators if used uncaringly (Eijkman and Weggemans, 2012). As a result, ethical questions about methods, data sources, and data processing seem to be inherent to the practice of OSINV. ...
... Through sharing which sources were used and which steps were undertaken, transparency leads to the replicability of the investigation, enabling people to trace back a story and fact-check it themselves (Phillips, 2010). Like in academia, journalistic replicability leads to a sense of accuracy and legitimacy among consumers (Eijkman and Weggemans, 2012). At a time when the markers of journalistic authoritymonopoly of news selection, objectivity, commitment to democracydo not hold self-evident legitimacy anymore, transparency is increasingly viewed as able to retrieve this authority (Perdomo and Rodrigues-Rouleau, 2022). ...
Article
Full-text available
With the exploding availability of online data, digital open-source investigations (OSINV) methods have become increasingly popular in journalism. However, practitioners face novel challenges related to the tension between journalism's transparency ideals and its duty to safeguard the privacy and security of data subjects. This article explores this tension by drawing on data from eight in-depth interviews with professional open-source investigative journalists in the Netherlands. The findings of our study reveal that OSINV investigators rely heavily on personal assessments and ongoing dialogues with colleagues to make privacy-related editorial choices, as rules and guidelines have only recently emerged. This research provides valuable insights into the intricacies of OSINV journalism , uncovering the delicate balance between journalistic transparency and privacy/ security considerations.
... The single most well-known capability of social media is the ability to keep in contact with connections or create new ones, regardless of location, time, and physical constraints of the people involved. Even if its implications, drawbacks, and potential are still not fully understood by most of the public, especially the privacy risks involved in having an online persona (Eijkman & Weggemans, 2012). The ability to communicate easily and quickly through messaging, video, and voice calls, for most outweighs the real or perceived risks involved in this sort of online activities. ...
Article
Full-text available
This data captures people’s experiences as unknowing targets of disinformation. Participants were US citizens naive to the actions of the different entities using social media to target Americans with disinformation in the months leading up to the 2016 US presidential election. Results indicated participants reported notable changes in their interactions on social media in the form of disruptions to existing relationships. Specifically, participants reported that they argued with their connections more, observed others disagree more, and reported an increase in the loss of friends and family connections through the unfriending or unfollowing features of social media. While, some participants found these changes amusing, most reported increased psychological distress. Not one participant mentioned Russian election interference or disinformation as the cause of these interpersonal difficulties. Analysis of text responses did not include any mention of disinformation, Cambridge Analytica, or Russia as causes of these disruptions. These results suggest that social media use has implications for individuals’ social relationships and these disruptions may impact their psychological functioning. Implications of these results for the psychological impacts of social media use will be discussed.
... Општи тренд држава усмерен је на то да се повећа безбедност путем онлајн надзора. Ово се остварује деловањем служби задуженим за међународну безбедност, као и деловањем страних обавештајних служби (Buchanan 2020), али и настојањем да се доследно примењују закони како унутар појединачних држава тако и на међународном нивоу (Eijkman -Weggemans 2013). Док владе и медији непрестано дистрибуирају информације о сајбер претњама, прави сајбер напади који резултирају смрћу и повредама остају углавном ствар холивудских филмова или теорије завере. ...
Article
Full-text available
The spread of digital technologies and their power to connect billions of people around the world, has enabled modern society to communicate more efficiently, access information, do business, but also have fun. However, there are many examples in which this possibility is abused for cybercriminal activities, among which is certainly cyber terrorism. Through the activities of traditional terrorism, terrorists try to provoke anxiety in the population, choosing targets randomly or selectively, against which violence is committed, with the intention of serving as an example to other members of society. Such incitement to fear is precisely the means of terrorist individuals and groups, who want to put pressure on the government and the general public to meet their demands. Recently, there are more and more examples in which such activities are moving (with more or less success) to the digital environment. Cyber terrorism is a means of spreading radical ideology, propaganda, recruiting likeminded people, but also organizing cyber terrorist attacks. The advantage that a terrorist activist sees in using such methods is certainly the possibility to remain anonymous and easily hide in the cyber world. Cybercriminals easily adapt and find new methods of committing cyberbullying, which must be a call to governments to regulate legislation in a synchronized effort, but also to form expert teams that will identify, prevent and adequately sanction such criminal activities and protect their population and critical infrastructure. This paper seeks to bring closer and present information to the academic community, but also to experts in the field of cyber terrorism that can help them improve strategies for recognizing and defending against such cyber activities. The main goal of this paper is to analyze the collected information from the available scientific literature and to offer new guidelines in order to reduce such activities to a minimum in the near future.
... In these cases, non-compliance with ethical norms may lead to the violation of rights, such as the right to private life, intellectual property, and copyright (Baiasu, 2020). In this context, Danah Boyd, a researcher at Microsoft, stated that "publication and free access to certain content are not equivalent to requesting that it be distributed, aggregated, or modified in any way" (Eijkman & Weggemans, 2013). ...
Conference Paper
Full-text available
At the heart of every individual is a code of ethics, a code that must be used when making any decision. People develop a sense of ethics based on culture, faith, and ethnicity, which makes each unique. Using online information, and most social media, most cases of violation of ethical norms can be observed. Social media encompasses and covers a large scale of websites, but the link between all of these sites is the ability of users to interact. Although, not long ago it seemed just a trend that skeptics have claimed and insisted will not be successful, is now attracting more users than ever before as it grows by billions of dollars, evolving to the point where if you don't exist in online as a person, but especially as an organization, you can't succeed. All the actions a social user makes make him/her vulnerable, and that is why during time ethical standards were set clear to protect users’ rights and privacy. The changes in the Internet over the last two decades have led to major changes in people's lives. Thus, online activity has become a Wild West, which has led to Cyber Ethics - a philosophical study of ethics on the Internet. In-depth information from online users can lead to the avoidance of fake news as well as the avoidance of certain risks. Through this paper, we highlight that many principles of ethics are violated, mostly knowingly. Many social media players violate these principles to obtain various financial benefits, manipulate, and misinform. With each passing day, we notice more and more illegal information about economic, political, and social events. A correct information process and appropriate and up-to-date legislation can lead to respect for both human rights and ethical values.
... While the above articles highlight how the nature of OSINT makes it a crucial tool for democratic oversight, this very nature can also be seen as a threat to citizens' rights when authorities exploit it to increase social control (Wells 2016). Concerns about increased state surveillance and profiling have long been expressed in the literature (Eijkman and Weggemans 2012), together with the opacity in the analysis and OSINT-based decision-making by state authorities and private companies. Indeed, it has also been shown that the growing public awareness of state surveillance practices and fear of profiling can lead users to contemplate withholding or even falsifying personal information shared online (Bayerl and Akhgar 2015). ...
Article
Full-text available
Today, open source intelligence (OSINT), i.e., information derived from publicly available sources, makes up between 80 and 90 percent of all intelligence activities carried out by Law Enforcement Agencies (LEAs) and intelligence services in the West. Developments in data mining, machine learning, visual forensics and, most importantly, the growing computing power available for commercial use, have enabled OSINT practitioners to speed up, and sometimes even automate, intelligence collection and analysis, obtaining more accurate results more quickly. As the infosphere expands to accommodate ever-increasing online presence, so does the pool of actionable OSINT. These developments raise important concerns in terms of governance, ethical, legal, and social implications (GELSI). New and crucial oversight concerns emerge alongside standard privacy concerns, as some of the more advanced data analysis tools require little to no supervision. This article offers a systematic review of the relevant literature. It analyzes 571 publications to assess the current state of the literature on the use of AI-powered OSINT (and the development of OSINT software) as it relates to the GELSI framework, highlighting potential gaps and suggesting new research directions.
... Thus, OSINT systems can be used for event detection and analysis, while profiling and analyzing personal data have higher legal barriers. However, this discourse will continue, as questions of accountability and transparency have to be discussed [15]. ...
Article
Full-text available
The use of Open Source Intelligence (OSINT) to monitor and detect cybersecurity threats is gaining popularity among Cybersecurity Emergency or Incident Response Teams (CERTs/CSIRTs). They increasingly use semi-automated OSINT approaches when monitoring cyber threats for public infrastructure services and incident response. Most of the systems use publicly available data, often focusing on social media due to timely data for situational assessment. As indirect and affected stakeholders, the acceptance of OSINT systems by users, as well as the conditions which influence the acceptance, are relevant for the development of OSINT systems for cybersecurity. Therefore, as part of the ethical and social technology assessment, we conducted a survey (N=1,093), in which we asked participants about their acceptance of OSINT systems, their perceived need for open source surveillance, as well as their privacy behavior and concerns. Further, we tested if the awareness of OSINT is an interactive factor that affects other factors. Our results indicate that cyber threat perception and the perceived need for OSINT are positively related to acceptance, while privacy concerns are negatively related. The awareness of OSINT, however, has only shown effects on people with higher privacy concerns. Here, particularly high OSINT awareness and limited privacy concerns were associated with higher OSINT acceptance. Lastly, we provide implications for further research and the use of OSINT systems for cybersecurity by authorities. As OSINT is a framework rather than a single technology, approaches can be selected and combined to adhere to data minimization and anonymization as well as to leverage improvements in privacy-preserving computation and machine learning innovations. Regarding the use of OSINT, the results suggest to favor approaches that provide transparency to users regarding the use of the systems and the data they gather.
Article
Full-text available
Digital open-source evidence has become ubiquitous in the context of modern conflicts, leading to an evolution in investigative practices within the context of mass atrocity crimes and international criminal law. Despite its extensive promulgation, international criminal tribunals have had few opportunities to address the admissibility of user-generated open-source evidence. Through semi-structured interviews with experts and analyses of primary and secondary sources, this article examines the current standards and practices governing the use of user-generated open-source evidence. Current practices illuminate a number of gaps in the realm of digital open-source evidence in international criminal law. This article posits the establishment of a standing international, investigative mechanism as a solution to a need for increased standardization and co-ordination within the realm of user-generated open-source evidence. By standardizing the collection and use of such evidence, investigative bodies will be prepared to more effectively serve the international justice community.
Chapter
This chapter critically discusses the use of social media and artificial intelligence in digital policing to counter terrorism. Key concepts are defined followed by a critical analysis of the ethical, legal, technological and organizational challenges. Additionally, a number of recommendations is included, such as a transparent and clear framework of policing social media nationally and internationally, screening for biased algorithms and more transparency of processes within artificial intelligence as well as restructuring of and more funding for the police to improve digital investigations. Moreover, the importance of preventing human rights violations has been discussed by evaluating the restriction of freedom of speech on social media platforms within legal and ethical contexts, suggesting a more open approach of redirecting users at risk of radicalization to verified sources.
Article
Full-text available
Today’s threats use multiple means of propagation, such as social engineering, email, and application vulnerabilities, and often operate in different phases, such as single device compromise, lateral network movement, and data exfiltration. These complex threats rely on advanced persistent threats supported by well-advanced tactics for appearing unknown to traditional security defenses. As organizations realize that attacks are increasing in size and complexity, cyber threat intelligence (TI) is growing in popularity and use. This trend followed the evolution of advanced persistent threats, as they require a different level of response that is more specific to the organization. TI can be obtained via many formats, with open-source intelligence one of the most common, and using threat intelligence platforms (TIPs) that aid organizations to consume, produce, and share TI. TIPs have multiple advantages that enable organizations to quickly bootstrap the core processes of collecting, analyzing, and sharing threat-related information. However, current TIPs have some limitations that prevent their mass adoption. This article proposes AECCP, a platform that addresses some of the TIPs limitations. AECCP improves quality TI by classifying it accordingly a single unified taxonomy , removing the information with low value, enriching it with valuable information from open-source intelligence sources, and aggregating it for complementing information associated with the same threat. AECCP was validated and evaluated with three datasets of events and compared with two other platforms, showing that it can generate quality TI automatically and help security analysts analyze security incidents in less time.
Article
Full-text available
This article focuses on the special criminal procedures for the use of intelligence in terrorist trials in Canada, France, the Netherlands and the United Kingdom. Since 9/11 and the terror attacks in London and Madrid, gathering intelligence as well as prosecuting suspects of terrorist crimes have become strategic tools in countering terrorism. By reviewing the special procedures for the use of intelligence, their compatibility with human rights standards, including the right to a fair trial, is discussed. Concerns include the extent to which disclosure is made possible and to whom. The differences in criminal procedures for the use of intelligence in terrorist trials also raise questions when intelligence originates from a third state, in which different regulations with regard to disclosure of information apply.
Article
Data mining has become one of the key features of many homeland security initiatives. Often used as a means for detecting fraud, assessing risk, and product retailing, data mining involves the use of data analysis tools to discover previously unknown, valid patterns and relationships in large data sets. In the context of homeland security, data mining can be a potential means to identify terrorist activities, such as money transfers and communications, and to identify and track individual terrorists themselves, such as through travel and immigration records. While data mining represents a significant advance in the type of analytical tools currently available, there are limitations to its capability. One limitation is that although data mining can help reveal patterns and relationships, it does not tell the user the value or significance of these patterns. These types of determinations must be made by the user. A second limitation is that while data mining can identify connections between behaviors and/or variables, it does not necessarily identify a causal relationship. Successful data mining still requires skilled technical and analytical specialists who can structure the analysis and interpret the output. Data mining is becoming increasingly common in both the private and public sectors. Industries such as banking, insurance, medicine, and retailing commonly use data mining to reduce costs, enhance research, and increase sales. In the public sector, data mining applications initially were used as a means to detect fraud and waste, but have grown to also be used for purposes such as measuring and improving program performance. However, some of the homeland security data mining applications represent a significant expansion in the quantity and scope of data to be analyzed.
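As a purely illustrative sketch of the limitation described above, the short Python example below counts which attribute pairs co-occur most often across a set of records. It surfaces frequency patterns but says nothing about their significance or any causal link, which remains the analyst's judgement. The records and field names are invented for illustration and do not correspond to any real dataset.

```python
# Toy pattern discovery: count co-occurring attribute values across records.
# The output is only a frequency pattern; whether it is significant or causal
# is a determination the analyst must make, which the mining step cannot provide.
from collections import Counter
from itertools import combinations

records = [
    {"route": "A-B", "payment": "cash", "visa": "short-stay"},
    {"route": "A-B", "payment": "cash", "visa": "tourist"},
    {"route": "C-D", "payment": "card", "visa": "tourist"},
]

pair_counts = Counter()
for record in records:
    values = sorted(f"{k}={v}" for k, v in record.items())
    for pair in combinations(values, 2):
        pair_counts[pair] += 1

for pair, count in pair_counts.most_common(3):
    print(count, pair)   # e.g. ('payment=cash', 'route=A-B') appears twice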
Article
An environmental analysis of our society shows that, as informatisation increases, privacy problems (identity fraud, data breaches) will grow. Data-discovering, data-tracking and data-linking technologies seriously erode citizens' privacy in our risk and surveillance society. As a result, trust in the use of ICT and in electronic business is coming under severe pressure. Extensive European laws and regulations exist to protect our privacy, and this book analyses that legislation. The legal provisions that directly concern the processing of personal data can be regarded as the legal specifications for the design of information systems. The legislation requires a privacy risk analysis before information systems are used, yet this rarely takes place. The book discusses seven privacy threat analyses, including the Privacy Impact Assessment (PIA). With a privacy risk analysis, commissioners and designers of information systems can map the potential risks to citizens' privacy. Research has made clear that personal data are best protected when, during processing, identifying data are immediately separated from other data. Privacy Enhancing Technologies (PET), which translate the legal requirements into technical specifications, can accomplish this. The content, scope and successful applications of PET are set out in this book. Using Rogers' Diffusion of Innovation (DOI) theory, the positive and negative factors can be identified that influence organisations' introduction of identity and access management, privacy protection and the adoption of PET for protecting personal data. Moreover, an organisation's maturity has proven decisive for whether it pays attention to privacy protection. The Return On Investment (ROI) formulas explained in the book can substantiate the economic justification for privacy-protecting investments. The book concludes with a step-by-step plan for implementing privacy-safe information systems in organisations and makes ten recommendations.
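For readers unfamiliar with how such ROI reasoning works, a minimal sketch follows. The simple formula (benefit minus cost, divided by cost) and the example figures are generic illustrations and not the specific formulas from the book.

```python
# Generic return-on-investment calculation for a privacy-protecting measure.
# The figures and the avoided-loss estimate are illustrative only; the book's
# own ROI formulas may weigh additional factors.

def roi(avoided_loss: float, investment: float) -> float:
    """Return ROI as a fraction: (benefit - cost) / cost."""
    return (avoided_loss - investment) / investment

# Example: a 50,000 euro PET investment expected to avoid 80,000 euro in
# breach-related losses yields an ROI of 0.6 (60%).
print(f"ROI: {roi(80_000, 50_000):.0%}")
```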
R. Gallagher, 'Software that tracks people on social media created by defence firm', in The Guardian Online, 2013. Retrieved 10 February 2013, http://m.guardian.co.uk/world/2013/feb/10/software-tracks-social-media-defence.
P. O'Neill, 'Spies give way to 'sexy' social media', Federal News Radio, 2012.
Al Jazeera, 'Malaysia arrests Saudi blogger over tweets', 2012. Retrieved 5 February 2013, http://www.aljazeera.com/news/asia-pacific/2012/02/20122105349670993.html.
J.M. Ackerman, 'Social Accountability in the Public Sector: A conceptual discussion', in Social Development Papers: Participation and Civic Engagement, 2005, no. 82, p. 6. Retrieved 26 August 2012.
M. Wohlsen, 'US seeks to mine social media to predict future', Associated Press, 2012. Retrieved 1 December 2012, http://www.news.com.au/technology/us-seeks-to-mine-social-media-to-predict-future/story-e6frfro0-1226269477144.
D. Boyd, 'Privacy and Publicity in the Context of Big Data', 2010. Retrieved 27 July 2012, http://www.danah.org/papers/talks/2010/WWW2010.html.
TrapWire's parent company is Cubic Corporation; the system was previously owned by Abraxas, which had created it under Abraxas Applications Inc. It is a private company that employs several former public officials of the Central Intelligence Agency (CIA) and other agencies. See: RT, 'TrapWire Tied to Anti-Occupy Internet-spy-program', 2012. Retrieved 28 August 2012, http://rt.com/usa/news/trapwire-abraxas-cubic-surveillance-251/; C. Arthur, 'Trapwire surveillance system exposed in document leak', in The Guardian Online, 2012. Retrieved 13