Please Cite as Wall, D.S. (2008) ‘Cybercrime, Media and Insecurity: the shaping of public perceptions of cybercrime’,
International Review of Law, Computers and Technology, vol. 22, nos. 1-2, pp. 45–63 (ISSN 0965-528X).
(Page 45)
CYBERCRIME, MEDIA AND INSECURITY:
The shaping of public perceptions of cybercrime [1]
DAVID S. WALL *
Centre for Criminal Justice Studies, University of Leeds, UK, d.s.wall@leeds.ac.uk
* This article is based upon papers delivered at CRIMPREV, 12–13 October 2007, Ljubljana, Slovenia and the Max Planck Institute, Freiburg, Germany, 11 December 2007. See Endnote 1 for further details. The few minor differences to the published paper are in dark blue.
ABSTRACT. All too often claims about the prevalence of cybercrimes lack clarification as
to what it is that is particularly ‘cyber’ about them. Perhaps more confusing is the startling
contrast between the many hundreds of thousands of incidents of cybercrime supposedly
reported each year and the relatively small number of known prosecutions. This contrasting
evidence exposes a large gap in our understanding of cybercrime and begs a number of
important questions about the quality of the production of criminological knowledge about
it. For example, how reliable and partial are the informational sources that mould our
opinions about cybercrime. Do we fully understand the epistemological differences between
the various legal, academic, expert and popular (lay) constructions of cybercrime?
Alternatively, is the criminal justice system just woefully inefficient at bringing wrongdoers
to justice? Or are there still some major questions to be answered about the conceptual
basis upon which information is gathered and assumptions about cybercrime made? This
article takes a critical look at the way that public perceptions of cybercrime are shaped and
insecurities about it are generated. It explores the varying conceptualisations of cybercrime
before identifying tensions in the production of criminological knowledge that are causing
the rhetoric to be confused with reality. It then contrasts the mythology of cybercrime with
what is actually going on in order to understand the reassurance gap that has opened up
between public demands for Internet security and its provision.
Keywords: cybercrime; media and crime; insecurity
Introduction: the cybercrime conundrum
The drama of the spectacle of technology now permeating our everyday lives, combined with the
fear of the potentially dystopic futures it could inflict upon us, guarantees news articles about
‘cybercrime’ a place on the front page. Yet, all too often the claims made in those articles about the
prevalence of cybercrimes lack clarification as to what it is that is particularly ‘cyber’ about them.
Indeed, on the rare occasions that so-called cases of cybercrime come to court – typically
comprising Internet fraud, theft, pornography, paedophilia, even hacking, etc. – they often have the
familiar ring of the ‘traditional’ rather than the ‘cyber’ about them.
Perhaps more confusing is the startling contrast between the many hundreds of thousands, even
millions, of threats that Internet users supposedly receive each year and the relatively small number
of known prosecutions. In September 2007, the cyber-security group Symantec published their
twelfth Internet Security Threat Report. It found that in ‘the first half of 2007, 212,101 new malicious code threats were reported to Symantec. This is a 185 percent increase over the second half of 2006’. [2]
In the week following its publication, the report’s findings became the basis for
many news stories that were published in reliable and less reliable news
(Page 46)
print and broadcast outlets. [3]
The news story bylines continued a long-standing line of
reporting that portrays cybercrime as immensely prevalent and threatening. It is a practice that
inevitably shapes and raises public and also media expectations of its impacts. These expectations
contrast dramatically with the actual experience of cybercrime within the criminal justice system. In
the UK, for example, there were only about 200 prosecutions under the Computer Misuse Act 1990
in the first 15 years following its introduction. [4] In 2007 the Ministry of Justice still ‘had only a “handful” of cases pending’. [5] Similar trends are found elsewhere. [6]
This conundrum needs to be resolved, because it is a graphic example of the continuing
uncertainties about the Internet that are impeding the benefits that it can bring for citizenship
through improved and accessible governance, increased commerce and benefits to leisure. [7] It also reveals some of the current tensions in the policy-making process relating to cybercrime. In September 2007, the House of Lords Science and Technology select committee report described the Internet as ‘increasingly the playground of criminals’ [8] and urged the government to take stronger measures against cybercrime. This was a view echoed by industry and others. [9] Yet, the Government’s reply [10] was less accepting of the report’s claims, no doubt informed by the British Crime Survey (described later), which found relatively low levels of reported victimisation. [11]
Does this low prosecution rate represent an ‘absence of evidence’ or is it evidence of the absence of cybercrimes – to paraphrase former US Secretary of Defence, Donald Rumsfeld? [12]
Three possible
explanations suggest themselves. The first is that the ‘cybercrime problem’ has simply been blown
up out of all proportion and that the media news-gathering process has unintentionally fabricated an apparent crime wave out of a few novel and dramatic events. The worst-case scenario is that we are in fact witnessing a deliberately calculated attempt to peddle fear, uncertainty and doubt by the media and cyber-security industry to further its interests, or by government to effect governance through fear of (cyber)crime! A second, very simple, explanation is that the criminal justice process is woefully inefficient at bringing wrongdoers to justice. Indeed, can we realistically expect two century-old criminal justice processes designed to counter the social effects of urban migration to respond to an entirely new set of globalised ‘virtual’ problems? There are always post-hoc examples of police inefficiency that can be cited to support the point. The
third possible explanation, to quote another Rumsfeldism, is that this contrast is evidence of
‘unknown unknowns’ about cybercrime. In other words, instead of accepting the information that
we are given, should we be asking more fundamental and critical questions about what cybercrimes
are? Questions that look deep into the conceptual basis upon which information is gathered and
assumptions about cybercrimes are made.
Clearly, the contrasting positions of high reporting versus low prosecutions expose a large gap in
our understanding of cybercrimes and beg a number of important questions about the quality of the
production of criminological knowledge about them. This article develops recent research [13] findings
to take a critical look at the various roles of the media in generating insecurities and shaping public
perceptions of cybercrime. It looks specifically at the conceptualisation of cybercrime before
identifying tensions in the production of knowledge about it that are causing the rhetoric to be
confused with reality. It then contrasts the mythology of cybercrime with what is actually going on
in order to understand the reassurance gap that has opened up between public demands for Internet
security and its provision. The final part looks at the various challenges that cybercrimes pose for
criminal justice and at how the digital realism of cybercrime and policing cybercrime can contribute
to closing the reassurance gap.
The conceptual origins of cybercrime
The term ‘cybercrime’ is widely used today to describe the crimes or harms that result from
opportunities created by networked technologies. Its origins lie in science fiction forums,
(Page 47)
novels and films that progressively linked ‘cyberspace’ with ‘crime’ to describe crimes
that take place in virtual environments. The term cyberspace, for example, appears to have been
coined by William Gibson in his 1982 short story ‘Burning Chrome’, published in Omni Magazine [14] – a science-fiction-meets-science forum that existed between 1978 and 1998. The concept of cyberspace and virtual environments then evolved through cyberpunk science fiction short stories by Bruce Bethke [15] and others, and novels such as Gibson’s Neuromancer [16] and Stephenson’s Snow Crash [17]. The linkage between cyberspace and crime was always explicit in the concept of cyberpunk and it framed most dramatic fiction narratives, though the actual term ‘cybercrime’ seems to have emerged in the late 1980s or even early 1990s in the later cyberpunk novels and comic books. [18]
The cultural fusion of cyberspace and crime into mainstream popular culture is largely due to three
generations of movies portraying hackers which infused some of the cyberpunk ideas. The first generation of films was defined by ‘the hack’ into an infrastructural system – The Italian Job, 1969; Die Hard, 1988. They were followed by a second generation of films that were defined by, and romanticised, the gender-specific (male) ‘hacker’ – War Games, 1983. The later second generation films shifted from portraying hacks across communications networks to hacks in different types of virtualised environments, with hackers still young, but less gender specific and less likely to adopt the moral high ground than in earlier films – Johnny Mnemonic, 1995; Independence Day, 1996. The third generation of films was defined by both ‘hacker and hack’ being in virtual environments and is epitomised by The Matrix, 1999, [19] which itself is reputedly based upon Baudrillard’s ideas about Simulacra [20] and simulation. The conceptual links between these images were subsequently reproduced in various similar forms in TV movies, TV programmes, comic books, novels and so on. The significance of these various sources of visual and textual imagery is that ‘contemporary movie and media imagery subconsciously orders the line between fact and fiction’ [21] and these ‘factional’ images (fact and fiction) have crystallised ‘the hacker’ offender stereotype as the archetypal ‘cybercriminal’. [22]
When framed by contemporary cultural reactions to technological change, this ‘factional’ imagery
distorts expectations of cybercrime. The dramatic narrative of the extra-human power that
technological developments can bestow upon possessors and the social change it can create has long
been popular with authors and readers. The Victorian science fiction novels of H.G. Wells [23] and others were written during a time of great social upheaval caused by technological innovation and they described worlds transformed, but also threatened, by new technologies. This tradition continued through to the present day via the works of Brian Aldiss, Aldous Huxley and contemporaries. Interwoven with science fiction was the social science fiction novel, [24] of which the best-known example was probably Orwell’s Nineteen Eighty-Four. [25] In his now classic novel Orwell combined contemporary events with social theory and ideas about technological change to predict a dystopic future that captured the public imagination.
While these creative works set the scene, contemporary cultural reactions to techno/social change
frame the social science commentaries in a number of distinctive ways. First, there is Toffler’s
Future Shock, [26] which describes a fear of the future and the change it brings and which tends to rear its head whenever there is a significant period of technological transformation. Second, through a series of cultural processes described by Furedi [27] and others, the culture of fear imbues the public with an ideological fear of crime that leads to exaggerated expectations of crime and danger – regardless of whether any actually exist. Such a process is not far from Garland’s ‘crime complex’ whereby public anxiety about crime has become the norm and now frames our everyday lives, [28] so that we expect crime to exist regardless, and we are shocked when we do not find it! Simon and, perhaps less explicitly, Garland have gone further to suggest that governments and policymakers tactically use prevailing fears of
(Page 48)
dangerous crime to control a broad range of risks. [29] That this tactic should also be used with cybercrime is of no surprise. Third, and finally, the gap that emerges between the expected threat and the provision of security, displayed, for example, so graphically by the cybercrime conundrum illustrated earlier, is a reassurance gap [30] that needs to be closed. The need for reassurance typically becomes expressed in the form of demands for more ‘police’ action which, of course, the police find it hard to provide because their funding models are usually determined by routine activities based upon the 170-year-old Peelian model of policing dangerousness, [31] a model that has remained similar in principle since the late 1820s, even though it is now performed in more complex late-modern societies.
Myths and the construction of cyberfear – rhetoric vs reality
The science-fiction-moulded conceptualisation of cybercrime has shaped and distorted our expectations of it as being dramatic, futuristic and potentially dystopic. This ‘distortion’ has been reflected in the way that incidents of cybercrime are reported, with the knock-on effect of heightening cyberfear. News reporting tends simultaneously to feed and feed off the public’s lust for ‘shocking’ information, and this endless demand for sensation sustains the confusion of rhetoric with reality to create what Baudrillard has called ‘the dizzying whirl of reality’. [32] By blurring predictions about ‘what could happen’ with ‘what is actually happening’, news reports give the impression that novel events are far more prevalent than they really are. Once a ‘signal event’, such as an example of a new form of cybercrime, becomes reported, then other news sources will feed off the original report. Although signal events may not necessarily constitute a major infraction of criminal law, or even a minor one, they ‘nonetheless disrupt the sense of social order’. [33] They capture the media’s imagination and exert ‘a disproportionate impact upon public beliefs and attitudes when compared with their “objective” consequences’, [34] raising levels of (cyber)fear and sustaining folk myths about the Internet.
Since the early 1990s a number of popular myths about the Internet have become widespread and,
while any factual basis that may have originally contributed to their creation may long since have
disappeared, they still circulate and perpetuate the culture of fear and also distort our understanding
of a new range of issues that are emerging in their place. A paragraph from the abstract of the
House of Lords Science and Technology select committee report conveniently encapsulates this
process. [35] While the report and its evidence section are informative and useful documents, the summary tends to be what journalists pick up on.
But the Internet is now increasingly the playground of criminals. Where a decade ago the public
perception of the e-criminal was of a lonely hacker searching for attention, today’s ‘bad guys’ belong
to organised crime groups, are highly skilful, specialised, and focused on profit. They want to stay
invisible, and so far they have largely succeeded. While the incidence and cost of e-crime are known
to be huge, no accurate data exist. [36]
The phrasing of this paragraph reflects a range of assumptions and prevailing myths: that the
Internet is unsafe; users cannot be trusted and need to be protected from themselves; hackers are all-powerful; hackers have become part of organised crime; hackers are anonymous; they cannot be tracked; they go unpunished; there is no data. With the exception of the last point, which is dealt
with in the next section, they are unpicked and explored below, along with responses and also some
of the new issues that are emerging in their wake.
Cyberspace is pathologically unsafe and criminogenic
The sheer volume of users and increasing volume of person-to-person, person-to-business, business-to-business, government-to-person and government-to-business transactions that take place must be
(Page 49)
testament to the fact that the Internet is currently working. If a good-quality security product is installed and Internet browsers are used wisely, then users will reduce levels of risk.
Despite this, there remains the underlying public concern, propagated by media reportage and
adverse commentary, that the Internet is not only criminogenic, but downright dangerous in that
hackers can still make planes fall from the sky and interfere disastrously with aspects of the critical
infrastructure. Yet, longstanding critical incident management plans mean that very few aspects (if
any) of critical infrastructure are still connected directly to the Internet – if they are, then this is
extremely problematic, if not negligent, and needs to be remedied. The main concerns about infrastructure relate not so much to the environment itself as to the management of the large amounts of critical information within it, especially when it is concentrated within one source such as a
database. This issue was highlighted in November 2007, when CD-ROM disks containing the personal information of 25 million child benefit claimants were lost in postal (not electronic) transit from one public agency to another. [37] The resulting panic not only brought to light other losses, [38] but also illustrated how potentially vulnerable such concentrations of data are and how important security policies are to the politics of personal security. At the time of writing no evidence has yet been presented to suggest that any of the missing data has caused loss through fraud – though there may be a time delay before any frauds are detected.
In addition to outlining potential vulnerabilities, the data losses also emphasise the importance of
maintaining existing security policies when critical functions are increasingly being outsourced,
especially the security of public data when it is shared across the public and private sectors. Yet,
these possibilities should also be part of information management plans. What these incidents do
highlight most of all is that all activities within cyberspace will carry risks, as they do in the terrestrial world, and such risks have to be identified and remedied; however, as the events in the UK have illustrated, the risk lies more in human failings than in the virtual environment.
Users need to be protected from themselves
There seems to be a popular assumption in the security community that users have to be protected
from themselves, from becoming victims or becoming offenders. This view runs counter to the
findings of the British Crime Survey and the Offending, Crime and Justice Survey, [39] which found relatively little personal victimisation and offending. The prospect of third-party intervention also runs against the original end-to-end principles of the Internet, which favoured open communications with end users making the choices as to what to send and receive. [40] Intertwined with the innate distrust of Internet users is the fairly widespread view that not only does the Internet place individuals at risk but it also corrupts normally law-abiding individuals, who go on a moral holiday when on the Internet. The Internet certainly broadens Internet users’ life experience and exposes them to a range of social activity that may be outside the confines of their everyday life. But the evidence of moral usage suggests that the great majority of individuals tend to take their social values with them online. [41]
Furthermore, we must not lose sight of the fact that individual users
should and do take responsibility for their actions and, rather controversially in light of some of the
security reports, the Internet is remarkably ordered if you consider the sheer number of users and
the volume of transactions that take place on it. All the more so because of the various mechanisms
that currently exist to govern online behaviour (described later).
The omnipotent (darkside) hacker or ‘superuser’
The old-style hackers were renowned, and even feared, for their expert knowledge of the workings of communications systems. The mythology surrounding the omnipotent hacker assumes
(Page 50)
that once the ethical hacker’s moral bind has eroded and they go over to the ‘darkside’
then they become a danger to society. The threat of these omnipotent ‘darkside’ hackers is found
everywhere from the cyberpunk literature to the policy debates. But remember that at the height of
hacker mystique in the 1980s and 1990s, overall levels of security were much lower than today. It
was not uncommon, for example, to find systems with a default user identity of ‘Admin’ being
accompanied by the password ‘Admin’. Where security was tighter, the majority of deep-level penetration was, and still is, the result of ‘social engineering’ – persuading those in low-level occupations within an organisation to reveal their legitimate access codes. Yet, despite the evidence that criminals tend to focus their efforts upon the easy victim or ‘low-hanging fruit’, [42] as in the terrestrial world, the cybercrime expert discourse is replete with the construction of what Paul Ohm [43] calls the ‘superuser’, a mythic superhacker who is assumed to be immune to technical constraints and aware of legal loopholes. This ‘superuser’ is the person most feared by policy makers, who, Ohm argues, exaggerate his (the assumption is that he is male) power and ‘too often, overreact, passing overbroad, ambiguous laws intended to ensnare the Superuser, but which are used instead against culpable, ordinary users’. [44]
The old style of hacking characterised by the penetration of computer systems has more or less lost
the social capital, credibility and popularity it once had. Sommer has argued that the hacker myth is today little more than ‘an amusing diversion and [no longer] an opportunity to dust down 20-year-old clichés about teenage geniuses’. [45] Ironically, the omnipotent hacker myth contrasts with a new
style of automated hacking that is potentially more potent in the way that it casts a wide net for
victims by using malicious software (viruses, worms) that install remote administration software,
key-stroke loggers, spyware, etc. These tools are unwittingly downloaded onto computers through
spam, deceptive emails or from fake www sites. Once activated, the software enables others to
obtain access information, and even use the computers remotely. Consequently, identity theft and
account take-over is now of great concern, though it is mostly related to collecting information as a
precursor to credit card fraud rather than what we conventionally understand as hacking. It is not the
result of a direct hack attack, but an electronic trawl or ‘phishing’ expedition. Once the impact of ID
theft is felt beyond the credit card and its attendant bank guarantees then the intrusion into one’s life
can become extremely invasive – a spectre that has the hallmarks for spawning a new series of
myths.
Hackers have become part of organised crime
The alleged link between hackers and organised crime has always lacked conclusive proof;
however, what is understood as constituting organised crime can vary considerably. Views vary
from three or four individuals working together to commit a crime and then dissipating, to massive
‘mafia’ type organisations with clearly defined lines of command and control. The term ‘organised
crime’ invokes imagery of the latter rather than the former. In her study, Brenner predicted that
organised criminal activity on the Internet would more likely manifest itself in ‘transient, lateral and
fluid’ forms, as networks of criminals, [46] rather than replicate the ‘gang’ and hierarchical US ‘Mafia’ models of organised criminal activity found offline in the terrestrial world, mainly because those models evolved in response to real-world opportunities and constraints that are largely absent in cyberspace.
In support of Brenner’s 2002 prediction, there have since been a number of examples of the
emergence of new forms of online organisation. The first is the finding in 2004 by the German magazine c’t, following the botnet explosion in 2003/4, [47] that virus writers had been selling the IP addresses of computers infected with their remote administration Trojans to spammers. [48] The second was in June 2005 when NISCC (the National Infrastructure Security Co-ordination Centre) warned users about ‘a highly sophisticated
(Page 51)
high-tech gang’ reputed to be located in the Far East using various means, including botnets, to infect sensitive computer systems to steal Government and business secrets. [49]
The final example is ‘Operation Firewall’, which led to the investigation and prosecution of ‘Shadowcrew’, an international identity theft network that hosted online forums that shared information about stealing, trading and selling personal information that could be used to commit frauds. The various reports of the investigation and prosecution illustrate how different the groups were in terms of their networked organisation. The head of e-crime at SOCA [50] observed that the Shadowcrew worked ‘remotely, without ever needing to meet’, which is ‘typical of how the new e-crime networks operate compared to the old-style “top down” organised crime groups’. [51] These groups have a very detailed division of labour with specific skill sets rather than the ‘usual pyramid structure’. One person would provide the documents, ‘another would buy credit card details, another would create identities while another would provide the drop address’. [52] The key difference is their networked structure and global reach. Together these examples detail the relatively new forms of networked criminal organisation that depart from traditional thinking about hierarchically organised crime.
Criminals are anonymous and cannot be tracked
While it is true that individuals can use false identities to go online, one of the more stunning and
frequently overlooked features of networked technologies is that every move online can be tracked and the ‘mouse droppings’, as they are called, leave a data trail behind. The issue is therefore not so much one of anonymity, but one of investigators having the human and technological resources available to follow the digital trail. See, for example, the case of the ‘Shadowcrew’ investigations mentioned earlier. The members of the ring had never met in person and thought their participation was anonymous, but the ring had been penetrated by the US Secret Service and the UK’s SOCA, who tracked their activities electronically and remotely. [53] We are actually witnessing the ‘disappearance of disappearance’ [54] because we cannot hide any more, only disguise immediate identities, and even then our online behaviour patterns leave ‘signatures’ that can allegedly be traced with the right technology.
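To make the ‘data trail’ point concrete, the short Python sketch below parses web-server access-log entries written in the widely used Common Log Format; the two log lines, addresses and paths are invented for illustration and do not come from the article or any real investigation. Each entry already ties an address to a time and an action, which is precisely the kind of ‘mouse dropping’ that an investigator with sufficient resources can follow.

import re

# Two invented access-log lines in the Common Log Format used by many web
# servers; the IP addresses, timestamps and paths are hypothetical.
log_lines = [
    '203.0.113.9 - - [12/Dec/2007:10:15:32 +0000] "GET /forum/post?id=42 HTTP/1.1" 200 5120',
    '198.51.100.23 - - [12/Dec/2007:10:16:01 +0000] "POST /login HTTP/1.1" 302 512',
]

# ip, two unused fields, [timestamp], "request", status code, bytes sent
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d{3}) \d+'
)

for line in log_lines:
    match = LOG_PATTERN.match(line)
    if match:
        # Every request links an address to a time and an action: a data trail.
        print(match.group('ip'), match.group('when'), match.group('request'))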
Criminals go unpunished and get away with crime
The low report-to-prosecution rate observed at the beginning of this article could be interpreted to suggest that there is some substance to this statement. However, low reporting-to-prosecution rates are to be found with nearly all aspects of crime, terrestrial or online. Is something being overlooked here? Do cybercrimes have different qualities perhaps? While it is important to note here that some types of Internet offending may be proceeded with under other bodies of law, this displacement will not explain such low prosecution rates. Earlier research into offences reported to, and recorded by, the police found very few related to the Internet. [55]
Furthermore, not only is the nature of cybercrime
victimisation considerably different (see later), because there are more victimisations spread over a broader area, but, as the example of ‘Operation Bot Roast’ described later illustrates, there are also
relatively fewer offenders than at first appears because the technologies give criminals a wider
reach.
This critical appraisal of prevailing assumptions is important: in an information age in which technological development and its associated thinking change very quickly, we have continually to subject our conventional wisdoms to critical appraisal, because an acceptable position of, say, two years ago may have changed by the present time. Such appraisal is an important tool in the dispelling of
myths, because they are at their most destructive when they triumph over reason:
(Page 52)
... myths are not like truths: they are the triumph of credulity over evidence; there is a minimum of
evidence with which it must comply, if it is to live; but once lip service has been paid to that
undeniable minimum, the human mind is free to indulge its own capacity for self-deception. [56]
While Trevor-Roper was describing here the deliberate creation and manipulation of myths for
malicious reasons, his observation about their effects is still very pertinent and reminds us how hard
myths are to dissipate. Understanding change as it happens around us requires new types of
methodological thinking and is a challenge to be faced. After all, the paragraph cited earlier from
the House of Lords select committee report does imply in the last sentence that the statements are
not based upon ‘accurate data’ and must therefore be assumptions.
Tensions in the production of criminological knowledge about cybercrimes
One of the principal reasons for the persistence of Internet myths is the tensions that currently exist in the ways that knowledge about cybercrimes is produced, tensions which impede a full perspective.
The existence of multiple discourses about cybercrime – legislative–academic–expert–popular
Cybercrime means different things to different groups of people and so it is viewed differently by
each in terms of their respective functions. Contemporary debates about cybercrime encompass a
range of legal–political, technological, social, and economic discourses that have led to some very
different epistemological constructions of cybercrime. The legal–administrative discourse explores
what is supposed to happen by establishing and clarifying the rules that identify boundaries of
acceptable and unacceptable behaviour; the criminological and general academic discourse mainly
provides an informed analysis about what has happened, and why; the security expert identifies
what is actually happening and what, from their own perspective, the tactical solutions might be,
and what will happen if those solutions are not adopted; the popular–lay discourse reflects what the
person on the street thinks is happening. The security expert discourse is currently the most influential, yet prescient, of the various discourses because it draws upon self-collated statistical information (see next subsection) to form predictions about the future.
The disintermediation of news sources and statistical information
The networking of information and the proliferation of different and alternative news sources have disintermediated the reporting of news. Today, editors do not exercise the overall level of editorial control over the news process that they once did, so that sources of information are no longer subjected to the same balances and checks. [57] Not only has the editorial top-down gate-keeping role lost the control it once exercised over the news process, but there has also taken place ‘a major restructuring of the relationship between public and media’, with the effect that public discourse has become disintermediated because the public can now directly access politicians and the political process, and vice versa. [58]
Just as news sources have become disintermediated, so have the ways that statistics about cybercrime are collected. Without commonly applied standards there are arguably fewer checks on information quality and misinformation can be circulated. Not only are there disagreements about what constitutes cybercrime and therefore what is included in statistical compilations, but there are also no reliable centralised collection points, such as the police. The importance of having reliable statistical information was highlighted by the UK All Party Internet Group in May 2004 when it reviewed the Computer Misuse Act 1990. The
(Page 53)
group found that without understanding exactly how big a problem cybercrime is, there is no political pressure to deal with the issue and the police will not therefore be adequately resourced. [59] This view, however, assumes that cybercrime is a problem that the police can deal with
effectively!
The primary source of statistical information tends to be the cybercrime security industry, which
arguably has an interest in perpetuating the illusion of high levels of crime – to which end
Rosenberger [60] rather cynically stated: ‘would umbrella manufacturers predict good weather?’ The fact is that these statistics do exist and in many cases they are all we have. Therefore, it is arguable that if we understand what they represent, are aware of their limitations and know how they were collected, then we can use them within that specific context. Until we develop new methodologies, or even just establish standard protocols for collecting statistics, they will remain open to misinterpretation. To this end, a very useful case in point is the report of the ‘212,101’ threats mentioned in the cyber-conundrum earlier. These data are not actually personal reports by net users of their victimisation; rather, the 212,101 statistic represents infringements of scientific rules, not the law per se, which are automatically reported by Symantec’s proprietary brand of security software and recorded on a database. The journalists covering the stories have tended to reduce Symantec’s important phrase ‘new malicious code threats’ [61] to the word ‘threats’, [62] and in so doing completely change the meaning of the findings to imply general threats. [63] Consequently, if industry-generated statistics are used to indicate levels of offending, they will be greatly misleading, because they over-report the problem; however, if the methodologies behind those same statistics are understood then they may be of some use when describing, for example, levels of potential online risk, or data trends over time.
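To make the measurement point concrete, the following is a minimal Python sketch with wholly hypothetical records (it is not Symantec’s telemetry, data or format). It separates the two quantities that the news coverage collapsed into the single word ‘threats’: counts of newly detected malicious-code signatures, and counts of users actually reporting a loss.

# Hypothetical event records, for illustration only.
events = [
    {'kind': 'new_signature', 'name': 'W32.ExampleWorm.A'},
    {'kind': 'new_signature', 'name': 'Trojan.ExampleSpy.B'},
    {'kind': 'new_signature', 'name': 'W32.ExampleWorm.A'},   # same signature seen again
    {'kind': 'victim_report', 'user': 'user-17', 'loss_gbp': 250},
]

# What the vendor report counted: distinct new malicious code signatures.
new_signatures = {e['name'] for e in events if e['kind'] == 'new_signature'}

# What the word 'threats' was read as implying: people reporting victimisation.
victim_reports = [e for e in events if e['kind'] == 'victim_report']

print('new malicious code signatures:', len(new_signatures))   # prints 2
print('victim loss reports:', len(victim_reports))             # prints 1

The two totals measure different things, which is why quoting the first as though it were the second over-reports the problem while saying little about actual victimisation.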
Victimisation surveys are much more reliable methods of measuring levels of victimisation than
either reporting statistics or the numbers of crimes recorded by the police, and such surveys do exist. The findings do, however, tend to show some marked contrasts, though for explainable reasons that relate to the victim groups being canvassed. Different Internet constituencies have different experiences of victimisation. Surveys of business victims, on the one hand, such as the DTI [64] survey in the UK and the CSI [65] survey in the USA, report high rates of business victimisation. This victimisation does not tend to get reported to the police because of the commercial sensitivity of the losses and the victims’ preference to deal with the matter privately. The few surveys of individual victimisation, including the British Crime Survey, on the other hand, show very low levels of overall victimisation that individuals consider serious enough to warrant action, which is consistent with the earlier observations. [66] These differences in findings are understandable and communicable; however, the media reportage focuses only upon the more sensational findings, notably the results of the business surveys, and generalises them to all Internet usage. [67]
Under-reporting by victims
Following on from the above, we have the curious phenomenon of the simultaneous over-reporting
and under-reporting of cybercrimes. Some of the automatically reported threats will, of course,
eventually result in actual victimisations, for example, if personal information stolen by Trojans is
subsequently used to defraud the owner of the data. Yet there exist a number of distinct reasons why
these crimes are under-reported. The fraud, for example, may be reported straight to the bank and may not ever appear as an Internet-related statistic. Even when there is a clear Internet link, individuals may be embarrassed to report their victimisation, or their loss may be small. Otherwise, the dangers posed may not be immediately evident, or not regarded as serious by the victim, or the loss may genuinely not be serious. Alternatively, it may be the case, as with credit card frauds, that police refer reportees back to their banks, who are viewed as the real victims. Where the victims are corporate entities, such as banks, reporting losses may expose
(Page 54)
a commercial weakness and threaten their business model, which raises clear conflicts between the private and the public justice interest with regard to cybercrimes.
It may also be that the consequences of being victimised are not immediately apparent to victims.
For example, computer integrity cybercrimes such as hacking or identity theft are often precursors
for more serious offending. The information gathered may later be used against the owner, or
crackers may use remote administration Trojans to control the computers of others. Similarly,
computer-related cybercrimes, such as Internet scams, may seem individually minor in impact to
each victim, but serious by nature of their sheer volume. Finally, computer content crimes may seem less significant because the content is informational, but it may nevertheless be extremely personal or politically offensive, or could even subsequently contribute to the incitement of violence or prejudicial actions.
Low levels of offender profiles
Because so few cybercrimes are reported, relatively little is known about the profiles of offenders, especially those who commit the small-impact, bulk victimisations, although these profiles are being compiled as offenders get identified and apprehended.
Jurisdictional disparities in terms of (a) definitions of cybercrime (b) levels of co-operation
between police
Disparities in legal coding across jurisdictions can frustrate law enforcement efforts, despite
attempts by the likes of the Council of Europe Cybercrime Convention to harmonise laws. [68] Pan-jurisdictional idiosyncrasies in legal process can also interfere with levels of inter-jurisdictional police cooperation.
Lack of common definitions of cybercrimes
Despite there being fairly common agreement that cybercrimes exist, there is an absence of consensus as to what they actually are, which enables confusion to thrive.
Low level of public knowledge about risks
Because of an overall lack of public knowledge about the real risks of cybercrimes, those who are
not discouraged from going online are often unable to make informed choices about the risks that
they may face, especially where the threat is new.
The tensions in the production of knowledge about cybercrime outlined above explain the sporadic
nature of media reportage and also why the ‘signal event’ (mentioned earlier) has gained in
significance. A useful example of a signal event was cyber-stalking, which became a matter of
concern in the early 2000s when the issue of stalking became mixed up with Internet grooming. A
few cases, followed by news reports, sparked off a viral flow of information across the ‘blogosphere’, which created a minor panic that, without any concrete evidence, led to demands for anti-stalking legislation – demands that were ultimately resisted. The concerns continue to be expressed today because debates over stalking and Internet grooming remain tied to the more general concern over online vulnerability. Recent examples of the power of viral information flows [69] across the blogosphere causing mass public reaction have been the Northern Rock financial panic [70] in October and November 2007, and also the widespread public concerns throughout 2007 over child abduction following the disappearance of Madeleine McCann in Portugal. [71]
(Page 55)
The ‘real realities’ of cybercrime
If reality is ‘the state of things as they actually exist’, [72] as opposed to the ‘dizzying whirl of reality’, then the reality of cybercrimes is that network technologies have transformed crime in the following distinctly different ways. [73]
Networked technologies have contributed to the reorganisation of criminal labour online
Criminal labour, like labour in the work process, has been prone to rationalisation in an effort to make it more efficient and cheaper to deliver. Such a process begins by dividing the work into component tasks and then automating them where possible. This process deskills the individual components of the task; however, in so doing, it eventually results in the reskilling of the individuals, who then use technology to control the whole process. Network technologies now enable one or possibly two people to control a whole criminal process. A good example of this transformation is to compare the complexities of organising a US$40 million bank robbery with the organisation of 40 million US$1 thefts using computers. [74]
The bank robbery is high risk with a relatively low return on
investment and takes much organisation, whereas computer crimes, in contrast, are low risk with a
high return on investment and one person can theoretically control the whole process.
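The arithmetic behind this comparison can be set out in a few lines of Python. The per-case investigation threshold below is hypothetical (it is not a figure from the article); the sketch simply shows how the same aggregate loss looks to investigators when it is concentrated in one victim as opposed to being dispersed across forty million.

PER_CASE_THRESHOLD = 5_000   # hypothetical loss (US$) below which a case is treated as de minimis

scenarios = {
    'one bank robbery':   {'victims': 1,          'loss_per_victim': 40_000_000},
    'bulk online thefts': {'victims': 40_000_000, 'loss_per_victim': 1},
}

for name, s in scenarios.items():
    aggregate = s['victims'] * s['loss_per_victim']
    # Only cases whose individual loss clears the threshold attract investigation.
    investigable = s['victims'] if s['loss_per_victim'] >= PER_CASE_THRESHOLD else 0
    print(f"{name}: aggregate loss US${aggregate:,}; "
          f"cases above the US${PER_CASE_THRESHOLD:,} threshold: {investigable:,}")

Both scenarios involve the same US$40 million aggregate loss, but the bulk scenario produces no single case large enough to justify investigation, which is the de minimis problem discussed later in relation to policing.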
Two recent events emphasise the scale of the above transformation in criminal labour. In June and November 2007, the US FBI’s anti-botnet crime initiatives, ‘Operation Bot Roast I & II’, [75] broke up botnets that exploited up to two million infected computers. An FBI report [76] on this initiative, subsequently reported by the media, claims that more than US$20 million in losses were reported by victims. As part of ‘Operation Bot Roast’, the New Zealand police questioned a teenager, known as ‘Akill’, who was thought to be the ringleader of an international cyber-crime group alleged to have infected more than one million computers and stolen millions of dollars from people’s bank accounts. [77] In a further case, in December 2007, an American computer security consultant, using the name ‘Acid’ or ‘Acidstorm’, pleaded guilty to using botnets to illegally install malware (malicious software) on more than 250,000 computers to steal online banking identities and later transfer money out of his victims’ accounts. He was also found to have taken US$19,000 in commission from a third party for installing adware on 110,000 end users’ machines without their permission. [78] These cases demonstrate two points made earlier: the globalised extent of the offender’s reach, and also that while individual losses may be relatively small, [79] the aggregate loss is relatively high due to the volume of offending.
We have experienced the evolution of cybercrime from traditional to sui generis cybercrimes
through three generations
The first generation of cybercrimes were traditional or ordinary crimes that used computers to
communicate or gather precursor information that assists in the organisation of a crime. Remove the
Internet and the behaviour continues because offenders simply revert to using other forms of
available communication or information gathering. The second generation are the hybrid cybercrimes, or traditional (existing) crimes for which network technology has created entirely new global opportunities. Take away the Internet and the behaviour continues by other means, but not in such great volume or upon such a global scale. In contrast to the earlier generations, the third generation are true cybercrimes, solely the product of the Internet – remove it and they vanish. True cybercrimes are, like cyberspace, informational, globalised and networked. [80]
This latter generation of true cybercrime includes spamming, online identity theft and variations of intellectual property piracy. [81]
One of its defining characteristics is that it utilises
(Page 56)
malicious software (viruses and worms) to automate victimisation. Offenders are now employing spammers, who employ hackers and virus writers to write the scripts to do the spamming that launches the hacking software for them! This is a new world of low-impact, multiple-victim crimes that creates de minimis problems for law enforcement and for the policing of offenders.
Since 2002–3 there has emerged a new generation of automated cybercrime caused by malicious
viruses and worms, spread by spam, which can render infected computers susceptible to remote
administration. These ‘botnets’ of infected computers can be used to commit a range of networked computer offences, of which Phishing [82] (stealing personal financial information) is perhaps the best known. These new forms of offending are evolving quickly; see, for example, the evolution of Phishing (using emails) to Pharming [83] (DNS cache poisoning that directly connects to the www page once the email is opened), into SMiShing [84] (using SMS texting) and Vishing [85] (using Voice over Internet Protocol).
Cybercrimes fall into three groups of crime with different characteristics (integrity, assisted and
content crime)
Cybercrimes include three main offending patterns. The focus of the offending can be the integrity of the system (hacking), the computer can be used to assist the commission of an offence, or the content of the computer itself can be the object of the offending. Each is represented by specific bodies of law and generates different crime and technology discourses. [86]
By understanding the way that technology has changed the nature of criminal labour, and by discerning between these three generations of cybercrime and also the three different groups of behaviour, we can begin to make sense of some of the differences in understanding cybercrime. Whereas criminal justice systems tend to see cybercrimes as ‘familiar’ crimes, already the subject of legislation, that use the Internet, there is a new generation of crimes of the Internet that, by nature of being informational, global and networked, mainly comprises small-impact, multiple victimisations.
What are the challenges that cybercrimes pose for criminal justice?
For the reasons outlined above, cybercrimes tend to be missed by the Criminal Justice radar. This
omission introduces a number of challenges for the criminal justice system to overcome, resolve or
in some circumstances accept. Of these, the under-reporting of cybercrimes (those which are
deemed serious by victims) was discussed earlier.
Despite the existence of applicable bodies of law backed up by international harmonisation and
police co-ordination treaties, the characteristics of cybercrimes that are serious enough to get reported conspire to impede the traditional investigative process. Simply put, their informational, networked and globalised qualities cause them to fall outside the traditional localised, even national, operational purview of police. They are clearly different from the regular police crime diet, which is one reason that they can evade the criminal justice gaze. On the few occasions where cybercrimes are familiar to the routine police diet, it is often the case that the computer misuse component of the offending gets dropped in favour of a charge for the offence for which the computer was used. For the most part, however, cybercrimes tend to be too individually small in impact (de minimis) [87] to warrant the expenditure of finite police resources in the public interest. Also, because they fall outside routine police activities, the police accrue little general experience in dealing with them as a mainstream crime. This becomes additionally problematic when disparities in legal coding across jurisdictions conspire to frustrate law enforcement initiatives. The big question here is: how might these challenges be addressed?
(Page 57)
The digital realism of policing cybercrime
In the process of governing behaviour online through policing, the direct impact of law is limited as
it is found to be only one of a number of factors that can shape behaviour. Law is important, but its
regulatory effect has to be understood alongside other factors such as social values (shaping
behaviour through information and education), market forces (shaping behaviour by manipulating
the laws of supply and demand) and technological devices (shaping behaviour by manipulating the
architecture of the environment in order to restrict opportunity). These factors prescribe the digital realism of cybercrime.
The digital realism of networked technologies is such that the same technologies that give rise to criminal opportunity can also be used to police it. [88] Examples of such technological methods range from spam filters, to honeynets (www sites that attract and entrap offenders), [89] through to optical character recognition and sophisticated online surveillance techniques. There is a distinct danger here in reducing law to scientific rules in software, because then the spectre of ubiquitous and automated policing arises, which coldly applies rules without the prospect of discretion. So, the application of technology has a number of ramifications, not least that it accelerates the onset of the actuarial control society away from the current disciplinary society. In other words, the concern is that when technology is used to replace discretion (without checks and balances), individuals may come to be judged by what they might or could do, rather than by what they have done, and disciplined by exclusion. Evidence of this change from a disciplinary society to a control society can increasingly be seen in the gradual shift in the burden of proof and the increase in the number of strict liability offences.
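The concern about reducing law to ‘scientific rules in software’ can be illustrated with a deliberately crude Python sketch of a rule-based message filter. The phrases, weights and threshold are hypothetical and do not describe any real product; the point is simply that once a rule is encoded it is applied automatically and uniformly, with no room for the discretion a human decision-maker might exercise.

# Hypothetical scoring rules - not taken from any real filtering product.
SUSPICIOUS_TERMS = {'unclaimed prize': 3, 'verify your account': 3, 'wire transfer': 2}
BLOCK_THRESHOLD = 4   # hypothetical score at or above which a message is blocked

def score(message):
    """Sum the weights of suspicious phrases found in the message."""
    text = message.lower()
    return sum(weight for term, weight in SUSPICIOUS_TERMS.items() if term in text)

def decide(message):
    # The 'rule' is applied mechanically: no context, no discretion.
    return 'blocked' if score(message) >= BLOCK_THRESHOLD else 'delivered'

examples = [
    'Please verify your account to release your unclaimed prize',             # scam-like: blocked
    'Minutes of the payments meeting, including the wire transfer schedule',  # delivered
    "Fraud team briefing: 'verify your account' and 'unclaimed prize' scams", # legitimate, yet blocked
]
for message in examples:
    print(decide(message), '-', message)

The third example is a legitimate message that the encoded rule blocks anyway, which is the automated, discretion-free outcome that the paragraph above warns about.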
The increased use of technology to police populations creates added strain on finite police resources
by unearthing evidence of many new, mostly minor, offences – previously invisible – which
nevertheless have to be investigated by police and, thus, draw officers away from the primary police
mission and create a further division of policing labour. Also, there is the temptation to investigate the offences that are the easiest (the low-hanging fruit). [90] In such exercises there is a specific legal need to discern between collecting evidence of wrong-doing that can be used in court and collecting criminal intelligence that cannot, but which can lead to further investigation. Furthermore, ‘ubiquitous’ online policing through surveillance creates the potential problem of not being able to call the police to account for their actions – quis custodiet ipsos custodes? Yet, with a twist of fate, the same
networked technologies that create criminal opportunity and which can be used to police online
behaviour, can also be used to police the police because they also cannot escape from technology’s
panoptic gaze. The same technologies can also assist with the implementation of police reform.
Clearly, cybercrimes are characteristically incompatible with traditional routine police practice.
Despite being in the twenty-first-century information age, the police still continue to work mainly along the lines of their 170-year-old public mandate to regulate the ‘dangerous classes’. Hence the understandable focus upon policing dangerous individuals such as paedophiles, child pornographers, fraudsters and terrorists who threaten the infrastructure. However, this is not to say that cyberspace goes unpoliced because, as in the terrestrial world, ‘not all policing lies in the police’. [91] Nor is it necessarily the case that police activity is either inefficient or ineffective. Rather, the public police role has to be understood within a broader and largely informal architecture of networked Internet policing that not only enforces laws, but also maintains order through different layers of governance online. [92]
Internet users and user groups, for example, maintain online behaviour through the application of
moral censure. Virtual environment security managers are collectively emerging as a new stratum of behaviour governors. They ‘police’ the behaviour of the online community according to its particular norms and can apply a range of sanctions from censure to temporary
(Page 58)
or permanent withdrawal of access rights. Network infrastructure providers (ISPs) draw upon the terms and conditions of their contracts with clients. The ISPs themselves are also subject to the terms and conditions laid down in their contracts with the telecommunications providers who host their services. Corporate security organisations preserve their corporate interests through contractual terms and conditions, but also use the threat of removal of privileges or the threat of
private (or criminal) prosecution. Non-governmental, non-police organisations, such as the Internet
Watch Foundation, act as gatekeepers by accepting and processing reports of offending and then passing
them on (mostly related to obscenities), as well as contributing to cybercrime prevention and public
awareness. Governmental non-police organisations use a combination of rules, charges, fines and
the threat of prosecution. Not normally perceived as ‘police’, they include agencies such as
Customs, the Postal Service and Trading Standards. A higher tier of these agencies also oversees
and enforces national Internet infrastructure protection policies. Public police organisations, as
stated earlier, therefore play only a relatively small, but nevertheless significant, role in imposing
criminal sanctions upon wrongdoers. Although located within nation states, the public police are
joined together in principle by a tier of transnational policing organisations, such as Europol and
Interpol.
Conclusions: is the absence of evidence, evidence of its absence, or just evidence of something
else?
To return to the cyber-conundrum outlined earlier and also to respond to the ‘Rumsfeldisms’: the
absence of evidence in terms of the low number of prosecutions is certainly not evidence of the
absence of cybercrimes. Cybercrimes certainly do exist, but they are being looked at through the
wrong lens. Their digital realism is paradoxically different to that which the mythology of
cybercrime predicts. Why that should be the case lies within the initial conceptualisation of
cybercrime. Its science fiction origins, along with competing media processes, framed cybercrime in a language of prospective technological impacts that distorts our understanding of it – in the absence of reliable information. The reporting of actual, and especially novel, events therefore becomes dramatised, propagates the various myths circulating about cybercrime and reinforces the culture of fear. Add these distortions to the construction of crime forged by Peelian concepts of
policing and criminal justice, and a popular view emerges of cybercrimes as dramatic, highly
prevalent, dystopic and catastrophic events – events that the police, as governmental protectors of
peace and enforcers of law, are expected to deal with, but for good reasons often cannot. In short,
cybercrimes scare us and we expect to be scared by them, a fear that is made worse by the gap that
has opened up between our expectations of cybercrime and our expectations of Internet security.
Any attempt to close this ‘reassurance’ gap tends to be thwarted by tensions in the production of our
knowledge about cybercrimes which serve to perpetuate both the culture of fear about cybercrime
and also the various myths that have emerged about it.
If there is a good side to this debate, it is that all parties are gradually learning more about the
impact of networked technologies upon criminal behaviour through research findings. In addition,
the public (and private) police are also establishing a corpus of policing experience in the field
because of the maturation of the various hi-tech crime units at national and regional levels within
the police services. Legislation is also being tuned to the problems of today through revisions. [93]
The key to formulating effective responsive strategies to cybercrime is to understand the different
perspectives that the different actors in the field of cybercrime bring to the subject rather than see
them in binary terms as either right or wrong. See, for example, the different, but real, experiences
of the business community and the individual user. It is also crucial to
(Page 59) hold realistic expectations of what the police can and cannot do. This includes accepting
that not all policing lies with the police because the policing function is also to be found within
other nodal and networked structures of order. In an ideal world, the governance of online
behaviour should therefore be designed to assist and strengthen the Internet’s natural inclination to
police itself, keeping levels of intervention relevant while installing appropriate structures of
accountability. Remember that the same networked technologies that create criminal opportunity
can also be harnessed to police cybercrime, and that the bodies policing the Internet include the
many other nodes in the networked model of Internet policing outlined earlier. Without a
framework of accountability there arises the uncomfortable prospect of over-policing through
technology, which is all the more worrying if the basis of the policing exercise is the application of
scientific rather than legal rules. We therefore need to be clear about where we set the balance
between the need to maintain order online and the need to enforce law; until this balance has been
achieved, the cybercrime ‘reassurance gap’ will not be closed.
Notes
1. This article is based upon a paper that was first delivered at the ‘Media and Insecurities’ CRIMPREV
meeting, 12–13 October 2007, Faculty of Criminal Justice, University of Maribor at Ljubljana, Slovenia. It
was later delivered in more advanced form at the Max Planck Institute, Freiburg im Breisgau, Germany, 11
December 2007. The primary reference point for this article is D.S. Wall, Cybercrimes: The transformation of
crime in the information age (Cambridge, Polity, 2007). All of the definitions are derived from the book’s
glossary at pp. 219–232. Specific and additional references are also included.
2. Symantec Internet Security Threat Report, Trends for January–June 07, Vol. XII (Cupertino, CA:
Symantec, September 2007), 7 and 77.
3. The ‘212,101’ statistic was also covered by many of the reliable sources of information including the BBC,
‘Hi-Tech Crime “is Big Business”’, BBC News Online, 17 September 2007, http://
news.bbc.co.uk/1/hi/technology/6998068.stm (accessed 10 March 2008); The Observer, see N. Mathiason,
‘The Guardian of Cyberspace’, The Observer, 23 September 2007, http://
www.guardian.co.uk/business/2007/sep/23/technology.interviews (accessed 10 March 2008); Fox News.com,
‘Symantec: Online Criminals Now More Organized, Professional’, Fox News.com, 17 September 2007
http://www.foxnews.com/story/0,2933,297063,00.html (accessed 10 March 2008). The extent and variation of
the coverage can be seen by simply doing a Google search for ‘212,101’.
4. See Hansard, 26 March 2002: column WA35; also Wall, Cybercrimes, 54.
5. Evidence of the Ministry of Justice to the 2007 House of Lords Science and Technology select committee
enquiry into ‘Personal Internet Security’. Reported in BBC, ‘Lords Offer New Angle on E-Crime’, BBC News
Online, 24 April 2007, http://news.bbc.co.uk/1/hi/technology/6589137.stm (accessed 10 March 2008).
6. See among others, R.G. Smith, P.N. Grabosky and G. Urbas, Cyber Criminals on Trial (Cambridge:
Cambridge University Press, 2004).
7. J. Leyden, ‘Security Fears Stymy Online Sales: Nothing to Fear But Fear Itself’, The Register, 17
December 2007, http://www.theregister.co.uk/2007/12/17/e_commerce_security_fears/ (accessed 10 March
2008). Also see further the survey of Internet users by Get Safe Online, http://www.getsafeonline.org/
(accessed 10 March 2008).
8. House of Lords, Personal Internet Security, vol. I: Report, Science and Technology Committee, 5th Report
of Session 2006–07, HL Paper 165–I, 10 August 2007 (London: The Stationery Office, 2007), 6. Available at
http://www.publications.parliament.uk/pa/ld200607/ldselect/ldsctech/165/165i.pdf (accessed 10 March
2008).
9. For example, the Corporate IT Forum which ‘was initiated in 1996 by some of the largest corporate users
of IT creating a confidential, vendor-free environment in which IT professionals could exchange practical and
strategic intelligence’ (http://www.tif.co.uk/); see further R. Cellan-Jones, ‘Government “Failing on E-
Crime”’, BBC News Online, 5 December 2007, http://news.bbc.co.uk/1/hi/technology/7128491.stm (accessed
10 March 2008). (Page 60)
10. Home Office, The Government Reply to the Fifth Report from The House of Lords Science and
Technology Committee, Session 2006-07 HL Paper 165, Personal Internet Security, October 2007. Available
at http://www.official-documents.gov.uk/document/cm72/7234/7234.pdf (accessed 10 March 2008).
11. Reported in J. Allen, S. Forrest, M. Levi, H. Roy and M. Sutton, ‘Fraud and Technology Crimes: Findings
from the 2002/03 British Crime Survey and 2003 Offending, Crime and Justice Survey’, Home Office Online
Report 34/05, 2005. Available at http://www.homeoffice.gov.uk/rds (accessed 10 March 2008). Also D.
Wilson, A. Patterson, G. Powell and R. Hembury, ‘Fraud and Technology Crimes: Findings from the 2003/04
British Crime Survey, the 2004 Offending, Crime and Justice Survey and Administrative Sources’, Home
Office Online Report 09/06, 2006. Available at http://www.homeoffice.gov.uk/rds/pdfs06/rdsolr0906.pdf
(accessed 10 March 2008).
12. M. Barone, ‘The National Interest: Absence of Evidence is Not Evidence of Absence’, US News & World
Report, 24 March 2004. Available at http://www.usnews.com/usnews/opinion/baroneweb/mb_040324.htm
(accessed 10 March 2008).
13. See generally Wall, Cybercrimes, and references therein.
14. Later reproduced in J. Dann and G. Dozois, eds., Hackers (New York, NY: Ace Books, 1996). Please note
that there are a number of competing claims over the development of these concepts. At the time there was much
discussion about the concepts because they excited participants in the discourse. Regardless of the actual
attribution, the main point is the cultural formation and the links that were made.
15. Writer Bruce Bethke is credited with coining the word ‘Cyberpunk’ in his 1980 story Cyberpunk. See
B. Bethke, ‘The Etymology of “Cyberpunk”’, 1997, available at http://www.brucebethke.com/nf_cp.html
(accessed 10 March 2008).
16. W. Gibson, Neuromancer (London: HarperCollins, 1984).
17. N. Stephenson, Snow Crash (London: ROC/Penguin, 1992).
18. Please note that the examples of books and films given here and below are intended to be representative
and not exhaustive. Choices of particular media can become very personal; the object of the exercise is to
draw conclusions about the types of change they illustrate.
19. In The Matrix, Neo kept his computer disks in a hollowed-out copy of Jean Baudrillard’s Simulacra and
Simulation (Ann Arbor: University of Michigan Press, 1994). Baudrillard’s ideas are reputed to have inspired
the film’s producers and writers, although Baudrillard reportedly grumbled that he thought
they misunderstood his work. See R. Hanley, ‘Simulacra and Simulation: Baudrillard and the Matrix’,
Whatisthematrix, December 2003. Available at http://whatisthematrix.warnerbros.com/rl_cmp/new_phil_fr_hanley2.html
(accessed 10 March 2008).
20. Simulacra (simulacrum in singular) is a term used by Baudrillard (Simulacra and Simulation) to describe a
situation where one can have copies without originals. It introduces a useful language to describe the
construction and dissemination of multi-media materials in computer file format. Simulation captures the
essence of a real object in such a way that it can be used to observe and predict how that object may behave
when subject to changing inputs.
21. F. Furedi, Culture of Fear (London: Continuum, 2002).
22. Wall, Cybercrimes, 16.
23. H.G. Wells’ better known science fiction novels are The Time Machine, 1895; The Island of Dr Moreau,
1896; The Invisible Man, 1897; The War of the Worlds, 1898; The First Men in the Moon, 1901.
24. Social science fiction is a sub-genre of science fiction that focuses upon the forms of society that result
from technological change.
25. G. Orwell, Nineteen Eighty-Four (London: Penguin Books, 1990).
26. A. Toffler, Future Shock (New York: Bantam Books, 1970).
27. Furedi, Culture of Fear.
28. D. Garland, The Culture of Control (Oxford: Oxford University Press, 2001), 367.
29. ‘Governance through crime’ is a discourse in (terrestrial) criminology that locks into the work of Jonathan
Simon and David Garland. For the (terrestrial) debates, see further, J. Simon, Governing through
Crime: How the War on Crime Transformed American Democracy and Created a Culture of Fear (New
York: Oxford University Press, 2007). Also Garland, Culture of Control.
30. M. Innes, ‘Reinventing Tradition? Reassurance, Neighbourhood Security and Policing’, Criminal Justice
4, no. 2 (2004): 151–71.
31. See further, the references in Wall, Cybercrimes, 161. (Page 61)
32. J. Baudrillard, The Consumer Society: Myths and Structures (London: Sage, 1998), 34.
33. Innes, ‘Reinventing Tradition?’, 151.
34. M. Innes, ‘Why Disorder Matters? Antisocial Behaviour and Incivility as Signals of Risk’, paper given to
the Social Contexts and Responses to Risk (SCARR) Conference, Kent, UK, 28–29 January 2005, p. 5.
Available at http://www.kent.ac.uk/scarr/papers/papers.htm (accessed 10 March 2008).
35. Convenient, because I did not have to search far. Despite this rather journalistic abstract, the House of
Lords Science and Technology Select Committee report is a very informative and useful document.
36. Ibid., 6.
37. BBC, ‘The Government’s Explanation for the Loss of Discs with 25 m Child Benefit Records on is Facing
Fresh Scrutiny’, BBC News Online, 22 November 2007. Available at http://
news.bbc.co.uk/1/hi/uk_politics/7106987.stm (accessed 10 March 2008).
38. BBC, ‘Up to 3,000 Patients’ Data Stolen’, BBC News Online, 14 December 2007. Available at http://
news.bbc.co.uk/1/hi/wales/7143358.stm (accessed 10 March 2008). Also BBC, ‘Data of 60,000 on Stolen
Computer’, BBC News Online, 7 December 2007. Available at http://news.bbc.co.uk/1/hi/
northern_ireland/7133194.stm (accessed 10 March 2008).
39. See Allen et al., ‘Fraud and Technology Crimes’; and also, Wilson et al., ‘Fraud and Technology Crimes’.
40. See further J. Saltzer, D. Reed and D. Clark, ‘End-to-End Arguments in System Design’, ACM
Transactions in Computer Systems 2, no. 4 (1984): 277–88.
41. See further Walker and Bakopoulos’s research into young people and chatrooms. R. Walker and B.
Bakopoulos, ‘Conversations in the Dark: How Young People Manage Chatroom Relationships’, First
Monday 10, no. 4 (2005). Available at http://firstmonday.org/issues/issue10_4/walker/index.html (accessed
10 March 2008).
42. B. Ortega, ‘News’, Security & Privacy Magazine 4, no. 6 (2006): 6–9.
43. P. Ohm, ‘The Myth of the Superuser: Fear, Risk, and Harm Online’, University of Colorado Law Legal
Studies Research Paper no. 07-14, 22 May 2007.
44. Ibid., 1.
45. P. Sommer, ‘The Future for the Policing of Cybercrime’, Computer Fraud & Security 1 (2004): 8–12 at
10.
46. S. Brenner, ‘Organized Cybercrime? How Cyberspace may affect the Structure of Criminal
Relationships’, North Carolina Journal of Law & Technology 4, no. 1 (2002), 1–41 at 1.
47. Botnets comprise lists of the Internet Protocol (IP) addresses of ‘zombie’ computers that have been
infected by remote administration tools (malcode) and that can subsequently be controlled remotely. Botnets
are valuable commodities because of the power they can place in the hands of the remote administrators (bot
herders) to deliver a range of malicious software. For this purpose they can be hired out, sold, or
traded.
48. ‘Uncovered: Trojans as Spam Robots’, c’t Magazine, 23 February 2004. Available at
http://www.heise.de/english/newsticker/news/44879 (accessed 10 March 2008).
49. See ‘Targeted trojan email attacks’, NISCC Briefing 08/2005, 16 June 2005. Available at http://
www.cpni.gov.uk/Docs/ttea/pdf (accessed 10 March 2008). Also P. Warren, ‘UK Trojan Siege has been
Running over a Year’, The Register, 17 June 2005. Available at http://www.theregister.
co.uk/2005/06/17/niscc_warning/ (accessed 10 March 2008).
50. See further http://www.soca.gov.uk: ‘The Serious Organised Crime Agency (SOCA) is an Executive Non-
Departmental Public Body sponsored by, but operationally independent from, the Home Office. The Agency
has been formed from the amalgamation of the National Crime Squad (NCS), National Criminal Intelligence
Service (NCIS), that part of HM Revenue and Customs (HMRC) dealing with drug trafficking and associated
criminal finance and a part of UK Immigration dealing with organised immigration crime (UKIS). SOCA is
an intelligence-led agency with law enforcement powers and harm reduction responsibilities. Harm in this
context is the damage caused to people and communities by serious organised crime.’
51. L. Rodgers, ‘Smashing the Criminals’ E-Bazaar’, BBC News Online, 20 December 2007. Available at
http://news.bbc.co.uk/1/hi/uk/7084592.stm (accessed 10 March 2008).
52. Ibid.
53. See Wall, Cybercrimes, 80; Rodgers, ‘Smashing the Criminals’ E-Bazaar’; also E. Parizo, ‘Busted: The
Inside Story of “Operation Firewall”’, SearchSecurity.com, 28 November 2005. Available at
http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1146949,00.html (accessed 10 March
2008). (Page 62)
54. K. Haggerty and R. Ericson, ‘The Surveillant Assemblage’, British Journal of Sociology 51, no. 4 (2000):
605–622.
55. Wall, Cybercrimes, 164.
56. H. Trevor-Roper, The Last Days of Hitler (London: Pan Books, 1972), 259.
57. It must be noted, however, that news sources in the pre-disintermediation era were the subject of criticism for the
opposite reason – that editors exercised too much control and, in so doing, applied their own value
systems!
58. See further R. Sambrook, ‘How the Net is Transforming News’, BBC News Online, 20 January 2006.
Available at http://news.bbc.co.uk/1/hi/technology/4630890.stm (accessed 10 March 2008).
59. APIG chairman Derek Wyatt, cited by M. Broersma, ‘Boost UK Govt Cybercrime Resources’,
ComputerWeekly, 17 May 2004. Available at http://www.computerweekly.com/Articles/2004/
05/17/202467/boost-uk-govt-cybercrime-resources.htm (accessed 10 March 2008).
60. R. Rosenberger, ‘Would Umbrella Manufacturers Predict Good Weather? (Part 2)’, Vmyths, 17 August
2001. Available at http://www.vmyths.com/column/1/2001/8/17/ (accessed 10 March 2008).
61. Symantec, Internet Security Threat Report (see note 2), 7 and 77.
62. See note 3, for example, the BBC, ‘Hi-Tech Crime’, or Mathiason, ‘The Guardian of Cyberspace’.
63. This is the case in the BBC and The Observer articles referenced above (note 62).
64. DTI, Information Security Breaches Survey 2004 (London: Department of Trade and Industry, 2004).
65. CSI/FBI, CSI/FBI Computer Crime and Security Survey 2006 (San Francisco: Computer Security
Institute, 2006).
66. In the UK, for example, the British Crime Survey contains Internet crime questions; see Allen et al.,
‘Fraud and Technology Crimes’, and also Wilson et al., ‘Fraud and Technology
Crimes’. In the USA the National Crime Victimization Survey (NCVS) has, or is due to have, similar
questions included.
67. No examples of media reportage were found that described low levels of findings.
68. For details of the convention and the additional protocol, plus the latest on signatories, see COE,
‘Convention on Cybercrime’, Council of Europe, Budapest, 23 November 2001 (ETS no. 185). Available at
http://conventions.coe.int/Treaty/EN/Treaties/Html/185.htm (accessed 10 March 2008). Also the COE,
‘Additional Protocol to the Convention on Cybercrime, Concerning the Criminalisation of Acts of a Racist
and Xenophobic Nature Committed through Computer Systems’, Council of Europe, Strasbourg, 28 January
2003 (ETS no. 189). Available at http://conventions.coe.int/Treaty/en/Treaties/Html/189.htm (accessed 10
March 2008).
69. Viral information flow is a term that describes how information proliferates across distributed networks
by word of ‘mouse’ (rather than mouth). Such flows are almost viral in the way that they spread
exponentially from node to node across networks. The term ‘viral’ is now used colloquially to
describe the Internet video phenomenon; indeed, such videos are actually called ‘virals’.
70. BBC, ‘Northern Rock Besieged by Savers’, BBC News Online, 17 September 2007, http://
news.bbc.co.uk/1/hi/business/6997765.stm (accessed 10 March 2008).
71. See, for example, Daily Telegraph.co.uk ‘Madeleine McCann’ pages, http://www.telegraph.co.uk/
news/main.jhtml?xml=%2Fnews/exclusions/madeleine/nosplit/madeleine.xml (accessed 10 March 2008).
72. ‘Reality’ entry in the Shorter Oxford English Dictionary.
73. For more elaborate explanations see further, Wall, Cybercrimes.
74. Wall, Cybercrimes, 3.
75. See FBI, ‘Over 1 Million Potential Victims of Botnet Cyber Crime’, FBI Press Release, 13 June 2007.
Available at http://www.fbi.gov/pressrel/pressrel07/botnet061307.htm (accessed 10 March 2008); also news
coverage at D. Goodin, ‘Botmaster Owns Up to 250,000 Zombie PCs: He’s a Security Consultant. Jail
Beckons’, The Register, 9 November 2007. Available at http://www.theregister.co.uk/
2007/11/09/botmaster_to_plea_guilty/ (accessed 10 March 2008).
76. FBI, ‘BOT Roast II: Cracking Down on Cyber Crime’, FBI Headline Archives, 29 November 2007.
Available at http://www.fbi.gov/page2/nov07/botnet112907.html (accessed 10 March 2008).
77. BBC, ‘Arrests Made in Botnet Crackdown’, BBC News Online, 30 November 2007. Available at
http://news.bbc.co.uk/1/hi/technology/7120251.stm (accessed 10 March 2008).
78. D. Goodin, ‘FBI Crackdown on Botnets gets Results, but Damage Continues: 2 Million Zombies and
Counting’, The Register, 29 November 2007. Available at http://www.theregister.co.uk/2007/11/
29/fbi_botnet_progress_report/ (accessed 10 March 2008). (Page 63)
79. The amount that ‘small’ refers to varies according to constituency. Individuals will only write off very
small amounts of money. Corporate entities, however, are alleged to write off much larger sums (£500 was
one sum mentioned) because of the costs of investigation. While this may efficiently resolve the
private interest in an economic sense, it does nevertheless mean that important criminal intelligence is lost.
80. See further Wall, Cybercrimes, chaps. 4–6.
81. See further Wall, Cybercrimes, chaps. 4 and 7, for the reasons why spamming and similar behaviours differ from hybrid cybercrimes.
82. Phishing is the use of Internet communications, e.g. emails, to socially engineer (trick) people out of
personal financial information. Variations include ‘spear phishing’, where specific, rather than blanket, targets
are chosen; other variations are pharming, smishing and vishing.
83. Pharming is an automated version of phishing that does not rely upon social engineering to trick the
recipient because it automatically redirects the recipient to the offending site.
84. Smishing is a form of phishing that uses bulk text messaging facilities to target mobile devices such as
phones or PDAs (personal digital assistants) with urgent text requests for the recipient to call an alleged bank
phone number or log onto a website and change their security information, thereby revealing it.
85. Vishing is another form of phishing that uses VoIP (voice over Internet protocol) to spam recorded
messages to telephone numbers. The VoIP messages purport to be from banks, other financial institutions,
online merchants such as Amazon or Internet auction houses such as eBay, and warn recipients that their credit card has
been used for fraudulent transactions. As with phishing and its variations, recipients are asked to contact a
phone number or log on to a website to verify and change their security information.
86. See further Wall, Cybercrimes, chaps. 4–6.
87. De minimism derives from de minimis non curat lex, ‘the law does not deal with trifles’. It is
used here to describe low-impact, bulk victimisations that cause large aggregated losses spread out
globally across potentially all known jurisdictions.
88. See Wall, Cybercrimes.
89. Honeynets are fake websites constructed to socially engineer offenders into accessing them and showing
intent by wilfully passing through various levels of security, agreeing at each stage that they are aware of the
content and indicating their intent. Offenders eventually find themselves facing a law enforcement message (a
‘gotcha’) and a notice that their details will be recorded or that they will become subject to investigation
in cases where intent is clear. See further, Honeynet Project, Know Your Enemy: Revealing the Security Tools,
Tactics, and Motives of the Blackhat Community (Harlow, Essex: Addison Wesley, 2002).
90. M. Goodman, ‘Why the Police Don’t Care about Computer Crime’, Harvard Journal of Law and
Technology 10 (1997): 465–494 at 484. Also Y. Jewkes and C. Andrews, ‘Policing the Filth: The Problems of
Investigating Online Child Pornography in England and Wales’, Policing & Society 15, no. 1 (2005): 42–62 at
51.
91. R. Reiner, The Politics of the Police, 3rd ed. (Oxford: Oxford University Press, 2000), xi.
92. See Wall, Cybercrimes, chap. 8.
93. In the UK, computer misuse legislation has, for example, been revised to assist the policing of cybercrime.
The Police and Justice Act 2006 amends the Computer Misuse Act 1990 to include DDoS (distributed denial of service) attacks (cl. 40), as
well as increasing the penalty for unauthorised access (cl. 39) and criminalising the making, supplying or
obtaining of articles for use in computer misuse offences (cl. 41) (HL Bill 104, 2005–6).