Deep Web, Dark Web, Dark Net:
A Taxonomy of “Hidden” Internet
Masayuki HATTAa)
Abstract: Recently, online black markets and anonymous virtual
currencies have become the subject of academic research, and we
have gained a certain degree of knowledge about the dark
web. However, as terms such as deep web and dark net, which
have different meanings, have been used in similar contexts,
discussions related to the dark web tend to be
confusing. Therefore, in this paper, we discuss the differences
between these concepts in an easy-to-understand manner,
including their historical circumstances, and explain the technology
known as onion routing used on the dark web.
Keywords: anonymity, deep web, dark web, dark net, privacy
a) Faculty of Economics and Management, Surugadai University. 698 Azu, Hanno, Saitama,
Japan. hatta.masayuki@surugadai.ac.jp
A version of this paper was presented at the ABAS Conference 2020 Summer (Hatta, 2020b).
© 2020 Masayuki Hatta. This is an Open Access article distributed under the terms of the Creative
Commons Attribution License, which permits unrestricted reuse, distribution, and reproduction in any
medium, provided the original work is properly cited.
Annals of Business Administrative Science 19 (2020) 277–292
https://doi.org/10.7880/abas.0200908a
Received: September 8, 2020; accepted: October 16, 2020
Published in advance on J-STAGE: December 5, 2020
Introduction
In recent years, the term “dark web” has become popular. The dark
web, i.e., a part of the World Wide Web where anonymity is
guaranteed and which cannot be accessed without special software,
was, until recently, of interest to only a few curious people. However,
in 2011, the world’s largest online black market, Silk Road (Bolton,
2017), was established on the dark web; together with virtual
currencies, which build on the anonymity the dark web provides
(Todorof, 2019), it has become a topic of economic and business
research. Words similar to “dark web” (such as “deep web” and “dark
net”) are used in the same context, but they denote completely
different technical concepts; this leads to confusion.
Deep Web
Historically, among the three terms (“dark web,” “deep web,” and
“dark net”), the term “deep web” was the first to emerge. The
computer technician and entrepreneur Michael K. Bergman first used it in his
white paper “The deep web: Surfacing hidden value” (Bergman,
2001). Bergman likened web searches to the fishing industry and
stated that legacy search engines were nothing more than fishing
nets being dragged along the surface of the sea, even though there is
a lot of important information deep in the sea, where the nets do not
reach. Therefore, he stated that, moving forward, it was important to
reach deep areas as well. This was the advent of the deep web.
Bergman stated that the deep web was 400–550 times larger than
the normal web, and that the quality of the information found in the
deep web was 1,000–2,000 times greater than that of the normal web.
The problem is that, even now, these figures are quoted in the
context of the dark web. What Bergman
(2001) first raised as detailed examples of the “deep” web were the
National Oceanic and Atmospheric Administration (NOAA) and
United States Patent and Trademark Office (USPTO) data, JSTOR
and Elsevier fee-based academic literature search services, and the
eBay and Amazon electronic commerce sites; these are still referred
to as the “deep web” today. In short, Bergman referred to the
following as the deep web:
(a) Special databases that could only be accessed within an
organization
(b) Sites with paywalls wherein content can only be partly seen or
not seen at all without registration
(c) Sites in which content is dynamically generated each time they
are accessed
(d) Pages that cannot be accessed without using that site’s search
system
(e) Email and chat logs
That is to say, it refers to parts of the Web that normal search
engines, such as Google, cannot crawl or index.
Incidentally, according to Bergman, the term “invisible web” was
already in use in 1994, in the sense of a web that could not be
searched by a search engine. However, Bergman asserted that the
deep web was just deep, not “invisible,” i.e., it could be searched
with the right innovations. The start-up that he was managing at
that time was selling this very technology. Furthermore, Google
subsequently formed separate agreements with the owners of
databases and started the Google Books project with university
libraries, becoming involved in indexing the “deep” field; thus, in 20
years, the deep web, in the sense that Bergman used it, is considered
to have shrunk considerably.
In this manner, the “deep” in deep web originally just meant
somewhere deep and difficult to web-crawl, and carried no nuance of
good or evil. Despite this, “deep” is a powerful word and, as will be
described later, this has helped entrench the image of the dark web
as something thick and murky.
Dark Net
The term “dark net” became popular at virtually the same time as
the term “dark web.” There is a hypothesis that it has been in use
since the 1970s, and even today, in concrete terms, a range of IP
addresses not allocated to any host computer is referred to as a dark
net. However, the trigger for its use as the general term it is now
came in 2002 (published in 2003), when a paper by four engineers,
including Peter Biddle (then working at Microsoft), called the dark
net the future of content distribution (Biddle, England, Peinado, &
Willman, 2003).
Sweeping the world at that time were the P2P file-sharing services
Napster (started in 1999) and Gnutella (released in 2000). The File
Rogue service started at around the same time in Japan. There were
fears of copyright infringement, and in a paper written as part of
research on Digital Rights Management (DRM) and copy protection
(Biddle et al., 2003), the term “dark net” was clearly used with the
negative meaning of illegal activity.
Biddle et al. (2003) broadly defined dark net as “a collection of
networks and technologies used to share digital content” (Biddle et
al., 2003, p. 155). Based on this, it can be summarized as follows.
(1) This started with the manual carrying of physical media such
as CDs, DVDs, and, more recently, USB memory sticks, the
so-called “Sneakernet.”
(2) With the spread of the Internet, files such as music files began
to be stored on a single server, giving birth to the “central server”
model. However, if the central server were destroyed, that
would be the end of it.
(3) With Napster and Gnutella, files or parts of files were shared
across multiple servers, and by having these servers (peers)
communicate with one another, a peer-to-peer (P2P) model
appeared (meaning that even if one point of the network was
destroyed, the network as a whole would survive).
This P2P model was realized on top of the existing physical network,
using technology known as an overlay network, which utilizes
non-standard applications and protocols.
Additionally, Biddle et al. (2003) noted that because Napster had a
central server for searching, it could be controlled through that
server. Gnutella, by contrast, was completely distributed, but the
individual peers were not anonymous and their IP addresses could
be learned, so it was possible to track them and hold them legally
responsible. In this way, measures could be taken against the P2P
file sharing of the time, but the authors predicted that a new dark
net that overcame these weaknesses would emerge.
Biddle et al. (2003) considered that content, even if copy-protected,
could be widely diffused via the dark net, and that the dark net
would continue to evolve. They reached the conclusion that DRM
was fundamentally meaningless and that, to eradicate pirated
versions, official versions needed to be reasonably priced and
convenient for customers, competing on the same ground. This
pronouncement put the jobs of Biddle et al. at risk (Lee, 2017).
However, considering that attempts to counter piracy through
copyright enforcement have continually failed, and that piracy is
currently being put to the sword by the emergence of superior
platforms such as Netflix and Spotify, their pronouncement has
proven correct.
Yet Another Dark Net: F2F
Possibly due to the fact that dark net is an attractive name, around
the same time as Biddle et al. (2003), the term “dark net” began to be
used as a general term for a slightly different technology. This is
called Friend-to-Friend (F2F),1 and as this was implemented as
Darknet mode by Freenet, which is one of the main types of dark web
software (to be described later), this also became known as Darknet.
In this sense, Darknet, or F2F, is a type of P2P network, and the
user only directly connects with acquaintances (in many cases, they
have met in real life and built up trust via a non-online route). A
password or digital signature is used for authentication. The basic
concept behind F2F is that a P2P overlay network is constructed over
the existing relationships of trust between users. This is a method in
which the network is subdivided and, rather than connecting to an
unspecified large number of people, users connect to a much smaller
group of, say, five people whom they know well and trust, as shown
as an example in Figure 1. In this sense, the term Opennet, used in
Figure 2, is an antonym of Darknet.

1 The term F2F itself was invented in the year 2000 (Bricklin, 2000).

Figure 1. Topology of Darknet. A participant with
malicious intent (e.g., a red one) cannot easily
understand the entire network.
Source: the author.
Overlay network
Here, an overlay network is a general term for a network
constructed “over” a separate network. Typically, it refers to a
computer network constructed over the Internet. The problem in this
case is one of routing. On the Internet, based on TCP/IP, it is possible
to reach other servers by specifying an IP address. However, in the
case of an overlay network, this IP address is not necessarily known
or usable, so technology such as a Distributed Hash Table (DHT) is
utilized to route packets to an existing node using a logical address.
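As a rough illustration of routing by a logical address, the following Python sketch hashes node names and content keys onto a shared ring and looks up the node responsible for a key. The node names, ring size, and lookup rule are invented for this example; real DHTs (e.g., Kademlia-style routing tables) are considerably more elaborate.

```python
import hashlib

# Toy logical address space: a ring of 2**16 positions (an invented size).
RING_SIZE = 2 ** 16

def logical_id(name: str) -> int:
    """Map any node name or content key onto the logical address space."""
    return int.from_bytes(hashlib.sha256(name.encode()).digest()[:2], "big") % RING_SIZE

# Invented node names; in a real overlay these would be peers in the network.
nodes = ["alice", "bob", "carol", "dave"]
ring = sorted((logical_id(n), n) for n in nodes)

def responsible_node(key: str) -> str:
    """Return the node whose logical ID is the first at or after the key's ID."""
    kid = logical_id(key)
    for nid, name in ring:
        if nid >= kid:
            return name
    return ring[0][1]  # wrap around the ring

# Content is located purely by logical address, without knowing any IP address.
print(responsible_node("some-file.txt"))
```

The point of the sketch is that lookups depend only on the logical address space, which is why an overlay can route even when the underlying IP addresses are unknown or unusable.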
In F2F, each user operates as an overlay node. Contrary to an
Opennet P2P network, with F2F, it is not possible to connect to an
arbitrary node and exchange information. Instead, each user
manages “acquaintances” that they trust and establishes safe and
authenticated channels only with a fixed number of other nodes.

Figure 2. Topology of Opennet. The outside observer
can understand the entire network thanks to the
existence of a directory server.
Source: the author.
As pointed out by Biddle et al. (2003), in the Gnutella network, for
example, there was the problem that the attributes of network
participants, such as IP addresses, were known by all network
participants. The participants could be infiltrated by police or
information agencies, and if their attributes are known, there is the
danger of them being tracked and of legal action being taken against
them. Additionally, as connections concentrate on powerful nodes
with an abundance of network resources, as shown in Figure 3, if
such a node is run by an adversary, the adversary can grasp the
overall image of the network in a so-called “harvesting” attack. With
F2F, it is possible to create a P2P network that can withstand
harvesting.
Figure 3. Harvesting attack. If there is a powerful
server (possibly run by adversaries), all nodes
would try to connect to that server; thus, the entire
network would be revealed.
Source: the author.

Contrary to other dark web implementations such as Tor or I2P
(described later), F2F network users are unable to know who is
participating in the network other than their own “acquaintances,” so
the scale of the network can be expanded without losing the
anonymity of the network as a whole. In other words, dark means
that it is difficult to see and grasp an overall image of the network.
In a “simple” F2F network, there is no path that reaches beyond
“acquaintances,” so the average network distance is infinite. However,
indirect anonymous communication between users who do not know
or trust each other is supported: even between nodes for which trust
has not been established, as long as there is a common node that is
an acquaintance of both, the two can communicate anonymously by
going via that node (a small-world network).
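The relay through common acquaintances can be sketched as a shortest-path search over a toy friendship graph. The names and links below are invented for illustration; they simply show how two users who have never established trust directly can still reach each other via a chain of mutual acquaintances.

```python
from collections import deque

# An invented F2F topology: each user connects only to trusted acquaintances.
friends = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B", "D"},
    "D": {"C"},
}

def f2f_path(src: str, dst: str) -> list:
    """Breadth-first search for a chain of acquaintances from src to dst.
    Messages are relayed hop by hop, so A and D can communicate without
    ever learning of each other directly."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in friends[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return []  # no chain of acquaintances: unreachable in a "simple" F2F net

print(f2f_path("A", "D"))  # → ['A', 'B', 'C', 'D']
```

In a real F2F network no single node ever sees this whole graph; each hop only knows its own acquaintances, which is exactly what keeps the overall topology dark.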
It is interesting that both the dark net described herein and social
networks emerged once the small-world phenomenon (Watts &
Strogatz, 1998) became commonplace. It can be said that the dark
net is like a twin sibling of social networks, which are now at the
peak of their prosperity.
Dark Web
It is unclear when the dark web first appeared. The term dark web
began to be used around 2009, but conflation with the deep web was
already evident at that time (Becket, 2009). To understand the dark
web, which differs from the deep web and the dark net (both of which
are comparatively simple, technologically), an understanding of
computer network basics is required.
Internet basics
On the Internet, “access” is realized by the exchange of a large
number of messages between a client at the user’s location and a
server in a remote location. For example, when viewing a website
using a web browser, a request message saying “send the data for
this page here” is sent from the viewer’s computer to the web server
in accordance with fixed rules (known as “protocols”). The web
server receiving this message then sends back the requested data.
At this time, the message is minutely subdivided into pieces of data
of a fixed size, called “packets.” A “header,” containing control
information such as the IP addresses of the sender and the
destination, is attached to the start of each fragment, and these
packets are exchanged. The receiving side reassembles the packets
into the message and acts accordingly.
On the Internet, such packets are sent in a packet relay via many
server machines to the destination server. This type of packet flow is
called “traffic.” Looking at the header and deciding where and how to
send the packet is known as “routing,” and the general name given to
devices and software that make these decisions and transmit these
packets is “routers.”
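As a toy illustration of the header-and-routing mechanism described above, the following Python sketch packs a sender and destination IP address in front of a payload and lets a “router” decide the next hop by looking only at the header. The two-field layout and the routing table are simplified inventions, not the real IPv4 header format.

```python
import struct
from ipaddress import IPv4Address

# Hypothetical, minimal packet layout for illustration only: a real IPv4
# header has many more fields (version, TTL, checksum, and so on).
HEADER_FMT = "!4s4s"  # source IP, destination IP (4 bytes each), big-endian

def make_packet(src: str, dst: str, payload: bytes) -> bytes:
    """Attach a toy header (sender and destination IP) to a payload fragment."""
    header = struct.pack(HEADER_FMT, IPv4Address(src).packed, IPv4Address(dst).packed)
    return header + payload

def route(packet: bytes, routing_table: dict) -> str:
    """A 'router' inspects only the header to decide where to forward the packet."""
    _, dst = struct.unpack(HEADER_FMT, packet[:8])
    return routing_table[str(IPv4Address(dst))]

packet = make_packet("192.0.2.10", "198.51.100.7", b"GET /index.html")
table = {"198.51.100.7": "next-hop-router-B"}
print(route(packet, table))  # the router forwards based on the destination IP
```

The key point, relevant to the traffic analysis discussion later, is that routers read the header in the clear: both the sender and destination addresses are visible to every machine along the relay.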
Assumed anonymity versus real anonymity
The above text describes the mechanism by which data on the
Internet (described as the Clearnet, in contrast to the dark net) is
exchanged; however, when the Internet is accessed, a “record” always
remains. For example, if you view a website from a PC, the server
hosting this website will have a record (access log) showing at what
hour and what minute this page was accessed and from where. In
many cases, the only thing recorded is the identifier number, known
as the IP address. IP addresses are allocated to individual
communications devices, and as the IP address assigned may change
with every connection, it may be difficult to identify the location and
person involved using the IP address alone. However, as you will
know the Internet service provider (ISP) used by the device for this
connection, you can then get information on the contracted party
from the ISP. So, you can trace each step back one by one.
The log is often stored on the server side for a fixed period of time
(in many cases, from three months to one year or more). Therefore, if
an investigatory body obtains the log from the body managing the
server, the ISP, etc., it can start to track down the sender. Of course,
there are issues with freedom of expression and
secrecy of communications, so even investigatory authorities are
unable to acquire sender information in an unlimited way. However,
in cases involving requests for disclosure of sender information based
on the Law on Restrictions on the Liability for Damages of Specified
Telecommunications Service Providers, identification of information
senders by the police or the prosecution after acquiring a warrant
from the courts is an everyday occurrence.
Anonymization by Tor: Onion routing
Therefore, there are systems, such as Tor, that make it difficult for
information senders to be identified. Tor is software designed to
enable Internet users to communicate while maintaining anonymity;
its name comes from the initial letters of “The Onion Router.” Tor is
open-source software that runs on various platforms2 and can be
freely obtained by anyone (and is often free of charge). In recent
years, the development of Tor has proceeded on a volunteer basis.
Originally, however, this technology was developed at a US Navy
laboratory at the start of the 1990s.
Tor adds a tweak to the basic mechanism of data exchange over the
normal Internet. Tor constructs a virtual network over the Internet
and functions as a special router over this network. This is a unique
form of routing and, as the name suggests, it uses a type of
technology known as “onion routing.”
For example, in the same way that mail will not be delivered if the
destination on a postal item is written in code, the sender IP address
in the packet header is described as unencrypted data (plain text).
Thus, the access log can be captured on the server. Additionally, as
the destination IP address is also described, it is possible to lie in
wait on a server along the course of the traffic, i.e., the packet relay,
and eavesdrop on or edit the packet headers as they pass,
statistically analyzing the type and frequency of access and exposing
the sender’s identity. For example, if somebody in the outback of
Afghanistan frequently accesses a US army-related site, even if the
content of the communications is unknown, there is a high
possibility that the person accessing the US army server is an
intelligence agent. The general name for this type of method is traffic
analysis.

2 For the development process of open source software, refer to Hatta (2018,
2020a), etc.
Onion routing was invented as a means of countering such traffic
analysis.3 If you want to access a particular server anonymously,
you install Tor on your own computer, change the proxy settings of
your web browser, and set all packets leaving your computer to pass
through Tor. If you do so, Tor will provide you with anonymity based
on the following steps.
Step 1: Choosing relay nodes randomly
Tor, which picks up the packets leaving your computer, obtains
from directory servers on the Internet a list of the IP addresses of
servers running the Tor onion router (called Tor nodes or relays), and
selects at least three of these nodes at random. If the selected nodes
are Tor node A, Tor node B, and Tor node C, the following route is
then configured on your own computer:

Your computer → Tor node A → Tor node B → Tor node C →
destination server
3 I2P, known as a non-Tor implementation of the dark web, uses garlic
routing, an improved version of onion routing. Freenet uses a different
algorithm, but it is basically similar to onion routing.
The packet is sent along this route. (Tor node C is the terminal
point of the virtual network created by Tor; as it is the connecting
point that reconnects to the Internet, it is specifically referred to as
an exit node.)
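Step 1 can be sketched as follows. The relay names and addresses are invented, and a real Tor client validates a cryptographically signed directory consensus rather than consulting a plain list; this is only a minimal illustration of random relay selection.

```python
import random

# An invented "directory" of relays; a real Tor client downloads and verifies
# signed consensus documents from directory authorities.
directory = {
    "relay1": "10.0.0.1", "relay2": "10.0.0.2", "relay3": "10.0.0.3",
    "relay4": "10.0.0.4", "relay5": "10.0.0.5",
}

def pick_circuit(directory: dict, length: int = 3) -> list:
    """Choose `length` distinct relays at random; the last one acts as the exit node."""
    return random.sample(sorted(directory), length)

circuit = pick_circuit(directory)
print(" -> ".join(["your computer"] + circuit + ["destination server"]))
```

Because the three relays are picked at random for each circuit, no fixed server sits between the sender and the destination, which is what makes the later steps effective against traffic analysis.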
Step 2: Peeling the onion at each node
Next, Tor attaches Tor headers to the packets to be sent. For each
node, a header containing the IP address of the next Tor node to
pass the packet to is attached:
A) Tor node A header: Tor node B IP address
B) Tor node B header: Tor node C IP address
C) Tor node C header: Destination Server IP address
In practice, as shown in Figure 4, the headers are wound on from
the inside, in reverse order. First, in C), the Tor node C header, the IP
address of the destination server to which you want to send the
packet is written, and the whole thing is encrypted with a key that
can only be decoded by Tor node C. On top of that, B), the Tor node B
header, is attached, and the whole thing is encrypted with a key that
can only be decoded by Tor node B. The same process is carried out
for A), the Tor node A header. In other words, the header closest to
the final destination is placed on the inside, and each layer is locked
with a key so that, at each stage, it can be decoded only by a specific
Tor node. The packet is then passed to Tor node A. Each Tor node
decodes the header of the packet it receives and transfers the packet,
as in a relay, to the next node written there.

Figure 4. Onion
Source: the author.
In this way, as if peeling the skin off an onion one layer at a time,
each node opens and decodes the header intended for itself and
passes the packet to the next node. This is the reason it is called
“onion” routing. The peeled-off skin, i.e., the header a node has
decoded for itself, is discarded by that node.
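The wrapping and peeling described above can be sketched with a toy cipher. Here XOR with a per-node key stands in for the real cryptography Tor uses, and the node names, keys, and 16-byte header layout are invented for illustration; this is a sketch of the layering idea, not of Tor’s actual protocol.

```python
import os

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for real per-node encryption (Tor uses proper ciphers)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Invented node names and keys; in Tor these are negotiated per circuit.
keys = {"A": os.urandom(16), "B": os.urandom(16), "C": os.urandom(16)}
route = ["A", "B", "C"]
next_hop = {"A": "B", "B": "C", "C": "destination"}

def wrap(payload: bytes) -> bytes:
    """Wind the headers on from the inside out: C's layer first, A's last."""
    packet = payload
    for node in reversed(route):  # C, then B, then A
        header = next_hop[node].encode().ljust(16, b" ")
        packet = xor_crypt(header + packet, keys[node])
    return packet

def peel(packet: bytes) -> bytes:
    """Each node decodes only its own layer, reads the next hop, and discards it."""
    for node in route:  # A, then B, then C
        decoded = xor_crypt(packet, keys[node])
        hop, packet = decoded[:16].decode().strip(), decoded[16:]
        print(f"node {node} forwards to {hop}")
    return packet  # what the destination server finally receives

assert peel(wrap(b"GET /index.html")) == b"GET /index.html"
```

Note that each node’s decryption reveals only the next hop: node A learns it must forward to B but cannot read B’s or C’s layers, which mirrors the property described in the text.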
The header for the previous node is discarded by that node. So, for
example, Tor node C knows that a packet has come from Tor node B,
but it does not know that the node before Tor node B was Tor node A.
Likewise, Tor node A knows that it has received a packet from the
departure point and that it needs to pass it to Tor node B, but it does
not know the node after that, because the contents of the packet are
encrypted with Tor node B’s key. Most importantly, from the
destination server’s perspective, the packet does not appear to come
from the departure point but from Tor node C. Therefore, even if an
access log is kept on the destination server, what is recorded in the
log is the IP address of the exit node, Tor node C, and not the
departure point’s IP address. Additionally, Tor node C is simply a
Tor node selected at random, and there is effectively nothing linking
the departure point and Tor node C.
Conclusion
In this paper, we have provided a simple explanation of the
deep web, dark web, and (two) dark nets, including their technical
aspects. As shown in Table 1, these concepts are easy to understand
in terms of what each one is the opposite of. Discussions of the dark
web and related topics tend to be hampered by the images the words
evoke. Future research will require a precise understanding based
on their historical and technical nature.
Acknowledgments
This work was supported by JSPS Grant-in-Aid for Publication of
Scientific Research Results, Grant Number JP16HP2004.
Table 1. Deep web, dark web, and (two) dark nets

Type             The opposite                      The strength of anonymity
Deep web         WWW                               None
Dark net         Legitimate content distribution   Medium
Dark net (F2F)   Opennet                           Strong
Dark web         Clearnet                          Strong

Source: the author
References
Becket, A. (2009, November 26). The dark side of the internet. The
Guardian. Retrieved from http://www.theguardian.com/technology/
2009/nov/26/dark-side-internet-freenet
Bergman, M. K. (2001). White paper: The deep web: Surfacing hidden
value. Journal of Electronic Publishing, 7(1). doi:
10.3998/3336451.0007.104
Biddle, P., England, P., Peinado, M., & Willman, B. (2003). The darknet
and the future of content protection. In J. Feigenbaum (Ed.), Digital
rights management (pp. 155–176). Berlin, Heidelberg, Germany:
Springer. doi: 10.1007/978-3-540-44993-5_10
Bolton, N. (2017). American kingpin: The epic hunt for the criminal
mastermind behind the Silk Road. New York, NY: Portfolio/Penguin.
Bricklin, D. (2000, August 11). Friend-to-Friend Networks [Web log
message]. Retrieved from http://www.bricklin.com/f2f.htm
Hatta, M. (2018). The role of mailing lists for policy discussions in open
source development. Annals of Business Administrative Science, 17,
31–43. doi: 10.7880/abas.0170904a
Hatta, M. (2020a). The right to repair, the right to tinker, and the right to
innovate. Annals of Business Administrative Science, 19,
143–157. doi:10.7880/abas.0200604a
Hatta, M. (2020b, August). Deep web, dark web, dark net: A taxonomy of
“hidden” Internet. Paper presented at ABAS Conference 2020 Summer,
University of Tokyo, Japan.
Lee, T. B. (2017, November 24). How four Microsoft engineers proved that
the “darknet” would defeat DRM [Web log message]. Retrieved from
https://arstechnica.com/tech-policy/2017/11/how-four-microsoft-e
ngineers-proved-copy-protection-would-fail/
Todorof, M. (2019). FinTech on the dark web: The rise of cryptos. ERA
Forum, 20(1), 1–20. doi: 10.1007/s12027-019-00556-y
Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of ‘small-world’
networks. Nature, 393(6684), 440–442. doi: 10.1038/30918
... The inherent anonymity of cryptocurrencies often entangles them in illicit activities, a focus of this literature review. The darknet, a subset of the deep web, is substantially larger than the surface web (Hatta, 2020;Raman et al., 2023;Rudesill et al., 2015), where the anonymous nature of cryptocurrency is prevalent in various illicit activities, categorizing organized crime into drug trafficking, terrorism, money laundering, and the distribution of child sexual abuse material (CSAM). The review aims to investigate the role of Bitcoin and other cryptocurrencies in criminal activities, highlighting the need for a unified regulatory framework. ...
... Throughout the reviewed literature, a consensus has emerged regarding the pivotal role of the darknet as a hub for illicit activities, largely facilitated by transactions conducted using cryptocurrencies, which present significant challenges for tracking (Cong et al., 2022;Reynolds, Irwin, 2017). The anonymity inherent in the dark web frequently links it to illegal activities, encompassing a range of illicit actions such as drug trafficking, arms sales, hacking services, counterfeiting, distribution of CSAM, and financial fraud (Chertoff, Simon, 2015;Hatta, 2020;Raman et al., 2023). It is essential, however, to recognize that not all dark web activities are nefarious; it also provides refuge for whistleblowers, activists, and individuals seeking privacy, particularly in the face of authoritarian regimes Kfir, 2020;Patsakis et al., 2023). ...
... It is essential, however, to recognize that not all dark web activities are nefarious; it also provides refuge for whistleblowers, activists, and individuals seeking privacy, particularly in the face of authoritarian regimes Kfir, 2020;Patsakis et al., 2023). The combination of relatively easy access and the use of cryptocurrencies in transactions underscores the absence of a universal regulatory framework, effectively perpetuating bank secrecy within the dark web (Chertoff, Simon, 2015;Hatta, 2020;Piazza, 2017;Raman et al., 2023). In 2011, Ross William Ulbricht launched Silk Road, a website accessible via the darknet, designed as a global online marketplace catering to illicit transactions ( Figure 3). ...
Article
Full-text available
Cryptocurrency has emerged as a lucrative yet volatile landscape for cybercriminal activity, presenting novel challenges for law enforcement and policymakers alike. This review seeks to explore the diverse array of cybercrimes occurring within the cryptocurrency domain, examining their types, motives, techniques, and the regulatory responses shaping this complex ecosystem. Utilizing a scoping literature search methodology, this study analyzes 228 pertinent sources drawn from a pool of over 4,000 reviewed publications. The findings elucidate the intricate interplay between cryptocurrencies and illicit activities, revealing the multifaceted nature of cybercrimes within this realm. From the exploitation of the dark web for illicit transactions to the pervasive threat of crypto ransomware targeting entities globally, the review underscores the diverse methods and motivations driving such nefarious endeavors. By shedding light on the evolving tactics employed by cybercriminals and exploring future directions for technological and regulatory measures adopted by governments, this paper offers valuable insights to navigate this dynamic landscape effectively.
... Zasoby Dark Web nie są indeksowane przez powszechnie wykorzystywane wyszukiwarki (np. Google i Bing), tak więc nie pojawiają się w wynikach wyszukiwania (Finklea, 2022;Hatta, 2020;Okyere-Agyei, 2022). ...
... Oprócz tego funkcjonuje węższy znaczeniowo termin Dark Net obejmujący anonimowe węzły typu komputery, serwery, rutery połączone w techniczną sieć umożliwiającą funkcjonowanie zasobów, narzędzi i usług Dark Web (Finklea, 2022;Hatta, 2020). Zasoby internetowe, które znajdują się w Dark Web, dostępne są jedynie za pośrednictwem specjalnego oprogramowania, np. ...
... TOR umożliwia ukrycie tożsamości (lokalizacji) zarówno konsumenta treści (tj. zwykłego użytkownika), jak i jej dostarczyciela (Hatta, 2020, Okyere-Agyei, 2022. ...
Article
Full-text available
Cel: Opracowanie modelu istniejącego w Dark Web, w pierwszej połowie roku 2023, systemu dzielenia się informacją i wiedzą w kształcie, w jakim jest on budowany, lecz także postrzegany przez jego uczestników – osoby dzielące się informacją i wiedzą. Metoda: Badania miały charakter empiryczny i polegały na pozyskiwaniu danych jakościowych bezpośrednio z przedmiotu badań (sieci TOR). Przeprowadzono jakościową obróbkę danych (kwalifikację), w efekcie czego wyróżniono zasoby, w których użytkownicy dzielą się informacjami i wiedzą, oraz dokonano podziału tychże zasobów na w miarę jednorodne grupy składające się na model systemu dzielenia się informacją i wiedzą. Następnie wyróżniono te cechy systemu, które okazały się specyficzne dla sieci TOR. Rezultaty: W sieci TOR dzielenie się informacją i wiedzą zachodzi w systemie kojarzącym ze sobą potrzeby i motywacje twórców (głownie merkantylne) oraz użytkowników zasobów z możliwościami stworzonymi przez technologie, w tym anonimowość, obchodzenie ograniczeń cenzuralnych oraz metody płacenia z wykorzystaniem kryptowalut. W sposób bardzo wyraźny na kształt tegoż systemu wpływa specyficzna kultura wolności bazująca na wspomnianych możliwościach technologicznych; specyficzna, często przeradzająca się bowiem w anarchizm i łamanie prawa. Istotną cechą opisywanego modelu systemu jest zmienność, efemeryczność i nietrwałość znacznej części zasobów oraz niska skuteczność narzędzi służących do wyszukiwania konkretnych treści.
... A friend-to-friend (F2F) network is a type of peer-to-peer (P2P) network. Its main concept is that the P2P overlay network is built on trust relationships (see Fig. 2) [19,20]. Instead of an indefinite number of people, it is a method that creates subnets where smaller groups of people they know well are connected. ...
Article
Full-text available
The internet has numerous profoundly infiltrated facets of human endeavor, evolving into a pivotal substitute for traditional ways of information acquisition and dissemination. This phenomenon enables the swift propagation of information, transcending geographical and temporal barriers through advanced information and communication technologies. As global communications gravitate towards cyberspace, social interactions increasingly occur within this digital realm. Individuals united by common ideologies or interests shift from physical reality to virtual communities, thereby shaping distinct virtual societies. Despite the internet’s extensive reach and integration into diverse domains, access to all online resources is not universal. The vast array of internet resources, serving varied purposes, has resulted in a layered and segmented cyberspace, comprising various closed or private virtual networks accessible only by specific groups. This stratification, coupled with the utilization of different network layers for illicit activities, has precipitated several global challenges encompassing economic, political, and cybersecurity dimensions. Consequently, entities such as nation-states, corporations, groups, and individuals have erected virtual barriers and borders within the internet, establishing different usage policies. This paper highlights the cybersecurity implications of implementing heterogeneous usage policies in the internet environment. It also identifies and analyzes the interconnection points among different internet layers, offering insights into their structural and functional dynamics.
... On the Dark Web, hidden services like Silk Road facilitate the purchase of drugs, weapons, child pornography or even assassinations (Kaur, Randhawa 2020). Much like the abyssal and hadal zones of the ocean, the lowest reaches of the Web became associated with the darkness and murkiness of inhospitable depths (Hatta 2020). ...
Article
Full-text available
Counter commonplace associations with superficial mediation and networked flatness, the digital seems to have its own peculiar depths, which range from the infrastructural (deep sea cables, deep packet inspection, crawl depth) to the metaphorical (Deep Web, deep learning, deepfakes). This article reviews recent discussions of digital depth and argues that this concept is central to understanding multiple aspects of digital media ranging from folk theorizations to technical expertise. What is digital depth? What is deep about digital media? How does this depth interface with volumes and scales beyond the digital? Through this effort, depth emerges as an underlying feature of deeply mediatized societies.
... The origin of the dark web dates back to 2000. It is linked to the creation of the Freenet project, designed initially for anonymously sharing files online, and to the use of The Onion Router (TOR) Project, which was invented in the 1990s by the US Naval Research Laboratory to secure encrypted communications (Hatta 2020). The darknet provides anonymous access to drug marketplaces as part of organized crime groups' larger set of activities, which includes the online trafficking of a variety of illicit goods (Lamy et al. 2021; Lamy, Daniulaityte, and Dudley 2023; Munksgaard and Tzanetakis 2022; Spagnoletti, Ceci, and Bygstad 2022; Tzanetakis 2018). ...
... In addition, it leads to the interpretation of results in inaccurate contexts. Inconsistent use of these terms is also mentioned in other works [21,26,53]. ...
Article
Full-text available
The darknet terminology is not used consistently among scientific research papers. This can lead to difficulties with regard to the applicability and significance of results, and it also facilitates their misinterpretation. As a consequence, comparisons between different works are complicated. In this paper, we review previous darknet research papers in order to map the distribution of inconsistent darknet terminology. Overall, inconsistencies in darknet terminology were observed in 63 out of 97 papers. The most common inconsistent statement was that the dark web is a part of the deep web. 19 papers equate the terms darknet and dark web. Others do not distinguish between dark web and deep web, or between deep web and darknet.
Chapter
The Dark Web presents a challenging and complex environment where cyber criminals conduct illicit activities with high degrees of anonymity and privacy. This chapter describes a honeypot-based data collection approach for Dark Web browsing that incorporates honeypots on three isolated virtual machines, including production honeypots, an onion-website-based research honeypot (Honey Onion) offering illegal services and a log server that collects and securely stores the honeypot logs. Experiments conducted over 14 days collected more than 250 requests on the Honey Onion service and in excess of 28,000 chat records from the Dark Web forum. The log server also monitored Honey Onion traffic, providing details such as packet types, timestamps, network data, and HTTP requests. The data collection results provide valuable insights into Dark Web activities, including malicious, benign and uncategorized activities. The data analysis identified common user categories such as malicious actors, researchers and security professionals, and uncategorized actors. The experimental results demonstrate that honeypot-based data collection can advance Dark Web investigations as well as enable the development of effective cyber security strategies and efforts to combat cyber crime in the Dark Web.
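The chapter's pipeline (honeypots producing request logs that a log server collects and triages) might be sketched roughly as below. The single-line log format, the `triage` helper, and the path heuristics are assumptions made for illustration, not the chapter's actual implementation.

```python
import re
from collections import Counter

# Hypothetical single-line log format a collector might emit:
#   2024-01-02T03:04:05Z 10.0.0.7 GET /admin/login.php
LOG_RE = re.compile(r"^(\S+) (\S+) (\S+) (\S+)$")

# Crude path heuristics for a first-pass triage of visitors.
SCANNER_PATHS = ("/admin", "/wp-login", "/.env", "/phpmyadmin")

def categorize(path):
    """Map a requested path to a coarse visitor category."""
    if any(path.startswith(p) for p in SCANNER_PATHS):
        return "likely-malicious"
    if path in ("/robots.txt", "/favicon.ico"):
        return "crawler/benign"
    return "uncategorized"

def triage(lines):
    """Count honeypot requests per visitor category."""
    counts = Counter()
    for line in lines:
        match = LOG_RE.match(line.strip())
        if not match:
            continue  # skip malformed lines
        _ts, _ip, _method, path = match.groups()
        counts[categorize(path)] += 1
    return counts

sample = [
    "2024-01-02T03:04:05Z 10.0.0.7 GET /admin/login.php",
    "2024-01-02T03:04:06Z 10.0.0.8 GET /robots.txt",
    "2024-01-02T03:04:07Z 10.0.0.9 GET /forum/thread/42",
]
print(triage(sample))
```

In practice such heuristics would only seed the analysis; the chapter's categories (malicious actors, researchers and security professionals, uncategorized actors) require manual review of the collected traffic.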
Article
Full-text available
Although the dark net is technically part of the Internet and the World Wide Web, its access software, configuration, and permissions differ from those of ordinary sites. The dark web is not indexed by public web search engines such as Google and Bing. Typing in any keyword returns millions of pages, which hints at how hard it is to quantify the vastness of the Internet: how many sites would be representative of just one hundred keywords, and how many of the countless different keyword combinations are equally valid search engine queries? When we browse the Internet with a web browser, our eyes see only what is called the Surface Web, the part of the Internet that can be visited simply through hypertext information displayed as web pages. Without the help of search engines, however, the Internet offers manifold resources that the Surface Web cannot present. The deep web, where Tor is used, is a place that is hard for hackers, spies, and even government agencies to monitor; even when they watch a site, they cannot tell how its operators are processing the data files on it. This paper describes how businesspeople, scientists, students, and indeed anyone can get onto the DarkNet safely and 100% anonymously with the HeLL9 Project.
Article
Full-text available
The article contains a review of research on the cultural aspects of the Dark Web. A semi-systematic literature review was used for this purpose, as such an approach enables the reviewing of a multi- or interdisciplinary research area. Scopus, an international and multidisciplinary database indexing academic publications, was the source of bibliographic data. It turned out that there is a relatively short history of research on the cultural aspects of the Dark Web. The first publications on the topic only appeared in the Scopus database in 2005, and it continues to be a niche area of research, with the annual number of indexed publications around a few dozen. The author distinguished the following groups of issues that researchers deal with: (1) motivations among Dark Web users for making use of this part of the internet; (2) the impact of the Dark Web’s existence and the anonymity it provides on human behaviour; (3) the specificity of trading activity on “dark markets”; (4) steps taken by illegal traders to gain a reputation and customer trust; (5) communication practices on discussion forums; (6) the freedom to disseminate information; and (7) the methodology for studying communities on “dark networks”.
Article
Full-text available
Creating new products by incorporating new and original ideas, derived from learning the internal mechanisms and structures of machines and other objects at hand through the process of repairing or tinkering with them, is fundamental to innovation, which is a staple of human existence. Recently, however, increasing product complexity, technical constraints, and regulations have gradually narrowed the scope of the user's ability to tinker. This has given momentum to the movement to explicitly reclaim the Right to Repair and the Right to Tinker. This paper thus outlines the process that led to recognition of the importance of these rights.
Article
Full-text available
This document analyzes the evolution of policy-related discussions in open source software by using several projects' policy mailing list archives and focusing on the Debian Project. More specifically, it utilizes approximately 70,000 pieces of mail exchanged since the end of the 1990s, investigating the rise and fall in activity and what sorts of topics were discussed. The results of this paper's inquiry suggest that mail volumes peaked in 2005, that policy discussions were led and mainly contributed to by a relatively small subset of persons who posted only about policy, and that overall mailing list traffic (not only related to policy) declined after 2006, possibly due to a transfer of discussion to wikis, chats, and other such platforms.
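A first step in the kind of archive analysis described above, tallying list traffic per year, might look like the following minimal sketch using only the Python standard library. The sample messages are invented; a real study would read them from mbox archives rather than inline strings.

```python
from email import message_from_string
from email.utils import parsedate_to_datetime
from collections import Counter

def volume_by_year(raw_messages):
    """Tally mailing list traffic per year from raw RFC 2822 messages."""
    counts = Counter()
    for raw in raw_messages:
        msg = message_from_string(raw)
        date = msg.get("Date")
        if not date:
            continue  # undated messages cannot be binned
        counts[parsedate_to_datetime(date).year] += 1
    return counts

# Invented sample messages standing in for an mbox archive.
msgs = [
    "Date: Mon, 14 Mar 2005 10:00:00 +0000\nSubject: [policy] Proposal\n\nbody",
    "Date: Tue, 15 Mar 2005 11:00:00 +0000\nSubject: Re: [policy] Proposal\n\nbody",
    "Date: Wed, 10 Jan 2007 09:00:00 +0000\nSubject: Question\n\nbody",
]
print(volume_by_year(msgs))
```

Filtering by a `Subject:` tag such as `[policy]` before counting would separate policy traffic from overall traffic, which is the comparison the paper draws.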
Article
Full-text available
Searching on the Internet today can be compared to dragging a net across the surface of the ocean. While a great deal may be caught in the net, there is still a wealth of information that is deep, and therefore, missed. The reason is simple. Most of the Web's information is buried far down on dynamically generated sites, and standard search engines never find it.
Article
Financial technology (FinTech) based trading activities operate above and underground. Due to their nature, FinTech products can be used within the dedicated regulatory framework or in the deep layers of the Internet. This feature of FinTech alone could potentially undermine the existing financial services regulation (FSR), the aim of which is to ensure financial stability through investor protection measures and measures protecting the integrity of the financial market. To this end, much of the illegal activity on the Dark Web has an interconnected and cross-sectorial impact on the FSR's aims; for example, the on-going problem with the theft of personal data has implications for investor protection, often via market manipulation channels. Such considerations make it relevant to analyse the ways FinTech used on the Dark Web could affect traditional and FinTech activities in the financial sector. How strong this impact could be would probably depend on how ubiquitous FinTech innovations are on the Deep or Dark Web. This article will argue that by definition their market presence is not only spread across the entire Internet, but also very high in volume. This is all the more the case because FinTech's brainchild, the cryptocurrencies, have become the go-to currency for many activities outside the law.
Conference Paper
We investigate the darknet - a collection of networks and technologies used to share digital content. The darknet is not a separate physical network but an application and protocol layer riding on existing networks. Examples of darknets are peer to peer file sharing, CD and DVD copying, and key or password sharing on email and newsgroups. The last few years have seen vast increases in the darknet's aggregate bandwidth, reliability, usability, size of shared library, and availability of search engines. In this paper we categorize and analyze existing and future darknets, from both the technical and legal perspectives. We speculate that there will continue to be setbacks to the effectiveness of the darknet as a distribution mechanism, but ultimately the darknet genie will not be put back into the bottle. In view of this hypothesis, we examine the relevance of content protection and content distribution architectures.
Article
Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
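The rewiring experiment described above can be sketched in dependency-free Python (the network size n=100, neighbour count k=4, and rewiring probability p=0.1 are illustrative choices, not values from the paper): starting from a regular ring lattice, randomly rewiring a small fraction of edges sharply shortens the characteristic path length.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours (k even)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, seed=0):
    """Watts-Strogatz step: move each edge's far end with probability p."""
    rng = random.Random(seed)
    n = len(adj)
    for i in list(adj):
        for j in sorted(adj[i]):          # snapshot, so edits are safe
            if j > i and rng.random() < p:
                choices = [v for v in range(n) if v != i and v not in adj[i]]
                if choices:
                    new = rng.choice(choices)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable node pairs (BFS per node)."""
    total, pairs = 0, 0
    for start in adj:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

regular = ring_lattice(100, 4)
small_world = rewire(ring_lattice(100, 4), p=0.1)
print(avg_path_length(regular), avg_path_length(small_world))
```

Even with only about 10% of edges rewired, the handful of shortcuts collapses the average distance, while most local clustering survives: exactly the small-world regime the paper identifies.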
Bilton, N. (2017). American kingpin: The epic hunt for the criminal mastermind behind the Silk Road. New York, NY: Portfolio/Penguin.
Lee, T. B. (2012). How four Microsoft engineers proved that the “darknet” would defeat DRM. Ars Technica.
Beckett, A. (2009, November 26). The dark side of the internet. The Guardian. Retrieved from http://www.theguardian.com/technology/2009/nov/26/dark-side-internet-freenet