
Big Data: New Opportunities and New Challenges [Guest editors' introduction]



We can live with many of the uncertainties of big data for now, with the hope that its benefits will outweigh its harms, but we shouldn't blind ourselves to the possible irreversibility of changes, whether good or bad, to society. The first Web extra is an audio recording in which Katina Michael of the University of Wollongong introduces the June 2013 Computer magazine special issue on "Big Data: New Opportunities and New Challenges," its guest editors, authors, and articles, and the IEEE Society on Social Implications of Technology (SSIT). The second Web extra is an audio recording in which she discusses SSIT, IEEE Technology and Society (T&S) magazine, and the International Symposium on Technology and Society (ISTAS). The third Web extra is an audio recording in which she reflects on living with the uncertainties of big data in the hope that its benefits will outweigh its harms, without blinding ourselves to the possible irreversibility of changes, whether good or bad, to society.
22 computer Published by the IEEE Computer Society 0018-9162/13/$31.00 © 2013 IEEE
Katina Michael, University of Wollongong
Keith W. Miller, University of Missouri–St. Louis
We can live with many of the uncertainties
of big data for now, with the hope that its
benefits will outweigh its harms, but we
shouldn’t blind ourselves to the possible
irreversibility of changes—whether good
or bad—to society.
It’s no secret that both private enterprise and govern-
ment seek greater insights into people’s behaviors and
sentiments. Organizations use various analytical tech-
niques—from crowdsourcing to genetic algorithms to
neural networks to sentiment analysis—to study both
structured and unstructured forms of data that can aid
product and process discovery, productivity, and policy-
making. This data is collected from numerous sources
including sensor networks, government data holdings,
company market lead databases, and public profiles on
social networking sites.
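As a toy illustration of one such analytical technique, the sketch below scores text sentiment with a small hand-built word list. The lexicon and scoring rule are invented for illustration; production sentiment analysis relies on far richer lexicons or trained models.

```python
# Minimal lexicon-based sentiment scorer, a toy stand-in for the kind of
# sentiment analysis applied to unstructured text such as social media posts.
# These word lists are illustrative assumptions, not a standard lexicon.

POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "angry"}

def sentiment(text: str) -> str:
    """Classify text as 'positive', 'negative', or 'neutral' by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Even this crude counting scheme hints at how organizations extract signal from free-form posts at scale.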
Although data mining in one form or another has
occurred since people started to maintain records in the
modern era, so-called big data brings together not only
large amounts of data but also various data types that
previously never would have been considered together.
These data streams require ever-increasing processing
speeds, yet must be stored economically and fed back into
business-process life cycles in a timely manner.
Since the Internet’s introduction, we’ve been steadily
moving from text-based communications to richer data
that include images, videos, and interactive maps as well
as associated metadata such as geolocation information
and time and date stamps. Twenty years ago, ISDN lines
couldn’t handle much more than basic graphics, but
today’s high-speed communication networks enable the
transmission of storage-intensive data types.
For instance, smartphone users can take high-quality
photographs and videos and upload them directly to social
networking sites via Wi-Fi and 3G or 4G cellular networks.
We’ve also been steadily increasing the amount of data
captured in bidirectional interactions, both people-to-
machine and machine-to-machine, by using telematics
and telemetry devices in systems of systems. Of even
greater importance are e-health networks that allow for
data merging and sharing of high-resolution images in
the form of patient x-rays, CT scans, and MRIs between healthcare providers.

Advances in data storage and mining technologies
make it possible to preserve increasing amounts of data
generated directly or indirectly by users and analyze it
to yield valuable new insights. For example, companies
can study consumer purchasing trends to better target
marketing. In addition, near-real-time data from mobile
phones could provide detailed characteristics about
shoppers that help reveal their complex decision-making
processes as they walk through malls.1
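A crude sketch of such trend analysis might aggregate a transaction log by hour and category. The log below is entirely hypothetical, invented here for illustration:

```python
from collections import Counter

# Hypothetical transaction log: (shopper_id, store_category, hour_of_day).
purchases = [
    ("u1", "electronics", 18), ("u2", "groceries", 9),
    ("u1", "groceries", 19), ("u3", "electronics", 18),
    ("u2", "clothing", 13), ("u3", "electronics", 20),
]

def top_category_by_hour(log):
    """Return {hour: most frequent category}, a crude purchasing 'trend'."""
    by_hour = {}
    for _, category, hour in log:
        by_hour.setdefault(hour, Counter())[category] += 1
    return {h: c.most_common(1)[0][0] for h, c in by_hour.items()}
```

Real retail analytics joins many such streams, but the principle is the same: simple aggregations over large logs already reveal behavioral patterns.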
Big data can expose people’s hidden behavioral patterns
and even shed light on their intentions.2 More precisely,
it can bridge the gap between what people want to do
and what they actually do as well as how they interact
with others and their environment.3 This information is
useful to government agencies as well as private compa-
nies to support decision making in areas ranging from
law enforcement to social services to homeland security.
It’s particularly of interest to applied areas of situational
awareness and the anticipatory approaches required for
near-real-time discovery.
In the scientific domain, secondary uses of patient
data could lead to the discovery of cures for a wide range
of devastating diseases and the prevention of others.4
By revealing the genetic origins of illnesses, such as
mutations related to cancer, the Human Genome Project,
completed in 2003, stands as a testament to the promise
of big data. Consequently, researchers are now embarking
on two major efforts, the Human Brain Project in the EU
and the BRAIN Initiative in the US, in
a quest to construct a supercomputer simulation of the
brain’s inner workings, in addition to mapping the activity
of about 100 billion neurons in the hope of unlocking
answers to Alzheimer’s and Parkinson’s. Other types of
big data can be studied to help solve scientific problems
in areas ranging from climatology to geophysics.

While big data can yield extremely useful information,
it also presents new challenges with respect to how much
data to store, how much this will cost, whether the data
will be secure, and how long it must be maintained.
For example, both companies and law enforcement
agencies increasingly rely on video data for surveillance
and criminal investigation. Closed-circuit television (CCTV)
is ubiquitous in many commercial buildings and public
spaces. Police cars have cameras to record pursuits and
traffic stops, as well as dash-cams for complaint handling.
Many agencies are now experimenting with body-worn
video cameras to record incidents and gather direct
evidence from a crime scene for use in court, obviating
the need for eyewitness versions of events.5 Taser guns
also now come equipped with tiny cameras. Because all
of these devices can quickly generate a large amount of
data, which can be expensive to store and time-consuming
to process, operators must decide whether it is more cost-
effective to let them run continuously or only capture
selective images or scenes.
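A back-of-the-envelope calculation makes the cost trade-off concrete. The bitrate and recording hours below are assumptions chosen for illustration, not figures from any particular deployment:

```python
def daily_storage_gb(hours_recorded: float, mbps: float) -> float:
    """Approximate daily storage for one video stream at a given bitrate."""
    # megabits -> gigabytes: Mb/s * 3600 s/h * hours / 8 bits-per-byte / 1000 MB-per-GB
    return mbps * 3600 * hours_recorded / 8 / 1000

continuous = daily_storage_gb(24, 4.0)   # record everything at an assumed 4 Mb/s
triggered = daily_storage_gb(1.5, 4.0)   # ~1.5 h/day of motion-triggered clips
```

Under these assumptions a single continuously recording camera produces roughly 43 GB per day versus under 3 GB for selective capture, which is why the continuous-versus-selective decision matters at fleet scale.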
Big data also presents new ethical challenges.
Corporations are using big data to learn more about
their workforce, increase productivity, and introduce
revolutionary business processes. However, these
improvements come at a cost: tracking employees’ every
move and continuously measuring their performance
against industry benchmarks introduces a level of oversight
that can quash the human spirit. Such monitoring might be
in the best interest of a corporation but is not always in the
best interest of the people who make up that corporation.
In addition, as big multimedia datasets become
commonplace, the boundaries between public and private
space will blur. Emerging online apps will not only enable
users to upload video via mobile social networking but
will soon incorporate wearable devices in the form of a
digital watch or glasses to allow for continuous audiovisual
capture. People will essentially become a camera.6 This
publicly available data will dwarf that generated by today’s
CCTV cameras.
However, unlike surveillance cameras, smartphones
and wearable devices afford no privacy protection to
innocent bystanders who are captured in a video at the
right place at the wrong time. For example, in the wake
of the recent Boston bombings, images of several people
photographed at the scene were mistakenly identified as
suspects on social media sites.
In fact, one of the major challenges of big data is
preserving individual privacy. As we go about our everyday
lives, we leave behind digital footprints that, when
combined, could denote unique aspects about ourselves
that would otherwise go unnoticed, akin to digital DNA.7
Examples include our use of language and punctuation
in blog and forum posts, the clothes we wear in different
contexts, and the places we frequent—do we spend our
Sunday mornings outdoors playing sports, indoors online,
visiting friends, attending religious services, or cruising a
bad part of town? Something as innocuous as when and
how we use energy in our homes reveals many details
about us.8 Outside our homes, drones could well be used
for ad hoc monitoring, spotting unusual changes in land
use patterns and feeding data back to operation centers
about emergencies.
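To see how innocuous data leaks lifestyle details, consider a toy inference over one day of hourly electricity readings. The readings and threshold are invented for illustration; real load-disaggregation methods are far more sophisticated:

```python
# Toy "digital footprint" inference: hourly electricity readings (kWh) for one
# day; the hour where usage first jumps past a baseline suggests when the
# household wakes up. Data and threshold are hypothetical.

readings = [0.2] * 6 + [1.1, 1.4, 0.9] + [0.3] * 8 + [1.6, 1.8, 1.5, 1.2] + [0.4] * 3

def inferred_wake_hour(hourly_kwh, threshold=0.8):
    """Return the first hour whose usage exceeds the baseline threshold."""
    for hour, kwh in enumerate(hourly_kwh):
        if kwh > threshold:
            return hour
    return None
```

Even this single-threshold rule recovers a daily routine from meter data alone, which is precisely the privacy concern raised above.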
Big data analytics will draw on aspects of our home,
work, and social lives to make assumptions beyond typical
“market segmentations” and delve deep into ontological
questions such as, “Who are you?” This has metaphysical
implications. For example, people will consciously alter
their online activity, and will modify their behavior in
surveilled spaces, to protect their privacy. Big data will
change how we live in both small and large ways. Are
we on a trajectory toward an uberveillance society?
Will pervasive and ubiquitous computing converge with
underlying network infrastructure providing uber-views
using advanced data analytics for convenience, care, and
control purposes?9
Finally, many big data applications will have unintended
and unpredictable consequences as data scientists seek to
reveal trends and patterns that were previously hidden.
For example, genetic screening could reveal a predisposition
to an incurable disease like Alzheimer's, creating long-term
anxiety about the future and consequences such as
ineligibility for life insurance. Likewise,
technotherapeutics could assist elderly patients in one way
but assert unhealthy controls on others.10
We can live with many of these uncertainties for now
with the hope that the benefits of big data will outweigh
the harms, but we shouldn’t blind ourselves to the possible
irreversibility of changes—whether good or bad—to society.
Members of the IEEE Society for Social Implications
of Technology are actively engaged in exploring big data
developments and their social and ethical implications. This
special issue presents some of the subjects important to SSIT.
The five articles we selected represent perspectives from
diverse interests from both operational and nonoperational
stakeholders in the big data value chain.
Jess Hemerly provides us with an overview of public
policy considerations for a data-driven future. Hemerly, a
public policy and government relations analyst at Google,
emphasizes the need to tread carefully in the regulation
of data flows so as not to adversely impact innovation
stemming from the data sciences.
Paul Tallon addresses the need for big data governance by
positing that data does have a measurable economic value
and that there are technical, reputational, and economic
risks to manage. Tallon also presents an important
discussion on the cost of big data to organizations.
Jeremy Pitt and his coauthors write on the need to
understand big data within the context of collective
awareness, as a smart grid infrastructure can have
a positive impact on societal transformation toward
sustainability. The authors argue that computational
management of common-pool resources requires a new
approach—institution science.
Marcus Wigan and Roger Clarke are more circumspect
about the role of big data in society, pointing to the fact
that underlying problems have been in existence since the
inception of automated computers. Instead, the authors
point to the consequences of big data, including legality,
data quality, disparate data meanings, and process quality,
as just a few of the bigger issues needing attention.
Finally, we include a case study on the hopes of big
data in the health informatics space in an article written
by Carolyn McGregor. This article focuses on discovery
and the future possibilities that monitoring real-time
physiological characteristics of humans may afford to
health and well-being.
We need improved powers of discernment, as
well as verifiable proof, to better understand big
data’s opportunities and risks. It will unquestion-
ably become an integral part of our society, used in both
commercial and government applications. Our challenge
will be to maximize the benefits of big data while minimiz-
ing its harms. We hope that this special issue of Computer
inspires readers to help meet this increasingly important challenge.

References
1. K. Michael and R. Clarke, “Location and Tracking of Mobile
Devices: Überveillance Stalks the Streets,” Computer Law
& Security Rev., vol. 29, 2013, pp. 216-228.
2. R. Abbas, “The Social Implications of Location-Based
Services: An Observational Study of Users,” J. Location-
Based Services, vol. 5, nos. 3-4, 2011, pp. 156-181.
3. J. Pitt, “Design Contractualism for Pervasive/Affective
Computing,” IEEE Technology and Society Magazine, vol.
31, no. 4, 2012, pp. 25-28.
4. E. Strickland, “The Gene Machine and Me,” IEEE Spectrum,
Mar. 2013, pp. 26-32.
5. A. Hayes, “Cyborg Cops, Googlers and Connectivism,”
IEEE Technology and Society Magazine, vol. 32, no. 1, 2013,
pp. 23-24.
6. S. Mann, “Through the Glass, Lightly,” IEEE Technology and
Society Magazine, vol. 31, no. 3, 2012, pp. 10-14.
7. K. Michael and M.G. Michael, “The Social and Behavioural
Implications of Location-Based Services,” J. Location-Based
Services, vol. 5, nos. 3-4, 2011, pp. 121-137.
8. F. Sestini, “Collective Awareness Platforms: Engines for
Sustainability and Ethics,” IEEE Technology and Society
Magazine, vol. 31, no. 4, 2012, pp. 54-62.
9. M.G. Michael and K. Michael, “Towards a State of
Uberveillance,” IEEE Technology and Society Magazine,
vol. 29, no. 2, 2010, pp. 9-16.
10. M. Gagnon, J.D. Jacob, and A. Guta, “Treatment Adherence
Redefined: A Critical Analysis of Technotherapeutics,”
Nursing Inquiry, vol. 20, no. 1, 2013, pp. 60-70.
Katina Michael is an associate professor in the School of
Information Systems and Technology at the University of
Wollongong, New South Wales, Australia. Her research fo-
cuses on emerging technologies as well as national security
technologies and their corresponding social implications.
Michael received a PhD in information and communica-
tion technology from the University of Wollongong. She is a
senior member of IEEE. Contact her at
Keith W. Miller is the Orthwein Endowed Professor for
Life-Long Learning in the Sciences at the University of
Missouri–St. Louis. His research interests include software
testing and computer ethics. Miller received a PhD in com-
puter science from the University of Iowa. He is a member
of IEEE and ACM. Contact him at