A study of security and privacy issues associated with the Amazon Echo


... The Echo is a smart speaker with an integrated IVA that provides a variety of services activated through voice commands. The main purpose behind the 'Echo' is to assist users in making purchases, setting reminders, asking for the weather, playing music and many other functionalities that are expanding exponentially (Jackson and Orebaugh 2018). In fact, the idea behind its design was to create a hub within users' homes that integrates and controls other elements inside the home with a simple voice command (Euromonitor 2017). ...
... In fact, the idea behind its design was to create a hub within users' homes that integrates and controls other elements inside the home with a simple voice command (Euromonitor 2017). Through the help of Alexa, Amazon has created a smart ecosystem able to analyze consumers' behaviors and patterns, cater responses based on previous interactions, and become smarter with repeated use (Jackson and Orebaugh 2018). ...
... The Amazon Echo has several features that permit users to personify the device. To operate an Echo, the users need to interact with Alexa by initiating a command activated by the wake word "Alexa" (Jackson and Orebaugh 2018). This interaction categorizes the device as a socially interactive device (Fong et al. 2003). ...
Interactive Voice Assistants (IVAs), such as Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa and Google’s Assistant, are increasingly being integrated into consumers’ daily lives. Indeed, IVAs are unfolding new opportunities for retailers to capitalize on Internet of Things (IoT) technologies and generate incremental value. This is affecting the consumer purchasing cycle through the advent of what this study coins as the Voice Moment of Truth (VMOT). This paper introduces a conceptual framework discussing this new moment of truth, which is expected to alter shoppers’ behavior, as well as brands’ and retailers’ strategies. The VMOT is presented to be segmented into five key components: activation, conversation, perception formation, duration, and relationship formation. In addition, the introduction of the VMOT sets forth a new relation between shoppers, brands and online retailers that is crucial for its success.
... A related concern is the data circulating around the home premises due to the smart home device ecosystem. Another threat to users is smart assistant data security: a study has shown that Amazon Alexa [3] can constantly hear users' voices in standby mode, with the recordings stored anonymously in its online database. This concerns users because these smart devices are built to make their lives easier, yet in return home privacy is compromised by a single microphone contained in the smart assistant device. ...
... 76). Ironically, security and privacy issues are two of the most commonly reported worries about the same devices (Zeng et al., 2017), and the fact that privacy issues do exist (Jackson & Orebaugh, 2018; Wang et al., 2018) and that users do not fully understand them (Lau et al., 2018) have both been well documented. Although they can be helpful, Benlian et al. (2020) state that, "SHA's [Smart Home Assistants] are met with consumer skepticism regarding the amount of information that is collected and processed by their voice user interfaces" (p. ...
Interest in and ownership of smart home voice assistants like Amazon Alexa and Google Home devices have exponentially increased in recent years. Many people may purchase or be gifted such devices without knowing their potential for connecting with other home technology, listening to private conversations, sharing information with companies, and creating problems due to misunderstanding vocal commands or technological capabilities. Concerns and worries about these devices may be exacerbated over time or by a specific incident. To understand reactions to such situations, we conducted semi-structured in-depth interviews with 10 people who reported different types of worrying incidents with a range of smart home devices and their reactions to reduce that worry. Conducting a thematic coding analysis, we detail how each case study shows a person’s worries about their smart home technology developed vis-à-vis the incident or over time, and their strategies to alleviate their worry. The two dominant reactions were restricted acceptance or discontinuance of the smart home technology, while three other interviews revealed nuanced reactions on the acceptance-rejection continuum. For each interviewee, we highlight their technology use, any major incidents, and their psychological processes leading up to their actions to reduce worry. This provides an in-depth look at worry around smart home technology products themselves, not their ability to perform, and how discontinuance, restricted acceptance, and other reactions reduce those worries.
... From these examples it becomes clear that there are severe risks from (mis)use of BD by private companies, state actors, and criminals. The very features that render BD valuable to many parties at the same time pose a threat to the privacy and everyday lives of citizens around the globe (e.g., Jackson and Orebaugh 2018). Hence, assessing such risks and preserving privacy for individuals are major issues in the ethics of BD, although not the only issues. ...
While big data (BD) has been around for a while now, the social sciences have been comparatively cautious in its adoption for research purposes. This article briefly discusses the scope and variety of BD, and its research potential and ethical implications for the social sciences and sociology, which derive from these characteristics. For example, BD allows for the analysis of actual (online) behavior and the analysis of networks on a grand scale. The sheer volume and variety of data allow for the detection of rare patterns and behaviors that would otherwise go unnoticed. However, there are also a range of ethical issues of BD that need consideration. These entail, amongst others, the imperative for documentation and dissemination of methods, data, and results, the problems of anonymization and re-identification, and the questions surrounding the ability of stakeholders in big data research and institutionalized bodies to handle ethical issues. There are also grave risks involved in the (mis)use of BD, as it holds great value for companies, criminals, and state actors alike. The article concludes that BD holds great potential for the social sciences, but that there are still a range of practical and ethical issues that need addressing.
... Privacy violations that happen at the SmH UI layer can be divided into two major segments: (1) privacy violations that could happen outside the SmH due to information leakages to third party entities [1] [2] [3] [4] and (2) privacy violations that happen between SmH users [5] [6]. This research is focused on the latter type of privacy risk, which is referred to as Interpersonal Cyber Physical Privacy. ...
... Unsurprisingly, IVAs such as the Amazon Echo and Google Home have come under scrutiny for their security and privacy practices. In [19], Jackson et al. argue that security and privacy concerns for the Amazon Echo revolve around mutual trust rooted in accuracy, fairness, and privacy. ...
As the concept of the Smart Home is being embraced globally, IoT devices such as the Amazon Echo, Google Home, and Nest Thermostat are becoming a part of more and more households. In the data-driven world we live in today, internet service providers (ISPs) and companies are collecting large amounts of data and using it to learn about their customers. As a result, it is becoming increasingly important to understand what information ISPs are capable of collecting. IoT devices in particular exhibit distinct behavior patterns and specific functionality which make them especially likely to reveal sensitive information. Collection of this data provides valuable information and can have some serious privacy implications. In this work I present an approach to fingerprinting IoT devices behind private networks while only examining last-mile internet traffic. Not only does this attack rely only on traffic that would be available to an ISP, it also requires no changes to existing infrastructure. Further, it does not rely on packet contents, and therefore works despite encryption. Using a database of 64 million packets logged over 15 weeks I was able to train machine learning models to classify the Amazon Echo Dot, Amazon Echo Show, Eufy Genie, and Google Home consistently. This approach combines unsupervised and supervised learning and achieves a precision of 99.95%, equating to one false positive per 2,000 predictions. Finally, I discuss the implication of identifying devices within a home.
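The general pipeline described in that abstract (per-window traffic features fed into a learned classifier) can be sketched roughly as follows. This is a minimal illustrative stand-in, not the author's actual feature set or models: the features and the nearest-centroid classifier are assumptions chosen for brevity.

```python
from statistics import mean, stdev

def features(packet_sizes):
    # Per-window features a passive observer could compute from
    # encrypted traffic: packet count, mean size, size variability.
    return (len(packet_sizes), mean(packet_sizes), stdev(packet_sizes))

def train_centroids(labeled_windows):
    # labeled_windows maps a device name to a list of packet-size windows.
    # Each device gets one centroid: the mean of its feature vectors.
    centroids = {}
    for device, windows in labeled_windows.items():
        feats = [features(w) for w in windows]
        centroids[device] = tuple(mean(f[i] for f in feats) for i in range(3))
    return centroids

def classify(window, centroids):
    # Nearest-centroid classification by squared Euclidean distance.
    f = features(window)
    return min(centroids,
               key=lambda d: sum((a - b) ** 2 for a, b in zip(f, centroids[d])))
```

A device that sends many small keep-alive packets would, under this sketch, land near a different centroid than one streaming large media frames, which is the intuition behind fingerprinting from traffic shape alone.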
Since its launch in 2014, the Amazon Echo family of devices has seen a considerable increase in adoption in consumer homes and offices. With a market worth millions of dollars, Echo is used for diverse tasks such as accessing online information, making phone calls, purchasing items, and controlling the smart home. Echo offers user-friendly voice interaction to automate everyday tasks, making it a massive success. Though many people view Amazon Echo as a helpful assistant at home or in the office, few know its underlying security and privacy implications. In this paper, we present the findings of our research on Amazon Echo’s security and privacy concerns. The findings are divided into different categories by vulnerability or attack. The proposed mitigation(s) to the vulnerabilities are also presented in the paper. We conclude that though numerous privacy concerns and security vulnerabilities associated with the device are mitigated, many vulnerabilities still need to be addressed.
Smart speakers become a versatile and convenient device to provide digital contents in our daily life. However, given the challenges of privacy management in smart speakers, it is required to examine how users cope with privacy threats of the devices. In this study, we examine how the privacy-enhancing attributes of smart speakers influence users’ preferences when choosing devices. The preference structure was generated from a conjoint approach, dealing with four privacy-enhancing features (parasite function, data storage method, security notice, and speaker recognition function) and two typical factors affecting consumer decision making (brand and price). Data collected from a survey of 516 consumers identified that, aside from the attribute of price, parasite function was ranked as the most preferred attribute. We also compared the findings across three user clusters. By examining concrete and explicit attributes regarding privacy-enhancing features, the present study provides novel insights into users’ understanding of privacy.
Despite the opportunities afforded by the growing use of drones, their civilian use causes concern as they can capture, share and store images of events and people following a set of norms still in development. The main challenges raised by drone usage relate mainly to existing gaps in regulation and the difficulty of enforcing approved guidelines. Further changes and implementation of drone legislation will broadly affect current applications and future opportunities, shaping the way civilians will be allowed to use drones. This article explores these issues qualitatively, analysing (i) civilian users’ and developers’ views on drone usage, (ii) their knowledge of relevant regulations and (iii) how they imagine its use should be regulated. In its conclusion, this article discusses the general concern of potential privacy infringements and the importance of taking into account civilians’ thoughts in further implementation of drone law in order to protect recreational practices.
From innovations such as virtual fit through 3D body scanning, smart clothes, wearable technology and virtual styling assistants to more mundane capabilities such as digital photography and social media, deciding what to wear and how to wear an item is now accompanied by a range of new information and perspectives. This article examines the sociotechnical systems that support everyday decisions about what to wear, and how this decision-making process is being re-imagined in response to technology. Drawing upon closet ethnographies with women in the USA and Australia, we focus upon the ways in which women make decisions about what will help them to ‘look professional’. Specifically, we attend to two key dimensions of the decision-making process – visions and validations – to understand the ways in which women weigh the opinion of other people, media and technologies, and the real and imagined role of how new technologies such as the Amazon Echo Look may be integrated into this process. Through fine-grained analysis of the ways that women receive, reject or ignore information about their performance of looking professional, we reflect upon the relative importance of different technologies in the process of decision-making.
Smart speakers can transform interactions with users into retrievable data, posing new challenges to privacy management. Privacy management in smart speakers can be more complex than just making decisions about disclosure based on the risk–benefit analysis. Hence, this study attempts to integrate privacy self-efficacy and the multidimensional view of privacy management behaviors into the privacy calculus model and proposes an extended privacy calculus model for smart speaker usage. The study explicates three types of privacy management strategies in smart speaker usage: privacy disclosure, boundary linkage, and boundary control. A survey of smart speaker users (N = 474) finds that perceived benefits are positively associated with privacy disclosure and boundary linkage, whereas perceived privacy risks are negatively related to these two strategies. Also, perceived privacy risks are positively related to boundary control. Finally, privacy self-efficacy promotes all three strategies while mitigating the impact of perceived privacy risks and boosting the impact of perceived benefits on privacy management.
Purpose Based on the post-acceptance model of information system continuance (PAMISC), this study investigates the influence of the early-stage users' personal traits (specifically personal innovativeness and technology anxiety) and ex-post instrumentality perceptions (specifically price value, hedonic motivation, compatibility and perceived security) on social diffusion of smart technologies measured by the intention to recommend artificial intelligence-based voice assistant systems (AIVAS) to others. Design/methodology/approach Survey data from 400 US AIVAS users were collected and analyzed with Statistical Product and Service Solutions (SPSS) 18.0 and the partial least square technique using advanced analysis of composites (ADANCO) 2.1. Findings AIVAS technology is presently at the early stage of market penetration (about 25% of market penetration in the USA). A survey of AIVAS technology users reveals that personal innovativeness is directly and indirectly (through confirmation and continuance) associated with a stronger intention to recommend the use of the device to others. Confirmation is associated with all four ex-post instrumentality perceptions (hedonic motivation, compatibility, price value and perceived security). Among the four, however, only hedonic motivation and compatibility are significant predictors of satisfaction, which lead to use continuance and, eventually, intention to recommend. Finally, technology anxiety is found to be indirectly (but not directly) associated with a lower intention to recommend. Originality/value This is the first study conducted on the early-stage AIVAS users that evaluates the influence of both personal traits and ex-post instrumentality perceptions on users' intention for continuance and recommendation to others.
Recent years have seen the rapid development and integration of the Internet of Things (IoT) and cloud computing. The market is providing various consumer-oriented smart IoT devices; the mainstream cloud service providers are building their software stacks to support IoT services. With this emerging trend even growing, the security of such smart IoT cloud systems has drawn much research attention in recent years. To better understand the emerging consumer-oriented smart IoT cloud systems for practical engineers and new researchers, this article presents a review of the most recent research efforts on existing, real, already deployed consumer-oriented IoT cloud applications in the past five years using typical case studies. Specifically, we first present a general model for the IoT cloud ecosystem. Then, using the model, we review and summarize recent, representative research works on emerging smart IoT cloud system security using 10 detailed case studies, with the aim that the case studies together provide insights into the insecurity of current emerging IoT cloud systems. We further present a systematic approach to conduct a security analysis for IoT cloud systems. Based on the proposed security analysis approach, we review and suggest potential security risk mitigation methods to protect IoT cloud systems. We also discuss future research challenges for the IoT cloud security area.
Conference Paper
Smart Home Personal Assistants (SPA) have a complex ecosystem that enables them to carry out various tasks on behalf of the user with just voice commands. SPA capabilities are continually growing, with over a hundred thousand third-party skills in Amazon Alexa, covering several categories, from tasks within the home (e.g. managing smart devices) to tasks beyond the boundaries of the home (e.g. purchasing online, booking a ride). In the SPA ecosystem, information flows through several entities including SPA providers, third-party skills providers, providers of Smart Devices, other users and external parties. Prior studies have not explored privacy norms in the SPA ecosystem, i.e., the acceptability of these information flows. In this paper, we study privacy norms in SPAs based on Contextual Integrity through a large-scale study with 1,738 participants. We also study the influence that the Contextual Integrity parameters and personal factors have on the privacy norms. Further, we identify the similarities in terms of the Contextual Integrity parameters of the privacy norms studied to distill more general privacy norms, which could be useful, for instance, to establish suitable privacy defaults in SPA. We finally provide recommendations for SPA and third-party skill providers based on the privacy norms studied.
Smart speakers, which provide continuous real-time information and convenient services, increasingly permeate users’ homes. On the other hand, the environments of the smart speaker also cause privacy issues for users by always listening to their voices in their private homes. The privacy-prone environments of smart speakers lead to users’ coping behaviors against privacy threats, not only by increasing users’ privacy concerns but also by creating negative emotions. However, prior studies have predominantly focused on cognitive frameworks and overlooked the impact of affect on users’ coping behaviors in the context of privacy threats. Drawing on the stimulus-organism-response (S-O-R) framework, this study examines the mediating role of three representative negative emotions (anger, anxiety, disappointment) between users’ privacy concerns and behaviors. The results reveal relationships between privacy concerns, negative emotions, and various privacy behaviors in the context of the smart speaker, emphasizing the mediating role of negative emotions. The findings can enhance our knowledge of privacy research by adding the influence of affect to the cognition-dominant framework.
Smart speakers are useful and convenient, but they are associated with numerous security and privacy threats. We conducted thirteen interviews with users of smart speakers to explore the effect of user experience (UX) factors on security and privacy. We analyzed the data using Grounded Theory and validated our results with a qualitative meta-synthesis. We found that smart speaker users lack privacy concerns towards smart speakers, which prompts them to trade their privacy for convenience. However, various trigger points such as negative experiences evoke security and privacy needs. When such needs emerge, existing security and privacy features were not found to be user-friendly which resulted in compensatory behavior. We used our results to propose a conceptual model demonstrating UX’s effect on risk, perceptions and balancing behavior. Finally, we concluded our study by recommending user-friendly security and privacy features for smart speakers.
This paper presents an attack and defence modelling framework for conceptualising the security of the speech interface. The modelling framework is based on the Observe-Orient-Decide-Act (OODA) loop model, which has been used to analyse adversarial interactions in a number of other areas. We map the different types of attacks that may be executed via the speech interface to the modelling framework, and present a critical analysis of the currently available defences for countering such attacks, with reference to the modelling framework. The paper then presents proposals for the development of new defence mechanisms that are grounded in the critical analysis of current defences. These proposals envisage a defence capability that would enable voice-controlled systems to detect potential attacks as part of their dialogue management functionality. In accordance with this high-level defence concept, the paper presents two specific proposals for defence mechanisms to be implemented as part of dialogue management functionality to counter attacks that exploit unintended functionality in speech recognition functionality and natural language understanding functionality. These defence mechanisms are based on the novel application of two existing technologies for security purposes. The specific proposals include the results of two feasibility tests that investigate the effectiveness of the proposed mechanisms in defending against the relevant type of attack.
The constant evolution of software development technologies has provided new interaction mechanisms for improving the usability of software systems and increasing the productivity of healthcare professionals. In this sense, speech recognition uses methods and technologies that allow the capture and transcription of spoken language automatically. However, only a few studies have involved health professionals in prototyping and validating graphical user interfaces with speech recognition to help in the development of healthcare applications. This paper specifies a computational solution that makes use of speech recognition to assist healthcare professionals in recording patients’ clinical care data. Six physicians participated in our study through prototyping activities and by specifying the workflow to be carried out in patient care. After that, the software architecture is specified and the proposed solution, which has been implemented based on the prototyping task performed by the end users, is detailed. To evaluate the proposed solution, we conducted interviews with health professionals, and the results showed a reduction in time and effort for recording patient information. In addition, using a quantitative approach, aspects of learnability, memorability, efficiency and satisfaction were investigated, where the proposed healthcare application tool obtained an average evaluation of 88% with respect to usability.
Conference Paper
Voice Command Devices and the Intelligent Personal Assistants they embody have become ubiquitous in homes and offer individuals many convenient and entertaining features. The Amazon Echo and its intelligent personal assistant, "Alexa", is a leading innovation in this area. This novel research examines aspects of trust and privacy relating to personal use of the Echo. It aims to demonstrate the types of data that may be vocally extracted from a selection of the multitude of applications that may be linked to the Echo. In the era of Voice IoT, Big Data and Artificial Intelligence, trust and privacy concerns are paramount for the individual. Personal data has never been more valuable, both to large reputable corporations and to criminal groups. The European Union's General Data Protection Regulation (GDPR) came into force in May 2018, aiming to protect the privacy of personal data of EU citizens. This has further highlighted the trust issues stemming from this technological medium. This paper demonstrates that a typically configured Echo device can prove to be a vulnerable channel by which personal information may be accessed. Where no safeguards are implemented, a plethora of data including personally identifiable information and personal health information is available from the device. Data exposure by simple vocal request leaves the system vulnerable to inquisition by any unauthorized individual who is within earshot of the device. The research explores the extent to which these risks can be reduced or mitigated, offering a set of recommendations aimed at building trust and preserving user privacy, while still enabling functionality of the device. Trust and privacy are based on a triad of shared responsibility. While the GDPR enforces trust between the voice service providers and the consumers, adherence to these recommendations will empower individuals to protect against privacy breaches from local sources.
In the past few years, there has been exponential growth in the volume of “always listening” intelligent virtual assistant devices used in the home. The adoption of intelligent virtual assistants is also moving rapidly into the applications and devices utilized by businesses. Recently, organizations such as MyFirmsApp and SAP have partnered with Amazon to embed Amazon’s Lex Artificial Intelligence technology in their applications for use in accounting workplaces. Although there are highly publicized drawbacks related to the home use of this technology, the benefits of this more convenient, spoken interface are many. Therefore, it is important and timely to review the relevant literature, such as Altman’s (1975) regulation of interpersonal barriers, Nissenbaum’s (2010) notions of contextual integrity, the privacy paradox (Norberg et al. 2007), and Eyal’s (2014) work on habit-forming products, to help understand the concerns related to the adoption of these efficiency- and productivity-boosting assistants. This paper will discuss the advantages and disadvantages of these interfaces, imagine how accountants might use these devices in the workplace currently and in the near future, examine the challenges of adopting digital assistants, and provide recommendations for future research in this area.
Conference Paper
The use of digital voice assistants such as Amazon Alexa or Google Assistant is growing rapidly in private households. They owe their popularity to a mode of human-machine interaction that is considerably more natural than that of a website or app. Given their wide adoption and easy extensibility, it is a natural step to deploy voice assistants as an additional customer-service channel in municipal utility companies. Doing so requires addressing a number of technical challenges, such as user authentication, integration with backend systems, and data-protection-compliant processing. A further question is how to select the customer concerns that are suitable for support by voice assistants. Using the example of Stadtwerke Leipzig, this contribution shows how the integration of voice assistants into customer service was carried out technically and functionally. It thereby aims to contribute to the development of design knowledge for embedding voice assistants in municipal enterprises and public administrations.
Are digital assistants sensitive to the values of their users? In this paper, we report the results of ten interviews with users of Apple's Siri, Amazon's Alexa, and Google Assistant about how they use these digital assistants. Our thematic analysis led us to identify three key themes across the ten interview transcripts: functionality of the technology, trust in the manufacturer, and privacy versus convenience. We conclude that the manufacturers of digital assistants need to be sensitive to users' values when designing and deploying these technologies.
The increasing popularity of specialized Internet-connected devices and appliances, dubbed the Internet-of-Things (IoT), promises both new conveniences and new privacy concerns. Unlike traditional web browsers, many IoT devices have always-on sensors that constantly monitor fine-grained details of users' physical environments and influence the devices' network communications. Passive network observers, such as Internet service providers, could potentially analyze IoT network traffic to infer sensitive details about users. Here, we examine four IoT smart home devices (a Sense sleep monitor, a Nest Cam Indoor security camera, a WeMo switch, and an Amazon Echo) and find that their network traffic rates can reveal potentially sensitive user interactions even when the traffic is encrypted. These results indicate that a technological solution is needed to protect IoT device owner privacy, and that IoT-specific concerns must be considered in the ongoing policy debate around ISP data collection and usage.
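The traffic-rate analysis described in that abstract can be illustrated with a toy sketch. The windowing, the threshold factor, and the synthetic rates below are illustrative assumptions, not values or methods from the study itself:

```python
def detect_spikes(bytes_per_second, factor=5):
    """Flag seconds whose send rate far exceeds the idle baseline.

    Even when payloads are encrypted, a passive observer (e.g. an ISP)
    still sees traffic rates; a sharp spike over the idle median often
    coincides with a user interaction, such as a voice command being
    streamed to the cloud.
    """
    # Use the median rate as a crude estimate of the idle baseline.
    baseline = sorted(bytes_per_second)[len(bytes_per_second) // 2]
    threshold = factor * max(baseline, 1)
    return [i for i, rate in enumerate(bytes_per_second) if rate > threshold]
```

Under this sketch, a mostly-idle smart speaker that suddenly uploads at many times its baseline rate would be flagged at exactly the seconds of the interaction, which is the privacy leak the authors argue a technological solution must address.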
Amazon Echo-Amazon Official Site-Alexa-Enabled [online] https
  • Amazon
Amazon. (2017a) Amazon Echo-Amazon Official Site-Alexa-Enabled [online] https://www. (accessed February 23). Help: Alexa and Alexa Device FAQs
  • Amazon
Amazon. (2017b) Help: Alexa and Alexa Device FAQs [online] (accessed February 23).
Amazon Echo and the internet of things that spy on you
  • K Atherton
Atherton, K. (2017) 'Amazon Echo and the internet of things that spy on you', Popular Science [online] (accessed 11 October 2017).
Alexa, are you listening?' MWR Labs
  • M Barnes
Barnes, M. (2017) Alexa, are you listening?' MWR Labs [online] blog/alexa-are-you-listening/ (accessed 11 October 2017).
United States, 116 U.S. 616, 6 S. Ct. 524, 29 L
  • Boyd V
Boyd v. United States, 116 U.S. 616, 6 S. Ct. 524, 29 L. Ed. 746, 1886 U.S. LEXIS 1806, 3 A.F.T.R. (P-H) 2488, 1 February 1886, US.
Amazon Echo Is Now Available for Everyone to Buy for $179.99, Shipments Start on
  • J Callaham
Callaham, J. (2015) Amazon Echo Is Now Available for Everyone to Buy for $179.99, Shipments Start on July 14, 23 June, Android Central [online] (accessed 11 October 2017).
Why Google, Microsoft and Amazon Love the Sound of Your VoiceBloomberg, 13 December, Bloomberg Technology [online] news/articles/2016-12-13/why-google-microsoft-and-amazon-love-the-sound-of-your-voice
  • J Cao
  • D Bass
Cao, J. and Bass, D. (2016) Why Google, Microsoft and Amazon Love the Sound of Your VoiceBloomberg, 13 December, Bloomberg Technology [online] news/articles/2016-12-13/why-google-microsoft-and-amazon-love-the-sound-of-your-voice (accessed 11 October 2017).
Amazon's Alexa Is Not Even Remotely Secure and I Really Don't Care, Gizmodo
  • A Cranz
Cranz, A. (2016) Amazon's Alexa Is Not Even Remotely Secure and I Really Don't Care, Gizmodo [online] (accessed 11 October 2017).
Crothers, B. (2017) Amazon Alexa, Apple Siri Headed to Hotel Rooms, Report Says, Fox News [online] (accessed 11 October 2017).
Gonzales, A. (2017) Amazon has Sold more than 11 Million Echo Devices, Morgan Stanley Says, January, Seattle Times [online] (accessed 11 October 2017).
Gray, S. (2016) 'Always on: privacy implications of microphone-enabled devices', Future of Privacy Forum.
Hackett, R. (2017) Amazon Echo's Alexa Went Dollhouse Crazy, Fortune [online] (accessed 5 May 2017).
Hancock, B. (2017) Fighting Echo Warrant, Amazon Has Scant Law to Draw On, 1 March, Inside Counsel [online] (accessed 11 October 2017).
Heater, B. (2017) Amazon Disputes Claims that Echo Show's Drop-In Feature is a Security Risk, TechCrunch [online] (accessed 11 October 2017).
Kotis, K. and Vouros, G.A. (2016) 'Trust semantics in IoT entities' deployment', Workshop on Artificial Intelligence and Internet of Things (AI-IoT), 9th Hellenic Conference on Artificial Intelligence (SETN 2016).
McClelland, D. (2017) How to Secure Your Amazon Echo, 17 January, Tech Radar [online] (accessed 11 October 2017).
McLaughlin, E. (2017) Suspect Oks Amazon to hand over Echo Recordings in Murder Case, CNN [online] (accessed 11 October 2017).
Moynihan, T. (2016) Amazon Echo and Google Home Record What You Say. What Happens to Your Data?, 5 December, Wired [online] (accessed 11 October 2017).
Mukunyadzi, T. (2017) Amazon Resists Request for Echo Info in Arkansas Slaying, 22 February, ABC News [online] (accessed 11 October 2017).
National Science Foundation (2016) Intelligent Cognitive Assistant Workshop Summary and Recommendations [online].
O'Brien, D.M. (2014) Constitutional Law and Politics: Civil Rights and Civil Liberties, 9th ed., Vol. 2, pp.1003-1004, W.W. Norton, New York.
Pogue, D. (2017) Your Echo Is Listening, Which Could Someday Lead to an Invasion of Your Privacy, March, Scientific American [online] your-echo-is-listening-which-could-someday-lead-to-an-invasion-of-your-privacy/.
Riley v. California (2014) Supreme Court of the United States.
Warren, T. (2017) South Park Trolls Amazon Alexa Owners in this Week's Episode, The Verge [online] (accessed 11 October 2017).
Welch, C. (2016) The Wynn Las Vegas is Putting an Amazon Echo in Every Hotel Room, The Verge [online] (accessed 11 October 2017).
Witwam, R. (2017) Burger King Created a Commercial that Hijacks your Google Assistant, Forbes [online] (accessed 5 May 2017).
Smith, M. (2017a) Cops to Increasingly Use Digital Footprints from IoT Devices for Investigations, 2 January, Network World [online] article/3154064/security/cops-to-increasingly-use-digital-footprints-from-iot-devices-for-investigations.html (accessed 11 October 2017).