Article

How private is your mental health app data? An empirical study of mental health app privacy policies and practices

Abstract

Digital mental health services are increasingly endorsed by governments and health professionals as a low-cost, accessible alternative or adjunct to face-to-face therapy. App users may suffer loss of personal privacy due to security breaches or common data sharing practices between app developers and third parties. Loss of privacy around personal health data may harm an individual's reputation or health. The purpose of this project was to identify salient consumer issues related to privacy in the mental health app market and to inform advocacy efforts towards promoting consumer interests. We conducted a critical content analysis of promotional (advertising) materials for prominent mental health apps in selected dominant English-speaking markets in late 2016-early 2017, updated in 2018. We identified 61 prominent mental health apps, 56 of which were still available in 2018. Apps frequently requested permission to access elements of the user's mobile device, including requesting so-called ‘dangerous’ permissions. Many apps encouraged users to share their own data with an online community. Nearly half of the apps (25/61, 41%) did not have a privacy policy to inform users about how and when personal information would be collected and retained or shared with third parties, despite this being a standard recommendation of privacy regulations. We consider that the app industry pays insufficient attention to protecting the privacy of mental health app users. We advocate for increased monitoring and enforcement of privacy principles and practices in mental health apps and the mobile ecosystem more broadly. We also suggest a re-framing of regulatory attention that places consumer interests at the centre of guidance.


... However, the users are generally unaware of what types of data and permissions they provide to mobile app developers and platform owners. Application developers, especially those using the "free" business model, finance their development by selling the acquired data [1][2][3]. This can violate human rights (the right to privacy) if users are unaware of it or it is being done without their explicit consent. ...
... Furthermore, about 50% of the analyzed applications did not have a privacy policy text that could inform users about the use of their data [2]. Therefore, application developers and owners should be held accountable and should minimize the damage resulting from inappropriate data processing. ...
... In these texts, companies write how they will obtain, use, disclose and manage the user's personal data. The privacy policies (should) also state to which third parties this information will be provided [2]. ...
Chapter
Full-text available
The “free” business model prevails in mobile apps available through the major channels, hinting at the possibility that users “pay” for the use of the mobile apps by sharing their private data with the developers and platform providers. Several types of personal data and permissions of mobile applications were analyzed. We examined 636 apps in several categories, such as medical, health & fitness, business, finance, and entertainment. The types of personal data being requested by the apps were collected from their privacy policies and the list of permissions was scraped from the platform’s store. We implemented a privacy policy word processing algorithm, the purpose of which was to gain a better insight into the types of data collected. Using the algorithm results, we also performed statistical analyses, based on which we found, expectedly, that free mobile applications collect more data than paid ones. However, there are discrepancies between the permissions we obtained from the privacy policy texts and those stated on the Google Play and Apple App Store websites. More permission requirements emerged from the privacy policy texts than were shown on corresponding app stores, which is a worrying result.
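The privacy-policy word-processing step described in the abstract above can be approximated by a keyword scan that tallies mentions of personal-data types in a policy text. The sketch below is illustrative only: the `DATA_TYPES` vocabulary is an assumption for demonstration, not the authors' actual lexicon or algorithm.

```python
import re
from collections import Counter

# Illustrative vocabulary of personal-data types; the study's actual
# lexicon is not reproduced here, so these terms are assumptions.
DATA_TYPES = {
    "location": ["location", "gps", "geolocation"],
    "contacts": ["contact", "address book"],
    "health": ["health", "medical", "symptom"],
    "identifiers": ["email", "phone number", "device id"],
}

def scan_policy(text: str) -> Counter:
    """Count mentions of each personal-data type in a privacy policy."""
    text = text.lower()
    counts = Counter({dtype: 0 for dtype in DATA_TYPES})
    for dtype, terms in DATA_TYPES.items():
        for term in terms:
            counts[dtype] += len(re.findall(re.escape(term), text))
    return counts

# Hypothetical policy snippet for demonstration.
policy = ("We collect your email address, GPS location, and "
          "may share your device ID with partners.")
print(scan_policy(policy))
```

Counts like these can then feed the kind of statistical comparison the chapter reports, e.g. contrasting the volume of data types declared by free versus paid apps.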
... Based on these definitions, the six types of privacy violations summarized by this paper are introduced and defined as follows. V1 (lack of access permission): In software systems, unauthorized data acquisition is a very common cause of privacy violations [34][35][36][37][38][39][40][41]. For example, unauthorized patient data acquisition in medical systems may lead to the disclosure of patient data, such as medical history [34,35,41]. ...
... V1 (lack of access permission): In software systems, unauthorized data acquisition is a very common cause of privacy violations [34][35][36][37][38][39][40][41]. For example, unauthorized patient data acquisition in medical systems may lead to the disclosure of patient data, such as medical history [34,35,41]. Unauthorized citizen data acquisition in smart city systems may lead to the disclosure of citizen data, such as citizen identity information [36][37][38][39]. ...
... V3 (leakage of external entities' privacy data): One of the main reasons for privacy disclosure is that data acquirers who have obtained access permission leak the data to third parties, such as advertisers [34,36,43]. For example, Facebook's advertising interface could disclose users' personal information, such as phone numbers, to advertisers [36]. ...
Article
Full-text available
Nowadays, large-scale software systems in many domains, such as smart cities, involve multiple parties whose privacy policies may conflict with each other, and thus, data privacy violations may arise without users even being aware of them. In this context, identifying data security requirements and detecting potential privacy violations are crucial. In the area of model-based security requirements analysis, numerous research efforts have been made. However, few existing studies support automatic privacy violation identification from software requirements. To fill this gap, this paper presents MBIPV, a Model-Based approach for Identifying Privacy Violations from software requirements. First, this paper identifies six types of privacy violations in software requirements. Second, the MBIPV profile is proposed to support modeling software requirements using UML. Third, the MBIPV prototype tool is developed to generate formal models and corresponding privacy properties automatically. Then, the privacy properties are automatically verified by model checking. We evaluated the MBIPV method through case studies of four representative software systems from different domains: smart health, smart transportation, smart home, and e-commerce. The results show that MBIPV has high accuracy and efficiency in identifying privacy violations from software requirements. To the best of our knowledge, MBIPV is the first model-based approach that supports the automatic verification of privacy properties of UML software requirement models. The source code of the MBIPV tool and the experimental data are available online at https://github.com/YETONG1219/MBIPV.
... The privacy policy is a vital document that allows app developers to inform users about their data collection practices [37]. Some mobile apps do not provide clear transparency about exactly what data is being shared, with whom, and for what purposes [40]. ...
... Most users skip reading through the privacy policy and look for the required action (ticking the box) to install the app. This finding is consistent with previous research such as [37,40,42]. It should be noted that the total time our participants spent on the privacy policy page does not mean that participants were reviewing it. ...
... Therefore, future research could consider employing accurate mechanisms such as eye-tracking (which requires accessing the device camera) to record the time participants actually spend. Whilst several studies have indicated that there is a lack of clarity and transparency in presenting such policies for mHealth apps [36,37,40], along with recommendations to improve the privacy policy for users [1,24,29,33], further work can be done to examine users' reactions when the privacy policy is presented through innovative methods, including summarizing the main points, using visualization, or displaying it as a video. This approach would be effective in encouraging users, especially those who have little or no IT knowledge, to spend more time understanding what data are collected and how their data are being handled. ...
Preprint
Full-text available
Mobile applications, mobile apps for short, have proven their usefulness in enhancing service provisioning across a multitude of domains, ranging from smart healthcare to mobile commerce and areas of context-sensitive computing. In recent years, a number of empirically grounded, survey-based studies have been conducted to investigate the secure development and usage of mHealth apps. However, such studies rely on self-reported behaviors documented via interviews or survey questions that lack a practical, i.e. action-based, approach to monitoring and synthesising users' actions and behaviors in security-critical scenarios. We conducted an empirical study, engaging participants with attack simulation scenarios and analysing their actions, to investigate the security awareness of mHealth app users via action-based research. We simulated some common security attack scenarios in the mHealth context and engaged a total of 105 app users to monitor their actions and analyse their behavior. We analysed users' data with statistical analysis including reliability and correlation tests, descriptive analysis, and qualitative data analysis. Our results indicate that whilst a minority of our participants perceived access permissions positively, the majority held negative views, indicating that such an app could violate their privacy or cause them to lose it. Users provide their consent, granting permissions, without a careful review of privacy policies, which leads to undesired or malicious access to health-critical data. The results also indicated that 73.3% of our participants had denied at least one access permission, and 36% of our participants preferred no authentication method. The study complements existing research on secure usage of mHealth apps, simulates security threats to monitor users' actions, and provides empirically grounded guidelines for the secure development and usage of mobile health systems.
... From the perspective of mHealth app providers, in addition to ensuring that secure mHealth apps have been delivered by developers, end-user training and security awareness must be treated as a priority before deploying and operationalizing mobile health systems (Zubaydi et al., 2015). On the contrary, some recent studies highlight that a lack of knowledge or understanding of end-users regarding security features is still being overlooked as a threat (Zubaydi et al., 2015; Hussain et al., 2018b; Plachkinova et al., 2015a; Parker et al., 2019). In the context of mHealth SDCL, prime importance is given to the development of security-aware apps, with developers and mHealth providers having a collective assumption that the delivered app is secure. ...
... Use of personal devices for mHealth systems: In clinical setting environments, health practitioners are often encouraged to use their own devices (i.e., Bring Your Own Device-BYOD) (Parker et al., 2019). Personal devices can be customized and convenient for practitioners to work with mHealth systems; however, such devices lack strict authentication or lock mechanisms, which make end-users' data vulnerable to undesired access. ...
... As a standard practice, privacy policies are presented to end-users before the installation of mHealth apps. A lack of awareness of end-users about privacy policies can be mainly due to (i) mHealth app providers lacking clarity and transparency in presenting such policies or (ii) end-users themselves overlooking or failing to read through such policies to understand their consequences (Plachkinova et al., 2015a; Parker et al., 2019). A study by Parker et al. (2019) conducted a content analysis to investigate privacy issues in 61 mental health apps. ...
Article
Mobile health apps (mHealth apps) are being increasingly adopted in the healthcare sector, enabling stakeholders such as medics and patients to utilize health services in a pervasive manner. Despite having several benefits, mHealth apps entail significant security and privacy challenges that can lead to data breaches with serious social, legal, and financial consequences. This research presents an empirical investigation into security awareness of end-users of mHealth apps that are available on major mobile platforms. We conducted end-users' survey-driven case study research in collaboration with two mHealth providers in Saudi Arabia to survey 101 end-users, investigating their security awareness about (i) existing and desired security features, (ii) security-related issues, and (iii) methods to improve security knowledge. The results indicate that while security awareness among the different demographic groups was statistically significant based on their IT knowledge level and education level, security awareness based on gender, age, and frequency of mHealth app usage was not statistically significant. We also found that the majority of the end-users are unaware of the existing security features provided (e.g., restricted app permissions); however, they desire usable security (e.g., biometric authentication) and are concerned about the privacy of their health information (e.g., data anonymization). End-users suggested that protocols such as two-factor authentication positively impact security but compromise usability. Security awareness via peer guidance, or training from app providers, can increase end-users' trust in mHealth apps. This research investigates human-centric knowledge based on a case study and provides a set of guidelines to develop secure and usable mHealth apps.
... social media or public records) may enable identification of individuals [3,18]. The related risks range from unvetted or intrusive targeted advertising to inferences about an individual's behaviour and health condition, which might affect employment or promotion prospects [4,19]. ...
... As consumer wearable analysis outcomes are immediately available in digital form, they can be easily shared with third parties: intentionally for financial benefits or unintentionally and indirectly by using third-party services such as libraries, analytics, and customer service support [45,19]. Here too, deidentification does not always guarantee users' privacy, as app user data can be cross-linked with data from other sources and become easily re-identifiable. ...
... Finally, privacy policies are a recognised weak spot in health apps and digital health products and services in general. Besides not always being available [50], privacy policies and terms often do not meaningfully seek users' consent for access to their data, instead applying a "take it or leave it" policy, or simply lack comprehensibility [19,10]. ...
Chapter
Digital health products and services (digital therapeutics) are fueling a transformation of traditional healthcare structures, enabling patients to become more proactive in managing their health. However, as digital therapeutics involve the processing of sensitive information—potentially by several entities—security and privacy vulnerabilities inherent to these systems also introduce risks to patients’ data. These risks range from unvetted or intrusive targeted advertising to inferences about an individual's behavior and health condition. These risks are even more crucial for the mental health domain; hence digital therapeutics designers and researchers must be aware of these risks, apply appropriate procedures for evaluating data practices, and design necessary safeguards. In this chapter, we map the common risks related to the protection of patient data in digital therapeutics, provide pointers for navigation of the relevant regulatory landscape, and outline available evaluation methods and guidelines to address security and privacy risks in digital therapeutics.
... Secondly, mental health apps potentially collect and analyse sensitive data, resulting in concerns about data privacy 12. Few publicly available apps address these concerns, with many mental health and women's health apps not including a privacy policy [42][43][44], and few seeking consent from users 44, driving mistrust. Privacy policies in apps that do include them typically score low on readability 42, reducing users' ability to provide consent 44. ...
... Few publicly available apps address these concerns, with many mental health and women's health apps not including a privacy policy [42][43][44], and few seeking consent from users 44, driving mistrust. Privacy policies in apps that do include them typically score low on readability 42, reducing users' ability to provide consent 44. Given this, app developers should endeavour to provide an accessible privacy policy which details how the data users provide are analysed and stored. ...
Article
Full-text available
Premenstrual symptoms are common, with premenstrual syndrome and premenstrual dysphoric disorder associated with decreased wellbeing and increased suicidality. Apps can offer convenient support for premenstrual mental health symptoms. We aimed to understand app preferences and Health Belief Model (HBM) constructs driving app use intention. An online survey was delivered. Structural equation modelling (SEM) explored HBM constructs. Data from 530 United Kingdom-based participants who reported their mental health was impacted by their menstrual cycle (mean age = 35.85, SD = 7.28) were analysed. In terms of preferred app features, results indicated that symptom monitoring (74.72%, n = 396) and psychoeducation (57.92%, n = 307) were sought after, with 52.64% (n = 279) indicating unwillingness to pay for an app for mental health symptoms related to the menstrual cycle. Regarding HBM results, Satorra–Bentler-scaled fit statistics indicated a good model fit (χ²(254) = 565.91, p < 0.001; CFI = 0.939, RMSEA = 0.048, SRMR = 0.058). HBM constructs explained 58.22% of intention to use, driven by cues to action (β = 0.49, p < 0.001), perceived barriers (β = −0.22, p < 0.001), perceived severity (β = 0.16, p = 0.012), and perceived benefits (β = 0.10, p = 0.035). Results indicate that app developers should undertake co-design, secure healthcare professional endorsement, highlight therapeutic benefits, and address barriers like digital discomfort, privacy concerns, and quality.
... Laws and regulations, such as the General Data Protection Regulation (GDPR) in the European Union, focus primarily on user data protection [1], while the Health Insurance Portability and Accountability Act (HIPAA) aims to protect user healthcare information [38]. Nonetheless, when patients receive consultations via mobile applications or web services, sometimes it is necessary to document their issues, such as symptoms or medical history, for treatment purposes [44,55]. These services may store not only symptoms but also personal details like name, age, user location, and gender-specific information or make them accessible to third parties [31]. ...
... The list of dangerous permissions reported by MobSF along with Exploited and Ghost permissions reported by RiskInDroid is alarming. Access to storage, camera, location, accounts, phone contacts, body sensors, managing accounts, accessing user profiles, and reading phone state are some of the permissions that should be considered when developing applications [44]. ...
Preprint
In the wake of the COVID-19 pandemic, a rapid digital transformation has taken place in the mental healthcare sector, with a marked shift towards telehealth services on web and mobile platforms. This transition, while advantageous in many ways, raises critical questions regarding data security and user privacy given the sensitive nature of the information exchanged. To evaluate these concerns, we undertook a rigorous security and privacy examination of 48 web services and 39 mobile applications specific to mental healthcare, utilizing tools such as MobSF, RiskInDroid, AndroBugs, SSL Labs, and Privacy Check. We also delved into privacy policies, manually evaluating how user data is acquired, disseminated, and utilized by these services. Our investigation uncovered that although a handful of mental healthcare web services comply with expert security protocols, including SSL certification and solid authentication strategies, they often lack crucial privacy policy provisions. In contrast, mobile applications exhibit deficiencies in security and privacy best practices, including underdeveloped permission modeling, absence of superior encryption algorithms, and exposure to potential attacks such as Janus, Hash Collision, and SSL Security. This research underscores the urgency to bolster security and privacy safeguards in digital mental healthcare services, concluding with pragmatic recommendations to fortify the confidentiality and security of healthcare data for all users.
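The dangerous-permission checks that tools such as MobSF and RiskInDroid report can be illustrated with a minimal manifest inspection. The sketch below is an assumption-laden simplification, not those tools' actual implementation: it parses an `AndroidManifest.xml` string and intersects the requested permissions with a small, non-exhaustive subset of Android's "dangerous" permission group.

```python
import xml.etree.ElementTree as ET

# Illustrative subset of Android's "dangerous" protection-level permissions;
# the authoritative list lives in the Android permissions documentation.
DANGEROUS = {
    "android.permission.READ_CONTACTS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.CAMERA",
    "android.permission.BODY_SENSORS",
    "android.permission.READ_EXTERNAL_STORAGE",
}

# ElementTree stores namespaced attributes with the full URI in braces.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def dangerous_permissions(manifest_xml: str) -> set:
    """Return the dangerous permissions requested in an AndroidManifest.xml."""
    root = ET.fromstring(manifest_xml)
    requested = {
        elem.get(f"{ANDROID_NS}name")
        for elem in root.iter("uses-permission")
    }
    return requested & DANGEROUS

# Hypothetical manifest fragment for demonstration.
manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.CAMERA"/>
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
</manifest>"""
print(sorted(dangerous_permissions(manifest)))
```

In a real audit, the manifest would first be extracted from the APK (e.g. via apktool or MobSF's static analysis), and the permission set compared against the full platform list rather than this subset.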
... Most researchers focus only on the apps' privacy policies (O'Loughlin et al. 2019;Powell et al. 2018; Robillard et al. 2019;Rosenfeld et al. 2017). Another work investigates only the apps' permissions (Huang and Bashir 2017), or the combination of apps' permissions and privacy policies (Parker et al. 2019). Another study (Muchagata and Ferreira 2019) proposes a scope of analysis to check for GDPR compliance, i.e., assessing the types of collected data, apps' permissions, and evidence of consent management, data portability and data deletion features. ...
... In contrast, mental health app users would consider this invasive since most users would not even want other people to know that they are using mental health apps. Research has shown that breaches of mental health information have severe repercussions, such as exploitative targeted advertising and negative impacts on an individual's employability, credit rating, or ability to access rental housing (Parker et al. 2019). ...
Article
Full-text available
An increasing number of mental health services are now offered through mobile health (mHealth) systems, such as mobile applications (apps). Although there is an unprecedented growth in the adoption of mental health services, partly due to the COVID-19 pandemic, concerns about data privacy risks due to security breaches are also increasing. Whilst some studies have analyzed mHealth apps from different angles, including security, there is relatively little evidence on data privacy issues that may exist in mHealth apps used for mental health services, whose recipients can be particularly vulnerable. This paper reports an empirical study aimed at systematically identifying and understanding data privacy incorporated in mental health apps. We analyzed 27 top-ranked mental health apps from the Google Play Store. Our methodology enabled us to perform an in-depth privacy analysis of the apps, covering static and dynamic analysis, data sharing behaviour, server-side tests, privacy impact assessment requests, and privacy policy evaluation. Furthermore, we mapped the findings to the LINDDUN threat taxonomy, describing how threats manifest in the studied apps. The findings reveal important data privacy issues such as unnecessary permissions, insecure cryptography implementations, and leaks of personal data and credentials in logs and web requests. There is also a high risk of user profiling, as the apps do not provide foolproof mechanisms against linkability, detectability, and identifiability. Data sharing among third parties and advertisers in the current apps' ecosystem aggravates this situation. Based on the empirical findings of this study, we provide recommendations to be considered by different stakeholders of mHealth apps in general and app developers in particular.
We conclude that while developers ought to be more knowledgeable in considering and addressing privacy issues, users and health professionals can also play a role by demanding privacy-friendly apps. Supplementary information: The online version contains supplementary material available at 10.1007/s10664-022-10236-0.
... Data security, informed consent, and privacy issues are of the utmost significance in the digital world. When teenagers engage in technology-assisted treatments, it is imperative to put their privacy first because they can divulge private information [42]. Establishing and upholding ethical standards in the provision of mental health services is contingent upon this. ...
Article
Full-text available
This paper offers a thorough examination of the present condition and prospective advancements in technology-based anger management therapies for teenagers, with two objectives: first, to assess the efficacy of technology-assisted anger management programmes, and second, to examine the wider implications and obstacles associated with integrating technology into interventions for adolescent mental health. The paper examines both traditional and technology-assisted interventions and highlights the good results they yield, including better anger management, greater emotional intelligence, and stronger coping mechanisms linked to technology-driven approaches. The literature study examines conventional psychotherapy and compares it to emerging technology-based approaches, particularly mobile applications and virtual reality programmes. The discourse assesses the effectiveness, user contentment, and lasting effects of technology-assisted interventions by analysing past research, offering valuable insights into the possibilities of these interventions. The effectiveness section provides a comprehensive analysis of favourable results, with a particular focus on the disadvantages of depending on technology and ethical concerns within the digital environment. The practical implications underscore the incorporation of technology into programmes addressing the mental health of adolescents, the necessity of training mental health practitioners in the use of technology, and the need to consider varied populations, particularly in Nigeria. Future directions entail the need to fill gaps in research, investigate future technologies, and comprehend the lasting impacts of these treatments.
... Research suggests that existing mental health apps have not generally been fully evaluated, raising concerns about their safety, efficacy, and impact (70,71). For instance, a recent systematic review (72) of the top 100 mobile apps for bipolar disorder reveals insufficient academic research about the effectiveness of the apps accessible in the marketplace, with only one app supported by a peer-reviewed study. ...
Article
Full-text available
Introduction Anxiety and depression are major causes of disability in Arab countries, yet resources for mental health services are insufficient. Mobile devices may improve mental health care delivery (mental m-Health), but the Arab region's mental m-Health app landscape remains under-documented. This study aims to systematically assess the features, quality, and digital safety of mental m-Health apps available in the Arab marketplace. We also contrast a set of recommended Australian apps to benchmark current strategies and evidence-based practices and suggest areas for improvement in Arabic apps. Methods Fifteen Arab country-specific iOS Apple Stores and an Android Google Play Store were searched. Apps that met the inclusion criteria were downloaded and evaluated using the Mobile App Rating Scale (MARS) and the Mobile App Development and Assessment Guide (MAG). Results Twenty-two apps met the inclusion criteria. The majority of apps showed no evidence of mental health experts being involved in the app design processes. Most apps offered real-time communication with specialists through video, text, or audio calls rather than evidence-based self-help techniques. Standardized quality assessment showed low scores for design features related to engagement, information, safety, security, privacy, usability, transparency, and technical support. In comparison to apps available in Australia, Arabic apps did not include evidence-based interventions like CBT, self-help tools and crisis-specific resources, including a suicide support hotline and emergency numbers. Discussion In conclusion, dedicated frameworks and strategies are required to facilitate the effective development, validation, and uptake of Arabic mental mHealth apps. Involving end users and healthcare professionals in the design process may help improve app quality, dependability, and efficacy.
... However, to the best of our knowledge, there has been no work investigating how such technologies could be adapted to benefit group therapy, where the group composition is constantly changing. Yet, we also note that privacy concerns will be paramount in the case of mental health treatment, particularly given existing concerns about the privacy of other digital mental health support tools [55]. Future research should, therefore, be aware of and consider the privacy limitations when supporting meetings in group therapy. ...
Preprint
Full-text available
Psychotherapy, such as cognitive-behavioral therapy (CBT), is effective in treating various mental disorders. Technology-facilitated mental health therapy improves client engagement through methods like digitization or gamification. However, these innovations largely cater to individual therapy, ignoring the potential of group therapy-a treatment for multiple clients concurrently, which enables individual clients to receive various perspectives in the treatment process and also addresses the scarcity of healthcare practitioners to reduce costs. Notwithstanding its cost-effectiveness and unique social dynamics that foster peer learning and community support, group therapy, such as group CBT, faces the issue of attrition. While existing medical work has developed guidelines for therapists, such as establishing leadership and empathy to facilitate group therapy, understanding about the interactions between each stakeholder is still missing. To bridge this gap, this study examined a group CBT program called the Serigaya Methamphetamine Relapse Prevention Program (SMARPP) as a case study to understand stakeholder coordination and communication, along with factors promoting and hindering continuous engagement in group therapy. In-depth interviews with eight facilitators and six former clients from SMARPP revealed the motivators and demotivators for facilitator-facilitator, client-client, and facilitator-client communications. Our investigation uncovers the presence of discernible conflicts between clients' intrapersonal motivation as well as interpersonal motivation in the context of group therapy through the lens of self-determination theory. We discuss insights and research opportunities for the HCI community to mediate such tension and enhance stakeholder communication in future technology-assisted group therapy settings.
... This is to be expected: mental health apps account for approximately one-third of the overall mobile health app market 20 and are designed to cover all stages of clinical care provision, from symptom tracking and passive data collection to immediate crisis intervention, prevention, diagnosis, primary treatment, supplement to in-person therapy, and post-treatment condition management 21 . This dominance of the market reflects a recent surge in demand for mental health apps, particularly among those under the age of 25, as the need for mental health support has increased (especially in the wake of the COVID-19 crisis), but access to in-person treatment has worsened [22][23][24][25] . However, it has long been clear that this increased demand for mental health apps has not translated into increased pressure to ensure they are safe, effective, and overall high quality [26][27][28] . ...
Article
Full-text available
Background: There are more than 350,000 health apps available in public app stores. The extolled benefits of health apps are numerous and well documented. However, there are also concerns that poor-quality apps, marketed directly to consumers, threaten the tenets of evidence-based medicine and expose individuals to the risk of harm. This study addresses this issue by assessing the overall quality of evidence publicly available to support the effectiveness claims of health apps marketed directly to consumers. Methodology: To assess the quality of evidence available to the public to support the effectiveness claims of health apps marketed directly to consumers, an audit was conducted of a purposive sample of apps available on the Apple App Store. Results: We find the quality of evidence available to support the effectiveness claims of health apps marketed directly to consumers to be poor. Less than half of the 220 apps (44%) we audited state that they have evidence to support their claims of effectiveness and, of these allegedly evidence-based apps, more than 70% rely on either very low or low-quality evidence. For the minority of app developers that do publish studies, significant methodological limitations are commonplace. Finally, there is a pronounced tendency for apps, particularly mental health and diagnostic apps, to either borrow evidence generated in other (typically offline) contexts or to rely exclusively on unsubstantiated, unpublished user metrics as evidence to support their effectiveness claims. Conclusions: Health apps represent a significant opportunity for individual consumers and healthcare systems. Nevertheless, this opportunity will be missed if the health apps market continues to be flooded by poor quality, poorly evidenced, and potentially unsafe apps. It must be accepted that a continuing lag in generating high-quality evidence of app effectiveness and safety is not inevitable: it is a choice.
The fact that it will be challenging to raise the quality of the evidence base available to support the claims of health apps does not mean that the bar for evidence quality should be lowered. Innovation for innovation's sake must not be prioritized over public health and safety.
... However, psychological support apps can be considered new in the context of the mobile app industry, and privacy practices related to those apps have not been fully addressed. Parker et al. (2019) analyzed 61 mental health apps from a dominant English-speaking market and found that nearly half of the examined apps did not provide a privacy policy for users. In a similar study, Robillard et al. (2019) reviewed 100 mental health apps from popular digital stores, indicating that most of the apps did not provide a privacy policy or a ToS agreement. ...
Conference Paper
The basic research project KARLI is funded by the German Federal Ministry for Economic Affairs and Climate Action and by the European Parliament. Among other topics, KARLI aims to develop a methodological approach for empirically identifying and evaluating social, legal, and ethical implications from the early stages of the development process of innovative technologies. One aspect of our work is to map the legal implications for product development and manufacturers, as well as for users themselves. The idea is to develop an overall picture of this unstructured legal landscape for the development and use of artificial intelligence systems by providing legal guidance for this fast-moving technology. The legal landscape of data protection is a particular focus of my work. Through workshops with groups of legal experts, we have collected various aspects of this legal structure. I would like to use the example of automated driving to show the evolution and development of guidelines for AI systems. There is a landscape of many legal touch points for different groups, such as developers, but also users, of AI-based systems. Recent legal history, with various new European laws such as the AI Act and the Data Act, shows how quickly this field changes and how important it is to have these guidelines in place. Through my research, I want to present these various aspects with a future perspective for users and developers.
... The crucial value of privacy appears to conflict with the use of devices to collect data on "global positioning system (GPS), voice, keyboard usage, photos, video and overall phone usage behavior" (Torous et al., 2019, p. 97), with regulatory mechanisms currently seeming insufficient for data protection. Studies suggest that even health and wellbeing apps "certified as clinically safe and trustworthy by the UK NHS Health Apps Library" (Huckvale et al., 2015, p. 1) commonly place personal information at risk of interception (Parker et al., 2019). Finally, the idea of the 'therapeutic alliance', a model of working that involves "an agreement on goals, an assignment of task or a series of tasks, and the development of bonds" (Bordin, 1979, p. 253), represents a concept of relationality that could be threatened by DMH. ...
Article
Full-text available
This paper aims to understand how science and technology experts working in the digital mental health field interpret the ethical and social implications of its technologies, combining an ‘expert interview’ methodology with insights from sociotechnical systems theory. Following recruitment of experts in science and technology fields who had experience of supporting the development of DMH interventions, 11 semi-structured interviews were conducted and analyzed in accordance with the Framework Method. A single theme of ‘complexity of implications’ is presented here and divided into the categories of ‘implications for users’, ‘implications for healthcare professionals and systems’, and ‘implications for society’. Participants identified a range of ethical and social implications of digital mental health technologies at the three different levels, which this discussion relates to three key aspects of complex sociotechnical systems identified in existing theoretical work. These are ‘heterogeneity’, ‘interdependence’ and ‘distribution’, each of which raises important questions for future research about how complex values, relationships and responsibilities should be negotiated in digital mental health. The paper concludes that this study’s approach provides a model for understanding the implications of digital health more broadly, with participants’ combined experience and knowledge shedding light on key interventions at the forefront of digitalization in healthcare.
... Sharing of personal information: the 12306 app's privacy policy states that it will share users' personal information with suppliers and third-party merchants of various functional goods or technical services. It is necessary to be vigilant: app users may suffer loss of personal privacy due to security breaches or common data sharing practices between app developers and third parties [9]. This sharing is conditional on the following: ...
... However, the results from our research show that most mental health applications do not comply with legal regulations about data privacy [10], [12], [13], [14], that their data privacy policies are difficult to understand and are usually available only in English [7], [8], [11], and that users have concerns about security issues, which are justified, as most mental health applications present security risks [8], [9]. ...
Article
Full-text available
Background: With the emergence of eHealth and mHealth, the use of mental health apps has increased significantly as an accessible and convenient adjunct to promoting well-being and mental health. Several available apps can assist with mental health monitoring and management, each with specific features to meet different needs. The intersection of mental health and cyber technology presents a number of critical legal and ethical issues. As mental health monitoring apps and devices become more integrated into clinical practice, cybersecurity takes on paramount importance. Objective: To address the ethical and legal aspects of health cybersecurity related to applications in mental health monitoring and management. Methods: We carried out a thematic synthesis of the best scientific evidence. Results: These tools have the potential to significantly improve access to and quality of care for users with mental health conditions, but they also raise substantial concerns about privacy and informed consent. Cybersecurity in mental health is not only a matter of technology but also of human rights. The protection of sensitive mental health information is critical, and legal and ethical measures to safeguard this information must be implemented in a robust and transparent manner. Conclusion: The use of information technologies and mobile devices is now part of clinical reality and its future perspectives. It is important to mention that while these apps can be helpful for self-care and mental well-being management, they are not a substitute for the advice and support of a qualified mental health professional (psychologist or psychiatrist). As we move into the digital age, it is imperative that mental health monitoring and management apps are developed and used responsibly, ensuring the safety, dignity, and well-being of users.
... The authors suggest that app stores should incorporate stricter standards for privacy policies to assist users to understand privacy disclosures (Parker et al., 2019). ...
Article
Full-text available
The General Data Protection Regulation (GDPR) obliges data controllers to inform users about data processing practices. Long criticised for inefficiency, privacy policies face a substantive shift with the recent introduction of privacy labels by the Apple App Store and the Google Play Store. This paper illustrates how privacy disclosures of apps are governed by both the GDPR and the contractual obligations of app stores and is complemented by empirical insights into the privacy disclosures of 845,375 apps from the Apple App Store and 1,657,353 apps from the Google Play Store. While the GDPR allows for the use of privacy labels as a complementary tool next to privacy policies, the design of the privacy labels does not satisfy the standards set in Art. 5(1)(a) GDPR and Art. 12-14 GDPR. The app stores may consequently distort the compliance of apps with data protection laws. The empirical data highlight further problems with the privacy labels. The design of the labels favours disclosures of developers that offer a variety of apps that can process data across different services, and contradictory disclosures get neither flagged nor verified by app stores. The paper contributes to the overall discussion of how app stores in their role as intermediaries govern privacy standards and the impact of private sector-led initiatives.
... Interviewees expressed that to address these concerns, digital mental health apps must provide clear and transparent information on how they handle user data. Lamentably, despite interviewees conveying a desire for this information, many mental health apps do not offer a privacy policy to users [60]. Of those that do provide a privacy policy, many demonstrate low readability scores [61], potentially fostering a sense of mistrust in how collected data are being analyzed and used. ...
Article
Full-text available
Background Mental health care provision in the United Kingdom is overwhelmed by a high demand for services. There are high rates of under-, over-, and misdiagnosis of common mental health disorders in primary care and delays in accessing secondary care. This negatively affects patient functioning and outcomes. Digital tools may offer a time-efficient avenue for the remote assessment and triage of mental health disorders that can be integrated directly into existing care pathways to support clinicians. However, despite the potential of digital tools in the field of mental health, there remain gaps in our understanding of how the intended user base, people with lived experiences of mental health concerns, perceive these technologies. Objective This study explores the perspectives and attitudes of individuals with lived experiences of mental health concerns on mental health apps that are designed to support self-assessment and triage. Methods A semistructured interview approach was used to explore the perspectives of the interviewees using 5 open-ended questions. Interviews were transcribed verbatim from audio data recordings. The average interview lasted 46 minutes (rounded to the nearest min; SD 12.93 min). A thematic analysis was conducted. Results Overall, 16 individuals were interviewed in this study. The average age was 42.25 (SD 15.18) years, half of the interviewees identified as women (8/16, 50%), and all were White (16/16, 100%). The thematic analysis revealed six major themes: (1) availability and accessibility, (2) quality, (3) attitudes, (4) safety, (5) impact, and (6) functionality. 
Conclusions Engaging in clear communication regarding data security and privacy policies, adopting a consent-driven approach to data sharing, and identifying gaps in the app marketplace to foster the inclusion of a range of mental health conditions and avoid oversaturation of apps for common mental health disorders (eg, depression and anxiety) were identified as priorities from interviewees’ comments. Furthermore, reputation was identified as a driver of uptake and engagement, with endorsement from a respected source (ie, health care provider, academic institution) or direct recommendation from a trusted health care professional associated with increased interest and trust. Furthermore, there was an interest in the role that co-designed digital self-assessments could play in existing care pathways, particularly in terms of facilitating informed discussions with health care professionals during appointments and by signposting individuals to the most appropriate services. In addition, interviewees discussed the potential of mental health apps to provide waiting list support to individuals awaiting treatment by providing personalized psychoeducation, self-help tips, and sources of help. However, concerns regarding the quality of care being affected because of digital delivery have been reported; therefore, frequent monitoring of patient acceptability and care outcomes is warranted. In addition, communicating the rationale and benefits of digitizing services will likely be important for securing interest and uptake from health care service users.
... Privacy policy has been a popular site of empirical inquiry, because it is publicly available and regularly archived. It has been extensively examined within diverse contexts, e.g., mental health apps (Parker et al. 2019) or public sector websites (Beldad et al. 2009). As a business practice that emerged in the late 1990s, it predates the legal requirements for the transparency of personal data processing. The practice began to gain prominence in the late 1990s and early 2000s, when industry and advocacy groups sought to 'self-regulate' by adopting privacy policies as a means to address concerns about privacy and data protection (FTC 2000). ...
Article
Full-text available
In the realm of data protection, a striking disconnect prevails between traditional domains of doctrinal, legal, theoretical, and policy-based inquiries and a burgeoning body of empirical evidence. Much of the scholarly and regulatory discourse remains entrenched in abstract legal principles or normative frameworks, leaving the empirical landscape uncharted or minimally engaged. Since the birth of EU data protection law, a modest body of empirical evidence has been generated but remains widely scattered and unexamined. Such evidence offers vital insights into the perception, impact, clarity, and effects of data protection measures but languishes on the periphery, inadequately integrated into the broader conversation. To make a meaningful connection, we conduct a comprehensive review and synthesis of empirical research spanning nearly three decades (1995 to March 2022), advocating for a more robust integration of empirical evidence into the evaluation and review of the GDPR, while laying a methodological foundation for future empirical research.
... Threats to data privacy in mobile applications are driven by profiteering (Parker et al. 2019). Mobile application companies frequently collect data from users such as behaviours, usernames, passwords, contact information, age, gender, location, International Mobile Equipment Identity (IMEI), and phone numbers. ...
Article
Full-text available
Since the coronavirus disease (COVID-19) pandemic began in 2020, it has changed the way people live, including social life and healthcare. One of the simplest ways to avoid the wide spread of the virus is to minimize physical contact and avoid crowded places. The pandemic has also prompted countries across the world to employ digital technologies such as wireless communication systems to combat this global crisis. Digital healthcare is one of the solutions that play a crucial role in supporting the healthcare sector to prevent and minimize physical contact through telehealth and telemedicine, such as monitoring, diagnosis, and patient care. The 5G network has the potential to advance digital healthcare through key technologies such as enhanced Mobile Broadband (eMBB), Ultra Reliable and Low Latency Communication (URLLC), and massive Machine Type Communication (mMTC). Despite the benefits of digital healthcare leveraging 5G technology, challenges remain, such as privacy protection issues, 5G deployment, and limited connectivity. This review highlights the relevance and challenges of 5G wireless cellular networks for digital healthcare during the COVID-19 pandemic. It also provides potential solutions and future research areas for researchers on 5G to reduce COVID-19-related health risks.
... Secondly, mental health apps potentially collect and analyse sensitive data, resulting in concerns about data privacy (12). Few publicly available apps address these concerns: many mental health and women's health apps do not include a privacy policy (39)(40)(41), and few seek consent from users (41), driving mistrust. Privacy policies in apps that do include them typically score low on readability (39), reducing users' ability to provide consent (42). ...
Preprint
Full-text available
Premenstrual symptoms are common, with premenstrual syndrome and premenstrual dysphoric disorder associated with decreased wellbeing and suicidality. High-quality apps can offer convenient support for premenstrual mental health symptoms. We aimed to understand app preferences and the Health Belief Model (HBM) constructs driving app use intention. An online survey was administered. Structural equation modelling (SEM) explored HBM constructs. Data from 530 participants were analysed. Symptom monitoring (74.72%, n = 396) and psychoeducation (57.92%, n = 307) were sought after, with 52.64% (n = 279) indicating unwillingness to pay. Satorra-Bentler scaled fit statistics indicated a good model fit (χ²(254) = 565.91, p < .001; CFI = .939, RMSEA = .048, SRMR = .058). HBM constructs explained 58.22% of intention to use, driven by cues to action (β = .49, p < .001), perceived barriers (β = -.22, p < .001), perceived severity (β = .16, p = .012), and perceived benefits (β = .10, p = .035). Results indicate that app developers should engage in co-design, secure endorsement from healthcare professionals, highlight therapeutic benefits, and address barriers like digital discomfort, privacy concerns, and quality.
... In [38], the authors analyzed a set of dementia apps for compliance with data privacy regulations advised by the GDPR. Compliance with data sharing, collecting, and processing recommendations was unsatisfactory. ...
... 44 Relatedly, privacy concerns about access to personal data have long been established as a barrier to seeking help online for mental health problems, as commonly discussed in studies on digital mental health. [45][46][47] Therefore, to further identify factors particularly relevant to AI technology, we propose the following hypotheses. ...
Article
Full-text available
Background: Artificial intelligence-based chatbots (AI chatbots) can potentially improve mental health care, yet factors predicting their adoption and continued use are unclear. Methods: We conducted an online survey with a sample of U.S. adults with symptoms of depression and anxiety (N = 393) in 2021 before the release of ChatGPT. We explored factors predicting the adoption and continued use of AI chatbots, including factors of the unified theory of acceptance and use of technology model, stigma, privacy concerns, and AI hesitancy. Results: Results from the regression indicated that for nonusers, performance expectancy, price value, descriptive norm, and psychological distress are positively related to the intention of adopting AI chatbots, while AI hesitancy and effort expectancy are negatively associated with adopting AI chatbots. For those with experience in using AI chatbots for mental health, performance expectancy, price value, descriptive norm, and injunctive norm are positively related to the intention of continuing to use AI chatbots. Conclusions: Understanding the adoption and continued use of AI chatbots among adults with symptoms of depression and anxiety is essential given that there is a widening gap in the supply and demand of care. AI chatbots provide new opportunities for quality care by supporting accessible, affordable, efficient, and personalized care. This study provides insights for developing and deploying AI chatbots such as ChatGPT in the context of mental health care. Findings could be used to design innovative interventions that encourage the adoption and continued use of AI chatbots among people with symptoms of depression and anxiety and who have difficulty accessing care.
... Interviewees expressed that, in order to address these concerns, digital mental health apps need to provide clear and transparent information on how they handle user data. Lamentably, despite interviewees conveying a desire for this information, many mental health apps do not offer a privacy policy to users [56], and of those that do provide a privacy policy, many demonstrate low readability scores [57], potentially fostering a sense of mistrust in how collected data are being analyzed and used. Conversely, some interviewees expressed a more nonchalant attitude with regard to data security. ...
Preprint
Full-text available
BACKGROUND Mental health care provision in the UK is overwhelmed, with high demand for services. There are also high rates of under-, over-, and misdiagnosis of common mental health disorders in primary care and delays in accessing secondary care. This negatively impacts patient functioning and outcomes. Digital tools may offer a time-efficient avenue for remote assessment and triage of mental health disorders which can be integrated directly into existing care pathways to support clinicians. However, despite the potential of digital tools for mental health, there remain gaps in our understanding of how the intended user base, people with lived experience of mental health concerns, perceive these technologies. OBJECTIVE To explore the perspectives and attitudes of individuals with lived experience of mental health concerns on mental health apps that are designed to support self-assessment and triage. METHODS A semi-structured interview approach was employed, exploring the perspectives of interviewees using five open-ended questions. Interviews were transcribed verbatim from audio recordings. The average interview lasted 46 minutes (rounded to the nearest minute; SD = 12.93 minutes). Thematic analysis (TA) was conducted. RESULTS A total of 16 individuals were interviewed in the current study. The average age was 42.25 years (SD = 15.18), half the interviewees were female (50%, n = 8), and all were white (100%, n = 16). TA revealed six major themes: (1) availability and accessibility, (2) quality, (3) attitudes, (4) safety, (5) impact, and (6) functionality.
Additionally, reputation was identified as a driver of uptake and engagement, with endorsement from a respected source (ie, health care provider, academic institution) or direct recommendation from a trusted health care professional associated with increased interest and trust. Furthermore, there was interest in the role apps could play in existing care pathways, particularly in terms of utilizing a results report from a digital self-assessment to facilitate informed discussions with health care professionals during appointments, and by signposting individuals to the most appropriate services. Additionally, interviewees discussed the potential for mental health apps to provide waiting list support to individuals awaiting treatment by providing personalized psychoeducation, self-help tips, and sources of help to support self-care and management.
... On some websites, privacy policies are difficult to locate, requiring multiple clicks [119]. In studies of mental health apps, less than half even had a privacy policy [137,138]. Privacy policies can be changed without notification to the user [119]. ...
Article
Full-text available
Purpose of Review Telepsychiatry practiced by psychiatrists is evidence-based, regulated, private, and effective in diverse settings. The use of telemedicine has grown since the COVID-19 pandemic as people routinely obtain more healthcare services online. At the same time, there has been a rapid increase in the number of digital mental health startups that offer various services including online therapy and access to prescription medications. These digital mental health firms advertise directly to the consumer primarily through digital advertising. The purpose of this narrative review is to contrast traditional telepsychiatry and the digital mental health market related to online therapy. Recent Findings In contrast to standard telepsychiatry, most of the digital mental health startups are unregulated, have unproven efficacy, and raise concerns related to self-diagnosis, self-medicating, and inappropriate prescribing. The role of digital mental health firms for people with serious mental illness has not been determined. There are inadequate privacy controls for the digital mental health firms, including for online therapy. We live in an age where there is widespread admiration for technology entrepreneurs and increasing emphasis on the role of the patient as a consumer. Yet, the business practices of digital mental health startups may compromise patient safety for profits. Summary There is a need to address issues with the digital mental health startups and to educate patients about the differences between standard medical care and digital mental health products.
... The authors indicated that the majority of apps did not provide privacy policies, while many apps stated that users' information might be shared with third parties. In a similar fashion, with a focus on mental health apps [20] and depression apps [16], researchers examined different, relatively small sets of apps for privacy compliance. All studies reported missing privacy policies for a very high percentage of the apps and a lack of transparency around the handling of users' data. ...
Article
Full-text available
Mobile app developers are often obliged by regulatory frameworks to provide a privacy policy in natural, comprehensible language to describe their apps' privacy practices. However, prior research has revealed that: (1) not all app developers offer links to their privacy policies; and (2) even if they do offer such access, it is difficult to determine if it is a valid link to a (valid) policy. While many prior studies have looked at this issue in the Google Play Store, the Apple App Store, and particularly the iOS store, have received much less attention. In this paper, we conduct the first and largest study to investigate these issues in the iOS app store ecosystem. First, we introduce an App Privacy Policy Extractor (APPE), a system that ingests and analyses the metadata of over two million apps to give insightful information about the distribution of the supposed privacy policies, and the content of the provided privacy policy links, store-wide. The results show that only 58.5% of apps provide links to purported privacy policies, while 39.3% do not provide policy links at all. Our investigation of the provided links shows that only 38.4% of those links were directed to actual privacy policies, while 61.6% failed to lead to a privacy policy. Further, for research purposes, we introduce the App Privacy Policy Corpus (APPC-451K), the largest app privacy policy corpus, consisting of data relating to more than 451K verified privacy policies.
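The link-verification step described in this abstract (deciding whether a declared policy link actually leads to a privacy policy) can be sketched with a simple keyword heuristic. This is an illustrative assumption, not the APPE system's actual method; the marker list, thresholds, and function name are invented for the sketch:

```python
# Hypothetical sketch of a privacy-policy link check: given the text behind an
# app's declared policy link, decide whether it plausibly is a privacy policy.
# The markers and thresholds below are illustrative assumptions only.

POLICY_MARKERS = (
    "privacy policy",
    "personal information",
    "personal data",
    "third parties",
    "data we collect",
)

def looks_like_privacy_policy(page_text: str,
                              min_hits: int = 2,
                              min_length: int = 500) -> bool:
    """Return True if the page text plausibly contains a privacy policy."""
    text = page_text.lower()
    # Count how many policy-specific phrases appear in the page.
    hits = sum(1 for marker in POLICY_MARKERS if marker in text)
    # Require both sufficient length and enough policy-specific vocabulary,
    # so short error pages and unrelated landing pages are rejected.
    return len(text) >= min_length and hits >= min_hits
```

At the scale reported in the paper, a real pipeline would also have to handle fetch failures, redirects, and non-English policies before any such text check applies.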
... However, we do not see this as a bypass for adhering to the appropriate health care and data privacy regulations. Recently, there have been worrisome reports of breaches of data privacy and data security by wellness mobile apps [73][74][75]. In addition to data breaches, other concerns include the aggregation of multisource data and the generation of digital twins without appropriate access control for the individual represented by the digital twin. ...
Article
Full-text available
Digital health interventions are being increasingly incorporated into health care workflows to improve the efficiency of patient care. In turn, sustained patient engagement with digital health interventions can maximize their benefits toward health care outcomes. In this viewpoint, we outline a dynamic patient engagement by using various communication channels and the potential use of omnichannel engagement to integrate these channels. We conceptualize a novel patient care journey where multiple web-based and offline communication channels are integrated through a “digital twin.” The principles of implementing omnichannel engagement for digital health interventions and digital twins are also broadly covered. Omnichannel engagement in digital health interventions implies a flexibility for personalization, which can enhance and sustain patient engagement with digital health interventions, and ultimately, patient quality of care and outcomes. We believe that the novel concept of omnichannel engagement in health care can be greatly beneficial to patients and the system once it is successfully realized to its full potential.
... Indeed, several studies confirm that usage rates of these mental health apps drop to less than 5% within 10 days [14,15], and apps that collect data without visualizing it for users are often found to be unengaging [16]. Many people are reluctant to share their digital signals because of privacy concerns [17] and because they do not understand what the data are used for. Data visualization offers a solution, in that it can help people learn how their raw data are used, how that raw data can be transformed into privacy-preserving digital biomarkers, and how those digital biomarkers relate to their health. ...
Article
Full-text available
Background: While digital phenotyping smartphone apps can collect vast amounts of information on participants, less is known about how these data can be shared back. Data visualization is critical to ensuring applications of digital signals and biomarkers are more informed, ethical, and impactful. But little is known about how sharing of these data, especially at different levels from raw data through proposed biomarkers, impacts patients' perceptions. Methods: We compared five different graphs generated from data created by the open source mindLAMP app that reflected different ways to share data, from raw data through digital biomarkers and correlation matrices. All graphs were shown to 28 participants, and the graphs' usability was measured via the System Usability Scale (SUS). Additionally, participants were asked about their comfort sharing different kinds of data, administered the Digital Working Alliance Inventory (D-WAI), and asked if they would want to use these visualizations with care providers. Results: Of the five graphs shown to participants, the graph visualizing change in survey responses over the course of a week received the highest usability score, with the graph showing multiple metrics changing over a week receiving the lowest usability score. Participants were significantly more likely to be willing to share Global Positioning System data after viewing the graphs, and 25 of 28 participants agreed that they would like to use these graphs to communicate with their clinician. Discussion/conclusions: Data visualizations can help participants and patients understand digital biomarkers and increase trust in how they are created. As digital biomarkers become more complex, simple visualizations may fail to capture their multiple dimensions, and new interactive data visualizations may be necessary to help realize their full value.
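The usability comparison in this study relies on the System Usability Scale (SUS). The standard SUS scoring convention is fixed (odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is multiplied by 2.5), though the function name and input validation below are illustrative:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Standard SUS scoring: odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response); the summed contributions
    are scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 sit at even indices
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

For example, strongly agreeing with every positively worded item and strongly disagreeing with every negatively worded one yields the maximum score of 100.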
Article
Full-text available
Introduction Mobile health (mHealth) technologies, including smartphone apps and wearables, have improved health care by providing innovative solutions for monitoring, education and treatment, particularly in mental health. Method This review synthesises findings from a series of reviews on mHealth interventions in psychiatry. Publications were systematically searched in PubMed, MEDLINE, PsycINFO, ScienceDirect, Scopus, Web of Science and Cochrane Library. Results Out of 2147 records, 111 studies from 2014 to 2024 focusing on anxiety and depression were included. These studies highlight the effectiveness of mHealth interventions in reducing symptoms through cognitive–behavioural therapy, mindfulness and psychoeducation, benefitting adolescents, perinatal women and marginalised groups. Additionally, mHealth shows promise in managing substance use disorders and severe mental illnesses like schizophrenia, bipolar disorder and psychosis. Conclusion Despite positive outcomes, challenges such as data privacy, user engagement and healthcare integration persist. Further robust trials and evidence-based research are needed to validate the efficacy of mHealth technologies.
Chapter
The convergence of digitization and globalization has revolutionized various sectors, including healthcare, by enabling rapid expansion and efficient communication through the Internet and digital methods. Digital technologies such as blockchain, cloud computing, and artificial intelligence (AI) have empowered the healthcare sector to collect, analyze, and utilize extensive patient data. However, as mentioned in the previous chapters, integrating digital technologies in healthcare has raised concerns about security and privacy. Despite efforts to protect patient data, the healthcare sector faces challenges in maintaining data security, leading to frequent data breaches (Jalali et al., 2019). As healthcare continues its digital transformation, addressing security and privacy concerns remains crucial for the integrity and reliability of digital healthcare systems.
Conference Paper
Mood logging, where people track mood-related data, is commonly used to support mental healthcare. Speech agents could prove beneficial in supporting mood logging for clients. Yet we know little about how Mental Healthcare Practitioners (MHPs) view speech as a tool to support current care practices. Through a thematic analysis of semi-structured interviews with 15 MHPs, we show that MHPs see opportunities in the convenience, and the data richness that speech agents could afford. However, MHPs also saw this richness as noisy, with using speech potentially diminishing a client’s focus on mood logging as an activity. MHPs were wary of overusing AI-based tools, expressing concerns around data ownership, access and privacy. We discuss the role of speech agents within blended care, outlining key considerations when using speech for mood logging in a blended mental healthcare context.
Chapter
Mental health apps (MHapps) are designed to provide digital tools and techniques for self-managing psychological forms of distress (e.g. stress and anxiety). In a psychological context, the power and efficacy of these apps is typically evidenced using clinical methods (e.g. randomized controlled trials). This chapter will describe the benefits and challenges to using these methods and will then examine the advances that can be made by considering an applied psychosocial approach to understanding MHapps. This will explore the ways people negotiate the emotional and affective landscape of these apps, considering how MHapps allow for certain ways of thinking, acting, and feeling. This follows a vital materialist perspective and aims to recognize how the lived material experience of using MHapps shapes (and is shaped) by the intersection of a range of different bodies (both human and non-human) in the unique space of MHapps.
Article
In this article we address the question ‘what is tracking in the mobile ecosystem’ through a comprehensive overview of the Software Development Kit (SDK). Our research reveals a complex infrastructural role for these technical objects connecting end-user data with app developers, third parties and dominant advertising platforms like Google and Facebook. We present an innovative theoretical framework which we call a data monadology to foreground this interrelationship, predicated on an economic model that exchanges personal data for the infrastructural services used to build applications. Our main contribution is an SDK taxonomy, which renders them more transparent and observable. We categorise SDK services into three main categories: (i) Programmatic AdTech for monetisation; (ii) App Development, for building, maintaining and offering additional artificial intelligence features and (iii) App Extensions which more visibly embed third parties into apps like maps, wallets or other payment services. A major finding of our analysis is the special category of the Super SDK, reserved for platforms like Google and Facebook. Not only do they offer a vast array of services across all three categories, making them indispensable to developers, they are super conduits for personal data and the primary technical means for the expansion of platform monopolisation across the mobile ecosystem.
Article
Full-text available
Most university students with mental disorders remain untreated. Evaluating the acceptance of intervention targets in mental health treatment, promotion, and prevention, as well as mental health service delivery modes, is crucial for reducing potential barriers, increasing healthcare utilization, and efficiently allocating resources in healthcare services. The study aimed to evaluate the acceptance of various intervention targets and delivery modes of mental health care services in German first-year university students. In total, 1,376 first-year students from two German universities from the 2017–2018 multi-center cross-sectional cohort of the StudiCare project, the German arm of the World Mental Health International College Student Survey initiative, completed a web-based survey assessing their mental health. Mental disorder status was based on self-reported data fulfilling the DSM-IV criteria. We report frequencies of accepted delivery modes [categories: group or in-person therapy with on- or off-campus services, self-help internet- or mobile-based intervention (IMI) with or without coaching, or a combination of in-person and IMI (blended)]. In a multinomial logistic regression, we estimate correlates of the preference for in-person vs. IMI vs. blended (a combination of both) modalities. Additionally, we report frequencies of intervention targets (disorder specific: e.g., social phobia, depressive mood; study-related: test anxiety, procrastination; general well-being: sleep quality, resilience), their association with mental disorders and sex, and optimal combinations of treatment targets for each mental illness. German university students' acceptance is high for in-person (71%–76%), moderate for internet- and mobile-based (45%–55%), and low for group delivery modes (31%–36%). In-person treatment (72%) was preferred over IMI (19%) and blended modalities (9%).
Having a mental disorder [odds ratio (OR): 1.56], believing that digital treatments are effective (OR: 3.2), and showing no intention to use services (OR: 2.8) were associated with a preference for IMI compared to in-person modes. Students with prior treatment experience preferred in-person modes (OR: 0.46). In general, treatment targets acceptance was higher among female students and students with mental disorders. However, this was not true for targets with the highest (i.e., procrastination) and the lowest (i.e., substance-use disorder) acceptance. If only two intervention targets were offered, a combination of study-related targets (i.e., procrastination, stress, time management) would reach 85%–88% of the students. In-person services are preferred, yet half of the students consider using IMI, preferably aiming for a combination of at least two study-related intervention targets. Student mental health care services should offer a combination of accepted targets in different delivery modes to maximize service utilization.
Chapter
In the wake of the COVID-19 pandemic, a rapid digital transformation has taken place in the mental healthcare sector, with a marked shift towards telehealth services on web and mobile platforms. This transition, while advantageous in many ways, raises critical questions regarding data security and user privacy given the sensitive nature of the information exchanged. To evaluate these concerns, we undertook a rigorous security and privacy examination of 48 web services and 39 mobile applications specific to mental healthcare, utilizing tools such as MobSF, RiskInDroid, AndroBugs, SSL Labs, and Privacy Check. We also delved into privacy policies, manually evaluating how user data is acquired, disseminated, and utilized by these services. Our investigation uncovered that although a handful of mental healthcare web services comply with expert security protocols, including SSL certification and solid authentication strategies, they often lack crucial privacy policy provisions. In contrast, mobile applications exhibit deficiencies in security and privacy best practices, including underdeveloped permission modeling, absence of superior encryption algorithms, and exposure to potential attacks such as Janus, Hash Collision, and SSL Security. This research underscores the urgency to bolster security and privacy safeguards in digital mental healthcare services, concluding with pragmatic recommendations to fortify the confidentiality and security of healthcare data for all users. Keywords: Security and Privacy Analysis; Web Services; Mobile Applications; Mental Healthcare; Telehealth
Chapter
The combination of health issues and communication concerns brings to light significant ethical considerations for health communication scholars and practitioners. Our discussion of these considerations focuses on guiding precepts from bioethics, key ethical concerns from a clinical perspective, those considerations that relate most directly to digital and social media about health factors, and health campaign and social marketing issues as they connect to ethical consequences. All of these factors remind us that we need to take cultural issues into consideration; they are particularly relevant when communicating with individuals from diverse backgrounds and amplified in times of emergency situations or pandemics.
Chapter
This chapter provides an overview of the ethical issues that arise when healthcare practitioners (HCPs) prescribe or recommend digital therapeutics, in particular for treating mental health and addiction issues. We show that the lack of adequate clinical validation and regulatory frameworks for digital therapeutics leaves HCPs with particularly high responsibility. We group ethical issues into those affecting patients directly, those arising for society, and HCPs’ new roles and responsibilities. We identify privacy, transparency, autonomy, lack of clinical validation, fairness, and equality, as well as HCP's changing responsibilities and roles as major ethical issues. We illustrate why these issues matter for patients and society, discuss how and where they occur in practice and provide suggestions on what HCPs can practically do about these issues. We argue that HCPs have high overall responsibility and should pay special attention to ethical issues when recommending digital therapeutics or using them with their patients.
Article
Full-text available
Background: Health apps are a booming, yet under-regulated market, with potential consumer harms in privacy and health safety. Regulation of the health app market tends to be siloed, with no single sector holding comprehensive oversight. We sought to explore this phenomenon by critically analysing how the problem of health app regulation is being presented and addressed in the policy arena. Methods: We conducted a critical, qualitative case study of regulation of the Australian mental health app market. We purposively sampled influential policies from government, industry and non-profit organisations that provided oversight of app development, distribution or selection for use. We used Bacchi’s critical, theoretical approach to policy analysis, analysing policy solutions in relation to the ways the underlying problem was presented and discussed. We analysed the ways that policies characterised key stakeholder groups and the rationale policy authors provided for various mechanisms of health app oversight. Results: We identified and analysed 29 policies from Australia and beyond, spanning 5 sectors: medical device, privacy, advertising, finance, and digital content. Policy authors predominantly framed the problem as potential loss of commercial reputations and profits, rather than consumer protection. Policy solutions assigned main responsibility for app oversight to the public, with a heavy onus on consumers to select safe and high-quality apps. Commercial actors, including powerful app distributors and commercial third parties were rarely subjects of policy initiatives, despite having considerable power to affect app user outcomes. Conclusion: A stronger regulatory focus on app distributors and commercial partners may improve consumer privacy and safety. Policy-makers in different sectors should work together to develop an overarching regulatory framework for health apps, with a focus on consumer protection.
Article
Full-text available
Recent advances in hardware and telecommunications have enabled the development of low cost mobile devices equipped with a variety of sensors. As a result, new functionalities, empowered by emerging mobile platforms, allow millions of applications to take advantage of vast amounts of data. Following this trend, mobile health applications collect users' health-related information to help them better comprehend their health status and to promote their overall wellbeing. Nevertheless, health-related information is by nature and by law deemed sensitive and, therefore, its adequate protection is of substantial importance. In this article we provide an in-depth security and privacy analysis of some of the most popular freeware mobile health applications. We have performed both static and dynamic analysis of selected mobile health applications, along with tailored testing of each application's functionalities. Long term analyses of the life cycle of the reviewed apps and our GDPR compliance auditing procedure are unique features of the present article. Our findings reveal that the majority of the analyzed applications do not follow well-known practices and guidelines, nor even legal restrictions imposed by contemporary data protection regulations, thus jeopardizing the privacy of millions of users.
Article
Full-text available
Background Apps targeted at health and wellbeing sit in a rapidly growing industry associated with widespread optimism about their potential to deliver accessible and cost-effective healthcare. App developers might not be aware of all the regulatory requirements and best practice principles are emergent. Health apps are regulated in order to minimise their potential for harm due to, for example, loss of personal health privacy, financial costs, and health harms from delayed or unnecessary diagnosis, monitoring and treatment. We aimed to produce a comprehensive guide to assist app developers in producing health apps that are legally compliant and in keeping with high professional standards of user protection. Methods We conducted a case study analysis of the Australian and related international policy environment for mental health apps to identify relevant sectors, policy actors, and policy solutions. Results We identified 29 policies produced by governments and non-government organisations that provide oversight of health apps. In consultation with stakeholders, we developed an interactive tool targeted at app developers, summarising key features of the policy environment and highlighting legislative, industry and professional standards around seven relevant domains: privacy, security, content, promotion and advertising, consumer finances, medical device efficacy and safety, and professional ethics. We annotated this developer guidance tool with information about: the relevance of each domain; existing legislative and non-legislative guidance; critiques of existing policy; recommendations for developers; and suggestions for other key stakeholders.
Conclusions We anticipate that mental health apps developed in accordance with this tool will be more likely to conform to regulatory requirements, protect consumer privacy, protect consumer finances, and deliver health benefit; and less likely to attract regulatory penalties, offend consumers and communities, mislead consumers, or deliver health harms. We encourage government, industry and consumer organisations to use and publicise the tool.
Article
Full-text available
Objective: Suicide is a significant public health issue, and is especially concerning in adolescents and young adults, who are over-represented both in attempts and completed suicide. Emerging technologies represent a promising new approach to deliver suicide prevention interventions to these populations. The current systematic review aims to identify online and mobile psychosocial suicide prevention interventions for young people, and evaluate the effectiveness of these interventions. Method: PsycINFO, Medline, Embase and The Cochrane Library were electronically searched for all articles published between January, 2000 and May, 2015. Peer-reviewed journal articles reporting on interventions for young people aged 12-25 years with suicidality as a primary outcome were eligible for inclusion. No exclusions were placed on study design. Results: One study met inclusion criteria, and found significant reductions in the primary outcome of suicidal ideation, as well as depression and hopelessness. Two relevant protocol papers of studies currently underway were also identified. Conclusions: There is a paucity of current evidence for online and mobile interventions for suicide prevention in youth. More high quality empirical evidence is required to determine the effectiveness of these novel approaches to improving suicide outcomes in young people.
Article
Full-text available
Background: Poor information privacy practices have been identified in health apps. Medical app accreditation programs offer a mechanism for assuring the quality of apps; however, little is known about their ability to control information privacy risks. We aimed to assess the extent to which already-certified apps complied with data protection principles mandated by the largest national accreditation program. Methods: Cross-sectional, systematic, 6-month assessment of 79 apps certified as clinically safe and trustworthy by the UK NHS Health Apps Library. Protocol-based testing was used to characterize personal information collection, local-device storage and information transmission. Observed information handling practices were compared against privacy policy commitments. Results: The study revealed that 89 % (n = 70/79) of apps transmitted information to online services. No app encrypted personal information stored locally. Furthermore, 66 % (23/35) of apps sending identifying information over the Internet did not use encryption and 20 % (7/35) did not have a privacy policy. Overall, 67 % (53/79) of apps had some form of privacy policy. No app collected or transmitted information that a policy explicitly stated it would not; however, 78 % (38/49) of information-transmitting apps with a policy did not describe the nature of personal information included in transmissions. Four apps sent both identifying and health information without encryption. Although the study was not designed to examine data handling after transmission to online services, security problems appeared to place users at risk of data theft in two cases. Conclusions: Systematic gaps in compliance with data protection principles in accredited health apps question whether certification programs relying substantially on developer disclosures can provide a trusted resource for patients and clinicians. 
Accreditation programs should, as a minimum, provide consistent and reliable warnings about possible threats and, ideally, require publishers to rectify vulnerabilities before apps are released.
Article
Full-text available
Digital technology has the potential to transform mental healthcare by connecting patients, services and health data in new ways. Digital online and mobile applications can offer patients greater access to information and services and enhance clinical management and early intervention through access to real-time patient data. However, substantial gaps exist in the evidence base underlying these technologies. Greater patient and clinician involvement is needed to evaluate digital technologies and ensure they target unmet needs, maintain public trust and improve clinical outcomes. Royal College of Psychiatrists.
Article
Full-text available
Background: Smartphone applications for mental illnesses offer great potential, although the actual research base is still limited. Major depressive disorder and bipolar disorder are both common psychiatric illnesses for which smartphone application research has greatly expanded in the last two years. We review the literature on smartphone applications for major depressive and bipolar disorders in order to better understand the evidence base for their use, current research opportunities, and future clinical trends. Methods: We conducted an English language review of the literature, on November 1st 2014, for smartphone applications for major depressive and bipolar disorders. Inclusion criteria included studies featuring modern smartphones running native applications with outcome data related to major depressive or bipolar disorders. Studies were organized by use of active or passive data collection and focus on diagnostic or therapeutic interventions. Results: Our search identified 1065 studies. Ten studies on major depressive disorder and 4 on bipolar disorder were included. Nine out of 10 studies on depression related smartphone applications featured active data collection and all 4 studies on bipolar disorder featured passive data collection. Depression studies included both diagnostic and therapeutic smartphone applications, while bipolar disorder studies featured only diagnostics. No studies addressed physiological data. Conclusions: While the research base for smartphone applications is limited, it is still informative. Numerous opportunities for further research exist, especially in the use of passive data for major depressive disorder, validating passive data to detect mania in bipolar disorder, and exploring the use of physiological data. As interest in smartphones for psychiatry and mental health continues to expand, it is important that the research base expands to fill these gaps and provide clinically useful results.
Article
Full-text available
Background: Mobile health (mHealth) apps aim at providing seamless access to tailored health information technology and have the potential to alleviate global health burdens. Yet, they bear risks to information security and privacy because users need to reveal private, sensitive medical information to redeem certain benefits. Due to the plethora and diversity of available mHealth apps, implications for information security and privacy are unclear and complex. Objective: The objective of this study was to establish an overview of mHealth apps offered on iOS and Android with a special focus on potential damage to users through information security and privacy infringements. Methods: We assessed apps available in English and offered in the categories “Medical” and “Health & Fitness” in the iOS and Android App Stores. Based on the information retrievable from the app stores, we established an overview of available mHealth apps, tagged apps to make offered information machine-readable, and clustered the discovered apps to identify and group similar apps. Subsequently, information security and privacy implications were assessed based on health specificity of information available to apps, potential damage through information leaks, potential damage through information manipulation, potential damage through information loss, and potential value of information to third parties. Results: We discovered 24,405 health-related apps (iOS: 21,953; Android: 2452). Absence or scarceness of ratings for 81.36% (17,860/21,953) of iOS and 76.14% (1867/2452) of Android apps indicates that less than a quarter of mHealth apps are in more or less widespread use. Clustering resulted in 245 distinct clusters, which were consolidated into 12 app archetypes grouping clusters with similar assessments of potential damage through information security and privacy infringements. There were 6426 apps that were excluded during clustering.
The majority of apps (95.63%, 17,193/17,979) pose at least some potential damage through information security and privacy infringements. There were 11.67% (2098/17,979) of apps that scored the highest assessments of potential damages. Conclusions: Various kinds of mHealth apps collect and offer critical, sensitive, private medical information, calling for a special focus on information security and privacy of mHealth apps. In order to foster user acceptance and trust, appropriate security measures and processes need to be devised and employed so that users can benefit from seamlessly accessible, tailored mHealth apps without exposing themselves to the serious repercussions of information security and privacy infringements.
Article
Full-text available
In a world where the industry of mobile applications is continuously expanding and new health care apps and devices are created every day, it is important to take special care of the collection and treatment of users' personal health information. However, the appropriate methods to do this are not usually taken into account by app designers, and insecure applications are released. This paper presents a study of security and privacy in mHealth, focusing on three parts: a study of the existing laws regulating these aspects in the European Union and the United States, a review of the academic literature related to this topic, and a proposal of some recommendations for designers in order to create mobile health applications that satisfy the current security and privacy legislation. This paper complements other standards and certifications about security and privacy and serves as a quick guide for app designers, developers and researchers.
Article
Full-text available
Mobile health (mHealth) customers shopping for applications (apps) should be aware of app privacy practices so they can make informed decisions about purchase and use. We sought to assess the availability, scope, and transparency of mHealth app privacy policies on iOS and Android. Over 35 000 mHealth apps are available for iOS and Android. Of the 600 most commonly used apps, only 183 (30.5%) had privacy policies. Average policy length was 1755 (SD 1301) words with a reading grade level of 16 (SD 2.9). Two thirds (66.1%) of privacy policies did not specifically address the app itself. Our findings show that currently mHealth developers often fail to provide app privacy policies. The privacy policies that are available do not make information privacy practices transparent to users, require college-level literacy, and are often not focused on the app itself. Further research is warranted to address why privacy policies are often absent, opaque, or irrelevant, and to find a remedy.
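The "reading grade level of 16" reported above comes from a standard readability metric. As an illustration only (the abstract does not name the exact formula used), the widely used Flesch-Kincaid grade level combines words per sentence and syllables per word:

```python
def fk_grade_level(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid grade level from raw text counts.

    Illustrative sketch: the study's exact readability metric is not
    stated in the abstract above.
    """
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A policy averaging 25 words per sentence and 1.8 syllables per word
# lands in the grade 15-16 range, i.e. college-level reading.
print(round(fk_grade_level(words=2500, sentences=100, syllables=4500), 2))
```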
Article
Full-text available
The rapid growth in the use of mobile phone applications (apps) provides the opportunity to increase access to evidence-based mental health care. Our goal was to systematically review the research evidence supporting the efficacy of mental health apps for mobile devices (such as smartphones and tablets) for all ages. A comprehensive literature search (2008-2013) in MEDLINE, Embase, the Cochrane Central Register of Controlled Trials, PsycINFO, PsycTESTS, Compendex, and Inspec was conducted. We included trials that examined the effects of mental health apps (for depression, anxiety, substance use, sleep disturbances, suicidal behavior, self-harm, psychotic disorders, eating disorders, stress, and gambling) delivered on mobile devices with a pre- to posttest design or compared with a control group. The control group could consist of wait list, treatment-as-usual, or another recognized treatment. In total, 5464 abstracts were identified. Of those, 8 papers describing 5 apps targeting depression, anxiety, and substance abuse met the inclusion criteria. Four apps provided support from a mental health professional. Results showed significant reductions in depression, stress, and substance use. Within-group and between-group intention-to-treat effect sizes ranged from 0.29-2.28 and 0.01-0.48 at posttest and follow-up, respectively. Mental health apps have the potential to be effective and may significantly improve treatment accessibility. However, the majority of apps that are currently available lack scientific evidence about their efficacy. The public needs to be educated on how to identify the few evidence-based mental health apps available in the public domain to date. Further rigorous research is required to develop and test evidence-based programs. Given the small number of studies and participants included in this review, the high risk of bias, and unknown efficacy of long-term follow-up, current findings should be interpreted with caution, pending replication. 
Two of the 5 evidence-based mental health apps are currently commercially available in app stores.
Article
Purpose: Many who seek primary health care advice about mental health may be using mobile applications (apps) claiming to improve well-being or relieve symptoms. We aimed to identify how prominent mental health apps frame mental health, including who has problems and how they should be managed. Methods: We conducted a qualitative content analysis of advertising material for mental health apps found online in the United States, the United Kingdom, Canada, and Australia during late 2016. Apps were included if they explicitly referenced mental health diagnoses or symptoms and offered diagnosis and guidance, or made health claims. Two independent coders analyzed app store descriptions and linked websites using a structured, open-ended instrument. We conducted interpretive analysis to identify key themes and the range of messages. Results: We identified 61 mental health apps: 34 addressed predominantly anxiety, panic, and stress (56%), 16 addressed mood disorders (26%), and 11 addressed well-being or other mental health issues (18%). Apps described mental health problems as being psychological symptoms, a risk state, or lack of life achievements. Mental health problems were framed as present in everyone, but everyone was represented as employed, white, and in a family. Explanations about mental health focused on abnormal responses to mild triggers, with minimal acknowledgment of external stressors. Therapeutic strategies included relaxation, cognitive guidance, and self-monitoring. Apps encouraged frequent use and promoted personal responsibility for improvement. Conclusions: Mental health apps may promote medicalization of normal mental states and imply individual responsibility for mental well-being. Within the health care clinician-patient relationship, such messages should be challenged, where appropriate, to prevent overdiagnosis and ensure supportive health care where needed.
Conference Paper
Most smartphone apps collect and share information with various first and third parties; yet, such data collection practices remain largely unbeknownst to, and outside the control of, end-users. In this paper, we seek to understand the potential for tools to help people refine their exposure to third parties, resulting from their app usage. We designed an interactive, focus-plus-context display called X-Ray Refine (Refine) that uses models of over 1 million Android apps to visualise a person's exposure profile based on their durations of app use. To support exploration of mitigation strategies, Refine can simulate actions such as app usage reduction, removal, and substitution. A lab study of Refine found participants achieved a high-level understanding of their exposure, and identified data collection behaviours that violated both their expectations and privacy preferences. Participants also devised bespoke strategies to achieve privacy goals, identifying the key barriers to achieving them.
Article
“The success of the paradigm... is at the start largely a promise of success ... Normal science consists in the actualization of that promise...”
Article
Objective: To explore the privacy and security of free medication applications (apps) available to Canadian consumers. Methods: The authors searched the Canadian iTunes store for iOS apps and the Canadian Google Play store for Android apps related to medication use and management. Using an Apple iPad Air 2 and a Google Nexus 7 tablet, 2 reviewers generated a list of apps that met the following inclusion criteria: free, available in English, intended for consumer use and related to medication management. Using a standard data collection form, 2 reviewers independently coded each app for the presence/absence of passwords, the storage of personal health information, a privacy statement, encryption, remote wipe and third-party sharing. A Cohen's Kappa statistic was used to measure interrater agreement. Results: Of the 184 apps evaluated, 70.1% had no password protection or sign-in system. Personal information, including name, date of birth and gender, was requested by 41.8% (77/184) of apps. Contact information, such as address, phone number and email, was requested by 25% (46/184) of apps. Finally, personal health information, other than medication name, was requested by 89.1% (164/184) of apps. Only 34.2% (63/184) of apps had a privacy policy in place. Conclusion: Most free medication apps offer very limited authentication and privacy protocols. As a result, the onus currently falls on patients to input information in these apps selectively and to be aware of the potential privacy issues. Until more secure systems are built, health care practitioners cannot fully support patients wanting to use such apps.
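The study above reports interrater agreement between its two reviewers using Cohen's kappa, which adjusts observed agreement for the agreement expected by chance from each rater's marginal frequencies. A minimal sketch of that calculation (the data below are invented for illustration, not the study's actual codings):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    pe = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical example: two reviewers coding presence (1) / absence (0)
# of a privacy policy across ten apps
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # 0.6
```

Here the raters agree on 8 of 10 items (0.8 observed), but with balanced marginals chance agreement is 0.5, yielding kappa of 0.6 (commonly read as "moderate to substantial" agreement).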
Article
This study examines the privacy risks and the relationship between privacy disclosures and practices of health apps. Mobile health apps can help individuals manage chronic health conditions [1]. One-fifth of smartphone owners had health apps in 2012 [2], and 7% of primary care physicians recommended a health app [3]. The US Food and Drug Administration has approved the prescription of some apps [4]. Health apps can transmit sensitive medical data, including disease status and medication compliance. Privacy risks and the relationship between privacy disclosures and practices of health apps are understudied.
Article
In recent years, there has been explosive growth in smartphone sales, which is accompanied with the availability of a huge number of smartphone applications (or simply apps). End users or consumers are attracted by the many interesting features offered by these devices and the associated apps. The developers of these apps benefit financially, either by selling their apps directly or by embedding one of the many ad libraries available on smartphone platforms. In this paper, we focus on potential privacy and security risks posed by these embedded or in-app advertisement libraries (henceforth "ad libraries," for brevity). To this end, we study the popular Android platform and collect 100,000 apps from the official Android Market in March-May, 2011. Among these apps, we identify 100 representative in-app ad libraries (embedded in 52.1% of the apps) and further develop a system called AdRisk to systematically identify potential risks. In particular, we first decouple the embedded ad libraries from their host apps and then apply our system to statically examine the ad libraries for risks, ranging from uploading sensitive information to remote (ad) servers to executing untrusted code from Internet sources. Our results show that most existing ad libraries collect private information: some of this data may be used for legitimate targeting purposes (i.e., the user's location) while other data is harder to justify, such as the user's call logs, phone number, browser bookmarks, or even the list of apps installed on the phone. Moreover, some libraries make use of an unsafe mechanism to directly fetch and run code from the Internet, which immediately leads to serious security risks. Our investigation indicates the symbiotic relationship between embedded ad libraries and host apps is one main reason behind these exposed risks. These results clearly show the need for better regulating the way ad libraries are integrated in Android apps.
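The static analysis described above begins by extracting what each app requests from the platform; on Android, the requested permissions are declared in the app's AndroidManifest.xml. A minimal sketch of that first step, checking declared permissions against a small illustrative subset of Android's "dangerous" group (this is a simplified stand-in, not the AdRisk system's implementation):

```python
import xml.etree.ElementTree as ET

# Illustrative subset of Android's "dangerous" permission group
DANGEROUS = {
    "android.permission.READ_CONTACTS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CALL_LOG",
    "android.permission.RECORD_AUDIO",
}

# The android: attribute prefix resolves to this XML namespace
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def dangerous_permissions(manifest_xml: str) -> set:
    """Return the 'dangerous' permissions declared in a manifest string."""
    root = ET.fromstring(manifest_xml)
    requested = {
        elem.get(ANDROID_NS + "name")
        for elem in root.iter("uses-permission")
    }
    return requested & DANGEROUS

# Hypothetical manifest fragment for demonstration
manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.READ_CONTACTS"/>
</manifest>"""
print(dangerous_permissions(manifest))  # {'android.permission.READ_CONTACTS'}
```

In practice a manifest is recovered from the APK (e.g. via decompilation tools) before a check like this can run; the full analysis in the paper goes much further, decoupling ad libraries from host apps and inspecting their code paths.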
Article
Objective: E-mental health technologies are increasing rapidly, both in number and in utilisation by consumers, health systems and researchers. This review aimed to: (i) examine the features and scientific evidence for e-mental health programs; (ii) describe the growth in these programs in the past decade, and track the extent and quality of scientific research over time; and (iii) examine Australian and international contribution to the field. Method: Two types of e-mental health programs, 'web interventions' and 'mobile applications', targeting depression, bipolar disorder, generalised anxiety disorder, social anxiety, panic disorder and general stress were included. Data were collected from the Beacon website (www.beacon.anu.edu.au; last updated July 2011). Features of each program and their supporting scientific evidence were coded. Results: In total, 62 web interventions and 11 mobile applications were identified. Half of these were developed in Australia. The majority of programs were aimed towards adults and were based upon cognitive behavioural therapy. Approximately equal numbers of programs were developed for all targeted disorders except bipolar disorder, which was underrepresented. Only 35.5% of programs, all of which were web-based, had been evaluated by at least one RCT. The number of publications over the last decade is increasing. The majority were from Australian sources. Non-Australian research was lower in diversity and quantity. Conclusions: E-mental health research is increasing globally. Australia continues to be an international leader in this field. Depression, anxiety and panic disorder remain the disorders most targeted. Whilst the scientific evidence supporting e-mental health programs is growing, a substantial lack of high-quality empirical support was evident across the field, particularly for mobile applications and bipolar and social anxiety.
Article
During more than a decade of direct-to-consumer advertising (DTC) of pharmaceuticals in the United States, several highly controversial and contested disease states have been promoted to affect diagnostic and prescribing outcomes that are favorable to a company's branded drug. Influencing medical diagnosis is essential to the branding of a disease, which helps to protect pharmaceutical intellectual property and assures higher profits for drug companies. Enormous marketing as well as medical resources are deployed to ensure that new diagnoses of disease states are recognized. While much work has been done investigating the marketing processes necessary to shape and define diagnoses for many of these new disease states, such as Premenstrual Dysphoric Disorder (PMDD), the promotion of self-diagnosis within pharmaceutical marketing campaigns garners little sociological attention. This article reviews and analyzes branded disease awareness campaigns sponsored by pharmaceutical companies that employ self-diagnostic "tools". By using the example of one specific disease state, PMDD, I illustrate how the marketing of self-diagnosis transforms the patient into a consumer in order to achieve the aims of a drug company. This example is contextualized within the larger theoretical framework on the sociology of diagnosis. Consideration is given to how the marketing of self-diagnosis goes beyond Jutel's (2009) description of diagnosis as being the "classification tool of medicine" and becomes a marketing tool to construct a well-educated consumer who will demand medical diagnoses in line with a drug company's objectives.
Dangerous permissions
  • Google Play
Medical appointment booking app HealthEngine sharing clients' personal information with lawyers
  • McGrath
Cambridge Analytica used our secrets for profit – The same data could be used for public good
  • Watkin