Therapeutic Advances in Drug Safety
2024, Vol. 15: 1–9
DOI: 10.1177/20420986241293303
© The Author(s), 2024. Article reuse guidelines: sagepub.com/journals-permissions
journals.sagepub.com/home/taw

Creative Commons Non Commercial CC BY-NC: This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the Sage and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).
Governance of artificial intelligence and machine learning in pharmacovigilance: what works today and what more is needed?
Michael Glaser and Rory Littlebury
Keywords: artificial intelligence, governance, machine learning, pharmacovigilance, regulatory authorities
Received: 5 April 2024; revised manuscript accepted: 4 October 2024.
Correspondence to: Michael Glaser, GSK, Development Global Medical, Global Safety and Pharmacovigilance Systems, 1250 South Collegeville Road, Upper Providence, PA 19464, USA. michael.x.glaser@gsk.com

Rory Littlebury, GSK, Development Global Medical, Global Safety and Safety Governance, Stevenage, UK
Editorial
Introduction
The potential uses of artificial intelligence and machine learning (AI/ML) within the healthcare field of pharmacovigilance are significant and possibly limitless.1–5 But while AI/ML has potential, there are also limitations and challenges to successful implementation. Within pharmacovigilance, the uses of technologies such as Robotic Process Automation and AI/ML are not new1 and offer promise to dramatically impact all aspects of pharmacovigilance.4,5 Possible benefits range from reducing the cost of current pharmacovigilance activities and improving the "as-is" to more broad-ranging activities with the potential to revolutionize the pharmacovigilance field.2,3
However, with all the anticipated promise and hype of AI/ML, we must remember that pharmacovigilance remains a highly regulated space; the thalidomide tragedy is one reminder of why there must be controls and regulations regarding the safety of medicines and vaccines so that no patient suffers avoidable harm. Healthcare professionals prescribe medicines and vaccines that patients consume, trusting that their safety has been adequately assessed and well described, that it continues to be monitored, and that safety issues, should they arise, are rapidly and transparently communicated. Behind the information relied on by healthcare professionals and patients is a diverse set of legal regulations that mandate the scientific evaluation and communication of benefits and risks for medicines and vaccines. This framework is complex and is further complicated by the regulatory variations that exist worldwide.6 Aligned to these varied regulations are pharmaceutical company processes, including governance frameworks, that ensure the integrity of the final outputs arising from pharmacovigilance activities; these are essential to ensuring patient safety and maintaining trust in medicines and vaccines.
A systematic analysis of articles from 2000 to 20217 demonstrated that the uptake of AI/ML in pharmacovigilance has been slow; additionally, only 42 of 393 articles discussed adopting solutions reflecting current best practices in this area. One reason may be that regulatory requirements for pharmacovigilance activities that use AI/ML are currently only partially formed,8 and one challenge to the wider adoption of AI/ML in pharmacovigilance is the lack of a harmonized global regulatory environment.1,4,9 In addition, there is very limited thought, opinion, or scientific commentary on how a pharmaceutical company should govern AI/ML within the current highly regulated pharmacovigilance framework, and the existing literature in the public domain suggests regulatory alignment is still some way off.10,11 It is this gap in the scientific commentary that this article aims to fill.
We suggest that existing robust processes that govern and control the implementation of computerized systems within pharmacovigilance are directly applicable and can be leveraged and expanded under a new pharmacovigilance paradigm that utilizes AI/ML.
Governance of AI/ML in pharmacovigilance
Pharmaceutical company responsibility
When AI/ML is utilized to support the responsibilities of a pharmacovigilance department,12–20 it must be used in an ethical,21 risk-based manner, ensuring any change in, or impact to, business processes is fully understood and can be successfully managed by the pharmacovigilance department. Ensuring AI/ML is managed through a risk-based approach with a focus on audit readiness is paramount (Figure 1).
Roles and responsibilities within a pharmaceutical company pharmacovigilance department
Establishing and maintaining roles and responsibilities within a pharmacovigilance department for governing AI/ML can be accomplished by defining a decision-making matrix, such as the proposed RACI (Responsible, Accountable, Consulted, or Informed) matrix (Table 1). Defining the necessary training, education, and work experience parameters for these roles is of critical importance,22 and must be tailored carefully to each pharmacovigilance department. Accountability for AI/ML governance, once the AI/ML is tested, validated, and deployed for use by a pharmacovigilance department, must lie with the pharmacovigilance process owner and not with a technologist (e.g., a data scientist or AI/ML engineer). Accountability is placed with a single decision-maker who can pull together a team of individuals that collectively understand the technology, pharmacovigilance processes, pharmacovigilance regulations, and the benefit/risk perspective of the patient.23
Figure 1. Responsibilities of pharmacovigilance departments using AI/ML.
AI, artificial intelligence; ML, machine learning.
Table 1. RACI (Responsible, Accountable, Consulted, or Informed) structure for a pharmacovigilance department using AI/ML.

Activities are grouped as Technical implementation (algorithm understanding; product architecture and deployment), Management (risk management; human involvement/monitoring ramp-down plan; metrics), and Operations (data integrity/privacy/security; data quality).

Process owner, business: an individual from the PV team responsible for the business process using specific AI/ML software. It is expected that this individual is non-technical and focused on the business process.
RACI: algorithm understanding A; product architecture and deployment A; risk management A; ramp-down plan A; metrics R/A; data integrity/privacy/security A; data quality A.

Data owner, business: an individual from the PV team accountable for the classification, protection, use, and quality of the data being used as inputs to the specific AI/ML software. It is expected that this individual is non-technical and focused on business data.
RACI: algorithm understanding I; product architecture and deployment I; risk management —; ramp-down plan I; metrics —; data integrity/privacy/security I; data quality R.

Product owner, technology: an individual from the PV team who is a technical expert and focuses on closing the gap between the technical and business sides of AI/ML software product development.
RACI: algorithm understanding R; product architecture and deployment R; risk management C; ramp-down plan C; metrics I; data integrity/privacy/security R; data quality I.

Risk management, business: an individual from the PV team responsible for coordinating and managing risks associated with AI/ML software.
RACI: algorithm understanding —; product architecture and deployment —; risk management R; ramp-down plan C; metrics —; data integrity/privacy/security I; data quality —.

Oversight board, business: a group of individuals from the PV team or the extended enterprise, comprising technical, business, and risk skills, that provides oversight and governance for AI/ML software. This group does not need to be standalone, and the function may be incorporated into other business or technical governance groups.
RACI: algorithm understanding I; product architecture and deployment I; risk management I/C; ramp-down plan I/C; metrics I; data integrity/privacy/security I; data quality I.

Head of safety, business: an individual from the PV team who bears the responsibility that the AI/ML software is designed, tested, validated, implemented, managed, and monitored correctly according to internal policies and external regulations.
RACI: algorithm understanding I; product architecture and deployment I; risk management I; ramp-down plan I; metrics I; data integrity/privacy/security I; data quality I.

AI, artificial intelligence; ML, machine learning; PV, pharmacovigilance.
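For departments that want to make the matrix operational, Table 1 can be captured in a simple machine-readable form so that governance tooling can answer questions such as "who is accountable for risk management?". The following is a minimal illustrative sketch in Python; the role and activity identifiers simply mirror Table 1 and are not a prescribed schema.

```python
# Illustrative only: Table 1 encoded as a machine-readable structure.
# Role and activity names mirror Table 1; nothing here is a prescribed API.
RACI = {
    "process_owner":   {"algorithm_understanding": "A", "architecture_deployment": "A",
                        "risk_management": "A", "ramp_down_plan": "A", "metrics": "R/A",
                        "data_integrity": "A", "data_quality": "A"},
    "data_owner":      {"algorithm_understanding": "I", "architecture_deployment": "I",
                        "risk_management": "",  "ramp_down_plan": "I", "metrics": "",
                        "data_integrity": "I", "data_quality": "R"},
    "product_owner":   {"algorithm_understanding": "R", "architecture_deployment": "R",
                        "risk_management": "C", "ramp_down_plan": "C", "metrics": "I",
                        "data_integrity": "R", "data_quality": "I"},
    "risk_management": {"algorithm_understanding": "", "architecture_deployment": "",
                        "risk_management": "R", "ramp_down_plan": "C", "metrics": "",
                        "data_integrity": "I", "data_quality": ""},
    "oversight_board": {"algorithm_understanding": "I", "architecture_deployment": "I",
                        "risk_management": "I/C", "ramp_down_plan": "I/C", "metrics": "I",
                        "data_integrity": "I", "data_quality": "I"},
    "head_of_safety":  {"algorithm_understanding": "I", "architecture_deployment": "I",
                        "risk_management": "I", "ramp_down_plan": "I", "metrics": "I",
                        "data_integrity": "I", "data_quality": "I"},
}

def roles_with(letter: str, activity: str) -> list[str]:
    """All roles holding a given RACI letter for an activity (handles 'R/A' etc.)."""
    return [role for role, duties in RACI.items()
            if letter in duties.get(activity, "").split("/")]

print(roles_with("A", "risk_management"))  # -> ['process_owner']
print(roles_with("R", "data_quality"))     # -> ['data_owner']
```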
Technology understanding and implementation
Master list. It is imperative that the safety department keeps a central listing of all AI/ML in use within the department for audit purposes. One potential location is the Pharmacovigilance System Master File (PSMF), for pharmaceutical companies operating in the European Union or other regions where a PSMF is required, or a similar managed document.
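The PSMF prescribes no particular schema for such a listing. As a purely illustrative sketch, assuming hypothetical field names, a master list entry might capture the AI/ML's identity, owner, validation evidence, and lifecycle status:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIMLRegistryEntry:
    """One line of the department's central AI/ML master list.
    Field names are illustrative assumptions, not a standard."""
    name: str                   # e.g., a duplicate-detection classifier
    version: str
    pv_process: str             # the business process the AI/ML supports
    process_owner: str
    validation_evidence: str    # pointer to the validation documentation
    go_live: date
    status: str = "production"  # production / pilot / retired

master_list: list[AIMLRegistryEntry] = []
master_list.append(AIMLRegistryEntry(
    name="ICSR duplicate-detection classifier", version="2.1.0",
    pv_process="Case intake", process_owner="J. Doe",
    validation_evidence="VAL-2024-017", go_live=date(2024, 3, 1)))
```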
AI/ML understanding and transparency. Similar to existing pharmacovigilance information technology systems, it is imperative that the pharmacovigilance process owner possesses a comprehensive understanding of the AI/ML at a pharmacovigilance process level, can effectively communicate its operation as it relates to patient safety and risks, and is partnered with other individuals who can bridge knowledge gaps between the technical understanding of AI/ML and its business process implications.23 The pharmacovigilance process owner must also have a clear understanding of both training and production datasets, bias testing, and relevant performance metrics, as these are paramount in understanding the production performance of the AI/ML implementation. These understandings must be appropriately documented and open to audit.24
AI/ML algorithm details may be examined by an inspector, and pharmaceutical pharmacovigilance departments should be prepared to explain what the AI/ML does; they should also consider how they would explain the AI/ML to non-experts to give assurance to regulators. While there is limited value in reviewing algorithms for assurance purposes,25 the pharmacovigilance process owner must consider having an agreement in place with the AI/ML provider (whether the provider is internal to the pharmacovigilance department, internal to the wider pharmaceutical company organization, or an external supplier) to provide support in the case of an inspection request. Even though regulatory inspections are confidential in nature, such an agreement is likely to remain restrictive, particularly when dealing with an external supplier, to protect potential patents or proprietary trade secrets from entering the public domain.
AI/ML characteristics considered for documentation and audit readiness should follow Good Machine Learning Practice26 and good practice (GxP) regulations, and should align with a pharmaceutical company's Certified Software Quality Analyst certification processes.
AI/ML implementation management
Establishing a framework for trustworthy AI/ML is important when implementing and leveraging the power of AI/ML within any system or process.27 This can be realized through existing pharmacovigilance system principles, including validation, production monitoring, and risk planning.4 Overlapping traditional pharmacovigilance system management principles with guidance from the US National Institute of Standards and Technology (NIST)27 results in validation, accountability/transparency, and reliability emerging as the major themes for managing AI/ML within pharmacovigilance.
AI/ML validation. All computerized systems within the pharmacovigilance department that support processes bound by GxP regulations are validated in proportion to the potential risk to patient safety. AI/ML is a computer system component and must also be validated. Validation, following procedures approved by a supporting quality/compliance department, involves demonstrating through documented evidence that an AI/ML implementation is reliable, fit for its specific purpose, and compliant with regulatory and corporate requirements.28,29 AI/ML must be assessed to identify potential risks, which are documented, monitored, and included in quality management documents, inspection readiness documents, and a control plan. AI/ML provided by third-party providers must also be evaluated, and audits of the third party conducted, in alignment with current pharmacovigilance regulations. Pharmacovigilance process owners must prepare for inspections by regulatory agencies and must maintain system registers, overviews, and procedures that document the use of the GxP system. The compliance status must be reviewed and periodically updated to include the cumulative effects of changes or revisions to the deployed AI/ML.
AI/ML monitoring. While validation documentation requirements will already exist in a pharmacovigilance department, necessitating that training datasets, validated AI/ML code, and test results be retained and managed, we suggest additional documentation is required when implementing AI/ML systems for accountability and transparency purposes. A control plan is one mechanism for achieving these purposes: it provides accountability and transparency, documents the AI/ML risk plan, and defines the performance parameters for both the AI/ML and the operating infrastructure, enabling decisions on whether the AI/ML is operating as defined and on when the AI/ML or the operating infrastructure should be modified or updated.
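To make this concrete, a control plan's performance parameters can be thought of as (parameter, acceptable range, pre-agreed action) triples. The sketch below is illustrative only; the field names, metrics, and thresholds are assumptions, not standards.

```python
from dataclasses import dataclass

@dataclass
class ControlledParameter:
    """One performance parameter from the control plan: its acceptable
    range and the pre-agreed action taken when the range is breached."""
    name: str
    lower: float
    upper: float
    on_breach: str  # pre-agreed action from the control plan

CONTROL_PLAN = [
    ControlledParameter("recall_serious_cases", 0.98, 1.00,
                        "revert to 100% human review; notify process owner"),
    ControlledParameter("mean_inference_latency_s", 0.0, 2.0,
                        "raise infrastructure incident"),
]

def evaluate(observed: dict) -> list[str]:
    """Return the pre-agreed actions for every breached parameter."""
    return [p.on_breach for p in CONTROL_PLAN
            if not (p.lower <= observed.get(p.name, p.lower) <= p.upper)]

print(evaluate({"recall_serious_cases": 0.95, "mean_inference_latency_s": 1.2}))
# -> ['revert to 100% human review; notify process owner']
```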
Detecting deviations caused by varying input data, such as outliers and data drift, is critical.30 Monitoring an AI/ML's input and output data, with care given to data volumes and AI/ML-to-AI/ML interactions, is analogous to the quality check procedures in place to verify that human workers are performing tasks within defined performance parameters. A robust incident and event management process for time-critical notifications needing human involvement is important to notify the necessary individuals of sensitive production issues.
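As one concrete illustration of such input monitoring, a population stability index (PSI) comparison between validation-time and production inputs can flag distribution shift. This is a minimal sketch under assumed data; the 0.2 alert threshold is a common rule of thumb rather than a regulatory figure, and a real implementation would monitor many features and route alerts through the incident process described above.

```python
import numpy as np

def population_stability_index(expected, actual, bins: int = 10) -> float:
    """PSI between a reference sample (e.g., validation-time inputs)
    and a production sample of the same feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) on empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)   # stand-in for validation-time inputs
production = rng.normal(0.4, 1.2, 5000)  # stand-in for drifted production inputs

psi = population_stability_index(reference, production)
if psi > 0.2:  # commonly cited "significant shift" rule of thumb
    print(f"ALERT: input drift detected (PSI={psi:.3f}); trigger control-plan review")
```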
A pharmacovigilance department may find it beneficial to maintain a closed platform (a so-called "walled garden") for each AI/ML implementation, where access is restricted and regulated under a data use agreement.31 The walled garden, containing training data, AI/ML code, and test data, is used both for information sharing with regulators and for continued AI/ML refinement. The walled garden must mirror the applicable production environment such that results from the walled garden can be generalized to production. In the current regulatory environment, incremental AI/ML updates and training and test datasets must be versioned and retained.
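One lightweight way to satisfy this versioning expectation is to fingerprint each artifact and record model and dataset versions together, so that any walled-garden state can be identified unambiguously. A minimal sketch, assuming local file paths and illustrative record fields:

```python
import datetime
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 of a file, used as an immutable version identifier."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def register_version(registry: list, model_path: str,
                     train_path: str, test_path: str) -> dict:
    """Append an immutable record tying a model version to the exact
    training and test datasets it was built and evaluated with."""
    record = {
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_sha256": fingerprint(model_path),
        "training_data_sha256": fingerprint(train_path),
        "test_data_sha256": fingerprint(test_path),
    }
    registry.append(record)
    return record

# usage (assuming these files exist in the walled garden):
# registry: list = []
# register_version(registry, "model_v2.pkl", "train_v2.csv", "test_v2.csv")
```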
AI/ML reliability. A reliable AI/ML implementation must offer benefits that outweigh its negative effects and ensure that unacceptable effects can be monitored for resolution.27,32 When the reliability of the AI/ML is reduced, for example as production input data diverges from the test data, the AI/ML control plan must capture a clear understanding of the AI/ML's reliability, the monitoring conditions, and the necessary actions.
Risk management
The documentation of risk management strategies describes the risks and mitigations associated with the AI/ML involved in a pharmacovigilance workflow. The existing experience with risk management frameworks33 in pharmacovigilance must be incorporated into the approach for AI/ML, whereby risks are identified, assessed, and prioritized in terms of their importance.
All risks must have mitigation plans developed, and a quality management approach should be taken that includes actions, timeframes, allocated responsible persons, and effectiveness checks. Risk mitigations are managed within defined timeframes and reviewed routinely. When risks have been suitably mitigated, or potential risks have not been observed upon implementation, these risks are removed from the plan to ensure focus, attention, and effort remain on the mitigation of identified risks. We suggest that the removal of any risk from the control plan must be agreed by a quorum, led by the pharmacovigilance process owner.
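A simple way to enforce the quorum rule is to make risk removal a state transition that requires recorded approvals. The sketch below is illustrative; the quorum membership and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in the AI/ML risk register (illustrative fields)."""
    risk_id: str
    description: str
    mitigation: str
    owner: str
    status: str = "open"  # open -> mitigated -> removed
    approvals: set = field(default_factory=set)

# Assumed quorum membership, led by the PV process owner.
QUORUM = {"process_owner", "risk_manager", "oversight_board"}

def approve_removal(risk: Risk, approver: str) -> None:
    """Record an approval; the risk leaves the plan only once the
    full quorum has agreed and the risk is already mitigated."""
    risk.approvals.add(approver)
    if risk.status == "mitigated" and QUORUM <= risk.approvals:
        risk.status = "removed"

r = Risk("R-001", "Input drift degrades recall",
         "PSI monitoring with alerting", "process_owner", status="mitigated")
for member in ["process_owner", "risk_manager", "oversight_board"]:
    approve_removal(r, member)
print(r.status)  # -> removed
```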
Risk management strategies are structured to last the lifecycle of the AI/ML implementation and are reviewed routinely as identified risks change. We suggest that risk management strategies are documented within the control plan.
AI/ML risks. Risks may be specific to an individual AI/ML implementation or relate to the more general use of AI/ML. NIST has developed a framework highlighting the risks surrounding the use of these systems generally.27 General risks must always consider the impact on the wider pharmacovigilance system and should balance the level of transparency available against AI/ML performance.34
Specific risks associated with the system in question must be developed and may be linked to technical details, to the implementation of the system within an already established process, or to a human component, such as training. Where a pharmacovigilance system has multiple AI/ML implementations utilized within it, the potential for cross-interference at different process points must be considered. Specific risks must also be considered within the wider goals of pharmacovigilance and the processes these tools are intended to perform; for example, the detection of black swan events in signal detection remains a relevant risk whether a human or an AI/ML tool performs the task.35
As trust in an AI/ML implementation grows, a pharmacovigilance department may desire to reduce human monitoring to gain additional scale and efficiency. It is important to keep in mind that the AI/ML is not required to perform a defined task "better than or equal to" a human; rather, the AI/ML must be monitored against the defined performance parameters outlined in the control plan. In alignment with the identified risk tolerance, human monitoring may be reduced stepwise, and the approach taken for reducing human monitoring should be documented in the control plan.

The plan for the reduction of human monitoring must be reviewed against the transparency, accountability, and risk sections of the control plan and must use the defined performance parameters documented in the control plan to measure acceptable performance.
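As an illustration of a documented, stepwise ramp-down rule, the sketch below reduces the human review sampling rate only while a control-plan metric holds and restores full review on any breach; the metric (F1) and all thresholds are assumptions for the example, not prescribed values.

```python
def next_human_review_rate(current_rate: float, recent_f1: float,
                           f1_threshold: float = 0.95,
                           floor: float = 0.05, step: float = 0.5) -> float:
    """Step human review down only while the control-plan metric holds;
    any breach restores full (100%) human review."""
    if recent_f1 < f1_threshold:
        return 1.0  # breach: revert to full human review
    return max(floor, current_rate * step)

rate = 1.0
for f1 in [0.97, 0.96, 0.98, 0.93]:  # illustrative periodic review points
    rate = next_human_review_rate(rate, f1)
    print(f"F1={f1:.2f} -> human review rate {rate:.2f}")
# -> 0.50, 0.25, 0.12, then 1.00 after the breach
```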
Quality management
Quality management is mandated in pharmacovigilance through government and regulatory legislation and guidance; the framework of a quality management system is outlined in global guidance and is made pharmacovigilance-specific through European regulations.22,36–38 There is extensive experience of quality management in pharmacovigilance, and pharmaceutical pharmacovigilance departments are well placed to ensure a quality approach is adopted; they should access and draw on existing experience when setting up these systems. Activities incorporated into quality management include process and technical documentation, vendor contracts, issue management, training, record management and archiving, and oversight and assurance activities. Additional considerations not already discussed include vendor management and oversight and assurance activities.
Vendor management. The setup of the relationship with a vendor must consider the increased scrutiny on pharmacovigilance systems utilizing AI/ML. Additional clauses are needed in the contract for the vendor to support the pharmacovigilance department when there is scrutiny of the system via internal (audit, oversight) or external (inspection) mechanisms. There must be consideration of the interactivity between the vendor and the pharmacovigilance department for all elements outlined in the control plan. While the third party may have developed the AI/ML solution in use, it is the pharmacovigilance department that bears the legal responsibility for its implementation in the pharmacovigilance system. The contract must support the pharmaceutical company's procedures governing any AI/ML that is adopted.

Consideration should be given to allowing regulators visibility of, or access to, data or information that would not routinely be available for review by auditors or pharmacovigilance departments through routine business activity, including AI/ML algorithms and test datasets.
Oversight and assurance. Pharmacovigilance departments must have oversight mechanisms in place prior to AI/ML going live in production. In addition, these pharmacovigilance systems must be included in audit programs. An audit is recommended prior to go-live to ensure that the validation documentation, control plan, and risk management activities are appropriate and aligned with the framework set by the pharmacovigilance department; this also allows preventative actions to be implemented prior to system go-live.
A new assurance paradigm is required
The implementation of AI/ML in pharmacovigilance represents a challenge to conventional audit and inspection methodology. Current assurance processes rely on "snapshots" and documentation that are used to reconstruct a true representation of a point in history, either to determine the level of compliance or performance or to scrutinize decision-making processes.39 The pharmacovigilance framework requires exhaustive record and archiving procedures covering all pharmacovigilance data and documentation for the full life cycle of all medicinal products,36,38,40 including the systems being utilized for pharmacovigilance activities. Currently, these processes and systems are static and can be faithfully restored using archives and audit trails. Some regulators may assume these practices will still be a valid way of obtaining assurance for AI/ML; a recent article from the European Medicines Agency stated that there is an expectation that, when AI/ML is used to support benefit-risk assessments, algorithms and datasets should be made available for review.10
This way of thinking must be challenged: for AI/ML, a new paradigm of assurance is required, as the current assurance methodology is impractical if not impossible. The current expectation to keep an audit history and a detailed record of every change, for example, a detailed copy of a safety database or safety data test set each time either is up-versioned, does nothing to support the implementation of AI/ML in pharmacovigilance but rather creates a data storage problem. The challenge, then, is for pharmaceutical companies to demonstrate that, without a typical audit trail, other controls are in place that give assurance that the AI/ML is working as intended; indeed, alternative methodologies can be proposed.41 Additional complications exist where AI/ML benefitting pharmacovigilance activities is utilized and data privacy, ethical, and consent considerations may apply.
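As one purely illustrative example of such an alternative control (not a method proposed in the cited literature), a department could retain tamper-evident fingerprints of each up-versioned artifact rather than full copies: a hash chain proves what existed and when, at negligible storage cost, while the walled garden retains the current artifacts themselves.

```python
import hashlib
import json
import time

def _digest(payload: dict) -> str:
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

class AttestationChain:
    """Tamper-evident log: each entry's hash covers the previous entry,
    so any retroactive edit breaks every later link."""
    def __init__(self):
        self.entries = []

    def attest(self, artifact_name: str, artifact_sha256: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        entry = {"artifact": artifact_name, "sha256": artifact_sha256,
                 "timestamp": time.time(), "prev": prev}
        entry["entry_hash"] = _digest(entry)
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            if e["prev"] != prev or _digest(body) != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

chain = AttestationChain()
# the artifact hash would come from fingerprinting the up-versioned file
chain.attest("safety_test_set_v7.csv",
             "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
print(chain.verify())  # -> True
```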
In addition, the quality assurance departments within pharmaceutical companies and regulatory authorities must adopt different approaches to the review of AI/ML in either audit or inspection scenarios. It is imperative that industry and regulators work together to ensure that assurance activities are robust and that expectations are aligned, so that the benefits that AI/ML can offer to patient safety can be realized.
Conclusion
AI/ML offers great promise within pharmacovigilance for improving how the benefit–risk of medicines and vaccines is monitored; however, increased scrutiny of pharmacovigilance systems incorporating AI/ML can be expected and is welcomed. This presents an opportunity for pharmacovigilance departments to leverage their extensive experience in the governance of computerized systems to form the basis of AI/ML governance. Organizing around a RACI matrix, appropriately governing the implemented AI/ML, developing and utilizing both a control plan and a plan for risk management, and being transparent with internal audits and external regulators all leverage existing experience and help to build a high level of confidence that the pharmacovigilance department is performing appropriate risk-based management of AI/ML implementations. None of these activities is novel. All reflect existing processes within well-functioning pharmacovigilance departments that can be tailored and expanded to address the requirements associated with AI/ML. As AI/ML expands into pharmacovigilance to ensure patient safety worldwide, it is important that regulators and the pharmaceutical industry have an open dialogue and agree on internationally aligned performance indicators and verification processes, to prevent unnecessary added complexity and to continue to ensure data integrity and patient safety.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Author contributions
Michael Glaser: Conceptualization; Writing –
original draft; Writing – review & editing.
Rory Littlebury: Conceptualization; Writing –
original draft; Writing – review & editing.
Acknowledgements
The authors thank the Akkodis Belgium platform for editorial assistance and manuscript coordination, on behalf of GSK, and Dr Joanne Wolter (independent, on behalf of GSK) for providing writing support.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: GlaxoSmithKline Biologicals SA took responsibility for all costs associated with the development and publishing of the present manuscript.
Competing interests
All authors are employees of GSK and hold financial equities in GSK.
Availability of data and materials
Not applicable.
ORCID iD
Michael Glaser https://orcid.org/0000-0001-8843-1662
References
1. Painter JL, Kassekert R and Bate A. An industry perspective on the use of machine learning in drug and vaccine safety. Front Drug Saf Regul 2023; 3: 1110498.
2. Bate A and Hobbiger SF. Artificial intelligence, real-world automation and the safety of medicines. Drug Saf 2021; 44: 125–132.
3. Bate A and Stegmann J. Artificial intelligence and pharmacovigilance: what is happening, what could happen and what should happen? Health Policy Technol 2023; 12: 100743.
4. Kassekert R, Grabowski N, Lorenz D, et al. Industry perspective on artificial intelligence/machine learning in pharmacovigilance. Drug Saf 2022; 45: 439–448.
5. Lewis DJ and McCallum JF. Utilizing advanced technologies to augment pharmacovigilance systems: challenges and opportunities. Ther Innov Regul Sci 2020; 54: 888–899.
6. Zatovkaňuková P and Slíva J. Diverse pharmacovigilance jurisdiction—the right way for global drug safety? Eur J Clin Pharmacol 2024; 80: 305–315.
7. Kompa B, Hakim JB, Palepu A, et al. Artificial intelligence based on machine learning in pharmacovigilance: a scoping review. Drug Saf 2022; 45: 477–491.
8. Ball R and Dal Pan G. "Artificial intelligence" for pharmacovigilance: ready for prime time? Drug Saf 2022; 45: 429–438.
9. [No authors listed]. Patient safety needs innovation. Nat Med 2022; 28: 1725.
10. Hines PA, Herold R, Pinheiro L, et al. Artificial intelligence in European medicines regulation. Nat Rev Drug Discov 2023; 22: 81–82.
11. Nong P, Hamasha R, Singh K, et al. How academic medical centers govern AI prediction tools in the context of uncertainty and evolving regulation. NEJM AI 2024; 1: 20240131.
12. World Health Organization. The importance of pharmacovigilance. Safety monitoring of medicinal products, https://apps.who.int/iris/bitstream/handle/10665/42493/a75646.pdf (2002, accessed 25 May 2023).
13. Beninger P. Pharmacovigilance: an overview. Clin Ther 2018; 40: 1991–2004.
14. U.S. Department of Health and Human Services, Food and Drug Administration. Code of Federal Regulations: Title 21—Food and drugs, Chapter I—Food and Drug Administration, Department of Health and Human Services, Subchapter D—Drugs for human use, https://www.ecfr.gov/current/title-21/chapter-I/subchapter-D (2023, accessed 25 May 2023).
15. European Medicines Agency. Pharmacovigilance: overview, https://www.ema.europa.eu/en/human-regulatory/overview/pharmacovigilance-overview (2022, accessed 25 May 2023).
16. U.S. Food and Drug Administration. Data mining, https://www.fda.gov/science-research/data-mining (2019, accessed 25 May 2023).
17. U.S. Food and Drug Administration. FDA's role in managing medication risks, https://www.fda.gov/drugs/risk-evaluation-and-mitigation-strategies-rems/fdas-role-managing-medication-risks (2018, accessed 25 May 2023).
18. European Medicines Agency. Human regulatory, Risk management plans, https://www.ema.europa.eu/en/human-regulatory/marketing-authorisation/pharmacovigilance/risk-management/risk-management-plans (2022, accessed 25 May 2023).
19. European Medicines Agency, EudraVigilance Expert Working Group (EV-EWG). Guideline on the use of statistical signal detection methods in the EudraVigilance data analysis system. EMEA/106464/2006, https://www.ema.europa.eu/en/documents/regulatory-procedural-guideline/draft-guideline-use-statistical-signal-detection-methods-eudravigilance-data-analysis-system_en.pdf (2006, accessed 25 May 2023).
20. CIOMS. Practical aspects of signal detection in pharmacovigilance. Report of CIOMS Working Group VIII, https://cioms.ch/working_groups/working-group-viii/ (2010, accessed 25 May 2023).
21. Greene D, Hoffmann AL and Stark L. Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning. In: Proceedings of the 52nd Hawaii International Conference on System Sciences, Hawaii, 8–11 January 2019.
22. European Medicines Agency. Guideline on good pharmacovigilance practices (GVP). Module VIII—Post-authorisation safety studies (Rev. 3). EMA/813938/2011, https://www.ema.europa.eu/en/human-regulatory-overview/post-authorisation/pharmacovigilance-post-authorisation/good-pharmacovigilance-practices (2017, accessed 25 May 2023).
23. Upadhyay U, Gradisek A, Iqbal U, et al. Call for the responsible artificial intelligence in the healthcare. BMJ Health Care Inform 2023; 30: e100920.
24. Felzmann H, Fosch-Villaronga E, Lutz C, et al. Towards transparency by design for artificial intelligence. Sci Eng Ethics 2020; 26: 3333–3361.
25. Busuioc M. Accountable artificial intelligence: holding algorithms to account. Public Adm Rev 2021; 81: 825–836.
26. U.S. Food and Drug Administration. Good machine learning practice for medical device development: guiding principles, https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles (2021, accessed 25 May 2023).
27. U.S. Department of Commerce, National Institute of Standards and Technology. AI risk management framework (2nd draft), https://www.nist.gov/system/files/documents/2022/08/18/AI_RMF_2nd_draft.pdf (2022, accessed 25 May 2023).
28. U.S. Food and Drug Administration. Guidance for industry—Part 11, Electronic records; electronic signatures—scope and application, https://www.fda.gov/regulatory-information/search-fda-guidance-documents/part-11-electronic-records-electronic-signatures-scope-and-application (2018, accessed 25 May 2023).
29. Huysentruyt K, Kjoersvik O, Dobracki P, et al. Validating intelligent automation systems in pharmacovigilance: insights from good manufacturing practices. Drug Saf 2021; 44: 261–272.
30. Klaise J, Van Looveren A, Cox C, et al. Monitoring and explainability of models in production. arXiv 2007.06299 [stat.ML].
31. Beam AL, Manrai AK and Ghassemi M. Challenges to the reproducibility of machine learning models in health care. JAMA 2020; 323: 305–306.
32. Balagurunathan Y, Mitchell R and El Naqa I. Requirements and reliability of AI in the medical context. Phys Med 2021; 83: 72–78.
33. Vermeer NS, Duijnhoven RG, Straus SMJM, et al. Risk management plans as a tool for proactive pharmacovigilance: a cohort study of newly approved drugs in Europe. Clin Pharmacol Ther 2014; 96: 723–731.
34. Bate A and Luo Y. Artificial intelligence and machine learning for safe medicines. Drug Saf 2022; 45: 403–405.
35. Kjoersvik O and Bate A. Black swan events and intelligent automation for routine safety surveillance. Drug Saf 2022; 45: 419–427.
36. European Parliament and Council of the European Union. Directive 2001/83/EC of the European Parliament and of the Council of 6 November 2001 on the Community code relating to medicinal products for human use, https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CONSLEG:2001L0083:20121116:EN:PDF (2001, accessed 25 May 2023).
37. European Medicines Agency. ICH Q10 Pharmaceutical quality system—scientific guideline, https://www.ema.europa.eu/en/ich-q10-pharmaceutical-quality-system-scientific-guideline (2014, accessed 6 May 2023).
38. European Commission. Commission implementing regulation (EU) No. 520/2012, https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2012:159:0005:0025:EN:PDF (2012, accessed 25 May 2023).
39. European Medicines Agency. Union procedure on the preparation, conduct and reporting of EU pharmacovigilance inspections. EMA/INS/PhV/192230/2014, https://www.ema.europa.eu/en/documents/regulatory-procedural-guideline/union-procedure-preparation-conduct-reporting-eu-pharmacovigilance-inspections_en.pdf (2014, accessed 13 July 2023).
40. European Medicines Agency. Guideline on good pharmacovigilance practices (GVP). Module I—Pharmacovigilance systems and their quality systems. EMA/541760/2011, https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-good-pharmacovigilance-practices-module-i-pharmacovigilance-systems-their-quality-systems_en.pdf (2012, accessed 13 July 2023).
41. Stegmann JU, Littlebury R, Trengove M, et al. Trustworthy AI for safe medicines. Nat Rev Drug Discov 2023; 22: 855–856.