Harshvardhan J. Pandit’s research while affiliated with Dublin City University and other places


Publications (64)


Mapping the Regulatory Learning Space for the EU AI Act
  • Preprint
  • File available

February 2025 · 69 Reads · Delaram Golpayegani · Harshvardhan J. Pandit

The EU's AI Act represents the world's first transnational AI regulation with concrete enforcement measures. It builds upon existing EU mechanisms for product health and safety regulation, but extends them to protect fundamental rights and to address AI as a horizontal technology that is regulated across multiple vertical application sectors. These extensions introduce uncertainties in terms of how the technical state of the art will be applied to AI system certification and enforcement actions, how horizontal technical measures will map onto vertical enforcement responsibilities, and the degree to which different fundamental rights can be protected across EU Member States. We argue that these uncertainties, coupled with the fast-changing nature of AI and the relative immaturity of the state of the art in fundamental rights risk management, require the implementation of the AI Act to place a strong emphasis on comprehensive and rapid regulatory learning. We define parameterised axes for the regulatory learning space set out in the Act and describe a layered system of different learning arenas where the population of oversight authorities, value chain participants and affected stakeholders may interact to apply and learn from technical, organisational and legal implementation measures. We conclude by exploring how existing open data policies and practices in the EU can be adapted to support regulatory learning in a transparent manner that supports the development of trust in, and predictability of, regulated AI. We discuss how the Act may result in a regulatory turn in research on AI fairness, accountability and transparency, towards investigations into implementations of, and interactions between, different fundamental rights protections, and towards reproducible and accountable models of metrology for AI risk assessment and treatment.


ADAPT Centre Contribution on Implementation of the EU AI Act and Fundamental Right Protection

February 2025 · 11 Reads · Harshvardhan J. Pandit · [...] · Arthit Suriyawongkul

This document represents the ADAPT Centre's submission to the Irish Department of Enterprise, Trade and Employment (DETE) regarding the public consultation on the implementation of the EU AI Act.


Developing an Ontology for AI Act Fundamental Rights Impact Assessments

December 2024 · 22 Reads

The recently published EU Artificial Intelligence Act (AI Act) is a landmark regulation governing the use of AI technologies. One of its novel requirements is the obligation to conduct a Fundamental Rights Impact Assessment (FRIA), where organisations in the role of deployers must assess the risks of their AI system regarding health, safety, and fundamental rights. Another novelty in the AI Act is the requirement to create a questionnaire and an automated tool to support organisations in their FRIA obligations. Such automated tools will require a machine-readable form of the information involved in the FRIA process, and will additionally require machine-readable documentation to enable further compliance tools to be created. In this article, we present our novel representation of the FRIA as an ontology based on semantic web standards. Our work builds upon the existing state of the art, notably the Data Privacy Vocabulary (DPV), where similar works have been established to create tools for the GDPR's Data Protection Impact Assessments (DPIA) and other obligations. Through our ontology, we enable the creation and management of FRIAs and the use of automated tools in their various steps.
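As a rough illustration of the idea (not the paper's actual ontology), the sketch below records a FRIA as RDF with rdflib: the dpv: namespace is the real Data Privacy Vocabulary, while the fria: namespace and every fria: term are hypothetical placeholders.

```python
# Minimal sketch of a machine-readable FRIA record using rdflib.
# The `fria:` namespace and its terms are hypothetical placeholders;
# the paper's ontology builds on the Data Privacy Vocabulary (DPV).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, DCTERMS, XSD

DPV = Namespace("https://w3id.org/dpv#")        # real DPV namespace
FRIA = Namespace("https://example.org/fria#")   # hypothetical terms

g = Graph()
g.bind("dpv", DPV)
g.bind("fria", FRIA)

fria = URIRef("https://example.org/assessments/fria-001")
g.add((fria, RDF.type, FRIA.FundamentalRightsImpactAssessment))
g.add((fria, DCTERMS.created, Literal("2024-12-01", datatype=XSD.date)))
# The deployer and the assessed AI system (illustrative identifiers)
g.add((fria, FRIA.conductedBy, URIRef("https://example.org/org/deployer-a")))
g.add((fria, FRIA.assessesSystem, URIRef("https://example.org/systems/triage-ai")))
# An identified risk, pointing at DPV's generic Risk concept
g.add((fria, FRIA.identifiesRisk, DPV.Risk))

print(g.serialize(format="turtle"))
```

Expressing the record as a graph like this is what lets downstream compliance tools query it with standard semantic web machinery rather than parsing bespoke documents.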


Towards An Automated AI Act FRIA Tool That Can Reuse GDPR's DPIA

December 2024 · 44 Reads

The AI Act introduces the obligation to conduct a Fundamental Rights Impact Assessment (FRIA), with the possibility to reuse a Data Protection Impact Assessment (DPIA), and requires the EU Commission to create an automated tool to support the FRIA process. In this article, we provide our novel exploration of the DPIA and FRIA as information processes to enable the creation of automated tools. We first investigate the information involved in the DPIA and the FRIA, and then use this to align the two and state where a DPIA can be reused in a FRIA. We then present the FRIA as a 5-step process and discuss the role of an automated tool in each step. Our work provides the necessary foundation for creating and managing information for the FRIA and supporting it through an automated tool as required by the AI Act.
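A toy sketch of the alignment idea, under invented field names: it compares an assumed set of DPIA information items against an assumed set of FRIA items to see which can be carried over. The paper derives the actual alignment from its analysis of the two processes.

```python
# Hypothetical field sets for illustration only; the real alignment
# comes from the paper's analysis of DPIA and FRIA as processes.
DPIA_FIELDS = {"processing_purpose", "data_categories", "risk_to_rights",
               "mitigation_measures", "necessity_assessment"}
FRIA_FIELDS = {"processing_purpose", "affected_persons", "risk_to_rights",
               "mitigation_measures", "oversight_measures"}

reusable = DPIA_FIELDS & FRIA_FIELDS   # items a DPIA can contribute
fria_only = FRIA_FIELDS - DPIA_FIELDS  # items that must be newly produced

print("Reusable from DPIA:", sorted(reusable))
print("New for FRIA:", sorted(fria_only))
```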




Datasheets for Healthcare AI: A Framework for Transparency and Bias Mitigation

December 2024 · 7 Reads

The use of AI in healthcare has the potential to improve patient care, optimise clinical workflows, and enhance decision-making. However, bias, data incompleteness, and inaccuracies in training datasets can lead to unfair outcomes and amplify existing disparities. This research investigates the current state of dataset documentation practices, focusing on their ability to address these challenges and support ethical AI development. We identify shortcomings in existing documentation methods that limit the recognition and mitigation of bias, incompleteness, and other issues in datasets. To address these gaps, we propose the ‘Healthcare AI Datasheet’, a dataset documentation framework that promotes transparency and ensures alignment with regulatory requirements. Additionally, we demonstrate how it can be expressed in a machine-readable format, facilitating its integration with datasets and enabling automated risk assessments. The findings emphasise the importance of dataset documentation in fostering responsible AI development.
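A minimal sketch of what a machine-readable datasheet entry might look like as JSON; every key below is a hypothetical illustration of the bias- and completeness-related metadata the paper argues for, not its actual schema.

```python
# Hypothetical 'Healthcare AI Datasheet' entry; all keys are invented
# illustrations, not the paper's published schema.
import json

datasheet = {
    "dataset": "example-chest-xray-corpus",
    "collection_period": "2015-2020",
    "demographics": {"sex": {"female": 0.41, "male": 0.59}},
    "known_gaps": ["under-representation of patients over 80"],
    "intended_use": "training triage-support models",
    "regulatory_notes": ["EU AI Act data governance obligations (Art. 10)"],
}

# A machine-readable form like this is what would let an automated
# tool flag, e.g., demographic skew before a model is trained.
print(json.dumps(datasheet, indent=2))
```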


How to Manage My Data? With Machine-Interpretable GDPR Rights!

December 2024 · 44 Reads

The EU GDPR is a landmark regulation that introduced several rights for individuals to obtain information about and control how their personal data is being processed, as well as to receive a copy of it. However, there are gaps in the effective use of these rights because each organisation develops custom methods for rights declaration and management. Simultaneously, there is a technological gap: no single consistent standards-based mechanism exists that can automate the handling of rights for both organisations and individuals. In this article, we present a specification for exercising and managing rights in a machine-interpretable format based on semantic web standards. Our approach uses the comprehensive Data Privacy Vocabulary to create a streamlined workflow for individuals to understand what rights exist and how and where to exercise them, and for organisations to effectively manage them. This work pushes the state of the art in GDPR rights management and is crucial for data reuse and rights management under technologically intensive developments such as Data Spaces.
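The sketch below illustrates, with invented class and field names, the kind of trackable rights-exercise record such a specification enables; the paper's actual specification is expressed with semantic web standards and the Data Privacy Vocabulary rather than in code like this.

```python
# Hypothetical model of a machine-interpretable rights-exercise
# request and its lifecycle; names and states are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RightsExerciseRequest:
    right: str                 # e.g. GDPR Art. 15 (Right of Access)
    data_subject: str          # identifier for the individual
    submitted: date
    status: str = "received"   # received -> acknowledged -> fulfilled
    history: list[str] = field(default_factory=list)

    def advance(self, new_status: str) -> None:
        """Record a status transition so both parties can track progress."""
        self.history.append(f"{self.status} -> {new_status}")
        self.status = new_status

req = RightsExerciseRequest(
    right="GDPR-Art15-Access",
    data_subject="https://example.org/users/42",
    submitted=date(2024, 12, 1),
)
req.advance("acknowledged")
req.advance("fulfilled")
print(req.status, req.history)
```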


[Figures and tables from the paper: Figure 1, an overview of the AICat Profile; a comparison of AI cataloguing approaches (• = criterion satisfied, ∘ = not satisfied); registration requirements for high-risk AI systems under the EU AI Act; specifications for representing AI systems and models in AICat]
AICat: An AI Cataloguing Approach to Support the EU AI Act

December 2024 · 48 Reads

The European Union’s Artificial Intelligence Act (AI Act) requires providers and deployers of high-risk AI applications to register their systems in the EU database, wherein the information should be represented and maintained in an easily navigable and machine-readable manner. Given the uptake of open data and Semantic Web-based approaches for other EU repositories, in particular the use of the Data Catalogue Vocabulary Application Profile (DCAT-AP), a similar solution for managing the EU database of high-risk AI systems is needed. This paper introduces AICat, an extension of DCAT for representing catalogues of AI systems that provides consistency, machine-readability, searchability, and interoperability in managing open metadata regarding AI systems. This open approach to cataloguing ensures transparency, traceability, and accountability in AI application markets beyond the immediate needs of high-risk AI compliance in the EU. AICat is available online at https://w3id.org/aicat under the CC-BY-4.0 license.
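A rough sketch of the cataloguing approach using rdflib: the DCAT and DCTERMS terms are real W3C vocabulary, while the aicat: class and property shown are assumed stand-ins for the published vocabulary at https://w3id.org/aicat, not its actual terms.

```python
# Cataloguing an AI system with DCAT plus an AICat-style extension.
# DCAT/DCTERMS are real W3C vocabularies; the aicat: class and
# property below are assumptions for illustration.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, DCTERMS, DCAT

AICAT = Namespace("https://w3id.org/aicat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("aicat", AICAT)

catalog = URIRef("https://example.org/catalogue")
system = URIRef("https://example.org/systems/credit-scoring-ai")

g.add((catalog, RDF.type, DCAT.Catalog))
g.add((system, RDF.type, AICAT.AISystem))              # hypothetical class
g.add((system, DCTERMS.title, Literal("Credit scoring AI system")))
g.add((system, AICAT.riskLevel, Literal("high-risk")))  # hypothetical property
g.add((catalog, DCAT.dataset, system))                  # listed in the catalogue

print(g.serialize(format="turtle"))
```

Reusing DCAT in this way is what keeps such a catalogue interoperable with the EU's existing open data repositories and searchable with standard tooling.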




Citations (36)


... The need for reporting AI risks both at the level of models and specific uses is partly driven by standards like the NIST AI Risk Management Framework [51] and regulations like the EU AI Act [14], which mandate risk documentation based on the particular use and context [26,31]. Consequently, various AI impact assessment reports [1,6,65] and cards [25] have been proposed to help AI developers prepare the required documentation, particularly for high-risk systems. ...

Reference:

RiskRAG: A Data-Driven Solution for Improved AI Model Risk Reporting
AI Cards: Towards an Applied Framework for Machine-Readable AI and Risk Documentation Inspired by the EU AI Act

... In the public consciousness, two vivid risks people envision from AI are existential threats to humanity (Cameron, 1984) and the risk of being replaced by machines (Autor, 2015). In the past few years, multiple groups have introduced AI incident trackers (Abercrombie et al., 2024; Hutiri et al., 2024; McGregor, 2021; OECD, 2024) and taxonomies (Arda, 2024; Cattell et al., 2024; Critch & Russell, 2023; Shelby et al., 2022; Weidinger, 2022) to analyse the potential harms and risks of AI. ...

A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms

... The DPVCG plans to refine its TECH and AI extensions based on existing works [36,37] providing taxonomies for AI techniques, capabilities, lifecycle stages, risks and risk sources, and to enable stakeholders to express specific use cases (e.g., involving generative AI) in a manner that supports requirements of the EU AI Act and ISO standards [36]. The DPVCG is also continuing its efforts to develop vocabularies to represent key 'data and AI regulations', notably, in the EU, the Digital Services Act (DSA), the Digital Markets Act (DMA), the Data Act, and Data Spaces, and to model laws in other jurisdictions, e.g., Ireland, the USA, and the UK. ...

AI Cards: Towards an Applied Framework for Machine-Readable AI and Risk Documentation Inspired by the EU AI Act

... This unified model goes beyond simply assigning attributes to data sources. These ontologies provide a structured representation of privacy requirements, helping to differentiate them from security requirements and prevent privacy violations [9,10]. Privacy ontologies are crucial tools when addressing privacy concerns in various domains. ...

Data Privacy Vocabulary (DPV) - Version 2
  • Citing Preprint
  • April 2024

... Moreover, ML models were trained with DPV's taxonomies to identify personal data processing activities in code repositories [44,84] and textual datasets [32,33]. DPV's outputs were also used to model access and usage control policies [10,12,16,86], and in particular applied to Solid [3,13,14,20,21,25,30] and health data-sharing use cases [64,82], as well as to describe consent records and contracts for sensor data [51,53]. In the context of data spaces, DPV was used to provide descriptions of health data handling activities [43] and to create user-centric privacy interfaces [31,39,59]. ...

Enhancing Data Use Ontology (DUO) for health-data sharing by extending it with ODRL and DPV

Semantic Web

... When it comes to extensions performed over DPV, most were contributed back to DPV to be integrated into DPVCG's outputs. Concerning work on GDPR requirements, there were proposed extensions focusing on consent [15,61], in particular related to the processing of electronic health record data [71], as well as on building semantic models to represent records of processing activities [73,74,77,78], data protection impact assessments [60], data breaches' reports [68], and international data transfer notices [45]. Moreover, extensions focusing on GDPR's data subject rights and exemptions to these rights [19] and on DGA requirements [18,24] were also contributed back to DPVCG's outputs. ...

Towards a Semantic Specification for GDPR Data Breach Reporting

... Concerning work on GDPR requirements, there were proposed extensions focusing on consent [15,61], in particular related to the processing of electronic health record data [71], as well as on building semantic models to represent records of processing activities [73,74,77,78], data protection impact assessments [60], data breaches' reports [68], and international data transfer notices [45]. Moreover, extensions focusing on GDPR's data subject rights and exemptions to these rights [19] and on DGA requirements [18,24] were also contributed back to DPVCG's outputs. ...

Semantics for Implementing Data Reuse and Altruism Under EU’s Data Governance Act

... The need for reporting AI risks both at the level of models and specific uses is partly driven by standards like the NIST AI Risk Management Framework [51] and regulations like the EU AI Act [14], which mandate risk documentation based on the particular use and context [26,31]. Consequently, various AI impact assessment reports [1,6,65] and cards [25] have been proposed to help AI developers prepare the required documentation, particularly for high-risk systems. ...

To Be High-Risk, or Not To Be—Semantic Specifications and Implications of the AI Act’s High-Risk AI Applications and Harmonised Standards
  • Citing Conference Paper
  • June 2023

... On a technical level, several 'privacy signal' initiatives have emerged as tools for users to communicate their preferences, e.g., DNT [54], the Global Privacy Control (GPC) [61], or the Advanced Data Protection Control (ADPC) [62]. However, despite their benefits, they still lack adoption, are not standardized for interoperability, and have not been legally tested to fulfill ePrivacy requirements [63]. ...

How could the upcoming ePrivacy Regulation recognise enforceable privacy signals in the EU?
  • Citing Preprint
  • February 2023