February 2025 · 10 Reads
Microchips are fundamental components of modern electronic devices, yet they remain opaque to the users who rely on them daily. This opacity, compounded by the complexity of global supply chains and the concealment of proprietary information, raises significant security, trust, and accountability issues. We investigate end users' understanding of microchips, exploring their perceptions of the societal implications and information needs regarding these essential technologies. Through an online survey with 250 participants, we found that while our participants were aware of some microchip applications, they lacked awareness of the broader security, societal, and economic implications. Although our participants unanimously desired more information on microchips, their specific information needs were shaped by factors such as a microchip's application environment and their affinity for technology interaction. Our findings underscore the necessity of improving end users' awareness and understanding of microchips, and we provide possible directions to pursue this end.
February 2025 · 5 Reads
Machine learning-driven rankings, where individuals (or items) are ranked in response to a query, mediate search exposure and attention in a variety of safety-critical settings. It is therefore important to ensure that such rankings are fair. Under the goal of equal opportunity, the attention allocated to an individual on a ranking interface should be proportional to their relevance across search queries. In this work, we examine amortized fair ranking, where relevance and attention are accumulated over a sequence of user queries to make fair ranking more feasible in practice. Unlike prior methods that operate on the expected amortized attention for each individual, we define new divergence-based measures for attention distribution-based fairness in ranking (DistFaiR), characterizing unfairness as the divergence between the distributions of attention and relevance corresponding to an individual over time. First, this allows us to propose new definitions of unfairness that are more reliable at test time. Second, we prove that group fairness is upper-bounded by individual fairness under this definition for a useful class of divergence measures, and experimentally show that maximizing individual fairness through integer linear programming-based optimization is often beneficial to group fairness. Lastly, we find that prior research in amortized fair ranking ignores critical information about queries, potentially leading to a fairwashing risk in practice by making rankings appear fairer than they actually are.
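The divergence-based idea in this abstract can be illustrated with a short sketch. This is a minimal illustration of the distributional view of amortized fairness, not the paper's DistFaiR implementation: the logarithmic position-bias model and the choice of Jensen-Shannon divergence are assumptions made here for concreteness.

import numpy as np
from scipy.spatial.distance import jensenshannon

def position_attention(ranks):
    # Common position-bias assumption: attention decays logarithmically
    # with rank (0-indexed), as in DCG-style discounting.
    return 1.0 / np.log2(ranks + 2)

def distributional_unfairness(attention, relevance, eps=1e-12):
    # Treat an individual's attention and relevance, accumulated over a
    # sequence of queries, as two distributions over those queries, and
    # measure unfairness as the divergence between them.
    p = attention / (attention.sum() + eps)
    q = relevance / (relevance.sum() + eps)
    return jensenshannon(p, q) ** 2  # squared JS distance = JS divergence

# Toy example: one individual's ranks and relevance over five queries.
ranks = np.array([0, 3, 1, 5, 2])
relevance = np.array([0.9, 0.2, 0.8, 0.1, 0.7])
print(distributional_unfairness(position_attention(ranks), relevance))

A divergence of zero means attention tracked relevance across the whole query sequence; an expectation-based measure, by contrast, can report zero even when attention fluctuates wildly over time, which is why the distributional definition is more reliable at test time.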
January 2025 · 46 Reads
Susceptibility to bias and discrimination is a pressing issue in today's labor markets. Although digital recruitment systems play an increasingly significant role in human resources management, we still lack a systematic understanding of human-centered design principles for fair online hiring. This work proposes a fair recruitment framework based on job-seekers' fairness concerns shared in an online forum. Through qualitative analysis, we uncover four overarching themes in job-seekers' fairness concerns: discrimination based on sensitive attributes, interaction biases, improper interpretations of qualifications, and power imbalance. Based on these findings, we derive design implications for algorithms and interfaces in recruitment systems and integrate them into a fair recruitment framework spanning different hiring stages and fairness considerations.
January 2025 · 63 Reads
Since the emergence of generative AI, creative workers have spoken up about the career-based harms they have experienced arising from this new technology. A common theme in these accounts of harm is that generative AI models are trained on workers' creative output without their consent and without giving credit or compensation to the original creators. This paper reports findings from 20 interviews with creative workers in three domains: visual art and design, writing, and programming. We investigate the gaps between current AI governance strategies, what creative workers want out of generative AI governance, and the nuanced role of creative workers' consent, compensation and credit for training AI models on their work. Finally, we make recommendations for how generative AI can be governed and how operators of generative AI systems might more ethically train models on creative output in the future.
January 2025 · 1 Read
SSRN Electronic Journal
September 2024 · 72 Reads · 11 Citations
ACM Transactions on Intelligent Systems and Technology
Employers are adopting algorithmic hiring technology throughout the recruitment pipeline. Algorithmic fairness is especially applicable in this domain due to its high stakes and structural inequalities. Unfortunately, most work in this space provides partial treatment, often constrained by two competing narratives, optimistically focused on replacing biased recruiter decisions or pessimistically pointing to the automation of discrimination. Whether, and more importantly what types of, algorithmic hiring can be less biased and more beneficial to society than low-tech alternatives currently remains unanswered, to the detriment of trustworthiness. This multidisciplinary survey caters to practitioners and researchers with a balanced and integrated coverage of systems, biases, measures, mitigation strategies, datasets, and legal aspects of algorithmic hiring and fairness. Our work supports a contextualized understanding and governance of this technology by highlighting current opportunities and limitations and providing recommendations for future work to ensure shared benefits for all stakeholders.
September 2024 · 192 Reads
The General Data Protection Regulation (GDPR) aims to safeguard individuals' personal information from harm. While full compliance is mandatory in the European Union and under the California Privacy Rights Act (CPRA), it is not elsewhere. The GDPR requires simultaneous compliance with all of its principles, such as fairness, accuracy, and data minimization, but it overlooks potential contradictions among these principles. The matter becomes even more complex when compliance is required of decision-making systems. It is therefore essential to investigate whether the goals of the GDPR and of machine learning can be achieved simultaneously, and which tradeoffs might be forced upon us. This paper studies the relationship between the principles of data minimization and fairness in recommender systems. We operationalize data minimization via active learning (AL) because, unlike many other methods, it can preserve high accuracy while allowing for strategic data collection, thereby minimizing the amount of data collected. We implemented several active learning strategies (personalized and non-personalized) and conducted a comparative analysis focusing on accuracy and fairness on two publicly available datasets. The results demonstrate that different AL strategies can have different impacts on the accuracy of recommender systems, with nearly all strategies negatively impacting fairness. There has been little to no prior work on the trade-off between data minimization and fairness, on the pros and cons of active learning as a tool for implementing data minimization, or on the potential impacts of AL on fairness. By exploring these critical aspects, we offer valuable insights for developing recommender systems that are GDPR compliant.
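As a rough illustration of how active learning can operationalize data minimization, the sketch below collects ratings one at a time with a non-personalized acquisition strategy and tracks accuracy as data accrues. This is a toy under stated assumptions (synthetic ratings, an item-mean predictor, a least-rated-item strategy), not the strategies evaluated in the paper; a fairness audit would additionally compare error across user groups.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground-truth ratings (users x items); NaN marks not-yet-collected.
true_ratings = rng.integers(1, 6, size=(100, 40)).astype(float)
observed = np.full_like(true_ratings, np.nan)

def least_rated_strategy(observed):
    # Non-personalized acquisition: query the item with the fewest observed
    # ratings, from a random user who has not rated it yet.
    counts = np.sum(~np.isnan(observed), axis=0)
    item = int(np.argmin(counts))
    users = np.flatnonzero(np.isnan(observed[:, item]))
    return int(rng.choice(users)), item

def rmse_on_unobserved(observed, true_ratings):
    # Item-mean predictor with a global-mean fallback, evaluated on the
    # cells that were never collected (the data "minimized away").
    counts = np.sum(~np.isnan(observed), axis=0)
    sums = np.nansum(observed, axis=0)
    global_mean = np.nansum(observed) / max(counts.sum(), 1)
    item_means = np.where(counts > 0, sums / np.maximum(counts, 1), global_mean)
    mask = np.isnan(observed)
    preds = np.broadcast_to(item_means, true_ratings.shape)
    return float(np.sqrt(np.mean((preds[mask] - true_ratings[mask]) ** 2)))

for step in range(1, 401):  # data-collection budget of 400 ratings
    user, item = least_rated_strategy(observed)
    observed[user, item] = true_ratings[user, item]
    if step % 100 == 0:
        print(step, "ratings collected, RMSE:",
              round(rmse_on_unobserved(observed, true_ratings), 3))

The loop makes the trade-off explicit: accuracy improves as more ratings are acquired, so choosing a stopping budget is exactly the data-minimization decision, and different acquisition strategies can distribute that accuracy unevenly across users.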
August 2024 · 27 Reads
Through a systematization of generative AI (GenAI) stakeholder goals and expectations, this work seeks to uncover what value different stakeholders see in their contributions to the GenAI supply line. This valuation enables us to understand whether the fair use doctrine, advocated by GenAI companies for training their models, advances the copyright law objective of promoting science and the arts. In assessing the validity and efficacy of the fair use argument, we uncover research gaps and potential avenues for future work that researchers and policymakers can address.
... However, the scope of Art. 89 is highly controversial (see Kindt et al., 2021; Biega & Finck, 2021 for more details). Because of this controversy, we focus on the EU-wide law. ...
December 2021
Technology and Regulation
... As AI systems become increasingly embedded in such critical domains, concerns about fairness and transparency have grown, particularly regarding their effects on groups defined by gender, race, or other protected attributes. For example, studies have shown that many AI-driven hiring systems exhibit bias against women, reflecting historical inequalities [16]. Similarly, the COMPAS system, used for recidivism prediction, has been found to assign higher risk scores to black defendants and lower risk scores to white defendants relative to their actual recidivism outcomes [33], highlighting the potential for discriminatory outcomes. ...
September 2024
ACM Transactions on Intelligent Systems and Technology
... The understanding can also be affected by the individual's domain expertise in the decision-making task [83] as well as the explanation's modality (e.g., textual, visual, or interactive) [65]. Speith et al. [73] connect explainability to hardware in the context of requirements engineering, with a particular focus on microchips. Among their future research directions, they explicitly propose to explore end users' mental models of microchips. ...
June 2024
... One thing is known for sure: we cannot assume that the current multi-billion-dollar investments from regulators will guarantee end-user trust in microchips. Therefore, similar to existing research on user perceptions of the rights prescribed in the General Data Protection Regulation (GDPR) [32,38,47,77], more work is needed to understand end users' perceptions of ongoing regulatory initiatives around microchips, in order to capture laypeople's opinions and embed them into policymaking. ...
May 2024
... Due to position bias, individuals gain exposure based on their position in the ranking, which directly influences the attention they receive [8,53]. Under the normative principle of equal opportunity, the objective of exposure-based fair ranking is to assign rankings such that the attention allocated to each individual is proportional to their merit [5,54,65,66]. In practical terms, merit is operationalized as a value proportional to relevance. ...
July 2023
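The proportionality principle in the excerpt above can be made concrete in a few lines. This is a minimal sketch assuming a logarithmic position-bias model; it checks how far a single ranking's exposure shares drift from merit (relevance) shares, which is the gap that amortizing over query sequences is meant to close.

import numpy as np

def exposure(positions):
    # Position-bias assumption: attention decays logarithmically with
    # rank position (0-indexed), as in DCG-style discounting.
    return 1.0 / np.log2(positions + 2)

relevance = np.array([0.9, 0.7, 0.4, 0.1])  # merit, one value per individual
order = np.argsort(-relevance)              # rank by relevance, best first
positions = np.empty_like(order)
positions[order] = np.arange(len(order))    # each individual's rank position

exposure_share = exposure(positions) / exposure(positions).sum()
merit_share = relevance / relevance.sum()
print(np.round(exposure_share / merit_share, 2))  # 1.0 = proportional

Even a relevance-sorted ranking is not proportional: with logarithmic discounting, the least relevant individual here receives several times its merit share of attention while the top individual is slightly under-exposed, and no single static ranking can correct this, which is why exposure is amortized over a sequence of queries.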
... Many studies use popularity-based 'top lists' of websites, created by services such as Tranco, Alexa, Majestic, or Google, rather than a manually curated set of domains. Often, these studies simply use the list of globally most popular sites [14,40,62,74,88,104], whereas others create subsets from these top lists, for example by filtering the list using the country code top-level domain (ccTLD, such as .be ...
April 2023
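As a concrete sketch of the subsetting step mentioned above: Tranco publishes its list as a rank,domain CSV, so filtering by ccTLD is a short script. The local file name "tranco.csv" and the .be suffix are illustrative assumptions, not part of the cited study.

import csv

# Filter a Tranco-style "rank,domain" CSV down to a single ccTLD.
with open("tranco.csv", newline="") as f:
    be_domains = [domain for rank, domain in csv.reader(f)
                  if domain.endswith(".be")]
print(len(be_domains), be_domains[:5])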
... AI-related technology presents its own developments and challenges, as is the case with AI chips, which are specialized for different functions within the armed and security forces [34,23]. For example, AI for air superiority and defense uses vision systems and machine learning to identify and track targets. ...
February 2023
... Information access systems, including information retrieval and recommender systems, commonly rely on ranking to present outcomes pertinent to the user's information requirements [133]. Ranking algorithms can treat specific groups unfairly [47]. Many online platforms adopt user-centric optimization, where ranking lists are generated according to estimated relevance [184]. ...
February 2023
... For instance, the prohibition on discrimination on the basis of certain characteristics could be operationalized through an algorithm that optimizes, depending on the context [10], for demographic parity or equalized odds [15]. The legal principle of data minimization could be operationalized through a combination of a performance-based criterion [16,17] and a k-anonymization technique (see, e.g., [18]), and product liability laws may require operationalization through the use of inherently interpretable models [19,20]. To evaluate the impact on the different legal obligations, metrics on the ML model's performance on different ethical aspects (e.g. ...
June 2022
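The two fairness criteria named in the excerpt above are straightforward to compute from predictions, ground-truth labels, and a protected attribute. This is a minimal sketch assuming binary groups and binary labels; the synthetic data and the 0.5 threshold are illustrative.

import numpy as np

def demographic_parity_gap(y_pred, group):
    # Absolute difference in positive-prediction rates between groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_pred, y_true, group):
    # Worst-case gap across true-positive and false-positive rates.
    gaps = []
    for label in (1, 0):  # TPR on label == 1, FPR on label == 0
        r0 = y_pred[(group == 0) & (y_true == label)].mean()
        r1 = y_pred[(group == 1) & (y_true == label)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
y_true = rng.integers(0, 2, 1000)
scores = rng.random(1000) + 0.05 * group  # mildly group-biased scores
y_pred = (scores > 0.5).astype(int)
print(round(demographic_parity_gap(y_pred, group), 3))
print(round(equalized_odds_gap(y_pred, y_true, group), 3))

Operationalizing a legal prohibition then amounts to constraining or penalizing one of these gaps during training, with the choice between them depending on the context, as the excerpt notes.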
... Drawing on critical scholarship in HCI that challenges the normative assumptions of well-being [2,45,81], we highlight the need for researchers to resist curative framings of interventions [183,201] and to embrace the necessity of negative emotions in care receivers' recovery journeys [63,89,125]. Beyond sociocultural norms of what and how emotions "should" be expressed, participatory design may strive to elevate the agency of individuals to explore and process intense, negative feelings, for instance through playful experiments with accessible, familiar materials in their everyday environments to easily craft sensory narratives of negative emotions. ...
April 2022