Alexander von Humboldt: Institut für Internet und Gesellschaft
Recent publications
Context Recent laws to ensure the security and protection of personal data establish new software requirements. Consequently, new technologies are needed to guarantee software quality with respect to privacy and the protection of personal data. Therefore, we created a checklist-based inspection technique (LGPDCheck) to support the identification of defects in software systems based on the principles established by the Brazilian General Data Protection Law (LGPD). Objective To evaluate the effectiveness and efficiency of LGPDCheck for verifying privacy and data protection (PDP) in software artifacts and systems under execution compared to ad-hoc techniques. Method LGPDCheck and ad-hoc techniques were assessed experimentally through two quasi-experiments (two factors, two treatments). The first, in vitro, inspected the vision and requirements specification artifacts of an integrated accounting information management system. The second, in vivo, inspected a publicly available mobile application used to support legal services to citizens in Brazil. Results The studies indicate that LGPDCheck improves inspection consistency, detects more PDP defects than ad-hoc inspections, is more efficient than ad-hoc inspections when inspecting vision and requirements specification artifacts, and is comparable in effectiveness to ad-hoc inspections. After initial use, LGPDCheck received positive feedback and was perceived as a useful and easy-to-use technique for detecting PDP defects. The professionals strongly recommended using LGPDCheck for this purpose. Conclusion LGPDCheck is a feasible checklist-based inspection technique for detecting PDP defects in software systems. However, further studies are necessary to strengthen confidence in LGPDCheck as a recommended technique for privacy and data protection in software artifacts and systems in Brazil.
Participation is a prevalent topic in many areas, and data-driven projects are no exception. While the term generally has positive connotations, ambiguities in participatory approaches between facilitators and participants are often noted. However, how facilitators can handle these ambiguities has been less studied. In this paper, we conduct a systematic literature review of participatory data-driven projects. We analyse 27 cases regarding their openness for participation and where participation most often occurs in the data life cycle. From our analysis, we describe three typical project structures of participatory data-driven projects, combining a focus on labour and resource participation and/or rule- and decision-making participation with the general set-up of the project as participatory-informed or participatory-at-core. From these combinations, different ambiguities arise. We discuss mitigations for these ambiguities through project policies and procedures for each type of project. Mitigating and clarifying ambiguities can support a more transparent and problem-oriented application of participatory processes in data-driven projects.
Can measuring and valuing the impact of business on society and the planet lead to a more environmentally and socially oriented style of capitalism? This is the main hope and assertion of corporate environmental and social impact measurement and valuation (IMV), which calls on organizations to measure their positive and negative impacts on their stakeholders and the environment and to subsequently translate them into monetary units. This curated dialog critically examines the components of this concept—environmental and social impact, its measurement, and its monetary valuation—by bringing together leading experts in the field who discuss the opportunities and risks of IMV. The purpose of this article is to place IMV under deep investigation and envision new ways that work with, complement, or replace organizations’ desire for management via quantification and financialization.
While there is a strong scholarly interest surrounding the content of political misinformation online, much of this research concerns misinformation in Western, Educated, Industrialized, Rich and Democratic (WEIRD) countries. Although such research has investigated the topical and stylistic characteristics of misinformation, its findings are frequently not interpreted systematically in relation to properties that journalists rely on to capture the attention of audiences, that is, in relation to news values. We close the gap on comparative studies of news values in misinformation with a perspective that emphasizes non-WEIRD countries. Relying on a dataset of URLs that were shared on Facebook in twenty-four countries and reported by users as containing false news, we compile a large corpus of online news items and use an array of computational tools to analyze its content with respect to a set of five news values (conflict, negativity, proximity, individualization, and informativeness). We find salient differences for almost all news values and regarding the WEIRD/non-WEIRD and flagged/unflagged distinction. Moreover, the prevalence of individual news values differs strongly for individual countries. However, while almost all differences are significant, the effects we encounter are mostly small.
Artificial intelligence has become an issue in public policy. Multiple documents issued by public sector actors link artificial intelligence to a wide range of issues, problems or goals and propose corresponding measures and interventions. While there has been substantial research on national and supranational artificial intelligence strategies and regulations, this article is interested in unpacking the processes and priorities of artificial intelligence policy in the making. Conceptually, this article takes a controversy studies lens onto artificial intelligence policy, and complements this with concepts and insights from policy studies. Empirically, we investigate the emergence of German artificial intelligence policy based on content analyses of policy documents and expert interviews. The findings reveal a late, but then powerful institutionalisation of artificial intelligence policy in German federal politics. Artificial intelligence policy in Germany focuses on funding research and supporting industry actors in networked configurations, much more than on addressing societal concerns about inequality, discrimination or political economy. With regard to controversies, we observe that German policy evades controversies by normalising artificial intelligence, both by taking the integration of artificial intelligence in all sectors of society for granted and by accommodating artificial intelligence issues into the routines and institutions of German policy.
Advanced assistive technologies like robots must go beyond being merely service-on-request devices and should be equipped with abilities to coexist in social environments. Many development activities concentrate on refining robot capabilities for understanding complex social nuances. In this paper, we argue for shifting focus: rather than aiming for flawless operation through optimized context understanding, strong emphasis should be placed on designing robot systems for intervenability. The concept of intervenability, primarily known from the IT security and privacy domain, describes a systemic property that allows humans to meaningfully step into robotic operations. To contextualize intervenability design relative to other approaches, we position it within a design space that represents development processes along optimizing opportunities and mitigating risks. We show that common boundaries can be overcome because intervenability enables the delegation of some context-understanding complexity to humans in situ. Examples from the domain of robotic life assistants illustrate the argumentation.
Automation is a defining feature of today’s societies—not only since ChatGPT and generative artificial intelligence (AI) have produced yet another wave of hype. This essay introduces a special issue on automation and communication in the digital society. It aims to study how subjectivity, agency, and empowerment become defined and reconfigured in novel human–machine encounters and, more broadly, in societies which in large parts are kept going and sustained by complex digital infrastructures. The issue includes contributions from a wide array of disciplines and perspectives and engages with conditions, contexts, and consequences of automation in very different settings, ranging from journalism to self-service hotels, and from social movements in Hong Kong to the Russian invasion of Ukraine. The articles offer critical perspectives on the transition of human activity into machine operations, and back, as well as on the social dynamics changing and emerging in increasingly digitized and datafied societies.
University leaders play crucial roles in steering and fostering change within higher education institutions (HEIs). Drawing upon complexity leadership theory (CLT) and organizational trust, we investigate how university leaders trusting staff with responsibilities tied to digital change contributed to an institutional culture of innovation. Through 68 interviews with staff members working in 8 European study programs, we found that leaders exhibited trust by creating flat hierarchies, sharing decision-making, and ensuring a safe space for experimentation with educational technologies (EdTech). This led to staff being intrinsically motivated to engage with technology and innovate with new formats. We also found that university leaders sometimes used ‘trust’ to justify shifting the responsibilities of digital change onto staff without providing support such as infrastructure, funding, and guidance. This contributed to demotivation and stifled innovation. This study highlights the importance of university leaders trusting and empowering their staff members' creative processes with technology and supporting innovation within higher education.
This paper presents an in-depth case study about the Dialogue between Scientific Councils, also referred to as the Beirätedialog, which is a format for cross-sectoral science policy consulting on sustainable development in Germany. Set up to address current trends, it is designed to facilitate deliberation and collective knowledge creation between scientists and policymakers. Based on 4 years of participatory observation, we analyze to what extent this goal can be achieved and present some empirical insights about the main difficulties that occurred. We argue that creating a space for interaction does not guarantee collective knowledge production and identify key learnings that can help design such a process. In support of the growing interest in communication at the intersection of science and policymaking, our research seeks to deepen the understanding of the dynamics of co-creative processes and offer some insights on how to overcome the main challenges.
Abstract The discussion on the further shaping of the open-access transformation gained new momentum through the conclusions of the Council of the European Union's May 2023 paper on high-quality, transparent, open, trustworthy and equitable scholarly publishing. The Council emphasizes the need for non-profit, scholar-led open-access publishing models. As a contribution to this discussion, "Theses on the Future of Scholar-Led Open-Access Publishing" were developed at the satellite conference "Wissenschaftsgeleitetes Open-Access-Publizieren" (scholar-led open-access publishing) held at the Institut für Bibliotheks- und Informationswissenschaft (IBI) of Humboldt-Universität zu Berlin on 26 September 2023. The event accompanied the Open-Access-Tage 2023, which took place at Freie Universität Berlin. This article presents the theses and describes the process of their development and discussion. The ten theses are intended as an impulse for the further development of the open-access transformation.
Trustworthy artificial intelligence (TAI) is trending high on the political agenda. However, what is actually implied when talking about TAI, and why it is so difficult to achieve, remains insufficiently understood by both academic discourse and current AI policy frameworks. This paper offers an analytical scheme with four different dimensions that constitute TAI: a) A user perspective of AI as a quasi-other; b) AI's embedding in a network of actors from programmers to platform gatekeepers; c) The regulatory role of governance in bridging trust insecurities and deciding on AI value trade-offs; and d) The role of narratives and rhetoric in mediating AI and its conflictual governance processes. It is through the analytical scheme that overlooked aspects and missed regulatory demands around TAI are revealed and can be tackled. Conceptually, this work is situated in disciplinary transgression, dictated by the complexity of the phenomenon of TAI. The paper borrows from multiple inspirations such as phenomenology to reveal AI as a quasi-other we (dis-)trust; Science & Technology Studies (STS) to deconstruct AI's social and rhetorical embedding; as well as political science for pinpointing hegemonial conflicts within regulatory bargaining.
Design patterns, a concept that originated in urban architecture and was later adopted in software engineering, provide a potential approach for translations between law and technology. This approach is examined and elaborated from various viewpoints in this topical collection, for which this introductory article provides an overall framework. Here, we discuss design patterns as documentations of living practice, which embed legal concepts, rules, and thinking and mediate between internal and external perspectives on law. We argue that design patterns provide a structured format for interdisciplinary discussions and enhance the problem-solving and self-reflecting capabilities of legal scholarship.
The spread of misinformation has reached a level at which neither researchers nor fact-checkers can monitor it manually anymore. Accordingly, there has been much research on models and datasets for detecting checkworthy claims. However, the research in NLP is mostly detached from findings in communication science on misinformation and fact-checking. Checkworthiness is a notoriously vague concept whose meaning is contested among different stakeholders. Against the background of news value theory, i.e., the study of factors that make an event relevant for journalistic reporting, this is not surprising. It is argued that this vagueness leads to inconsistencies and poor generalization across different datasets and domains. For the experiments, models are trained on one dataset, tested on the remaining ones, and evaluated against the original performance, against a random baseline, and against the scores when the models are not trained at all. The study finds a drastic reduction in comparison with the performance on the original dataset. Moreover, the models are often outperformed by the random baseline, and training on one dataset has no or even a negative impact on the performance on the other datasets. This paper proposes that future research should abandon this task design and instead take inspiration from research in communication science. In the style of news values, claim detection should focus on factors that are relevant for fact-checkers and misinformation research.
Abstract This contribution presents the results of an empirical study at eight European higher education institutions, comprising a total of 68 interviews, and examines how the respective digitalization strategies relate to university leadership, faculty leadership, and teaching staff. It shows that the development of digitalization strategies received a boost from the COVID-19 pandemic. However, teaching staff were rarely informed about the digitalization strategies, and their implementation was left to the faculties. Karl E. Weick's theory of loosely coupled systems, which can be transferred to higher education institutions, lends itself to explaining this phenomenon.
This paper introduces Propositional Claim Detection (PCD), an NLP task for classifying claims to truth, and presents a publicly available dataset for it. PCD is applicable in practical scenarios, for instance, for the support of fact-checkers, as well as in many areas of communication research. By leveraging insights from philosophy and linguistics, PCD is a more systematic and transparent version of claim detection than previous approaches. This paper presents the theoretical background for PCD and discusses its advantages over alternative approaches to claim detection. Extensive experiments on models trained on the dataset are conducted and result in an F1-score of up to 0.91. Moreover, PCD’s generalization across domains is tested. Models trained on the dataset show stable performance for text from previously unseen domains such as different topical domains or writing styles. PCD is a basic task that finds application in various fields and can be integrated with many other computational tools.
Digital beauty filters are pervasive in social media platforms. Despite their popularity and relevance in the selfies culture, there is little research on their characteristics and potential biases. In this article, we study the existence of racial biases on the set of aesthetic canons embedded in social media beauty filters, which we refer to as the Beautyverse. First, we provide a historic contextualization of racial biases in beauty practices, followed by an extensive empirical study of racial biases in beauty filters through state-of-the-art face processing algorithms. We show that beauty filters embed Eurocentric or white canons of beauty, not only by brightening the skin color, but also by modifying facial features.
The ‘blue economy’ is slowly emerging as a catch-all concept that captures the goals of sustaining economic development opportunities while simultaneously maintaining ocean ecosystem health. However, identifying the scope and boundaries of the blue economy has proven to be a challenging task. The aim of this article is to provide a new approach to finding a practical definition of the blue economy. Social equity is noted as the legal component and balancing mechanism that operates between the economy and the environment. This legal compensation mechanism has so far only been rudimentarily elaborated on in governance texts and must be given concrete form through legal methods and, in particular, through the creation of an institutional and procedural framework. The article introduces the idea that an advanced legal concept for the blue economy can be realised by using existing administrative processes and by strengthening the participation of private actors within these processes.
50 members
Benedikt Fecher
  • Knowledge and Society
Jörg Pohle
  • Data, Actors and Infrastructures
Information
Address
Berlin, Germany