Article

The challenges and limits of big data algorithms in technocratic governance

Authors: Marijn Janssen and George Kuk

Abstract

Big data is driving the use of algorithms in governing mundane but mission-critical tasks. Algorithms seldom operate on their own, and their (dis)utilities depend on the everyday aspects of data capture, processing and utilization. However, as algorithms become increasingly autonomous and invisible, it becomes harder for the public to detect them and to scrutinize their impartiality. Algorithms can systematically introduce inadvertent bias, reinforce historical discrimination, favor a political orientation or reinforce undesired practices. Yet it is difficult to hold algorithms accountable, as they continuously evolve with technologies, systems, data and people, the ebb and flow of policy priorities, and the clashes between new and old institutional logics. Greater openness and transparency do not necessarily improve understanding. In this editorial we argue that by unravelling the imperceptibility, materiality and governmentality of how algorithms work, we can better tackle the inherent challenges in the curatorial practice of data and algorithms. Fruitful avenues for further research on using algorithms to harness the merits and utilities of a computational form of technocratic governance are presented.
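The bias mechanism the abstract describes, where a model trained on historically skewed decisions quietly reproduces them, can be made concrete with a short simulation. This is a minimal illustrative sketch, not the editorial's own method: the dataset, the protected `group` attribute and the decision rule are all invented.

```python
# Minimal sketch of how historical bias resurfaces in a learned model.
# All data and variable names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # protected attribute (0/1)
skill = rng.normal(0, 1, n)              # legitimate predictor
# Past decisions favored group 1 independently of skill:
past_decision = ((skill + 0.8 * group + rng.normal(0, 1, n)) > 0.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(np.c_[skill, group], past_decision)
pred = model.predict(np.c_[skill, group])

# Demographic-parity gap: difference in positive-decision rates by group.
gap = pred[group == 1].mean() - pred[group == 0].mean()
print(f"Selection-rate gap between groups: {gap:.2f}")  # clearly nonzero
```

Trained only to mimic past decisions, the model inherits their skew, which is why openness about training data matters as much as openness about code.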


... Procedural justice contributes to perceptions of legitimacy (Mazerolle et al., 2013). Algorithmic decision-making has been identified as a problem for the legitimacy of decision-making processes (Danaher, 2016), as algorithms are often opaque and might introduce bias (Janssen & Kuk, 2016). ...
... Comparable information will mean different things to different people. Trade-offs between values need to be taken into account and discretion should be used to deal with these trade-offs (Janssen & Kuk, 2016;Lipsky, 2010). This makes it impossible to objectively evaluate the 'right' course of action. ...
... Transparency might be a solution, as transparency can generate trust (Kizilcec, 2016). However, a pitfall is that offering superficial information might give the illusion of transparency and not actually increase understanding of the workings of the algorithm (Janssen & Kuk, 2016). Apart from transparency, an open dialogue could help managers increase understanding of public employees' perceptions of algorithmic decision-making. ...
Thesis
Full-text available
The rise of behavioral public administration demonstrated that we can understand and change decision-making by using insights about heuristics. Heuristics are mental shortcuts that reduce complex tasks to simpler ones. Whereas earlier studies mainly focused on interventions such as nudges, scholars are now broadening their scope to include debiasing, and psychological theories beyond heuristics. Scholars are moreover shifting their attention away from citizen-focused interventions to public sector worker-oriented interventions, i.e. the very people who are expected to nudge society. This dissertation seeks to explore how behavioral sciences can facilitate understanding and support decision-making across the public sector. We present four studies that investigate a range of behavioral theories, practices, issues and public sector workers. This dissertation shows that when handling heuristics in the public sector, we need to take into account the institutional and situational settings, as well as differences between public sector workers. The results of this dissertation can be used by practitioners and academics to understand and support decision-making in public sector contexts.
... The adoption, implementation, and use of Artificial Intelligence (AI) is an increasing trend in public organizations. Recent studies have highlighted the transformational capacities of AI technologies in the public sector across different service areas and policy sectors (Janssen & Kuk, 2016;Sun & Medaglia, 2019), or regarding their implications for people working in public administrations (Criado, Valero, & Villodre, 2020;Margetts & Dorobantu, 2019), or addressing how citizens interact with public authorities (Agarwal, 2018;Androutsopoulou, Karacapilidis, Loukis, & Charalabidis, 2019;Vigoda, 2002). At the same time, empirical evidence about the different implications of this new wave of technological innovations in public administration and its organizational challenges is relatively limited (Medaglia, Gil-Garcia, & Pardo, 2021). ...
... Smartness in government has been studied from different perspectives (Gil-Garcia, Helbig, & Ojo, 2014;Gil-Garcia, Pardo, & Nam, 2015;Rodriguez-Bolívar & Meijer, 2016). Thus, the nascent development of AI in government is driving scholars to study how to govern algorithms in the public sector (Janssen & Kuk, 2016;Just & Latzer, 2017), requiring a more sophisticated interplay between public management and digital government to cope with increasingly wicked problems (Gil-Garcia, Dawes, & Pardo, 2018). Different authors have acknowledged AI systems as a unique set of technological innovations that will make public services more efficient and effective, but also bring major changes to public administration and management (Dwivedi et al., 2020;Wirtz & Müller, 2019), and will shape the future of (public) organizations (Margetts & Dorobantu, 2019;Van der Voort, Klievink, Arnaboldi, & Meijer, 2019). ...
... Also, most CIOs agree with the idea that "AI in the public sector is not very different than AI in the private sector". Given the difference between public and private sectors in terms of values, goals, and infrastructure (Boyne, 2002;Hughes, 2012;Kernaghan & Langford, 1990;Lui & Cooper, 1997), it may be interesting to test whether these ideas on AI can affect AI implementation differently. When CIOs think about AI, they seem to take for granted its technological dimension, but this is paired with neither social nor politico-administrative aspects, nor with archetypical values of the public sector, including transparency, accountability, participation, equality, or impartiality (Margetts & Dorobantu, 2019). ...
Article
Full-text available
Artificial Intelligence (AI) policies and strategies have been designed and adopted in the public sector during the last few years, with Chief Information Officers (CIOs) playing a key role. Using socio-cognitive and institutional approaches to Information Technologies (ITs) in (public) organizations, we consider that the assumptions, expectations, and knowledge (technological frames) of those in charge (CIOs) of designing AI strategies are guiding the future of these emerging systems in the public sector. In this study, we focus on the technological frames of CIOs in the largest Spanish local governments. Based on a survey administered to CIOs leading IT departments, this article presents original data about their technological frames on AI. Our results: (1) provide insights about how CIOs tend to focus on the technological features of AI implementation while often overlooking some of the social, political, and ethical challenges in the public sector; (2) expand the theory on AI by enabling the construction of propositions and testable hypotheses for future research in the field. Therefore, the comparative study of technological frames will be key to successfully designing and implementing AI policies and strategies in the public sector and to tackling future challenges and opportunities.
... Whether and how new technologies subsumed under artificial intelligence (AI) could be used in public organizations has been much debated in recent years. While there is justified scepticism and fear that governments using AI may become too technocratic (Janssen and Kuk 2016), jeopardize privacy (Maciejewski 2017), reinforce inequalities, and even threaten democracy (Eubanks 2017;O'Neil 2016), it has also been pointed out that AI offers a plethora of opportunities for the public sector. ...
... Several studies have focused on challenges and risks of AI, such as privacy, legal, and ethical issues (Bannister and Connolly 2020;Janssen and Kuk 2016;Wirtz, Weyerer, and Geyer 2019), which mainly address questions of what and why (not). In light of the negative consequences of faulty AI for society, these studies are of high normative and practical relevance (see De la Garza (2020) for the example of the Michigan MiDAS system that wrongly accused citizens of tax fraud). ...
... Subsequent research might focus on long-term evaluations involving more stakeholders and striving for more generalizable results. Since our study deliberately focuses on adoption factors of AI in the public sector, it does not further consider the application risks within the public sector (Eubanks 2017;Janssen and Kuk 2016;Maciejewski 2017;O'Neil 2016), nor does it discuss the question of how public organizations might deal with algorithmic transparency. ...
Article
Full-text available
Despite the enormous potential of artificial intelligence (AI), many public organizations struggle to adopt this technology. Simultaneously, empirical research on what determines successful AI adoption in public settings remains scarce. Using the technology-organization-environment (TOE) framework, we address this gap with a comparative case study of eight Swiss public organizations. Our findings suggest that the importance of technological and organizational factors varies depending on the organization's stage in the adoption process, whereas environmental factors are generally less critical. Accordingly, this study advances our theoretical understanding of the specificities of AI adoption in public organizations throughout the different adoption stages.
... In a similar vein, the deployment of algorithms in harvesting and mediating big data has spawned a parallel process in which Northern societies are privileged and affirmed by algorithmic inclusions as they serve as a model society for machine learning, while Southern societies tend to be disadvantaged and disaffirmed by algorithmic exclusions as they are a non-model society for machine learning. A corollary of this is that, right from the outset, in an increasingly automated world where what Janssen and Kuk (2016) call Big and Open Linked Data (BOLD) is readily available, Southern societies are denied digital citizenship by algorithmic exclusions even if they were all to be data- and digitally-savvy. This is a consequential issue, as those who get excluded by both big data and algorithms have their life chances negatively impacted by exclusions perpetuated by automated algorithms. ...
... This is also a problematic issue, as those whose data is harvested and utilized by algorithms have no control and decision-making capacity over how their data is used. The point here is, as Janssen and Kuk (2016) pertinently argue, that even though algorithms are thought to belong to the domain of computer programming, they nonetheless percolate into social and economic spheres. In fact, there is no gainsaying that big data and algorithms have almost colonized the life worlds of modern-day, automated societies, wherever they are located. ...
Article
Full-text available
This paper explores digital marginalization, data marginalization, and algorithmic exclusions in the Souths. To this effect, it argues that underrepresented users and communities continue to be marginalized and excluded by digital technologies, by big data, and by algorithms employed by organizations, corporations, institutions, and governments in various data jurisdictions. Situating data colonialism within the Souths, the paper contends that data ableism, data disablism, and data colonialism are at play when data collected, collated, captured, configured, and processed from underrepresented users and communities is utilized by mega entities for their own multiple purposes. It also maintains that data coloniality, as opposed to data colonialism, is impervious to legal and legislative interventions within data jurisdictions. Additionally, it discusses digital citizenship (DC) and its related emerging regimes. Moreover, the paper argues that digital exclusion transcends the simplistic haves versus the have nots dualism as it manifests itself in multiple layers and in multiple dimensions. Furthermore, it characterizes how algorithmic exclusions tend to perpetuate historical human biases despite the pervasive view that algorithms are autonomous, neutral, rational, objective, fair, unbiased, and non-human. Finally, the paper advances a critical southern decolonial (CSD) approach to datafication, algorithms, and digital citizenship by means of which data coloniality, algorithmic coloniality, and the coloniality embodied in DC have to be critiqued, challenged, and dismantled.
... It is also important to involve internal stakeholders, such as decision makers and AI algorithm users, in the design and development of the technology and its professional adoption, in order to avoid automation bias. Their involvement is also relevant to avoid forms of technocratic governance, where all problems can be addressed with a new technology or by calling a technical expert (Janssen and Kuk, 2016). All professionals must be involved in the process, including accounting professionals. ...
... AI algorithms could therefore push further the "governing of the self" (Janssen and Kuk, 2016) through a pervasive capacity to collect data produced by citizens, while the same citizens are influenced by algorithms in their daily routines. This new form of surveillance will flourish in the midst of opaque AI algorithm public service delivery, which will allow for reduced control while reducing the costs of regulation (Janssen and Kuk, 2016). Accountability systems are crucial to avoid an emergence of this post-panopticon perspective by promoting citizens' participation and transparency in the design and use of AI algorithms. ...
Purpose
Governments are increasingly turning to artificial intelligence (AI) algorithmic systems to increase the efficiency and effectiveness of public service delivery. While the diffusion of AI offers several desirable benefits, caution and attention should be paid to the accountability of AI algorithmic decision-making systems in the public sector. The purpose of this paper is to establish the main challenges that an AI algorithm might bring about for public service accountability. In doing so, the paper also delineates future avenues of investigation for scholars.

Design/methodology/approach
This paper builds on previous literature and anecdotal cases of AI applications in public services, drawing on streams of literature from accounting, public administration and information technology ethics.

Findings
Based on previous literature, the paper highlights the accountability gaps that AI can bring about and the possible countermeasures. The introduction of AI algorithms in public services modifies the chain of responsibility. This distributed responsibility requires an accountability governance, together with technical solutions, to meet multiple accountabilities and close the accountability gaps. The paper also delineates a research agenda for accounting scholars to make accountability more "intelligent".

Originality/value
The findings of the paper shed new light and perspective on how public service accountability in AI should be considered and addressed. The results developed in this paper will stimulate scholars to explore, also from an interdisciplinary perspective, the issues public service organizations are facing to make AI algorithms accountable.
... Algorithms are now used to govern many aspects of our society and economy (Janssen & Kuk, 2016) as argued by the Committee of Experts MSI-AUT in the 2018 Draft Recommendation of the Committee of Ministers to Member States on the human rights impacts of algorithmic systems entitled "Addressing the Impacts of Algorithms on Human Rights" (Council of Europe, 2018): ...
... Algorithms are used to govern many aspects of our society and economy (Janssen & Kuk, 2016). As Osoba and Welser IV (2017) argue, an algorithm can be defined as "a computable function that can be implemented on computer systems. ...
... The former amounts to large consolidated data sets, which may result from merging several files together. "Big data," by contrast, typically refers to extremely large volumes of data from many sources, linked in a manner that makes them amenable to machine learning and semantic querying (Janssen and Kuk, 2016). The use of machine learning tools to solve problems in entrepreneurial strategy and policy has recently taken root in prominent policy and management journals (Guzman, 2017;Guzman and Stern, 2020;von Hippel and Cann, 2020). ...
... The use of multi-level policy analysis to design natural experiments has been established with regard to the SBIR program (Lanahan and Feldman, 2018). Scholars have also begun to unearth sources of exogenous variation in COVID-19 policies, capitalizing on abrupt Supreme Court decisions (Dave et al., 2020) and natural variation in the timing and extent of mask and stay-at-home orders (Janssen and Kuk, 2016;Lyu and Wehby, 2020). The use of relational database tools affords a higher degree of organization and precision in measuring exogenous sources of policy variation. ...
Article
Scholarly literature on the concept of entrepreneurial ecosystems has increased sharply over the past five years. The surge in interest has also heightened the demand for robust empirical measures that capture the complexity of dynamic relationships among ecosystem constituents. We offer a framework for measurement that places collaborative relationships among entrepreneurs, firms, government agencies, and research institutions at the center of the ecosystem concept. We further emphasize the four roles of the federal government as a catalyst, coordinator, certifier, and customer in shaping these relationships. Despite the central importance of these firm-government interactions, there is surprisingly little research on suitable methodologies and appropriate data for systematically and reliably incorporating them into measures of ecosystem health. Our study aims to address this gap in the literature by first developing a conceptual framework for measuring entrepreneurial ecosystems and then describing an array of accompanying databases that provide rich and detailed information on firms and their relationships with government organizations, accelerators, and research institutions. A major advantage of our approach is that all the underlying databases are drawn from non-confidential, publicly available sources that are transparently disclosed and regularly updated. This greatly expands the potential community of scholars, managers, and policymakers that may independently use these databases to test theories, make decisions, and formulate policies related to innovation and entrepreneurship.
... Citizens and society are interested in making these processes more transparent, since the basis on which these decisions are made is rarely available to the public (Kroll, 2015). Thus, it is essential to make the algorithms more transparent to the general community, in order to identify and deal with biases (Janssen & Kuk, 2016). Recent discussions have been held in relation to the transparency of the algorithmic systems that are being used in decision-making and incorporated into public systems, such as transportation, health and law enforcement (Fink, 2018). ...
... Recent discussions have been held in relation to the transparency of the algorithmic systems that are being used in decision-making and incorporated into public systems, such as transportation, health and law enforcement (Fink, 2018). These algorithms become increasingly autonomous and opaque, making it difficult for the public to examine and verify their impartiality (Janssen & Kuk, 2016). Decisions made by predictive algorithms can be 'opaque' due to several categories of factors, including technical (the algorithm may not be easy to explain); economic (the cost of providing transparency can be excessive, or result in compromising trade secrets); and social (disclosing 'entries' may violate privacy expectations) (ACM, 2017). ...
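Where the model family allows it, one low-cost transparency measure implied by these excerpts is publishing the decision logic in a human-readable form. The sketch below is a hedged illustration using scikit-learn's `export_text` on a shallow decision tree; the iris data stands in for any tabular decision task, and truly opaque models (deep ensembles, neural networks) do not admit this shortcut, which is precisely the opacity problem the excerpts raise.

```python
# Illustrative sketch: render a fitted decision tree as nested if/else
# rules that a non-specialist auditor can read and challenge.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)
print(export_text(clf, feature_names=data.feature_names))
```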
Article
The transparency and accountability of systems and algorithms aim to protect the user against undesirable or harmful results and to ensure the application of laws appropriate to digital environments. Thus, the objective of this study is to evaluate the transparency and accountability provided to citizens in three of the main digital public services (e-services) offered by the federal administration of Brazil (ComprasNet, Sisu and Naturalizar-se), locally recognized for their significant relevance and stage of development and use. Services were evaluated from eight perspectives: accessibility; awareness; access and repair; accountability; explanation; origin of data, privacy and fairness; auditing; and validation, accuracy and testing. Adopting a qualitative approach through comparative case studies, this research contributes to information practices theory (construction of a model for assessing transparency and accountability in digital public services). The results show the need to inform users of possible bias and harm arising from these systems, which are not readily perceived, as well as the need to clarify the benefits that arise from the collection of private data. This shows that computational models can be distorted as a result of biases contained in their input data or algorithms. This paper contributes through an innovative combination of dimensions, as a tool to evaluate the transparency and accountability of government services.
... Although consumers' voluntary participation in DCT applications is encouraged, it has been met with resistance, citing privacy issues as a primary concern (Altmann et al., 2020;Chan & Saqib, 2021;Hargittai et al., 2020;Jercich, 2020;Walrave et al., 2021). Typical privacy concerns expressed by consumers stem from the lack of personal control over their private information, how data will be stored and safeguarded, and most importantly the extent to which data will be used for its intended purpose (Janssen & Kuk, 2016;Kapa et al., 2020;Phelps et al., 2001). Moreover, digital applications often leave an electronic trail, which heightens the information risk as illustrated by the increasing cases of information leaks that have occurred in recent years (Drinkwater, 2016;Janssen & Kuk, 2016). ...
Article
Digital contact tracing (DCT) applications, as a type of location‐based application, have been employed in many countries to help mitigate the spread of the COVID‐19 virus. However, the emergence of DCTs has amplified concerns over privacy issues as consumers are confronted with the ethical dilemma that arises regarding serving public and private interests. In other words, to what extent are consumers willing to negotiate their privacy concerns to gain perceived social benefits? Drawing on Social Exchange Theory as the theoretical lens to examine interpersonal relations between the government and consumers, this study investigates the extent to which consumers' perceived social benefits (e.g., reciprocity, trust, and reputation) mediate the relationship between privacy concerns and the intention to use DCT applications. Based on 269 usable responses, the results revealed that government trust was insignificant in mediating the relationship between privacy concerns and intention to use the DCT application. Rather, the expected reciprocal benefits and reputation enhancement were found to have significant mediating effects. Perceived government regulation was also found to moderate the relationship between privacy concerns and government trust. The paper concludes with suggestions for practitioners and policymakers on the plausible strategies to encourage the adoption of DCT applications.
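The mediation claims in this abstract (e.g., reciprocity mediating the path from privacy concerns to usage intention) follow a standard two-regression logic. The sketch below is hypothetical: the variable names, synthetic data and the simple Baron-Kenny-style test are stand-ins for the study's actual measurement and structural model.

```python
# Hedged sketch of a simple mediation test: concern -> mediator -> intention.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 269                                   # matches the usable sample size
privacy_concern = rng.normal(0, 1, n)
reciprocity = -0.4 * privacy_concern + rng.normal(0, 1, n)            # mediator
intention = 0.5 * reciprocity - 0.1 * privacy_concern + rng.normal(0, 1, n)

# Path a: concern -> mediator. Paths b and c': mediator + concern -> intention.
a = sm.OLS(reciprocity, sm.add_constant(privacy_concern)).fit()
bc = sm.OLS(intention, sm.add_constant(np.c_[reciprocity, privacy_concern])).fit()
print(f"a={a.params[1]:.2f}  b={bc.params[1]:.2f}  c'={bc.params[2]:.2f}")
# The indirect (mediated) effect is approximately a * b.
```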
... At the same time, there are concerns about AI's potential to erode public values. Automation based on algorithms developed using existing data tends to perpetuate historical biases that are embedded in government data [8,9] and to magnify systemic tendencies toward inequality [9][10][11]. AI can significantly reduce the transparency of data, data analysis, and decision making [12]. ...
Article
Full-text available
While there has been growth in the literature exploring the governance of artificial intelligence (AI) and recognition of the critical importance of guiding public values, the literature lacks a systematic study focusing on public values as well as the governance challenges and solutions to advance these values. This article conducts a systematic literature review of the relationships between the public sector AI and public values to identify the impacts on public values and the governance challenges and solutions. It further explores the perspectives of U.S. government employees on AI governance and public values via a national survey. The results suggest the need for a broad inclusion of diverse public values, the salience of transparency regarding several governance challenges, and the importance of stakeholder participation and collaboration as governance solutions. This article also explores and reports the nuances in these results and their practical implications.
... This approach trains on historical data to infer relationships within the data without receiving explicit instructions, and it is suitable for various tasks using random forests, neural networks, and other complex models. The misuse of AI algorithms may exaggerate predictive capabilities, introduce unintentional bias, reinforce historical discrimination, and support a specific political leaning (Barth & Arnold, 1999;Janssen & Kuk, 2016). Meanwhile, these potential risks of AI algorithms raise concerns about fairness in public decision-making and the equal treatment of the public (Wirtz, Weyerer, & Sturm, 2020). ...
Article
Various types of algorithms are being increasingly used to support public decision-making, yet we do not know how these different algorithm types affect citizens' attitudes and behaviors in specific public affairs. Drawing on public value theory, this study uses a survey experiment to compare the effects of rule-driven versus data-driven algorithmic decision-making (ADM) on citizens' perceived fairness and acceptance. This study also examines the moderating role of familiarity with public affairs and the mediating role of perceived fairness in the relationship. The findings show that rule-driven ADM is generally perceived as fairer and more acceptable than data-driven ADM. Low familiarity with public affairs strengthens citizens' perceived fairness and acceptance of rule-driven ADM more than data-driven ADM, and citizens' perceived fairness plays a significant mediating role in the effect of rule-driven ADM on citizens' acceptance behaviors. These findings further imply that citizens' perceived fairness and acceptance of ADM are strongly shaped by how familiar they perceive the decision-making context to be. In high-familiarity AI application scenarios, the realization of public values may ultimately not be what matters for ADM acceptance among citizens.
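The study's core contrast can be illustrated in a few lines. Everything below is a hypothetical stand-in rather than the instruments used in the survey experiment: the point is that a rule-driven criterion can be read and contested directly, while a data-driven criterion lives implicitly inside a fitted model.

```python
# Illustrative contrast between rule-driven and data-driven ADM.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rule_driven_decision(income: float, dependents: int) -> bool:
    # Rule-driven ADM: an explicit, legible eligibility rule.
    return income < 20_000 or dependents >= 3

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))                   # historical case features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # historical outcomes
model = RandomForestClassifier(random_state=0).fit(X, y)

def data_driven_decision(case: np.ndarray) -> bool:
    # Data-driven ADM: the criterion is learned from past cases, not stated.
    return bool(model.predict(case.reshape(1, -1))[0])

print(rule_driven_decision(18_000, 1), data_driven_decision(X[0]))
```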
... Indeed, thick data can inspire new questions and insights that nurture envisioning potential and future meaningful scenarios (Sestino et al., 2020). In other words, the complementary use of big and thick data can mitigate the presence of biases embedded in both data categories (Crawford, 2013;Hargittai, 2015;Janssen and Kuk, 2016). Thus, our findings enrich the design thinking literature by informing scholars that different practices are needed when dealing with data and technologies (i.e., the eight practices reported in Fig. 2) in design thinking processes and activities (Gruber et al., 2015), thereby enriching prior systematizations of practices in the new product development and innovation literature (Carlgren et al., 2016;Micheli et al., 2019). ...
Article
Full-text available
Scholars and practitioners have recognized that making innovation happen today requires renewed approaches focused on agility, dynamicity, and other organizational capabilities that enable firms to cope with uncertainty and complexity. In turn, the literature has shown that design thinking is a useful methodology to cope with ill-defined and wicked problems. In this study, we address the question of the little-known role of different types of data in innovation projects characterized by ill-defined problems requiring creativity to be solved. Rooted in qualitative observation (thick data) and quantitative analyses (big data), we investigate the role of data in eight design thinking projects dealing with ill-defined and wicked problems. Our findings highlight the practical and theoretical implications of eight practices that differently make use of big and thick data, informing academics and practitioners on how different types of data are utilized in design thinking projects and the related principles and practices.
... Since the early 2000s, public policymakers have drawn on the power of information and communication technologies while trying to understand technological transformations and diversifying citizen demands (Gül, 2018). Information and communication technologies (ICT), the web and social media tools digitalize service and communication channels while also improving decision-making and governance in public administration (Janssen and Kuk, 2016). Electronic government, or e-government, refers to states' use of information and communication technologies in public administration, both to provide information and services to citizens and other stakeholders of public administration and to fulfil political functions and processes such as participation, transparency and accountability in governance (Layne and Lee, 2001;Yıldız and Leblebici, 2018). ...
Chapter
Full-text available
Emerging technologies are described as complex and inscrutable "black boxes." In the digital risk society of the 21st century, the artificial intelligence revolution and algorithms continue to significantly affect daily life, economic relations, and administrative structures. Public authorities are the primary providers and facilitators of technology integration, adaptation, and regulation. One of the most critical problems facing public policymakers and public policy researchers today is the "pacing problem" between current regulatory policies and the rapidly transforming atmosphere of technology: technological development is so rapid that current public policies and legal regulations cannot keep up with the pace of technological progress or close the gap in information asymmetry. Metaverse technology, one of the greatest technological developments of the age and the next step in the evolution of the internet from a two-dimensional interaction to a three-dimensional immersive experience, brings with it various protocols, initiatives, and discoveries. While the augmented and virtual reality possibilities offered by metaverse technology have the potential to provide new opportunities and alternative channels for public services such as education and health, they also carry the risk of bringing problems of privacy, data protection, interoperability, and decentralized security. The growing decentralization of the internet and, correspondingly, of the digital technology ecosystem increases the pressure on central authorities. Within this framework, this research evaluates public authorities' metaverse policies, legal and administrative regulatory gaps, and metaverse governance along the axis of the public sector. It also examines, in the context of public policy, the potential opportunities and threats that metaverse technology is likely to reveal in the administrative field. In addition, the "metaverse governance" approach, its components, and its research agenda are discussed in their various dimensions.
... Algorithmic bias. Governments increasingly experiment with AI-based algorithms to improve efficiency through large-scale customisation of public services, a type of task that draws on citizen profiling (Janssen & Kuk, 2016). Examples of such applications include public hospitals using machine learning algorithms to predict virus outbreaks (Mitchell et al., 2016); analytics tools to predict hotspots of crime (Goldsmith & Crawford, 2014; Meijer and Wessels, 2019) and high-risk youth (Chandler et al., 2011); and AI systems used to target health inspections in restaurant businesses (Kang et al., 2013). ...
Technical Report
Full-text available
This publication is a Science for Policy report by the Joint Research Centre (JRC), the European Commission's science and knowledge service. It aims to provide evidence-based scientific support to the European policymaking process. The scientific output expressed does not imply a policy position of the European Commission. Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use that might be made of this publication. For information on the methodology and quality underlying the data used in this publication for which the source is neither Eurostat nor other Commission services, users should contact the referenced source. The designations employed and the presentation of material on the maps do not imply the expression of any opinion whatsoever on the part of the European Union concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries.
... Smart cities provide a more effective and efficient use of urban resources, urban planning, urban infrastructure, and traffic (Chang et al., 2019). Furthermore, smart city applications also have substantial advantages in the effective and efficient use of public resources (Janssen & Kuk, 2016). ...
Article
Full-text available
Technological advancements have created notable turning points throughout the history of humanity. Influential transformations in the administrative structure are the result of modern technological discoveries. The artificial intelligence (AI) revolution and algorithms now affect daily lives, communities, and government structures more than ever. Governments are the main coordinators of technological transition and supervisors of the activities of modern public administration systems. Hence, public administration and policies have crucial responsibilities in integrating, governing, and regulating AI technology. This article concentrates on the big questions of AI in the public administration and policy literature. The big questions discussion started by Robert Behn in 1995 draws attention to the big questions as the primary driving force of a public administration research agenda. The fundamental motivation of the big questions approach is shaped by the fact that “questions are as important as answers.” Integrating AI into public administration and the policy-making process allows numerous opportunities. However, AI technology also contains multiple threats and risks in economic, social, and even political structures in the long term. This article aims to identify big questions and discuss potential answers and solutions from an AI governance research agenda perspective.
... (The Wall Street Journal, 2020). The techno-driven approach requires citizens to follow the protocols and does not consider the context (Janssen & Kuk, 2016). This is one of the reasons why it raises concerns about "erosion of privacy" and freedom (The Wall Street Journal, 2020). ...
Article
Technological advancements and big data have brought many improvements to smart city infrastructure. During the COVID‐19 outbreak, smart city technologies were considered one of the most effective means of fighting the pandemic. The use of technology, however, implies collecting, processing personal data, and making the collected data publicly available which may violate privacy. While some countries were able to freely use these technologies to fight the pandemic, many others were restricted by their privacy protection legislation. The literature suggests looking for an approach that will allow the effective use of smart city technologies during the pandemic, while complying with strict privacy protection legislation. This article explores the approach applied in Moscow, Russia, and demonstrates the existence of a hybrid model that might be considered a suitable tradeoff between personal privacy and public health. This study contributes to the literature on the role of smart city technologies during pandemics and other emergencies.
... However, owing to their black-box nature, they are not readily explainable. Existing X-AI methods make a genuine effort to open the black box, yet they cannot fully explain all the contours of a prediction [75,110,111]. As opposed to existing X-AI methods, our decision stack framework mimics biological brain principles. ...
Article
Full-text available
European law now requires AI to be explainable in the context of adverse decisions affecting the European Union (EU) citizens. At the same time, we expect increasing instances of AI failure as it operates on imperfect data. This paper puts forward a neurally inspired theoretical framework called “decision stacks” that can provide a way forward in research to develop Explainable Artificial Intelligence (X-AI). By leveraging findings from the finest memory systems in biological brains, the decision stack framework operationalizes the definition of explainability. It then proposes a test that can potentially reveal how a given AI decision was made.
... Explainable AI consists of AI systems that can explain their rationale to a human user and characterize their strengths and weaknesses [44]. With these techniques, we could know how much each variable affected the outcome, helping criminal justice professionals form knowledgeable opinions to motivate their decisions [45,46]. In this regard, it would be useful to use a human-in-the-loop approach that leverages the strengths of collaboration between humans and machines to produce the best results, reinforcing the importance of synergistic work [47,48]. ...
Article
Full-text available
Recent evolution in the field of data science has revealed the potential utility of machine learning (ML) applied to criminal justice. Hence, the literature focused on finding better techniques to predict criminal recidivism risk is rapidly flourishing. However, it is difficult to establish a state of the art for the application of ML in recidivism prediction. In this systematic review, out of 79 studies from the Scopus and PubMed online databases, we selected 12 studies that guarantee the replicability of the models across different datasets and their applicability to recidivism prediction. The different datasets and ML techniques used in each of the 12 studies have been compared using the two selected metrics. This study shows how each method applied achieves good performance, with an average score of 0.81 for ACC and 0.74 for AUC. This systematic review highlights key points that could allow criminal justice professionals to routinely exploit predictions of recidivism risk based on ML techniques. These include the presence of performance metrics, the use of transparent algorithms or explainable artificial intelligence (XAI) techniques, as well as the high quality of input data.
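For readers unfamiliar with the two review metrics, the sketch below computes ACC and AUC for a binary classifier on synthetic data. It is illustrative only; the averages of 0.81 (ACC) and 0.74 (AUC) reported above describe the reviewed studies, not this toy example.

```python
# Sketch: the two metrics used to compare the reviewed recidivism models.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))                 # ACC
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])      # AUC
print(f"ACC={acc:.2f}  AUC={auc:.2f}")
```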
... Klievink and Janssen (2009) introduce the concept of "joined-up government", where government departments could be "joined up" by an information collecting and sharing system. This could allow policymaking and public services to be more seamlessly connected, well integrated and less prone to error (Janssen and Kuk 2016). This body of literature is vital to the foundation of understanding a government's enhanced capability of extracting intelligence and insights from Big Data, and assimilating it into policy making and service delivery. ...
Article
Full-text available
While the Big Data revolution is transforming public policy, some debates and competing perspectives on the impact of the disruptive technology of Big Data analytics remain. Although trade-offs among objectives are inevitable in Big Data applications, its ultimate impact would depend on the moderating factors, which vary across contexts such as policy areas and national systems. Integrating the literature from multiple disciplines, this article identifies some of the critical moderating factors accounting for the differentials of Big Data impacts and develops a typology of its applications in public policy as a heuristic to understand and reconcile competing perspectives.
... Regarding transformative effects, the defaults of an algorithm have been shown to be extremely important; these default values can determine the implementation of an algorithm for decades, so they need to be considered with care [29,40]. Finally, within traceability there is the issue of where to place the moral judgement; it is crucial to be able to know who will be held accountable when an algorithmic decision results in negative societal impacts [17,32,34,37]. ...
Preprint
With the increasing pervasiveness of algorithms across industry and government, a growing body of work has grappled with how to understand their societal impact and ethical implications. Various methods have been used at different stages of algorithm development to encourage researchers and designers to consider the potential societal impact of their research. An understudied yet promising area in this realm is using participatory foresight to anticipate these different societal impacts. We employ crowdsourcing as a means of participatory foresight to uncover four different types of impact areas based on a set of governmental algorithmic decision making tools: (1) perceived valence, (2) societal domains, (3) specific abstract impact types, and (4) ethical algorithm concerns. Our findings suggest that this method is effective at leveraging the cognitive diversity of the crowd to uncover a range of issues. We further analyze the complexities within the interaction of the impact areas identified to demonstrate how crowdsourcing can illuminate patterns around the connections between impacts. Ultimately this work establishes crowdsourcing as an effective means of anticipating algorithmic impact which complements other approaches towards assessing algorithms in society by leveraging participatory foresight and cognitive diversity.
... 'The digital algorithm is thus a sequence of computational steps that transform the input into the output' (Cormen, Leiserson, Rivest, & Stein, 2009). The term algorithm has often been associated with big data to indicate algorithmic models capable of processing a huge amount of data and providing solutions to any question or problem (Finn, 2017;Janssen & Kuk, 2016;Li, Jiang, Yang, & Cuzzocrea, 2015;Moschovakis, 2001). Substantially, digital algorithms represent the 'language' adopted by the platform-based innovation ecosystem, allowing communication and interaction among heterogeneous actors. ...
Article
The platformization seems to be a demiurgic force, increasingly (re)shaping this millennium and its socio-economic, technological and physical structures, institutions, and human lives. Innovation ecosystems are experiencing this platformization, leading to the rise of platform-based innovation ecosystems. However, the industrial and managerial literature still lacks a shared definition, a consistent theoretical and strategic framework to explain how platform-based innovation ecosystems emerge and replicate from market to market. This conceptual work attempts to fill those gaps by integrating the extant literature on innovation ecosystems in two ways. First, moving from the literature on innovation ecosystems and industry platforms, using systems thinking framing, it explains the platformization of innovation ecosystems through the double lens structure-system. Second, it identifies the holographic strategy as one of the typical patterns featuring platform-based innovation ecosystem envelopment beyond extant market boundaries. These conceptualizations have insightful theoretical, managerial, and policy implications. In particular, the work discusses the ecosystem as a valid unit of analysis for understanding such an unprecedented shaped-by-platform landscape. Then, it describes the growth strategies of the platform-based innovation ecosystem supporting the platform sponsor in mastering multipoint competition. Eventually, the study pinpoints crucial issues for policymakers in regulating the impact that platformization is having on society.
... Governments have developed and tested AI-based algorithms to pursue efficiency in public services (Janssen & Kuk, 2016). For example, AI systems have led to the development of chatbot services in public sectors, autonomous vehicles, autonomous planning, translation, and medical services, based on collected big data and machine intelligence (Dwivedi et al., 2021). ...
Article
Full-text available
In scarcely a decade, a "labification" phenomenon has taken hold globally. The search for innovative policy solutions for social problems is embedded within scientific experimental-like structures often referred to as policy innovation labs (PILs). With the rapid technological changes (e.g., big data, artificial intelligence), data-based PILs have emerged. Despite the growing importance of these PILs in the policy process, very little is known about them and how they contribute to policy outcomes. This study analyzes 133 data-based PILs and examines their contribution to policy capacity. We adopt the policy capacity framework to investigate how data-based PILs contribute to enhancing analytical, organizational, and political policy capacity. Many data-based PILs are located in Western Europe and North America, initiated by governments, and employ multi-domain administrative data with advanced technologies. Our analysis finds that data-based PILs enhance analytical and operational policy capacity at the individual, organizational and systemic levels but do little to enhance political capacity. It is this deficit for which we suggest possible strategies for data-based PILs.
... Several researchers have in fact pointed out that data is never neutral but always carries intrinsic biases stemming from the culture in which it is generated, collected and analysed (Janssen & Kuk, 2016;van Dijck, 2014). There are demographics within societies which are structurally less involved in formal and data-generating activities, and thus provide less data, negatively impacting policy making and public services (Giest & Samuels, 2020), which is especially relevant for the use of social media data (Boyd & Crawford, 2012). ...
Article
Artificial Intelligence is increasingly being used by public sector organisations. Previous research highlighted that the use of AI technologies in government could improve policy making processes, public service delivery and the internal management of public administrations. In this article, we explore to which extent the use of AI in the public sector impacts these core governance functions. Findings from the review of a sample of 250 cases across the European Union show that AI is used mainly to support improving public service delivery, followed by enhancing internal management, and only in a limited number of cases to assist, directly or indirectly, policy decision-making. The analysis suggests that different types of AI technologies and applications are used in different governance functions, highlighting the need for further in-depth investigation to better understand the role and impact of AI use in what is being defined as the governance "of, with and by AI".
... Artificial Intelligence (AI) and machine learning are powerful digital technologies that could be used towards this direction (Lee, 2019). In this case, certain issues need to be addressed including ethical challenges, such as the lack of trust in AI-based decisions (Sun & Medaglia, 2019) and bias of algorithms in policy making (Janssen & Kuk, 2016), organizational and managerial challenges, such as resistance to data sharing (World Bank, 2020), and others. ...
Article
Full-text available
Emergency Departments (EDs) are the most overcrowded places in public hospitals. Machine learning can support decisions on effective ED resource management by accurately forecasting the number of ED visits. In addition, Explainable Artificial Intelligence (XAI) techniques can help explain decisions from forecasting models and address challenges like lack of trust in machine learning results. The objective of this paper is to use machine learning and XAI to forecast and explain the ED visits on the next on duty day. Towards this end, a case study is presented that uses the XGBoost algorithm to create a model that forecasts the number of patient visits to the ED of the University Hospital of Ioannina in Greece, based on historical data from patient visits, time-based data, dates of holidays and special events, and weather data. The SHapley Additive exPlanations (SHAP) framework is used to explain the model. The evaluation of the forecasting model resulted in an MAE value of 18.37, revealing a more accurate model than the baseline, with an MAE of 29.38. The number of patient visits is mostly affected by the day of the week of the on duty day, the mean number of visits in the previous four on duty days, and the maximum daily temperature. The results of this work can help policy makers in healthcare make more accurate and transparent decisions that increase the trust of people affected by them (e.g., medical staff).
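A compressed sketch of the pipeline this abstract describes: an XGBoost regressor over calendar and weather features, scored with MAE against a held-out set and explained with SHAP. The feature names and synthetic data below are assumptions standing in for the (non-public) Ioannina dataset.

```python
# Hedged sketch of the forecast-and-explain pipeline described above.
import numpy as np
import pandas as pd
import shap
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 600
X = pd.DataFrame({
    "day_of_week": rng.integers(0, 7, n),
    "mean_visits_prev_4_duties": rng.normal(120, 15, n),
    "max_daily_temp": rng.normal(22, 8, n),
    "is_holiday": rng.integers(0, 2, n),
})
y = (100 + 5 * X["day_of_week"] + 0.3 * X["mean_visits_prev_4_duties"]
     + rng.normal(0, 10, n))               # synthetic visit counts

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=200, max_depth=4).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))

# SHAP attributes each forecast to the input features; mean |SHAP| per
# feature is how drivers like day of week and temperature surface.
shap_values = shap.TreeExplainer(model).shap_values(X_te)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns))
```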
... So far, in governments, improving governance through the concept of e-Governance has been the technology focus in public administration. Referring to Janssen and Kuk (2016), governments can also leverage such techniques to enhance the accuracy, efficiency and speed of policy formulation and implementation, and the evaluation of such interventions through analytics. ...
... In the final chapters of her book Engineering a Safer World: Systems Thinking Applied to Safety, Leveson underlines the importance of adequate management and safety culture to accomplish any of the goals described in the previous sections (Leveson, 2012). The increasing scale and complexity of AI systems means that these challenges span broader institutional networks, often comprising public, private, knowledge and societal institutions (Janssen and Kuk, 2016). ...
Preprint
This chapter formulates seven lessons for preventing harm in artificial intelligence (AI) systems based on insights from the field of system safety for software-based automation in safety-critical domains. New applications of AI across societal domains and public organizations and infrastructures come with new hazards, which lead to new forms of harm, both grave and pernicious. The text addresses the lack of consensus for diagnosing and eliminating new AI system hazards. For decades, the field of system safety has dealt with accidents and harm in safety-critical systems governed by varying degrees of software-based automation and decision-making. This field embraces the core assumption of systems and control that AI systems cannot be safeguarded by technical design choices on the model or algorithm alone, instead requiring an end-to-end hazard analysis and design frame that includes the context of use, impacted stakeholders and the formal and informal institutional environment in which the system operates. Safety and other values are then inherently socio-technical and emergent system properties that require design and control measures to instantiate these across the technical, social and institutional components of a system. This chapter honors system safety pioneer Nancy Leveson, by situating her core lessons for today's AI system safety challenges. For every lesson, concrete tools are offered for rethinking and reorganizing the safety management of AI systems, both in design and governance. This history tells us that effective AI safety management requires transdisciplinary approaches and a shared language that allows involvement of all levels of society.
... While the field of analytics continues to grow, a surplus of data keeps being spawned and the limitations to capitalize on such data usage remain greatly unexplored (Järvinen and Karjaluoto, 2015;Järvinen et al., 2012). Data analytics is expected to provide valuable insights (Desouza and Jacob, 2017;Fosso Wamba et al., 2015;Janssen and Kuk, 2016;Kyriazis et al., 2020;Palma-Ruiz and Gómez-Martínez, 2019). Consequently, there is a need to develop models and methods that are inclusive in terms of information and various stakeholders, involve reasonable analysis and synthesis, and are quick (Bryson et al., 2010, p. 13). ...
Article
Full-text available
Esports has seen a phenomenal explosion in popularity in recent years, gaining increasing interest from the media, sports, and technology industries. The purpose of this study is to show an overview of the recent evolution of the gaming market in representative countries in Eastern Asia, Western Europe, and North America during 2017-2019, and the corresponding growth projections for the next five years. For this purpose, descriptive, correlational, and forecasting analyses were used to assess the relationships among key variables associated with the growth of the gaming industry and to show different possibilities to address the data using data analytics. The games market revenues, total number of players, Google trends data, GDP per capita and online population were studied as possible key influencers to explain the industry's growth. Predictive analytics with MS Power BI revealed a positive correlation between GDP per capita and market revenues and players in European and North American countries, while in Asia it was just the opposite. Also, a positive relationship between Google trends in esports and the games market revenues is noted. Forecasts showed significant growth for each region. Practical implications and future research directions are discussed.
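The correlational step described above reduces to grouped Pearson correlations. A minimal sketch with invented placeholder figures (not the study's data), mirroring only the reported sign reversal between the Western and Asian samples:

```python
# Illustrative per-region correlation; all numbers are placeholders.
import pandas as pd

df = pd.DataFrame({
    "region": ["EU", "EU", "EU", "NA", "NA", "NA", "Asia", "Asia", "Asia"],
    "gdp_per_capita": [39_000, 42_000, 46_000, 55_000, 60_000, 63_000,
                       10_000, 30_000, 40_000],
    "revenue_musd": [2_900, 3_500, 4_100, 4_000, 4_800, 5_500,
                     14_000, 9_500, 8_000],
})

# Positive correlation in the EU/NA rows, negative in the Asian rows.
for region, g in df.groupby("region"):
    print(region, round(g["gdp_per_capita"].corr(g["revenue_musd"]), 2))
```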
... Though not prominent in the discourse, there is solid research on which to base such an effort. Trustworthy AI may well require programming and design that is technologically robust (Bellamy et al., 2019; Etzioni & Etzioni, 2016; Harrison et al., 2019; Kroll et al., 2017; Liao & Muller, 2019; Sokol et al., 2020; Veale et al., 2018), processes and policy frameworks that protect citizens in the slightest case of doubt (Balaram et al., 2018; Brundage et al., 2020; Dignum, 2019; Janssen & Kuk, 2016; Kemper & Kolkman, 2019; Kolkman, 2020; Lee, 2018; Mulgan, 2016; Reisman et al., 2018; Vassilakopoulou, 2020), or public-sector routines and protocols that allow for AI and human-service offers to run side-by-side for the foreseeable future, questioning and learning from each other (Berscheid & Roewer-Despres, 2019; Janssen et al., 2020; Katell et al., 2020; Rahwan, 2017; Vestby & Vestby, 2019; Yeung & Lodge, 2019). ...
Preprint
Full-text available
This commentary draws critical attention to the ongoing commodification of trust in policy and scholarly discourses of artificial intelligence (AI) and society. Based on an assessment of publications discussing the implementation of AI in governmental and private services, our findings indicate that this discursive trend towards commodification is driven by the need for a trusting population of service users in order to harvest data at scale and leads to the discursive construction of trust as an essential good on a par with data as raw material. This discursive commodification is marked by a decreasing emphasis on trust understood as the expected reliability of a trusted agent, and increased emphasis on instrumental and extractive framings of trust as a resource. This tendency, we argue, does an ultimate disservice to developers, users, and systems alike, insofar as it obscures the subtle mechanisms through which trust in AI systems might be built, making it less likely that it will be.
Article
AI solutions can significantly leverage open government data (OGD) ecosystems in public governance. For that, it is important to design effective and transparent governance mechanisms that create value in an OGD ecosystem through AI solutions. This article develops a conceptual model for a systematic design of an OGD governance model, which adopts a platform governance approach and integrates the governance needs derived from the use of AI. The purpose of the conceptual model is to systematically identify and analyze the interrelationships among multiple change factors on OGD governance design and to project available AI-based solutions for the OGD ecosystem by assessing the managerial, organizational, legal, technological, moral, and institutional variances. The proposed ‘6-step model’ suggests that an AI-compatible OGD ecosystem design requires (i) identifying contingencies, (ii) identifying data prosumers, (iii) assigning data governance roles, (iv) identifying design values, (v) designing the governance of AI, and (vi) designing the governance by AI. Through the recursive and reflexive analysis of each step, policymakers and system designers can develop reliable strategies in leveraging AI solutions for the use of OGD in public governance.
Chapter
Most government agencies today recognize that data is essential. However, creating a culture that encourages public servants to perceive data as an asset and make data-driven decisions is challenging. Data governance helps reduce the cost of data management and create value from data. However, data is often dispersed across many organizations and stored and utilized under different data policies, which can lead to accountability issues, poor data quality, and diminished economic value from data utilization. A government data governance framework is one solution to this problem, but there is a lack of discussion of a national data governance framework. This paper therefore analyzes the national data strategies (NDS) of the US, the UK, Australia, and Japan, based on the Data Governance Framework (DGF) of the Data Governance Institute (DGI), to derive the essential considerations in formulating national data strategies, and then suggests the components of a Government Data Governance Framework. These components are essential elements to be discussed in the establishment of an NDS. This paper's results can help in establishing a new NDS or modifying an established one.
Article
Digital platforms and application software have changed how people work in a range of industries. Empirical studies of the gig economy have raised concerns about new systems of algorithmic management exercised over workers and how these alter the structural conditions of their work. Drawing on the republican literature, we offer a theoretical account of algorithmic domination and a framework for understanding how it can be applied to ride hail and food delivery services in the on-demand economy. We argue that certain algorithms can facilitate new relationships of domination by sustaining a socio-technical system in which the owners and managers of a company dominate workers. This analysis has implications for the growing use of algorithms throughout the gig economy and broader labor market.
Article
Governments increasingly rely on large amounts of data to deliver public services. In response, there is a robust discussion about the implications of this trend for efficiency and economy, but much less attention is paid to social equity. To address this issue, our study synthesizes cross‐disciplinary research on the relationship between data‐driven public services and social equity. Based on a systematic literature review of 190 articles covering a decade of research, we demonstrate how public sector data applications relate to social equity in terms of access to services, treatment, service quality and outcomes. Our review identifies key mechanisms related to data collection, storage, analysis, and usage that need to be addressed to ensure more equitable data‐driven public services. This review contributes to public administration research and practice by highlighting the complexities of social equity in the digital age.
Article
How much do citizens support AI in government and politics at different levels of decision‐making authority and to what extent is this AI support associated with citizens' conceptions of democracy? Using original survey data from Germany, the analysis shows that people are overall skeptical toward using AI in the political realm. The findings suggest that how much citizens endorse democracy as liberal democracy as opposed to several of its disfigurations matters for AI support, but only in high‐level politics. While a stronger commitment to liberal democracy is linked to lower support for AI, the findings contradict the idea that a technocratic notion of democracy lies behind greater acceptance of political AI uses. Acceptance is higher only among those holding reductionist conceptions of democracy which embody the idea that whatever works to accommodate people's views and preferences is fine. Populists, in turn, appear to be against AI in political decision‐making.
Chapter
The article presents an algorithm for analyzing the communicative behavior of actors in cyberspace to determine the perception of, and track opinions and attitude changes among, metropolitan residents with regard to digital transformation during the pandemic. In this study, the authors focused on negative reactions of residents of the metropolis to the transformation of IT technologies. The study involved a cross-disciplinary approach. The materials for the study were data from instant messengers, microblogs, social networks, blogs, online media, forums, thematic portals, print media, TV, reviews, shops, and video hosting services. The results of the study show that changes need to be made to the existing urban system of governance, that new methods for linking big data to the findings of opinion polls on socially relevant issues need to be developed, that urban communities have to be involved in the discussion of the digital transformation of cities, and that a compromise has to be struck between the implementation of new technologies and the protection of citizens from unwarranted interference with their private lives and abuse of their digital identities.
Article
Use of big data in the nonprofit sector is on the rise as a part of a trend toward “data-driven” management. While big data has its critics, few have addressed fundamental ontological and epistemological issues big data presents for the nonprofit sector. In this article, we address some of these issues including most prominently the notion that big data are value neutral and divorced from context. Drawing on data feminism, an intersectional feminist framework focusing on critically interrogating our experience with data and data-driven technologies, we examine the power differentials inherent in the construction of big data and challenge the claims, priorities, and inequities it produces specifically for nonprofit work. We conclude the article with a call for nonprofit scholars and practitioners to employ a data feminist framework to harness the power of big (and small) data for justice, equity, and co-liberation through nonprofit work.
Article
Administrative errors in unemployment insurance (UI) decisions give rise to a public values conflict between efficiency and efficacy. We analyze whether artificial intelligence (AI) – in particular, methods in machine learning (ML) – can be used to detect administrative errors in UI claims decisions, in terms of both accuracy and normative tradeoffs. We use 16 years of US Department of Labor audit and policy data on UI claims to analyze the accuracy of 7 different random forest and deep learning models. We further test weighting schemas and synthetic data approaches to correct imbalances in the training data. A random forest model using gradient boosting is more accurate, along several measures, and preferable in terms of public values, than every deep learning model tested. Adjusting model weights produces significant recall improvements for low-n outcomes, at the expense of precision. Synthetic data produces attenuated improvements and drawbacks relative to weights.
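The weighting experiment described here is straightforward to sketch. The Python fragment below is a minimal illustration of class reweighting for rare outcomes, not the authors' actual pipeline: the file name and feature columns are hypothetical, and a plain random forest stands in for their gradient-boosted variant.

```python
# Illustrative sketch only: hypothetical file, columns, and model; the
# study's actual features, audit data, and boosted model are not reproduced.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

claims = pd.read_csv("ui_claims_audits.csv")      # hypothetical audit file
X = claims.drop(columns=["error_type"])           # hypothetical features
y = claims["error_type"]                          # e.g. none / under / overpayment

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for weights in (None, "balanced"):
    # "balanced" reweights classes inversely to their frequency, which
    # tends to raise recall on rare error categories at the cost of
    # precision, mirroring the tradeoff the abstract reports.
    model = RandomForestClassifier(n_estimators=500, class_weight=weights,
                                   random_state=0).fit(X_tr, y_tr)
    print(f"class_weight={weights}")
    print(classification_report(y_te, model.predict(X_te)))
```

Comparing the two classification reports makes the normative tradeoff concrete: the reweighted model catches more rare errors but flags more false positives.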
Article
Calls for public engagement and participation in AI governance align strongly with a public value management approach to public administration. Simultaneously, the prominence of commercial vendors and consultants in AI discourse emphasizes market value and efficiency in a way often associated with the private sector and New Public Management. To understand how this might influence the consolidation of AI governance regimes and decision-making by public administrators, 16 national strategies for AI are subjected to content analysis. References to the public's role and public engagement mechanisms are mapped across national strategies, as is the articulation of values related to professionalism, efficiency, service, engagement, and the private sector. Though engagement rhetoric is common, references to specific engagement mechanisms and activities are rare. Analysis of value relationships highlights congruence of engagement values with professionalism and private sector values, and raises concerns about neoliberal technology frames that normalize AI, obscuring policy complexity and trade-offs.
Chapter
Algorithmic systems used in public administration can create or reinforce digital cages. A digital cage refers to algorithmic systems or information architectures that create their own reality through formalization, frequently resulting in incorrect automated decisions with severe impact on citizens. Although much research has identified how algorithmic artefacts can contribute to digital cages and their unintended consequences, the emergence of digital cages from human actions and institutions is poorly understood. Embracing a broader lens on how technology, human activity, and institutions shape each other, this paper explores which design practices in public organizations can result in the emergence of digital cages. Using Orlikowski's structurational model of technology, we identified four design practices in observations and interviews conducted at a consortium of public organizations. This study shows that design processes of public algorithmic systems (1) are often narrowly focused on technical artefacts, (2) disregard the normative basis for these systems, (3) depend on involved actors' awareness of socio-technics in public algorithmic systems, and (4) are approached as linear rather than iterative. These four practices indicate that institutions and human actions in design processes can contribute to the emergence of digital cages, but also that institutional, as opposed to technical, possibilities to address their unintended consequences are often ignored. Further research is needed to examine how design processes in public organizations can evolve into socio-technical processes, how they can become more democratic, and how power asymmetries in the design process can be mitigated. Keywords: Public algorithmic system, Digital cage, Design process, Structuration
Article
Public agencies have a strong interest in artificial intelligence (AI) systems. However, many public agencies lack tools and frameworks to articulate a viable business model and evaluate public value as they consider investing in AI systems. The business model canvas used extensively in the private sector offers us a foundation for designing a public AI canvas (PAIC). Employing a design science approach, this study reports on the design and evaluation of PAIC. The PAIC comprises three distinctive layers: (1) the public value-oriented AI-enablement layer; (2) the public value logic layer; and (3) the public value-oriented social guidance layer. PAIC offers guidance on innovating the business models of public agencies to create and capture AI-enabled value. For practitioners, PAIC presents a validated tool to guide AI deployment in public agencies.
Article
Courses and training in public decision-making have often disappeared from Public Administration curricula. This paper argues that this is unfortunate, as the skills they teach are sorely needed to steer developments toward the Fourth Industrial Revolution (4IR). Whereas some see this as a macro development that simply befalls countries, this paper argues otherwise. Decision-making by individual and corporate actors is judged to be central to the 4IR, which makes steering it both possible and desirable. Without training in the needed decision-making skills, our graduates will not be prepared to do so and will not become the responsible public officials able to direct 4IR developments.
Article
Drawing on the logic of Simon’s decision-making theory, this study compares the effects of AI versus humans on discretion, client meaningfulness, and willingness-to-implement, and examines the moderating role of different types of decisions on those relationships. The findings show that AI usage has a negative effect on perceived discretion and a positive effect on willingness-to-implement. Conversely, non-programmed decisions tend to have a positive effect on both perceived discretion and willingness-to-implement. Moreover, non-programmed decisions mitigated the effect of AI usage on perceived discretion, while programmed decisions interacted with AI usage to improve client meaningfulness and strengthen willingness-to-implement.
Article
Ethics, explainability, responsibility, and accountability are important concepts for questioning the societal impacts of artificial intelligence and machine learning (AI), but are insufficient to guide the public sector in regulating and implementing AI. Recent frameworks for AI governance help to operationalize these by identifying the processes and layers of governance in which they must be considered, but do not provide public sector workers with guidance on how they should be pursued or understood. This analysis explores how the concept of sustainable AI can help to fill this gap. It does so by reviewing how the concept has been used by the research community and aligning research on sustainable development with research on public sector AI. Doing so identifies the utility of boundary conditions that have been asserted for social sustainability according to the Framework for Strategic Sustainable Development, and which are here integrated with prominent concepts from the discourse on AI and society. This results in a conceptual model that integrates five boundary conditions to assist public sector decision-making about how to govern AI: Diversity, Capacity for learning, Capacity for self-organization, Common meaning, and Trust. These are presented together with practical approaches for applying them, and guiding questions to aid public sector workers in making the decisions that are required by other operational frameworks for ethical AI.
Article
Full-text available
In this paper a framework is constructed to hypothesize if and how smart city technologies and urban big data produce privacy concerns among the people in these cities (as inhabitants, workers, visitors, and otherwise). The framework is built on two recurring dimensions in research about people's privacy concerns: one dimension represents that people perceive particular data as more personal and sensitive than others; the other represents that people's privacy concerns differ according to the purpose for which data is collected, with the contrast between service and surveillance purposes being paramount. These two dimensions produce a 2×2 framework that hypothesizes which technologies and data applications in smart cities are likely to raise people's privacy concerns, ranging from raising hardly any concern (impersonal data, service purpose) to raising controversy (personal data, surveillance purpose). Specific examples from the city of Rotterdam are used to further explore and illustrate the academic and practical usefulness of the framework. It is argued that the general hypothesis of the framework offers clear directions for further empirical research and theory building about privacy concerns in smart cities, and that it provides a sensitizing instrument for local governments to identify the absence, presence, or emergence of privacy concerns among their citizens.
Chapter
Full-text available
Algorithms (particularly those embedded in search engines, social media platforms, recommendation systems, and information databases) play an increasingly important role in selecting what information is considered most relevant to us, a crucial feature of our participation in public life. As we have embraced computational tools as our primary media of expression, we are subjecting human discourse and knowledge to the procedural logics that undergird computation. What we need is an interrogation of algorithms as a key feature of our information ecosystem, and of the cultural forms emerging in their shadows, with a close attention to where and in what ways the introduction of algorithms into human knowledge practices may have political ramifications. This essay is a conceptual map to do just that. It proposes a sociological analysis that does not conceive of algorithms as abstract, technical achievements, but suggests how to unpack the warm human and institutional choices that lie behind them, to see how algorithms are called into being by, enlisted as part of, and negotiated around collective efforts to know and be known.
Article
Full-text available
Algorithms, or rather algorithmic actions, are seen as problematic because they are inscrutable, automatic, and subsumed in the flow of daily practices. Yet, they are also seen to be playing an important role in organizing opportunities, enacting certain categories, and doing what David Lyon calls “social sorting.” Thus, there is a general concern that this increasingly prevalent mode of ordering and organizing should be governed more explicitly. Some have argued for more transparency and openness, others have argued for more democratic or value-centered design of such actors. In this article, we argue that governing practices—of, and through algorithmic actors—are best understood in terms of what Foucault calls governmentality. Governmentality allows us to consider the performative nature of these governing practices. They allow us to show how practice becomes problematized, how calculative practices are enacted as technologies of governance, how such calculative practices produce domains of knowledge and expertise, and finally, how such domains of knowledge become internalized in order to enact self-governing subjects. In other words, it allows us to show the mutually constitutive nature of problems, domains of knowledge, and subjectivities enacted through governing practices. In order to demonstrate this, we present attempts to govern academic writing with a specific focus on the algorithmic action of Turnitin.
Article
Full-text available
We offer an evaluation of the Social Security Administration demographic and financial forecasts used to assess the long-term solvency of the Social Security Trust Funds. This same forecasting methodology is also used in evaluating policy proposals put forward by Congress to modify the Social Security program. Ours is the first evaluation to compare the SSA forecasts with observed truth; for example, we compare forecasts made in the 1980s, 1990s, and 2000s with outcomes that are now available. We find that Social Security Administration forecasting errors—as evaluated by how accurate the forecasts turned out to be—were approximately unbiased until 2000 and then became systematically biased afterward, and increasingly so over time. Also, most of the forecasting errors since 2000 are in the same direction, consistently misleading users of the forecasts to conclude that the Social Security Trust Funds are in better financial shape than turns out to be the case. Finally, the Social Security Administration's informal uncertainty intervals appear to have become increasingly inaccurate since 2000. At present, the Office of the Chief Actuary, at the Social Security Administration, does not reveal in full how its forecasts are made. Every future Trustees Report, without exception, should include a routine evaluation of all prior forecasts, and a discussion of what forecasting mistakes were made, what was learned from the mistakes, and what actions might be taken to improve forecasts going forward. And the Social Security Administration and its Office of the Chief Actuary should follow best practices in academia and many other parts of government and make their forecasting procedures public and replicable, and should calculate and report calibrated uncertainty intervals for all forecasts.
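The kind of routine retrospective evaluation the authors call for is simple to express in code. A minimal sketch with placeholder numbers, not actual Trustees Report figures:

```python
# Sketch of a retrospective forecast evaluation in the spirit of the
# article; the arrays below are invented placeholders, not SSA data.
import numpy as np

# Hypothetical vintages: forecasts made in a given year vs. the outcomes
# eventually observed for that year.
forecast = np.array([2.1, 2.3, 2.2, 2.5, 2.6])
observed = np.array([2.0, 2.2, 2.4, 2.9, 3.1])

errors = forecast - observed
print("mean error (bias):", errors.mean())

# Unbiased forecasts should show roughly balanced error signs; a long run
# of same-signed errors is the systematic bias the authors describe.
same_sign = max((errors > 0).mean(), (errors < 0).mean())
print("share of errors with the dominant sign:", same_sign)
```

Publishing such a calculation with every forecast vintage is essentially what the authors propose as a routine part of each Trustees Report.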
Article
Full-text available
Large-scale data sets of human behavior have the potential to fundamentally transform the way we fight diseases, design cities, or perform research. Metadata, however, contain sensitive information. Understanding the privacy of these data sets is key to their broad use and, ultimately, their impact. We study 3 months of credit card records for 1.1 million people and show that four spatiotemporal points are enough to uniquely reidentify 90% of individuals. We show that knowing the price of a transaction increases the risk of reidentification by 22%, on average. Finally, we show that even data sets that provide coarse information at any or all of the dimensions provide little anonymity and that women are more reidentifiable than men in credit card metadata. Copyright © 2015, American Association for the Advancement of Science.
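The paper's central measurement, often called unicity, can be sketched compactly: draw k known points from one person's trace and count how often those points match exactly one person in the data set. The toy traces below are invented for illustration and stand in for the study's 1.1 million credit card records:

```python
# Minimal unicity sketch: what fraction of users is uniquely pinned down
# by k random points of their trace? Toy data, not the study's records.
import random
from collections import defaultdict

# traces: user -> set of (shop, day) points (hypothetical toy data)
traces = {
    "u1": {("bakery", 1), ("cafe", 2), ("gym", 3), ("bar", 5)},
    "u2": {("bakery", 1), ("cafe", 2), ("gym", 4), ("bar", 6)},
    "u3": {("cinema", 1), ("cafe", 2), ("gym", 3), ("bar", 5)},
}

def unicity(traces, k, trials=1000):
    users = list(traces)
    unique = 0
    for _ in range(trials):
        u = random.choice(users)
        sample = random.sample(sorted(traces[u]), k)   # k points known to an attacker
        matches = [v for v in users if set(sample) <= traces[v]]
        unique += (matches == [u])                     # exactly one candidate
    return unique / trials

print(unicity(traces, k=2))
# The study found that k=4 spatiotemporal points sufficed to uniquely
# reidentify about 90% of individuals.
```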
Article
Full-text available
This article examines how the availability of Big Data, coupled with new data analytics, challenges established epistemologies across the sciences, social sciences and humanities, and assesses the extent to which they are engendering paradigm shifts across multiple disciplines. In particular, it critically explores new forms of empiricism that declare ‘the end of theory’, the creation of data-driven rather than knowledge-driven science, and the development of digital humanities and computational social sciences that propose radically different ways to make sense of culture, history, economy and society. It is argued that: (1) Big Data and new data analytics are disruptive innovations which are reconfiguring in many instances how research is conducted; and (2) there is an urgent need for wider critical reflection within the academy on the epistemological implications of the unfolding data revolution, a task that has barely begun to be tackled despite the rapid changes in research practices presently taking place. After critically reviewing emerging epistemological positions, it is contended that a potentially fruitful approach would be the development of a situated, reflexive and contextually nuanced epistemology.
Article
Full-text available
‘Smart cities’ is a term that has gained traction in academia, business and government to describe cities that, on the one hand, are increasingly composed of and monitored by pervasive and ubiquitous computing and, on the other, whose economy and governance is being driven by innovation, creativity and entrepreneurship, enacted by smart people. This paper focuses on the former and, drawing on a number of examples, details how cities are being instrumented with digital devices and infrastructure that produce ‘big data’. Such data, smart city advocates argue, enables real-time analysis of city life, new modes of urban governance, and provides the raw material for envisioning and enacting more efficient, sustainable, competitive, productive, open and transparent cities. The final section of the paper provides a critical reflection on the implications of big data and smart urbanism, examining five emerging concerns: the politics of big urban data, technocratic governance and city development, corporatisation of city governance and technological lock-ins, buggy, brittle and hackable cities, and the panoptic city.
Article
Full-text available
Algorithms for computing the inverse Laplace transform that consist essentially in choosing a series expansion for the original function are particularly effective in many cases and are widely used. The main purpose of this paper is to review these algorithms in the context of regularization. We relate this viewpoint to the design of reliable algorithms destined to be run on finite precision arithmetic systems.
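In generic notation (an illustrative formulation, not the specific algorithms the paper reviews), the series-expansion idea is: expand the unknown original in a basis, and linearity of the Laplace transform carries the expansion over to the known transform,

```latex
f(t) \approx \sum_{n=0}^{N} a_n \,\varphi_n(t)
\quad\Longrightarrow\quad
F(s) \approx \sum_{n=0}^{N} a_n \,\Phi_n(s),
\qquad \Phi_n(s) = \mathcal{L}\{\varphi_n\}(s).
```

The coefficients a_n are then fitted from samples of the known transform F(s), and the truncation level N plays the role of the regularization parameter when the computation runs in finite-precision arithmetic.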
Article
Full-text available
The era of Big Data has begun. Computer scientists, physicists, economists, mathematicians, political scientists, bio-informaticists, sociologists, and other scholars are clamoring for access to the massive quantities of information produced by and about people, things, and their interactions. Diverse groups argue about the potential benefits and costs of analyzing genetic sequences, social media interactions, health records, phone logs, government records, and other digital traces left by people. Significant questions emerge. Will large-scale search data help us create better tools, services, and public goods? Or will it usher in a new wave of privacy incursions and invasive marketing? Will data analytics help us understand online communities and political movements? Or will it be used to track protesters and suppress speech? Will it transform how we study human communication and culture, or narrow the palette of research options and alter what ‘research’ means? Given the rise of Big Data as a socio-technical phenomenon, we argue that it is necessary to critically interrogate its assumptions and biases. In this article, we offer six provocations to spark conversations about the issues of Big Data: a cultural, technological, and scholarly phenomenon that rests on the interplay of technology, analysis, and mythology that provokes extensive utopian and dystopian rhetoric.
Book
Full-text available
Every day, we make decisions on topics ranging from personal investments to schools for our children to the meals we eat to the causes we champion. Unfortunately, we often choose poorly. The reason, the authors explain, is that, being human, we all are susceptible to various biases that can lead us to blunder. Our mistakes make us poorer and less healthy; we often make bad decisions involving education, personal finance, health care, mortgages and credit cards, the family, and even the planet itself. Thaler and Sunstein invite us to enter an alternative world, one that takes our humanness as a given. They show that by knowing how people think, we can design choice environments that make it easier for people to choose what is best for themselves, their families, and their society. Using colorful examples from the most important aspects of life, Thaler and Sunstein demonstrate how thoughtful "choice architecture" can be established to nudge us in beneficial directions without restricting freedom of choice. Nudge offers a unique new take, from neither the left nor the right, on many hot-button issues, for individuals and governments alike. This is one of the most engaging and provocative books to come along in many years.
Book
Materiality and Space focuses on how organizations and managing are bound with the material forms and spaces through which humans act and interact at work. It concentrates on organizational practices and pulls together three separate domains that are rarely looked at together: sociomateriality, sociology of space, and social studies of technology. The contributions draw on and combine several of these domains, and propose analyses of spaces and materiality in a range of organizational practices such as collaborative workspaces, media work, urban management, e-learning environments, managerial control, mobile lives, institutional routines and professional identity. Theoretical insights are also developed by Pickering on the material world, Lyytinen on affordance, Lorino on architexture and Introna on sociomaterial assemblages in order to delve further into conceptualizing materiality in organizations.
Article
The European Union's policy on open data aims at generating value through re-use of public sector information, such as mapping data. Open data policies should be applied in full compliance with the principles relating to the protection of personal data in the EU Data Protection Directive. Increased computing power, advancing data mining techniques, and the increasing amount of publicly available big data extend the reach of the EU Data Protection Directive to much more data than currently assumed and acted upon. Mapping data especially are a key factor in identifying individual data subjects and are consequently subject to the EU Data Protection Directive and the recently approved EU General Data Protection Regulation. This could in effect obstruct the implementation of open data policies in the EU. The very hungry data protection legislation results in a need to rethink either the concept of personal data or the conditions for the use of mapping data that are considered personal data.
Article
Big and Open Linked Data (BOLD) creates new opportunities and has the potential to transform government and its interactions with the public. BOLD provides the opportunity to analyze the behavior of individuals, increase control, and reduce privacy. At the same time, BOLD can be used to create an open and transparent government. Transparency and privacy are considered important societal and democratic values that are needed to inform citizens and let them participate in democratic processes. Practices in these areas are changing with the rise of BOLD. Although intuitively appealing, the concepts of transparency and privacy have many interpretations and are difficult to conceptualize, which often makes them hard to implement. Transparency and privacy should be conceptualized as complex, non-dichotomous constructs interrelated with other factors. Only by conceptualizing these values in this way can the nature and impact of BOLD on privacy and transparency be understood, and their levels balanced with security, safety, openness, and other socially desirable values.
Article
The value of data as a new economic asset class is seldom realized on its own. With less reliance on self-administered surveys, data offers new insights into behaviors and patterns. Yet it involves a huge undertaking of bringing together multiple actors from different disciplines and diverse practices to examine the underexplored relationships between types of data. There are different inquiry systems and research cycles for making sense out of big and open linked data (BOLD). We argue that deploying theories from diverse disciplines, and considering different inquiry systems and research cycles, offers a more disciplined and robust methodological approach. This allows us to break through the limits of backward induction from the evidence by moving back and forth in exploring the unknown through BOLD. As such, we call for developing a variety of rigorous approaches to counterbalance the current theory-free practice in the analysis and use of BOLD.
Article
The growth and popularity of music streaming are generally seen as win for music consumers, giving them greater freedom and virtually limitless access to musical content. This article offers a different view. It examines how four prominent music streaming services position themselves in the marketplace, based on their interfaces, the quality of their curatorial devices, the identity projected for users and the control users have over their music (or, lack thereof). We argue that, ultimately, streaming services are in the business of creating branded musical experiences, which appear to offer fluid and abundant musical content but, in reality, create circumscribed tiers of content access for a variety of scenarios, users and listening environments.
Article
Automated recommendation systems now occupy a central position in the circulation of media and cultural products. Using music as a test case, this article examines the use of algorithms and data mining techniques for the presentation and representation of culture, and how these tools reconfigure the process of cultural intermediation. Expanding Bourdieu’s notion of cultural intermediaries to include technologies like algorithms, I argue that an emerging layer of companies – call them infomediaries – are increasingly responsible for shaping how audiences encounter and experience cultural content. Through a critical analysis of The Echo Nest, a music infomediary whose databases underpin many digital music services, I trace the shift from intermediation to infomediation and explore what is at stake at the intersection of data mining, taste making and audience manufacture. The new infomediary logics at work are computational forms of power that shape popular culture and highlight the social implications of curation by code.
Article
This article describes an emergent logic of accumulation in the networked sphere, ‘surveillance capitalism,’ and considers its implications for ‘information civilization.’ The institutionalizing practices and operational assumptions of Google Inc. are the primary lens for this analysis as they are rendered in two recent articles authored by Google Chief Economist Hal Varian. Varian asserts four uses that follow from computer-mediated transactions: ‘data extraction and analysis,’ ‘new contractual forms due to better monitoring,’ ‘personalization and customization,’ and ‘continuous experiments.’ An examination of the nature and consequences of these uses sheds light on the implicit logic of surveillance capitalism and the global architecture of computer mediation upon which it depends. This architecture produces a distributed and largely uncontested new expression of power that I christen: ‘Big Other.’ It is constituted by unexpected and often illegible mechanisms of extraction, commodification, and control that effectively exile persons from their own behavior while producing new markets of behavioral prediction and modification. Surveillance capitalism challenges democratic norms and departs in key ways from the centuries-long evolution of market capitalism.
Article
This paper investigates how text analysis and classification techniques can be used to enhance e-government, particularly law enforcement agencies' efficiency and effectiveness, by analyzing text reports automatically and providing timely supporting information to decision makers. With an increasing number of anonymous crime reports being filed and digitized, it is generally difficult for crime analysts to process and analyze crime reports efficiently. Complicating the problem is that the information has not been filtered or guided in a detective-led interview, resulting in much irrelevant information. We are developing a decision support system (DSS), combining natural language processing (NLP) techniques, similarity measures, and machine learning, i.e., a Naïve Bayes classifier, to support crime analysis and to classify which crime reports discuss the same crime and which discuss different ones. We report on an algorithm essential to the DSS and its evaluations. Two studies with small and large datasets were conducted to compare the system with a human expert's performance. The first study included 10 sets of crime reports discussing 2 to 5 crimes. The highest algorithm accuracy was achieved using binary logistic regression (89%), while the Naïve Bayes classifier was only slightly lower (87%). The expert achieved still better performance (96%) when given sufficient time. The second study included two datasets with 40 and 60 crime reports, each covering 16 different types of crimes. The results show that our system achieved the highest classification accuracy (94.82%), while the crime analyst's classification accuracy (93.74%) was slightly lower.
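The similarity-measure component of such a DSS can be sketched in a few lines. The reports and threshold below are toy assumptions, and the sketch uses TF-IDF cosine similarity rather than the paper's full NLP-plus-classifier pipeline:

```python
# Illustrative sketch of pairing crime reports by text similarity; the
# reports and threshold are toy assumptions, not the paper's DSS.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "red sedan broke shop window on main street around midnight",
    "witness saw a red sedan smash the storefront glass on main st",
    "bicycle stolen from the train station rack yesterday afternoon",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(reports)
sim = cosine_similarity(tfidf)

# Pairs above a similarity threshold are flagged as likely the same
# incident for an analyst to review (the threshold is an illustrative choice).
THRESHOLD = 0.3
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        if sim[i, j] >= THRESHOLD:
            print(f"reports {i} and {j} likely describe the same crime "
                  f"(similarity {sim[i, j]:.2f})")
```

In a full pipeline of the kind the abstract describes, such pairwise similarity features would feed a classifier (e.g. Naïve Bayes or logistic regression) that makes the same-crime/different-crime decision.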
Book
The book is a collective meditation on the role of materiality in social affairs. The recent and growing interest in the concept of "materiality" certainly has diverse origins. Yet, it is closely associated with the diffusion of technological objects and artifacts through society and many have questioned how human choice and social practice are conditioned by the characteristics of such devices and systems. Many traditional technologies are easy to call "material" - they are made up of wood, steel, and other physical substrates that afford and constrain particular uses. Other technologies, such as software and rhetorical tropes, are not made up of such physical substrates, but they still have implications for human action in many of the same ways as the more traditional technologies. Thus, it is unclear how to talk about the materiality of technology in a way that includes both physical and nonphysical artifacts while still accounting for their effects. The book gathers together a group of scholars from various disciplines who approach the issues materiality raises from various angles, making evident that there is no single answer as to how the concept can be used to approach the perennial question of the ways technologies and humans bear upon one another. The book contributes to untangling the various meanings of materiality and clarifying the positions or perspectives from which they are produced.
Article
US outbreak foxes a leading web-based method for tracking seasonal flu.
Cutting Code. Software and Sociality
  • A Mackenzie
Mackenzie, A. (2006). Cutting Code. Software and Sociality. New York: Peter Lang.
Accountable algorithms (Doctoral Dissertation), Princeton University
  • J A Kroll
Kroll, J. A. (2015). Accountable algorithms. (Doctoral Dissertation) Princeton University (http://dataspace.princeton.edu/jspui/handle/88435/dsp014b29b837r).
Knowing Algorithms. Media in Transition 8
  • N Seaver
Seaver, N. (2013). Knowing Algorithms. Media in Transition 8. Cambridge, MA (http://nickseaver.net/s/seaverMiT8.pdf).
Facebook Trending: it's made of people (but we should have already known that)
  • T Gillespie
Gillespie, T. (2016). Facebook Trending: it's made of people (but we should have already known that). Culture Digitally (http://culturedigitally.org/2016/05/facebook-trending-its-made-of-people-but-we-should-have-already-known-that/).
Algorithm and Program; Information and Data
  • D E Knuth
Knuth, D. E. (1966). Algorithm and Program; Information and Data. Communications of the ACM, 9(9), 654.
Seeing the Sort: The Aesthetic and Industrial Defense of "The Algorithm"
  • C Sandvig
Sandvig, C. (2014). Seeing the Sort: The Aesthetic and Industrial Defense of "The Algorithm". Journal of the New Media Caucus (http://median.newmediacaucus.org/art-infrastructures-information/seeing-the-sort-the-aesthetic-and-industrial-defense-of-the-algorithm/).
The hidden biases in Big Data
  • K Crawford
Crawford, K. (2013). The hidden biases in Big Data. Harvard Business Review (1 April). Available at: https://hbr.org/2013/04/the-hidden-biases-in-big-data.
When Google got flu wrong
  • D Butler
Butler, D. (2013). When Google got flu wrong. Nature, 494(7436), 155.