Figure 2
Dashboard for the analytics results in Civic CrowdAnalytics. We designed Civic CrowdAnalytics in collaboration with the City of Palo Alto to make data analysis and synthesis in crowdsourced policymaking more efficient. City staff members and policymakers have been facing an overwhelming amount of citizen comments in the crowdsourced Comp Plan update. Civic CrowdAnalytics is a web application that allows users to submit data sets and analyze them in various ways. The application uses the APIs of Hewlett-Packard Enterprise's big data tool Haven OnDemand.
Source publication
This paper examines the impact of crowdsourcing on a policymaking process by using a novel data analytics tool called Civic CrowdAnalytics, applying Natural Language Processing (NLP) methods such as concept extraction, word association and sentiment analysis. By drawing on data from a crowdsourced urban planning process in the City of Palo Alto in...
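As an illustration of the kinds of analyses the paper applies, the sketch below runs concept extraction (approximated here as noun-phrase extraction) and sentiment analysis on a single citizen comment. It uses open-source stand-ins (spaCy and NLTK's VADER) rather than the Haven OnDemand APIs the original tool relied on, and the example comment is invented; this is a minimal sketch, not the paper's actual pipeline.

```python
# Illustrative stand-in for the analyses described above: noun-phrase
# "concept extraction" with spaCy and sentiment scoring with NLTK's VADER.
# Requires the small English spaCy model to be installed beforehand.
import nltk
import spacy
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # lexicon used by VADER
nlp = spacy.load("en_core_web_sm")           # small English spaCy pipeline
sentiment = SentimentIntensityAnalyzer()

comment = "The city should add protected bike lanes on El Camino Real."

doc = nlp(comment)
# Concept extraction: noun phrases as a rough proxy for key concepts.
concepts = [chunk.text.lower() for chunk in doc.noun_chunks]
# Sentiment analysis: compound score in [-1, 1].
scores = sentiment.polarity_scores(comment)

print(concepts)            # e.g. ['the city', 'protected bike lanes', 'el camino real']
print(scores["compound"])  # positive values suggest supportive comments
```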
Citations
... Policymakers struggle when planning and managing multi-stakeholder participation processes. Participatory data often lack structure, consist of atomic units, and exhibit considerable divergence, resulting in heterogeneity in content and format [1,2,3]. When information overloads policymakers, they start filtering information, ignoring some or all inputs [4], which can make policies fail in the short term and erode legitimacy and trust. ...
... Policymakers, however, face a significant challenge in analyzing the inputs received from stakeholders during large-scale deliberation processes. Participatory data is often unstructured, atomic, and divergent, making it heterogeneous in content and format [1,2,3,29]. Policymakers struggle with identifying potential gaps in data, assessing its quality, and using it effectively to respond to citizens promptly [30,3,31,22,32]. ...
For policymakers, making sense of stakeholder participatory data is a complex task. Natural Language Processing (NLP) can aid in processing this data, reducing policymakers’ cognitive overload and supporting multi-stakeholder engagement. However, implementing NLP can be challenging in settings with limited resources, knowledge, or infrastructure. This study analyzes the feasibility and limitations of using Latent Dirichlet Allocation (LDA) to examine data from Chile’s AI policy, in which more than 1,700 people participated in a public deliberation process yielding data containing citizen reflections that varied in format, quality, depth, and length. We matched LDA topics from the public deliberation data to the objectives of Chile’s AI policy draft, written by five experts over four months. LDA effectively detected 87% of the topics in the draft, requiring the researchers to manually inspect only 26% of the participation data to deliver this result. We discuss the potential and limitations of using LDA in participatory processes and contribute by showing how it can aid in the strategic management of stakeholders in a real-world resource-constrained setting.
Full paper: https://ceur-ws.org/Vol-3737/paper30.pdf
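As a rough sketch of the LDA workflow described in this abstract, the snippet below fits a topic model to a handful of invented citizen contributions using gensim. The preprocessing, topic count, and example texts are assumptions for illustration; the cited study's actual corpus, parameters, and topic-to-objective matching procedure are not reproduced here.

```python
# Minimal LDA sketch with gensim; parameters and texts are illustrative
# assumptions, not the cited study's exact pipeline.
from gensim import corpora, models
from gensim.utils import simple_preprocess

contributions = [
    "AI systems should respect citizens' privacy and personal data",
    "Public education is needed so workers can adapt to automation",
    "Algorithms used by the state must be transparent and auditable",
]

texts = [simple_preprocess(doc) for doc in contributions]   # tokenize + lowercase
dictionary = corpora.Dictionary(texts)                      # word <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]       # bag-of-words vectors

lda = models.LdaModel(corpus, num_topics=3, id2word=dictionary,
                      passes=10, random_state=42)

# Inspect the top words per topic; a human reviewer would then map these
# topics against the objectives of the policy draft.
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)
```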
... While a large proportion of the words had clear meanings, the authors traced the remaining words back to the collected literature to understand their potential meanings and used this additional information to classify the words that could not otherwise be clearly classified. For instance, "quality" referred to the quality of the information [54][55][56], "online" was related to the public's online discussions [57][58][59][60], "agency" meant the government agency [61][62][63], etc. ...
Natural language processing (NLP), which is known as an emerging technology creating considerable value in multiple areas, has recently shown its great potential in government operations and public administration applications. However, while the number of publications on NLP is increasing steadily, there is no comprehensive review for a holistic understanding of how NLP is being adopted by governments. In this regard, we present a systematic literature review on NLP applications in governments by following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. The review shows that the current literature comprises three levels of contribution: automation, extension, and transformation. The most-used NLP techniques reported in government-related research are sentiment analysis, machine learning, deep learning, classification, data extraction, data mining, topic modelling, opinion mining, chatbots, and question answering. Data classification, management, and decision-making are the most frequently reported reasons for using NLP. The salient research topics being discussed in the literature can be grouped into four categories: (1) governance and policy, (2) citizens and public opinion, (3) medical and healthcare, and (4) economy and environment. Future research directions should focus on (1) the potential of chatbots, (2) NLP applications in the post-pandemic era, and (3) empirical research for government work.
... This has entailed allowing citizens to propose new law and policy projects (e.g., VotaInteligente 7 in Chile [40]), share their opinion about existing law projects (e.g., Senador Virtual 8 also in Chile [4]), and issue concerns and complaints directly to legislators [4,40] (e.g., CRM systems [26]). It has also included enabling indirect citizen participation by collecting discussions on social media and interpreting them to inform legislators' political agendas [2,19] (e.g., the NOMAD project). ...
... Even if parliaments release the needed data, the feasibility for the average citizen to engage with tools that monitor legislators' actions, such as voting, is rather low: conveying the complexity of government data and processes to the general public is a pending challenge for OPTs [43]. OPTs that collect citizens' opinions also face challenges of their own in being considered participatory; in all cases, legislators still retain the power to decide how much of citizens' input to consider [2,7,26,29]. ...
... Participants' explorations of our tool facilitated conversations on the potential for average citizens to use and harness OPTs. As mentioned before, various efforts have proposed OPTs that enable citizens to shape parliaments' decisions [2,4,26,40]. Our data analysis stresses, however, that building platforms that allow citizens' input is not enough to motivate citizens' active participation. ...
... The types of civic technologies developed and used correspond to the technological trends in the larger field of ICTs. For example, as social media data become available and Natural Language Processing tools mature, machine learning starts to be applied to analyze civic content on social media [2,88,90]. ...
There have been initiatives that take advantage of information and communication technologies to serve civic purposes, referred to as civic technologies (Civic Tech). In this paper, we present a review of 224 papers from the ACM Digital Library focusing on Computer Supported Cooperative Work and Human-Computer Interaction, the key fields supporting the building of Civic Tech. Through this review, we discuss the concepts, theories and history of civic tech research and provide insights on the technological tools, social processes and participation mechanisms involved. Our work seeks to direct future civic tech efforts toward the phase of "by the citizens".
... In terms of digital citizen participation in parliaments and lawmaking, the digital revolution has not brought significant changes, and the lawmaking process has continued to rely almost exclusively on face-to-face interaction among legislators (Alsina and Martí, 2018), until the COVID-19 pandemic forced parliaments to operate remotely (GovLab, 2020). Some initiatives have tried to draw citizens into these tasks, but they tend to be short-term pilot projects, such as the various examples of Crowdsourcing 11 or CrowdLaw 12 around the world (Aitamurto et al., 2016; Noveck, 2018). In Latin America, Senador Virtual, a platform of the Senate of Chile with more than 17 years in operation, is one of the oldest initiatives of this kind in the world, but its scale is still limited (Feddersen and Santana, 2019). ...
... Without efficient analysis tools, crowdsourced civic participation efforts result in data loss and obscurity, preventing citizens from examining the extent to which their voices are reflected in the policies. The lack of transparency might paralyze the crowdsourced civic participation process [7]. ...
... Based on previous works that explored the use of artificial intelligence (AI) to enhance crowdsourcing for democratic practices [7], this study aims to expand this exploration into how NLP and ML can help civic organizations and governments synthesize and analyze civic contributions, contributing to the civic technology stack of the Participa Research Project 1 , carried out in the Spanish-language context of Paraguay. One of the goals of this project was to design, develop, and test data analysis tools for civic contributions, integrating techniques for concept extraction, sentiment analysis, classification of ideas, and identification of similar ideas as a module of algorithms in the Civic CrowdAnalytics platform [7], which provides a user interface to organize ideas into predefined categories, visualize the frequency of recurring concepts, and explore the sentiments associated with the content of civic contributions. For this purpose, Natural Language Processing (NLP) and Machine Learning (ML) techniques were adapted to the specific characteristics of the data in the target application domain, that of civic engagement content. ...
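One of the techniques listed above, identification of similar ideas, can be illustrated with a simple TF-IDF plus cosine-similarity sketch like the one below. The vectorizer, threshold, and example ideas are assumptions for illustration and are not the Participa project's actual module, which was adapted to Spanish-language civic content.

```python
# Illustrative way to flag similar ideas: TF-IDF vectors + cosine similarity.
# Threshold and preprocessing are hypothetical, not the project's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ideas = [
    "Construir más ciclovías en el centro de la ciudad",
    "Ampliar la red de ciclovías del centro",
    "Mejorar el alumbrado público en los barrios",
]

vectorizer = TfidfVectorizer()              # bag-of-words with TF-IDF weights
matrix = vectorizer.fit_transform(ideas)
similarity = cosine_similarity(matrix)      # pairwise idea-to-idea similarity

THRESHOLD = 0.5                             # hypothetical cutoff for "similar"
for i in range(len(ideas)):
    for j in range(i + 1, len(ideas)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Ideas {i} and {j} look similar ({similarity[i, j]:.2f})")
```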
... Interactivity and transparency. It is common for commenting to be disabled in crowdsourced policymaking processes [2,62] to maximize the effectiveness of the broadcast knowledge search, at the cost of users' needs to communicate on the platform, even though interactivity and transparency play a crucial role in crowdsourced policymaking. The participants valued the interactive and transparent nature of crowdsourcing: the possibility of exchanging information with one another and seeing others' opinions. ...
... Note. 1 The 104 also replied to survey 1. 2 81 replied to surveys 1 and 3, 65 replied to surveys 2 and 3, and 65 replied to surveys 1, 2, and 3. Table 2. Recipients and respondents in the three surveys. ...
In this paper, we examine the changes in motivation factors in crowdsourced policymaking. By drawing on longitudinal data from a crowdsourced law reform, we show that people participated because they wanted to improve the law, learn, and solve problems. When crowdsourcing reached a saturation point, the motivation factors weakened and the crowd disengaged. Learning was the only factor that did not weaken. The participants learned while interacting with others, and the more actively the participants commented, the more likely they stayed engaged. Crowdsourced policymaking should thus be designed to support both epistemic and interactive aspects. While the crowd's motives were rooted in self-interest, their knowledge perspective showed common-good orientation, implying that rather than being dichotomous, motivation factors move on a continuum. The design of crowdsourced policymaking should support the dynamic nature of the process and the motivation factors driving it.
Digital technologies can augment civic participation by facilitating the expression of detailed political preferences. Yet, digital participation efforts often rely on methods optimized for elections involving a few candidates. Here we present data collected in an online experiment where participants built personalized government programmes by combining policies proposed by the candidates of the 2022 French and Brazilian presidential elections. We use this data to explore aggregates complementing those used in social choice theory, finding that a metric of divisiveness, which is uncorrelated with traditional aggregation functions, can identify polarizing proposals. These metrics provide a score for the divisiveness of each proposal that can be estimated in the absence of data on the demographic characteristics of participants and that explains the issues that divide a population. These findings suggest that divisiveness metrics can be useful complements to traditional aggregation functions in direct forms of digital participation.
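The cited work defines its own divisiveness metric, which is not reproduced here. Purely as an intuition for what scoring each proposal might look like, the sketch below contrasts mean approval (a traditional aggregate) with a naive evenness-of-split proxy on an invented vote matrix; unlike the metric in the paper, this proxy is a direct function of the approval rate, so it does not have the uncorrelatedness property the abstract highlights.

```python
# Naive illustration only: not the cited study's divisiveness metric.
# Contrasts per-proposal mean approval with a "how evenly split is the vote" proxy.
import numpy as np

# Rows = participants, columns = policy proposals; 1 = approve, 0 = reject.
votes = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 0],
    [1, 0, 1],
])

approval = votes.mean(axis=0)                  # traditional aggregate per proposal
divisiveness = 1 - 2 * np.abs(approval - 0.5)  # 1.0 when split 50/50, 0.0 when unanimous

for p, (a, d) in enumerate(zip(approval, divisiveness)):
    print(f"proposal {p}: approval={a:.2f}, divisiveness proxy={d:.2f}")
```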