This research pioneers the use of fine-tuned Large Language Models (LLMs) to automate Systematic Literature Reviews (SLRs), presenting a significant and novel contribution in integrating AI to enhance academic research methodologies. Our study employed advanced fine-tuning methodologies on open-source LLMs, applying textual data mining techniques to automate the knowledge discovery and synthesis phases of an SLR process, thus demonstrating a practical and efficient approach for extracting and analyzing high-quality information from large academic datasets. The results maintained high factual fidelity in LLM responses and were validated through the replication of an existing PRISMA-conforming SLR. Our research proposed solutions for mitigating LLM hallucination and mechanisms for tracing LLM responses to their sources of information, thus demonstrating how this approach can meet the rigorous demands of scholarly research. The findings ultimately confirmed the potential of fine-tuned LLMs to streamline various labour-intensive processes of conducting literature reviews. As a scalable proof of concept, this study highlights the broad applicability of our approach across multiple research domains. The potential demonstrated here advocates for updates to PRISMA reporting guidelines that incorporate AI-driven processes to ensure methodological transparency and reliability in future SLRs. This study broadens the appeal of AI-enhanced tools across various academic and research fields, demonstrating how to conduct comprehensive and accurate literature reviews more efficiently, in the face of ever-increasing volumes of academic studies, while maintaining high standards.
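The source-tracing mechanism mentioned in the abstract above is not detailed there. The sketch below is a hypothetical illustration (all names and structures are assumptions, not the authors' pipeline) of one way to keep LLM answers traceable: tag every extracted passage with a stable source identifier and instruct the model to cite those identifiers.

```python
# Minimal sketch of source-traceable extraction for an LLM-assisted SLR.
# Hypothetical structures; not the fine-tuning pipeline used in the study.
from dataclasses import dataclass

@dataclass
class Passage:
    source_id: str   # e.g. DOI or citation key of the reviewed paper
    text: str

def build_prompt(question: str, passages: list[Passage]) -> str:
    """Assemble a prompt in which every passage carries its source ID,
    so the model can be instructed to cite [source_id] in its answer."""
    context = "\n".join(f"[{p.source_id}] {p.text}" for p in passages)
    return (
        f"Answer the review question using only the passages below and "
        f"cite the bracketed source IDs you relied on.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def cited_sources(answer: str, passages: list[Passage]) -> set[str]:
    """Check which source IDs the generated answer actually references."""
    return {p.source_id for p in passages if f"[{p.source_id}]" in answer}
```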
We explore an augmented democracy system built on off-the-shelf large language models (LLMs) fine-tuned to augment data on citizens’ preferences elicited over policies extracted from the government programmes of the two main candidates of Brazil’s 2022 presidential election. We use a train-test cross-validation set-up to estimate the accuracy with which the LLMs predict both a subject’s individual political choices and the aggregate preferences of the full sample of participants. At the individual level, we find that LLMs predict out-of-sample preferences more accurately than a ‘bundle rule’, which would assume that citizens always vote for the proposals of the candidate aligned with their self-reported political orientation. At the population level, we show that a probabilistic sample augmented by an LLM provides a more accurate estimate of the aggregate preferences of a population than the non-augmented probabilistic sample alone. Together, these results indicate that policy preference data augmented using LLMs can capture nuances that transcend party lines and represent a promising avenue of research for data augmentation.
This article is part of the theme issue ‘Co-creating the future: participatory cities and digital governance’.
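As a hedged illustration of the train-test evaluation described in the abstract above (data fields and model are hypothetical stand-ins, not the authors' code), the individual-level comparison against the 'bundle rule' could be set up roughly as follows:

```python
# Sketch of the individual-level evaluation: compare a preference-prediction
# model's out-of-sample accuracy against a 'bundle rule' baseline that assigns
# each citizen the positions of their self-reported candidate.
import numpy as np
from sklearn.model_selection import KFold

def bundle_rule(political_orientation, candidate_positions):
    """Predict that a citizen agrees with every proposal of the candidate
    matching their self-reported orientation (a dict: orientation -> positions)."""
    return candidate_positions[political_orientation]

def cross_validated_accuracy(model, X, y, n_splits=5):
    """Mean test-fold accuracy of a preference-prediction model on arrays X, y."""
    accs = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model.fit(X[train_idx], y[train_idx])
        accs.append((model.predict(X[test_idx]) == y[test_idx]).mean())
    return float(np.mean(accs))
```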
Interest in how Artificial Intelligence (AI) could be used within citizens’ assemblies (CAs) is emerging amongst scholars and practitioners alike. In this paper, I make four contributions at the intersection of these burgeoning fields. First, I propose an analytical framework to guide evaluations of the benefits and limitations of AI applications in CAs. Second, I map out eleven ways that AI, especially large language models (LLMs), could be used across a CA’s full lifecycle. This introduces novel ideas for AI integration into the literature and synthesises existing proposals to provide the most detailed analytical breakdown of AI applications in CAs to date. Third, drawing on relevant literature, four key informant interviews, and the Global Assembly on the Ecological and Climate crisis as a case study, I apply my analytical framework to assess the desirability of each application. This provides insight into how AI could be deployed to address existing challenges facing CAs today as well as the concerns that arise with AI integration. Fourth, bringing my analyses together, I argue that AI integration into CAs brings the potential to enhance their democratic quality and institutional capacity, but realising this requires the deliberative community to proceed cautiously, effectively navigate challenging trade-offs, and mitigate important concerns that arise with AI integration. Ultimately, this paper provides a foundation that can guide future research concerning AI integration into CAs and other forms of democratic innovation.
Scenario engineering plays a vital role in various Industry 5.0 applications. In the field of autonomous driving systems, driving scenario data are important for the training and testing of critical modules. However, corner-case scenarios are usually rare and need to be extended. Existing methods cannot handle the interpretation and reasoning of the generation process well, which reduces the reliability and usability of the generated scenarios. With the rapid development of Foundation Models, especially the large language model (LLM), we can conduct scenario generation with more powerful tools. In this article, we propose LLMScenario, a novel LLM-driven scenario generation framework, which is composed of scenario prompt engineering, LLM scenario generation, and evaluation feedback tuning. The minimum scenario description specific to LLMs is derived through scenario analysis and ablation studies. We also design appropriate score functions in terms of reality and rarity to evaluate the generated scenarios. The model performance is further enhanced through chain-of-thought reasoning and experience. Different LLMs are also compared within our framework. Experimental results on naturalistic datasets demonstrate the effectiveness of LLMScenario, which can provide solid support for scenario engineering in Industry 5.0.
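The paper's reality and rarity score functions are not reproduced above; the following is a speculative sketch, under assumed feature representations, of how such a weighted trade-off between realism and rarity could be scored:

```python
# Hypothetical scoring sketch in the spirit of LLMScenario's evaluation:
# combine a 'reality' score (closeness to naturalistic driving statistics)
# with a 'rarity' score (distance from existing corpus scenarios).
import numpy as np

def reality_score(scenario_features, naturalistic_mean, naturalistic_std):
    """Higher when the scenario stays within plausible physical ranges."""
    z = np.abs((scenario_features - naturalistic_mean) / naturalistic_std)
    return float(np.exp(-z.mean()))                     # 1.0 = fully typical

def rarity_score(scenario_features, corpus_features):
    """Higher when the scenario is far from its nearest corpus neighbour."""
    dists = np.linalg.norm(corpus_features - scenario_features, axis=1)
    return float(dists.min() / (1.0 + dists.min()))     # bounded in [0, 1)

def scenario_score(scenario, nat_mean, nat_std, corpus, w=0.5):
    """Weighted realism/rarity trade-off (weight w is a free choice here)."""
    return w * reality_score(scenario, nat_mean, nat_std) \
        + (1 - w) * rarity_score(scenario, corpus)
```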
Historically, plant and crop sciences have been quantitative fields that intensively use measurements and modeling. Traditionally, researchers choose between two dominant modeling approaches: mechanistic plant growth models or data-driven, statistical methodologies. At the intersection of both paradigms, a novel approach referred to as “simulation intelligence”, has emerged as a powerful tool for comprehending and controlling complex systems, including plants and crops. This work explores the transformative potential for the plant science community of the nine simulation intelligence motifs, from understanding molecular plant processes to optimizing greenhouse control. Many of these concepts, such as surrogate models and agent-based modeling, have gained prominence in plant and crop sciences. In contrast, some motifs, such as open-ended optimization or program synthesis, still need to be explored further. The motifs of simulation intelligence can potentially revolutionize breeding and precision farming towards more sustainable food production.
Background
The recent COVID-19 pandemic highlighted the challenges for traditional forecasting. Prediction markets are a promising way to generate collective forecasts and could potentially be enhanced if high-quality crowdsourced inputs were identified and preferentially weighted for likely accuracy in real-time with machine learning.
Methods
We aim to leverage human prediction markets with real-time machine weighting of likely higher-accuracy trades to improve performance. The crowdsourced Almanis prediction market longitudinal platform (n = 1822) and Next Generation Social Science (NGS2) platform (n = 103) were utilised.
Findings
A 43-feature model predicted accurate forecasters, those with top-quintile relative Brier accuracy, with subsequent replication in two out-of-sample datasets (p_both < 1 × 10⁻⁹). Trades graded by this model as having higher accuracy scores than others produced a greater AUC temporal gain in the overall market after versus before the trade. Accuracy-score-weighted forecasts had higher accuracy than market forecasts alone, particularly when the two systems disagreed by 5% or more for binary event prediction, with the hybrid system demonstrating substantial AUC gains of 13.2% (p = 1.35 × 10⁻¹⁴) and 13.8% (p = 0.003) in two out-of-sample datasets. When discordant, the hybrid model was correct for COVID-19 event occurrence 72.7% of the time vs 27.3% for market models (p = 0.007). This net classification benefit was replicated in the separate Almanis B dataset (p = 2.4 × 10⁻⁷).
Interpretation
Real-time machine classification followed by weighting human trades according to likely accuracy improves collective forecasting performance. This could provide improved anticipation of and thus response to emerging risks.
Funding
This work was supported by an AusIndustry R and D tax incentive program from the Department of Industry, Science, Energy and Resources, Australia, to SlowVoice Pty Ltd. (IR 2101990) and Fellowship (GNT 1110200) and Investigator grant (GNT 1197234) to A-L Ponsonby by the National Health and Medical Research Council of Australia.
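A minimal, hypothetical sketch of the accuracy-weighting idea in the abstract above follows; the 43-feature classifier is not reproduced, and `acc_scores` stands in for its predicted probability that a trade comes from a top-quintile forecaster.

```python
# Illustrative sketch of weighting forecasts by a model-estimated accuracy
# score; not the study's implementation.
import numpy as np

def brier(prob, outcome):
    """Brier score for a binary event (lower is better)."""
    return (prob - outcome) ** 2

def accuracy_weighted_forecast(trade_probs, acc_scores):
    """Combine individual trade probabilities, up-weighting trades the
    machine classifier grades as likely accurate."""
    w = np.asarray(acc_scores, dtype=float)
    p = np.asarray(trade_probs, dtype=float)
    return float(np.sum(w * p) / np.sum(w))

# Example: three trades on one binary question that later resolved 'yes'.
p_hybrid = accuracy_weighted_forecast([0.6, 0.8, 0.3], [0.9, 0.7, 0.1])
print(p_hybrid, brier(p_hybrid, outcome=1))
```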
Artificial Intelligence (AI) is a powerful tool for policymaking and policy implementation, allowing for efficiency enhancements, improvements in quality of public services, and time savings on administrative tasks. AI has applications across the various stages of the policy cycle, from agenda setting to policy formulation, decision making, implementation and evaluation. But while AI can be immensely powerful in data analysis and logic, it fares less well on policy-relevant concepts such as fairness, justice and equity, which are inherently human. The ability of AI to make sense of human reality, including understanding causality and cultural nuances, remains inadequate. Factors such as biases, prejudices or experience can influence AI algorithms and models and, ultimately, the results generated. This policy brief analyses the benefits and limitations of the use of AI in policymaking, and discusses policy options to ensure that the AI-augmented future remains human-centric.
Background
The aim of the present paper is to construct an emulator of a complex biological system simulator using a machine learning approach. More specifically, the simulator is a patient-specific model that integrates metabolic, nutritional, and lifestyle data to predict the metabolic and inflammatory processes underlying the development of type-2 diabetes in the absence of familiarity. Given the very high incidence of type-2 diabetes, the implementation of this predictive model on mobile devices could provide a useful instrument to assess the risk of the disease for aware individuals. The high computational cost of the developed model, being a mixture of agent-based and ordinary differential equations and providing a dynamic multivariate output, makes the simulator executable only on powerful workstations but not on mobile devices. Hence the need to implement an emulator with a reduced computational cost that can be executed on mobile devices to provide real-time self-monitoring.
Results
Similarly to our previous work, we propose an emulator based on a machine learning algorithm, but here we consider a different approach which turns out to have better performance; indeed, in terms of root mean square error we obtain an improvement of two orders of magnitude. We tested the proposed emulator on samples containing different numbers of simulated trajectories, and it turned out that the fitted trajectories are able to predict the entire dynamics of the simulator output variables with high accuracy. We apply the emulator to control the level of inflammation while leveraging the nutritional input.
Conclusion
The proposed emulator can be implemented and executed on mobile health devices to perform quick-and-easy self-monitoring assessments.
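A minimal sketch of the emulator idea described above, assuming the simulator's runs are available as (input, trajectory) pairs; the model choice and data layout are illustrative, not the authors' implementation.

```python
# Fit a cheap regression surrogate to expensive simulator runs and check its
# root mean square error on held-out runs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def fit_emulator(sim_inputs, sim_trajectories):
    """sim_inputs: (n_runs, n_params); sim_trajectories: (n_runs, n_timesteps)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        sim_inputs, sim_trajectories, test_size=0.2, random_state=0)
    emulator = RandomForestRegressor(n_estimators=200, random_state=0)
    emulator.fit(X_tr, y_tr)
    # RMSE of the emulator against the full simulator on unseen inputs.
    rmse = np.sqrt(mean_squared_error(y_te, emulator.predict(X_te)))
    return emulator, rmse
```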
Computational objects (eg, algorithms, bots, surveillance technology and data) have become increasingly present in our daily lives and are consequential for our changing relations to texts, multimodality and identity. Yet, our current theories of literacy, and especially the prevalence of mediational and representational perspectives, are inadequate to account for these changing relations. What are the implications for critical literacy education when it takes seriously computational agents that interact, produce and process texts? While such work is only beginning in education, scholars in other fields are increasingly writing about how AI and algorithmic mediation are changing the landscape of online intra‐action, and business strategies and tactics for working with AI are advancing far ahead of critical literacy education. Drawing on our own and others’ research into non‐human actors online, and building on posthuman theories of networks, heterogeneous actants and the assemblage, in this conceptual paper, we sketch some of the forms of critical consciousness that media education might provide in this new mixed landscape.
Practitioner Notes
What is already known about this topic
AI is a hot topic in education and in public discourse, but critical literacy theories have not sufficiently accounted for how AI and computational agents change what it means to be “critically literate.”
Technology is an important force in shaping (and is also shaped by) literacy practices and identity.
Corporate actors have an enormous influence on the texts we read and write, but this influence is often hidden.
What this paper adds
We bridge between critical literacy studies and posthumanist theory to conceptualize critical posthuman literacy.
We argue for re‐imagining what texts, multimodality and identity are and do in the age of AI.
We pose new questions of our texts and ourselves, informed by posthuman critical literacy.
Implications for practice and/or policy
Today’s readers and composers must be able to identify and interrogate networks of computational and human agents that permeate literacy practices.
Beyond identifying and understanding computational agents, posthuman critical literacy necessitates that people can actively build more ethical assemblages with computational agents.
Innovation and foresight should be two sides of the same coin. In the area of innovation, the concept of “responsible innovation” has emerged, signifying that innovation processes have to take into account ethical, social, and cultural considerations and changes. A better and more just society will not be created if we only give priority to technology and commerce. To make sure that responsible innovation has a positive and effective impact, it is important to develop “responsible foresight,” that is, a combination of “responsible futures” and a responsible foresight process. In particular, the images of the future have to be inspiring, both experts and nonexperts have to be involved, and it has to be ensured that the use of foresight in innovation is not merely a cosmetic exercise.
The world is not more complicated or complex today than yesterday; when it comes to seeing and acting in any specific situation it is capacity that makes the difference, not the absolute number of permutations or even unfamiliarity. What seems complicated to a child may seem like child's play to an adult. In particular, what matters is the sophistication of our sense-making: our ability to discover, invent and construct the world around us. To date, considerable effort has been made to improve sense-making capabilities. Policymakers call on familiar and intuitive methods of everyday experience (preparation and planning), as well as techniques (such as forecasting, horizon scanning, scenarios, expert opinions) considered adequate based on past perceptions of our needs and capacities. Nevertheless, the perceived proliferation of so-called “wicked problems” in recent times has added to a mounting sense of uncertainty, and called into question both the decision-making value of these business-as-usual approaches as well as their sufficiency in accounting for complexity in practice. Recent advances in understanding complexity, uncertainty and emergence have opened up new ways of defining and using the future. The question is therefore not how to cope with a universe that seems to be getting more complex, but how to improve our ability to take advantage of the novel emergence that has always surrounded us.
Emerging generic technologies seem set to make a revolutionary impact on the economy and society. However, success in developing such technologies depends upon advances in science. Confronted with increasing global economic competition, policy-makers and scientists are grappling with the problem of how to select the most promising research areas and emerging technologies on which to target resources and, hence, derive the greatest benefits. This paper analyzes the experiences of Japan, the US, the Netherlands, Germany, Australia, New Zealand and the UK in using foresight to help in selecting and exploiting research that is likely to yield longer-term economic and social benefits. It puts forward a model of the foresight process for identifying research areas and technologies of strategic importance, and also analyzes why some foresight exercises have proved more successful than others. It concludes by drawing an analogy between models of innovation and foresight.
Large language models (LLMs) match and sometimes exceed human performance in many domains. This study explores the potential of LLMs to augment human judgment in a forecasting task. We evaluate the effect on human forecasters of two LLM assistants: one designed to provide high-quality (‘superforecasting’) advice, and the other designed to be overconfident and base-rate neglecting, thus providing noisy forecasting advice. We compare participants using these assistants to a control group that received a less advanced model that did not provide numerical predictions or engage in explicit discussion of predictions. Participants (N = 991) answered a set of six forecasting questions and had the option to consult their assigned LLM assistant throughout. Our preregistered analyses show that interacting with each of our frontier LLM assistants significantly enhances prediction accuracy by between 24% and 28% compared to the control group. Exploratory analyses showed a pronounced outlier effect in one forecasting item, without which we find that the superforecasting assistant increased accuracy by 41%, compared with 29% for the noisy assistant. We further examine whether LLM forecasting augmentation disproportionately benefits less skilled forecasters, degrades the wisdom-of-the-crowd by reducing prediction diversity, or varies in effectiveness with question difficulty. Our data do not consistently support these hypotheses. Our results suggest that access to a frontier LLM assistant, even a noisy one, can be a helpful decision aid in cognitively demanding tasks compared to a less powerful model that does not provide specific forecasting advice. However, the effects of outliers suggest that further research into the robustness of this pattern is needed.
Human forecasting accuracy improves through the “wisdom of the crowd” effect, in which aggregated predictions tend to outperform individual ones. Past research suggests that individual large language models (LLMs) tend to underperform compared to human crowd aggregates. We simulate a wisdom of the crowd effect with LLMs. Specifically, we use an ensemble of 12 LLMs to make probabilistic predictions about 31 binary questions, comparing them with those made by 925 human forecasters in a 3-month tournament. We show that the LLM crowd outperforms a no-information benchmark and is statistically indistinguishable from the human crowd. We also observe human-like biases, such as the acquiescence bias. In another study, we find that LLM predictions (of GPT-4 and Claude 2) improve when exposed to the median human prediction, increasing accuracy by 17 to 28%. However, simply averaging human and machine forecasts yields more accurate results. Our findings suggest that LLM predictions can rival the human crowd’s forecasting accuracy through simple aggregation.
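A hedged sketch of the aggregation scheme described in the abstract above (placeholder numbers; not the study's code): take the median of an ensemble of LLM probability forecasts and, optionally, average it with the human-crowd forecast.

```python
# Wisdom-of-the-silicon-crowd aggregation sketch.
import numpy as np

def llm_crowd_forecast(llm_probs):
    """Median of individual LLM probabilities for one binary question."""
    return float(np.median(llm_probs))

def hybrid_forecast(llm_probs, human_crowd_prob):
    """Simple human-machine average, which the study found more accurate
    than the LLM crowd alone."""
    return 0.5 * llm_crowd_forecast(llm_probs) + 0.5 * human_crowd_prob

# Example with four LLM forecasts and a human crowd estimate of 0.65.
print(hybrid_forecast([0.55, 0.62, 0.48, 0.70], human_crowd_prob=0.65))
```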
Imagining future scenarios arising from events and (in)actions is crucial for democratic participation, but is often left to experts who have in-depth knowledge of, for example, social, political, environmental or technological trends. A widely accepted method for non-experts to think about future scenarios is to write fictional short stories set in speculative futures. To support the writing process and thus further lower the barrier for this form of participation, we introduce Futuring Machines, a framework for collaborative writing of speculative fiction through instruction-based conversation between humans and AI. Futuring Machines is specifically designed to stimulate reflection on future scenarios in both participatory workshops and individual use.
In recent years, Generative Artificial Intelligence (GAI) has been increasingly applied to perform complex tasks. In this study, a GAI-embedded method for design futures was proposed. Through a week-long workshop guided by a workbook, the potential of GAI for scanning signals, constructing scenarios and assisting in the design of concepts is discussed and explored. According to questionnaire surveys, interviews and related content analysis, high technology can provide certain support for imagination and creative design for the uncertain futures. A key action for designers is to add human-factor-based high-touch guidance and correction to human-AI collaboration. In this regard, this study develops a GAI-embedded framework involving internal and external environments through the combination of future thinking and design thinking, which not only bridges the gap between rational technology and emotional design, but also expands the tools and methods of future design in the age of artificial intelligence.
Research exploring how to support decision-making has often used machine learning to automate or assist human decisions. We take an alternative approach for improving decision-making, using machine learning to help stakeholders surface ways to improve and make fairer decision-making processes. We created "Deliberating with AI", a web tool that enables people to create and evaluate ML models in order to examine strengths and shortcomings of past decision-making and deliberate on how to improve future decisions. We apply this tool to a context of people selection, having stakeholders---decision makers (faculty) and decision subjects (students)---use the tool to improve graduate school admission decisions. Through our case study, we demonstrate how the stakeholders used the web tool to create ML models that they used as boundary objects to deliberate over organization decision-making practices. We share insights from our study to inform future research on stakeholder-centered participatory AI design and technology for organizational decision-making.
In recent years, artificial intelligence (AI) has been increasingly put into use to address cities’ economic, social, environmental, and governance challenges. Thanks to its advanced capabilities, AI is set to become one of local governments’ principal means of achieving smart and sustainable development. AI utilisation for urban planning, nonetheless, is a relatively understudied area of research, particularly in terms of the gap between theory and practice. This study presents a comprehensive review of the areas of urban planning in which AI technologies are contemplated or applied, and analyses how AI technologies support or could potentially support smart and sustainable development. Regarding the methodological approach, this is a systematic literature review following the PRISMA protocol. The obtained insights include: (a) Early adopters’ real-world AI applications in urban planning are paving the way to wider local government AI adoption; (b) Achieving wider AI adoption for urban planning involves collaboration and partnership between key stakeholders; (c) Big data is an integral element for effective AI utilisation in urban planning; and (d) Convergence of artificial and human intelligence is crucial to address urbanisation issues adequately and to achieve smart and sustainable development. These insights highlight the importance of making planning smarter through advanced data and analytical methods.
Advanced computer technologies such as big data, Artificial Intelligence (AI), cloud computing, digital twins, and edge computing have been applied in various fields as digitalization has progressed. To study the status of the application of digital twins in combination with AI, this paper classifies the applications and prospects of AI in digital twins by reviewing the research results in the currently published literature. We discuss the application status of digital twins in the four areas of aerospace, intelligent manufacturing in production workshops, unmanned vehicles, and smart city transportation, and we review the current challenges and the topics that merit attention in the future. It was found that the integration of digital twins and AI has significant effects in aerospace flight detection simulation, failure warning, aircraft assembly, and even unmanned flight. In virtual simulation tests of autonomous driving, this integration can save 80% of time and cost, and reproducing the same road conditions reduces the parameter scale of the actual vehicle dynamics model and greatly improves test accuracy. In the intelligent manufacturing of production workshops, the establishment of a virtual workplace environment can provide timely fault warnings, extend the service life of equipment, and ensure overall workshop operational safety. In smart city traffic, the real road environment is simulated and traffic accidents are reconstructed, so that the traffic situation is clear and efficient, and urban traffic management can be carried out quickly and accurately. Finally, we offer an outlook on the future of digital twins and AI, hoping to provide a reference for future research in related fields.
When we are thinking about the ethics of AI and the ramifications for selfhood and society brought about by technologically enabled modes of modern indentured servitude, we are thinking about the future. We are anticipating future risks, imagining potential disruptions and disruptors, considering the balance of harms and benefits, assessing their probability and their scale, and planning mitigation strategies to deal with them. This article proposes some new ways through which we might better understand the scope (and the limitations) of this mode of anticipatory thinking and so develop a stronger sociotechnical capability in what we might characterize as “ethical AI futures literacy.” It also suggests some first steps toward developing this new approach by highlighting some of the cognitive biases and deficiencies which particularly affect such futures thinking and which shape the anticipatory dynamics of both human and artificial intelligence [1].
The heuristic versatility of foresight is increasingly positioning this anticipatory instrument as a key resource to promote more responsible research and innovation practices. In a context where foresight’s multiple heuristic potential is sometimes wrapped up in a promissory rhetoric that could lead to its being taken for granted, this article underlines the need to understand the emergence of these heuristics as being dependent on how foresight’s dynamics unfold. By acknowledging the existence of more “open” or “closed” forms of foresight (which in turn can articulate more “open” or “closed” anticipations), the article argues that the degree of “openness/closure” of foresight activities is constituted during the ex-ante, ex-dure and ex-post processes, and according to the relations underlying their constructive dynamics. The main conclusion reached is that a pre-condition for foresight practices to become “instruments for” responsible innovation is to make them “subjects of” responsibility simultaneously. This involves monitoring the socio-epistemic relations whereby foresight practices are designed and executed, as well as monitoring how their emergent heuristics are translated into action.
Policymakers prepare society for the future and this book provides a practical toolkit for preparing pro-active, future-proof scientific policy advice for them. It explains how to make scientific advisory strategies holistic. It also explains how and where biases, which interfere with the proper functioning of the entire science-policy ecosystem, arise and investigates how emotions and other biases affect the understanding and assessment of scientific evidence. The book advocates explorative foresight, systems thinking, interdisciplinarity, bias awareness and the anticipation of undesirable impacts in policy advising, and it offers practical guidance for them. Written in an accessible style, the book offers provocative reflections on how scientific policy advice should be sensitive to more than scientific evidence. It is both an appealing introductory text for everyone interested in science-based policy and a valuable guide for the experienced scientific adviser and policy scholar.
Lieve Van Woensel is Head of Service at the European Parliament, where she introduced foresight methodologies into scientific advisory processes. She has a broad scientific background, has worked for over 30 years at the science-policy interface, and was the EU Visiting Fellow at St. Antony’s College, University of Oxford, UK, 2017-2018.
We review recent work in the integrated assessment modeling of global climate change. This field has grown rapidly since 1990. Integrated assessment models seek to combine knowledge from multiple disciplines in formal integrated representations; inform policy-making, structure knowledge, and prioritize key uncertainties; and advance knowledge of broad system linkages and feedbacks, particularly between socioeconomic and biophysical processes. They may combine simplified representations of the socioeconomic determinants of greenhouse gas emissions, the atmosphere and oceans, impacts on human activities and ecosystems, and potential policies and responses. We summarize current projects, grouping them according to whether they emphasize the dynamics of emissions control and optimal policy-making, uncertainty, or spatial detail. We review the few significant insights that have been claimed from work to date and identify important challenges for integrated assessment modeling in its relationships to disciplinary knowledge and to broader assessment seeking to inform policy- and decision-making.
Purpose
The purpose of this paper is to describe the application of scenario planning methods to: identifying disruptive innovations at an early stage, mapping out potential development paths for such innovations, and building appropriate organizational capabilities.
Design/methodology/approach
A combination of scenario planning with technology road‐mapping, expert analysis and creative group processes. The techniques described can be integrated with traditional tools of strategic technology planning. The paper presents a short illustrative case study and examples from practice.
Findings
Scenario techniques can be successfully applied to analysing disruptive innovation.
Practical implications
Scenario techniques help guide managers to more effective decision making by preparing for a wide range of uncertainty and by counteracting typical biases of over‐optimism and decision “framing”. The techniques presented can be used in executive development and in strategic planning for innovative and high‐tech industries.
Originality/value
This paper presents a novel way to combine scenario methods with technology road‐mapping and creative group analysis. It also provides an overview of the literature and research related to scenario planning for disruptive innovation.
The past is a dangerous predictor of the future. Fortunately, there is a structure to the present which contains the building blocks of the future. The future casts a shadow on the present when we see it as possibility rather than as an extension in time. The world can be viewed as a system of non-linear cause and effect, referred to as ‘emergent’, which sees relationships as the source of foresight. This emergent perspective suggests that foresight is a matter of interpretation, that there are tools of competence and that our ability to explore it is an organizational matter.
We analyze the extent to which simple markets can be used to aggregate disperse information into efficient forecasts of uncertain future events. Drawing together data from a range of prediction contexts, we show that market-generated forecasts are typically fairly accurate, and that they outperform most moderately sophisticated benchmarks. Carefully designed contracts can yield insight into the market's expectations about probabilities, means and medians, and also uncertainty about these parameters. Moreover, conditional markets can effectively reveal the market's beliefs about regression coefficients, although we still have the usual problem of disentangling correlation from causation. We discuss a number of market design issues and highlight domains in which prediction markets are most likely to be useful.
Future Matters concerns contemporary approaches to the future – how the future is known, created and minded. In a social world whose pace continues to accelerate, the future becomes an increasingly difficult terrain. While the focus of social life is narrowing down to the present, the futures we create on a daily basis cast ever longer shadows. Future Matters addresses this paradox and its deep ethical implications.
J. Willard, X. Jia, S. Xu, M. Steinbach, V. Kumar. Integrating physics-based modeling with machine learning: A survey.
GESDA. The GESDA 2024 Science Breakthrough Radar: Geneva Science and Diplomacy Anticipator's annual report on science trends at 5, 10 and 25 years.
H. Carlsen. AI assisted scenario building for sustainable development. Science-Policy Brief for the Multistakeholder Forum on Science, Technology and Innovation for the SDGs.
V. G. Goecks, N. R. Waytowich. DisasterResponseGPT: Large language models for accelerated plan of action development in disaster response scenarios.
W. Jann, K. Wegrich. Theories of the policy cycle. Handbook of Public Policy Analysis, Routledge.
S. Pratt, S. Blumberg, P. K. Carolino, M. R. Morris. Can language models use forecasting strategies?
S. Rasal, E. Hauer. Optimal decision making through scenario simulations using large language models.
J. Rudd-Jones, F. Thendean, M. Pérez-Ortiz. Crafting desirable climate trajectories with RL explored socio-environmental simulations.
P. E. Tetlock, D. Gardner. Superforecasting: The art and science of prediction.
Z. Zhou, Y. Lin, D. Jin, Y. Li. Large language model for participatory urban planning.
Forecasting future world events with neural networks.