Article

Wildfire prediction for California using and comparing Spatio-Temporal Knowledge Graphs

Abstract

The frequency of wildfires increases every year, and they pose a constant threat to the environment and to human beings. Different factors, such as the infrastructure surrounding an area (e.g., campfire sites or power lines), contribute to the occurrence of wildfires. In this paper, we propose using a Spatio-Temporal Knowledge Graph (STKG) based on OpenStreetMap (OSM) data to model such infrastructure. Based on that knowledge graph, we use the RDF2vec approach to create embeddings for predicting wildfires, and we align the vector spaces generated at each temporal step by partial rotation. In an experimental study, we determine the effect of the surrounding infrastructure by comparing different data composition strategies: prediction based on tabular data alone, on a combination of tabular data and embeddings, and on embeddings alone. We show that incorporating the STKG increases the quality of wildfire prediction.
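Since the full text and code are not available here, the following is only a minimal sketch of the two ingredients named in the abstract: aligning the embedding spaces of successive temporal snapshots with a rotation fitted on shared anchor entities (an orthogonal-Procrustes reading of "partial rotation"), and concatenating the aligned embeddings with tabular features. The array shapes, anchor selection, and feature layout are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: align RDF2vec embeddings of two temporal snapshots via a
# rotation fitted on shared anchor entities, then combine with tabular features.
# Anchor choice, dimensions, and feature layout are illustrative assumptions.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def align_partial_rotation(emb_t0, emb_t1, anchor_ids):
    """Rotate emb_t1 into the space of emb_t0 using only anchor entities."""
    A = emb_t1[anchor_ids]               # source anchors (snapshot t1)
    B = emb_t0[anchor_ids]               # target anchors (snapshot t0)
    R, _ = orthogonal_procrustes(A, B)   # best rotation with A @ R ~= B
    return emb_t1 @ R                    # apply rotation to the whole snapshot

rng = np.random.default_rng(0)
emb_t0 = rng.normal(size=(1000, 100))            # embeddings at time step 0
emb_t1 = rng.normal(size=(1000, 100))            # embeddings at time step 1
anchors = np.arange(200)                         # entities assumed stable over time
emb_t1_aligned = align_partial_rotation(emb_t0, emb_t1, anchors)

tabular = rng.normal(size=(1000, 12))            # e.g. weather / land-cover features
features = np.hstack([tabular, emb_t1_aligned])  # "tabular + embeddings" composition
```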

Article
Full-text available
Historical maps provide rich information for researchers in many areas, including the social and natural sciences. These maps contain detailed documentation of a wide variety of natural and human-made features and their changes over time, such as changes in transportation networks or the decline of wetlands or forest areas. Analyzing changes over time in such maps can be labor-intensive for a scientist, even after the geographic features have been digitized and converted to a vector format. Knowledge Graphs (KGs) are the appropriate representations to store and link such data and support semantic and temporal querying to facilitate change analysis. KGs combine expressivity, interoperability, and standardization in the Semantic Web stack, thus providing a strong foundation for querying and analysis. In this paper, we present an automatic approach to convert vector geographic features extracted from multiple historical maps into contextualized spatio-temporal KGs. The resulting graphs can be easily queried and visualized to understand the changes in different regions over time. We evaluate our technique on railroad networks and wetland areas extracted from the United States Geological Survey (USGS) historical topographic maps for several regions over multiple map sheets and editions. We also demonstrate how the automatically constructed linked data (i.e., KGs) enable effective querying and visualization of changes over different points in time.
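As a hedged illustration of what a contextualized spatio-temporal triple for a digitized map feature might look like (not the paper's actual pipeline or vocabulary), the sketch below uses rdflib with GeoSPARQL WKT literals; the ex: namespace and the validity-interval predicates are hypothetical.

```python
# Minimal sketch (not the authors' pipeline): a digitized map feature represented
# as spatio-temporal triples. The ex: namespace and the validity predicates are
# illustrative assumptions; the geometry uses the GeoSPARQL vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

GEO = Namespace("http://www.opengis.net/ont/geosparql#")
EX = Namespace("http://example.org/histmap/")     # hypothetical namespace

g = Graph()
feature = EX["railroad/123"]
geometry = EX["railroad/123/geom"]

g.add((feature, RDF.type, EX.RailroadSegment))
g.add((feature, GEO.hasGeometry, geometry))
g.add((geometry, RDF.type, GEO.Geometry))
g.add((geometry, GEO.asWKT,
       Literal("LINESTRING(-118.30 34.10, -118.25 34.12)", datatype=GEO.wktLiteral)))
# validity interval taken from the map edition years (illustrative predicates)
g.add((feature, EX.validFrom, Literal("1925", datatype=XSD.gYear)))
g.add((feature, EX.validUntil, Literal("1948", datatype=XSD.gYear)))

print(g.serialize(format="turtle"))
```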
Article
Full-text available
Knowledge graphs (KGs) are a novel paradigm for the representation, retrieval, and integration of data from highly heterogeneous sources. Within just a few years, KGs and their supporting technologies have become a core component of modern search engines, intelligent personal assistants, business intelligence, and so on. Interestingly, despite large‐scale data availability, they have yet to be as successful in the realm of environmental data and environmental intelligence. In this paper, we will explain why spatial data require special treatment, and how and when to semantically lift environmental data to a KG. We will present our KnowWhereGraph that contains a wide range of integrated datasets at the human–environment interface, introduce our application areas, and discuss geospatial enrichment services on top of our graph. Jointly, the graph and services will provide answers to questions such as “what is here,” “what happened here before,” and “how does this region compare to …” for any region on earth within seconds.
Article
Full-text available
A digital map of the built environment is useful for a range of economic, emergency response, and urban planning exercises, such as helping people find places in app-driven interfaces, helping emergency managers know which locations might be impacted by a flood or fire, and helping city planners proactively identify vulnerabilities and plan for how a city is growing. Since its inception in 2004, OpenStreetMap (OSM) has set the benchmark for open geospatial data and has become a key player in the public, research, and corporate realms. Following the foundations laid by OSM, several open geospatial products describing the built environment have blossomed, including the Microsoft USA building footprint layer and the OpenAddress project. Each of these products uses different data collection methods, ranging from public contributions to artificial intelligence, and taken together they could provide a comprehensive description of the built environment. Yet these projects are still siloed, and their variety makes integration and interoperability a major challenge. Here, we document an approach for merging data from these three major open building datasets and outline a workflow that is scalable to the continental United States (CONUS). We show how the results can be structured as a knowledge graph over which machine learning models are built. These models can help propagate and complete unknown quantities that can then be leveraged in disaster management.
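A rough sketch of the conflation idea under stated assumptions (file names, column names, and the simple "fill gaps, then attach addresses" logic are placeholders, not the documented workflow):

```python
# Hedged sketch of conflating open building datasets with GeoPandas.
import geopandas as gpd
import pandas as pd

osm = gpd.read_file("osm_buildings.geojson")        # OSM building polygons (placeholder path)
msft = gpd.read_file("msft_buildings.geojson")      # Microsoft footprint polygons
addresses = gpd.read_file("openaddresses.geojson")  # OpenAddresses points

# Keep Microsoft footprints that do not intersect any OSM building (gap filling).
joined = gpd.sjoin(msft, osm[["geometry"]], how="left", predicate="intersects")
extra = joined[joined["index_right"].isna()].drop(columns="index_right")
buildings = pd.concat([osm, extra], ignore_index=True)

# Attach each address point to the footprint that contains it.
buildings = gpd.sjoin(buildings, addresses, how="left", predicate="contains")
```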
Article
Full-text available
As most of the forest fires in South Korea are related to human activity, socioeconomic factors are critical in estimating their probability. To estimate and analyze how human activity is influencing forest fire probability, this study considered not only environmental factors such as precipitation, elevation, topographic wetness index, and forest type, but also socioeconomic factors such as population density and distance from urban area. The machine learning Maximum Entropy (Maxent) and Random Forest models were used to predict and analyze the spatial distribution of forest fire probability in South Korea. The model performance was evaluated using the receiver operating characteristic (ROC) curve method, and models' outputs were compared based on the area under the ROC curve (AUC). In addition, a multi-temporal analysis was conducted to determine the relationships between forest fire probability and socioeconomic or environmental changes from the 1980s to the 2000s. The analysis revealed that the spatial distribution was concentrated in or around cities, and the probability had a strong correlation with variables related to human activity and accessibility over the decades. The AUC values for validation were higher in the Random Forest result compared to the Maxent result throughout the decades. Our findings can be useful for developing preventive measures for forest fire risk reduction considering socioeconomic development and environmental conditions.
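For readers unfamiliar with the modelling setup, a minimal sketch of the Random Forest plus ROC AUC part of such a comparison might look as follows; the predictor columns and the synthetic data are assumptions, not the study's dataset.

```python
# Illustrative only: Random Forest on environmental and socioeconomic predictors,
# scored with ROC AUC. Data are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "precipitation": rng.gamma(2.0, 50.0, 5000),
    "elevation": rng.uniform(0, 1500, 5000),
    "twi": rng.normal(8, 2, 5000),                 # topographic wetness index
    "population_density": rng.lognormal(3, 1, 5000),
    "dist_to_urban_km": rng.exponential(10, 5000),
    "fire": rng.integers(0, 2, 5000),              # 1 = fire occurrence
})

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="fire"), df["fire"], test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC = {auc:.3f}")
```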
Article
Full-text available
The automatic extraction of valleys or ridges from DEMs is a long-standing topic in the GIS and hydrology fields, and a number of algorithms have been developed. The quality of drainage network extraction depends on many factors, such as the data source, DEM resolution, and extraction algorithm. However, little consideration has been paid to the influence of the tessellation scheme underlying grid-based DEM construction on terrain surface representation. The hexagonal grid has been shown to be advantageous over the square grid due to its consistent connectivity, isotropy of local neighbourhoods, higher symmetry, visual advantages, and so on. This study explores the impact of different tessellation schemes for grid-based DEMs on the accuracy of terrain representation, as reflected by the results of drainage network extraction. The contour line data model is applied to grid-based DEM generation. Then, by analogy with the traditional D8 algorithm, the D6 algorithm is introduced to extract drainage networks by calculating the flow direction of each grid cell with the steepest-slope neighbour criterion. From the comparison between the D8 and D6 algorithms, we conclude that the hexagonal grid-based DEM has a superior capability in maintaining the detailed shape and characteristics of the extracted drainage networks at coarser resolutions.
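A compact sketch of the D6 idea, assuming an axial-coordinate dictionary as the hexagonal DEM layout (the original implementation and data model are not available here): each cell drains to its steepest-descent neighbour among its six adjacent hexagons.

```python
# Sketch of a D6-style flow direction on an axial-coordinate hexagonal DEM
# (assumed data layout, not the paper's implementation).
AXIAL_NEIGHBOURS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

def d6_flow_direction(dem):
    """dem: dict mapping axial coords (q, r) -> elevation. Returns, for each cell,
    the coords of its steepest downslope neighbour (None for pits/edges)."""
    flow = {}
    for (q, r), z in dem.items():
        best, best_drop = None, 0.0
        for dq, dr in AXIAL_NEIGHBOURS:
            nb = (q + dq, r + dr)
            if nb in dem:
                drop = z - dem[nb]          # centre spacing is constant on a hex grid
                if drop > best_drop:
                    best, best_drop = nb, drop
        flow[(q, r)] = best
    return flow

# toy 7-cell hexagon: centre high, rim lower
dem = {(0, 0): 10.0, (1, 0): 7.0, (1, -1): 6.5, (0, -1): 8.0,
       (-1, 0): 9.0, (-1, 1): 5.0, (0, 1): 7.5}
print(d6_flow_direction(dem)[(0, 0)])   # -> (-1, 1), the lowest neighbour
```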
Article
Full-text available
Purpose of Review I sought to review the contributions of recent literature and prior foundational papers to our understanding of drought and fire. In this review, I summarize recent literature on drought and fire in the western USA and discuss research directions that may increase the utility of that body of work for twenty-first century application. I then describe gaps in the synthetic knowledge of drought-driven fire in managed ecosystems and use concepts from use-inspired research to describe potentially useful extensions of current work. Recent Findings Fire responses to climate, and specifically various kinds of drought, are clear, but vary widely with fuel responses to surplus water and drought at different timescales. Ecological and physical factors interact with human management and ignitions to create fire regime and landscape trajectories that challenge prediction. Summary The mechanisms by which the climate system affects regional droughts and how they translate to fire in the western USA need more attention to accelerate both forecasting and adaptation. However, projections of future fire activity under climate change will require integrated advances on both fronts to achieve decision-relevant modeling. Concepts from transdisciplinary research and coupled human-natural systems can help frame strategic work to address fire in a changing world.
Article
Full-text available
Accurate estimates of wildfire probability and production of distribution maps are the first important steps in wildfire management and risk assessment. In this study, geographical information system (GIS)-automated techniques were integrated with the quantitative data-driven evidential belief function (EBF) model to predict the spatial pattern of wildfire probability in a part of the Hyrcanian ecoregion, northern Iran. The historical fire events were identified using earlier reports and the MODIS hot spot product, as well as by carrying out multiple field surveys. Using the GIS-based EBF model, the relationships among existing fire events and various predictor variables predisposing fire ignition were analyzed. Model results were used to produce a distribution map of wildfire probability. The derived probability map revealed that zones of moderate, high, and very high probability covered nearly 60% of the landscape. Further, the probability map clearly demonstrated that the probability of a fire was strongly dependent upon human infrastructure and associated activities. By comparing the probability map and the historical fire events, a satisfactory spatial agreement between the five probability levels and fire density was observed. The probability map was further validated by receiver operating characteristic (ROC) analysis using both success rate and prediction rate curves. The validation results confirmed the effectiveness of the GIS-based EBF model, which achieved AUC values of 84.14% and 81.03% for the success and prediction rates, respectively.
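The exact EBF formulation is not reproduced here; the sketch below only conveys the data-driven spirit of it by comparing fire rates inside and outside each class of a predictor and normalising the resulting ratios. The binning, the synthetic data, and the normalisation are simplified assumptions, not the model used in the study.

```python
# Heavily simplified stand-in for a data-driven, EBF-like class weighting:
# relative evidence per predictor class, normalised to sum to one.
import numpy as np

def relative_belief(classes, fire):
    """classes: integer class id per pixel; fire: 0/1 fire occurrence per pixel."""
    ratios = {}
    for c in np.unique(classes):
        inside = classes == c
        rate_in = fire[inside].mean()
        rate_out = fire[~inside].mean()
        ratios[c] = rate_in / rate_out if rate_out > 0 else np.inf
    total = sum(ratios.values())
    return {c: v / total for c, v in ratios.items()}   # normalised weights

rng = np.random.default_rng(1)
dist_class = rng.integers(0, 4, 10_000)                   # 4 distance-to-road bins
fire = (rng.random(10_000) < 0.05 * (4 - dist_class)).astype(int)
print(relative_belief(dist_class, fire))                  # weight peaks near roads
```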
Conference Paper
Full-text available
Topological relationships between spatial objects represent important knowledge that users of geographic information systems expect to retrieve from a spatial database. A difficult task is to assign precise semantics to user queries involving concepts such as crosses, is inside, is adjacent. In this paper, we present two methods for describing topological relationships. The first method is an extension of the geometric point-set approach by taking the dimension of the intersections into account. This results in a very large number of different topological relationships for point, line, and area features. In the second method, which aims to be more suitable for humans, we propose to group all possible cases into a few meaningful topological relationships and we discuss their exclusiveness and completeness with respect to the point-set approach.
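The dimension-extended point-set approach is what modern geometry libraries expose as the DE-9IM intersection matrix, while the grouped, human-oriented relationships correspond to named predicates. A small Shapely illustration with arbitrary geometries:

```python
# Illustration with assumed geometries: relate() returns the DE-9IM pattern,
# and named predicates group such patterns into a few meaningful relationships.
from shapely.geometry import LineString, Polygon

area = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
road = LineString([(-1, 2), (5, 2)])

print(road.relate(area))      # 9-character DE-9IM intersection pattern
print(road.crosses(area))     # True  -> "crosses"
print(road.within(area))      # False -> not "is inside"
print(area.touches(road))     # False -> interiors intersect, so not "is adjacent"
```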
Article
Climate science has become more ambitious in recent years as global awareness about the environment has grown. To better understand climate, historical climate data (e.g., archived meteorological variables such as temperature, wind, and water) and climate-related data (e.g., geographical features and human activities) are widely used in today's climate research to derive models that explain climate change and its effects. However, such data sources are often dispersed across a multitude of disconnected data silos on the Web. Moreover, there is a lack of advanced climate data platforms enabling multi-source heterogeneous climate data analysis, so researchers face a serious challenge in collecting and analyzing multi-source data. In this paper, we address this problem by proposing a climate knowledge graph for the integration of multiple climate data and other data sources into one service, leveraging Web technologies (e.g., HTTP) for multi-source climate data analysis. The proposed knowledge graph is primarily composed of data from the National Oceanic and Atmospheric Administration's daily climate summaries, OpenStreetMap, and Wikidata, and it supports joint data queries on these widely used databases. This paper shows, with a use case in Ireland and the United Kingdom, how climate researchers could benefit from this platform, as it allows them to easily integrate datasets from different domains and geographical locations.
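To make the idea of a joint query concrete, here is a hypothetical SPARQL sketch issued via SPARQLWrapper; the endpoint URL and the ex: vocabulary are placeholders and do not reflect the actual graph's schema.

```python
# Hypothetical query sketch: endpoint and vocabulary are placeholders; the point
# is only to show the kind of joint query over climate observations and nearby
# OSM features that such a knowledge graph is meant to support.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/climate-kg/sparql")   # placeholder URL
endpoint.setQuery("""
PREFIX ex: <http://example.org/climate/>
SELECT ?station ?date ?tmax ?nearbyFeature WHERE {
  ?station  ex:locatedIn      ex:Ireland ;
            ex:hasObservation ?obs ;
            ex:nearOsmFeature ?nearbyFeature .
  ?obs      ex:date ?date ;
            ex:tmax ?tmax .
  FILTER (?tmax > 30)
}
LIMIT 10
""")
endpoint.setReturnFormat(JSON)
rows = endpoint.query().convert()["results"]["bindings"]
```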
Chapter
Knowledge graph embeddings and successive machine learning models represent a topic that has been gaining popularity in recent research. These allow the use of graph-structured data for applications that, by definition, rely on numerical feature vectors as inputs. In this context, the transformation of knowledge graphs into sets of numerical feature vectors is performed by embedding algorithms, which map the elements of the graph into a low-dimensional embedding space. However, these methods mostly assume a static knowledge graph, so subsequent updates inevitably require a re-run of the embedding process. In this work, the Navi Approach is introduced, which aims to retain the advantages of established embedding methods while making them accessible to dynamic domains. Relational Graph Convolutional Networks are adapted for reconstructing node embeddings based solely on local neighborhoods. Moreover, the approach is independent of the original embedding process, as it only considers the resulting embeddings. Preliminary results suggest that the performance of successive machine learning tasks is at least maintained without the need to relearn either the embeddings or the machine learning models. Often, using the reconstructed embeddings instead of the original ones even leads to an increase in performance. Keywords: Knowledge graph; Semantic web; Dynamic embeddings
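A greatly simplified sketch of the reconstruction step (not the Navi implementation): a node embedding is rebuilt from a single R-GCN-style aggregation over its neighbours, with relation-specific weight matrices assumed to be already trained.

```python
# Simplified sketch of neighbourhood-based embedding reconstruction. The weights
# and embeddings are random placeholders standing in for trained parameters.
import numpy as np

def reconstruct_embedding(neighbours, weights):
    """neighbours: list of (relation, neighbour_embedding); weights: dict mapping
    relation -> (d, d) matrix. Returns the reconstructed node embedding."""
    messages = [weights[rel] @ emb for rel, emb in neighbours]
    return np.tanh(np.mean(messages, axis=0))   # aggregate, then nonlinearity

d = 8
rng = np.random.default_rng(0)
weights = {"nearTo": rng.normal(size=(d, d)), "partOf": rng.normal(size=(d, d))}
neighbours = [("nearTo", rng.normal(size=d)), ("partOf", rng.normal(size=d))]
print(reconstruct_embedding(neighbours, weights))
```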
Chapter
Many knowledge graphs (KGs) contain spatial and temporal information. Most KG embedding models follow a triple-based representation and often neglect the simultaneous consideration of the spatial and temporal aspects. Encoding such higher-dimensional knowledge necessitates the consideration of true algebraic and geometric aspects. Hypercomplex algebra provides the foundation of a well-defined mathematical system, among which the Dihedron algebra, with its rich framework, is suitable for handling multidimensional knowledge. In this paper, we propose an embedding model that uses Dihedron algebra for learning such spatial and temporal aspects. The evaluation results show that our model performs significantly better than other adapted models.
Chapter
This chapter is designed to provide the background needed to comprehend the database management system (DBMS) concepts that are necessary for understanding the subsequent chapters. The chapter presents, for the database specialist, examples of geographic applications and the specific requirements of geospatial applications. A database is a large collection of interrelated data stored within a computer environment. Both large data volume and persistence, two major characteristics of databases, are in contrast with information manipulated by programming languages, which is small enough in volume to reside in main memory and which disappears once the program terminates. A DBMS is a collection of software that manages the database structure and controls access to data stored in a database. It facilitates the processes of defining, constructing, storing, manipulating, retrieving, and updating data in the database. A full-fledged geographic information system (GIS) is a type of database that is able to handle data input and verification, data storage and management, data output and presentation, data transformation, and interaction with end users. The chapter discusses the complex operations involved in typical GIS use, such as allocation and location of resources. The chapter also investigates the possible approaches when using a DBMS in a GIS environment and explains why pure relational databases are not suitable for handling spatial data.
Article
Keywords: Multi-temporal land cover and change; Methodology; Implementation strategies
The U.S. Geological Survey (USGS), in partnership with several federal agencies, has developed and released four National Land Cover Database (NLCD) products over the past two decades: NLCD 1992, 2001, 2006, and 2011. These products provide spatially explicit and reliable information on the Nation's land cover and land cover change. To continue the legacy of NLCD and further establish a long-term monitoring capability for the Nation's land resources, the USGS has designed a new generation of NLCD products named NLCD 2016. The NLCD 2016 design aims to provide innovative, consistent, and robust methodologies for production of a multi-temporal land cover and land cover change database from 2001 to 2016 at 2-3-year intervals. Comprehensive research was conducted and resulted in developed strategies for NLCD 2016: a streamlined process for assembling and pre-processing Landsat imagery and geospatial ancillary datasets; multi-source integrated training data development and decision-tree based land cover classifications; a temporally, spectrally, and spatially integrated land cover change analysis strategy; a hierarchical theme-based post-classification and integration protocol for generating land cover and change products; a continuous fields biophysical parameters modeling method; and an automated scripted operational system for the NLCD 2016 production. The performance of the developed strategies and methods was tested in twenty World Reference System-2 path/rows throughout the conterminous U.S. An overall agreement ranging from 71% to 97% between land cover classification and reference data was achieved for all tested areas and all years. Results from this study confirm the robustness of this comprehensive and highly automated procedure for NLCD 2016 operational mapping.
Article
Eight-day composite land surface temperature (LST) images from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor are extensively utilized due to their limited number of invalid pixels and smaller file size, in comparison to daily products. Remaining invalid values (the majority caused by cloud coverage), however, still pose a challenge to researchers requiring continuous datasets. Although a number of interpolation methods have been employed, validation has been too limited to provide comprehensive guidance. The goal of this analysis was to compare the performance of all methods previously used for 8-day MODIS LST images under a range of cloud cover conditions and in different seasons. These included two temporal interpolation methods: Linear Temporal and Harmonic Analysis of Time Series (HANTS); two spatial methods: Spline and Adaptive Window; and two spatiotemporal methods: Gradient and Weiss. The impact of topographic, land cover, and climatic factors on interpolation performance was also assessed. Methods were implemented on high quality test images with simulated cloud cover sampled from 101 by 101 pixel sites (1-km pixels) across the conterminous United States. These results provide strong evidence that spatial and spatiotemporal methods have a greater predictive capability than temporal methods, regardless of the time of day or season. This is true even under extremely high cloud cover (>80%). The Spline method performed best at low cloud cover (<30%) with median absolute errors (MAEs) ranging from 0.2 °C to 0.6 °C. The Weiss method generally performed best at greater cloud cover, with MAEs ranging from 0.3 °C to 1.2 °C. The regression analysis revealed that spatial methods tend to perform worse in areas with steeper topographic slopes, temporal methods perform better in warmer climates, and spatiotemporal methods are influenced by both of these factors, to a lesser extent. Assessed covariates, however, explained a low portion of the overall variation in MAEs and did not appear to cause deviations from major interpolation trends at sites with extreme values. While it would be most effective to use the Weiss method for images with medium to high cloud cover, Spline could be applied under all circumstances for simplicity, considering that (i) images with <30% cloud cover represent the vast majority of 8-day LST images requiring interpolation, and (ii) Spline functions are readily available and easy to implement through several software packages. Applying a similar framework to interpolation methods for daily LST products would build on these findings and provide additional information to future researchers.
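As a small illustration of the spatial interpolation setting (synthetic tile and cloud mask; SciPy's cubic griddata is used here only as a stand-in for the Spline method evaluated above):

```python
# Illustrative only: fill simulated cloud-masked pixels in a small LST tile from
# the valid pixels with a cubic spatial interpolator.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
lst = 20 + 5 * np.sin(np.linspace(0, 3, 101))[:, None] * np.cos(np.linspace(0, 3, 101))
mask = rng.random(lst.shape) < 0.3          # ~30% simulated cloud cover

rows, cols = np.indices(lst.shape)
valid = ~mask
filled = lst.copy()
filled[mask] = griddata(
    points=np.column_stack([rows[valid], cols[valid]]),
    values=lst[valid],
    xi=np.column_stack([rows[mask], cols[mask]]),
    method="cubic")
print(np.nanmax(np.abs(filled[mask] - lst[mask])))   # reconstruction error (degrees C)
```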
Article
Knowledge graph (KG) embedding is to embed components of a KG including entities and relations into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the KG. It can benefit a variety of downstream tasks such as KG completion and relation extraction, and hence has quickly gained massive attention. In this article, we provide a systematic review of existing techniques, including not only the state-of-the-arts but also those with latest trends. Particularly, we make the review based on the type of information used in the embedding task. Techniques that conduct embedding using only facts observed in the KG are first introduced. We describe the overall framework, specific model design, typical training procedures, as well as pros and cons of such techniques. After that, we discuss techniques that further incorporate additional information besides facts. We focus specifically on the use of entity types, relation paths, textual descriptions, and logical rules. Finally, we briefly introduce how KG embedding can be applied to and benefit a wide variety of downstream tasks such as KG completion, relation extraction, question answering, and so forth.
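As a concrete instance of the translational family covered by such reviews, here is a minimal TransE-style scoring sketch; the embeddings are untrained random placeholders.

```python
# Minimal TransE-style scoring: a triple (h, r, t) is plausible when the tail
# embedding lies close to head + relation. Vectors are random placeholders.
import numpy as np

def transe_score(h, r, t):
    """Higher (less negative) means more plausible: negative L2 distance."""
    return -np.linalg.norm(h + r - t)

d = 50
rng = np.random.default_rng(0)
entities = {"California": rng.normal(size=d), "USA": rng.normal(size=d)}
relations = {"locatedIn": rng.normal(size=d)}

print(transe_score(entities["California"], relations["locatedIn"], entities["USA"]))
```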
Conference Paper
Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems.
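A short usage sketch of the library on synthetic tabular data (parameters and data are placeholders, not a benchmark from the paper):

```python
# Quick XGBoost usage sketch on synthetic data.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      subsample=0.8, eval_metric="logloss")
model.fit(X_tr, y_tr)
print(accuracy_score(y_te, model.predict(X_te)))
```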
Conference Paper
Linked Open Data has been recognized as a valuable source for background information in data mining. However, most data mining tools require features in propositional form, i.e., a vector of nominal or numerical features associated with an instance, while Linked Open Data sources are graphs by nature. In this paper, we present RDF2Vec, an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs. We generate sequences by leveraging local information from graph sub-structures, harvested by Weisfeiler-Lehman Subtree RDF Graph Kernels and graph walks, and learn latent numerical representations of entities in RDF graphs. Our evaluation shows that such vector representations outperform existing techniques for the propositionalization of RDF graphs on a variety of different predictive machine learning tasks, and that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be easily reused for different tasks.
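A condensed sketch of the RDF2Vec recipe under simplifying assumptions (a toy adjacency-list graph, uniform random walks instead of graph kernels, and illustrative hyperparameters): generate walks, then train word2vec on the walk sequences.

```python
# Condensed RDF2Vec-style sketch: random walks over an RDF-like graph feed a
# word2vec model, whose word vectors become entity embeddings.
import random
from gensim.models import Word2Vec

graph = {                                   # adjacency list: node -> [(predicate, node)]
    "ex:Fire1": [("ex:nearTo", "ex:Powerline7"), ("ex:inCounty", "ex:Shasta")],
    "ex:Powerline7": [("ex:inCounty", "ex:Shasta")],
    "ex:Shasta": [("ex:partOf", "ex:California")],
    "ex:California": [],
}

def random_walks(graph, walks_per_node=10, depth=4, seed=0):
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(depth):
                if not graph[node]:
                    break
                pred, node = rng.choice(graph[node])
                walk += [pred, node]
            walks.append(walk)
    return walks

model = Word2Vec(sentences=random_walks(graph), vector_size=32, window=5,
                 min_count=1, sg=1, epochs=20, seed=0)
print(model.wv["ex:California"][:5])
```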
Conference Paper
Understanding how words change their meanings over time is key to models of language and cultural evolution, but historical data on meaning is scarce, making theories hard to develop and test. Word embeddings show promise as a diachronic tool, but have not been carefully evaluated. We develop a robust methodology for quantifying semantic change by evaluating word embeddings (PPMI, SVD, word2vec) against known historical changes. We then use this methodology to reveal statistical laws of semantic evolution. Using six historical corpora spanning four languages and two centuries, we propose two quantitative laws of semantic change: (i) the law of conformity---the rate of semantic change scales with an inverse power-law of word frequency; (ii) the law of innovation---independent of frequency, words that are more polysemous have higher rates of semantic change.
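A small sketch of the evaluation machinery (with random stand-in vectors rather than trained historical embeddings): align one time period's embedding matrix to another with an orthogonal transform, then measure each word's displacement.

```python
# Sketch of diachronic embedding comparison: orthogonal alignment of two decade
# spaces, then cosine displacement per word. Vectors are random placeholders.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from scipy.spatial.distance import cosine

rng = np.random.default_rng(0)
vocab = ["gay", "broadcast", "cell"]
emb_1900 = {w: rng.normal(size=100) for w in vocab}
emb_1990 = {w: rng.normal(size=100) for w in vocab}

A = np.stack([emb_1990[w] for w in vocab])       # source space (1990s)
B = np.stack([emb_1900[w] for w in vocab])       # target space (1900s)
R, _ = orthogonal_procrustes(A, B)

for w in vocab:
    displacement = cosine(emb_1990[w] @ R, emb_1900[w])
    print(w, round(displacement, 3))             # larger = more semantic change
```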
Article
OpenStreetMap (OSM) data are widely used but their reliability is still variable. Many contributors to OSM have not been trained in geography or surveying and consequently their contributions, including geometry and attribute data inserts, deletions, and updates, can be inaccurate, incomplete, inconsistent, or vague. There are some mechanisms and applications dedicated to discovering bugs and errors in OSM data. Such systems can remove errors through user-checks and applying predefined rules but they need an extra control process to check the real-world validity of suspected errors and bugs. This paper focuses on finding bugs and errors based on patterns and rules extracted from the tracking data of users. The underlying idea is that certain characteristics of user trajectories are directly linked to the type of feature. Using such rules, some sets of potential bugs and errors can be identified and stored for further investigations.
Article
A database is described that has been designed to fulfill the need for daily climate data over global land areas. The dataset, known as Global Historical Climatology Network (GHCN)-Daily, was developed for a wide variety of potential applications, including climate analysis and monitoring studies that require data at a daily time resolution (e.g., assessments of the frequency of heavy rainfall, heat wave duration, etc.). The dataset contains records from over 80 000 stations in 180 countries and territories, and its processing system produces the official archive for U.S. daily data. Variables commonly include maximum and minimum temperature, total daily precipitation, snowfall, and snow depth; however, about two-thirds of the stations report precipitation only. Quality assurance checks are routinely applied to the full dataset, but the data are not homogenized to account for artifacts associated with the various eras in reporting practice at any particular station (i.e., for changes in systematic bias). Daily updates are provided for many of the station records in GHCN-Daily. The dataset is also regularly reconstructed, usually once per week, from its 20+ data source components, ensuring that the dataset is broadly synchronized with its growing list of constituent sources. The daily updates and weekly reprocessed versions of GHCN-Daily are assigned a unique version number, and the most recent dataset version is provided on the GHCN-Daily website for free public access. Each version of the dataset is also archived at the NOAA/National Climatic Data Center in perpetuity for future retrieval.
Article
Knowledge of ore grades and ore reserves, as well as error estimation of these values, is fundamental for mining engineers and mining geologists. Until now no appropriate scientific approach to these estimation problems has existed: geostatistics, the principles of which are summarized in this paper, constitutes a new science leading to such an approach. The author criticizes classical statistical methods still in use and shows some of the main results given by geostatistics. Any ore deposit evaluation, as well as a proper decision on starting mining operations, should be preceded by a geostatistical investigation, which may avoid economic failures.
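A toy illustration of the kind of estimation problem geostatistics addresses, using ordinary kriging from PyKrige on synthetic sample grades (coordinates, grades, and the variogram choice are assumptions):

```python
# Toy ordinary kriging of synthetic ore grades with PyKrige.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 60)                               # sample x coordinates (m)
y = rng.uniform(0, 100, 60)                               # sample y coordinates (m)
grade = 2.0 + 0.02 * x + rng.normal(scale=0.3, size=60)   # ore grade (%)

ok = OrdinaryKriging(x, y, grade, variogram_model="spherical")
gridx = np.arange(0, 100, 5.0)
gridy = np.arange(0, 100, 5.0)
z_est, z_var = ok.execute("grid", gridx, gridy)   # estimates and kriging variance
print(z_est.shape, float(z_var.mean()))
```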
Book
The Data Mining process encompasses many different specific techniques and algorithms that can be used to analyze the data and derive the discovered knowledge. An important problem regarding the results of the Data Mining process is the development of efficient indicators for assessing the quality of the results of the analysis. This quality assessment problem is a cornerstone issue of the whole process because: (i) the analyzed data may hide interesting patterns that the Data Mining methods are called to reveal, and due to the size of the data, the requirement for automatically evaluating the validity of the extracted patterns is stronger than ever; (ii) a number of algorithms and techniques have been proposed which, under different assumptions, can lead to different results; (iii) the number of patterns generated during the Data Mining process is very large, but only a few of these patterns are likely to be of any interest to the domain expert who is analyzing the data. In this chapter we introduce the main concepts and quality criteria in Data Mining. We also present an overview of approaches that have been proposed in the literature for evaluating Data Mining results.
Article
Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a "black art" that requires expert experience, unwritten rules of thumb, or sometimes brute-force search. Much more appealing is the idea of developing automatic approaches which can optimize the performance of a given learning algorithm to the task at hand. In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). The tractable posterior distribution induced by the GP leads to efficient use of the information gathered by previous experiments, enabling optimal choices about what parameters to try next. Here we show how the effects of the Gaussian process prior and the associated inference procedure can have a large impact on the success or failure of Bayesian optimization. We show that thoughtful choices can lead to results that exceed expert-level performance in tuning machine learning algorithms. We also describe new algorithms that take into account the variable cost (duration) of learning experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization on a diverse set of contemporary algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks.
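A brief usage sketch of GP-based hyperparameter tuning with scikit-optimize (the objective and the search space are toy placeholders, not the paper's benchmarks):

```python
# GP-based Bayesian optimisation of two toy hyperparameters with scikit-optimize.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real, Integer

def objective(params):
    learning_rate, n_estimators = params
    # stand-in for the cross-validated loss of a real model
    return (np.log10(learning_rate) + 1.5) ** 2 + (n_estimators - 300) ** 2 / 1e5

space = [Real(1e-3, 1.0, prior="log-uniform", name="learning_rate"),
         Integer(50, 500, name="n_estimators")]

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print(result.x, result.fun)     # best parameters and best observed loss
```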
Article
Regular grids or lattices are frequently used to study ecosystems, for observations, experiments and simulations. The regular rectangular or square grid is used more often than the hexagonal grid, but their relative merits have been little discussed. Here we compare rectangular and hexagonal grids for ecological applications. We focus on the reasons some researchers have preferred hexagonal grids and methods to facilitate the use of hexagonal grids. We consider modelling and other applications, including the role of nearest neighbourhood in experimental design, the representation of connectivity in maps, and a new method for performing field surveys using hexagonal grids, which was demonstrated on montane heath vegetation. The rectangular grid is generally preferred because of its symmetrical, orthogonal co-ordinate system and the frequent use of rasters from Geographic Information Systems. Cells in a rectangular grid can also easily be combined to produce new grids with lower resolutions. However, efficient co-ordinate systems and multi-resolution partitions using the hexagonal grid are available. The nearest neighbourhood in a hexagonal grid is simpler and less ambiguous than in a rectangular grid. When nearest neighbourhood, movement paths or connectivity are important, the rectangular grid may not be suitable. We also investigate important differences between visualizations using hexagonal and rectangular grids. A survey of recent uses of grids in Ecological Modelling suggested that hexagonal grids are rarely used, even in applications for which they are more suitable than rectangular grids, e.g. connectivity and movement paths. Researchers should consider their choice of grid at an early stage in project development, and authors should explain the reasons for their choices.
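A tiny illustration of the neighbourhood argument, assuming the common axial coordinate layout for hexagons: a hexagonal cell has six equidistant neighbours, while a square cell forces a choice between 4- and 8-connectivity with unequal diagonal distances.

```python
# Neighbourhood comparison on assumed coordinate schemes (axial hex vs. row/col square).
HEX_NEIGHBOURS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]
SQUARE_4 = [(+1, 0), (-1, 0), (0, +1), (0, -1)]
SQUARE_8 = SQUARE_4 + [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]

def neighbours(cell, offsets):
    q, r = cell
    return [(q + dq, r + dr) for dq, dr in offsets]

print(len(neighbours((0, 0), HEX_NEIGHBOURS)))   # 6, all at equal centre distance
print(len(neighbours((0, 0), SQUARE_8)))         # 8, diagonals ~1.41x farther away
```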
Article
This is a review of the most important work in wildland fire mathematical modelling which has been carried out at different research centres around the world from the beginning of the 1940s to the present. A generic classification is proposed which allows wildland fire models to be sorted. Surface fire spread models, crown fire initiation and spread models, spotting and ground fire models are reviewed historically and the most significant ones are analysed in depth. The last two sections are dedicated to wildland fire behaviour calculation systems based on the reviewed models. The evolution and complexity of these systems is analysed in parallel with the development of new technologies. Special attention is given to the tools most commonly in current use by forestry agencies.
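One concrete example of the surface fire spread models reviewed here is Rothermel's quasi-steady rate-of-spread equation, which (with its usual symbols, and omitting the sub-models that supply each term) takes the form R = I_R ξ (1 + φ_w + φ_s) / (ρ_b ε Q_ig), where R is the rate of spread, I_R the reaction intensity, ξ the propagating flux ratio, φ_w and φ_s the wind and slope factors, ρ_b the fuel-bed bulk density, ε the effective heating number, and Q_ig the heat of preignition; consult the original formulation for the definitions and units of each term.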
Spatial Wildfire Occurrence Data for the United States
  • K Short
From data to insights: constructing spatiotemporal knowledge graphs for city resilience use cases
  • A Anjomshoaa
  • H Schuster
  • J Wachs
  • A Polleres
Dynamic representations of global crises: creation and analysis of a temporal knowledge graph for conflicts, trade and value networks
  • J Gastinger
  • N Steinert
  • S Gründer-Fahrer
  • M Martin
MCD64A1 MODIS/Terra+Aqua Burned Area Monthly L3 Global 500m SIN Grid V006
  • L Giglio
  • C Justice
  • L Boschetti
  • D Roy
Bayesian optimization: open source constrained global optimization tool for Python
  • F Nogueira