"This paper reviews the changing way in which census geography has been treated with the increasing automation of census data processing. A four-stage model of modern census geography development is presented. In the context of this model, current practice is reviewed, and new opportunities for automated census geography design presented, culminating in a current prototype for the separation of purpose-designed data collection and output geographies. The narrative is presented primarily from a British perspective, but focuses on internationally relevant issues such as the implementation of census geography design, and the influence of census output geography on data analysis."
"The paper describes a problem faced by National Statistical Offices when publishing the results of decennial censuses for small geographical areas. If they publish statistical tables for two or more sets of areas, users can compare the tables and produce new statistics for the areas formed by differencing, which may have populations below confidentiality thresholds. To investigate the problem, the authors construct a software system and carry out a series of experiments using a large synthetic population base for Yorkshire and Humberside [in England]. The results indicate that publishing statistics for zones close in size to the primary areas is not safe unless the zones have been carefully designed. However, publishing statistics for sufficiently large areas such as 5km grid squares or postal sectors alongside enumeration districts is safe."
In this paper we present the GeoViz Toolkit, an open-source, internet-delivered program for geographic visualization and analysis that features a diverse set of software components which can be flexibly combined by users who do not have programming expertise. The design and architecture of the GeoViz Toolkit allows us to address three key research challenges in geovisualization: allowing end users to create their own geovisualization and analysis component set on-the-fly, integrating geovisualization methods with spatial analysis methods, and making geovisualization applications sharable between users. Each of these tasks necessitates a robust yet flexible approach to inter-tool coordination. The coordination strategy we developed for the GeoViz Toolkit, called Introspective Observer Coordination, leverages and combines key advances in software engineering from the last decade: automatic introspection of objects, software design patterns, and reflective invocation of methods.
A tremendous amount of attention has been given to the aging of the population that is occurring in many parts of the world. However, very little work has focused upon the aging of minority populations. Because minority populations often have greater needs for health care and fewer resources to pay for it, it is important to assess the demand for services. This paper takes an initial step in that direction by focusing upon the geographical distribution of elderly minority populations in the US. The study is carried out at several spatial scales, and it is concluded that elderly minority populations tend to be even more segregated than their non-elderly counterparts.
"This paper reviews the practices, problems, and prospects of GIS-based urban modelling. The author argues that current stand-alone and various loose/tight coupling approaches for GIS-based urban modelling are essentially technology-driven without adequate justification and verification for the urban models being implemented. The absolute view of space and time embodied in the current generation of GIS also imposes constraints on the type of new urban models that can be developed. By reframing the future research agenda from a geographical information science (GISci) perspective, the author contends that the integration of urban modelling with GIS must proceed with the development of new models for the informational cities, the incorporation of multi-dimensional concepts of space and time in GIS, and the further extension of the feature-based model to implement these new urban models and spatial-temporal concepts according to the emerging interoperable paradigm."
Describes a new system for the reconstruction of geometric map
elements as developed in the Dutch TopSpin-PNEM (Provinciale
Noord-Brabantse Energie Maatschappij) project “Knowledge-based
conversion of utility maps”. Vectorized map elements coming from
an image interpretation system are reconstructed and reference elements
are aligned w.r.t. a base map so that geometrical inference is made
possible in a geometrically correct topographic base map. This is done
using a map correspondence between the utility map and the base map
based on mutual topography and relational information supplied by a map
interpretation module, applying a novel conflict resolution strategy.
With the map correspondence, the relations, and several object-specific
functions, the map elements are reconstructed in a relationally and
geometrically correct manner.
This is the editorial for the special issue on "data-intensive geospatial computing", which I guest edited for the International Journal of Geographical Information Science (Taylor & Francis). As remarked in the editorial, the special issue is particularly special in the sense that all source code and data are published together with the papers. This editorial elaborates on scholarly communication, with particular attention to publishing data alongside papers and the emergence of open access journals, in order to make our research more open and accessible.
Urban streets are hierarchically organized in the sense that a majority of streets are trivial, while a minority of streets are vital. This hierarchy can be simply, but elegantly, characterized by the 80/20 principle: 80 percent of streets are less connected (below the average), while 20 percent of streets are well connected (above the average); out of the 20 percent, 1 percent of streets are extremely well connected. This paper, using a European city as an example, examines such street hierarchies at a much more detailed level, from the perspective of geometric and topological properties. Based on an empirical study, we further prove a previous conjecture that a minority of streets accounts for a majority of traffic flow; more accurately, the top 20 percent of streets accommodate 80 percent of traffic flow (20/80), and the top 1 percent of streets account for more than 20 percent of traffic flow (1/20). Our study provides new evidence as to how a city is (self-)organized, contributing to the understanding of cities and their evolution using increasingly available mobility geographic information.
Relying on random and purposive moving agents, we simulated human movement in large street networks. We found that aggregate flow, assigned to individual streets, is mainly shaped by the underlying street structure, and that human moving behavior (either random or purposive) has little effect on the aggregate flow. This finding implies that, given a street network, the movement patterns generated by purposive walkers (mostly human beings) and by random walkers are the same. Based on the simulation and correlation analysis, we further found that closeness centrality is not a good indicator of human movement, in contrast to a long-standing view held by space syntax researchers. Instead, we suggest that Google's PageRank and its modified version (weighted PageRank), as well as betweenness and degree centralities, are all better indicators for predicting aggregate flow.
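As a toy illustration of the ranking idea in the abstract above, PageRank scores over a street-street connectivity graph can be computed by simple power iteration. This is a minimal sketch: the four-street graph is invented for illustration, and it shows neither the paper's networks nor the weighted-PageRank variant it recommends.

```python
# Minimal PageRank by power iteration on a street connectivity graph.
# The toy adjacency list below is hypothetical, not data from the paper.
def pagerank(adj, damping=0.85, iters=100):
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            if not out:                     # dangling node: spread rank evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                share = damping * rank[v] / len(out)
                for u in out:
                    new[u] += share
        rank = new
    return rank

streets = {                                 # undirected street-street connectivity
    "A": ["B", "C"], "B": ["A", "C", "D"],
    "C": ["A", "B"], "D": ["B"],
}
scores = pagerank(streets)
print(max(scores, key=scores.get))          # the best-connected street
```

On this tiny graph the most connected street ("B") receives the highest score, which is the pattern the abstract reports correlating with aggregate flow.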
Land use classification is essential for urban planning. Urban land use types
can be differentiated either by their physical characteristics (such as
reflectivity and texture) or social functions. Remote sensing techniques have
been recognized as a vital method for urban land use classification because of
their ability to capture the physical characteristics of land use. Although
significant progress has been achieved in remote sensing methods designed for
urban land use classification, most techniques focus on physical
characteristics, whereas knowledge of social functions is not adequately used.
Owing to the wide usage of mobile phones, the activities of residents, which
can be retrieved from mobile phone data, can indicate the social function of
land use. This brings about the opportunity to derive land use information
from mobile phone data. To verify
the application of this new data source to urban land use classification, we
first construct a time series of aggregated mobile phone data to characterize
land use types. This time series is composed of two aspects: the hourly
relative pattern, and the total call volume. A semi-supervised fuzzy c-means
clustering approach is then applied to infer the land use types. The method is
validated using mobile phone data collected in Singapore. Land use is
determined with a detection rate of 58.03%. An analysis of the land use
classification results shows that the accuracy decreases as the heterogeneity
of land use increases, and increases as the density of cell phone towers
increases.
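The two-part signature described above (an hourly relative pattern plus a total call volume) can be sketched in a few lines. The record format and sample values below are made up for illustration; they are not the Singapore data.

```python
# Build the two-part land-use signature for one cell tower:
# (a) hourly relative pattern (24 values summing to 1), (b) total call volume.
# The sample records are invented, not the paper's Singapore dataset.
from collections import defaultdict

def tower_signature(records):
    """records: iterable of (hour, call_count) tuples for one tower."""
    hourly = defaultdict(float)
    for hour, count in records:
        hourly[hour] += count
    total = sum(hourly.values())
    pattern = [hourly.get(h, 0.0) / total if total else 0.0 for h in range(24)]
    return pattern, total

records = [(9, 120), (9, 30), (12, 200), (18, 50)]
pattern, total = tower_signature(records)
print(total)        # 400
print(pattern[9])   # 150 / 400 = 0.375
```

Signatures of this shape could then be fed to a clustering step such as the semi-supervised fuzzy c-means mentioned in the abstract.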
Two fundamental issues surrounding research on Zipf's law regarding city sizes are whether and why Zipf's law holds. This paper does not deal with the latter issue of why, and instead investigates whether Zipf's law holds in a global setting, thus involving all cities around the world. Unlike previous studies, which have mainly relied on conventional census data and census-bureau-imposed definitions of cities, we adopt naturally and objectively delineated cities, or natural cities to be more precise, in order to examine Zipf's law. We find that Zipf's law holds remarkably well for all natural cities at the global level, and remains almost valid at the continental level, except for Africa at certain time instants. We further examine the law at the country level, and note that Zipf's law is violated from country to country or from time to time. This violation is mainly due to a limited scope of observation: being limited to individual countries, and to a static view of city-size distributions. The central argument of this paper is that Zipf's law is universal, and we therefore must use the correct scope in order to observe it. We further find that this law is reflected not only in city sizes, but also in city numbers: the number of cities in individual countries follows an inverse power relationship; the largest country has twice as many cities as the second largest country, three times as many as the third largest country, and so on.
KEYWORDS: Cities, night-time imagery, city-size distributions, head/tail breaks, big data
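The rank-size regularity in the abstract above can be checked with a few lines of code: sort the sizes, regress log size on log rank, and look for a slope near -1. The sequence below is synthetic, not the paper's nightlight-derived natural cities.

```python
# Rank-size check for Zipf's law: on log-log axes, rank vs. size should be
# near-linear with slope close to -1. The sizes here are a synthetic example.
import math

def zipf_slope(sizes):
    sizes = sorted(sizes, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(sizes) + 1)]
    ys = [math.log(s) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den                 # least-squares slope

ideal = [1000 / r for r in range(1, 101)]   # an exact Zipf sequence
print(round(zipf_slope(ideal), 3))          # -1.0
```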
The analysis of local spatial autocorrelation for spatial attributes has been an important concern in geographical inquiry. In this chapter, we propose a concept and algorithm of k-order neighbours based on Delaunay’s triangulated irregular networks (TIN) and redefine Getis and Ord’s (Geographical Analysis, 24, 189–206, 1992) local spatial autocorrelation statistic as G(k), with weight coefficient w(k) based on k-order neighbours, for the study of local patterns in spatial attributes. To test the validity of these statistics, an experiment is performed using spatial data on the elderly population in Ichikawa City, Chiba Prefecture, Japan. The difference between the weight coefficients of the k-order neighbours and the distance parameter in measuring the spatial proximity of districts located in the city center and near the city limits is found by Monte Carlo simulation.
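The k-order neighbour concept above reduces to shortest-path distance on a contiguity graph: order 1 is the direct (Delaunay) neighbours, order 2 the neighbours of neighbours, and so on. A breadth-first search makes this concrete; the adjacency list is a made-up example, not the Ichikawa districts.

```python
# k-order neighbours via breadth-first search over a contiguity graph.
# The district adjacency below is invented for illustration.
from collections import deque

def k_order_neighbours(adj, start, k):
    """Return the set of nodes whose shortest-path distance from start is exactly k."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if dist[v] == k:
            continue                     # no need to expand beyond order k
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return {v for v, d in dist.items() if d == k}

districts = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2, 5], 5: [4]}
print(k_order_neighbours(districts, 1, 2))   # {4}
```

A binary weight w(k) can then be 1 for districts at order k and 0 otherwise, in the spirit of the redefined statistic.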
Based on the concepts of isovists and medial axes, we developed a set of algorithms that can automatically generate axial lines for representing individual linearly stretched parts of the open space of an urban environment. Open space is the space between buildings, where people can freely move around. The generation of axial lines has been a key aspect of space syntax research, conventionally relying on hand-drawn axial lines of an urban environment, often called an axial map, for urban morphological analysis. Although various attempts have been made towards an automatic solution, few of them can produce the axial map that consists of the least number of longest visibility lines, and none of them really works for different urban environments. Our algorithms provide a better solution than existing ones. Throughout this paper, we have also argued and demonstrated that the axial lines constitute a true skeleton, superior to medial axes, in capturing what we perceive about the urban environment.
Keywords: Visibility, space syntax, topological analysis, medial axes, axial lines
Axial lines are defined as the longest visibility lines for representing individual linear spaces in urban environments. The least number of axial lines that cover the free space of an urban environment, or the space between buildings, constitutes what is often called an axial map. This is a fundamental tool in space syntax, a theory developed by Bill Hillier and his colleagues for characterizing the underlying urban morphologies. For a long time, generating axial lines with the help of graphics software has been a tedious manual process that is criticized for being time-consuming, subjective, or even arbitrary. In this paper, we redefine axial lines as the least number of individual straight line segments mutually intersected along natural streets that are generated from street center lines using the Gestalt principle of good continuity. Based on this new definition, we develop an automatic solution to generating the newly defined axial lines from street center lines. We apply this solution to six typical street networks (three from North America and three from Europe), and generate a new set of axial lines for analyzing the urban morphologies. Through a comparison study between the new axial lines and the conventional or old axial lines, and between the new axial lines and natural streets, we demonstrate with empirical evidence that the newly defined axial lines are a better alternative in capturing the underlying urban structure.
Keywords: Space syntax, street networks, topological analysis, traffic, head/tail division rule
Scaling of geographic space refers to the fact that for a large geographic area its small constituents or units are much more common than the large ones. This paper develops a novel perspective on the scaling of geographic space using large street networks involving both cities and countryside. Given a street network of an entire country, we decompose the street network into individual blocks, each of which forms a minimum ring or cycle, such as city blocks and field blocks. The block sizes demonstrate the scaling property, i.e., far more small blocks than large ones. Interestingly, we find that the mean of all the block sizes can easily separate small and large blocks: a high percentage (e.g., 90%) of smaller ones and a low percentage (e.g., 10%) of larger ones. Based on this regularity, termed the head/tail division rule, we propose an approach to delineating city boundaries by grouping the smaller blocks. The extracted city sizes for the three largest European countries (France, Germany, and the UK) exhibit power law distributions. We further define the concept of border number as the topological distance of a block from the outermost border, to map the center(s) of the country and the city. We draw an analogy between a country and a city (or geographic space in general) and a complex organism like the human body or the human brain to further elaborate on the power of this block perspective in reflecting the structure or patterns of geographic space.
Keywords: Power law distribution, scaling of geographic space, data-intensive geospatial computing, street networks
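The head/tail division rule described above is a one-line computation: split the block sizes at their mean. The heavy-tailed sizes below are synthetic, not real city or field blocks; per the abstract, the many small blocks (the tail) are the ones grouped into city areas.

```python
# Head/tail division rule sketched: split block sizes at the mean.
# Sizes are synthetic heavy-tailed values, not real street-network blocks.
def head_tail_split(sizes):
    mean = sum(sizes) / len(sizes)
    head = [s for s in sizes if s > mean]    # the few large blocks
    tail = [s for s in sizes if s <= mean]   # the many small blocks
    return head, tail

blocks = [1, 1, 2, 2, 3, 3, 4, 5, 8, 71]    # far more small blocks than large
head, tail = head_tail_split(blocks)
print(len(head), len(tail))                  # 1 9
```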
This paper provides a new geospatial perspective on whether or not Zipf's law
holds for all cities or for the largest cities in the United States using a
massive dataset and its computing. A major problem around this issue is how to
define cities or city boundaries. Most of the investigations of Zipf's law rely
on the demarcations of cities imposed by census data, e.g., metropolitan areas
and census-designated places. These demarcations or definitions (of cities) are
criticized for being subjective or even arbitrary. Alternative solutions to
defining cities are suggested, but they still rely on census data for their
definitions. In this paper we demarcate urban agglomerations by clustering
street nodes (including intersections and ends), forming what we call natural
cities. Based on the demarcation, we found that Zipf's law holds remarkably
well for all the natural cities (over 2-4 million in total) across the United
States. The holding of Zipf's law shows little sensitivity to the clustering
resolution used for demarcating the natural cities. This is in sharp contrast
to urban areas, as defined in the census data, for which Zipf's law does not
hold stably.
Keywords: Natural cities, power law, data-intensive geospatial computing,
scaling of geographic space
In this paper, we introduced a novel approach to computing the fewest-turn
map directions or routes based on the concept of natural roads. Natural roads
are joined road segments that perceptually constitute good continuity. This
approach relies on the connectivity of natural roads rather than that of road
segments for computing routes or map directions. Because of this, the derived
routes possess the fewest turns. However, what we intend to achieve are
routes that not only possess the fewest turns, but are also as short as
possible. This kind of map direction is more effective and favored by people,
because it imposes less cognitive burden. Furthermore, the computation of the
routes is more efficient, since it is based on the graph encoding the
connectivity of roads, which is significantly smaller than the graph of road
segments. We conducted experiments on eight urban street networks from North
America and Europe in order to illustrate the above-stated advantages. The
experimental results indicate that the fewest-turn routes possess fewer turns
and shorter distances than the simplest paths and the routes provided by Google
Maps. For example, the fewest-turn-and-shortest routes are on average 15%
shorter than the routes suggested by Google Maps, while the number of turns is
just half as many. This approach is a key technology behind FromToMap.org, a
web mapping service using OpenStreetMap data.
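The key idea above is that when graph nodes are whole (natural) roads and edges join intersecting roads, each hop is one turn, so a shortest path in that graph is a fewest-turn route. A breadth-first search captures this; the road graph and names below are invented, and the sketch omits the paper's secondary objective of also minimizing distance.

```python
# Fewest-turn routing sketch: nodes are whole roads, edges join intersecting
# roads, so each hop is one turn. BFS finds a route with the fewest turns.
# The road connectivity graph below is invented for illustration.
from collections import deque

def fewest_turns(road_graph, origin, destination):
    """BFS over the road connectivity graph; returns the road sequence."""
    prev = {origin: None}
    queue = deque([origin])
    while queue:
        road = queue.popleft()
        if road == destination:
            path = []
            while road is not None:          # walk back to the origin
                path.append(road)
                road = prev[road]
            return path[::-1]
        for nxt in road_graph[road]:
            if nxt not in prev:
                prev[nxt] = road
                queue.append(nxt)
    return None

roads = {
    "Main St": ["1st Ave", "2nd Ave"],
    "1st Ave": ["Main St", "Oak Rd"],
    "2nd Ave": ["Main St", "Oak Rd"],
    "Oak Rd":  ["1st Ave", "2nd Ave"],
}
route = fewest_turns(roads, "Main St", "Oak Rd")
print(route, "turns:", len(route) - 1)
```

This road graph is also much smaller than a segment graph, which is the efficiency argument the abstract makes.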
There has been much excitement and activity in recent years related to the relatively sudden availability of earth-related data and the computational capabilities to visualize and analyze these data. Despite the increased ability to collect and store large volumes of data, few individual data sets exist that provide both the requisite spatial and temporal observational frequency for many urban and/or regional-scale applications. The motivating view of this paper, however, is that the relative temporal richness of one data set can be leveraged with the relative spatial richness of another to fill in the gaps. We also note that any single interpolation technique has advantages and disadvantages, and that, particularly with respect to the spatial and the temporal dimensions, different techniques are more appropriate for specific types of data. We therefore propose a space-time interpolation approach whereby two interpolation methods – one for the temporal and one for the spatial dimension – are used in tandem in order to maximize the quality of the result. We call our ensemble approach the Space-Time Interpolation Environment (STIE). The primary steps within this environment include a spatial interpolator, a time-step processor, and a calibration step that enforces phenomenon-related behavioral constraints. The specific interpolation techniques used within the STIE can be chosen on the basis of suitability for the data and application at hand. In the current paper, we describe STIE conceptually, including the structure of the data inputs and output, details of the primary steps (the STIE processors), and the mechanism for coordinating the data and the processors. We then describe a case study focusing on urban land cover in Phoenix, Arizona. Our empirical results show that STIE was effective as a space-time interpolator for urban land cover, with an accuracy of 85.2%, and furthermore that it was more effective than a single technique used alone.
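The tandem idea above can be sketched with two deliberately simple stand-ins: linear interpolation in time at each station, followed by inverse-distance weighting in space. The stations and values are invented, and these two methods are only placeholders for whatever techniques a STIE-style environment would actually plug in.

```python
# Tandem space-time interpolation in the spirit of STIE: interpolate each
# station's series in time, then interpolate spatially (here with IDW).
# Stations, coordinates, and values are invented for illustration.
def interp_time(series, t):
    """Linear interpolation in a dict {time: value} at time t."""
    times = sorted(series)
    for t0, t1 in zip(times, times[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return series[t0] * (1 - w) + series[t1] * w
    raise ValueError("t outside observed range")

def idw(samples, x, y, power=2):
    """Inverse-distance weighting over [(x, y, value), ...]."""
    num = den = 0.0
    for sx, sy, v in samples:
        d2 = (sx - x) ** 2 + (sy - y) ** 2
        if d2 == 0:
            return v                        # exactly at a sample point
        w = d2 ** (-power / 2)
        num += w * v
        den += w
    return num / den

stations = {(0.0, 0.0): {2000: 10.0, 2010: 20.0},
            (1.0, 0.0): {2000: 30.0, 2010: 40.0}}
t = 2005
samples = [(sx, sy, interp_time(series, t))
           for (sx, sy), series in stations.items()]
print(idw(samples, 0.5, 0.0))               # midpoint of 15 and 35 -> 25.0
```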
A fractal can be simply understood as a set or pattern in which there are far more small things than large ones, e.g., far more small geographic features than large ones on the earth's surface, or far more large-scale maps than small-scale maps for a geographic region. This paper attempts to argue and provide evidence for the fractal nature of maps and mapping. It is the underlying fractal structure of geographic features, either natural or human-made, that makes reality mappable, large-scale maps generalizable, and cities imageable. The fractal nature is also what underlies the beauty of maps. After introducing some key fractal concepts such as recursion, self-similarity, scaling ratio, and scaling exponent, this paper demonstrates that fractal thought is rooted in long-standing map-making practices such as map series subdivision, visual hierarchy, and Töpfer's radical law. Drawing on previous studies of head/tail breaks, mapping can be considered a ranking and head/tail breaking process; that is, to recursively divide things, according to their geometric, topological, and/or semantic properties, into the head and the tail for map generalization, statistical mapping, and cognitive mapping. Given the fractal nature of maps and mapping, cartography should be considered a perfect combination of science and art, and scaling must be formulated as a law of cartography, or that of geography in general.
KEYWORDS: Scaling of geographic features, map generalization, statistical mapping, cognitive mapping, head/tail breaks
It is well accepted in the space syntax community that traffic flow is significantly correlated with a morphological property of streets, which are represented by axial lines forming a so-called axial map. The correlation coefficient (R-squared value) approaches 0.8, or even higher values, according to the space syntax literature. In this paper, we study the same issue using the Hong Kong street network and the Hong Kong Annual Average Daily Traffic (AADT) datasets, and find surprisingly that street-based topological representations (or street-street topologies) tend to be better representations than the axial map. In other words, vehicle flow is correlated with a morphological property of streets better than with that of axial lines. Based on this finding, we suggest street-based topological representations as an alternative GIS representation, and topological analyses as a new analytical means for geographic knowledge discovery. Comment: 14 pages, 9 figures, 6 tables, submitted to International Journal of Geographic Information Science
Geographical information systems are ideal candidates for the application of
parallel programming techniques, mainly because they usually handle large data
sets. To help us deal with complex calculations over such data sets, we
investigated the performance constraints of a classic master-worker parallel
paradigm over a message-passing communication model. To this end, we present a
new approach that employs an external database in order to improve the
calculation/communication overlap, thus reducing the idle times for the worker
processes. The presented approach is implemented as part of a parallel
radio-coverage prediction tool for the GRASS environment. The prediction
calculation employs digital elevation models and land-usage data in order to
analyze the radio coverage of a geographical area. We provide an extended
analysis of the experimental results, which are based on real data from an LTE
network currently deployed in Slovenia. Based on the results of the
experiments, which were performed on a computer cluster, the new approach
exhibits better scalability than the traditional master-worker approach. We
successfully tackled real-world data sets, while greatly reducing the
processing time and saturating the hardware utilization.
A city can be topologically represented as a connectivity graph, consisting of nodes representing individual spaces and links if the corresponding spaces are intersected. It turns out in the space syntax literature that some defined topological metrics can capture human movement rates in individual spaces. In other words, the topological metrics are significantly correlated to human movement rates, and individual spaces can be ranked by the metrics for predicting human movement. However, this correlation has never been well justified. In this paper, we study the same issue by applying the weighted PageRank algorithm to the connectivity graph or space-space topology for ranking the individual spaces, and find surprisingly that (1) the PageRank scores are better correlated to human movement rates than the space syntax metrics, and (2) the underlying space-space topology demonstrates small world and scale free properties. The findings provide a novel justification as to why space syntax, or topological analysis in general, can be used to predict human movement. We further conjecture that this kind of analysis is no more than predicting a drunkard's walking on a small world and scale free network.
Keywords: Space syntax, topological analysis of networks, small world, scale free, human movement, and PageRank
This paper deals with multiway spatial joins when (i) there is limited time for query processing and the goal is to retrieve the best possible solutions within this limit, and (ii) there is unlimited time and the goal is to retrieve a single exact solution, if such a solution exists, or the best approximate one otherwise. The first case is motivated by the high cost of join processing in real-time systems involving large amounts of multimedia data, while the second is motivated by applications that require "negative" examples. We propose several search algorithms for query processing under these conditions. For the limited-time case we develop non-deterministic search heuristics that can quickly retrieve good solutions. However, these heuristics are not guaranteed to find the best solutions, even without a time limit. Therefore, for the unlimited-time case we describe systematic search algorithms tailored specifically for the efficient retrieval of a single solution. Both types of algorithms are integrated with R-trees in order to prune the search space. Our proposal is evaluated with an extensive experimental comparison.
Spatial relations are the basis for many selections users perform when they query geographic information systems (GISs). Although such query languages use natural-language-like terms, the formal definitions of those spatial relations rarely reflect the same meaning people would apply when they communicate among each other. To bridge the gap between computational models for spatial relations and people's use of spatial terms in their natural languages, a model for the geometry of spatial relations was calibrated for a set of 59 English-language spatial predicates. The model distinguishes topological and metric properties. The calibration from sketches that were drawn by 34 human subjects identifies ten groups of spatial terms with similar properties and provides a mapping from spatial terms onto significant geometric parameters and their values. The calibration's results reemphasize the importance of topological over metric properties in the selection of English-language spatial terms. The model provides a basis for high-level spatial query languages that exploit natural-language terms and serves as a model for processing such queries.
This paper presents a new method for assessing error in digital vector geographic data where the features represented can be modelled closely by fractal geometry. Using example hydrological data from Ordnance Survey of Great Britain maps at a range of scales, a resolution can be calculated below which the digital representation of the feature does not exhibit fractal characteristics. It is proposed that this resolution represents the minimum ground resolution of the map, which in turn can be related to the map scale.
Spatial relations are important in numerous domains, such as Spatial Query Languages, Image and Multimedia Databases, Reasoning and Geographic Applications. This paper is concerned with the retrieval of topological and direction relations using spatial data structures based on Minimum Bounding Rectangles. We describe topological and direction relations between region objects and we study the spatial information that Minimum Bounding Rectangles convey about the actual objects they enclose. Then we apply the results in R-trees and their variations, R+-trees and R*-trees, in order to minimise the number of disk accesses for queries involving topological and direction relations. We also investigate queries that express complex spatial conditions in the form of disjunctions and conjunctions, and we discuss possible extensions.
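The core of an MBR-based filter step, as discussed above, is a cheap rectangle test: rectangles that cannot intersect prune their objects outright, while intersecting rectangles only mark candidates that must still be refined against the exact geometries. The coordinates below are illustrative.

```python
# Sketch of an MBR filter step: a bounding-rectangle test can only prune
# candidates; intersecting MBRs still require refinement on real geometries.
# Rectangles are (xmin, ymin, xmax, ymax); the coordinates are illustrative.
def mbr_intersect(a, b):
    """True if the two axis-aligned rectangles overlap (or touch)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

print(mbr_intersect((0, 0, 2, 2), (1, 1, 3, 3)))   # True  -> refine further
print(mbr_intersect((0, 0, 1, 1), (2, 2, 3, 3)))   # False -> safely pruned
```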
This paper reports the results of a series of experiments designed to establish how non-expert subjects conceptualize geospatial phenomena. Subjects were asked to give examples of geographical categories in response to a series of differently phrased elicitations. The results yield an ontology of geographical categories---a catalogue of the prime geospatial concepts and categories shared in common by human subjects independently of their exposure to scientific geography. When combined with nouns such as feature and object, the adjective geographic elicited almost exclusively elements of the physical environment of geographical scale or size, such as mountain, lake, and river. The phrase things that could be portrayed on a map, on the other hand, produced many geographical-scale artefacts (roads, cities, etc.) and flat objects (states, countries, etc.), as well as some physical feature types. These data reveal a considerable mismatch between the meanings assigned to the terms `geography' and `geographic' by scientific geographers and by ordinary subjects, so that scientific geographers are not in fact studying geographical phenomena as such phenomena are conceptualized by naïve subjects. The data suggest, rather, a special role in determining the subject-matter of scientific geography for the concept of what can be portrayed on a map. This work has implications for work on usability and interoperability in geographical information science, and it also throws light on subtle and hitherto unexplored ways in which ontological terms such as `object', `entity', and `feature' interact with geographical concepts.
Increasing interest in configuration similarity is currently developing in the context of Digital Libraries, Spatial Databases, and Geographical Information Systems. The corresponding queries retrieve all database configurations that match an input description (e.g., "find all configurations where an object x0 is about 5 km northeast of another object x1, which, in turn, is inside object x2"). This paper introduces a framework for configuration similarity that takes into account all major types of spatial constraints (topological, direction, distance). We define appropriate fuzzy similarity measures for each type of constraint to provide flexibility and allow the system to capture real-life needs. We then apply pre-processing techniques to explicate constraints in the query, and present algorithms that effectively solve the problem. Extensive experimental results demonstrate the applicability of our approach to images and queries of considerable size.
Modeling of erosion and deposition in complex terrain within a geographic information system (GIS) requires a high-resolution digital elevation model (DEM), reliable estimation of topographic parameters, and formulation of erosion models adequate for digital representation of spatially distributed parameters. Regularized spline with tension was integrated within a GIS for computation of DEMs and topographic parameters from digitized contours or other point elevation data. For construction of flow lines and computation of upslope contributing areas, an algorithm based on a vector-grid approach was developed. The spatial distribution of areas with topographic potential for erosion or deposition was then modeled using an approach based on the unit stream power and directional derivatives of the surface representing the sediment transport capacity. The presented methods are illustrated on study areas in central Illinois and the Yakima Ridge, Washington.
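Upslope contributing area, as mentioned above, can be illustrated on a raster DEM with a steepest-descent (D8) accumulation: route each cell's area to its lowest neighbour, processing cells from highest to lowest. This is only a toy stand-in; the paper's algorithm is vector-grid based and more sophisticated, and the elevation grid below is invented.

```python
# Toy upslope contributing area on a raster DEM via steepest-descent (D8)
# flow directions. Cells are processed from highest to lowest elevation.
# The DEM is invented; the paper uses a vector-grid flow-line approach.
def upslope_area(dem):
    rows, cols = len(dem), len(dem[0])
    cells = sorted(((dem[r][c], r, c) for r in range(rows) for c in range(cols)),
                   reverse=True)
    area = [[1] * cols for _ in range(rows)]   # each cell contributes itself
    for z, r, c in cells:
        best, target = 0.0, None               # steepest of the 8 neighbours
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = z - dem[rr][cc]
                    if drop > best:
                        best, target = drop, (rr, cc)
        if target:                             # pass accumulated area downhill
            area[target[0]][target[1]] += area[r][c]
    return area

dem = [[3, 2],
       [2, 1]]
print(upslope_area(dem))   # the low corner collects all four cells
```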
As efforts grow to develop spatio-temporal database systems and temporal geographic information systems that are capable of conveying how geographic phenomena change, it is important to distinguish the elements that are fundamental to scenarios of change. This paper presents a model based on the explicit description of change with respect to states of existence and non-existence for identifiable objects. Such changes are of concern when, for instance, modeling and reasoning about nations that are subsumed through conflict only to return once more at a later time, or about water bodies that fluctuate due to seasonal or climatic change. The basis for tracing these changes is the concept of object identity. Identity, distinct from an object's properties, values, or structure, is that unique characteristic that distinguishes one object from another. Based on a small set of primitives relating to the identity states of objects, we model the semantics associated with change and through a systematic derivation, a complete set of identity-based change operations evolves from the primitives. These operations are basic to the types of change commonly experienced by geographic phenomena and modeled by researchers studying spatio-temporal change. This approach highlights the minimum elements necessary for reasoning about change, namely, object identity, an ordering of identity states, and co-occurrence of identity states.
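The identity-based primitives lend themselves to a small state-machine sketch. The operation names and the restriction to a simplified subset below are assumptions for illustration; the paper derives a complete operation set systematically.

```python
from enum import Enum

class IdentityState(Enum):
    NON_EXISTENT = 0
    EXISTENT = 1

# A simplified subset of identity-based change operations, each defined
# by the (state before, state after) pair it induces.  Operations such
# as reincarnation additionally depend on an object's earlier history,
# which a single pair of states cannot express.
OPERATIONS = {
    "create":                 (IdentityState.NON_EXISTENT, IdentityState.EXISTENT),
    "destruct":               (IdentityState.EXISTENT, IdentityState.NON_EXISTENT),
    "continue_existence":     (IdentityState.EXISTENT, IdentityState.EXISTENT),
    "continue_non_existence": (IdentityState.NON_EXISTENT, IdentityState.NON_EXISTENT),
}

def classify_change(before, after):
    # Map an ordered pair of identity states to a change operation.
    for name, pair in OPERATIONS.items():
        if pair == (before, after):
            return name
    return "unknown"

# A nation subsumed through conflict, then restored at a later time:
history = [IdentityState.EXISTENT, IdentityState.NON_EXISTENT, IdentityState.EXISTENT]
print([classify_change(a, b) for a, b in zip(history, history[1:])])
```

This captures the abstract's three minimal elements: object identity (the entity the history belongs to), an ordering of identity states, and the pairwise co-occurrence of states that defines each operation.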
This paper describes the process of building a GIS for use in real time by blind travellers. Initially, the components of a Personal Guidance System (PGS) for blind pedestrians are outlined. The location-finding and database components of the system are then elaborated. Next follows a discussion of the environmental features likely to be used by blind travellers, and of the different path-following and environmental-learning modes that can be activated in the system. Developments such as personalizing the system and accounting for veering are also presented. Finally, possible competing schemes and problems related to the GIS component are examined.
It is becoming easier to combine environmental data and models to provide information for problem-solving by environmental policy analysts, decision-makers, and land managers. However, the scale dependencies of each of these (data, model, and problem) can mean that the resulting information is misleading or even invalid. This paper describes the development of a systematic framework (dubbed the ‘Scale Matcher’) for identifying and matching the scale requirements of a problem with the scale limitations of spatial data and models. The Scale Matcher framework partitions the complex array of scale issues into more manageable components that can be individually quantified. First, the scale characteristics of data, model, and problem are separated into their scale components of extent, accuracy, and precision, and each is associated with suitable metrics. Second, a comprehensive set of pairwise matches between these components is defined. Third, a procedure is devised to lead the user through a process of systematically comparing or matching each scale component. In some cases, the matches are simple comparisons of the relevant metrics. Others require the combination of data variability and model sensitivity to be investigated by randomly simulating data and model imprecision and inaccuracy. Finally, a conclusion is drawn as to the scale compatibility of the Data–Model–Problem trio based on the overall procedure result. Listing the individual match results as a set of scale assumptions helps to draw attention to them, making users more aware of the limitations of spatial modelling. Application of the Scale Matcher is briefly illustrated with a case study, in which the scale suitability of two sources of soil map data for identifying areas of vulnerability to groundwater pollution was tested.
The Scale Matcher showed that one source of soil map data had unacceptable scale characteristics, and the other was marginal for addressing the problem of nitrate leaching vulnerability. The scale-matching framework successfully partitioned the scale issue into a series of more manageable comparisons and gave the user more confidence in the scale validity of the model output.
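The simple-comparison end of the pairwise matching procedure can be sketched as follows. The two checks, the dictionary keys, and the example values are assumptions for illustration; the framework also covers accuracy components and simulation-based matches that this sketch omits.

```python
def match_extent(data_extent_km, problem_extent_km):
    # The data must cover at least the problem's spatial extent.
    return data_extent_km >= problem_extent_km

def match_precision(data_resolution_m, problem_resolution_m):
    # The data's grid spacing must be at least as fine as the finest
    # detail the problem requires.
    return data_resolution_m <= problem_resolution_m

def scale_compatible(data, problem):
    # Overall verdict: every pairwise match must pass.  Listing the
    # individual results exposes the scale assumptions being made.
    checks = {
        "extent": match_extent(data["extent_km"], problem["extent_km"]),
        "precision": match_precision(data["resolution_m"], problem["resolution_m"]),
    }
    return all(checks.values())

# Hypothetical soil map vs. a nitrate-leaching vulnerability problem
soil_map = {"extent_km": 120.0, "resolution_m": 250.0}
leaching_problem = {"extent_km": 80.0, "resolution_m": 100.0}
print(scale_compatible(soil_map, leaching_problem))  # fails on precision
```

A single failed match is enough to flag the data source as unsuitable, which mirrors how the case study rejected one soil map and rated the other as marginal.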
While the business intelligence sector, involving data warehouses and online analytical processing (OLAP) technologies, is experiencing strong growth in the IT marketplace, relatively little attention has been devoted to the problem of utilizing such tools in conjunction with GIS. This study contributes to the development of this research area by examining the issues involved in the design and implementation of an integrated data warehouse and GIS system that delivers analytical OLAP and mapping results in real‐time across the Web. The case study chosen utilizes individual records from the US 1880 population census, which have recently been made available by the North Atlantic Population Project. Although historical datasets of this kind present a number of challenges for data warehousing, the results indicate that the integrated approach adopted offers a much more flexible and powerful analytical methodology for this kind of large social science dataset than has hitherto been available.
Large quantities of herbicide were sprayed onto the forests of southern Vietnam in the 1960s and early 1970s. Over 30 years later, many of these contaminated forests have regained full canopy cover, albeit with reduced chlorophyll content. The European Space Agency produces an operational product for the estimation of terrestrial chlorophyll content over large areas of terrain. This product uses data recorded by the Medium Resolution Imaging Spectrometer (MERIS) on Envisat and is called the MERIS Terrestrial Chlorophyll Index (MTCI). The relationship between historical levels of herbicide contamination and contemporary MTCI was strong (R = 0.86) and negative, with high levels of herbicide contamination being associated (via low levels of chlorophyll concentration) with low levels of MTCI. This is the first published study to demonstrate a relationship between MTCI and a surrogate for chlorophyll content. The next stage of this research is to build on the strength of this relationship and use contemporary MTCI to estimate historical herbicide levels during the 1960s and 1970s across southern Vietnam.
Tropical cyclones (hurricanes and typhoons) produce high winds that can generate waves capable of damaging coral reefs. As cyclones frequently pass through northeast Australia's Great Barrier Reef (GBR), it is important to understand how the spatial distribution of reef damage changes over time. However, direct measurements of wave damage, or even wave heights or wind speeds, are rare within the GBR. An important factor in estimating whether cyclone damage was possible is the magnitude and duration of high‐energy wind and waves. Thus, before the spatio‐temporal dynamics of past cyclone damage can be modelled, it is necessary to reconstruct the spread, intensity, and duration of high‐energy conditions during individual cyclones. This was done every hour along the track taken by each of 85 cyclones that passed near the GBR from 1969 to 2003, by implementing a cyclone wind hindcasting model directly within a raster GIS using cyclone data available from the Australian Bureau of Meteorology. Three measures of cyclone energy (maximum wind speed—MAX, duration of gales—GALES, and continuous duration of gales—CGALES) were derived from these data. For three cyclones, where field data documenting actual reef damage from cyclone‐generated waves were available, the predictive ability of each measure was assessed statistically. All three performed better in predicting reef damage at sites surveyed along the high‐energy reef front than those surveyed along the more protected reef back. MAX performed best for cyclone Joy (r = 0.5), while CGALES performed best for cyclones Ivor (r = 0.23) and Justin (r = 0.48). Using thresholds for MAX and GALES obtained via comparison with field data of damage, it was possible to produce a preliminary prediction of the risk of wave damage across the GBR from each of the 85 cyclones. 
The results suggest that while up to two‐thirds of the GBR was at risk from some damage for 30–50% of the time series (∼18 out of 35 years), only scattered areas of the region were at risk more frequently than that.
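The three energy measures are straightforward to derive from an hourly wind-speed series at a grid cell. The gale threshold value below is an assumption (gale force is conventionally about 34 knots, roughly 17 m/s); the paper's hindcasting model supplies the actual hourly winds.

```python
GALE_THRESHOLD_MS = 17.0  # gale-force wind, ~34 knots; assumed threshold

def cyclone_energy_measures(hourly_wind_ms):
    # Derive the three measures from an hourly wind-speed series:
    #   MAX    - maximum wind speed,
    #   GALES  - total hours at or above gale force,
    #   CGALES - longest continuous run of gale-force hours.
    max_wind = max(hourly_wind_ms)
    gales = sum(1 for w in hourly_wind_ms if w >= GALE_THRESHOLD_MS)
    cgales = run = 0
    for w in hourly_wind_ms:
        run = run + 1 if w >= GALE_THRESHOLD_MS else 0
        cgales = max(cgales, run)
    return max_wind, gales, cgales

winds = [12, 18, 20, 16, 19, 21, 22, 15]  # m/s, one value per hour
print(cyclone_energy_measures(winds))  # (22, 5, 3)
```

Separating GALES from CGALES distinguishes a cyclone that delivers many scattered gale hours from one that sustains damaging conditions without interruption, which is why the two measures rank reef-damage sites differently.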
China has experienced rapid urban expansion in recent decades, especially in coastal areas and big cities. Rapid urban expansion and dramatic changes in the landscape have consequently caused great economic, environmental and social impacts. It is therefore crucial to understand urban temporal and spatial expansion patterns and their related effects. In this paper, the urban expansion of Guangzhou, a rapidly growing city in south‐east China, from 1979 to 2003 is studied temporally and spatially. Four time ranges, 1979–1990, 1990–1995, 1995–2000 and 2000–2003, are defined, and the urban expansion area, expansion rate and spatial expansion pattern are examined using remote sensing data and Geographical Information System (GIS) tools. Two transects are laid out along the two axes of Guangzhou's expansion, and the structure of urban expansion patches at different orientations is compared in order to understand the urban expansion of Guangzhou quantitatively over the past 24 years. A gradient analysis integrating multi‐temporal data is performed to analyze and compare the spatial and temporal dynamics of urban expansion. Two indices, compactness and fractal dimension, are used to describe the urban development pattern in the study periods, and the influence of different types of roads on urban expansion is evaluated using GIS buffer analysis. The results show that: (1) temporally, the urban area of Guangzhou increased by 296.54 km², from 141.15 km² in 1979 to 437.70 km² in 2003, a growth of 210.08%; (2) spatially, Guangzhou expanded in different directions at different stages, with the general expansion directions being northeast, north, southeast and north in the four studied time ranges; (3) transportation lines played a very important role in the urban expansion of Guangzhou, but different types of road had different impacts, with national roads and highways exerting a stronger influence on urban expansion than provincial roads; and (4) the expansion of Guangzhou has gradually changed from a compact pattern to leapfrogging and disordered patterns.
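The two shape indices named above have widely used perimeter-area forms, sketched below. These particular formulas are common choices in landscape analysis and are assumptions here; the paper's exact definitions may differ.

```python
import math

def compactness(area, perimeter):
    # Compactness index: 1.0 for a circle, lower for elongated or
    # scattered urban forms (area and perimeter in consistent units).
    return 2.0 * math.sqrt(math.pi * area) / perimeter

def fractal_dimension(area_m2, perimeter_m):
    # Perimeter-area fractal dimension for a landscape patch:
    # close to 1 for simple shapes, approaching 2 for convoluted ones.
    return 2.0 * math.log(0.25 * perimeter_m) / math.log(area_m2)

# A compact 1 km x 1 km block vs. a leapfrogging 4 km x 0.25 km strip
print(round(compactness(1.0, 4.0), 3))   # square, ~0.886
print(round(compactness(1.0, 8.5), 3))   # strip of equal area, ~0.417
```

A shift from a compact to a leapfrogging pattern shows up as falling compactness and rising fractal dimension over the four time ranges, which is how the indices summarise the trend reported in result (4).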