Rates of benthic denitrification were measured using two techniques, membrane inlet mass spectrometry (MIMS) and isotope ratio mass spectrometry (IRMS), applied to sediment cores from two NO3--rich streams draining agricultural land in the upper Mississippi River Basin. Denitrification was estimated simultaneously from measurements of N2:Ar (MIMS) and 15N[N2] (IRMS) after the addition of low-level 15NO3- tracer (15N:N = 0.03-0.08) to stream water overlying intact sediment cores. Denitrification rates ranged from about 0 to 4400 micromol N x m(-2) x h(-1) in Sugar Creek and from 0 to 1300 micromol N x m(-2) x h(-1) in Iroquois River, which has a greater streamflow discharge and a more homogeneous streambed and water column. Within the uncertainties of the two techniques, the MIMS and IRMS results agree well, indicating that N2 production by coupled nitrification/denitrification was relatively unimportant and that surface-water NO3- was the dominant source of NO3- for benthic denitrification in these streams. Variation in stream NO3- concentration (from about 20 micromol/L during low discharge to 1000 micromol/L during high discharge) was a significant control on benthic denitrification rates, judging from the more abundant MIMS data. The interpretation that NO3- concentration directly affects denitrification rate was corroborated by increased rates of denitrification in cores amended with NO3-. Denitrification in Sugar Creek removed < or = 11% per day of the instream NO3- in late spring and roughly 15-20% per day in late summer. The fraction of NO3- removed in Iroquois River was smaller than in Sugar Creek. Although benthic denitrification rates were relatively high during periods of high stream flow, when NO3- concentrations were also high, the increase in benthic denitrification could not compensate for the much larger increase in stream NO3- fluxes during high flow. 
Consequently, fractional NO3- losses were relatively low during high flow.
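As a rough illustration of how an areal denitrification rate translates into a daily fractional NO3- loss, the sketch below compares a benthic removal flux to the NO3- standing stock of the overlying water column. All values (rate, depth, concentration) are assumed for illustration, not taken from the study, and the calculation ignores flow and residence time.

```python
# Back-of-envelope sketch: daily fractional NO3- removal from a static
# water column by benthic denitrification. All values are assumed.

rate = 1000.0  # benthic denitrification, micromol N m^-2 h^-1 (assumed)
depth = 0.5    # mean water depth, m (assumed)
no3 = 500.0    # stream NO3- concentration, micromol/L (assumed)

pool = no3 * depth * 1000.0  # NO3- standing stock, micromol N m^-2 (1000 L per m^3)
removed_daily = rate * 24.0  # micromol N m^-2 d^-1
fraction_per_day = removed_daily / pool

print(round(100 * fraction_per_day, 1))  # -> 9.6 (percent of the pool removed per day)
```

Under these assumed values the daily loss is on the order of 10%, the same order as the late-spring removal fractions reported for Sugar Creek; higher concentrations or deeper water lower the fraction, consistent with the low fractional losses during high flow.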
Intensively managed grain farms are saturated with large inputs of nitrogen (N) fertilizer, leading to N losses and environmental degradation. Despite decades of research directed toward reducing N losses from agroecosystems, progress has been minimal, and the currently promoted best management practices are not necessarily the most effective. We investigated the fate of N additions to temperate grain agroecosystems using a meta-analysis of 217 field-scale studies that followed the stable isotope 15N in crops and soil. We compared management practices that alter inorganic fertilizer additions, such as application timing or reduced N fertilizer rates, to practices that re-couple the biogeochemical cycles of carbon (C) and N, such as organic N sources and diversified crop rotations, and analyzed the following response variables: 15N recovery in crops, total recovery of 15N in crops and soil, and crop yield. A larger share of the literature reported crop recovery of 15N (94%) than total recovery of 15N in crops and soil (58%), even though total recovery is the more ecologically appropriate indicator for assessing N losses. Findings show wide differences in the ability of management practices to improve N use efficiency. Practices that aimed to increase crop uptake of commercial fertilizer had a lower impact on total 15N recovery (3-21% increase) than practices that re-coupled C and N cycling (30-42% increase). A majority of studies (66%) were only one growing season long, which poses a particular problem when organic N sources are used because crops recover N from these sources over several years. These short-term studies neglect significant ecological processes that occur over longer time scales. Field-scale mass balance calculations using the 15N data set show that, of 114 kg N x ha(-1) x yr(-1) applied, on average 43 kg N x ha(-1) x yr(-1) was unaccounted for at the end of one growing season, representing approximately 38% of the total 15N applied. 
This comprehensive assessment of stable-isotope research on agroecosystem N management can inform the development of policies to mitigate nonpoint source pollution. Nitrogen management practices that most effectively increase N retention are not currently being promoted and are rare on the landscape in the United States.
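The field-scale mass balance quoted above reduces to simple arithmetic; the sketch below reproduces the unaccounted-for fraction from the reported means.

```python
# Field-scale 15N mass balance using the meta-analysis means reported above.
applied = 114.0     # kg N ha^-1 yr^-1 of labeled fertilizer applied
unaccounted = 43.0  # kg N ha^-1 yr^-1 not recovered in crops + soil after one season

fraction_lost = unaccounted / applied
print(round(100 * fraction_lost))  # -> 38 (percent of applied 15N unaccounted for)
```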
Many studies have shown that intensive agricultural practices significantly increase the nitrogen concentration of stream surface waters, but it remains difficult to identify, quantify, and differentiate between terrestrial and in-stream sources or sinks of nitrogen, and rates of transformation. In this study we used the delta15N-NO3 signature in a watershed dominated by agriculture as an integrating marker to trace (1) the effects of land cover and agricultural practices on stream-water N concentration in the upstream area of the hydrographic network, (2) the influence of in-stream processes on the NO3-N loads at the reach scale (100 m and 1000 m long), and (3) the changes in delta15N-NO3 signature with increasing stream order (from first to third order). This study suggests that land cover and fertilization practices were the major determinants of the delta15N-NO3 signature in first-order streams. NO3-N loads and delta15N-NO3 signature increased with fertilization intensity. Small changes in delta15N-NO3 signature and minor inputs of groundwater were observed along both types of reaches, suggesting that the NO3-N load was only slightly influenced by in-stream processes. The variability of NO3-N concentrations and delta15N signature decreased with increasing stream order, and the delta15N signature was positively correlated with watershed area devoted to crops, supporting a dominant effect of agriculture compared to the effect of in-stream N processing. Consequently, land cover and fertilization practices are integrated in the natural isotopic signal at the third-order stream scale. GIS analysis of land cover coupled with the natural-abundance isotope signature (delta15N) represents a potential tool to evaluate the effects of agricultural practices in rural catchments and the consequences of future changes in management policies at the regional scale.
The isotopic signatures of 15N and 18O in N2O emitted from tropical soils vary both spatially and temporally, leading to large uncertainty in the overall tropical source signature and thereby limiting the utility of isotopes in constraining the global N2O budget. Determining the reasons for spatial and temporal variations in isotope signatures requires that we know the isotope enrichment factors for nitrification and denitrification, the two processes that produce N2O in soils. We have devised a method for measuring these enrichment factors using soil incubation experiments and report results from this method for three rain forest soils collected in the Brazilian Amazon: two soils of differing sand and clay content from the Tapajos National Forest (TNF) near Santarém, Pará, and one from Nova Vida Farm, Rondônia. The 15N enrichment factors for nitrification and denitrification differ with soil texture and site: -111 per thousand +/- 12 per thousand and -31 per thousand +/- 11 per thousand for a clay-rich Oxisol (TNF), -102 per thousand +/- 5 per thousand and -45 per thousand +/- 5 per thousand for a sandier Ultisol (TNF), and -10.4 per thousand +/- 3.5 per thousand (denitrification only) for another Ultisol (Nova Vida), respectively. We also show that the isotopomer site preference (delta15Nalpha - delta15Nbeta, where alpha indicates the central nitrogen atom and beta the terminal nitrogen atom in N2O) may allow differentiation between processes of production and consumption of N2O and can potentially be used to determine the contributions of nitrification and denitrification. The site preferences for nitrification and denitrification from the TNF-Ultisol incubated soils are 4.2 per thousand +/- 8.4 per thousand and 31.6 per thousand +/- 8.1 per thousand, respectively. Thus, nitrifying and denitrifying bacterial populations under the conditions of our study exhibit significantly different 15N site preference fingerprints. 
Our data set strongly suggests that N2O isotopomers can be used in concert with traditional N2O stable isotope measurements as constraints to differentiate microbial N2O processes in soil and will contribute to interpretations of the isotopic site preference N2O values found in the free troposphere.
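One common way to use site preference quantitatively is a two-end-member mixing calculation. A minimal sketch, using the TNF-Ultisol end members reported above and a purely hypothetical sample value:

```python
# Two-end-member partitioning using the site preference (SP = delta15Nalpha - delta15Nbeta).
# End-member SPs are the TNF-Ultisol values reported above; the sample SP is assumed.
sp_nit = 4.2      # per thousand, nitrification end member
sp_den = 31.6     # per thousand, denitrification end member
sp_sample = 18.0  # per thousand, hypothetical soil-emitted N2O (assumed)

f_den = (sp_sample - sp_nit) / (sp_den - sp_nit)  # fraction of N2O from denitrification
print(round(f_den, 2))  # -> 0.5
```

This linear mixing form assumes no N2O consumption, which the abstract notes would itself shift the site preference.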
The advent of molecular techniques has improved our understanding of the microbial communities responsible for denitrification and is beginning to address their role in controlling denitrification processes. There is a large diversity of bacteria, archaea, and fungi capable of denitrification, and their community composition is structured by long-term environmental drivers. The range of temperature and moisture conditions, substrate availability, competition, and disturbances have long-lasting legacies on denitrifier community structure. These communities may differ in physiology, environmental tolerances to pH and O2, growth rate, and enzyme kinetics. Although factors such as O2, pH, C availability, and NO3- pools affect instantaneous rates, these drivers act through the biotic community. This review summarizes the results of molecular investigations of denitrifier communities in natural environments and provides a framework for developing future research for addressing connections between denitrifier community structure and function.
Disturbances such as fire play a key role in controlling ecosystem structure. In fire-prone forests, organic detritus comprises a large pool of carbon and can control the frequency and intensity of fire. The ponderosa pine forests of the Colorado Front Range, USA, where fire has been suppressed for a century, provide an ideal system for studying the long-term dynamics of detrital pools. Our objectives were (1) to quantify the long-term temporal dynamics of detrital pools; and (2) to determine to what extent present stand structure, topography, and soils constrain these dynamics. We collected data on downed dead wood, litter, duff (partially decomposed litter on the forest floor), stand structure, topographic position, and soils for 31 sites along a 160-year chronosequence. We developed a compartment model and parameterized it to describe the temporal trends in the detrital pools. We then developed four sets of statistical models, quantifying the hypothesized relationship between pool size and (1) stand structure, (2) topography, (3) soils variables, and (4) time since fire. We contrasted how much support each hypothesis had in the data using Akaike's Information Criterion (AIC). Time since fire explained 39-80% of the variability in dead wood of different size classes. Pool size increased to a peak as material killed by the fire fell, then decomposed rapidly to a minimum (61-85 years after fire for the different pools). It then increased, presumably as new detritus was produced by the regenerating stand. Litter was most strongly related to canopy cover (r2 = 77%), suggesting that litter fall, rather than decomposition, controls its dynamics. The temporal dynamics of duff were the hardest to predict. Detrital pool sizes were more strongly related to time since fire than to environmental variables. Woody debris peak-to-minimum time was 46-67 years, overlapping the range of historical fire return intervals (1 to > 100 years). 
Fires may therefore have burned under a wide range of fuel conditions, supporting the hypothesis that this region's fire regime was mixed severity.
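The AIC-based contrast of hypotheses described above can be sketched with least-squares linear models on synthetic data. The data, model forms, and the least-squares AIC formula (n*ln(RSS/n) + 2k) are illustrative stand-ins, not the study's actual models.

```python
import numpy as np

# Sketch of AIC model comparison on synthetic data: a dead-wood pool driven by
# time since fire should favor the time-since-fire model over a canopy-cover model.
rng = np.random.default_rng(0)
n = 31  # matching the number of chronosequence sites
time_since_fire = rng.uniform(0, 160, n)
canopy = rng.uniform(0, 1, n)
dead_wood = 0.05 * time_since_fire + rng.normal(0, 1, n)  # synthetic response

def aic_ls(y, x):
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1  # coefficients plus the error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

aic_time = aic_ls(dead_wood, time_since_fire)
aic_canopy = aic_ls(dead_wood, canopy)
print(aic_time < aic_canopy)  # lower AIC = more support in the data
```

Both models have the same number of parameters here, so the comparison is decided by fit; with differing model sizes the 2k penalty trades fit against complexity.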
We offer a conceptual framework for managing forested ecosystems under an assumption that future environments will be different from present but that we cannot be certain about the specifics of change. We encourage flexible approaches that promote reversible and incremental steps, and that favor ongoing learning and capacity to modify direction as situations change. We suggest that no single solution fits all future challenges, especially in the context of changing climates, and that the best strategy is to mix different approaches for different situations. Resources managers will be challenged to integrate adaptation strategies (actions that help ecosystems accommodate changes adaptively) and mitigation strategies (actions that enable ecosystems to reduce anthropogenic influences on global climate) into overall plans. Adaptive strategies include resistance options (forestall impacts and protect highly valued resources), resilience options (improve the capacity of ecosystems to return to desired conditions after disturbance), and response options (facilitate transition of ecosystems from current to new conditions). Mitigation strategies include options to sequester carbon and reduce overall greenhouse gas emissions. Priority-setting approaches (e.g., triage), appropriate for rapidly changing conditions and for situations where needs are greater than available capacity to respond, will become increasingly important in the future.
Long-term fire exclusion has altered ecological function in many forested ecosystems in North America. The invasion of fire-sensitive tree species into formerly pyrogenic upland forests in the southeastern United States has resulted in dramatic shifts in surface fuels that have been hypothesized to cause reductions in plant community flammability. The mechanism for the reduced flammability or "mesophication" has lacked empirical study. Here we evaluate a potential mechanism of reduced flammability by quantifying moisture retention (response time and initial moisture capacity) of foliar litter beds from 17 southeastern tree species spanning a wide range of fire tolerance. A k-means cluster analysis resulted in four species groups: a rapidly drying cluster of eight species; a five-species group that absorbed little water but desorbed slowly; a two-species group that absorbed substantial moisture but desorbed rapidly; and a two-species cluster that absorbed substantial moisture and dried slowly. Fire-sensitive species were segregated into the slow moisture loss clusters, while fire-tolerant species tended to cluster in the rapid drying groups. Principal-components analysis indicated that several leaf characteristics correlated with absorption capacity and drying rates. Thin-leaved species with high surface area : volume absorbed the most moisture, while those with large, curling leaves had the fastest drying rates. The dramatic shifts in litter fuels as a result of invasion by fire-sensitive species generate a positive feedback that reduces the windows of ignition, thereby facilitating the survival, persistence, and continued invasion of fire-sensitive species in the uplands of the southeastern United States.
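The k-means grouping of species by litter moisture traits can be sketched with a minimal Lloyd's-algorithm implementation. The two trait axes (absorption capacity, drying rate) follow the abstract, but all trait values below are invented for illustration, and only two clusters are fitted rather than four.

```python
import numpy as np

# Minimal k-means (Lloyd's algorithm, numpy only) clustering species by two
# litter-moisture traits. Trait values are synthetic stand-ins.
rng = np.random.default_rng(1)
fast_driers = rng.normal([1.0, 4.0], 0.2, size=(8, 2))  # low absorption, fast drying
slow_driers = rng.normal([4.0, 1.0], 0.2, size=(9, 2))  # high absorption, slow drying
X = np.vstack([fast_driers, slow_driers])

centers = np.array([X[0], X[-1]])  # seed one center in each trait group
for _ in range(20):
    # assign each species to its nearest center, then recompute the centers
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == k].mean(0) for k in range(2)])

# each trait group should fall entirely into one cluster
print(len(set(labels[:8].tolist())), len(set(labels[8:].tolist())))  # -> 1 1
```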
Historical land use can influence forest species composition and structure for centuries after direct use has ceased. In Wisconsin, USA, Euro-American settlement in the mid- to late 1800s was accompanied by widespread logging, agricultural conversion, and fire suppression. To determine the maximum magnitude of change in forest ecosystems at the height of the agricultural period and the degree of recovery since that time, we assessed changes in forest species composition and structure among the (1) mid-1800s, at the onset of Euro-American settlement; (2) 1930s, at the height of the agricultural period; and (3) 2000s, following forest regrowth. Data sources included the original U.S. Public Land Survey records (mid-1800s), the Wisconsin Land Economic Inventory (1930s), and U.S. Forest Service Forest Inventory and Analysis data (2000s). We derived maps of relative species dominance and tree diameters for the three dates and assessed change using spatial error models, nonmetric multidimensional scaling ordination, and Sørenson distance measures. Our results suggest that since the mid-1800s, hemlock and white pine have declined in absolute area from 22% to 1%, and the proportion of medium (25-<50 cm) and large-diameter (> or = 50 cm) trees of all species has decreased from 71% to 27% across the entire state. Early-successional aspen-birch is three times more common than in the mid-1800s (9% vs. 3%), and maple and other shade-tolerant species are increasing in southern areas formerly dominated by oak forests and savannas. Since the peak agricultural extent in the 1930s, species composition and tree size in northern forests have shown some recovery, while southern forests appear to be on a novel trajectory of change. There is evidence of regional homogenization, but the broad north-south environmental gradient in Wisconsin constrains overall species composition. 
Although the nature of the future forests will be determined in part by climate change and other exogenous variables, land use is likely to remain the driving factor.
European settlement of North America has involved monumental environmental change. From the late 19th century to the present, agricultural practices in the Great Plains of the United States have dramatically reduced soil organic carbon (C) levels and increased greenhouse gas (GHG) fluxes in this region. This paper details the development of an innovative method to assess these processes. Detailed land-use data sets that specify complete agricultural histories for 21 representative Great Plains counties reflect historical changes in agricultural practices and drive the biogeochemical model, DAYCENT, to simulate 120 years of cropping and related ecosystem consequences. Model outputs include yields of all major crops, soil and system C levels, soil trace-gas fluxes (N2O emissions and CH4 consumption), and soil nitrogen mineralization rates. Comparisons between simulated and observed yields allowed us to adjust and refine model inputs, and then to verify and validate the results. These verification and validation exercises produced measures of model fit that indicated the appropriateness of this approach for estimating historical changes in crop yield. Initial cultivation of native grass and continued farming produced a significant loss of soil C over decades, and declining soil fertility led to reduced crop yields. This process was accompanied by a large GHG release, which subsided as soil fertility decreased. Later, irrigation, nitrogen-fertilizer application, and reduced cultivation intensity restored soil fertility and increased crop yields, but led to increased N2O emissions that reversed the decline in net GHG release. By drawing on both historical evidence of land-use change and scientific models that estimate the environmental consequences of those changes, this paper offers an improved way to understand the short- and long-term ecosystem effects of 120 years of cropping in the Great Plains.
Pollen of forest trees can move on scales of tens to hundreds of kilometers, but the question of its viability during this long-distance dispersal (LDD) has yet to be answered. Empirical studies of pollen viability in forest tree species are rare, and controlled data on the contribution of UV irradiation to pollen viability that can be scaled to outdoor conditions have not been available. A simple protocol that allows the quantification of the viability response of pollen to UV, temperature, and humidity is developed and described here. Bench-scale conditions that approximate a wide range of atmospheric conditions, including different humidity, temperature, and UV irradiation regimes, are used to determine the independent effects of each abiotic stress factor, and empirical functions are fitted and used to scale these bench-scale experiments to outdoor conditions. As a case study, pollen was sampled from two populations of Pinus taeda during two years and used to quantify the decrease in viability due to atmospheric conditions during LDD. In contrast to maize pollen, P. taeda pollen viability decreased under humid and cold conditions. The viability response of pollen to UV-A and UV-B corresponded to a viability reduction of about 10% after a full day of exposure. These laboratory findings were corroborated by an outdoor solar exposure experiment. The Fu-Liou online radiation model and a data set of radiosonde observations were used to estimate the typical conditions that would be encountered by LDD pollen. If initially caught in a strong updraft, dispersing P. taeda pollen could be carried for many days and thousands of kilometers in the air. The empirical equations for P. taeda pollen viability reduction due to abiotic stresses predicted that 50% of the pollen would survive 24 hours of LDD under typical external conditions. The viable range of the pollen is, therefore, shorter than the physical dispersal distance. 
The methods used in our experiments are applicable for determination of dispersing pollen viability, especially when effects of different adverse conditions need to be separated. The empirical viability equations that resulted from our experiments can be used in an atmospheric dispersal model to estimate the viable range of tree pollen.
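The combination of independent abiotic stress factors into a single survival fraction can be sketched as multiplicative (first-order) decay. The first-order form and both rate constants below are assumptions chosen so that the combined 24-hour survival lands near the 50% figure quoted above; they are not the fitted empirical functions from the study.

```python
import math

# Sketch: independent first-order losses multiply, i.e., rate constants add.
hours = 24.0
k_uv = 0.10 / 24.0  # ~10% viability loss per full day of UV exposure (assumed first order)
k_other = 0.025     # combined humidity/temperature loss rate, h^-1 (assumed)

survival = math.exp(-(k_uv + k_other) * hours)
print(round(survival, 2))  # -> 0.5 (fraction still viable after 24 h of LDD)
```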
The purpose of this paper is to quantify climatic controls on the area burned by fire in different vegetation types in the western United States. We demonstrate that wildfire area burned (WFAB) in the American West was controlled by climate during the 20th century (1916-2003). Persistent ecosystem-specific correlations between climate and WFAB are grouped by vegetation type (ecoprovinces). Most mountainous ecoprovinces exhibit strong year-of-fire relationships with low precipitation, low Palmer drought severity index (PDSI), and high temperature. Grass- and shrub-dominated ecoprovinces had positive relationships with antecedent precipitation or PDSI. For 1977-2003, a few climate variables explain 33-87% (mean = 64%) of WFAB, indicating strong linkages between climate and area burned. For 1916-2003, the relationships are weaker, but climate explained 25-57% (mean = 39%) of the variability. The variance in WFAB is proportional to the mean squared for different data sets at different spatial scales. The importance of antecedent climate (summer drought in forested ecosystems and antecedent winter precipitation in shrub and grassland ecosystems) indicates that the mechanism behind the observed fire-climate relationships is climatic preconditioning of large areas of low fuel moisture via drying of existing fuels or fuel production and drying. The impacts of climate change on fire regimes will therefore vary with the relative energy or water limitations of ecosystems. Ecoprovinces proved a useful compromise between ecologically imprecise state-level and localized gridded fire data. The differences in climate-fire relationships among the ecoprovinces underscore the need to consider ecological context (vegetation, fuels, and seasonal climate) to identify specific climate drivers of WFAB. Despite the possible influence of fire suppression, exclusion, and fuel treatment, WFAB is still substantially controlled by climate. 
The implications for planning and management are that future WFAB and adaptation to climate change will likely depend on ecosystem-specific, seasonal variation in climate. In fuel-limited ecosystems, fuel treatments can probably mitigate fire vulnerability and increase resilience more readily than in climate-limited ecosystems, in which large severe fires under extreme weather conditions will continue to account for most area burned.
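The observation that the variance in WFAB is proportional to the mean squared is a Taylor power law with exponent 2. The sketch below shows, on synthetic data, why that exponent appears whenever data sets differ mainly in scale: rescaled copies of a single burned-area record produce a log-log slope of variance against mean of exactly 2.

```python
import numpy as np

# Taylor power law sketch: for rescaled copies of one record, var = c^2 * var0
# and mean = c * mean0, so log(var) vs log(mean) has slope 2. Data are synthetic.
rng = np.random.default_rng(2)
base = rng.lognormal(0, 1, 100)            # one "ecoprovince" burned-area record
scales = np.array([1.0, 3.0, 10.0, 30.0])  # regions differing only in mean size

means = np.array([(c * base).mean() for c in scales])
variances = np.array([(c * base).var() for c in scales])
slope = np.polyfit(np.log(means), np.log(variances), 1)[0]
print(round(slope, 6))  # -> 2.0
```

Real fire records differ in more than scale, so the empirical exponent is a finding, not an identity; the sketch only shows the pure-rescaling limit.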
Roads remove habitat, alter adjacent areas, and interrupt and redirect ecological flows. They subdivide wildlife populations, foster invasive species spread, change the hydrologic network, and increase human use of adjacent areas. At broad scales, these impacts cumulate and define landscape patterns. The goal of this study was to improve our understanding of the dynamics of road networks over time, and their effects on landscape patterns, and identify significant relationships between road changes and other land-use changes. We mapped roads from aerial photographs from five dates between 1937 and 1999 in 17 townships in predominantly forested landscapes in northern Wisconsin, U.S.A. Patch-level landscape metrics were calculated on terrestrial area outside of a 15-m road-effect zone. We used generalized least-squares regression models to relate changes in road density and landscape pattern to concurrent changes in housing density. Rates of change and relationships were compared among three ecological regions. Our results showed substantial increases in both road density and landscape fragmentation during the study period. Road density more than doubled, and median, mean, and largest patch size were reduced by a factor of four, while patch shape became more regular. Increases in road density varied significantly among ecological subsections and were positively related to increases in housing density. Fragmentation was largely driven by increases in road density, but housing density had a significantly positive relationship with largest patch area and patch shape. Without protection of roadless areas, our results suggest road development is likely to continue in the future, even in areas where road construction is constrained by the physical environment. Recognizing the dynamic nature of road networks is important for understanding and predicting their ecological impacts over time and understanding where other types of development are likely to occur in the future. 
Historical perspectives of development can provide guidance in prioritizing management efforts to defragment landscapes and mitigate the ecological impacts of past road development.
Rural America is experiencing widespread housing development, to the detriment of the environment. It has been suggested that houses be clustered so that their disturbance zones overlap, causing less habitat loss than dispersed development. Clustering houses makes intuitive sense, but few empirical studies have quantified the spatial pattern of houses in real landscapes, assessed changes in their patterns over time, and quantified the resulting habitat loss. We addressed three basic questions: (1) What are the spatial patterns of houses, and how do they change over time? (2) How much habitat is lost due to houses, and how is this affected by the spatial pattern of houses? (3) What type of habitat is most affected by housing development? We mapped 27 419 houses from aerial photos for five time periods in 17 townships in northern Wisconsin and calculated the terrestrial land area remaining after buffering each house with 100- and 500-m disturbance zones. The number of houses increased by 353% between 1937 and 1999. Ripley's K test showed that houses were significantly clustered at all time periods and at all scales. Due to the clustering, the growth in habitat loss (176% and 55% for 100- and 500-m buffers, respectively) was substantially lower than the growth in housing, and most land area remained undisturbed (95% and 61% for 100- and 500-m buffers, respectively). Houses were strongly clustered within 100 m of lakes. Habitat loss was lowest in wetlands but reached up to 60% in deciduous forests. Our results are encouraging in that clustered development is common in northern Wisconsin, and habitat loss is thus limited. However, the concentration of development along lakeshores causes concern, because lakeshores may be critical habitats for many species. Conservation goals can only be met if policies promote clustered development and simultaneously steer development away from sensitive ecosystems.
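The buffering logic above, in which overlapping disturbance zones of clustered houses consume less habitat than the same number of dispersed houses, can be sketched by rasterizing a disturbance disk around each house and measuring the disturbed fraction. The landscape size, house count, disturbance radius, and layouts below are all invented for illustration.

```python
import numpy as np

# Sketch of the buffering analysis: same number of houses, clustered vs dispersed;
# clustering overlaps the disturbance zones and so disturbs less total area.
def disturbed_fraction(houses, size=200, radius=20):
    yy, xx = np.mgrid[0:size, 0:size]
    disturbed = np.zeros((size, size), bool)
    for x, y in houses:
        disturbed |= (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    return disturbed.mean()  # fraction of landscape inside a disturbance zone

rng = np.random.default_rng(3)
dispersed = rng.uniform(0, 200, size=(30, 2))   # houses spread over the landscape
clustered = rng.normal(100, 15, size=(30, 2))   # same count, tightly grouped

print(disturbed_fraction(clustered) < disturbed_fraction(dispersed))  # -> True
```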
Considerable efforts have been made to assess the contribution of forest and grassland ecosystems to the global carbon budget, while less attention has been paid to agriculture. Net primary production (NPP) of Chinese croplands and driving factors are seldom taken into account in the regional carbon budget. We studied crop NPP by analyzing the documented crop yields from 1950 to 1999 on a provincial scale. Total NPP, including estimates of the aboveground and belowground components, was calculated from harvested yield data by (1) conversion from economic yield of the crop to aboveground mass using the ratio of aboveground residue production to the economic yield, (2) estimation of belowground mass as a function of aboveground mass, and (3) conversion from total dry mass to carbon mass. This approach was applied to 13 crops, representing 86.8% of the total harvested acreage of crops in China. Our results indicated that NPP in Chinese croplands increased markedly during this period. Averaging for each decade, the amount of NPP was 146 +/- 32, 159 +/- 34, 260 +/- 55, 394 +/- 85, and 513 +/- 111 Tg C/yr (mean +/- SD) in the 1950s, 1960s, 1970s, 1980s, and 1990s, respectively. This increase may be attributed to synthetic fertilizer application. A further investigation indicated that the climate parameters of temperature and precipitation determined the spatial variability in NPP. Spatiotemporal variability in NPP can be well described by the consumption of synthetic fertilizer and by climate parameters. In addition, the total amount of residue C and root C retained by the soils was estimated to be 618 Tg, with a range from 300 to 1040 Tg over the 50 years.
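The three-step yield-to-NPP conversion described above is a simple chain of multiplications. The sketch below follows those steps with illustrative parameter values; the residue:yield ratio, root:shoot ratio, and carbon fraction are assumptions, not the crop-specific coefficients used in the study.

```python
# Yield-to-NPP conversion chain, with illustrative (assumed) parameter values.
yield_dm = 6.0          # harvested economic yield, Mg dry matter/ha (assumed)
residue_ratio = 1.1     # aboveground residue per unit economic yield (assumed)
root_to_shoot = 0.25    # belowground mass as a fraction of aboveground mass (assumed)
carbon_fraction = 0.45  # carbon content of dry matter (assumed)

aboveground = yield_dm * (1 + residue_ratio)           # step 1: yield -> aboveground mass
belowground = aboveground * root_to_shoot              # step 2: aboveground -> belowground
npp_c = (aboveground + belowground) * carbon_fraction  # step 3: total dry mass -> carbon

print(round(npp_c, 3))  # total NPP, Mg C/ha
```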
Abundance-occupancy (A-O) patterns were explored temporally and spatially for the Georges Bank finfish and shellfish community to evaluate long-term trends in the assemblage structure and to identify anthropogenic and environmental drivers impacting the ecosystem. Analyses were conducted for 32 species representing the assemblage from 1963 to 2006 using data from the National Marine Fisheries Service's annual autumn bottom trawl survey. For individual species, occupancy was considered the proportion of stations with at least one individual present, and abundance was estimated as the mean annual number of fish captured per station. Intraspecific relationships were estimated to provide information on utilization of space by a species. Multispecies interspecific relationships over all species for each year were fitted to estimate assemblage structural changes over the time series. Results indicated that the slopes and strengths of interspecific A-O relationships significantly declined over the duration of the time series, and this decline was significantly related to groundfish landings. However, the rate of decline was not constant, and a breakpoint analysis of interspecific slopes indicated that 1973 was a period of "state" change. More importantly, a jackknife-after-bootstrap analysis indicated that the early 1970s, followed by the 1990s, were periods of higher than average probability of significant break points. While it is difficult to determine causation, the results suggest that long-term impacts such as habitat fragmentation may be influencing the species assemblage structure in the Georges Bank ecosystem. Further, we used slopes from the intraspecific A-O relationships to derive a measure of a species' potential risk of hyperstability, where catch rates remain high as the population declines. 
Combining this measure of the risk of hyperstability with resilience to exploitation provided a means to rank species risk of decline due to both demographics and the interaction of the behaviors of the species and fishing fleets.
Polar bears (Ursus maritimus) of the northern Beaufort Sea (NB) population occur on the perimeter of the polar basin adjacent to the northwestern islands of the Canadian Arctic Archipelago. Sea ice converges on the islands through most of the year. We used open-population capture-recapture models to estimate population size and vital rates of polar bears between 1971 and 2006 to: (1) assess relationships between survival, sex and age, and time period; (2) evaluate the long-term importance of sea ice quality and availability in relation to climate warming; and (3) note future management and conservation concerns. The highest-ranking models suggested that survival of polar bears varied by age class and with changes in the sea ice habitat. Model-averaged estimates of survival (which include harvest mortality) ranged from 0.37 to 0.62 for senescent adults, from 0.22 to 0.68 for cubs of the year (COY) and yearlings, and from 0.77 to 0.92 for 2-4 year-olds and adults. Horvitz-Thompson (HT) estimates of population size were not significantly different among the decades of our study. The population size estimated for the 2000s was 980 +/- 155 (mean and 95% CI). These estimates apply primarily to that segment of the NB population residing west and south of Banks Island. The NB polar bear population appears to have been stable or possibly increasing slightly during the period of our study. This suggests that ice conditions have remained suitable and similar for feeding in summer and fall during most years and that the traditional and legal Inuvialuit harvest has not exceeded sustainable levels. However, the amount of ice remaining in the study area at the end of summer, and the proportion that continues to lie over the biologically productive continental shelf (< 300 m water depth), has declined over the 35-year period of this study. If the climate continues to warm as projected, we predict that the polar bear population in the northern Beaufort Sea will eventually decline. 
Management and conservation practices for polar bears in relation to both aboriginal harvesting and offshore industrial activity will need to adapt.
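The Horvitz-Thompson population estimates mentioned above take a standard form: each captured animal is weighted by the inverse of its estimated capture probability. A minimal sketch follows; the capture probabilities here are invented for illustration, whereas the study's probabilities came from fitted open-population capture-recapture models.

```python
def horvitz_thompson(capture_probs):
    """Horvitz-Thompson abundance estimate: N-hat = sum of 1/p_i
    over the animals actually captured, where p_i is each animal's
    estimated capture probability."""
    return sum(1.0 / p for p in capture_probs)

# Illustrative only: 100 captured bears, each with an estimated 50%
# capture probability, yield a population estimate of 200.
estimate = horvitz_thompson([0.5] * 100)
```

In practice the p_i come from the fitted model, and the variance of the estimate must propagate the uncertainty in those fitted probabilities.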
In the boreal forest of North America, as in any fire-prone biome, three environmental factors must coincide for a wildfire to occur: an ignition source, flammable vegetation, and weather that is conducive to fire. Despite recent advances, the relative importance of these factors remains the subject of some debate. The aim of this study was to develop models that identify the environmental controls on spatial patterns in area burned for the period 1980-2005 at several spatial scales in the Canadian boreal forest. Boosted regression tree models were built to relate high-resolution data for area burned to an array of explanatory variables describing ignitions, vegetation, and long-term patterns in fire-conducive weather (i.e., fire climate) at four spatial scales (10(2) km2, 10(3) km2, 10(4) km2, and 10(5) km2). We evaluated the relative contributions of these controls on area burned, as well as their functional relationships, across spatial scales. We also assessed geographic patterns of the influence of wildfire controls. The results indicated that extreme temperature during the fire season was a top control at all spatial scales, followed closely by a wind-driven index of ease of fire spread. However, the contributions of some variables differed substantially among the spatial scales, as did their relationship to area burned. In fact, for some key variables the polarity of relationships was inverted from the finest to the broadest spatial scale. It was difficult to unequivocally attribute values of relative importance to the variables chosen to represent ignitions, vegetation, and climate, as the interdependence of these factors precluded clear partitioning. Furthermore, the influence of a variable on patterns of area burned often changed enormously across the biome, which supports the idea that fire-environment relationships in the boreal forest are complex and nonstationary.
To better understand agricultural carbon fluxes in California, USA, we estimated changes in soil carbon and woody material between 1980 and 2000 on 3.6 x 10(6) ha of farmland in California. Combining the CASA (Carnegie-Ames-Stanford Approach) model with data on harvest indices and yields, we calculated net primary production, woody production in orchard and vineyard crops, and soil carbon. Over the 21-yr period, two trends resulted in carbon sequestration. Yields increased an average of 20%, corresponding to greater plant biomass and more carbon returned to the soils. Also, orchards and vineyards increased in area from 0.7 x 10(6) ha to 1.0 x 10(6) ha, displacing field crops and sequestering woody carbon. Our model estimates that California's agriculture sequestered an average of 19 g C x m(-2) x yr(-1). Sequestration was lowest in non-rice annual cropland, which sequestered 9 g C x m(-2) x yr(-1) of soil carbon, and highest on land that switched from annual cropland to perennial cropland. Land that switched from annual crops to vineyards sequestered 68 g C x m(-2) x yr(-1), and land that switched from annual crops to orchards sequestered 85 g C x m(-2) x yr(-1). Rice fields, because of a reduction in field burning, sequestered 55 g C x m(-2) x yr(-1) in the 1990s. Over the 21 years, California's 3.6 x 10(6) ha of agricultural land sequestered 11.0 Tg C within soils and 3.5 Tg C in woody biomass, for a total of 14.5 Tg C statewide. This is equal to 0.7% of the state's total fossil fuel emissions over the same time period. If California's agriculture adopted conservation tillage, changed management of almond and walnut prunings, and used all of its orchard and vineyard waste wood in the biomass power plants in the state, California's agriculture could offset up to 1.6% of the fossil fuel emissions in the state.
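The statewide totals reported above are internally consistent: the mean sequestration rate multiplied by the farmland area and the study duration reproduces the reported 14.5 Tg. A quick unit-conversion check (no model assumptions, only the figures quoted in the abstract):

```python
# Consistency check on the reported statewide carbon totals.
area_ha = 3.6e6          # California farmland area, ha
m2_per_ha = 1.0e4        # square meters per hectare
rate = 19.0              # mean sequestration, g C per m2 per yr
years = 21               # 1980-2000 inclusive

total_tg = rate * area_ha * m2_per_ha * years / 1e12  # g -> Tg
# ~14.4 Tg, matching the reported 11.0 Tg (soil) + 3.5 Tg (woody)
# = 14.5 Tg within rounding.
```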
In the 1930s, after only three years of scientific investigation at the University of Michigan Institute for Fisheries Research, cheap labor and government-sponsored conservation projects spearheaded by the Civilian Conservation Corps allowed the widespread adoption of in-stream structures throughout the United States. From the 1940s through the 1970s, designs of in-stream structures remained essentially unchanged, and their use continued. Despite a large investment in the construction of in-stream structures over these four decades, very few studies were undertaken to evaluate the impacts of the structures on the channel and its aquatic populations. The studies that were undertaken to evaluate the impact of the structures were often flawed. The use of habitat structures became an "accepted practice," however, and early evaluation studies were used as proof that the structures were beneficial to aquatic organisms. A review of the literature reveals that, despite published claims to the contrary, little evidence of the successful use of in-stream structures to improve fish populations exists prior to 1980. A total of 79 publications were checked, and 215 statistical analyses were performed. Only seven analyses provide evidence for a benefit of structures on fish populations, and five of these analyses are suspect because data were misclassified by the original authors. Many of the changes in population measures reported in early publications appear to result from changes in fishing pressure that often accompanied channel modifications. Modern evaluations of channel-restoration projects must consider the influence of fishing pressure to ensure that efforts to improve fish habitat achieve the benefits intended. My statistical results show that the traditional use of in-stream structures for channel restoration design does not ensure demonstrable benefits for fish communities, and their ability to increase fish populations should not be presumed.
Regime shifts are a feature of many ecosystems. During the last 40 years, intensive commercial exploitation and environmental changes have driven substantial shifts in ecosystem structure and function in the northwest Atlantic. In the Georges Bank-southern New England region, commercially important species have declined, and the ecosystem shifted to one dominated by economically undesirable species such as skates and dogfish. Aggregated abundance indices indicate a large increase of small and medium-sized elasmobranchs in the early 1980s following the decline of many commercial species. It has been hypothesized that ecological interactions such as competition and predation within the Georges Bank region were responsible for and are maintaining the "elasmobranch outburst" at the heart of the observed ecosystem shift. We offer an alternative hypothesis invoking population connectivity among winter skate populations such that the observed abundance increase is a result of migratory dynamics, perhaps with the Scotian Shelf (i.e., it is an open population). Here we critically evaluate the survey data for winter skate, the species principally responsible for the increase in total skate abundance during the 1980s on Georges Bank, to assess support for both hypotheses. We show that time series from different surveys within the Georges Bank region exhibit low coherence, indicating that a widespread population increase was not consistently shown by all surveys. Further, we argue that observed length-frequency data for Georges Bank indicate biologically unrealistic population fluctuations if the population is closed. Neither finding supports the elasmobranch outburst hypothesis. In contrast, survey time series for Georges Bank and the Scotian Shelf are negatively correlated, in support of the population connectivity hypothesis. 
Further, we argue that understanding the mechanisms of ecosystem state changes and population connectivity are needed to make inferences about both the causes and appropriate management responses to large-scale system change.
Increasing volumes of treated and untreated human sewage discharged into rivers around the world are likely to be leading to high aquatic concentrations of toxic, unionized ammonia (NH3), with negative impacts on species and ecosystems. Tools and approaches are needed for assessing the dynamics of NH3. This paper describes a modeling approach for first-order assessment of potential NH3 toxicity in urban rivers. In this study, daily dissolved NH3 concentrations in the Rio Grande of central New Mexico, USA, at the city of Albuquerque's treated sewage outfall were modeled for 1989-2002. Data for ammonium (NH4+) concentrations in the sewage and data for discharge, temperature, and pH for both sewage effluent and the river were used. We used State of New Mexico acute and chronic NH3-N concentration values (0.30 and 0.05 mg/L NH3-N, respectively) and other reported standards as benchmarks for determining NH3 toxicity in the river and for assessing potential impact on population dynamics for fish species. A critical species of concern is the Rio Grande silvery minnow (Hybognathus amarus), an endangered species in the river near Albuquerque. Results show that NH3 concentrations matched or exceeded acute levels 13%, 3%, and 4% of the time in 1989, 1991, and 1992, respectively. Modeled NH3 concentrations matched or exceeded chronic values 97%, 74%, 78%, and 11% of the time in 1989, 1991, 1992, and 1997, respectively. Exceedances ranged from 0% to 1% in later years after enhancements to the wastewater treatment plant. Modeled NH3 concentrations may differ from actual concentrations because of NH3 and NH4+ loss terms and additive terms such as mixing processes, volatilization, nitrification, sorption, and NH4+ uptake. We conclude that NH3 toxicity must be considered seriously for its potential ecological impacts on the Rio Grande and as a mechanism contributing to the decline of the Rio Grande fish community in general and the Rio Grande silvery minnow specifically. 
Conclusions drawn for the Rio Grande suggest that NH3 concentrations may be high in rivers around the world where alkaline pH values are prevalent and sewage treatment capabilities are poorly developed or absent.
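The dependence of NH3 toxicity on pH and temperature noted above follows from the NH4+/NH3 acid-base equilibrium: the unionized fraction rises steeply in alkaline, warm water. A minimal sketch of the standard calculation, using the widely cited Emerson et al. (1975) approximation for the dissociation constant; this illustrates the chemistry, not the authors' specific river model, and it neglects ionic-strength effects.

```python
import math  # not strictly needed; powers of 10 suffice here

def unionized_nh3_fraction(ph, temp_c):
    """Fraction of total ammonia present as toxic, unionized NH3.

    pKa from the Emerson et al. (1975) freshwater approximation;
    ionic-strength corrections are omitted for simplicity.
    """
    pka = 0.09018 + 2729.92 / (273.15 + temp_c)
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

def nh3_n_mg_per_l(total_ammonia_n, ph, temp_c):
    """Unionized NH3-N (mg/L) from total ammonia-N (mg/L)."""
    return total_ammonia_n * unionized_nh3_fraction(ph, temp_c)

# At pH 9.0 and 25 degrees C, roughly a third of total ammonia is
# present as NH3, versus only a few percent at pH 8.0.
```

This is why the final sentence singles out rivers with prevalent alkaline pH: a one-unit pH increase raises the unionized fraction by nearly an order of magnitude in the sub-pKa range.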
The 1989 Exxon Valdez oil spill caused significant injury to wildlife populations in Prince William Sound, Alaska, USA. Harlequin Ducks (Histrionicus histrionicus) were particularly vulnerable to the spill and have been studied extensively since, leading to one of the most thorough considerations of the consequences of a major oil spill ever undertaken. We compiled demographic and survey data collected since the spill to evaluate the timing and extent of mortality using a population model. During the immediate aftermath of the spill, we estimated a 25% decrease in Harlequin Duck numbers in oiled areas. Survival rates remained depressed in oiled areas 6-9 years after the spill and did not equal those from unoiled areas until at least 11-14 years later. Despite a high degree of site fidelity to wintering sites, immigration was important for recovery dynamics, as the relatively large number of birds from habitats outside the spill zone provided a pool of individuals to facilitate numerical increases. On the basis of these model inputs and assumptions about fecundity rates for the species, we projected a timeline to recovery of 24 years under the most-likely combination of variables, with a range of 16 to 32 years for the best-case and worst-case scenarios, respectively. Our results corroborate assertions from other studies that the effects of spilled oil on wildlife can be expressed over much longer time frames than previously assumed and that the cumulative mortality associated with chronic exposure to residual oil may actually exceed acute mortality, which has been the primary concern following most oil spills.
Forest fires are influenced by weather, fuels, and topography, but the relative influence of these factors may vary in different forest types. Compositional analysis can be used to assess the relative importance of fuels and weather in the boreal forest. Do wildland fires preferentially burn more flammable fuels, or, because most large fires burn under extreme weather conditions, do they burn fuels in the proportions in which those fuels are available, despite differences in flammability? In the Canadian boreal forest, aspen (Populus tremuloides) has been found to burn in less than the proportion in which it is available. We used the province of Ontario's Provincial Fuels Database and fire records provided by the Ontario Ministry of Natural Resources to compare the fuel composition of the area burned by 594 large (>40 ha) fires that occurred in Ontario's boreal forest region, a study area some 430,000 km2 in size, between 1996 and 2006, with the fuel composition of the neighborhoods around the fires. We found that, over the range of fire weather conditions in which large fires burned and in a study area with 8% aspen, fires burn fuels in the proportions that they are available, a result consistent with the dominance of weather in controlling large fires.
Multiple stressors to a shallow lake ecosystem have the ability to control the relative stability of alternative states (clear, macrophyte-dominated or turbid, algal-dominated). As a consequence, the use of remedial biomanipulations to induce trophic cascades and shift a turbid lake to a clear state is often only a temporary solution. Here we show that the instability of short-term manipulations in shallow Lake Christina (Minnesota, USA) is governed by the long-term state following a regime shift in the lake. During the modern, managed period of the lake, three top-down manipulations (fish kills) were undertaken, inducing temporary (5-10 years) unstable clear-water states. Paleoecological remains of diatoms, along with proxies of primary production (total chlorophyll a and total organic carbon accumulation rate) and trophic state (total P) from sediment records, clearly show a single regime shift in the lake during the early 1950s; following this shift, the functioning of the lake ecosystem is dominated by a persistent turbid state. We find that multiple stressors contributed to the regime shift. First, the lake began to eutrophy (from agricultural land use and/or increased waterfowl populations), leading to a dramatic increase in primary production. Soon after, the construction of a dam in 1936 effectively doubled the depth of the lake, compounded by increases in regional humidity; this resulted in an increase in planktivorous and benthivorous fish, reducing phytoplankton grazers. These factors further conspired to increase the stability of a turbid regime during the modern managed period, such that switches to a clear-water state were inherently unstable and the lake consistently returned to a turbid state. We conclude that while top-down manipulations have had measurable impacts on the lake state, they have not been effective in providing a return to an ecosystem similar to the stable historical period. 
Our work offers an example of a well-studied ecosystem forced by multiple stressors into a new long-term managed period, where manipulated clear-water states are temporary, managed features.
Assessing potential future changes in arctic and boreal plant species productivity, ecosystem composition, and canopy complexity is essential for understanding environmental responses under expected altered climate forcing. We examined potential changes in the dominant plant functional types (PFTs) of the sedge tundra, shrub tundra, and boreal forest ecosystems in ecotonal northern Alaska, USA, for the years 2003-2100. We compared energy feedbacks associated with increases in biomass to energy feedbacks associated with changes in the duration of the snow-free season. We based our simulations on nine input climate scenarios from the Intergovernmental Panel on Climate Change (IPCC) and a new version of the Terrestrial Ecosystem Model (TEM) that incorporates biogeochemistry, vegetation dynamics for multiple PFTs (e.g., trees, shrubs, grasses, sedges, mosses), multiple vegetation pools, and soil thermal regimes. We found mean increases in net primary productivity (NPP) in all PFTs. Most notably, birch (Betula spp.) in the shrub tundra showed increases that were at least three times larger than any other PFT. Increases in NPP were positively related to increases in growing-season length in the sedge tundra, but PFTs in boreal forest and shrub tundra showed a significant response to changes in light availability as well as growing-season length. Significant NPP responses to changes in vegetation uptake of nitrogen by PFT indicated that some PFTs were better competitors for nitrogen than other PFTs. While NPP increased, heterotrophic respiration (RH) also increased, resulting in decreases or no change in net ecosystem carbon uptake. Greater aboveground biomass from increased NPP produced a decrease in summer albedo, greater regional heat absorption (0.34 +/- 0.23 W x m(-2) x 10 yr(-1) [mean +/- SD]), and a positive feedback to climate warming. 
However, the decrease in albedo due to a shorter snow season (-5.1 +/- 1.6 d/10 yr) resulted in much greater regional heat absorption (3.3 +/- 1.24 W x m(-2) x 10 yr(-1)) than that associated with increases in vegetation. Through quantifying feedbacks associated with changes in vegetation and those associated with changes in the snow season length, we can reach a more integrated understanding of the manner in which climate change may impact interactions between high-latitude ecosystems and the climate system.
Correcting the problems in the model of A. petiolata presented in Pardini et al. (2009) changes its dynamics and thus the management recommendations. As with any model, our revised model's management predictions are conditional on model parameterization. Thus, managers should carefully consider at what spatial scales it is appropriate to infer management recommendations given the data used to build the model (e.g., is a management plan developed from a population in Missouri equally relevant to populations in Georgia, Maine, and Oregon?). In agreement with PDCK's conclusions, we found their A. petiolata study population to exhibit complex dynamics (two-point cycling) at lower efficacies of either rosette or adult management, and stable equilibria at higher management efficacies. This could have important implications for A. petiolata management techniques such as biological control if the biocontrol agents' population dynamics are dependent on A. petiolata density. While the predictions generated in our reanalysis represent an improvement over the original model, they should be tempered by the limited scope of the data used to parameterize the model. Running the model through previously published parameter ranges results in qualitatively different dynamics than those predicted in PDCK. Because of the tremendous spatiotemporal variability in A. petiolata demographic rates and the species' large geographical range, more general management recommendations will only arise from a larger set of demographic data that has greater coverage in space and time. Our revision of the model of Pardini et al. (2009) should therefore be considered as a subset of many possible models of A. petiolata population dynamics.
Meeting future biofuel targets set by the 2007 Energy Independence and Security Act (EISA) will require a substantial increase in production of corn. The Midwest, which has the highest overall crop production capacity, is likely to bear the brunt of the biofuel-driven changes. In this paper, we set forth a method for developing a possible future landscape and evaluate changes in practices and production between base year (BY) 2001 and biofuel target (BT) 2020. In our BT 2020 Midwest landscape, a total of 25 million acres (1 acre = 0.40 ha) of farmland was converted from rotational cropping to continuous corn. Several states across the Midwest had watersheds where continuous corn planting increased by more than 50%. The output from the Center for Agriculture and Rural Development (CARD) econometric model predicted that corn grain production would double. In our study we were able to get within 2% of this expected corn production. The greatest increases in corn production were in the Corn Belt as a result of conversion to continuous corn planting. In addition to changes in cropping practices driven by biofuel initiatives, we found that urban growth would result in a loss of over 7 million acres of productive farmland by 2020. We demonstrate a method that successfully combines economic model output with gridded land cover data to create a spatially explicit, detailed classification of the landscape across the Midwest. Understanding where changes are likely to take place on the landscape will enable the evaluation of trade-offs between economic benefits and ecosystem services, allowing proactive conservation and sustainable production for human well-being into the future.
The ridge and slough landscape of the Florida Everglades consists of a mosaic of linear sawgrass ridges separated by deeper-water sloughs with tree islands interspersed throughout the landscape. We used pollen assemblages from transects of sediment cores spanning sawgrass ridges, sloughs, and ridge-slough transition zones to determine the timing of ridge and slough formation and to evaluate the response of components of the ridge and slough landscape to climate variability and 20th-century water management. These pollen data indicate that sawgrass ridges and sloughs have been vegetationally distinct from one another since initiation of the Everglades wetland in mid-Holocene time. Although the position and community composition of sloughs have remained relatively stable throughout their history, modern sawgrass ridges formed on sites that originally were occupied by marshes. Ridge formation and maturation were initiated during intervals of drier climate (the Medieval Warm Period and the Little Ice Age) when the mean position of the Intertropical Convergence Zone shifted southward. During these drier intervals, marsh taxa were more common in sloughs, but they quickly receded when precipitation increased. Comparison with regional climate records suggests that slough vegetation is strongly influenced by North Atlantic Oscillation variability, even under 20th-century water management practices.
During the 21st century, climate-driven changes in fire regimes will be a key agent of change in forests of the U.S. Pacific Northwest (PNW). Understanding the response of forest carbon (C) dynamics to increases in fire will help quantify limits on the contribution of forest C storage to climate change mitigation and prioritize forest types for monitoring C storage and fire management to minimize C loss. In this study, we used projections of 21st century area burned to explore the consequences of changes in fire regimes on C dynamics in forests of Washington State. We used a novel empirical approach that takes advantage of chronosequences of C pools and fluxes and statistical properties of fire regimes to explore the effects of shifting age class distributions on C dynamics. Forests of the western Cascades are projected to be more sensitive to climate-driven increases in fire, and thus projected changes in C dynamics, than forests of the eastern Cascades. In the western Cascades, mean live biomass C is projected to decrease by 24-37%, and coarse woody debris (CWD) biomass C by 15-25% for the 2040s. Loss of live biomass C is projected to be lower for forests of the eastern Cascades and Okanogan Highlands (17-26%), and CWD biomass is projected to increase. Landscape mean net primary productivity is projected to increase in wet low-elevation forests of the western Cascades, but decrease elsewhere. These forests, and moist forests of the Okanogan Highlands, are projected to have the greatest percentage increases in consumption of live biomass. Percentage increases in consumption of CWD biomass are greater than 50% for all regions and up to four times greater than increases in consumption of live biomass. Carbon sequestration in PNW forests will be highly sensitive to increases in fire, suggesting a cautious approach to managing these forests for C sequestration to mitigate anthropogenic CO2 emissions.
Wood density is a crucial variable in carbon accounting programs of both secondary and old-growth tropical forests. It also is the best single descriptor of wood: it correlates with numerous morphological, mechanical, physiological, and ecological properties. To explore the extent to which wood density could be estimated for rare or poorly censused taxa, and possible sources of variation in this trait, we analyzed regional, taxonomic, and phylogenetic variation in wood density among 2456 tree species from Central and South America. Wood density varied over more than one order of magnitude across species, with an overall mean of 0.645 g/cm3. Our geographical analysis showed significant decreases in wood density with increasing altitude and significant differences among low-altitude geographical regions: wet forests of Central America and western Amazonia have significantly lower mean wood density than dry forests of Central and South America, eastern and central Amazonian forests, and the Atlantic forests of Brazil; and eastern Amazonian forests have lower wood densities than the dry forests and the Atlantic forest. A nested analysis of variance showed that 74% of the species-level wood density variation was explained at the genus level, 34% at the Angiosperm Phylogeny Group (APG) family level, and 19% at the APG order level. This indicates that genus-level means give reliable approximations of values of species, except in a few hypervariable genera. We also studied which evolutionary shifts in wood density occurred in the phylogeny of seed plants using a composite phylogenetic tree. Major changes were observed at deep nodes (Eurosid 1), and also in more recent divergences (for instance in the Rhamnoids, Simaroubaceae, and Anacardiaceae). 
Our unprecedented wood density data set yields consistent guidelines for estimating wood densities when species-level information is lacking and should significantly reduce error in Central and South American carbon accounting programs.
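The practical guideline above, that genus-level means reliably approximate species values because 74% of the variation is explained at the genus level, amounts to a hierarchical fallback: use the species mean if known, else the genus mean, else the family mean, else the overall mean of 0.645 g/cm3. A sketch of that lookup; the function name, table structure, and example values are illustrative placeholders, not the study's data set.

```python
OVERALL_MEAN = 0.645  # g/cm3, data-set-wide mean reported in the study

def estimate_wood_density(species, genus, family,
                          species_means, genus_means, family_means):
    """Estimate wood density (g/cm3) for a taxon, falling back from
    species to genus to family to the overall mean.

    The genus step is the workhorse: the study's nested ANOVA found
    74% of species-level variation explained at the genus level.
    """
    for key, table in ((species, species_means),
                       (genus, genus_means),
                       (family, family_means)):
        if key in table:
            return table[key]
    return OVERALL_MEAN

# Hypothetical usage: species unknown, genus mean available.
density = estimate_wood_density("Cedrela odorata", "Cedrela", "Meliaceae",
                                {}, {"Cedrela": 0.42}, {})
```

The study's caveat applies: in a few hypervariable genera the genus mean is a poor proxy, so a real implementation should carry an uncertainty estimate alongside each fallback level.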
Despite widespread recognition that they provide valuable ecosystem services and contribute significantly to global biodiversity, over half of the world's wetlands have been lost, primarily to agriculture. Wetland loss is evident in prairie Canada, but comprehensive information about causes of ongoing impact for existing wetlands is lacking. Habitat data collected for approximately 10,500 wetlands during annual waterfowl surveys (1985-2005) were analyzed using multistate models to estimate rates of wetland impact and recovery from agricultural activities in the Canadian prairies. An impact was defined as an agricultural activity that visibly altered a wetland margin (natural vegetation surrounding wetland interiors) or basin (interior depression capable of holding water), whereas recovery was deemed to have occurred if agricultural activities had ceased and effects were no longer visibly apparent. We estimated separate impact and recovery rates for wetland basins and wetland margins and considered covariates such as location, time, wetness indices, land use, and wetland permanence. Results indicate that impact rates for wetland margins have declined over time, likely due to a decreasing percentage of unaffected wetlands on the landscape. Recovery rates for margins were always lower than impact rates, suggesting progressive incidence of impacts to wetlands over time. Unlike margins, impact and recovery rates for basins fluctuated with May pond densities, which we used as a wetness index. Shallow ephemeral wetlands located in agricultural fields had the highest impact and lowest recovery rates relative to wetlands with higher water permanence or situated in areas of lower agricultural intensity. High rates and incidence of wetland impact in conjunction with low recovery rates clearly demonstrate the need for stronger wetland protection in prairie Canada.
Proliferation of woody plants in grasslands and savannas is a persistent problem globally. This widely observed shift from grass to shrub dominance in rangelands worldwide has been heterogeneous in space and time largely due to cross-scale interactions among soils, climate, and land-use history. Our objective was to use a hierarchical framework to evaluate the relationship between spatial patterns in soil properties and long-term shrub dynamics in the northern Chihuahuan Desert of New Mexico, USA. To meet this objective, shrub patch dynamics from 1937 to 2008 were characterized at patch and landscape scales using historical imagery and a recent digital soils map. Effects of annual precipitation on patch dynamics on two soils revealed strong correlations between shrub growth on deep sandy soils and above-average rainfall years (r = 0.671, P = 0.034) and shrub colonization and below-average rainfall years on shallow sandy soils (r = 0.705, P = 0.023). Patch-level analysis of demographic patterns revealed significant differences between shrub patches on deep and shallow sandy soils during periods of above- and below-average rainfall. Both deep and shallow sandy soils exhibited low shrub cover in 1937 (1.0% +/- 2.3% and 0.3% +/- 1.3%, respectively [mean +/- SD]) and were characterized by colonization or appearance of new patches until 1960. However, different demographic responses to the cessation of severe drought on the two soils and increased frequency of wet years after 1960 have resulted in very different endpoints. In 2008 a shrubland occupied the deep sandy soils with cover at 19.8% +/- 9.1%, while a shrub-dominated grassland occurred on the shallow sandy soils with cover at 9.3% +/- 7.2%. Present-day shrub vegetation constitutes a shifting mosaic marked by the coexistence of patches at different stages of development. 
Management implications of this long-term multi-scale assessment of vegetation dynamics support the notion that soil properties may constrain grassland remediation. Such efforts on sandy soils should be focused on sites characterized by near-surface water-holding capacity, as those lacking available water-holding capacity in the shallow root zone pose challenges to grass recovery and survival.
Prospective elasticity analyses have been used to aid in the management of fished species and the conservation of endangered species. Elasticities were examined for deterministic size-based matrix models of red abalone, Haliotis rufescens, and white abalone, H. sorenseni, to evaluate which size classes influenced population growth (lambda) the most. In the red abalone matrix, growth transitions were determined from a tag recapture study and grouped into nine size classes. In the white abalone matrix, abalone growth was determined from a laboratory study and grouped into five size classes. Survivorship was estimated from tag recapture data for red abalone using a Jolly-Seber model with size as a covariate and used for both red and white abalone. Reproduction estimates for both models used averages of the number of mature eggs produced by female red and white abalone in each size class from four-year reproduction studies. Population growth rate (lambda) was set to 1.0, and the first-year survival (larval survival through to the first size class) was estimated by iteration. Survival elasticities were higher than fecundity elasticities in both the red and white matrix models. The size classes with the greatest survival elasticities, and therefore the most influence on population growth in the model, were the sublegal red abalone (150-178 mm) and the largest white abalone size class (140-175 mm). For red abalone, the existing minimum legal size (178 mm) protects the size class the model suggests is critical to population growth. Implementation of education programs for novice divers coupled with renewed enforcement may serve to minimize incidental mortality of the critical size class. For white abalone, conservation efforts directed at restoring adults may have more of an impact on population growth than efforts focusing on juveniles. 
Our work is an example of how prospective elasticity analyses of size-structured matrix models can be used to quantitatively evaluate research priorities, fishery management strategies, and conservation options.
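The elasticity calculation behind this kind of analysis can be sketched in a few lines. The matrix below is a hypothetical three-stage example with illustrative values, not the published nine- or five-class abalone parameters; elasticities are computed from the dominant eigenvalue (lambda) and its left and right eigenvectors.

```python
import numpy as np

# Hypothetical three-stage size-based Lefkovitch matrix (illustrative values,
# not the published abalone parameters). Row 1 holds fecundity contributions
# to the first size class; subdiagonal/diagonal entries are growth and stasis.
A = np.array([
    [0.0, 0.5, 2.0],
    [0.3, 0.4, 0.0],
    [0.0, 0.3, 0.8],
])

# Dominant eigenvalue (lambda) with right and left eigenvectors.
eigvals, right = np.linalg.eig(A)
i = np.argmax(eigvals.real)
lam = eigvals.real[i]
w = np.abs(right[:, i].real)            # stable stage distribution
eigvals_l, left = np.linalg.eig(A.T)
j = np.argmax(eigvals_l.real)
v = np.abs(left[:, j].real)             # reproductive values

# Elasticities: e_ij = (a_ij / lambda) * v_i * w_j / <v, w>;
# they always sum to 1 over the whole matrix.
E = (A / lam) * np.outer(v, w) / (v @ w)

print("lambda =", round(lam, 3))
print("elasticity matrix:")
print(np.round(E, 3))
```

In a prospective analysis like the one summarized above, the stage whose entries carry the largest elasticities is the one where a proportional change in vital rates moves lambda the most.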
Archaeological data from coastal shell middens provide a window into the structure of ancient marine ecosystems and the nature of human impacts on fisheries that often span millennia. For decades Channel Island archaeologists have studied Middle Holocene shell middens visually dominated by large and often whole shells of the red abalone (Haliotis rufescens). Here we use modern ecological data, historical accounts, commercial red abalone catch records, and zooarchaeological data to examine long-term spatial and temporal variation in the productivity of red abalone fisheries on the Northern Channel Islands, California (USA). Historical patterns of abundance, in which red abalone densities increase from east to west through the islands, extend deep into the Holocene. The correlation of historical and archaeological data argue for long-term spatial continuity in productive red abalone fisheries and a resilience of abalone populations despite dramatic ecological changes and intensive human predation spanning more than 8000 years. Archaeological, historical, and ecological data suggest that California kelp forests and red abalone populations are structured by a complex combination of top-down and bottom-up controls.
The effects of abandoned mine drainage (AMD) on streams and responses to remediation efforts were studied using three streams (AMD-impacted, remediated, reference) in both the anthracite and the bituminous coal mining regions of Pennsylvania (USA). Response variables included ecosystem function as well as water chemistry and macroinvertebrate community composition. The bituminous AMD stream was extremely acidic with high dissolved metals concentrations, a prolific mid-summer growth of the filamentous alga, Mougeotia, and > 10-fold more chlorophyll than the reference stream. The anthracite AMD stream had a higher pH, substrata coated with iron hydroxide(s), and negligible chlorophyll. Macroinvertebrate communities in the AMD streams were different from the reference streams, the remediated streams, and each other. Relative to the reference stream, the AMD stream(s) had (1) greater gross primary productivity (GPP) in the bituminous region and undetectable GPP in the anthracite region, (2) greater ecosystem respiration in both regions, (3) greatly reduced ammonium uptake and nitrification in both regions, (4) lower nitrate uptake in the bituminous (but not the anthracite) region, (5) more rapid phosphorus removal from the water column in both regions, (6) activities of phosphorus-acquiring, nitrogen-acquiring, and hydrolytic-carbon-acquiring enzymes that indicated extreme phosphorus limitation in both regions, and (7) slower oak and maple leaf decomposition in the bituminous region and slower oak decomposition in the anthracite region. 
Remediation brought chlorophyll concentrations and GPP nearer to values for respective reference streams, depressed ecosystem respiration, restored ammonium uptake, and partially restored nitrification in the bituminous (but not the anthracite) region, reduced nitrate uptake to an undetectable level, restored phosphorus uptake to near normal rates, and brought enzyme activities more in line with the reference stream in the bituminous (but not the anthracite) region. Denitrification was not detected in any stream. Water chemistry and macroinvertebrate community structure analyses capture the impact of AMD at the local reach scale, but functional measures revealed that AMD has ramifications that can cascade to downstream reaches and perhaps to receiving estuaries.
Current rates of deforestation and the resulting C emissions in the tropics exceed those of secondary forest regrowth and C sequestration. Changing land-use strategies that would maintain standing forests may be among the least expensive of climate change mitigation options. Further, secondary tropical forests have been suggested to have great value for their potential to sequester atmospheric C. These options require an understanding of and capability to quantify C dynamics at landscape scales. Because of the diversity of physical and biotic features of tropical forests as well as approaches and intensities of land uses within the neotropics, there are tremendous differences in the capacity of different landscapes to store and sequester C. Major gaps in our current knowledge include quantification of C pools, rates and patterns of biomass loss following land-cover change, and quantification of the C storage potential of secondary forests following abandonment. In this paper we present a synthesis and further analyses from recent studies that describe C pools, patterns of C decline associated with land use, and rates of C accumulation following secondary-forest establishment--all information necessary for climate-change mitigation options. Ecosystem C pools of Neotropical primary forests minimally range from approximately 141 to 571 Mg/ha, demonstrating tremendous differences in the capacity of different forests to store C. Most of the losses in C and nutrient pools associated with conversion occur when fires are set to remove the slashed forest to prepare sites for crop or pasture establishment. Fires burning slashed primary forests have been found to result in C losses of 62-80% of prefire aboveground pools in dry (deciduous) forest landscapes and 29-57% in wet (evergreen) forest landscapes. Carbon emissions equivalent to the aboveground primary-forest pool arise from repeated fires occurring in the first 4 to 10 years following conversion. 
Feedbacks of climate change, land-cover change, and increasing habitat fragmentation may result in increases of both the area burned and the total quantity of biomass consumed per unit area by fire. These effects may well limit the capacity for future tropical forests to sequester C and nutrients.
The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multispecies hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions about species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation data set. We found that wetland hydroperiod (the length of time that a wetland holds water), as well as the occurrence state in the prior year, were generally the most important factors in determining occupancy. The model with habitat-only covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. 
Our results demonstrate the utility of multispecies models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for detection biases.
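As a minimal illustration of the evaluation metric, ROC AUC can be computed directly from its rank-statistic definition: the probability that a randomly chosen occupied site receives a higher predicted score than an unoccupied one. The data below are fabricated, and this is a single point estimate, whereas the study computed a full Bayesian posterior distribution of AUC values.

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outranks a randomly chosen negative."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical predicted wetland-occupancy probabilities vs. observed
# detections (illustrative values, not the study's data).
observed  = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [0.9, 0.2, 0.7, 0.6, 0.65, 0.1, 0.8, 0.5]
print(round(roc_auc(observed, predicted), 3))
```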
Invasion of native ecosystems by exotic species can seriously threaten native biodiversity, alter ecosystem function, and inhibit conservation. Moreover, restoration of native plant communities is often impeded by competition from exotic species. Exotic species invasion may be limited by unfavorable abiotic conditions and by competition with native species, but the relative importance of biotic and abiotic factors remains controversial and may vary during the invasion process. We used a long-term experiment involving restored vernal pool plant communities to characterize the temporal dynamics of exotic species invasion, and to evaluate the relative support for biotic and abiotic factors affecting invasion resistance. Experimental pools (n=256) were divided among controls and several seeding treatments. In most treatments, native vernal pool species were initially more abundant than exotic species, and pools that initially received more native seeds exhibited lower frequencies of exotic species over time. However, even densely seeded pools were eventually dominated by exotic species, following extreme climatic events that reduced both native and exotic plant densities across the study site. By the sixth year of the experiment, most pools supported more exotics than native vernal pool species, regardless of seeding treatment or pool depth. Although deeper pools were less invaded by exotic species, two exotics (Hordeum marinum and Lolium multiflorum) were able to colonize deeper pools as soon as the cover of native species was reduced by climatic extremes. Based on an information-theoretic analysis, the best model of invasion resistance included a nonlinear effect of seeding treatment and both linear and nonlinear effects of pool depth. Pool depth received more support as a predictor of invasion resistance, but seeding intensity was also strongly supported in multivariate models of invasion, and was the best predictor of resistance to invasion by H. marinum and L. 
multiflorum. We conclude that extreme climatic events can facilitate exotic species invasions by both reducing abiotic constraints and weakening biotic resistance to invasion.
Effective management of invasive species requires that we understand the mechanisms determining community invasibility. Successful invaders must tolerate abiotic conditions and overcome resistance from native species in invaded habitats. Biotic resistance to invasions may reflect the diversity, abundance, or identity of species in a community. Few studies, however, have examined the relative importance of abiotic and biotic factors determining community invasibility. In a greenhouse experiment, we simulated the abiotic and biotic gradients typically found in vernal pools to better understand their impacts on invasibility. Specifically, we invaded plant communities differing in richness, identity, and abundance of native plants (the "plant neighborhood") and depth of inundation to measure their effects on growth, reproduction, and survival of five exotic plant species. Inundation reduced growth, reproduction, and survival of the five exotic species more than did plant neighborhood. Inundation reduced survival of three species and growth and reproduction of all five species. Neighboring plants reduced growth and reproduction of three species but generally did not affect survival. Brassica rapa, Centaurea solstitialis, and Vicia villosa all suffered high mortality due to inundation but were generally unaffected by neighboring plants. In contrast, Hordeum marinum and Lolium multiflorum, whose survival was unaffected by inundation, were more impacted by neighboring plants. However, the four measures describing plant neighborhood differed in their effects. Neighbor abundance impacted growth and reproduction more than did neighbor richness or identity, with growth and reproduction generally decreasing with increasing density and mass of neighbors. Collectively, these results suggest that abiotic constraints play the dominant role in determining invasibility along vernal pool and similar gradients. 
By reducing survival, abiotic constraints allow only species with the appropriate morphological and physiological traits to invade. In contrast, biotic resistance reduces invasibility only in more benign environments and is best predicted by the abundance, rather than diversity, of neighbors. These results suggest that stressful environments are not likely to be invaded by most exotic species. However, species, such as H. marinum, that are able to invade these habitats require careful management, especially since these environments often harbor rare species and communities.
The ecological impacts of forest plantations are a focus of intense debate, from studies that consider plantations as "biological deserts" to studies showing positive effects on plant diversity and dynamics. This lack of consensus might be influenced by the scarcity of studies that examine how the ecological characteristics of plantations vary along abiotic and biotic gradients. Here we conducted a large-scale assessment of plant regeneration and diversity in plantations of southern Spain. Tree seedling and sapling density, plant species richness, and Shannon's (H') diversity index were analyzed in 442 pine plantation plots covering a wide gradient of climatic conditions, stand density, and distance to natural forests that act as seed sources. Pronounced variation in regeneration and diversity was found in plantation understories along the gradients explored. Low- to mid-altitude plantations showed a diverse and abundant seedling bank dominated by Quercus ilex, whereas high-altitude plantations showed a virtually monospecific seedling bank of Pinus sylvestris. Regeneration was absent in plantations with stand densities exceeding 1500 pines/ha. Moderate plantation densities (500-1000 pines/ha) promoted recruitment in comparison to low or no canopy cover, suggesting the existence of facilitative interactions. Quercus ilex recruitment diminished exponentially with distance to the nearest Q. ilex forest. Richness and H' index values showed a hump-shaped distribution along the altitudinal and radiation gradients and decreased monotonically along the stand density gradient. From a management perspective, different strategies will be necessary depending on where a plantation lies along the gradients explored. Active management will be required in high-density plantations with arrested succession and low diversity. Thinning could redirect plantations toward more natural densities where facilitation predominates.
Passive management might be recommended for low- to moderate-density plantations with active successional dynamics (e.g., toward oak or pine-oak forests at low to mid altitudes). Enrichment planting will be required to overcome seed limitation, especially in plantations far from natural forests. We conclude that plantations should be perceived as dynamic systems where successional trajectories and diversity levels are determined by abiotic constraints, complex balances of competitive and facilitative interactions, the spatial configuration of native seed sources, and species life-history traits.
We report reptile and arboreal marsupial responses to vegetation planting and remnant native vegetation in agricultural landscapes in southeastern Australia. We used a hierarchical survey to select 23 landscapes that varied in the amounts of remnant native vegetation and planted native vegetation. We selected two farms within each landscape. In landscapes with plantings, we selected one farm with and one farm without plantings. We surveyed arboreal marsupials and reptiles on four sites on each farm that encompassed four vegetation types (plantings 7-20 years old, old-growth woodland, naturally occurring seedling regrowth woodland, and coppice [i.e., multistemmed] regrowth woodland). Reptiles and arboreal marsupials were less likely to occur on farms and in landscapes with comparatively large areas of plantings. Such farms and landscapes had less native vegetation, fewer paddock trees, and less woody debris within those areas of natural vegetation. The relatively large area of planting on these farms was insufficient to overcome the lack of these key structural attributes. Old-growth woodland, coppice regrowth, seedling regrowth, and planted areas had different habitat values for different reptiles and arboreal marsupials. We conclude that, although plantings may improve habitat conditions for some taxa, they may not effectively offset the negative effects of native vegetation clearing for all species, especially those reliant on old-growth woodland. Restoring suitable habitat for such species may take decades to centuries.
The recent trend to place monetary values on ecosystem services has led to studies on the economic importance of pollinators for agricultural crops. Several recent studies indicate regional, long-term pollinator declines, and economic consequences have been derived from declining pollination efficiencies. However, use of pollinator services as economic incentives for conservation must consider environmental factors such as drought, pests, and diseases, which can also limit yields. Moreover, "flower excess" is a well-known reproductive strategy of plants as insurance against unpredictable, external factors that limit reproduction. With three case studies on the importance of pollination levels for amounts of harvested fruits of three tropical crops (passion fruit in Brazil, coffee in Ecuador, and cacao in Indonesia) we illustrate how reproductive strategies and environmental stress can obscure initial benefits from improved pollination. By interpreting these results with findings from evolutionary sciences, agronomy, and studies on wild-plant populations, we argue that studies on economic benefits from pollinators should include the total of ecosystem processes that (1) lead to successful pollination and (2) mobilize nutrients and improve plant quality to the extent that crop yields indeed benefit from enhanced pollinator services. Conservation incentives that use quantifications of nature's services to human welfare will benefit from approaches at the ecosystem level that take into account the broad spectrum of biological processes that limit or deliver the service.
Reducing Emissions from Deforestation and Forest Degradation (REDD) in efforts to combat climate change requires participating countries to periodically assess their forest resources on a national scale. Such a process is particularly challenging in the tropics because of technical difficulties related to large aboveground forest biomass stocks, restricted availability of affordable, appropriate remote-sensing images, and a lack of accurate forest inventory data. In this paper, we apply the Fourier-based FOTO method of canopy texture analysis to Google Earth's very-high-resolution images of the wet evergreen forests in the Western Ghats of India in order to (1) assess the predictive power of the method on aboveground biomass of tropical forests, (2) test the merits of free Google Earth images relative to their native commercial IKONOS counterparts and (3) highlight further research needs for affordable, accurate regional aboveground biomass estimations. We used the FOTO method to ordinate Fourier spectra of 1436 square canopy images (125 x 125 m) with respect to a canopy grain texture gradient (i.e., a combination of size distribution and spatial pattern of tree crowns), benchmarked against virtual canopy scenes simulated from a set of known forest structure parameters and a 3-D light interception model. We then used 15 1-ha ground plots to demonstrate that both texture gradients provided by Google Earth and IKONOS images strongly correlated with field-observed stand structure parameters such as the density of large trees, total basal area, and aboveground biomass estimated from a regional allometric model. Our results highlight the great potential of the FOTO method applied to Google Earth data for biomass retrieval because the texture-biomass relationship is only subject to 15% relative error, on average, and does not show obvious saturation trends at large biomass values. 
We also provide the first reliable map of tropical forest aboveground biomass predicted from free Google Earth images.
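The core texture statistic underlying FOTO-style analysis, a radially averaged Fourier power spectrum (r-spectrum) of a square canopy window, can be sketched as follows. This is an illustrative simplification: the published method goes on to ordinate many such r-spectra with PCA, and the synthetic image and bin count here are assumptions.

```python
import numpy as np

def r_spectrum(window, n_bins=10):
    """Radially averaged 2-D Fourier power spectrum of an image window,
    binned from coarse (low frequency) to fine (high frequency) texture."""
    h, w = window.shape
    f = np.fft.fftshift(np.fft.fft2(window - window.mean()))
    power = np.abs(f) ** 2
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx)                 # radial frequency of each cell
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    spec = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return spec / np.maximum(counts, 1)

# Synthetic "canopy" with a dominant crown-scale period of ~8 pixels plus noise.
y, x = np.indices((64, 64))
canopy = np.sin(2 * np.pi * x / 8) + 0.1 * np.random.default_rng(0).standard_normal((64, 64))
spec = r_spectrum(canopy)
print("coarse-to-fine power profile:", np.round(spec / spec.max(), 2))
```

The bin holding the most power identifies the dominant canopy grain; coarse-grained (large-crown) canopies load low-frequency bins, fine-grained canopies load high-frequency bins.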
Regional, high-resolution mapping of vegetation cover and biomass is central to understanding changes to the terrestrial carbon (C) cycle, especially in the context of C management. The third most extensive vegetation type in the United States is pinyon-juniper (P-J) woodland, yet the spatial patterns of tree cover and aboveground biomass (AGB) of P-J systems are poorly quantified. We developed a synoptic remote-sensing approach to scale up pinyon and juniper projected cover (hereafter "cover") and AGB field observations from plot to regional levels using fractional photosynthetic vegetation (PV) cover derived from airborne imaging spectroscopy and Landsat satellite data. Our results demonstrated strong correlations (P < 0.001) between field cover and airborne PV estimates (r2 = 0.92), and between airborne and satellite PV estimates (r2 = 0.61). Field data also indicated that P-J AGB can be estimated from canopy cover using a unified allometric equation (r2 = 0.69; P < 0.001). Using these multiscale cover-AGB relationships, we developed high-resolution, regional maps of P-J cover and AGB for the western Colorado Plateau. The P-J cover was 27.4% +/- 9.9% (mean +/- SD), and the mean aboveground woody C converted from AGB was 5.2 +/- 2.0 Mg C/ha. Combining our data with the southwest Regional Gap Analysis Program vegetation map, we estimated that total contemporary woody C storage for P-J systems throughout the Colorado Plateau (113 600 km2) is 59.0 +/- 22.7 Tg C. Our results show how multiple remote-sensing observations can be used to map cover and C stocks at high resolution in drylands, and they highlight the role of P-J ecosystems in the North American C budget.
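A unified cover-to-biomass allometry of the power-law form AGB = a * cover^b can be fit by ordinary least squares in log-log space. The sketch below uses synthetic plot data and made-up coefficients, not the published regional equation.

```python
import numpy as np

# Synthetic plot data (illustrative, not the study's measurements):
# canopy cover (%) and aboveground biomass (Mg/ha) following a power law
# with multiplicative lognormal noise.
rng = np.random.default_rng(1)
cover = rng.uniform(5, 60, size=80)
true_a, true_b = 0.4, 1.3
agb = true_a * cover**true_b * np.exp(rng.normal(0, 0.15, 80))

# Linear regression in log-log space: log(AGB) = log(a) + b * log(cover).
b_hat, log_a_hat = np.polyfit(np.log(cover), np.log(agb), 1)
a_hat = np.exp(log_a_hat)
print(f"fitted: AGB ~ {a_hat:.2f} * cover^{b_hat:.2f}")
```

Once fitted, such an equation lets remotely sensed fractional cover maps be converted pixel-by-pixel into biomass (and thence carbon) maps, which is the scaling step the abstract describes.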
Greenhouse-gas emissions resulting from logging are poorly quantified across the tropics. There is a need for robust measurement of rain forest biomass and the impacts of logging from which carbon losses can be reliably estimated at regional and global scales. We used a modified Bitterlich plotless technique to measure aboveground live biomass at six unlogged and six logged rain forest areas (coupes) across two approximately 3000-ha regions at the Makapa concession in lowland Papua New Guinea. "Reduced-impact logging" is practiced at Makapa. We found the mean unlogged aboveground biomass in the two regions to be 192.96 +/- 4.44 Mg/ha and 252.92 +/- 7.00 Mg/ha (mean +/- SE), which was reduced by logging to 146.92 +/- 4.58 Mg/ha and 158.84 +/- 4.16 Mg/ha, respectively. Killed biomass was not a fixed proportion, but varied with unlogged biomass, with 24% killed in the lower-biomass region, and 37% in the higher-biomass region. Across the two regions logging resulted in a mean aboveground carbon loss of 35 +/- 2.8 Mg/ha. The plotless technique proved efficient at estimating mean aboveground biomass and logging damage. We conclude that substantial bias is likely to occur within biomass estimates derived from single unreplicated plots.
The expansion of selective logging in tropical forests may be an important source of global carbon emissions. However, the effects of logging practices on the carbon cycle have never been quantified over long periods of time. We followed the fate of more than 60 000 tropical trees over 23 years to assess changes in aboveground carbon stocks in 48 1.56-ha plots in French Guiana that represent a gradient of timber harvest intensities, with and without intensive timber stand improvement (TSI) treatments to stimulate timber tree growth. Conventional selective logging led to emissions equivalent to more than a third of aboveground carbon stocks in plots without TSI (85 Mg C/ha), while plots with TSI lost more than one-half of aboveground carbon stocks (142 Mg C/ha). Within 20 years of logging, plots without TSI sequestered aboveground carbon equivalent to more than 80% of aboveground carbon lost to logging (-70.7 Mg C/ha), and our simulations predicted an equilibrium aboveground carbon balance within 45 years of logging. In contrast, plots with intensive TSI are predicted to require more than 100 years to sequester aboveground carbon lost to emissions. These results indicate that in some tropical forests aboveground carbon storage can be recovered within half a century after conventional logging at moderate harvest intensities.
The restoration of cleared dry forest represents an important opportunity to sequester atmospheric carbon. To account for this potential, the influences of climate, soils, and disturbance need to be deciphered. A region-spanning data set of aboveground biomass in mulga (Acacia aneura) dry forest was analyzed in relation to climate and soil variables using a Bayesian model averaging procedure. Mean annual rainfall had an overwhelmingly strong positive effect, with mean maximum temperature (negative) and soil depth (positive) also important. The data were collected after a recent drought, and the amount of recent tree mortality was weakly positively related to a measure of three-year rainfall deficit, and to maximum temperature (positive), soil depth (negative), and coarse sand (negative). A grazing index represented by the distance of sites to watering points was not incorporated by the models. Stark management contrasts, including grazing exclosures, can represent a substantial part of the variance in the model predicting biomass, but the impact of management was unpredictable and was insignificant in the regional data set. There was no evidence of density-dependent effects on tree mortality. Climate change scenarios represented by the coincidence of historical extreme rainfall deficit with extreme temperature suggest mortality of 30.1% of aboveground biomass, compared to 21.6% after the recent (2003-2007) drought. Projections for recovery of forest using a mapping base of cleared areas revealed that the greatest opportunities for restoration of aboveground biomass are in the higher-rainfall areas, where biomass accumulation will be greatest and droughts are less intense. These areas are probably the most productive for rangeland pastoralism, and the trade-off between pastoral production and carbon sequestration will be determined by market forces and carbon-trading rules.
Primary tropical forests are renowned for their high biodiversity and carbon storage, and considerable research has documented both species and carbon losses with deforestation and agricultural land uses. Economic drivers are now leading to the abandonment of agricultural lands, and the area in secondary forests is increasing. We know little about how long it takes for these ecosystems to achieve the structural and compositional characteristics of primary forests. In this study, we examine changes in plant species composition and aboveground biomass during eight decades of tropical secondary succession in Puerto Rico, and compare these patterns with primary forests. Using a well-replicated chronosequence approach, we sampled primary forests and secondary forests established 10, 20, 30, 60, and 80 years ago on abandoned pastures. Tree species composition in all secondary forests was different from that of primary forests and could be divided into early (10-, 20-, and 30-year) vs. late (60- and 80-year) successional phases. The highest rates of aboveground biomass accumulation occurred in the first 20 years, with rates of C sequestration peaking at 6.7 +/- 0.5 Mg C x ha(-1) x yr(-1). Reforestation of pastures resulted in an accumulation of 125 Mg C/ha in aboveground standing live biomass over 80 years. The 80-year-old secondary forests had greater biomass than the primary forests, due to the replacement of woody species by palms in the primary forests. Our results show that these new ecosystems have different species composition, but similar species richness, and significant potential for carbon sequestration, compared to remnant primary forests.
Most methods for modeling species distributions from occurrence records require additional data representing the range of environmental conditions in the modeled region. These data, called background or pseudo-absence data, are usually drawn at random from the entire region, whereas occurrence collection is often spatially biased toward easily accessed areas. Since the spatial bias generally results in environmental bias, the difference between occurrence collection and background sampling may lead to inaccurate models. To correct the estimation, we propose choosing background data with the same bias as occurrence data. We investigate theoretical and practical implications of this approach. Accurate information about spatial bias is usually lacking, so explicit biased sampling of background sites may not be possible. However, it is likely that an entire target group of species observed by similar methods will share similar bias. We therefore explore the use of all occurrences within a target group as biased background data. We compare model performance using target-group background and randomly sampled background on a comprehensive collection of data for 226 species from diverse regions of the world. We find that target-group background improves average performance for all the modeling methods we consider, with the choice of background data having as large an effect on predictive performance as the choice of modeling method. The performance improvement due to target-group background is greatest when there is strong bias in the target-group presence records. Our approach applies to regression-based modeling methods that have been adapted for use with occurrence data, such as generalized linear or additive models and boosted regression trees, and to Maxent, a probability density estimation method. 
We argue that increased awareness of the implications of spatial bias in surveys, and possible modeling remedies, will substantially improve predictions of species distributions.
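The target-group background idea can be illustrated with a minimal sketch: background localities are drawn from the pooled occurrence records of the whole target group, so they inherit the same survey bias as the presence records. All species names and coordinates below are fabricated, and excluding the focal species' own records is one possible design choice, not necessarily the authors' exact protocol.

```python
import random

# Fabricated target-group occurrence records: (species, (longitude, latitude)).
target_group_records = [
    ("sp_a", (-120.1, 38.2)), ("sp_a", (-120.3, 38.4)),
    ("sp_b", (-120.2, 38.3)), ("sp_b", (-121.0, 39.1)),
    ("sp_c", (-120.15, 38.25)), ("sp_c", (-120.9, 39.0)),
]

def target_group_background(records, focal_species, n, seed=0):
    """Draw n background localities from the pooled occurrences of the
    target group (here excluding the focal species' own records), sampled
    with replacement, instead of uniformly at random from the region."""
    pool = [loc for sp, loc in records if sp != focal_species]
    rng = random.Random(seed)
    return [rng.choice(pool) for _ in range(n)]

background = target_group_background(target_group_records, "sp_a", n=4)
print(background)
```

Because the background now reflects where surveyors actually went, the model contrasts the focal species against survey effort rather than against unvisited terrain, which is the bias correction the abstract describes.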