Integrated Environmental Assessment and Management

Published by Wiley
Online ISSN: 1551-3793
Print ISSN: 1551-3777
Increasing regulatory attention to 1,4-dioxane has prompted the United States Air Force (USAF) to evaluate potential environmental liabilities, primarily associated with legacy contamination, at an enterprise scale. Although accurately quantifying environmental liability is operationally difficult given limited historic environmental monitoring data, 1,4-dioxane is a known constituent (i.e., stabilizer) of chlorinated solvents, in particular 1,1,1-trichloroethane (TCA). Evidence regarding the co-occurrence of 1,4-dioxane and trichloroethylene (TCE), however, has been heavily debated. In fact, the prevailing opinion is that 1,4-dioxane was not a constituent of past TCE formulations and that these 2 contaminants would therefore be unlikely to co-occur in the same groundwater plume. Because historic handling, storage, and disposal practices for chlorinated solvents have resulted in widespread groundwater contamination at USAF installations, significant potential exists for unidentified 1,4-dioxane contamination. The objective of this investigation was therefore to determine the extent to which 1,4-dioxane co-occurs with TCE compared to TCA and, where these chemicals are co-contaminants, whether a significant correlation exists in the available monitoring data. To accomplish these objectives, the USAF Environmental Restoration Program Information Management System (ERPIMS) was queried for all relevant records for groundwater monitoring wells (GMWs) with 1,4-dioxane, TCA, and TCE, on which both categorical and quantitative analyses were carried out. Overall, ERPIMS contained 5788 GMWs from 49 installations with records for 1,4-dioxane, TCE, and TCA analytes. 1,4-Dioxane was observed in 17.4% of the GMWs with detections for TCE and/or TCA, and these wells accounted for 93.7% of all 1,4-dioxane detections, verifying that 1,4-dioxane is seldom found independent of chlorinated solvent contamination. Surprisingly, 64.4% of all 1,4-dioxane detections were associated with TCE alone. 
Given the extensive data set, these results demonstrate conclusively for the first time that 1,4-dioxane is a relatively common groundwater co-contaminant with TCE. Trend analysis demonstrated a positive log-linear relationship in which median 1,4-dioxane levels increased by approximately 6% to approximately 20% of the increase in TCE levels. In conclusion, this data mining exercise suggests that 1,4-dioxane has a probability of co-occurrence of approximately 17% with TCE and/or TCA. Given the challenges imposed by remediation of 1,4-dioxane and the pending promulgation of a federal regulatory standard, environmental project managers should use the information presented in this article to prioritize future characterization efforts in response to this emerging issue. Importantly, site investigations should consider 1,4-dioxane a potential co-contaminant of TCE in groundwater plumes. Integr Environ Assess Manag 2012; 8: 731-737. © 2012 SETAC.
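The categorical analysis described above (co-occurrence rates across monitoring wells) can be sketched as a simple tally; the well records and percentages below are illustrative, not drawn from ERPIMS:

```python
def co_occurrence_stats(wells):
    """wells: one set of detected analytes per groundwater monitoring well."""
    solvent_wells = [w for w in wells if "TCE" in w or "TCA" in w]
    dioxane_total = sum(1 for w in wells if "dioxane" in w)
    dioxane_with_solvent = sum(1 for w in solvent_wells if "dioxane" in w)
    return {
        # share of solvent-impacted wells that also show 1,4-dioxane
        "pct_solvent_wells_with_dioxane": 100 * dioxane_with_solvent / len(solvent_wells),
        # share of all 1,4-dioxane detections that coincide with a solvent
        "pct_dioxane_with_solvent": 100 * dioxane_with_solvent / dioxane_total,
    }

# hypothetical records for five wells
wells = [{"TCE", "dioxane"}, {"TCE"}, {"TCA"}, {"TCE", "TCA", "dioxane"}, {"dioxane"}]
stats = co_occurrence_stats(wells)
```

Applied to the full ERPIMS query, tallies of this kind yield the 17.4% and 93.7% figures reported above.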
The risks posed by 1,4-dioxane (dioxane) concentrations in wastewater treatment plant (WWTP) effluents, receiving primarily domestic wastewater, to downstream drinking water intakes were estimated using distributions of measured dioxane concentrations in effluents from 40 WWTPs and surface water dilution factors for 1,323 drinking water intakes across the U.S. Effluent samples were spiked with a d8-1,4-dioxane internal standard in the field immediately after sample collection. Dioxane was extracted with ENVI-CARB-Plus solid phase columns and analyzed by GC/MS/MS, with a limit of quantification of 0.30 µg/L. Measured dioxane concentrations in domestic wastewater effluents ranged from <0.30 to 3.30 µg/L, with a mean concentration of 1.11 ± 0.60 µg/L. Dilution of upstream effluent inputs was estimated for U.S. drinking water intakes using the iSTREEM model at mean flow conditions, assuming no in-stream loss of dioxane. Dilution factors ranged from 2.6 to 48,113, with a mean of 875. The distributions of dilution factors and dioxane concentrations in effluent were then combined using Monte Carlo analysis to estimate dioxane concentrations at drinking water intakes. This analysis showed a negligible probability (p = 0.0031) that dioxane inputs from upstream WWTPs could result in intake concentrations exceeding the USEPA's drinking water advisory concentration of 0.35 µg/L, prior to any treatment of the water for drinking use. Integr Environ Assess Manag © 2013 SETAC.
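A minimal version of the Monte Carlo step (combining an effluent-concentration distribution with a dilution-factor distribution) might look like the following; the distributional forms and parameters are assumptions for illustration, not the fitted distributions from the study:

```python
import math
import random

random.seed(42)
ADVISORY = 0.35  # USEPA drinking water advisory, µg/L
N = 100_000

exceed = 0
for _ in range(N):
    # effluent dioxane: normal(mean 1.11, sd 0.60) µg/L, truncated at zero (assumed form)
    c_eff = max(0.0, random.gauss(1.11, 0.60))
    # dilution factor: log-normal with an assumed median of ~300,
    # clipped to the reported range of 2.6 to 48,113
    df = math.exp(random.gauss(math.log(300), 1.2))
    df = min(max(df, 2.6), 48_113)
    if c_eff / df > ADVISORY:
        exceed += 1

p_exceed = exceed / N  # fraction of draws in which an intake exceeds the advisory
```

The study's p = 0.0031 came from the empirically fitted distributions; the sketch only shows how the two distributions are combined draw by draw.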
In Europe, the decision whether a waste is hazardous or not is based on 15 properties, among them the HP 14 property ("ecotoxic": waste which presents or may present immediate or delayed risks for one or more sectors of the environment). This document describes a strategy for assessing the HP 14 property based on a combination of two approaches: the summation of classified compounds in the waste, carried out according to the regulation on Classification, Labelling and Packaging of substances and mixtures (CLP), and the use of results from biotests performed on waste eluates and solid wastes. The proposal is based mainly on recommendations from a European ring test performed in 2007, the work carried out in the CEN/TC 292/WG 7 standardization working group, and the results of various research projects on the ecotoxicological characterization of waste performed mainly in France and Germany. Examples are provided showing that, using this approach, a distinction between hazardous and non-hazardous wastes is possible independently of which type of threshold value is used (currently, both Effect Concentration (EC) and Lowest Ineffective Dilution (LID) values have been employed successfully). Furthermore, a battery of tests (three using waste eluates and three using solid waste samples, plus, under certain conditions, a genotoxicity test) is recommended for the ecotoxicological testing of wastes. We propose considering this combined approach when defining the legal requirements for the ecotoxicological classification of wastes. Integr Environ Assess Manag © 2013 SETAC.
The Great Lakes Indian Fish and Wildlife Commission produces consumption advisories for methylmercury in walleye (Sander vitreus) harvested by its member tribes in the 1837 and 1842 ceded territories of Michigan, Minnesota, and Wisconsin, USA. Lake-specific advice is based primarily on regressions of methylmercury concentrations on walleye length and incorporates standard reference doses to generate recommended meal frequencies. The effects of variability and uncertainty are directly incorporated into the consumption advice through confidence bounds for the general population and prediction bounds for the sensitive population. Advice is tailored to the needs of the tribes because harvest and consumption of fish are culturally important. Data were sufficient to provide consumption advice for 293 of the 449 lakes assessed. Most of these carried a recommendation of no more than 4 meals per month for the general population and no more than 1 meal per month for the sensitive population.
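The arithmetic behind reference-dose-based meal advice can be sketched as below; the reference dose, body weight, and meal size are illustrative defaults, not the Commission's values, and the sketch omits the length-regression and confidence/prediction-bound machinery described above:

```python
def meals_per_month(hg_ppm, body_wt_kg=70.0, meal_g=227.0,
                    rfd_ug_per_kg_day=0.1, days_per_month=30.4):
    """Meals per month that keep methylmercury intake at or below the
    reference dose.  hg_ppm is fish-tissue MeHg in mg/kg (= µg/g) wet weight."""
    allowable_ug = rfd_ug_per_kg_day * body_wt_kg * days_per_month
    ug_per_meal = hg_ppm * meal_g  # µg/g × g = µg per meal
    return allowable_ug / ug_per_meal

advice = meals_per_month(0.5)  # hypothetical walleye at 0.5 ppm MeHg
```

In practice the tissue concentration plugged into such a formula would come from the lake-specific length regression, evaluated at its confidence bound (general population) or prediction bound (sensitive population).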
Epizootic skin diseases in euryhaline flounder (Platichthys flesus) in the Dutch Wadden Sea were first reported in 1988. Particularly high prevalences of skin ulcers (up to one-third of individual fish being affected) were encountered in the vicinity of sluices draining freshwater from IJsselmeer Lake, in contrast with much lower levels in the freshwater bodies behind the sluices and open sea areas (<2%). It was proposed that salinity stress, high bacterial loads, nutritional deficiencies, and obstruction to fish migration by the sluices could all be involved in disease causation. Results of follow-up surveys at these outlet sluices from 1994 to 2005 further substantiate our preliminary findings. The follow-up data also show a general reduction in disease and improved condition factor during this period, which can be explained by improved habitat conditions for the flounder, partly due to effective sluice gate management. Furthermore, statistical correlations (p<0.05) were demonstrated between flounder ulcer occurrence and chemical contaminant concentrations in liver (Hg, Cd, Cu, Zn) and bile (the metabolite 1-OH pyrene as an indicator of chronic polycyclic aromatic hydrocarbon exposure), and histological liver lesions generally indicative of contaminant exposure (hydropic vacuolization of biliary duct epithelial cells). The findings suggest that a combination of osmotic and contaminant-induced stress also contributed to the observed disease patterns.
Tissue residue-based toxicity benchmarks (TRBs) have typically been developed using the results of individual studies selected from the literature. In the past, TRBs have been developed using a point estimate (e.g., LC50 value) reported in a study on a single species deemed to be most closely related to the receptor of interest. Despite attempts to maximize the protectiveness and relevance of TRBs, their relationship to specific receptors remains uncertain, and their general applicability for use in broader ecological risk assessment contexts is limited. This article proposes a novel framework that establishes benchmarks as distributions rather than single-point estimates. Benchmark distributions allow the user to select a tissue concentration that is associated with the protection of a specific percentage of organisms, rather than linked to a specific receptor. A methodology is proposed for searching, reviewing, and analyzing linked, tissue residue effect data to derive benchmark distributions. The approach is demonstrated for contaminants having a dioxin-like mechanism of toxic action and is based on residue effects data for 2,3,7,8-tetrachlorodibenzo-p-dioxin (2,3,7,8-TCDD) and equivalents in early life stage fish. The calculated tissue residue benchmarks for 2,3,7,8-TCDD toxic equivalency (TEQ) derived from the resulting distribution could range from 0.057 to 0.699 ng TCDD/g lipid depending on the level of protection needed; the lower estimate is protective of 99% of fish species whereas the higher end is protective of 90% of fish species.
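The percentile-selection idea behind a benchmark distribution can be sketched by fitting a distribution to per-species effect residues and reading off a quantile; the residue values below are invented for illustration, and the log-normal form is an assumption, not necessarily the distribution used in the article:

```python
from math import exp, log
from statistics import NormalDist

def benchmark_at_protection(effect_residues, protection=0.99):
    """Tissue concentration expected to protect `protection` of species,
    assuming effect residues are log-normally distributed across species."""
    logs = [log(x) for x in effect_residues]
    mu = sum(logs) / len(logs)
    sd = (sum((v - mu) ** 2 for v in logs) / (len(logs) - 1)) ** 0.5
    # the (1 - protection) quantile of the fitted distribution
    return exp(NormalDist(mu, sd).inv_cdf(1.0 - protection))

# invented TCDD-equivalent effect residues (ng TEQ/g lipid), one per species
residues = [0.5, 1.2, 2.8, 4.0, 7.5, 15.0]
hc99 = benchmark_at_protection(residues, 0.99)  # protective of 99% of species
hc90 = benchmark_at_protection(residues, 0.90)  # protective of 90% of species
```

The benchmark falls as the required protection level rises, which mirrors the 0.057 (99%) versus 0.699 (90%) ng TCDD/g lipid spread reported above.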
In 1996, the Norwegian government issued a White Paper requiring the Norwegian oil industry to reach the goal of "zero discharge" for the marine environment by 2005. To achieve this goal, the Norwegian oil and gas industry initiated the Zero Discharge Programme for discharges of produced formation water from the hydrocarbon-containing reservoir, in close communication with regulators. The environmental impact factor (EIF), a risk-based management tool, was developed by the industry to quantify and document the environmental risks from produced water discharges. The EIF represents a volume of recipient water containing concentrations of one or more substances at a level exceeding a generic threshold for ecotoxicological effects. In addition, this tool facilitates the identification and selection of cost-effective risk mitigation measures. The EIF tool has been used by all operators on the Norwegian continental shelf since 2002 to report progress toward the goal of "zero discharge," interpreted as "zero harmful discharges," to the regulators. Even though produced water volumes increased by approximately 30% between 2002 and 2008 on the Norwegian continental shelf, the total environmental risk from produced water discharges, expressed by the summed EIF for all installations, was reduced by approximately 55%. The total amount of oil discharged to the sea was reduced by 18% over the period 2000 to 2006. The experience from the Zero Discharge Programme shows that a risk-based approach is an excellent working tool for reducing discharges of potentially harmful substances from offshore oil and gas installations.
The OECD 308 water-sediment transformation test has been routinely conducted in Phase II Tier A testing of the environmental risk assessment (ERA) for all human pharmaceutical marketing authorization applications in Europe since finalization of the European Medicines Agency (EMA) ERA guidance in June 2006. In addition to the 'Ready Biodegradation' test, it is the only transformation test for the aquatic/sediment compartment that supports the classification of the drug substance for its potential persistence in the environment and characterizes the fate of the test material in a water-sediment environment. Presented is an overview of 31 OECD 308 studies conducted by 4 companies, with a focus on how pharmaceuticals behave in these water-sediment systems. The geometric mean (gm) parent total system half-life for the 31 pharmaceuticals was 30 days, with 10th/90th percentiles (10/90%ile) of 14.0/121.6 days, respectively; cationic substances had a half-life about 2 times that of neutral and anionic substances. The formation of non-extractable residues was considerable, with a gm (10/90%ile) of 38% (20.5/81.4) of the applied radioactivity: cationic substances 50.8% (27.7/87.6), neutral substances 31.9% (15.3/52.3), and anionic substances 16.7% (9.5/30.6). In general, cationic substances had fewer transformation products and more unchanged parent remaining at day 100 of the study. A review of whether a simplified one-point analysis could reasonably estimate the parent total system half-life showed that the total amount of parent remaining in the water and sediment extracts at day 100 followed first-order kinetics and that the theoretical half-life and the measured total system half-life values agreed to within a factor of 1.68. 
Recommendations from this 4-company collaboration addressed: 1) the need to develop a more relevant water-sediment transformation test reflecting discharge conditions more representative of human pharmaceuticals; 2) the potential use of a one-point estimate of parent total system half-life in the EMA ERA screening phase of testing; 3) the need for a more consistent and transparent interpretation of the results from the transformation study, including consistent use of terminology such as dissipation, transformation, depletion, and degradation in describing their respective processes in the ERA; 4) use of the parent total system dissipation half-life in hazard classification schemes and in revising the predicted environmental concentration in the ERA; and 5) further research into cationic pharmaceuticals to assess whether their classification as such serves as a structural alert for high levels of non-extractable residues, and whether this results in reduced bioavailability of those residues. Integr Environ Assess Manag © 2013 SETAC.
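The one-point estimate discussed in recommendation 2 follows directly from first-order kinetics: if C(t) = C0·e^(-kt), a single late measurement of the fraction of parent remaining fixes k and hence the half-life. A minimal sketch (the 10% figure is illustrative, not from the studies):

```python
from math import log

def one_point_half_life(t_days, frac_remaining):
    """First-order total-system half-life from a single late time point.

    frac_remaining: fraction of applied parent left (water + sediment
    extracts combined) at time t_days, e.g. at the day-100 sampling.
    """
    # C(t) = C0 * exp(-k t)  =>  k = -ln(frac_remaining) / t ;  t1/2 = ln(2) / k
    k = -log(frac_remaining) / t_days
    return log(2) / k

t_half = one_point_half_life(100, 0.10)  # 10% of parent left at day 100
```

Under strict first-order behavior this single-point value and the fitted total system half-life coincide, which is why the observed agreement within a factor of 1.68 supports the simplified approach.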
Timely and effective remediation of contaminated sediments is essential for protecting human health and the environment and restoring beneficial uses to waterways. A number of site operational conditions influence the effect of environmental dredging of contaminated sediment on aquatic systems. Site experience shows that resuspension of contaminated sediment and release of contaminants occur during dredging and that contaminated sediment residuals will remain after operations. It is also understood that these processes affect the magnitude, distribution, and bioavailability of the contaminants, and hence the exposure and risk to receptors of concern. However, even after decades of sediment remediation project experience, substantial uncertainties still exist in our understanding of the cause-effect relationships relating dredging processes to risk. During the past few years, contaminated sediment site managers, researchers, and practitioners have recognized the need to better define and understand dredging-related processes. In this article, we present information and research needs on these processes as synthesized from recent symposia, reports, and remediation efforts. Although predictions about the effect of environmental dredging continue to improve, a clear need remains to better understand the effect that sediment remediation processes have on contaminant exposures and receptors of concern. Collecting, learning from, and incorporating new information into practice is the only avenue to improving the effectiveness of remedial operations.
The "top five actions" presented by Bridges et al. in "Accelerating Progress at Contaminated Sediment Sites: Moving from Guidance to Practice" (IEAM 8(2):331-338) represent an important set of normative guidelines that could accelerate progress at contaminated sediment sites. An important next step is to use those guidelines to develop recommendations about good practices. This letter presents some recommendations of that sort, paired with the guidelines from Bridges et al. that inspired them. The objective is to draw more attention to solving the important problem of how to accelerate progress toward cost-effective cleanup, closure and beneficial future use of contaminated sediment sites. Integr Environ Assess Manag © 2012 SETAC.
Contaminated sediments are a pervasive problem in the United States. Significant economic, ecological, and social issues are intertwined in addressing the nation's contaminated sediment problem. Managing contaminated sediments has become increasingly resource intensive, with some investigations costing tens of millions of dollars and the majority of remediation projects proceeding at a slow pace. At present, the approaches typically used to investigate, evaluate, and remediate contaminated sediment sites in the United States have largely fallen short of producing timely, risk-based, cost-effective, long-term solutions. With the purpose of identifying opportunities for accelerating progress at contaminated sediment sites, the US Army Corps of Engineers-Engineer Research and Development Center and the Sediment Management Work Group convened a workshop with experienced experts from government, industry, consulting, and academia. Workshop participants identified 5 actions that, if implemented, would accelerate the progress and increase the effectiveness of risk management at contaminated sediment sites. 
These actions included: 1) development of a detailed and explicit project vision and accompanying objectives, achievable short-term and long-term goals, and metrics of remedy success at the outset of a project, with refinement occurring as needed throughout the duration of the project; 2) strategic engagement of stakeholders in a more direct and meaningful process; 3) optimization of risk reduction, risk management processes, and remedy selection addressing 2 important elements: a) the deliberate use of early action remedies, where appropriate, to accelerate risk reduction; and b) the systematic and sequential development of a suite of actions applicable to the ultimate remedy, starting with monitored natural recovery and adding engineering actions as needed to satisfy the project's objectives; 4) an incentive process that encourages and rewards risk reduction; and 5) pursuit of sediment remediation projects as a public-private collaborative enterprise. These 5 actions provide a clear path for connecting current US regulatory guidance to improved practices that produce better applications of science and risk management and more effective and efficient solutions at contaminated sediment sites.
Spatially explicit wildlife exposure models have been developed to integrate chemical concentrations dispersed in space and time, heterogeneous habitats of varying qualities, and foraging behaviors of wildlife to give more realistic wildlife exposure estimates for ecological risk assessments. These models not only improve the realism of wildlife exposure estimates, but also increase the efficiency of remedial planning. However, despite being widely available, these models are rarely used in baseline (definitive) ecological risk assessments. A lack of precedent for their use, misperceptions about models in general and spatial models in particular, non-specific or no enabling regulations, poor communication, and uncertainties regarding inputs are all impediments to greater use of such models. An expert workshop was convened as part of an Environmental Security Technology Certification Program Project to evaluate current applications for spatially explicit models and consider ways such models could bring increased realism to ecological exposure assessments. Specific actions (e.g., greater accessibility and innovation in model design, increased communication with and training opportunities for decision makers and regulators, explicit consideration during assessment planning and problem formulation) were discussed as mechanisms to increase the use of these valuable and innovative modeling tools. The intent of this workshop synopsis is to highlight for the ecological risk assessment community both the value and availability of a wide range of spatial models and to recommend specific actions that may help to increase their acceptance and use by ecological risk assessment practitioners.
The US Consumer Product Safety Commission has proposed a ban on children's metal jewelry containing more than 0.06% lead by weight. This ban would replace the current interim enforcement standard, which also includes a measurement of accessible lead. Under current guidelines, accessible lead for any component of a jewelry item must not exceed 175 µg. Critics argue that lead in jewelry is inaccessible if items are properly plated. The objective of this study was to determine whether highly leaded jewelry also has high accessible lead content. Sixty-four inexpensive, leaded jewelry items with a wide range of total lead content (all above the 0.06% threshold) were tested first for accessible lead and then for total lead, with analysis by atomic absorption spectrometry. Fifty of the items exceeded the 175-µg maximum allowed for accessible lead. Thirty-one of these items exceeded 1000 µg accessible lead, and 18 items exceeded 3000 µg accessible lead. The finding that a majority of tested items had both accessible and total lead content above current standards supports the rationale of simplifying the current standard by basing regulations on total lead content.
Past nuclear accidents highlight communication as one of the most important challenges in emergency management. In the early phase, communication increases awareness and understanding of protective actions and improves the population response. In the medium and long term, risk communication can facilitate the remediation process and the return to normal life. Mass media play a central role in risk communication. The recent nuclear accident in Japan, as expected, induced massive media coverage. Media were employed to communicate with the public during the contamination phase, and they will play the same important role in the clean-up and recovery phases. However, media also have to fulfill the economic aspects of publishing or broadcasting, with the "bad news is good news" slogan that is a well-known phenomenon in journalism. This article addresses the main communication challenges and suggests possible risk communication approaches to adopt in the case of a nuclear accident.
The consequences of the Tohoku earthquake and subsequent tsunami in March 2011 caused a loss of power at the Fukushima Daiichi nuclear power plant in Japan and led to the release of radioactive materials into the environment. Although the full extent of the contamination is not currently known, the highly complex nature of the environmental contamination (radionuclides in water, soil, and agricultural produce) typical of nuclear accidents requires a detailed geospatial analysis of information with the ability to extrapolate across different scales, with applications to risk assessment models and decision making support. This article briefly summarizes the approach used to inform risk-based land management and remediation decision making after the Chernobyl, Soviet Ukraine, accident in 1986.
The recent events at the Fukushima Daiichi nuclear power plant in Japan have raised questions over the effects of radiation in the environment. This article considers what we have learned about the radiological consequences for the environment from the Chernobyl accident, Ukraine, in April 1986. The literature offers mixed opinions of the long-term impacts on wildlife close to the Chernobyl plant, with some articles reporting significant effects at very low dose rates (below natural background dose rate levels in, for example, the United Kingdom). The lack of agreement highlights the need for further research to establish whether current radiological protection criteria for wildlife are adequate (and to determine if there are any implications for human radiological protection).
The accident at the Fukushima Daiichi nuclear power plant, precipitated by the earthquake and subsequent tsunami that struck the northeastern coast of Japan in March 2011, has raised concerns about the potential impact to marine biota posed by the release of radioactive water and radionuclide particles into the environment. The Fukushima accident is the only major nuclear accident that has resulted in the direct discharge of radioactive materials into a coastal environment. This article briefly summarizes what is currently understood about the effects of radioactive wastewaters and radionuclides to marine life.
The Biotic Ligand Model (BLM) theoretically enables the derivation of Environmental Quality Standards that are based on the truly bioavailable fractions of metals. Several physico-chemical variables (especially pH, major cations, DOC, and dissolved metal concentrations) must, however, be assigned to run the BLM, and they are highly variable in time and space in natural systems. This paper describes probabilistic approaches for integrating such variability during the derivation of Risk Indexes. To describe each variable using a Probability Density Function (PDF), several methods were combined to (i) treat censored data (i.e., data below the Limit of Detection); (ii) incorporate the uncertainty of the solid-to-liquid partitioning of metals; and (iii) detect outliers. From a probabilistic perspective, two alternative approaches based on log-normal and Gamma distributions were tested to estimate the probability of the PEC (Predicted Environmental Concentration) exceeding the PNEC (Predicted No Effect Concentration), i.e., p(PEC/PNEC > 1). The probabilistic approach was tested on four real-case studies based on copper-related data collected from stations on the Loire and Moselle rivers. The approach described in this paper is based on BLM tools that are freely available to end-users (i.e., the Bio-Met software) and on accessible statistical data treatments. This approach could be used by stakeholders involved in risk assessments of metals to improve site-specific studies. Integr Environ Assess Manag © 2013 SETAC.
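The exceedance-probability calculation can be sketched with the log-normal variant; the copper parameters below are illustrative, not values from the Loire or Moselle data sets:

```python
import math
import random

random.seed(1)

def p_exceedance(pec_mu, pec_sigma, pnec, n=50_000):
    """Monte Carlo estimate of P(PEC > PNEC), with PEC modeled as
    log-normal (one of the two distributional forms tested; Gamma is
    the other).  pec_mu and pec_sigma are on the natural-log scale."""
    hits = sum(1 for _ in range(n)
               if math.exp(random.gauss(pec_mu, pec_sigma)) > pnec)
    return hits / n

# illustrative bioavailable-Cu PEC distribution and PNEC, both in µg/L
p = p_exceedance(pec_mu=math.log(1.0), pec_sigma=0.8, pnec=4.0)
```

The same structure applies when the PNEC is itself treated as BLM-derived and site-specific: only the fixed `pnec` threshold changes between stations.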
Effects-based analysis is a fundamental component of watershed cumulative effects assessment. This study conducted an effects-based analysis for the Peace-Athabasca-Slave River System, part of the massive Mackenzie River Basin, which encompasses 20% of Canada's total land mass and is influenced by the cumulative contributions of the W.A.C. Bennett Dam (Peace River) and industrial activities including oil sands mining (Athabasca River). This study assessed seasonal changes in 1) Peace River water quality and quantity before and after dam development; 2) Athabasca River water quality and quantity before and after oil sands developments; 3) tributary inputs from the Peace and Athabasca Rivers to the Slave River; and 4) upstream-to-downstream differences in water quality in the Slave River. In addition, seasonal benchmarks were calculated for each river based on pre- and post-perturbation data for future cumulative effects assessments. Winter discharge (Jan.-Mar.) from the Peace and Slave Rivers was significantly higher than before dam construction (pre-1967) (p<0.05), while summer peak flows (May-Jul.) were significantly lower, showing that regulation has significantly altered seasonal flow regimes. During spring freshet and summer high flows, the Peace River strongly influenced the quality of the Slave River, as there were no significant differences between these rivers in loadings of dissolved N, TP, TOC, total As, total Mn, total V, turbidity, or specific conductance. In the Athabasca River, TP concentrations and specific conductance have increased significantly since before the oil sands developments (1967-2010), while dissolved N and sulphate have increased since the oil sands developments began (1977-2010). Recently, the Athabasca River had significantly higher concentrations of dissolved N, TP, TOC, and dissolved sulphate, and higher specific conductance and total Mn, than either the Slave or the Peace Rivers during the winter months. 
The transboundary nature of the Peace, Athabasca, and Slave River basins has resulted in fragmented monitoring and reporting of the state of these rivers; a more consistent monitoring framework is recommended. Integr Environ Assess Manag © 2012 SETAC.
This paper is the second in a two-part series assessing the accumulated state of the transboundary Yukon River basin in northern Canada and the USA. Determining accumulated state from available long-term discharge and water quality data is the first step in watershed cumulative effects assessment (CEA) in the absence of sufficient biological monitoring data. Long-term trends in water quantity and quality were determined, and a benchmark against which to measure change was defined, for five major reaches along the Yukon River for nitrate, total and dissolved organic carbon, total phosphorus, orthophosphate, pH, and specific conductivity. Deviations from the reference condition were identified as "hot moments" in time, nested within a reach. Significant increasing long-term trends in discharge were found on the Canadian portion of the Yukon River. There were significant long-term decreases in nitrate, TOC, and TP at the Headwater reach, and significant increases in nitrate and specific conductivity at the Lower reach. Deviations from reference condition were found in all water quality variables, most notably during the ice-free period of the Yukon River (May-Sept) and in the Lower reach. The greatest-magnitude outliers were found during the spring freshet. This study also incorporated traditional ecological knowledge (TEK) into its assessment of accumulated state. In the summer of 2007 the Yukon River Inter Tribal Watershed Council organized a team to paddle the length of the Yukon River as part of a "Healing Journey", in which both Western science and TEK paradigms were employed. Water quality data were collected continuously, and stories were shared between the team and communities along the Yukon River. Healing Journey data were compared to the long-term reference conditions and showed that the summer of 2007 was abnormal relative to the long-term water quality record. 
This study showed the importance of establishing a reference condition by reach and season for key indicators of water health in order to measure change, and the importance of placing synoptic surveys in the context of long-term accumulated state assessments. Integr Environ Assess Manag © 2012 SETAC.
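The "hot moment" screening described above amounts to flagging observations that fall outside percentile bounds of the reach- and season-specific reference condition; a minimal sketch (the bounds and data are illustrative, not the study's):

```python
def hot_moments(reference, observed, lo_q=0.025, hi_q=0.975):
    """Return (index, value) pairs of observations falling outside the
    reference condition's percentile bounds (linearly interpolated)."""
    ranked = sorted(reference)

    def quantile(q):
        idx = q * (len(ranked) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(ranked) - 1)
        return ranked[lo] + (ranked[hi] - ranked[lo]) * (idx - lo)

    lo_b, hi_b = quantile(lo_q), quantile(hi_q)
    return [(i, v) for i, v in enumerate(observed) if v < lo_b or v > hi_b]

# reference condition: 100 historical values; survey values 2 and 99 fall outside
flags = hot_moments(list(range(1, 101)), [2, 50, 99])
```

In this framework a synoptic survey such as the Healing Journey is compared against the long-term bounds one reach and season at a time.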
A consistent methodology for assessing the accumulating effects of natural and man-made change on riverine systems has not been developed, for reasons including a lack of data, disagreement over the core elements to consider, and complexity. Accumulated state assessment of aquatic systems is an integral component of watershed cumulative effects assessment. The Yukon River is the largest free-flowing river in the world and the fourth largest drainage basin in North America, draining 855,000 km² in Canada and the United States. Due to its remote location it is considered pristine, but little is known about its cumulative state. This review identified seven "hot spot" areas in the Yukon River Basin: Lake Laberge, the Yukon River at Dawson City, the Charley and Yukon River confluence, the Porcupine and Yukon River confluence, the Yukon River at the Dalton Highway Bridge, the Tolovana River near Tolovana, and the Tanana River at Fairbanks. Climate change, natural stressors, and anthropogenic stresses have resulted in accumulating changes, including measurable levels of contaminants in surface waters and fish tissues, fish and human disease, changes in surface hydrology, and shifts in biogeochemical loads. This paper is the first integrated accumulated state assessment for the Yukon River basin based on a literature review. It is the first part of a two-part series; the second paper (this issue) is a quantitative accumulated state assessment of the Yukon River Basin in which hot spots and hot moments are assessed against a "normal" range of variability. Integr Environ Assess Manag © 2012 SETAC.
[Table: Previous models used to estimate brine shrimp tissue Se concentrations from water or dietary concentrations. Columns: Source; Characteristics. For example, Brooks (2007): laboratory estimates of Se uptake from food and water for Great Salt Lake.]
[Table: Tests of significance for co-collected data pairs (all field data); linear regressions on log-transformed data.]
Great Salt Lake, Utah, is a large, terminal, hypersaline lake consisting of a more saline northern arm and a less saline southern arm. The southern arm supports a seasonally abundant fauna of low diversity consisting of brine shrimp (Artemia franciscana), 7 species of brine flies, and multiple species of algae. Although fish cannot survive in the main body of the lake, the lake is highly productive, and brine shrimp and brine fly populations support large numbers of migratory waterfowl and shorebirds, as well as resident waterfowl, shorebirds, and gulls. Selenium and other trace elements, metals, and nutrients are contaminants of concern for the lake because of their concentrations in municipal and industrial outfalls and in runoff from local agriculture and the large urban area of Salt Lake City. As a consequence, the State of Utah recently recommended water quality standards for Se for the southern arm of Great Salt Lake based on exposure and risk to birds. The tissue-based recommendations (as measured in bird eggs) were based on the understanding that Se toxicity is predominantly expressed through dietary exposure, and that the breeding shorebirds, waterfowl, and gulls of the lake are the receptors of most concern. The bird egg-based recommended standards for Se require a model to link bird egg Se concentrations to their dietary concentrations and water column values. This study analyzes available brine shrimp tissue Se data from a variety of sources, along with waterborne and water particulate (potential brine shrimp diet) Se concentrations, in an attempt to develop a model to predict brine shrimp Se concentrations from the Se concentrations in surrounding water. The model can serve as a tool for linking the tissue-based water quality standards of a key dietary item to waterborne concentrations. The results were compared to other laboratory and field-based models that predict brine shrimp tissue Se concentrations from ambient water and diet.
No significant relationships were found between brine shrimp tissue Se and dietary Se, as measured by seston concentrations. The final linear and piecewise regression models showed significant positive relationships between waterborne and brine shrimp tissue Se concentrations, but with very weak predictive ability at waterborne concentrations <10 µg/L.
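The regression approach described (linear fits on log-transformed tissue vs. waterborne Se) can be sketched as follows. The paired values are invented for illustration only; they are not the study's data, and the function name is an assumption.

```python
import numpy as np

# Hypothetical co-collected pairs: waterborne Se (µg/L) and
# brine shrimp tissue Se (µg/g dry weight) -- illustrative values only.
water_se = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
tissue_se = np.array([2.1, 2.4, 2.8, 3.6, 4.9, 7.5, 12.0])

# Linear regression on log10-transformed data, as in the abstract
slope, intercept = np.polyfit(np.log10(water_se), np.log10(tissue_se), 1)

def predict_tissue(water):
    """Predict tissue Se (µg/g) from waterborne Se (µg/L)
    by back-transforming the log-log fit."""
    return 10 ** (intercept + slope * np.log10(water))
```

A piecewise variant, as used in the final model, would fit separate slopes above and below a breakpoint; the weak predictive ability reported below 10 µg/L corresponds to a shallow slope in that lower segment.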
Standardized laboratory protocols for measuring the accumulation of chemicals from sediments are used in assessing new and existing chemicals, evaluating navigational dredging materials, and establishing site-specific biota-sediment accumulation factors (BSAFs) for contaminated sediment sites. The BSAFs resulting from the testing protocols provide insight into the behavior and risks associated with individual chemicals. In addition to laboratory measurement, BSAFs can also be calculated from field data, including samples from studies using in situ exposure chambers and caging studies. The objective of this report is to compare paired laboratory and field measurements of BSAFs and to evaluate the extent of their agreement. The peer-reviewed literature was searched for studies that conducted laboratory and field measurements of chemical bioaccumulation using the same or taxonomically related organisms. In addition, numerous Superfund and contaminated sediment site study reports were examined for relevant data. A limited number of studies were identified with paired laboratory and field measurements of BSAFs. BSAF comparisons were made between field-collected oligochaetes and the laboratory test organism Lumbriculus variegatus, and between field-collected bivalves and the laboratory test organisms Macoma nasuta and Corbicula fluminea. Our analysis suggests that laboratory BSAFs for the oligochaete L. variegatus are typically within a factor of 2 of the BSAFs for field-collected oligochaetes. Bivalve study results also suggest that laboratory BSAFs can provide reasonable estimates of field BSAF values if certain precautions are taken, such as ensuring that steady-state values are compared and that extrapolation among bivalve species is conducted with caution.
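A BSAF is conventionally computed as the lipid-normalized tissue concentration divided by the organic-carbon-normalized sediment concentration. The sketch below illustrates that calculation and the factor-of-2 agreement check discussed above; all concentrations, fractions, and function names are hypothetical assumptions, not data from the reviewed studies.

```python
def bsaf(c_tissue, f_lipid, c_sediment, f_oc):
    """Biota-sediment accumulation factor: lipid-normalized tissue
    concentration over organic-carbon-normalized sediment concentration."""
    return (c_tissue / f_lipid) / (c_sediment / f_oc)

def within_factor(lab, field, factor=2.0):
    """True if laboratory and field BSAFs agree within the given factor."""
    ratio = lab / field
    return 1.0 / factor <= ratio <= factor

# Hypothetical paired data: tissue and sediment concentrations in µg/kg,
# lipid and organic carbon as mass fractions.
lab_bsaf = bsaf(c_tissue=480, f_lipid=0.012, c_sediment=900, f_oc=0.025)
field_bsaf = bsaf(c_tissue=350, f_lipid=0.010, c_sediment=900, f_oc=0.025)
print(within_factor(lab_bsaf, field_bsaf))
```

Normalizing both sides (tissue by lipid, sediment by organic carbon) is what makes BSAFs comparable across organisms and sites with different lipid and carbon contents.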
High levels of the nutrients nitrogen and phosphorus can cause unhealthy biological or ecological conditions in surface waters, and prevent the attainment of their designated uses. Regulatory agencies are developing numeric criteria for these nutrients in an effort to ensure that the surface waters in their jurisdictions remain healthy and productive, and that water quality standards are met. These criteria are often derived using field measurements that relate nutrient concentrations and other water quality conditions to expected biological responses such as undesirable growth or changes in aquatic plant and animal communities. Ideally, these numeric criteria can be used to accurately "diagnose" ecosystem health and guide management decisions. However, the degree to which numeric nutrient criteria are useful for decision-making depends on how accurately they reflect the status or risk of nutrient-related biological impairments. Numeric criteria that have little predictive value are not likely to be useful for managing nutrient concerns. This paper presents information on the role of numeric nutrient criteria as biological health indicators, and the potential benefits of sufficiently accurate criteria for nutrient management. In addition, it describes approaches being proposed or adopted in states such as Florida and Maine to improve the accuracy of numeric criteria and criteria-based decisions. This includes a preference for developing site-specific criteria where sufficient data are available, and the use of nutrient concentration and biological response criteria together in a framework to support designated use attainment decisions. Together with systematic planning during criteria development, the accuracy of field-derived numeric nutrient criteria can be assessed and maximized as a part of an overall effort to manage nutrient water quality concerns. Integr Environ Assess Manag © 2013 SETAC.
In 2011, as part of an update to its state water quality standards (WQS) for the protection of human health, the State of Oregon adopted a fish consumption rate of 175 g/day for freshwater and estuarine finfish and shellfish, including anadromous species. WQS for the protection of human health whose derivation is based in part on anadromous fish create the expectation that implementation of these WQS will lead to lower contaminant levels in returning adult fish. Whether this expectation can be met is likely a function of where and when such fish are exposed. Various exposure scenarios have been advanced to explain the acquisition of bioaccumulative contaminants by Pacific salmonids. This study examined 16 different scenarios with bioenergetics and toxicokinetic models to identify those in which WQS might be effective in reducing polychlorinated biphenyls (PCBs), a representative bioaccumulative contaminant, in returning adult fall Chinook salmon, a representative salmonid. Model estimates of tissue concentrations and body burdens in juveniles and adults were corroborated with observations reported in the literature. Model results suggest that WQS may effect limited (less than approximately 2-fold) reductions in PCB levels in adults that were resident in a confined marine water body or that transited a highly contaminated estuary as out-migrating juveniles. In all other scenarios examined, WQS would have little effect on PCB levels in returning adults. Although the results of any modeling study must be interpreted with caution and are not necessarily applicable to all salmonid species, they do suggest that the ability of WQS to meet the expectation of reducing contaminant loadings in anadromous species is limited.
While winter has proven to be one of the coldest and snowiest seasons on record throughout much of the United States, the coming summer could be unseasonably warm in Washington, DC, if the United States Environmental Protection Agency (USEPA) successfully implements its reinterpretation of one of the nation's proudest environmental regulatory accomplishments, the Clean Water Act (CWA). In 2013, USEPA and the US Army Corps of Engineers (Corps) bypassed the traditional scientific review and public comment process by submitting to the Office of Management and Budget (OMB) a proposed rule establishing a broad interpretation of the scope of the forty-year-old CWA. In the US, the OMB is tasked, among other duties, with evaluating the significance of agency policies and proposed regulations for the national economy.
Limited hunting of deer at the future Rocky Flats National Wildlife Refuge has been proposed in U.S. Fish and Wildlife Service planning documents as a compatible wildlife-dependent public use. Historically, Rocky Flats site activities resulted in the contamination of surface environmental media with actinides, including isotopes of americium, plutonium, and uranium. In this study, measurements of actinides [americium-241 (²⁴¹Am); plutonium-238 (²³⁸Pu); plutonium-239,240 (²³⁹,²⁴⁰Pu); uranium-233,234 (²³³,²³⁴U); uranium-235,236 (²³⁵,²³⁶U); and uranium-238 (²³⁸U)] were completed on select liver, muscle, lung, bone, and kidney tissue samples harvested from resident Rocky Flats deer (N = 26) and control deer (N = 1). In total, only 17 of the more than 450 individual isotopic analyses conducted on Rocky Flats deer tissue samples measured actinide concentrations above method detection limits. Of these 17 detections, only 2 analyses, with analytical uncertainty values added, exceeded threshold values calculated around a 1 × 10⁻⁶ risk level (isotopic americium, 0.01 pCi/g; isotopic plutonium, 0.02 pCi/g; isotopic uranium, 0.2 pCi/g). Subsequent conservative risk calculations suggest minimal human risk associated with ingestion of these edible deer tissues. The maximum calculated risk level in this study (4.73 × 10⁻⁶) is at the low end of the U.S. Environmental Protection Agency's acceptable risk range.
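The screening logic above can be sketched as a simple ingestion-dose risk calculation: activity concentration times mass ingested times an ingestion risk coefficient. This is a minimal illustration; the function name, intake mass, and risk coefficient below are hypothetical assumptions, not figures from the study.

```python
def ingestion_risk(conc_pci_per_g, intake_g, risk_per_pci):
    """Lifetime excess risk from ingesting contaminated tissue.

    conc_pci_per_g : actinide activity concentration in tissue (pCi/g)
    intake_g       : total mass of tissue ingested (g)
    risk_per_pci   : ingestion risk coefficient (risk per pCi ingested)
    """
    return conc_pci_per_g * intake_g * risk_per_pci

# Hypothetical screening example: tissue at the 0.02 pCi/g plutonium
# threshold, 20 kg of venison consumed, and an assumed risk
# coefficient on the order of 1e-10 per pCi.
risk = ingestion_risk(0.02, 20_000, 1e-10)
print(f"{risk:.1e}")
```

Inverting the same equation for a target risk level (e.g., 1 × 10⁻⁶) is how concentration thresholds like those quoted above are typically back-calculated.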
This article reviews the mechanistic basis of the tissue residue approach for toxicity assessment (TRA). The tissue residue approach implies that whole-body or organ concentrations (residues) are a better dose metric for describing toxicity to aquatic organisms than the aqueous concentration typically measured in the external medium. Although the benefit of internal concentrations as dose metrics in ecotoxicology has long been recognized, the application of the tissue residue approach remains limited. The main factor responsible for this is the difficulty of measuring internal concentrations. We propose that environmental toxicology can advance if mechanistic considerations are implemented and toxicokinetics and toxicodynamics are explicitly addressed. The variability in ecotoxicological outcomes and species sensitivity is due in part to differences in toxicokinetics, which consist of several processes, including absorption, distribution, metabolism, and excretion (ADME), that influence internal concentrations. Using internal concentrations or tissue residues as the dose metric substantially reduces the variability in toxicity metrics among species and individuals exposed under varying conditions. Total internal concentrations are useful as dose metrics only if they represent a surrogate of the biologically effective dose, the concentration or dose at the target site. If there is no direct proportionality, we advise the implementation of comprehensive toxicokinetic models that include deriving the target dose. Depending on the mechanism of toxicity, the concentration at the target site may or may not be a sufficient descriptor of toxicity. The steady-state concentration of a baseline toxicant associated with the biological membrane is a good descriptor of the toxicodynamics of baseline toxicity.
When assessing specifically acting and reactive mechanisms, additional parameters (e.g., the reaction rate with the target site and the regeneration of the target site) are needed for characterization. For specifically acting compounds, intrinsic potency depends on 1) affinity for, and 2) type of interaction with, a receptor or target enzyme. These 2 parameters determine the selectivity for the toxic mechanism and the sensitivity, respectively. Implementation of mechanistic information in toxicokinetic-toxicodynamic (TK-TD) models may help explain time-delayed effects, toxicity after pulsed or fluctuating exposure, carryover toxicity after sequential pulses, and mixture toxicity. We believe that this mechanistic understanding of tissue residue toxicity will lead to improved environmental risk assessment.
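The toxicokinetic side of such TK-TD models is commonly built on a one-compartment, first-order model relating waterborne exposure to tissue residue. The sketch below is a generic illustration of that standard model, not the authors' implementation; the rate constants and exposure values are hypothetical.

```python
import math

def tissue_conc(c_water, k_u, k_e, t):
    """One-compartment toxicokinetic model under constant exposure:
    dC/dt = k_u * C_water - k_e * C, with C(0) = 0, which integrates to
    C(t) = (k_u / k_e) * C_water * (1 - exp(-k_e * t))."""
    return (k_u / k_e) * c_water * (1.0 - math.exp(-k_e * t))

# Hypothetical parameters: uptake rate constant 10 L/kg/d,
# elimination rate constant 0.2 /d, waterborne exposure 5 µg/L.
css = (10 / 0.2) * 5                      # steady-state tissue residue
frac = tissue_conc(5, 10, 0.2, 30) / css  # fraction of steady state at day 30
print(frac)
```

Pulsed or fluctuating exposures, mentioned above, are handled by letting C_water vary in time (piecewise-constant segments integrated sequentially), while the toxicodynamic layer maps C(t), or a receptor occupancy derived from it, onto effect.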
When sediments are removed from aquatic bottoms, they become dredged material that must be managed with its environmental impact taken into account. In Part II of this 2-part paper addressing sediment quality assessment and dredged material management in Spain, the legislation and criteria used to regulate dredged material disposal at sea in different European countries are reviewed, as are the action levels (ALs) derived by different countries, which are used here to evaluate the management of dredged sediments from Cádiz Bay on the South Atlantic coast of Spain. Comparison of the ALs established for dredged material disposal by different countries reveals order-of-magnitude differences in the values established for the same chemical. In Part I of this 2-part paper, a review of the different sediment quality guideline (SQG) methods used to support sediment quality assessments indicated great heterogeneity among SQGs, both in the numeric values for a particular chemical and in the number of substances for which SQGs have been derived. That analysis highlighted the absence of SQGs for priority substances identified in current European Union water policy. Here, in Part II, the ALs are applied to dredged sediments from Cádiz Bay, showing that the heterogeneity of the ALs implemented in the reviewed countries could lead to different management strategies. The application of other measurements, such as bioassays, might offer information useful in identifying a cost-effective management option in a decision-making framework, especially for dredged material with intermediate chemical concentrations.
Contaminated sediments can pose serious threats to human health and the environment by acting as a source of toxic chemicals. Amendment of contaminated sediments with strong sorbents such as activated carbon (AC) is a rapidly developing strategy for managing contaminated sediments. To date, a great deal of attention has been paid to the technical and ecological features and implications of sediment remediation with AC, although the science in this field is still rapidly evolving. The present paper aims to provide an update on the recent literature on these features and, for the first time, provides a comparison of sediment remediation with AC to other sediment management options, emphasising their full-scale application. First, a qualitative overview of the advantages and disadvantages of current alternatives for remediating contaminated sediments is presented. Subsequently, AC treatment technology is critically reviewed, including the current understanding of its effectiveness and ecological safety for use in natural systems. Finally, this information is used to provide a novel framework for supporting decisions concerning sediment remediation and beneficial re-use. This article is protected by copyright. All rights reserved.