Article

Analyses of incident data show US, European pipelines becoming safer


Abstract

Analyses of pipeline-incident data for Europe and the US over 30 years confirm that systems in both regions have become steadily safer. Pipeline-incident data were gathered over a long period by the European Gas Pipeline Incident Data Group (EGIG), CONCAWE (Western-European cross-country oil pipelines), and the US Department of Transportation's Office of Pipeline Safety, Research and Special Programs Administration. (DOT/OPS/RSPA covers both oil and gas pipeline systems in the US.) The main goal in gathering this information was to demonstrate and, where possible, improve the safety performance of pipeline transmission systems. Information on loss-of-containment incidents of European and US oil and gas pipeline transmission systems, as published by these organizations, was analyzed and is summarized in this article, focusing on onshore steel pipeline transmission systems. Although direct comparison among the different databases is difficult (the definition of an incident, for example, is not uniform across them), such a comparison leads to some general conclusions.


... The European Gas Pipeline Incident Data Group (EGIG, 2008) has collected pipeline incident data from 15 European countries since 1970 and found an overall incident frequency of 0.37 per 1,000 kilometre-years over the 1970-2007 period. Observing the trend of annual US incident frequencies using 1986-2002 data and of annual Western European pipeline incident frequencies using 1971-2001 data, Guijt (2004) suggested that US and European pipelines are becoming safer. A study by the Transportation Research Board (TRB, 2004) showed that judicious land-use decisions can reduce both the probabilities and the consequences of incidents involving transmission pipelines. ...
... The frequencies of incidents, injuries and fatalities are examined in this paper for the analysis of pipeline safety. They are defined as the number of incidents, injuries and fatalities per 1,000 mile-years, respectively (Guijt, 2004). In addition, the concepts of injuries per incident and fatalities per incident are introduced here for the first time as indicators of the risk an individual incident poses to people. ...
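The per-exposure definitions in the snippet above can be sketched numerically. The counts and mileage below are invented for illustration only; they are not values from the cited paper:

```python
def per_1000_mile_years(events: int, mile_years: float) -> float:
    """Events per 1,000 mile-years of pipeline exposure."""
    return 1000.0 * events / mile_years

# Invented example: 12 incidents, 5 injuries, 1 fatality observed over
# 150,000 mile-years of pipeline exposure.
incidents, injuries, fatalities = 12, 5, 1
exposure = 150_000.0  # mile-years

incident_freq = per_1000_mile_years(incidents, exposure)   # 0.08 per 1,000 mile-years
injury_freq = per_1000_mile_years(injuries, exposure)
fatality_freq = per_1000_mile_years(fatalities, exposure)

# The per-incident severity indicators introduced in the cited paper:
injuries_per_incident = injuries / incidents
fatalities_per_incident = fatalities / incidents
```

Note that exposure (mile-years) is pipeline length multiplied by observation time, so a 10,000-mile network observed for 15 years contributes the same exposure as a 15,000-mile network observed for 10 years.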
... The data suggest that smaller-diameter pipelines have a higher incident frequency, which aligns with many other authors' conclusions. The small wall thickness and lower material grade of smaller-diameter pipelines are considered major factors in this higher incident frequency, making them more vulnerable to external interference (EGIG, 2008; Guijt, 2004). Strictly speaking, however, there is insufficient data to verify whether pipeline diameter influences incident, injury, and fatality frequencies. ...
Article
The objective of this paper is to provide a reference database for pipeline companies and/or regulators through an investigation of the safety performance of US natural gas distribution pipelines. Using a total of 3,679 natural gas distribution pipeline incidents between 1985 and 2010, nine safety indicators are statistically analysed in terms of year, pipeline length, region, pipeline diameter, pipeline wall thickness, material, age, incident area and incident cause, to identify the relationship between safety indicators and these variables. Overall average frequencies of incidents, injuries and fatalities between 1985 and 2009 are 0.0846/1,000 mile-years, 0.0407/1,000 mile-years, and 0.0094/1,000 mile-years respectively. The analysis shows that the safety performance of US natural gas distribution pipelines is improving over time, and that different variables have different impacts on safety performance. However, the number of annual incidents does not show a significant decline, owing to increasing energy demand. [Received: March 21 2012; Accepted: July 15 2012]
... Five general causes of damage are distinguished by CONCAWE, the oil companies' European organisation for environmental and health protection: (1) mechanical failures, (2) operational errors, (3) corrosion, (4) natural hazards and (5) digging activities. Analyses of 30 years of damage records on oil and gas pipeline networks in Europe and the US by True (1998) and Guijt (2004) indicate that digging activities represent the largest cause of damage: between 30 and 50% of all damage could be attributed to digging activities. These percentages are European and US averages for pipelines. ...
... 97% of the damage was attributed to digging activities (VELIN 2004). Although this percentage is exaggerated by the fact that the analysis also included near misses, the difference from the 30-50% reported by True (1998) and Guijt (2004) is striking. It shows that the impact of digging activities is larger in more densely populated areas. ...
... Hardly any publication includes data on digging activities that did not result in damage (the study by Van Houten and Lourens (1995) is the single exception known to us). Most studies are based only on records in damage registration systems (for example, see Cooke and Jager 1998, True 1998 and Guijt 2004). In the analysis reported in this article, data on digging activities with no damage are included. ...
Article
Digging activities are considered the largest cause of damage to underground cables and pipelines. Contractors can reduce the risk through detection, which will cost time and thus money. In the Netherlands, maps are the prime source of information on the location of cables/pipelines and detection time strongly depends on whether maps indicate the presence of cables and pipelines. Poor quality maps can contribute to increased risk or higher risk avoidance costs. The objective of this article is to present a model for calculating the trade-off between detection costs and risk in case of a hit and for calculating implications of over- and incompleteness of maps. The model aims to find the optimal detection time at which the sum of detection cost and risk is at its minimum. A case-study showed that it is possible to parameterise the model with data collected from contractors through a questionnaire. The case-study provides a numerical example of calculation of the trade-off between risk and detection costs and provides an example of calculation of costs of incompleteness. We conclude that the model contributes valuable new insight. However, more and location specific data are needed to enable operational use of the model.
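The trade-off described in the abstract above can be illustrated with a minimal numerical sketch: detection cost rises with detection time while the risk of a hit falls, and the optimum is the time minimising their sum. The hourly rate, damage cost and exponential hit-probability model below are assumptions for illustration only, not the parameterisation used in the article:

```python
import math

# Assumed parameters (invented for illustration):
HOURLY_RATE = 80.0       # euros per hour of detection work
DAMAGE_COST = 20_000.0   # expected cost if a cable/pipeline is hit
DECAY = 1.5              # assumed rate at which detection effort reduces hit probability

def expected_total_cost(t_hours: float) -> float:
    """Sum of detection cost and expected damage risk at detection time t."""
    detection_cost = HOURLY_RATE * t_hours
    hit_probability = math.exp(-DECAY * t_hours)  # assumed decay model
    risk = hit_probability * DAMAGE_COST
    return detection_cost + risk

# Grid search for the detection time with minimal total expected cost.
times = [i / 10 for i in range(0, 101)]  # 0 .. 10 hours in 0.1 h steps
optimal_t = min(times, key=expected_total_cost)
```

With these invented numbers the optimum lies at roughly four hours of detection work: spending less leaves too much residual risk, spending more costs more than the risk it removes. The article's point about map completeness fits this picture: poor maps shift the hit-probability curve upward, raising either the risk or the detection effort needed.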
... Asset types located in densely populated areas: in densely populated areas, electricity assets, gas assets and other infrastructure assets such as telecom and water are geographically proximate (both vertically and horizontally), since in these areas the utilities were mainly constructed in a single trench. According to Guijt (2004), this geographic proximity makes them more susceptible to digging incidents. Moreover, in densely populated areas the inconvenience to residents is larger, since more residents are affected. ...
... Gas and electricity infrastructure, as well as telecom, internet, water and sewage infrastructure, are prone to damage. The literature distinguishes different causes of damage to these assets, such as corrosion, construction/material mistakes and digging activities (Guijt, 2004; Asseldonk, 2006). Oort et al. (2007) cite investigations claiming that 30 to 50% of all damage to underground cables and lines can be attributed to digging activities. ...
... More damage from digging activities can be expected in more densely populated areas and when the frequency of activities is higher (Oort et al., 2007). According to Guijt (2004), pipelines with a cover depth of less than 1 metre, or those built in densely populated areas, are more prone to digging incidents. During replacement activities there is always a probability that damage is caused to other cables and lines present. ...
... cavitation) in components such as booster stations and pumps. Pipeline transport also requires stable operating conditions in which the transported medium remains in the supercritical/dense phase [12-15]. This condition occurs at temperatures higher than 60 °C and pressures above the critical pressure of 7.38 MPa, giving a good margin for avoiding two-phase flow. ...
... Offshore disposal is, on average, considered safer than onshore systems where leakage is concerned. For this reason, public support is more easily obtained for offshore systems [14, 15, 17-19]. These facts, together with an assessment of the need for constructing and using temporary CO2 storage, form the basis for further work in this specialised field of underground construction. ...
Article
Full-text available
After the CO2 has been captured at the source of emission, it has to be transported to the storage site using different technologies. In some countries (e.g. the USA), real possibilities exist for using both existing and new oil and water pipelines for such operations. Transportation can also be carried out by motor carriers, railway and water carriers; taking present experience into account, these are the transportation systems mainly used in practice. For maximum throughput, and to facilitate efficient loading and unloading, the physical condition of the CO2 with respect to pressure and temperature should be the liquid or supercritical/dense phase. Temporary storage of CO2 is important for finding a comprehensive solution for long-term storage under various environmental circumstances. Underground caverns are one possibility for temporary storage. Geotechnical analysis of the stress and strain changes present in the rocks around underground caverns filled with CO2 under high pressure provides a realistic assessment of the conditions for temporary storage. This paper presents this analysis for different parameters relating to underground storage of CO2.
... 97% of the damage was attributed to digging activities (VELIN 2004). Although this percentage is exaggerated by the fact that the analysis also included near misses, the difference from the 30-50% reported by True (1998) and Guijt (2004) is striking. It shows that the impact of digging activities is larger in more densely populated areas. ...
... Hardly any publication includes data on digging activities that did not result in damage (the study by Van Houten and Lourens (1995) is the single exception known to us). Most studies are based only on records in damage registration systems (for example, see Cooke and Jager 1998, True 1998 and Guijt 2004). In the analysis reported in this chapter, data on digging activities with no damage are included. ...
Article
The growing availability of spatial data, along with the growing ease of using it (thanks to wide-scale adoption of GIS), has made it possible to use spatial data in applications that are inappropriate considering the quality of the data. As a result, concerns about spatial data quality have increased. To deal with these concerns, it is necessary to (1) formalise and standardise descriptions of spatial data quality and (2) apply these descriptions in assessing the suitability (fitness for use) of spatial data before using the data. The aim of this thesis was twofold: (1) to enhance the description of spatial data quality and (2) to improve our understanding of the implications of spatial data quality.

Chapter 1 sets the scene with a discussion of uncertainty and an explanation of why concerns about spatial data quality exist. Knowledge gaps are identified and the chapter concludes with six research questions. Chapter 2 presents an overview of definitions of spatial data quality. Overall, I found strong agreement on which elements together define spatial data quality. Definitions appear to differ in two aspects: (1) the location within the meta-data report (some elements occur not in the spatial data quality section but in another section of the meta-data report) and (2) the explicitness with which elements are recognised as individual elements. For example, the European pre-standard explicitly recognises the element 'homogeneity'; other standards recognise the importance of documenting the variation in quality without naming it explicitly as an individual element.

In chapter 3 we quantified the spatial variability in classification accuracy for the agricultural crops in the Dutch national land cover database (LGN). Classification accuracy was significantly correlated with: (1) the crop present according to LGN, (2) the homogeneity of the 8-cell neighbourhood around each cell, (3) the size of the patch in which a cell is located, and (4) the heterogeneity of the landscape in which a cell is located. In chapter 4 I present methods that use error matrices and change-detection error matrices as input to make more accurate land cover change estimates. It was shown that temporal correlation in classification errors has a significant impact and must be taken into account. Producers of time-series land cover data are recommended to report not only error matrices but also change-detection error matrices.

Chapter 5 focuses on positional accuracy and area estimates. From the positional accuracy of the vertices delineating polygons, the variance and covariance in area can be derived. Earlier studies derived equations for the variance; this chapter presents a covariance equation. The variance and covariance equations were implemented in a model and applied in a case study consisting of 97 polygons, each with a small subsidy value (in euros per hectare) assigned to it. With the model we could calculate the uncertainty in the total subsidy value (in euros) of the complete set of polygons as a consequence of uncertainty in the position of the vertices.

Chapter 6 explores the relationship between completeness of spatial data and risk in digging activities around underground cables and pipelines. A model is presented for calculating the economic implications of over- and incompleteness. An important element of this model is the relationship between detection time and costs. The model can be used to calculate the optimal detection time, i.e. the time at which expected costs are at their minimum.

Chapter 7 addresses the question of why risk analysis (RA) is so rarely applied to assess the suitability of spatial data prior to use. In theory, the use of RA is beneficial because it allows the user to judge whether the use of certain spatial data produces unacceptable risks. Frequently proposed hypotheses explaining the scarce adoption of RA are all technical and educational; in chapter 7 we propose a new group of hypotheses based on decision theory. We found that the willingness to spend resources on RA depends (1) on the presence of feedback mechanisms in the decision-making process, (2) on how much is at stake and (3) to a minor extent on how well the decision-making process can be modelled.

Chapter 8 presents conclusions on the six research questions (chapters 2-7) and lists recommendations for users, producers and researchers of spatial data. With regard to the description of quality, four recommendations are given: first, spend more effort on documenting the lineage of reference data; second, quantify and report the correlation of quality between related data sets; third, investigate the integration of different forms of uncertainty (error, vagueness, ambiguity); fourth, study the implementation and use of spatial data quality standards. With regard to the application of spatial data quality descriptions, I have two main recommendations: first, to continue the line of research followed in this thesis, quantifying the implications of spatial data quality through development of theory along with tangible illustrations in case studies; second, more empirical research is needed into how users cope with spatial data quality.
... Modern pipelines are very resistant to damage by mechanisms such as corrosion, owing to good design and the use of high-quality materials. Failure rates for onshore gas pipelines in Europe have fallen from 0.79 incidents per 1,000 km-years in the 1970s to 0.21 incidents per 1,000 km-years in the late 1990s [4]. Consequently, most anomalies found in service are superficial. ...
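The European rates quoted here use a per-1,000-km-years basis, while the US figures elsewhere in this document use per-1,000-mile-years; comparing them is a single multiplication, sketched below (the 0.21 figure is from the snippet above, the conversion factor is the standard mile-to-kilometre ratio):

```python
KM_PER_MILE = 1.609344  # exact international mile

def km_years_to_mile_years(freq_per_1000_km_years: float) -> float:
    """Convert a rate per 1,000 km-years to the same rate per 1,000 mile-years.

    1,000 mile-years equals 1,609.344 km-years of exposure, so the
    per-mile-year basis yields a numerically larger rate.
    """
    return freq_per_1000_km_years * KM_PER_MILE

# The late-1990s European rate of 0.21 per 1,000 km-years, re-expressed:
european_rate_mile_basis = km_years_to_mile_years(0.21)  # ~0.34 per 1,000 mile-years
```

This conversion matters when eyeballing the databases side by side: an apparent gap between a European and a US rate can be partly an artefact of the exposure unit.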
... This will ensure that disruption is minimised and the flow of LNG from Yemen to customers around the world is maintained. Initiate gassing up (re-pressurisation) of the section [4]. ...
Article
Large diameter long distance gas pipelines are high value assets which have to be kept operating. When damage occurs, or the pipeline fails, a rapid repair is critical to allow full operation to re-start. Suitable equipment and skilled personnel are required to ensure a repair can be completed for the range of damage that can occur. Many locations around the world can be remote or hostile creating an absence of both available skills (such as welders) and equipment for emergency repairs. Consequently, some operators need comprehensive repair systems and skills that can be mobilised quickly and easily. This paper presents an overview of the options for the emergency repair of different types of pipeline damage, and provides a strategy, and a case study of the process used to define the equipment and support contracts needed by the operator of a gas pipeline in a remote area to ensure that they could complete a repair to any credible damage or failure within just 7 days.
... Furthermore, impurities in the carbon dioxide such as water vapour or sulphur compounds must be avoided, since otherwise corrosive constituents can form that may damage transport vessels and pipes. If water vapour is present in the carbon dioxide, undesirable hydrates can also form under high pressure; their crystalline structures can cause blockages in pumps or pipes (Doctor 2000), (Gale 2004). Pipeline transport covers the conditioning and removal of the CO2 after capture, as well as the subsequent compression and injection of the CO2. ...
Technical Report
Full-text available
Research into acceptance plays a relevant role in the deployment of a new technology. In highly industrialised societies, acceptance can no longer be taken for granted. The societal acceptance of CCS is determined by numerous factors. From today's perspective, only limited statements can be made about it, mainly because awareness of the technologies is still low. People who live near a CO2 storage site or a CO2 pipeline will judge CCS technologies more critically than people with no spatial proximity to the technology, and their personally perceived risk is greater. This is not specific to CCS, but it applies to this technology to a particular degree given its extensive infrastructure requirements (a new pipeline network). The analysed CCS studies show very different research approaches and methods, depending on their objectives.
... Pipelines have been a popular research area over the past few decades (Papadakis, 1999), with investigations ranging from risk assessment to product scheduling (MirHassani and Ghorbanalizadeh, 2008). However, the primary focus has been on using risk-assessment methodologies to examine the probability of pipeline failure (Restrepo et al., 2009), with approaches ranging from using historical oil-spill data (Guijt, 2004) or fault-tree analysis (Yuhua and Datao, 2005) to estimate failure probabilities, to examining the underlying reasons for failure (Dziubinski et al., 2006). Most recently, risk-assessment efforts have focused on severe accidents (Bonvicini et al., 2015). ...
Article
Rail crude oil shipments have witnessed a steady increase over the past decade, which underscores the long-term viability of this transport mode. Although incidents involving these shipments can be catastrophic, link-level information could be useful for designing an appropriate emergency response network and responding to such episodes. We present a data-driven methodology that makes use of analytics to estimate the amount of crude oil on different rail links in Canada until 2030. The resulting analyses facilitated identifying high-risk links around Canada based on the current practice of the railroad industry, and suggest that incurring marginally higher transportation costs could reduce network risk. In addition, the availability of the proposed pipeline infrastructure would change the supply and demand location configurations over the forecast horizon, with the maximum changes to the current crude oil traffic flow pattern stemming from the completion of the Energy East pipeline project.
... Land uses near pipelines vary by locality, yet third-party damage to transmission pipelines occurs consistently. In both the United States and Europe, outside-force damage to oil pipelines constitutes one third or more of total pipeline incidents (Bouissou et al, 2004; Guijt, 2004). Over the last few decades there has been a reduction in the number of pipeline incidents in the United States, Europe, and Russia (Papadakis, 1999; PHMSA Office of Pipeline Safety, 2010), yet there remains a high potential for pipeline rupture from causes related to land-use planning, such as third-party damage (Bouissou et al, 2004, pages 1, 3). ...
Article
Full-text available
Although long-term planning can be improved by full stakeholder participation that generates consensus, there are some planning problems that lack interest from a large and diverse group of stakeholders. For these low-interest yet substantively important issues, such as hazard mitigation, technical collaboration has been suggested as a precursor to processes that involve full stakeholder participation. However, there has been only limited research evaluating the role of technical collaboration in practice. In this study I analyze how technical collaboration influences hazard mitigation capacity for communities at risk from hazardous liquid and natural gas transmission pipeline accidents. Semistructured interviews were conducted with forty-five emergency managers and planning directors located in the Greensboro-Winston-Salem, North Carolina (USA) metropolitan area whose communities had hazardous liquid and natural gas transmission pipelines. On the basis of these interview data, I classified technical collaborations into three categories: loose alliances, full partnerships, and hierarchically cooperative groups. Using this typology of technical collaboration, I found that the type of collaboration (1) influenced local knowledge about pipelines; (2) impacted how transmission pipeline hazards were addressed within a mitigation agenda; and (3) affected a community's long-term capacity to mitigate pipeline hazards and build resilience against potential disasters. Leadership, access to resources, and continuity of the collaboration affected the function of technical collaborations. The research illustrated the inconsistencies in hazard resilience outcomes produced by the three types of technical collaboration. Collectively, the results illustrate how some planners and emergency managers can overcome deficits in knowledge about transmission pipeline hazards or about hazard mitigation planning tools in order to improve hazard resilience. 
Practitioners from jurisdictions of various sizes can use this research to facilitate their use of existing relationships to achieve hazard mitigation goals or to address critical issues that may have limited stakeholder support.
... There were five times more incidents reported in the period 1979-1983 (0.90 incidents/year per 1,000 km) than in the later period 1985-1989 (0.18 incidents/year per 1,000 km). This is probably largely caused by the introduction of the reporting threshold (Guijt, 2004). In the United Kingdom a design factor of 0.72 is used for non-populated areas (<250 persons/km²) and 0.3 for populated areas (≥250 persons/km²), while in the USA and Europe a more gradual reduction of the design factor from 0.72 to 0.4 is regulated (Van der Heden et al., 2003; Code of Federal Regulation, 2010). ...
... Similar methodologies have also been used in Europe [12]. In addition, a number of studies using OPS and other datasets for the United States and Europe have focused on estimating the probability that a hazardous liquid or natural gas pipeline will fail given its size, and on trend analyses [13-15]. ...
Article
In this paper the causes and consequences of accidents in US hazardous liquid pipelines that result in the unplanned release of hazardous liquids are examined. Understanding how different causes of accidents are associated with consequence measures can provide important inputs into risk management for this (and other) critical infrastructure systems. Data on 1582 accidents related to hazardous liquid pipelines for the period 2002-2005 are analyzed. The data were obtained from the US Department of Transportation's Office of Pipeline Safety (OPS). Of the 25 different causes of accidents included in the data, the most common are equipment malfunction, corrosion, material and weld failures, and incorrect operation. This paper focuses on one type of consequence, the various costs associated with these pipeline accidents, and the causes associated with them. The following economic consequence measures related to accident cost are examined: the value of the product lost; public, private, and operator property damage; and cleanup, recovery, and other costs. Logistic regression modeling is used to determine which factors are associated with nonzero product-loss cost, nonzero property-damage cost, and nonzero cleanup and recovery costs. The factors examined include the system part involved in the accident, location characteristics (offshore versus onshore location, occurrence in a high-consequence area), and whether there was liquid ignition, an explosion, and/or a liquid spill. For the accidents associated with nonzero values for these consequence measures, (weighted) least-squares regression is used to understand the factors related to them, as well as how the different initiating causes of the accidents are associated with the consequence measures. The results of these models are then used to construct illustrative scenarios for hazardous liquid pipeline accidents.
These scenarios suggest that the magnitude of consequence measures such as value of product lost, property damage and cleanup and recovery costs are highly dependent on accident cause and other accident characteristics. The regression models used to construct these scenarios constitute an analytical tool that industry decision-makers can use to estimate the possible consequences of accidents in these pipeline systems by cause (and other characteristics) and to allocate resources for maintenance and to reduce risk factors in these systems.
Chapter
Carbon dioxide transportation from the capture point to the utilization or storage point plays a key function in carbon capture and storage systems. CO2 transportation modes (onshore pipelines, offshore pipelines, and ships) are introduced in this chapter. The design specifications, construction procedures, cost estimation, safety regulations, environmental and risk aspects, energy requirement, international codes and standards, legal issues, and international conventions of these modes are presented and discussed. Furthermore, the challenges and future research directions associated with CO2 transportation are summarized. The large capital and operational costs, integrity, flow assurance, and safety issues are the greatest challenges of CO2 pipeline transport. Substantial efforts must be directed to reduce these costs by developing less energy-intensive configurations. A holistic assessment of the impacts of CO2 impurities on the corrosion rate and the phase change of the transported stream is required to improve pipeline integrity. The influence of impurities and of changes in elevation on the pressure drop along the pipeline needs to be further investigated to ensure continuous flow via accurate positioning of the pumping stations. Although the long experience of the oil and gas pipeline industry forms a powerful reference, it is necessary to develop particular standards and techno-economic frameworks to mitigate the barriers facing CO2 transportation technologies.
Chapter
Global climate change and increasing greenhouse gas emissions require immediate attention, and there is a need to develop improved carbon capture and sequestration (CCS) technologies. Experimental work is crucial in investigating the several factors affecting the carbon capture process. However, molecular simulations provide powerful and robust methods to investigate the underlying mechanisms and physicochemical properties during CO2 capture. The use of suitable absorbents, and of advanced adsorbent materials, is discussed in the present chapter. Process modeling and simulations pertaining to CCS are discussed in detail, enabling selection of the thermodynamic model that best reproduces the experimental data obtained from pilot plants and carbon capture plants. The challenges limiting the performance of these modeling techniques are also discussed.
Article
Full-text available
Facility siting mitigation decisions should be made in a logical and defensible manner. This article provides a framework for making and justifying facility siting mitigation decisions beginning with presenting risk results in a clear manner prior to identifying practical risk mitigation strategies, highlighting potential source and location risk mitigation strategies, demonstrating how these strategies can be evaluated with examples, and ultimately to quantifying and optimizing the safety‐benefit of each mitigation strategy/combination of strategies. The outcome of this process provides a defensible basis for prioritization and practicality of risk mitigation strategies, or a combination of strategies that reduce facility risk to broadly acceptable levels or as low as reasonably practicable while minimizing expense.
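The prioritization step this abstract describes, quantifying the safety-benefit of each mitigation strategy and selecting a combination that brings risk to a tolerable level at minimum expense, can be illustrated with a simple greedy heuristic. The option names, costs, risk reductions, and the multiplicative-reduction assumption below are all invented for illustration; the article's quantitative framework is more elaborate.

```python
from typing import NamedTuple

class Option(NamedTuple):
    name: str
    cost: float              # relative cost units (hypothetical)
    risk_reduction: float    # fraction of current risk removed (hypothetical)

OPTIONS = [
    Option("relocate occupied building", 100.0, 0.50),
    Option("blast-resistant design", 40.0, 0.30),
    Option("reduce inventory at source", 10.0, 0.15),
    Option("add gas detection / isolation", 15.0, 0.20),
]

def select(options, initial_risk, tolerable_risk):
    """Greedy ALARP-style selection: best risk reduction per unit cost first."""
    chosen, risk, spend = [], initial_risk, 0.0
    for opt in sorted(options, key=lambda o: o.risk_reduction / o.cost,
                      reverse=True):
        if risk <= tolerable_risk:
            break  # already within the broadly acceptable region
        risk *= (1.0 - opt.risk_reduction)  # reductions assumed multiplicative
        spend += opt.cost
        chosen.append(opt.name)
    return chosen, risk, spend

chosen, final_risk, spend = select(OPTIONS, initial_risk=1e-3,
                                   tolerable_risk=5e-4)
print(chosen)
print(f"final risk {final_risk:.2e} per year, cost {spend}")
# → the three cheapest-per-benefit options are taken; the costly
#   building relocation is never needed to reach the target
```

The value of even this toy version is that it makes the trade-off explicit: the most effective single measure (relocation) is not selected, because cheaper combinations reach the tolerable-risk threshold first, which is exactly the kind of defensible, documented reasoning the article advocates.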
Chapter
The main objective of carbon capture and storage (CCS) is to prevent CO2 from entering the atmosphere by capturing CO2 from large industrial sources and securely storing it in various carbon sinks. CCS is considered a critical component of the portfolio of carbon mitigation solutions, because the global economy heavily relies, and will continue to rely, on fossil fuels in the foreseeable future. Currently, there are close to 300 active and planned CCS-related projects around the world—an indication of a growing commitment to this technological option. However, despite significant progress in CCS technology, the pace of CCS commercial deployment is rather slow. The major challenges facing large-scale CCS deployment worldwide relate to a very high financial barrier and limited economic stimuli or regulatory drivers to encourage investments in the technology. This chapter highlights scientific and engineering progress in all three major stages of the CCS chain, CO2 capture, transport, and storage, and the current status of existing and planned commercial CCS projects. Technological, economic, environmental, and societal aspects of large-scale CCS deployment and its prospects as a major carbon abatement policy are analyzed in this chapter.
Article
Full-text available
A Partial Factor Design Method was introduced in the 1992 version of the Dutch Pipeline Standard NEN 3650. Within NEN 3650:1992, Load Combinations and Load Factors were taken from earlier Codes and Standards or derived from other Limit State Design Guidelines, applying good engineering judgment. When updating NEN 3650, it was decided to carry out a research project for calibration of Load and Resistance Factors, by adopting a reliability-based verification method. The new NEN 3650, published in July 2003, defines the new set of Load Combinations, as well as Load and Resistance Factors for Ultimate Limit States (ULS) and Serviceability Limit States (SLS). The Load and Resistance Factors currently introduced are dependent on the required safety level (reliability index β), the coefficient of variation (CoV or V), and a sensitivity coefficient α (defining the importance of the load or resistance variable). Determination of Partial Factors for soil parameters was also considered when performing the research project. This paper summarizes the main outcome of the research project and presents the Load and Resistance Factors as introduced in NEN 3650:2003 (References 1, 2). Copyright © 2004 by The International Society of Offshore and Polar Engineers.
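The dependence of a partial factor on β, V, and α mentioned in this abstract can be illustrated with the generic FORM-based relation for a lognormally distributed resistance variable. This is a textbook relation, not the actual NEN 3650:2003 calibration, and the numerical values of α, β, and V below are assumptions chosen only to show the mechanics.

```python
import math

def resistance_factor(alpha_R: float, beta: float, V: float) -> float:
    """Partial factor gamma_m = x_k / x_d for a lognormal resistance.

    x_k = mu * exp(-1.645 * V)          # characteristic value (5% fractile)
    x_d = mu * exp(-alpha_R * beta * V) # FORM design value
    so gamma_m = exp((alpha_R * beta - 1.645) * V).
    """
    return math.exp((alpha_R * beta - 1.645) * V)

# Example: sensitivity alpha_R = 0.8, target reliability index beta = 3.8,
# coefficient of variation V = 10% (all assumed values)
gamma_m = resistance_factor(alpha_R=0.8, beta=3.8, V=0.10)
print(f"partial resistance factor: {gamma_m:.3f}")
# → partial resistance factor: 1.150
```

The relation makes the abstract's point concrete: a higher required reliability index β, a larger scatter V, or a larger sensitivity coefficient α each push the partial factor up, which is exactly why the calibrated factors in NEN 3650:2003 are tabulated against these quantities.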
Chapter
Publisher Summary CO2 sequestration using carbonate mineralization, employing locally available sources of ultramafic rock, was investigated for 300-MW and larger pulverized-coal-fired power plants in the U.S. EPA Region II network known as PJM. PJM could deliver up to 147 × 10³ metric tons of CO2 daily, and using nearby ultramafic resources it is technically feasible to provide all the resources for sequestering this volume of CO2 through mineralization. A Pennsylvania-based central carbonate formation facility located near Allentown, PA would be an appropriate host site, since Allentown is near the weighted geographical epicenter of the CO2 sources and is accessible to nearby zones in which the carbonates could be stored after formation. The proposed pipeline routes will follow existing right-of-way corridors, and permission will need to be secured for 800 km of pipeline ranging in size from 10 to 48 inches. All but 135 km of this right-of-way will be subject to the rigorous administrative safety reviews now mandated by the U.S. Department of Transportation. The capture cost was estimated at $42 per metric ton of CO2 using an oxy-fuel firing approach. These costs could come down significantly with success in ongoing research.
Article
The environmental impacts of carbon capture and storage (CCS) or carbon capture, utilization, and sequestration (CCUS) are very important not only for local public acceptance, but also for long-term project operations. This paper summarizes the key pollutants and reviews the probable environmental risks associated with carbon capture, transmission, and enhanced oil recovery and sequestration.
Conference Paper
The first stage in performing a risk calculation is the failure rate assessment. In order to get reliable risk calculations, a good understanding of the failure rate of an underground pipeline is indispensable. Earlier studies already showed that extra cover significantly reduces the likelihood of damage caused by third party interference. This paper describes the influence of depth of cover based on the latest information on incidents. Moreover, the influence of population density on the damage rate of pipelines is studied.
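The failure-rate assessment described in this abstract, a total frequency built from per-cause contributions in which only the third-party-interference term responds to depth of cover, can be sketched as follows. The base frequencies and the exponential reduction constant `k` are invented placeholders, not EGIG or published values; only the structure (cause decomposition plus a cover-dependent external-interference term) reflects the approach described.

```python
import math

# Assumed per-cause failure frequencies, in failures per 1,000 km-year.
# These numbers are hypothetical and chosen only for illustration.
BASE_FREQ = {
    "external_interference": 0.20,
    "corrosion": 0.06,
    "construction_defect": 0.05,
    "other": 0.04,
}

def failure_frequency(depth_m: float, ref_depth_m: float = 1.0,
                      k: float = 1.5) -> float:
    """Total failure frequency at a given depth of cover.

    Only third-party (external interference) damage is assumed to depend
    on cover depth, via a hypothetical exponential reduction factor.
    """
    reduction = math.exp(-k * (depth_m - ref_depth_m))
    freq = dict(BASE_FREQ)
    freq["external_interference"] *= reduction
    return sum(freq.values())

shallow = failure_frequency(1.0)   # reference cover depth
deep = failure_frequency(2.0)      # one extra metre of cover
print(f"{shallow:.3f} vs {deep:.3f} failures per 1,000 km-year")
# → 0.350 vs 0.195 failures per 1,000 km-year
```

Even with these invented numbers, the sketch shows why the paper treats depth of cover separately: extra cover suppresses only the external-interference contribution, so its benefit is largest where third-party activity (for example, high population density) dominates the total failure rate.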