Direct Economic Losses in the Northridge Earthquake: A Three-Year Post-Event Perspective

Article (PDF available) in Earthquake Spectra 14(2) · May 1998
DOI: 10.1193/1.1585998
Abstract
The Northridge earthquake will long be remembered for the unprecedented losses incurred as a result of a moderate-size event in a suburban area of Los Angeles. Current documented costs indicate that this event is the costliest disaster in U.S. history. Although it is difficult to estimate the full cost of this event, it is quite possible that total losses, excluding indirect effects, could reach as much as $40 billion. This would still make the Northridge earthquake less severe than the Kobe event, which occurred exactly one year later, but it lends realism to the possibility that a Kobe-type disaster could occur in the U.S. This paper attempts to put into perspective the direct capital losses associated with the Northridge earthquake. In doing so, we introduce the concept of hidden and/or undocumented costs that could double current estimates. In addition, we present the notion that a final estimate of loss may be impossible to achieve, although costs do begin to level off two years after the earthquake. Finally, we attempt to reconcile apparent differences between the loss totals of two databases tracking similar information.
  • Preprint
    Full-text available
    Despite the wide range of possible scenarios in the aftermath of a disruptive event, each community can make choices to improve its resilience, or its ability to bounce back. A resilient community is one that has prepared for, and can thus absorb, recover from, and adapt to the disruptive event. One important aspect of the recovery phase is assessing the extent of the damage in the built environment through post-event building inspections. In this paper, we develop and demonstrate a resilience-based methodology intended to support rapid post-event decision-making about inspection priorities with limited information. The method uses the basic characteristics of the building stock in a community (floor area, number of stories, type of construction and configuration) to assign structure-specific fragility functions to each building. For an event with a given seismic intensity, the probability of each building reaching a particular damage state is determined, and is used to predict the actual building states and priorities for inspection. Losses are computed based on building usage category, estimated inspection costs, the consequences of erroneous decisions, and the potential for unnecessary restrictions in access. The aim is to provide a means for a community to make rapid cost-based decisions related to inspection of their building inventory. We pose the decision problem as an integer optimization problem that attempts to minimize the expected loss to the community. The advantages of this approach are that it: (i) is simple, (ii) requires minimal inventory data, (iii) is easily scalable, and (iv) does not require significant computing power. Use of this approach before the hazard event can also provide a community with the means to plan and allocate resources in advance of an event to achieve the desirable resiliency goals of the community.
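    The abstract above casts inspection prioritization as an integer program that minimizes expected loss to the community. As a minimal illustrative sketch, not the authors' formulation, a greedy, budget-limited version of that decision could look like the following; the building attributes, fragility-derived probabilities, and cost figures are all hypothetical.

    ```python
    # Illustrative sketch (not the paper's integer program): rank post-event
    # building inspections by expected avoided loss per dollar of inspection cost,
    # then fill a fixed inspection budget. All numbers below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Building:
        name: str
        p_damage: float            # P(damaging state | event intensity), from a fragility function
        loss_if_undetected: float  # consequence of leaving damage uninspected ($)
        inspect_cost: float        # cost of sending an inspection team ($)

    def prioritize(buildings, budget):
        """Greedy stand-in for the expected-loss-minimizing integer program."""
        ranked = sorted(buildings,
                        key=lambda b: b.p_damage * b.loss_if_undetected / b.inspect_cost,
                        reverse=True)
        plan, spent = [], 0.0
        for b in ranked:
            if spent + b.inspect_cost <= budget:
                plan.append(b.name)
                spent += b.inspect_cost
        return plan, spent

    stock = [
        Building("school",    p_damage=0.35, loss_if_undetected=2.0e6, inspect_cost=4000),
        Building("warehouse", p_damage=0.20, loss_if_undetected=4.0e5, inspect_cost=2500),
        Building("clinic",    p_damage=0.15, loss_if_undetected=3.0e6, inspect_cost=5000),
    ]
    print(prioritize(stock, budget=8000))  # -> (['school', 'warehouse'], 6500.0)
    ```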
  • Article
    Full-text available
    New Zealand's Alpine Fault is a large, plate-bounding strike-slip fault that ruptures in large (MW > 8) earthquakes. Its hazard potential is linked to its geometrical properties. We conducted field and laboratory analyses of fault rocks to elucidate their influence on fault zone architecture. Results reveal that the Alpine Fault zone has a complex geometry, comprising an anastomosing network of multiple slip planes that have accommodated different amounts of displacement. Within it, slip zone width is demonstrably not related to lithological differences of the quartzofeldspathic lithologies, which vary slightly along-strike. The young, largely unconsolidated sediments that constitute the footwall in some outcrops have a much more significant influence on fault gouge rheological properties and structure. Additionally, seismic investigations indicate that the exposed complex fault zone architecture extends into the basement. This study reveals that the Alpine Fault contains multiple slip zones surrounded by a broader damage zone, properties elsewhere associated with carbonate- or phyllosilicate-rich faults.
  • Article
    Full-text available
    Although California is a highly seismic region, most homeowners there are not covered against this risk. This study analyses the reasons homeowners do or do not purchase insurance against earthquake losses, with an application to California. A dedicated database is built from 18 different data sources on earthquake insurance, gathering data since 1921. A new model is developed to assess the take-up rate based on homeowners' risk awareness and the average annual insurance premium. Results suggest that only two extreme situations would lead all owners to insure their homes: (1) a widespread belief that a devastating earthquake is imminent, or alternatively, (2) a massive decrease in the average annual premium by a factor exceeding 6 (from $980 to $160, USD 2015). Considering the low likelihood of either situation, we conclude that new insurance solutions are necessary to fill the protection gap.
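    The abstract does not spell out the take-up-rate model itself. Purely as a hedged illustration of the kind of relationship it describes, the sketch below assumes a logistic take-up rate driven by perceived expected loss relative to the annual premium; the functional form and every parameter value are assumptions, not the authors'.

    ```python
    # Minimal illustrative sketch only (not the authors' model): a logistic
    # take-up rate driven by perceived expected annual loss versus premium.
    # All parameter values are hypothetical.
    import math

    def take_up_rate(risk_awareness, premium, expected_annual_loss=900.0, k=3.0):
        """Share of homeowners buying earthquake cover.

        risk_awareness: 0..1 multiplier on the objective expected annual loss
        premium:        average annual premium in USD
        """
        perceived_benefit = risk_awareness * expected_annual_loss
        x = k * (perceived_benefit / premium - 1.0)  # > 0 when cover looks "worth it"
        return 1.0 / (1.0 + math.exp(-x))

    # Directionally consistent with the abstract: cutting the premium from $980
    # toward $160 pushes the modeled take-up rate toward 1.
    for p in (980, 500, 160):
        print(p, round(take_up_rate(risk_awareness=0.5, premium=p), 2))
    ```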
  • Article
    Performance-based design (PBD) of buildings can be properly addressed in a multi-objective optimization framework. However, the computational cost of such an approach is very high, especially if nonlinear time-history analysis (NTHA) is used as the evaluation tool. In this paper, significant reductions in the computational cost of solving structural multi-objective optimization problems are achieved by proposing a new metaheuristic surrogate model called Surrogate FC-MOPSO. In this method, which is an extension of the FC-MOPSO algorithm, NTHA and pushover analysis (PA) are employed simultaneously for evaluating system responses. PAs are adopted as an approximation tool in the surrogate model, while the responses corresponding to feasible solutions are always evaluated from NTHAs. The final Pareto-optimal solutions, which yield tradeoffs between initial and life-cycle costs (LCCs), are therefore all evaluated with NTHAs. It is shown that applying the proposed method substantially reduces the runtime of the considered problems. It is also demonstrated that adopting PAs as the only evaluation tool in optimal performance-based design of structures can result in unreliable solutions.
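    To make the surrogate idea concrete, and only as a rough sketch rather than the paper's Surrogate FC-MOPSO algorithm, the snippet below screens candidate designs with a cheap approximate analysis and reserves the expensive nonlinear time-history analysis for promising ones; the two analysis functions and the drift limit are placeholders.

    ```python
    # Sketch of surrogate-assisted evaluation (placeholders, not the paper's code):
    # a cheap pushover-like estimate screens designs; only candidates that look
    # feasible are confirmed with the expensive nonlinear time-history analysis.

    def pushover_estimate(design):
        # cheap approximate response (stand-in for a pushover analysis)
        return sum(design) * 0.01

    def ntha_response(design):
        # expensive "exact" response (stand-in for nonlinear time-history analysis)
        return sum(x ** 1.01 for x in design) * 0.01

    DRIFT_LIMIT = 0.02  # hypothetical drift constraint

    def evaluate(design):
        """Pushover first; run NTHA only if the design may be feasible."""
        approx = pushover_estimate(design)
        if approx > 1.2 * DRIFT_LIMIT:  # clearly infeasible: skip the costly NTHA
            return {"feasible": False, "drift": approx, "exact": False}
        drift = ntha_response(design)   # promising: confirm with NTHA
        return {"feasible": drift <= DRIFT_LIMIT, "drift": drift, "exact": True}

    print(evaluate([1.0, 0.8, 0.5]))
    ```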
  • Article
    Full-text available
    Structures designed in accordance with even the most modern building codes are expected to sustain damage during a severe earthquake; however, these structures are expected to protect the lives of the occupants. Damage to the structure can require expensive repairs, significant business downtime, and in some cases building demolition. If damage occurs to many structures within a city or region, the regional and national economy may be severely disrupted. To address these shortcomings of current seismic lateral force resisting systems and to work towards more resilient, sustainable cities, a new class of seismic lateral force resisting systems that sustains little or no damage under severe earthquakes has been developed. These new seismic lateral force resisting systems reduce or prevent structural damage to nonreplaceable structural elements by softening the structural response elastically through gap-opening mechanisms. To dissipate seismic energy, friction elements or replaceable yielding energy dissipation elements are also included. Post-tensioning is often used as part of these systems to return the structure to a plumb, upright position (self-center) after the earthquake has passed. This paper summarizes the state of the art for self-centering seismic lateral force resisting systems and outlines current research challenges for these systems.
  • Article
    In recent years, much attention has been paid to the research and development of a new kind of passive control device named the particle damper, which has a configuration and application method very similar to the tuned mass damper; however, the damping mechanisms are different. Hence, systematic comparative studies of these dampers are very important for future applications. In this paper, three cases, including a single-degree-of-freedom structure, a 5-story linear-elastic steel frame, and a 20-story nonlinear benchmark building, are used as primary structures to compare structural performance with an optimal tuned mass damper and an optimal particle damper. The optimal parameters of the particle damper are designed by a differential evolution algorithm, and the optimal parameters of the tuned mass damper are designed by the classical Den Hartog theory, with the same additional mass in both cases. The numerical simulations show that a properly designed particle damper has a better vibration control effect than the optimal tuned mass damper, not only for elastic performance indexes but also for nonlinear performance indexes, such as the number and maximum rotation of plastic hinges and the energy dissipation of components. Moreover, compared with the optimal tuned mass damper system, the optimal particle damper significantly reduces the relative displacement between the primary structure and the damper itself, and it exhibits better robustness.
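    For reference, the classical Den Hartog tuning cited in the abstract for the tuned mass damper can be written down in closed form; the sketch below evaluates it for an example 5% mass ratio (the differential-evolution design of the particle damper is problem-specific and is not reproduced here).

    ```python
    # Den Hartog optimal tuning of a tuned mass damper (TMD) attached to an
    # undamped primary mode under harmonic excitation. The 5% mass ratio is
    # just an example value.
    import math

    def den_hartog_tmd(mass_ratio):
        """Return (frequency ratio, TMD damping ratio) per Den Hartog."""
        f_opt = 1.0 / (1.0 + mass_ratio)  # omega_tmd / omega_structure
        zeta_opt = math.sqrt(3.0 * mass_ratio / (8.0 * (1.0 + mass_ratio) ** 3))
        return f_opt, zeta_opt

    mu = 0.05  # TMD mass / modal mass of the primary structure
    f_opt, zeta_opt = den_hartog_tmd(mu)
    print(f"frequency ratio = {f_opt:.3f}, damping ratio = {zeta_opt:.3f}")
    # roughly 0.952 and 0.127 for a 5% mass ratio
    ```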
  • Conference Paper
    Full-text available
    In most recent earthquakes, traditional seismic design demonstrated its effectiveness in reducing casualties through ductile structural mechanisms, but it allowed extensive structural damage that accounted for tremendous economic losses. This evidence raised awareness of the need for an increased level of resilience, mostly in low-rise buildings. Seismic isolation is a protection system that has proved successful in preventing damage and maintaining operability. However, high costs and severe testing protocols currently discourage the extensive application of this technology to low-rise buildings. In this study, an architected periodic cellular material with unprecedented characteristics is proposed as a low-cost alternative to traditional technologies for producing seismic isolation devices for implementation in residential, retail, and office buildings. Results from preliminary numerical analyses demonstrate the range of performance of this novel architected material in comparison with traditional materials. The scalability of the architected material is also addressed, with the aim of investigating the feasibility of using tests on small assemblies of the constituent unit cells instead of full-scale tests to assess the structural performance of the isolators. Keywords: seismic isolation; periodic cellular materials; testing protocols; resilience; low-rise buildings
  • Article
    A multiple level-of-detail (LOD) simulation framework is proposed in this study to take into full consideration the diversity of structural types, available data, and simulation scenarios in an actual application of seismic-damage simulation to urban buildings. Firstly, key features of the frequently used seismic simulation methods for buildings are discussed, and the logical relationships among these simulation methods, as well as the available multi-source data, are established for different LODs. Secondly, the implementation of the proposed multi-LOD simulation framework is presented, and a unified city data structure is proposed to enable effective management and storage of data with different LODs. Finally, the Beijing central business district, which has various types of buildings, is investigated in detail to demonstrate the proposed multi-LOD framework. The accuracy, efficiency, and corresponding requirements of different LOD simulations are compared and discussed. The outcomes of this work are expected to provide a useful reference for the application of seismic-damage simulations in complex urban areas.
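    The unified city data structure itself is not described in the abstract; the sketch below is a hypothetical example of how buildings at different LODs might be stored and selected for simulation, with all field names and LOD labels assumed for illustration.

    ```python
    # Hypothetical multi-LOD city data structure (field names and LOD labels are
    # assumptions, not the paper's schema).
    from dataclasses import dataclass, field

    @dataclass
    class BuildingRecord:
        building_id: str
        lod: int                   # e.g. 0 = footprint only, 1 = MDOF shear model, 2 = refined FE model
        footprint_area_m2: float
        stories: int
        structure_type: str        # e.g. "RC frame", "steel frame", "masonry"
        model_params: dict = field(default_factory=dict)  # whatever the chosen LOD needs

    @dataclass
    class CityModel:
        name: str
        buildings: list = field(default_factory=list)

        def by_lod(self, lod):
            """Select the subset of buildings to be simulated with a given method."""
            return [b for b in self.buildings if b.lod == lod]

    cbd = CityModel("example district")
    cbd.buildings.append(BuildingRecord("B001", lod=1, footprint_area_m2=900.0,
                                        stories=12, structure_type="RC frame"))
    print(len(cbd.by_lod(1)))  # -> 1
    ```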
  • Article
    This paper mainly addresses the validity and verification of current code-based knowledge through the use of test data from a benchmark structure in Japan. A full-scale, four-story reinforced concrete building from the 2010 E-Defense test program is revisited to assess the capability of the current state of practice to predict the actual seismic response of reinforced concrete structures, taking advantage of invaluable data and the high-quality measurement techniques applied during the tests. The detailing of the structural elements complied with the Japanese seismic design code, whereas minor modifications were applied to the final design of the structure to bring it closer to well-known US design practice. A series of full-scale shake table tests was performed by gradually increasing the two ground motions of the 1995 Hyogo-Ken Nanbu earthquake until the near-collapse limit state was reached. Global dynamic response characteristics of the structure due to damage accumulation were derived from the test data to trace the strength and stiffness deterioration in the force–deformation hysteresis. Even though the story drift angles exceeded the critical 0.04 level, beyond the limitations of code provisions, the structure preserved its dynamic stability and remained intact. Based on the surveys and measurements performed at the end of each test, the shear walls and beam–column joints sustained severe damage, while the beams and columns maintained their structural integrity during the entire test program. Parameters of advanced hysteretic models for the moment-resisting and shear wall frames were calibrated using the test data to develop analytical models that can represent all modes of cyclic deterioration under pinching action, as an alternative to the Takeda model customarily used in Japan. For the transformation of the MDOF system to an equivalent SDOF system, a straightforward procedure is applied to adapt the global response characteristics to the versatile hysteretic model. Moreover, the capability of capturing the seismic response of the RC system through advanced hysteretic models is justified. The test results and numerical analyses imply the necessity of further improvements to the MLIT and US design provisions to reduce the observed damage to structural elements, particularly shear walls and beam–column joints, in order to achieve reparability goals.
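    The abstract only names the MDOF-to-equivalent-SDOF step; the conventional first-mode transformation that such procedures typically follow (e.g., in the N2 method) is shown below as a point of reference, not as the authors' exact procedure.

    ```latex
    % Conventional first-mode MDOF -> equivalent SDOF transformation (reference
    % form only; the authors' exact procedure is not given in the abstract).
    \Gamma = \frac{\boldsymbol{\phi}^{\mathsf T}\mathbf{M}\,\boldsymbol{\iota}}
                  {\boldsymbol{\phi}^{\mathsf T}\mathbf{M}\,\boldsymbol{\phi}},
    \qquad
    m^{*} = \boldsymbol{\phi}^{\mathsf T}\mathbf{M}\,\boldsymbol{\iota},
    \qquad
    F^{*} = \frac{V_b}{\Gamma},
    \qquad
    d^{*} = \frac{d_{\mathrm{roof}}}{\Gamma}
    ```

    Here M is the mass matrix, phi the first-mode shape normalized to the roof, iota the influence vector, V_b the base shear, and d_roof the roof displacement; F* and d* define the capacity curve of the equivalent SDOF system.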
  • Article
    Psychological maternal stress is thought to be a factor in poor infant health, but direct evidence is difficult to obtain. We posit that the 1994 Northridge earthquake in Los Angeles, California, provides a natural test of the effect of mothers' stress on infants' birth weight and gestation. The Northridge disaster featured a low rate of injury and a quick recovery, but long-lasting and well-documented consequences for mental health. Difference-in-differences results show that infants born closest to the epicenter were 0.2 percentage points more likely to be born with low birth weight. Impacts were larger and more precisely identified for women who experienced the earthquake in their first or third trimester. Among the subsample of mothers most susceptible to stress – first-time, single mothers – low birth weight was 0.5 percentage points more likely to occur. We find little evidence that the earthquake affected preterm delivery.
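    The regression itself is not reported in the abstract; the canonical two-group, two-period difference-in-differences specification it alludes to has the form below, with generic variable names rather than the authors'.

    ```latex
    % Generic difference-in-differences specification (canonical form only;
    % the authors' exact regression is not given in the abstract).
    y_{i} = \beta_{0}
          + \beta_{1}\,\mathrm{Near}_{i}
          + \beta_{2}\,\mathrm{Post}_{i}
          + \beta_{3}\,(\mathrm{Near}_{i} \times \mathrm{Post}_{i})
          + \mathbf{x}_{i}'\boldsymbol{\gamma}
          + \varepsilon_{i}
    ```

    Here y_i indicates low birth weight, Near_i flags mothers close to the epicenter, Post_i flags post-earthquake exposure, x_i collects controls, and beta_3 is the difference-in-differences estimate of the stress effect.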
  • Article
    The rapid availability of estimated shaking intensities, dollar losses and social impacts following the Northridge earthquake of January 17, 1994, proved to be a valuable resource to the California Office of Emergency Services (OES) and the Federal Emergency Management Agency (FEMA) in response and recovery decision making. These estimates were used to supplement standard reconnaissance procedures and expedite decisions which in previous disasters were delayed until observational assessments had been completed. Based on extensive, in-depth interviews with state and federal emergency managers, this paper will focus on how these estimates were used in making decisions. Methods and models employed in generating estimates will be addressed only briefly. Discussion will include an assessment of the potential benefits and pitfalls of near real-time loss estimation in emergency management.