Earthquakes on steroids, more powerful than the fat tail


Extreme earthquakes are low-probability-high-consequence events, meaning that they are both rare and potentially very damaging. Being rare, we have little historical evidence of their impact, and with exponential demographic growth, past damage experience does not even reflect what would happen today (Bilham, 2009). Psychological biases, such as the availability heuristic (our tendency to evaluate risk from the recent examples that come to mind; Kahneman, 2011), do not help us make sense of those extreme events. That is why, when such an earthquake strikes, it comes as a surprise. After the fact, however, it is rationalised by being added to the bucket of possible events. Nassim Taleb used the “black swan” metaphor to describe these extreme events (Taleb, 2007); Didier Sornette proposed the more impressive “dragon-king” for even more extreme ones (Sornette, 2009).

But let’s not get carried away by those metaphors! Let’s first clarify the real meaning of those two buzzwords and then dismiss them (something, I believe, long overdue; please comment if you disagree). To show that both authors had extreme earthquakes in mind, let’s start by quoting them (emphasis mine):

“If you are dealing with quantities from Extremistan, you will have trouble figuring out the average from any sample… knowledge in Extremistan grows slowly and erratically with the addition of data, some of it extreme, possibly at an unknown rate… Matters that seem to belong to Extremistan: wealth, …, number of references on Google, …, damage caused by earthquakes, …” — N.N. Taleb (2007)

“These large ‘characteristic’ earthquakes have rupture lengths comparable with the fault length. If proven valid, this concept of a characteristic earthquake provides another example in which a dragon-king coexists with a power law distribution of smaller events.” — D. Sornette (2009)

Black-swan and dragon-king metaphors are indeed fun: they excite the imagination and are great conversation starters. Although described as “theories” (see Wikipedia, for instance), they are just attempts at marketing the boring term “outlier”. Indeed:

  • A black swan is an outlier in a normal (Gaussian) distribution, but it is no longer one in a power-law distribution (simply put, you can no longer learn from the mean value);
  • A dragon-king is an outlier in a power-law distribution, but it is no longer one in an even heavier-tailed distribution.

That’s it, there is nothing more to it! Those metaphors have the merit of proving their point, but they also lead to much confusion: Being prepared against the unpredictable is a worthwhile effort to improve one’s resilience, but should we stop trying to predict extremes altogether? Haven’t you ever heard someone say “anyway it’s a black swan, can’t predict it… can’t even imagine it”? What do you answer to such a definitive statement? Is the dragon-king the answer then, since it implies some degree of predictability? The problem here is twofold: (1) it does not solve the black-swan problem in itself, it only adds another layer to it; (2) the dragon-king “theory” equates to complexity theory, and complexity, despite the hype, is not the only physical framework around to explain extreme events (more on this in a future article). As we can see, we might be better off without those metaphors. In the rest of this article, you will see many examples of extreme earthquakes. No zoological classification of those apparent outliers will be proposed; instead, a collection of potential physical processes leading to these extremes will be given.


Large earthquakes are massive objects, displacing blocks of the Earth’s crust by up to tens of metres over hundreds of kilometres or more. It is obvious that, with such large spatial footprints, earthquakes can lead to some of the most striking domino effects, such as landslides, tsunamis, critical infrastructure collapses, etc. (see “Using imagination in the risk assessment of domino effects: an exercise with natural science teachers”, based on Mignan et al., 2016). An aggregation of all those consequences can make any large earthquake quite extreme. The present article looks at something else: how earthquake risk, on its own, can already become extreme relative to standard probabilistic seismic risk assessment. We only need to consider a few faults, some ground to shake, and a few houses: nothing fancy or unusual.

But before I proceed, a disclaimer is in order: This article illustrates how extreme earthquakes can emerge; it does not mean that they are the most crucial events to consider in probabilistic seismic risk assessment. Multiplying occurrence rate by expected loss can change the full risk picture. A once-in-a-million-years earthquake that yields, say, one trillion in damage is less risky on average than a once-in-a-hundred-years earthquake that yields one billion in damage. Only proper modelling in a specific region can tell which is which…
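As a quick back-of-the-envelope check of that claim, here is a minimal sketch in Python, using the hypothetical numbers above:

```python
# Expected annual loss = occurrence rate x loss per event.
# Numbers are the hypothetical ones from the paragraph above.
rare_event = (1 / 1_000_000) * 1e12   # once-in-a-million-years, $1 trillion
common_event = (1 / 100) * 1e9        # once-in-a-hundred-years, $1 billion
print(f"rare:   ${rare_event:,.0f} per year")    # $1,000,000 per year
print(f"common: ${common_event:,.0f} per year")  # $10,000,000 per year
```

The frequent billion-dollar event thus carries ten times the average annual loss of the rare trillion-dollar one.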

The power-law tail: Extending the earthquake frequency-magnitude distribution to higher magnitudes with fault rupture physics

Like many other processes observed in Nature (e.g., Clauset et al., 2009), the earthquake frequency-size distribution follows a power law, here expressed in the energy domain (first proposed by Wadati in 1932 — see Utsu (1999) for a review):

$$N(\geq E) \propto E^{-\beta}, \qquad \beta = b/c$$
With the earthquake magnitude M a logarithmic measure of the seismic energy E:

$$\log_{10} E = c\,M + d$$
we easily get the exponential Gutenberg-Richter law (Gutenberg and Richter, 1944):

$$\log_{10} N(\geq M) = a - b\,M$$
This means that by increasing the magnitude M by one unit, an earthquake becomes roughly 30 times more powerful (i.e., 10^1.5 ≈ 31.6 with c = 1.5) while 10 times less frequent (i.e., 10^-1 with b = 1). A large event of magnitude M is therefore about three times more efficient at releasing energy than the combination of all smaller events of magnitude M-1. Why don’t we have only large ruptures then? The crust first needs to create small cracks, which only then can coalesce into large ones, and this takes time. Mature tectonic systems are more likely to take most of the deformation on large faults, but a fractal network of faults is more efficient at deforming in all possible directions (see King (1983) for a kinematic exercise where deformation occurs on triple junctions). Nature indeed found in fractals a very effective way to fill space, and it is often from those fractals that a power-law behaviour emerges (Mandelbrot, 1982).
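The arithmetic of the previous paragraph can be checked in a couple of lines (a minimal sketch, with b and c as in the text):

```python
b, c = 1.0, 1.5        # Gutenberg-Richter b-value; magnitude-energy scaling

energy_gain = 10 ** c  # +1 magnitude unit: ~31.6x more seismic energy...
count_gain = 10 ** b   # ...but magnitude M-1 events are 10x more numerous

# Energy released by the ten M-1 events per single magnitude-M event:
relative_release = count_gain / energy_gain  # ~0.32
print(f"one M event releases {1 / relative_release:.1f}x the energy "
      f"of all its M-1 counterparts combined")  # ~3.2x
```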


An important question remains: What is the maximum earthquake magnitude (Mmax) possible in a given region? Although it seems logical that this value should be bounded by the longest fault present, this has not been that obvious in probabilistic seismic hazard assessment (PSHA) until very recently. The most probable reason for the lack of extreme earthquake ruptures in PSHA maps is that there is often no historical precedent. Potential ruptures are most often simplified to straight segments, and the more complex ruptures present in PSHA models usually represent historical earthquakes. This is a case of under-sampling, a PSHA map being updated any time a new larger-than-anticipated event occurs (take the 2011 Tohoku earthquake, for example, whose magnitude was higher than the predicted Mmax; that value has since been increased). As put by catastrophist Gordon Woo, “in the earthquake lottery the actual historical realization is just one sample from a probability distribution of possible outcomes” (Woo and Mignan, submitted). An exposition of counterfactual risk analysis (Woo, 2016) was recently written for actuaries, to heighten their awareness of events that are insured but may not be known or included in any risk analysis (see also Lloyd’s, 2017).

A new generation of models now solves this Mmax discrepancy by modelling the earthquake rupture process, based on the theory of dynamic stress, in which a rupture can jump across fault segments, bend or branch to form more realistic earthquake behaviours (see “Mega-earthquakes, or when earthquake ruptures don’t stop”, based on Mignan et al., 2015). Quoting Lloyd’s “Emerging Risk Report 2017” on counterfactual risk, this approach “stretches the range of event possibilities in a plausible and scientific way, mitigates bias in models that are based on the same datasets, explores the tail risks, and helps underwriters and risk managers analyse extreme and emerging risks”. The Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) is the first official regional PSHA model to include multi-segment rupture propagation (Field et al., 2014). It is very likely that more PSHA models will follow suit.


The recurrence rate of Mmax earthquakes remains debated though. Should it be extrapolated from the Gutenberg-Richter law? If the maximum-size event is “characteristic”, its rate is higher than the one predicted by a power law. A thorough investigation of Gutenberg-Richter versus characteristic Mmax was done long ago by Wesnousky (1994). An intuitive approach (my personal view only) combining conservation of energy and geometry could explain how both processes coexist. Note that the characteristic earthquake represents a fattening of the power-law distribution, meaning more extreme earthquakes of magnitude Mmax.

The second-order uncertainty of earthquake shaking: Wave propagation processes hidden in the lognormal distribution

Earthquake ground motion depends mainly on two parameters, the earthquake magnitude M and the distance R from the fault rupture. Empirical models are used to describe historical records but those only capture average features. Here is the general formulation of a so-called GMPE (ground motion prediction equation):

$$\mathrm{PGA} = f(M, R, \varepsilon)$$
with PGA the peak ground acceleration and epsilon a random variable representing attenuation uncertainty. For ground acceleration, epsilon is lognormally distributed; for felt intensity, it is normally distributed.
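To make the role of epsilon concrete, here is a toy GMPE sketch; the functional form and coefficient values are illustrative placeholders, not a published model (note that drawing epsilon as a normal variable in log space is exactly what makes the resulting PGA lognormal):

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_gmpe(M, R, sigma=0.6, n=1):
    """Illustrative GMPE: ln(PGA) linear in magnitude, decaying with distance.
    All coefficients are made up for demonstration purposes."""
    ln_pga_median = -4.0 + 1.0 * M - 1.3 * np.log(R)  # f(M, R), PGA in g
    epsilon = rng.normal(0.0, sigma, size=n)          # normal in log space
    return np.exp(ln_pga_median + epsilon)            # PGA is thus lognormal

# Spread of shaking for an M7 earthquake at 20 km from the rupture:
pga = toy_gmpe(7.0, 20.0, n=1000)
print(f"median ~{np.median(pga):.2f} g, "
      f"95th percentile ~{np.percentile(pga, 95):.2f} g")
```

The heavy upper tail of the lognormal is already a first hint of extreme severity: a small fraction of sites experiences shaking several times stronger than the median prediction.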


Significant deviations are often observed, due to site-specific and/or earthquake source-related effects, such as basin amplification or rupture directivity (e.g., Bard et al., 1988; Somerville et al., 1997). The figure above combines USGS ShakeMap data (Allen et al., 2008) with the plotting advantages of R packages (e.g., Kahle and Wickham, 2013) to illustrate a textbook example of extreme earthquake severity: basin amplification during the 1985 M8 Mexico City earthquake. This phenomenon was not known before, as attested by a New York Times article from the same year: “The powerful earthquake that killed at least 7,000 people here in September was, in effect, a deadly test in nature’s real and very brutal laboratory […] The Mexico City disaster was the first, scientists and engineers say, to test the modern building technology […] Among the key conclusions drawn from the disaster […] are that architects, engineers and city planners are going to have to restudy geological formations beneath some cities that might greatly increase the destructive force of an earthquake.” This historic event led to the development of seismic microzonation to calibrate GMPEs to local geological and geophysical conditions.

Other amplifying factors, such as rupture directivity, can also be implemented in GMPEs (e.g., Somerville et al., 1997), although the use of such modified GMPEs remains limited. Physics-based waveform simulations, such as the CyberShake initiative (Graves et al., 2011), can now replace traditional GMPEs to address those potential shaking amplifications and create more realistic seismic hazard maps. This has yet to be applied in most official PSHA models and requires high computational capabilities. It is likely that simulation-based PSHA will become the norm in the not-so-distant future.

Natural clustering of earthquakes & emergence of seismic risk self-amplification

So far, we have seen how one earthquake can be extreme: (1) naturally, in a fractal network, tail events must occur to efficiently release the energy; (2) this tail can be extended to the longest fault rupture possible in a given region; (3) if this is not enough to release all the stored energy, more extremes, bounded at Mmax, must occur, meaning a fattening of the power law, which is already a fat tail compared to the good old normal distribution; (4) although ground shaking attenuates with distance from the rupture plane, local conditions can amplify the shaking, leading to more extreme severity. All of this is well known and, if not yet systematically implemented in PSHA, soon will be.

Let’s now move on to the case where we do not have one large earthquake, but two or three… Indeed, not only will a mainshock of magnitude M be followed with high likelihood by an aftershock of magnitude M-1 (the so-called Bath law; Bath, 1965), it will also increase the stress on some nearby faults, potentially leading to doublets or even triplets of large earthquakes in a relatively short period of time: the 2004–2005 M9.0–8.7 Sunda megathrust doublet (Nalbant et al., Nature, 2005), the 1999 M7.4–7.1 Izmit and Duzce North Anatolian doublet (Parsons et al., Science, 2000), and the 1811–1812 M7.3–7.0–7.5 New Madrid Central US triplet (Mueller et al., Nature, 2004) are good examples (and, by all appearances, high impact-factor journal material). The process is called “clock advance” and is estimated with static stress modelling (see Stein, Nature, 1999 for a review). Here again, the process is relatively well understood and standard models exist for time-dependent seismic hazard applications (e.g., the USGS Coulomb 3 software; Lin and Stein, 2004; Toda et al., 2011). Yet such modelling remains seldom used in regional PSHA. The main reason is that the basic formulation of PSHA assumes earthquake independence (read about the early history of PSHA in McGuire, 2008).
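For reference, the static stress transfer behind “clock advance” is usually quantified with the change in Coulomb failure stress on a receiver fault (the standard textbook form, not a formula specific to the papers cited above):

$$\Delta \mathrm{CFS} = \Delta\tau + \mu'\,\Delta\sigma_n$$

where Δτ is the shear stress change in the slip direction of the receiver fault, Δσn the normal stress change (positive when unclamping) and μ′ the effective friction coefficient; a positive ΔCFS brings the fault closer to failure, advancing its rupture “clock”.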


Simplified reverse fault network in northern Italy, modified from the European Seismic Hazard Model (ESHM13; Giardini et al., 2013). A simulated earthquake occurred on segment 1, increasing the static stress on nearby faults (nos. 2 and 18 here), meaning an increased probability of rupture on those segments. Modified from Mignan et al. (2018), stress computed with the USGS Coulomb 3 software.

Now, I will present the recent results of Mignan et al. (2018), who investigated the role of large earthquake clustering in the fattening of the risk curve. The modelling approach simply combined the USGS Coulomb 3 software, for computing stress transfer, with a basic Monte Carlo method for simulating time series, a flexible approach for dynamic (multi-)risk modelling (e.g., Mignan et al., 2014; 2017; Matos et al., 2015). The main innovation of this work is that it illustrates in a transparent manner how earthquake risk self-amplification can occur, considering both large earthquake clustering and its impact on building vulnerability.

To summarise 24 pages in only a few paragraphs, let us just consider three characteristic earthquakes on three nearby faults A, B and C. To make things easy, they occur with the same occurrence rate and the same magnitude, say once every three hundred years each (r = 1/300+1/300+1/300 = 1/100) and M = Mmax. Also, each of these earthquakes yields the same loss L(Mmax). What could be the impact of A+B+C clustering on the aggregate exceedance probability (AEP) curve, or risk curve? Let’s first consider the case in which A, B and C are independent. The probability of occurrence of one, two or three events can be estimated from the Poisson distribution (see Table and blue AEP curve). It would be too cumbersome to discuss here how static stress transfer is computed and how the rate of clusters is estimated from millions of simulations. Instead, we can mimic the clustering behaviour with the Negative Binomial distribution (see Table and red curve). Fortunately for us, Mignan et al. (2018) fitted this distribution to their stress transfer results and obtained a dispersion index of about 1.3, which we use here. As one can see, the occurrence of large earthquake doublets or triplets becomes realistic, while it was almost impossible before. This leads once more to some tail fattening.
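The gist of that comparison can be reproduced numerically. In the sketch below, the dispersion index of 1.3 comes from the paper, while the negative-binomial parameterisation (variance = 1.3 × mean) is the standard one; the exact cluster rates of Mignan et al. (2018) would differ:

```python
from scipy.stats import nbinom, poisson

mu, D = 1 / 100, 1.3  # annual rate of Mmax events; dispersion index
p = 1 / D             # negative binomial with mean mu and variance D * mu
n = mu * p / (1 - p)

for k in (1, 2, 3):   # at least one event, a doublet, a triplet (per year)
    print(f"P(N >= {k}): Poisson {poisson.sf(k - 1, mu):.1e}, "
          f"NegBinom {nbinom.sf(k - 1, n, p):.1e}")
```

With these numbers, the annual probability of a doublet jumps by more than an order of magnitude relative to the independent (Poisson) case, which is the tail fattening described above.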


So far, I have only talked about hazard amplification (a higher maximum earthquake magnitude Mmax, a higher frequency of Mmax, higher severity, a higher likelihood of large earthquake clustering). There are certainly many ways risk can also be amplified via building vulnerability and exposure. Here I will focus on damage-dependent building vulnerability, which is directly linked to the clustering of earthquakes. This process describes how a building becomes more fragile as it experiences more shaking episodes. As you will see, the impact can be quite dramatic.

We will follow the generic approach proposed in Mignan et al. (2018). More sophisticated methods exist, but all are based on the same principle: conceptually, the capacity of a structure degrades with increased damage. We can simply consider, as the source of degradation, the decrease in the plasticity range

$$\delta_p' = \delta_p - \delta_r$$
due to the subtraction of a residual drift ratio

$$\delta_r = 0 \quad \text{if } \delta \leq \delta_y$$

$$\delta_r = \delta - \delta_y \quad \text{if } \delta > \delta_y$$
Deformation below the yield drift ratio δy is elastic and therefore has no long-term effect (first equation). Above it, however, the deformation due to the earthquake is plastic and therefore permanent (second equation). Any time a new earthquake occurs, it impacts the building capacity via the drift demand

$$\ln \delta = a_1 + a_2 \ln(\mathrm{PGA})$$
(Baker and Cornell, 2006), the ground acceleration being estimated via a GMPE (see above). Finally, we compute the damage state

$$DS = 1 + 4\,\frac{\delta + \sum \delta_r - \delta_y}{\delta_p}, \qquad 1 \leq DS \leq 5$$
where DS1 (DS = 1) corresponds to insignificant damage (permanent deformation tending to 0) and DS5 (DS = 5) to building collapse (when the earthquake drift ratio equals the maximum possible strain the building can take). Although it may appear complicated at first, what is done is just a subtraction, removing a piece of potential deformation at each earthquake, meaning an increased likelihood of failure (read more in Mignan et al., 2018).

Let’s do an exercise with a building of standard yield displacement capacity 0.01 and a relatively low plastic displacement capacity 0.03, representative of some historic buildings (the other parameters are a1 = -3.2 and a2 = 1). Now let’s shake the building with 0.4g several times. What happens? The first earthquake leads to slight damage (DS2). The second, although a clone of the first event, leads to moderate damage (DS3), and the third… to heavy damage (DS4), close to collapse (DS5). Of course, more sophisticated models are needed to estimate the expected behaviour of a specific building, but it is remarkable that a simple equation is all we need to understand the main process leading to amplified building vulnerability.
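Here is a minimal sketch of this exercise, assuming the damage-state scale interpolates linearly between the yield capacity (DS1) and collapse at yield plus plastic capacity (DS5) and rounds to the nearest state; this is my reading of the equations above, not the exact code of Mignan et al. (2018):

```python
import math

def drift(pga, a1=-3.2, a2=1.0):
    # Drift-ratio demand from peak ground acceleration, assuming the
    # Baker-and-Cornell-style form ln(drift) = a1 + a2 * ln(PGA)
    return math.exp(a1 + a2 * math.log(pga))

def damage_state(total_drift, dy=0.01, dp=0.03):
    # DS1 at the yield capacity dy, DS5 at dy + dp (collapse)
    return min(max(round(1 + 4 * (total_drift - dy) / dp), 1), 5)

residual = 0.0                              # accumulated permanent deformation
for i in (1, 2, 3):
    total = drift(0.4) + residual           # same 0.4 g shock, weakened frame
    print(f"earthquake {i}: DS{damage_state(total)}")
    residual = max(total - 0.01, residual)  # the plastic part is permanent

# Output: DS2, then DS3, then DS4, matching the exercise above.
```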


To finish, here is a map of damage due to a triplet of earthquakes, as simulated in Mignan et al. (2018) to identify the impact of damage-dependent vulnerability. Once again, this leads to higher losses for each cluster and therefore to further fattening of the risk curve.


I hope that this article showed that many different physical processes can lead to extreme seismic risk, and that this cannot be described by one universal mathematical relationship (power law or otherwise). We talked about fractal geometry and other geometric constraints, conservation of energy, dynamic and static stress, wave amplification, and material plasticity. Many more aspects could certainly be included. It is only by proper physical modelling of all these aspects that the number of surprise “super-earthquakes” can be minimised.

New studies now undermine the apparent universality of the power law: Broido and Clauset (2018) showed that networks described by a power law are in fact rare, with the lognormal a possible alternative. Mignan (2015; 2016a; b) showed, in the earthquake case, that the famous Omori power law of aftershocks is ill-defined and should be replaced by a stretched exponential (the topic of a future LinkedIn article). What these studies demonstrate is that universality is an oversimplification, that reality is often more complicated than we think. That’s alright, we just need to work a bit more to better understand what is really going on…

Main reference:

Mignan, A., L. Danciu and D. Giardini (2018), Considering large earthquake clustering in seismic risk analysis, Nat. Hazards, 91, S149–S172, doi: 10.1007/s11069-016-2549-9

Other references:

Allen, T.I., et al. (2008), An Atlas of ShakeMaps for Selected Global Earthquakes, USGS Open-File Report 2008-1236, 34 pp.

Baker, J.W. and C.A. Cornell (2006), Which Spectral Acceleration Are You Using?, Earthquake Spectra, 22, 293–312

Bard, P.-Y., M. Campillo, F.J. Chavez-Garcia and F. Sanchez-Sesma (1988), The Mexico Earthquake of September 19, 1985 — A Theoretical Investigation of Large- and Small-scale Amplification Effects in the Mexico City Valley, Earthquake Spectra, 4, 609–633

Bath, M. (1965), Lateral Inhomogeneities of the Upper Mantle, Tectonophysics, 2, 483–514

Bilham, R. (2009), The seismic future of cities, Bull. Earthquake Eng., 7, 839–887, doi: 10.1007/s10518-009-9147-0

Broido, A.D. and A. Clauset (2018), Scale-free networks are rare, arXiv: 1801.03400v1

Clauset, A., C.R. Shalizi and M.E.J. Newman (2009), Power-Law Distributions in Empirical Data, SIAM Review, 51, 661–703, doi: 10.1137/070710111

Field, E.H., et al. (2014), Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) — The Time-Independent Model, Bull. Seismol. Soc. Am., 104, 1122–1180, doi: 10.1785/0120130164

Giardini, D. et al. (2013), Seismic Hazard harmonization in Europe (SHARE): online data, Resource, doi: 10.12686/SED-00000001-SHARE

Graves, R., et al. (2011), CyberShake: A Physics-Based Seismic Hazard Model for Southern California, Pure Appl. Geophys., 168, 367–381, doi: 10.1007/s00024-010-0161-6

Gutenberg, B. and C.F. Richter (1944), Frequency of earthquakes in California, Bull. Seismol. Soc. Am., 34, 184–188

Kahle, D. and H. Wickham (2013), ggmap: Spatial Visualization with ggplot2, The R Journal, 5(1), 144–161, ISSN: 2073-4859

Kahneman, D. (2011), Thinking, Fast and Slow, Farrar, Straus and Giroux, 499 pp.

King, G. (1983), The Accommodation of Large Strains in the Upper Lithosphere of the Earth and Other Solids by Self-similar Fault Systems: the Geometrical Origin of b-Value, PAGEOPH, 121, 761–815

Lin, J. and R.S. Stein (2004), Stress triggering in thrust and subduction earthquakes, and stress interaction between the southern San Andreas and nearby thrust and strike-slip faults, J. Geophys. Res., 109, B02303, doi: 10.1029/2003JB002607

Lloyd’s, ed. (2017), Reimagining history, Counterfactual risk analysis, Emerging Risk Report 2017, Understanding risk, 48 pp.

Mandelbrot, B. (1982), The Fractal Geometry of Nature, W.H. Freeman and co., 468 pp.

Matos, J.P., A. Mignan and A.J. Schleiss (2015), Vulnerability of large dams considering hazard interactions, Conceptual application of the Generic Multi-Risk framework, 13th ICOLD Benchmark Workshop on the Numerical Analysis of Dams, Switzerland, 285–292

McGuire, R.K. (2008), Probabilistic seismic hazard analysis: Early history, Earthquake Engng Struct. Dyn., 37, 329–338, doi: 10.1002/eqe.765

Mignan, A., S. Wiemer and D. Giardini (2014), The quantification of low-probability-high-consequences events: part I. A generic multi-risk approach, Nat. Hazards, 73, 1999–2022, doi: 10.1007/s11069-014-1178-4

Mignan, A., L. Danciu and D. Giardini (2015), Reassessment of the Maximum Fault Rupture Length of Strike-Slip Earthquakes and Inference on Mmax in the Anatolian Peninsula, Turkey, Seismol. Res. Lett., 86(3), 890–900, doi: 10.1785/0220140252

Mignan, A. (2015), Modeling aftershocks as a stretched exponential relaxation, Geophys. Res. Lett., 42, 9726–9732, doi: 10.1002/2015GL066232

Mignan, A., A. Scolobig and A. Sauron (2016), Using reasoned imagination to learn about cascading hazards: a pilot study, Disaster Prevention and Management, 25, 329–344, doi: 10.1108/DPM-06-2015-0137

Mignan, A. (2016a), Revisiting the 1894 Omori Aftershock Dataset with the Stretched Exponential Function, Seismol. Res. Lett., 87, 685–689, doi: 10.1785/0220150230

Mignan, A. (2016b), Reply to “Comment on ‘Revisiting the 1894 Omori Aftershock Dataset with the Stretched Exponential Function’ by A. Mignan” by S. Hainzl and A. Christophersen, Seismol. Res. Lett., 87, 1134–1137, doi: 10.1785/0220160110

Mignan, A., N. Komendantova, A. Scolobig and K. Fleming (2017), Chapter 14: Multi-Risk Assessment and Governance, Handbook of Disaster Risk Reduction & Management, 357–381, doi: 10.1142/9789813207950_0014

New York Times (1985), Lessons emerge from Mexican Quake, November 5 1985 issue

Somerville, P.G., N.F. Smith, R.W. Graves and N.A. Abrahamson (1997), Modification of Empirical Strong Ground Motion Attenuation Relations to Include the Amplitude and Duration Effects of Rupture Directivity, Seismol. Res. Lett., 68, 199–222

Sornette, D. (2009), Dragon-kings, black swans, and the prediction of crises, Int. J. Terraspace Sci. and Engineering, 2, 1–18

Taleb, N.N. (2007), The black swan, Random House, New York, 400 pp.

Toda, S., R.S. Stein, V. Sevilgen and J. Lin (2011), Coulomb 3.3 Graphic-rich deformation and stress-change software for earthquake, tectonic, and volcano research and teaching — user guide, USGS Open-File Report 2011-1060, 63 pp.

Utsu, T. (1999), Representation and Analysis of the Earthquake Size Distribution: A Historical Review and Some New Approaches, Pure Appl. Geophys., 155, 509–535

Wesnousky, S.G. (1994), The Gutenberg-Richter or Characteristic Earthquake Distribution, Which Is It? Bull. Seismol. Soc. Am., 84, 1940–1959

Woo, G. (2016), Counterfactual Disaster Risk Analysis, Variance, in press

Youngs, R.R., S.-J. Chiou, W.J. Silva and J.R. Humphrey (1997), Strong Ground Motion Attenuation Relationships for Subduction Zone Earthquakes, Seismol. Res. Lett., 68, 58–73

This article was originally published on LinkedIn on Jun. 16, 2018, under the title “Beyond the power-law tail, a tale of extreme earthquake risk”.


