Understanding the source characteristics of an earthquake, including the size and location of asperities, the rupture process, rupture directivity, source duration, and stress drop, is important in seismology and has applications in seismic hazard mitigation. Here, we link seismic observations and source modeling through seismic signals to explore potential source models for both small and large earthquakes. Observations show that microseismic events from the same fault area can have similar source durations but different seismic moments, violating the commonly assumed moment-duration scaling. We use numerical simulations of earthquake sequences to demonstrate that strength variations over a seismogenic patch provide a potential explanation of such behavior, with the event duration controlled by the patch size and the event magnitude determined by how much of the patch area is ruptured. We find that the stress drops estimated by typical seismological analyses for the simulated sources increase significantly with event magnitude, ranging from 0.01 to 10 MPa, whereas the actual stress drops determined from the on-fault stress changes are magnitude-independent, at ~3 MPa. Our findings suggest that fault heterogeneity results in local deviations from the moment-duration scaling and in earthquake sources with complex shapes of the ruptured area, for which stress drops may be significantly underestimated by current seismological methods. For a large earthquake, observations of the Mw 6.3 Meinong, Taiwan, earthquake demonstrate that large pulse-like velocity ground motions produced heavy damage in Tainan City; we also discovered unique double S-phases at stations south of the epicenter. We recognize that this earthquake can be separated into a foreshock (Mw 5.6) at the hypocenter and a mainshock (Mw 6.2) near Tainan City, with a spatial separation of 12 km. The mainshock itself is responsible for the damage in Tainan due to its strong directivity and near-source effects.
Our preliminary dynamic model with two asperities, corresponding to the foreshock and the mainshock, confirms these source characteristics. The synthetics for the model explain the observed ground motions quite well. We will further investigate the rupture process between the two asperities.
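The seismological stress-drop estimates contrasted with the on-fault values above are conventionally derived from the seismic moment and a spectral corner frequency through a circular-crack model. A minimal sketch of that standard calculation, assuming a Brune-type spectrum with Madariaga's S-wave constant (the moment, corner frequency, and shear-wave speed below are illustrative, not values from the study):

```python
def stress_drop(M0, fc, beta=3500.0, k=0.372):
    """Eshelby circular-crack stress drop (Pa) from seismic moment M0 (N*m)
    and corner frequency fc (Hz). beta: shear-wave speed (m/s);
    k: Madariaga's constant relating corner frequency to source radius."""
    r = k * beta / fc                # inferred source radius (m)
    return 7.0 * M0 / (16.0 * r**3)

# Illustrative microearthquake: M0 = 2e11 N*m (~Mw 1.5), fc = 10 Hz
print(stress_drop(2e11, 10.0) / 1e6, "MPa")
```

Because the inferred radius enters cubed, a ruptured area whose shape departs from the assumed circular crack, as for the heterogeneous patches above, can bias such estimates by orders of magnitude.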
Massachusetts Institute of Technology
We studied the source mechanisms of repeating earthquakes at multiple scales, from induced seismicity in oil/gas fields to pico-seismicity in the laboratory. A Bayesian source mechanism inversion method was used to analyze both field and laboratory data. The seismicity in the oil/gas fields is probably induced by fluid injection and/or extraction reactivating pre-existing faults. Double-couple-dominant source mechanisms were obtained for the field data, and we observed the importance of the regional stress field and local fault networks in generating this micro-seismicity. Laboratory-generated acoustic emissions (AEs) can be used to study different rupture processes (e.g., hydraulic fracturing, stick-slip) in a controllable way. We studied the rupture processes in a saw-cut Lucite sample subjected to conventional triaxial loading, with ultrasonic AEs monitored by eight PZT sensors. Two cycles of AEs were detected, with an occurrence rate that decreased from the beginning to the end of each cycle while the relative magnitudes increased. Correlation analysis indicated that these AEs clustered into two groups: one with frequency content between 200 and 300 kHz and a second with frequency content between 10 and 50 kHz. The high-frequency events, with almost identical waveforms, originate from sharp local asperities on the saw-cut plane. The locations of the low-frequency events show a slow slip process leading to the high-frequency events in each cycle. In this single experiment, the proximity of the low-frequency events correlated with the subsequent triggering of large high-frequency repeating events.
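A minimal sketch of the kind of waveform-correlation grouping described above, using zero-lag normalized correlation on synthetic high- and low-frequency wavelets (the signals and the 0.8 threshold are invented for illustration, not the experiment's values):

```python
import numpy as np

def corr_matrix(waveforms):
    """Pairwise zero-lag normalized correlation between event waveforms
    (rows of a 2-D array), used to group similar AEs into families."""
    w = waveforms - waveforms.mean(axis=1, keepdims=True)
    w = w / np.linalg.norm(w, axis=1, keepdims=True)
    return w @ w.T

# Two synthetic "families": a high-frequency and a low-frequency wavelet,
# each repeated three times with independent noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
hi = [np.sin(2 * np.pi * 40 * t) * np.exp(-5 * t)
      + 0.05 * rng.standard_normal(t.size) for _ in range(3)]
lo = [np.sin(2 * np.pi * 5 * t) * np.exp(-5 * t)
      + 0.05 * rng.standard_normal(t.size) for _ in range(3)]
C = corr_matrix(np.array(hi + lo))
print((C > 0.8).astype(int))   # block structure separates the two families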
Univ. of Edinburgh
The seismic properties of fractured rocks saturated with multiple fluids are often intrinsically frequency-dependent due to a range of fluid-rock interaction mechanisms, including the "squirt" and "patch" effects. We will review recent theoretical work that treats the squirt and patch effects in a unified framework, and explain how the results generalize to anisotropy. We then compare the theoretical results to laboratory measurements of the properties of synthetic rock samples. The analysis suggests that the frequency dependence of certain seismic attributes, including shear-wave splitting, is sensitive to fluid saturation, fracture properties, and relative permeability. In 2017 a unique broadband multicomponent seismic experiment was carried out in the North Sea, aiming to measure such seismic attributes in a gas chimney structure close to a potential CO2 storage site with a view to assessing potential leakage hazards. We will discuss the rationale and status of this work.
University of Southern California
Subduction zones produce Earth's largest earthquakes as the recycling of large oceanic plates into the mantle is resisted by friction with the upper plate. The dynamics of the seismic cycle at subduction zones is complex, with a host of rupture styles from slow to fast, further complicated by the mechanical coupling between fault slip and viscoelastic flow in the asthenospheric mantle. Long time series of geodetic data provide a window into subduction dynamics that can be used to decipher the contributions of localized and distributed deformation. A comprehensive picture of the internal deformation at these margins can be drawn using novel imaging and modeling techniques. As faulting represents a significant hazard, a large body of work has focused on modeling fault slip. Recently, we augmented the modeling toolkit by representing the effect of internal strain on surface deformation. This allows us to directly invert surface observations for the distribution of strain in the mantle wedge and oceanic asthenosphere, as the data permit. I will review a few recent results for the Sumatra subduction zone, the Taiwan orogen, and the Chilean subduction zone to illustrate the potential and limits of this new imaging technique, which is akin to low-frequency tomography. The approach also allows us to simulate the interaction between megathrust slip and viscoelastic flow in the surrounding mantle. The simulations explain the peculiar retrograde motion of the seafloor following great and giant earthquakes.
Drawing insight from structural geology, mineral physics, and theoretical work, we present a preliminary model of subduction zones that explains the spectrum of fault slip throughout the seismic cycle: very-low-frequency and tsunami earthquakes occur in the accretionary wedge due to the low-friction but velocity-weakening behavior of the décollement; large seismogenic-zone earthquakes take place where the oceanic slab cuts the continental upper and mid-crust; long-term slow-slip events occur in a serpentinized shear zone at continental lower-crustal depths; short-term slow-slip events occur in the brittle upper mantle; and creep and viscoelastic flow take place at greater depths. These developments help us better understand the physics underlying seismic hazards, working towards a more resilient society.
Over the past few years, deep learning has led to rapid advances in applied computer science, from machine vision to natural language processing. These methods are now accessible to scientists across all disciplines due to the availability of easy-to-use APIs and affordable GPU acceleration. We demonstrate two specific applications of deep learning within earthquake science. In the first, we train a deep neural network to learn computationally efficient representations of viscoelastic solutions, across large ranges of times, locations, and rheological structures. Once found, these efficient neural network representations may accelerate computationally intensive viscoelastic calculations by a factor of 500. In the second, we focus on aftershock location patterns and find that a fully connected neural network trained on 131,000+ mainshock-aftershock pairs can explain aftershock locations in an independent testing data set of 30,000+ mainshock-aftershock pairs more accurately than static elastic Coulomb failure stress change. In contrast to the common assertion that deep learning produces “black box” results, the trained neural networks can provide some interesting physical insights.
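As a toy illustration of the surrogate idea in the first application, the sketch below trains a tiny fully connected network (NumPy only, hand-written backpropagation) to reproduce a one-dimensional Maxwell-style relaxation curve; the architecture, relaxation time, and training settings are invented stand-ins, not the authors' actual emulator:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)[:, None]
y = np.exp(-t / 1.0)                        # "expensive" reference solution

W1 = 0.5 * rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(10000):
    h = np.tanh(t @ W1 + b1)                # hidden layer
    pred = h @ W2 + b2                      # network output
    err = pred - y
    gW2 = h.T @ err / len(t); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)        # backprop through tanh
    gW1 = t.T @ dh / len(t); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("surrogate MSE:", float(np.mean(err**2)))
```

Evaluating the trained network is just a few matrix multiplies, which is the source of the kind of speedup reported above.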
University of Oregon
Are large earthquakes deterministic? This is an old and much-debated question in earthquake physics, which we revisit in this talk with new findings from global and regional seismic and geodetic observations. The rupture process of a large (M8+) earthquake can take several minutes; when in this process is the final size of the event determined? Is there any predictability to it? Are the first few seconds of a large earthquake different from those of a smaller one? Two extreme views of this problem exist. In one, rupture is fully deterministic, and the early phase (nucleation) of an earthquake carries enough information to ascertain the event's final fate. At the other extreme there is no determinism whatsoever, and only when a large event has ruptured completely can data constrain its magnitude. Recent results argue strongly for a middle-of-the-road model of weak determinism: at nucleation there is no difference between earthquakes of different final magnitudes, but soon thereafter (within tens of seconds), and well before the rupture is finished, the earthquake organizes into a slip pulse with properties diagnostic of its final magnitude. We will discuss the observations that constrain this view and their implications for earthquake and tsunami early warning.
Recent advances in seismic instrumentation make rapid deployment of massive numbers of seismic sensors possible; nodal (Large-N) and distributed acoustic sensing (DAS) arrays are outstanding examples. Large-N and DAS arrays can easily comprise hundreds to thousands of sensors with spacing from meters to hundreds of meters, which provides a unique opportunity to detect ultra-weak earthquakes and other unconventional sources. Meanwhile, the unprecedented data volume poses great challenges for processing and analysis. Because the goal is to detect ultra-weak signals, conventional amplitude-based detection methods are no longer adequate. We therefore developed and implemented two similarity-based methods: template matching (similarity along time) and neighbor similarity (similarity across space). We applied these methods to two Large-N arrays (Long Beach and Oklahoma) as well as two DAS arrays (the Brady geothermal field in Nevada and Goldstone, California). In these four case studies of distinct settings, numerous local, regional, and teleseismic earthquakes below the noise level, as well as other unconventional events of known or unknown origin, are revealed. The results demonstrate the great detection capability of these novel instrumentation techniques, which can open a new window onto the hidden seismic world.
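The template-matching idea, matched filtering with a sliding normalized cross-correlation, can be sketched as follows; the synthetic wavelet, noise level, and 0.6 threshold are illustrative assumptions, not values from the studies above:

```python
import numpy as np

def match_template(data, template):
    """Sliding normalized cross-correlation (Pearson r) of a short
    template against a long continuous record; values near 1 flag
    repeats of the template even below the visual noise level."""
    n = len(template)
    tpl = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(data) - n + 1)
    for i in range(cc.size):
        win = data[i:i + n]
        cc[i] = np.sum(tpl * (win - win.mean())) / (win.std() + 1e-12)
    return cc

# Synthetic test: bury two copies of a wavelet in unit-variance noise.
rng = np.random.default_rng(2)
wavelet = np.sin(2 * np.pi * 8 * np.linspace(0, 1, 100)) * np.hanning(100)
data = rng.standard_normal(2000)
data[300:400] += 3 * wavelet
data[1500:1600] += 3 * wavelet
cc = match_template(data, wavelet)
print(np.where(cc > 0.6)[0])   # detection indices
```

With this setup the detections should land near the two buried copies (samples 300 and 1500); in practice the templates are previously catalogued events and the threshold is set from the noise statistics.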
CEA, China, UCLA
Deformation phenomena accompany the processes of earthquake preparation, occurrence, and post-seismic adjustment, so geodetic data provide an important constraint on earthquake deformation. To reveal the residual distribution pattern and obtain a high-accuracy, large-scale vertical movement field, we conducted a systematic analysis of dynamic adjustment issues based on simulated multi-source vertical data for the Chinese mainland, and we developed two joint adjustment programs. The results show that the network geometry and observation interval mainly affect the residual distribution of the vertical rates, and that both strict constraints and the Helmert adjustment method can greatly improve the precision and reliability of the vertical rates. Specifically, the Helmert adjustment optimally determines the weight ratio between leveling and space-geodetic observations and exploits the advantages of both data types, so it is a preferred strategy for the joint dynamic adjustment of large-area, multi-source vertical data. To obtain a reliable GPS strain-rate distribution, we developed a Least-Squares Collocation (LSC) method on a spherical surface. A comparison of several methods for computing GPS strain-rate fields from modeled and simulated data shows that least-squares collocation is superior in precision and robustness. Strain-rate results obtained for the Chinese mainland using GPS data show that the LSC method has only slight edge effects and remains stable as the input data become sparse, whereas the spherical harmonics method has serious edge effects and the multi-surface function method behaves unsteadily. Finally, the correlation between the strain-rate field and earthquake occurrence in the Chinese mainland is analyzed, with the strain rate calculated by the LSC method and the earthquake catalog taken from the China Earthquake Networks Center.
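As a simplified flat-Earth counterpart to the strain-rate estimation discussed above (the spherical least-squares collocation itself is more involved), the sketch below fits a uniform velocity-gradient tensor to synthetic GPS velocities by ordinary least squares; all coordinates, rates, and noise levels are invented:

```python
import numpy as np

# Model: station velocity v = v0 + L x, with L the velocity-gradient
# tensor; the strain rate is the symmetric part of L.
rng = np.random.default_rng(3)
xy = rng.uniform(-100e3, 100e3, (20, 2))            # station coords (m)
L_true = np.array([[3e-8, 1e-8],
                   [1e-8, -2e-8]])                  # true gradient (1/yr)
v = xy @ L_true.T + 1e-4 * rng.standard_normal((20, 2))  # velocities (m/yr)

G = np.hstack([np.ones((20, 1)), xy])               # columns: 1, x, y
coef, *_ = np.linalg.lstsq(G, v, rcond=None)        # rows: v0, d/dx, d/dy
L_est = coef[1:].T                                  # estimated gradient
strain_rate = 0.5 * (L_est + L_est.T)               # symmetric part (1/yr)
print(strain_rate)
```

Least-squares collocation additionally models the signal's spatial covariance, which is what suppresses the edge effects and sparse-data instability noted above.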
Typical subduction zone tsunami models utilize kinematic earthquake source models, in which the slip at each point on the fault is prescribed at the beginning of the modeling process. The fault slip pattern is used to produce seafloor deformation, which in turn serves as an initial condition for hydrodynamic tsunami modeling. Such models can include multiple realizations of stochastic fault slip to help characterize both the mean tsunami generation and its potential variability. However, the slip patterns assumed in these models are not guaranteed to be consistent with any physically plausible earthquake source process, and in particular may not be consistent with the fault geometry in the region. For this reason, my colleagues and I use spontaneous dynamic earthquake modeling, in which the slip pattern is a calculated result of the model, to underlie models of tsunamis. I will show examples from a branched subduction/splay fault system (modeled after the Nankai Subduction Zone) with multiple realizations of stochastic shear pre-stress on the fault segments. We find that the variability in the initial stress pattern leads to variability in the rupture path on the fault system and in the final earthquake magnitude: some random stress realizations lead primarily to rupture of the plate-boundary thrust, and others primarily to rupture of the splay faults. In turn, this variability in the earthquake source leads to a large variability in the amplitude of the resultant tsunami on the nearby coastline. Somewhat counter-intuitively, we find that when we scale the sizes of the modeled earthquakes to produce equivalent earthquake magnitudes, the resultant tsunami variability actually grows rather than shrinks. We discuss the physical origin of this effect as well as the implications for tsunami hazard in areas with geometrically complex fault systems.
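One common way to build such stochastic pre-stress realizations is to filter random phases with a power-law spectrum; a minimal 1-D sketch (the spectral decay, mean stress, and fluctuation level are illustrative assumptions, not the values used in the models above):

```python
import numpy as np

def stochastic_prestress(n, decay=2.0, mean=10e6, frac=0.3, seed=0):
    """1-D random shear pre-stress profile with a power-law amplitude
    spectrum (~ k**(-decay/2)): one simple way to generate stochastic
    stress realizations. All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n)
    amp = np.zeros(k.size)
    amp[1:] = k[1:] ** (-decay / 2.0)              # spectral decay, zero DC
    phase = np.exp(2j * np.pi * rng.uniform(size=k.size))
    field = np.fft.irfft(amp * phase, n)
    field = (field - field.mean()) / field.std()   # zero mean, unit std
    return mean * (1.0 + frac * field)             # pre-stress (Pa)

tau = stochastic_prestress(512, seed=42)
print(tau.mean() / 1e6, tau.std() / 1e6)           # ~10.0 MPa mean, ~3.0 MPa std
```

Drawing many seeds gives an ensemble of pre-stress realizations of the kind whose rupture-path variability the abstract describes; a 2-D version follows the same recipe with a 2-D FFT over the fault plane.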
Low-frequency earthquakes (LFEs) are repetitive seismic events that accompany slow slip on plate boundaries, the cyclic slow aseismic release of built-up tectonic stress. Recent work has shown that LFE activity is a useful indicator of when slow slip occurs, even when its geodetic signature is buried in the noise. We first demonstrate that an M7.5 slow slip event in the Mexican subduction zone was not a continuous slow rupture over six months. The long-period surface displacement recorded by the Global Positioning System suggests a 6-month duration, but the motion in the direction of tectonic release occurs only sporadically, over a total of 55 days, with its surface signature attenuated by rapid relocking of the plate interface. We conclude that large slow slip events are clusters of short-duration slow transients.