The Generalized Long-Term Fault Memory Model and Applications to Paleoseismic Records
James Neely
University of Chicago
- Location: Online-only seminar via Microsoft Teams
- Summary
Commonly used large earthquake recurrence models have two major limitations. First, they predict that the probability of a large earthquake stays constant or even decreases once a fault is "overdue" (past the observed average recurrence interval), so additional accumulated strain does not make an earthquake more likely. Second, they assume that the probability distribution of the time between earthquakes is the same over successive earthquake cycles, even though earthquake histories show clusters and gaps. These limitations arise because the models are purely statistical: they do not incorporate fundamental aspects of the strain accumulation and release processes that cause earthquakes. Here, we present a new large earthquake probability model, built on the Long-Term Fault Memory model framework, that better reflects these processes. This Generalized Long-Term Fault Memory model (GLTFM) assumes that earthquake probability always increases with time between earthquakes as strain accumulates, and allows earthquakes to release only part of the strain accumulated on the fault. GLTFM estimates when residual strain is likely present and its impact on the probable timing of the next earthquake in the sequence, and so can describe clustered earthquake sequences better than commonly used models. GLTFM's simple implementation and versatility should make it a powerful tool in earthquake forecasting.
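The core ideas in the summary — hazard that grows with accumulated strain, and events that release only part of that strain, leaving a residual that shortens the wait to the next event — can be illustrated with a toy simulation. This is a minimal sketch of the general concept, not the actual GLTFM implementation; the function name, parameters, and linear strain/hazard relationships are illustrative assumptions.

```python
import random

def simulate_strain_cycles(n_events, release_fraction=0.7, rate=1.0,
                           hazard_scale=0.01, seed=0):
    """Toy sketch (not the published GLTFM): strain accumulates linearly
    each time step; the per-step earthquake probability grows with the
    current strain; each event releases only `release_fraction` of the
    strain, so residual strain raises the hazard immediately after an
    event and tends to produce shorter, clustered intervals."""
    rng = random.Random(seed)
    strain = 0.0
    steps_since_last = 0
    intervals = []
    for _ in range(10**6):  # safety cap on total steps
        strain += rate
        steps_since_last += 1
        # Hazard increases monotonically with accumulated strain.
        p = min(1.0, hazard_scale * strain)
        if rng.random() < p:
            intervals.append(steps_since_last)
            strain *= (1.0 - release_fraction)  # partial strain release
            steps_since_last = 0
            if len(intervals) >= n_events:
                break
    return intervals
```

Comparing `release_fraction=1.0` (complete release, as conventional models implicitly assume) against a partial-release run shows the clustering effect: with residual strain carried over, the mean recurrence interval is shorter and early post-event probabilities are higher.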