
What Ever Happened to Earthquake Prediction?

Reprinted with permission from Geotimes, Vol 17, March 1997.
Copyright the American Geological Institute, 1997.
by Christopher Scholz, Lamont-Doherty Earth Observatory, P.O. Box 1000, Palisades, N.Y. 10964

In February 1995, the National Academy of Sciences sponsored a colloquium, "Earthquake Prediction: The Scientific Challenge," in Irvine, Calif. The proceedings, which were published in April 1996, will make bemusing reading for anyone not intimately familiar with the long and checkered history of this topic (Proceedings of the National Academy of Sciences, v. 93, no. 9). With several notable exceptions, the contributions consisted of theoretical papers that dealt primarily with the earthquake mechanism and only incidentally, if at all, with the problem of earthquake prediction. As a collection, the volume succeeds admirably in talking around its subject.

If there is a scientific challenge to earthquake prediction, this group of illuminati appeared unwilling to admit much of a clue as to how to go about meeting it. Academy colloquia are usually organized to showcase new and exciting developments in science, but this one was more a wake than a celebration: the participants seemed more embarrassed than enthusiastic about their topic. All of which leads one to wonder, "Whatever happened to earthquake prediction?"

High Hopes in the 1970s

The heyday of scientific cockiness about the predictability of earthquakes was the '70s. In 1973, my colleagues Lynn Sykes and Yash Aggarwal and I put forward the dilatancy-diffusion theory in an attempt to explain a great variety of phenomena that had been observed shortly before earthquakes. These phenomena were actually precursory, said the theory, and therefore could be used to predict the earthquake. This development was ballyhooed, both by the global media and in the scientific literature, as signaling the advent of practical earthquake prediction. To raise the level of clamor yet higher, the Chinese government announced in 1975 that it had successfully predicted a magnitude 7.5 earthquake, and that the city of Haicheng had been evacuated in advance, saving thousands of lives.

The euphoria, however, was short-lived. In 1976, a U.S. team visited China to investigate the Haicheng prediction. They found that the prediction was apparently the result of an unusually pronounced foreshock sequence coinciding with a sort of widespread public hysteria associated with a Cultural Revolution declaration that earthquake prediction could be accomplished through the unfailing efforts of the "broad masses of the people." However one judged the merits of this prediction, it did not lead to any method that could be translated into western scientific practice. To cement this disillusionment, the famously unpredicted and disastrous Tangshan earthquake struck, killing 250,000 people in northern China in 1976.

In the meantime, back in California, several tests were made to detect one of the precursors predicted by the dilatancy-diffusion theory. These tests - made on magnitude 5-ish earthquakes and carried out, in my view, rather desultorily - produced negative results. Though none too conclusive, these results were accepted by consensus as sealing the failure of both the theory and of that particular form of earthquake precursor, neither of which has been seriously investigated since.

A Tradition of Suspicion

The ready, even eager, acceptance of these negative conclusions reflects the fact that the western scientific community has never viewed earthquake prediction with anything shy of extreme suspicion, if not doctrinaire disbelief. This view is typified, if somewhat brusquely, by the late Charles F. Richter, the curmudgeonly doyen of American earthquake seismology, who was fond of saying, "Bah, no one but fools and charlatans try to predict earthquakes!" (Although this statement is often dismissed as apocryphal, I actually once overheard Richter make this declaration to a TV reporter.)

This assertion, of course, conveys more than mere skepticism, for it states that earthquake prediction is not a valid topic for scientific investigation. It perhaps reflects a reaction to the long struggle in the West, still very much alive in some quarters, between scientific and religious authority. The prediction of earthquakes, long the province of the occult, is for that reason alone beyond the pale of respectable scientific study.

To be sure, Richter had a point: the subject does attract all manner of self-proclaimed oracles. Some, like Jim Berkland, the San Jose man who for several years in the '80s kept the media entranced with his predictions based on fluctuations in the number of missing dog and cat notices in local newspapers, are simply amusing - unless you happen to be the U.S. Geological Survey (USGS) spokesperson repeatedly called upon to debunk them. Somewhat more annoying are cases like that of Iben Browning, who provoked a nationwide media frenzy several years ago with his prediction of an earthquake in New Madrid, Mo.

The really serious cases, though, are those that involve scientists - cases that often become causes célèbres. A current dispute of this type concerns the VAN method, named for its creators, Varotsos, Alexopoulos, and Nomicos, three physicists who since 1982 have claimed successful predictions of earthquakes in Greece based on electrical signals. The validity of their claims and methodology has been hotly contested; the debate occupies an entire recent issue of Geophysical Research Letters (v. 23, no. 11, 1996).

Governments Take Up the Challenge

The events of the '70s were not without long-range consequences. It was in this heady atmosphere that the U.S. Congress enacted the National Earthquake Hazards Reduction Program (NEHRP) in 1977. In Japan, a fledgling earthquake prediction program, which had been limping along since 1965, saw its budget double in 1974 and continue to grow rapidly thereafter. Suddenly there were national programs in the two leading earthquake-science countries, and both had a component featuring earthquake prediction, although the focus was very different in the two cases. NEHRP, a multi-agency effort involving the USGS, the National Science Foundation, and the Federal Emergency Management Agency, became (and continues to be) the mainstay of earthquake studies in the United States. It focuses on both prediction and mitigation, and includes both earthquake engineering and earthquake science, broadly based.

The prediction component of NEHRP always emphasized the research nature of this endeavor and was never a major activity. At its peak, prediction studies accounted for no more than 20 percent of the USGS-administered part of the program, and this share gradually decreased. During the early years of NEHRP there was a sizable group of independent prediction researchers, both inside and outside of the USGS, but their numbers dwindled over time, due to the unusually severe peer criticism applied to this particular line of research. Eventually, virtually all prediction work became concentrated in one officially sanctioned activity: the Parkfield Prediction Experiment.

Predicting earthquakes is as easy as one-two-three. Step 1: Deploy your precursor detection instruments at the site of the coming earthquake. Step 2: Detect and recognize the precursors. Step 3: Get all your colleagues to agree and then publicly predict the earthquake through approved channels.

Bill Bakun, Al Lindh, and their colleagues at the USGS thought that they had figured out at least the premise of Step 1, namely where an earthquake was going to happen in the near future. A section of the San Andreas Fault near the hamlet of Parkfield in central California had ruptured in magnitude 6 earthquakes six times since 1857, with an average repeat time of 22 years; the last quake had occurred in 1966. Assuming a plausible hypothesis for the recurrence of these events, the USGS scientists estimated (at a 95-percent confidence level) that the next Parkfield earthquake would occur before January 1993. Thus, in 1984 the Parkfield Prediction Experiment was launched.
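
As a rough illustration of the reasoning - not the actual Bakun-Lindh analysis, which treated the irregular 1934 shock specially and used a more elaborate recurrence model - a back-of-the-envelope version can be sketched in a few lines of Python. It simply treats the intervals between the commonly cited historical Parkfield events as roughly Gaussian, so it will not reproduce the published January 1993 window exactly.

# Hypothetical sketch of a recurrence-window estimate; the statistical
# treatment here is mine, far simpler than the published USGS analysis.
import statistics

event_years = [1857, 1881, 1901, 1922, 1934, 1966]   # commonly cited Parkfield ruptures
intervals = [b - a for a, b in zip(event_years, event_years[1:])]

mean_interval = statistics.mean(intervals)            # about 22 years
sd_interval = statistics.stdev(intervals)             # scatter in the intervals

expected_next = event_years[-1] + mean_interval       # most likely year of the next event
upper_95 = expected_next + 1.645 * sd_interval        # one-sided 95% bound, Gaussian assumption

print(f"expected next event near {expected_next:.0f}; "
      f"95% window closes by about {upper_95:.0f}")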

An incredible array of instrumentation was assembled in the Parkfield area: dense seismic and geodetic networks, and instruments for measuring water level, electrical resistivity, magnetic fields, and geochemical precursors. This effort was a real-time experiment, so all the data had to be telemetered to the USGS in Menlo Park and analyzed by computer in near real time.

Anticipating Step 3, protocols were prearranged in which different combinations of precursors would lead automatically to different alert levels, which in turn would trigger a schedule of notification scenarios. Key personnel were put on a round-the-clock beeper roster. Now all they had to do was wait for the precursors to roll in.

While the U.S. earthquake program focused mainly on mitigation, the Japanese program became entirely a prediction program. By 1978 it was no longer called a research program and had become committed to predicting a magnitude 8 earthquake in a highly populated and developed part of the country - the Tokai district on Suruga Bay, west of Tokyo.

The rationale for the selection of this particular site was similar to that for Parkfield, but for the Japanese, the stakes were immensely greater. Not only had the Japanese public been convinced that earthquake prediction was possible, but the consequences of failing to predict such a potentially disastrous earthquake were huge. Moreover, as Tokyo University's Bob Geller complained, the Japanese were putting all their eggs into one basket, neglecting the mitigation and earthquake engineering side of earthquake studies, as well as other areas of potential risk.

Opportunity Lost

The long-term forecasts that led to the selection of the Tokai and Parkfield sites for short-term prediction experiments are the other side of the earthquake prediction coin. Long-term forecasting is a much better-posed scientific problem and is the basis for earthquake hazard analysis. It has its roots in H.F. Reid's corollary to his elastic rebound theory: the next earthquake is likely to happen when the strain released in the previous one has been restored.
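
Reid's corollary lends itself to a one-line estimate: if the last earthquake released a certain amount of slip, and steady tectonic loading restores that slip at a known rate, the expected recurrence time is roughly the slip divided by the loading rate. The sketch below uses invented, round numbers purely for illustration; it is the simplest "time-predictable" reading of the corollary, not a hazard calculation.

# Illustrative only: Reid's corollary as a time-predictable estimate.
# The slip and loading rate are assumed round numbers, not data.
coseismic_slip_m = 3.5            # slip released by the last earthquake (assumed)
loading_rate_mm_per_yr = 35.0     # long-term fault slip rate (assumed)

recurrence_yr = coseismic_slip_m * 1000.0 / loading_rate_mm_per_yr
print(f"strain restored after roughly {recurrence_yr:.0f} years")   # ~100 years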

The application of this principle led to the long-term forecast of the Loma Prieta (World Series) earthquake of 1989. In a USGS open-file report in 1983, Al Lindh pointed out that the southernmost part of the rupture zone of the 1906 San Francisco earthquake had slipped much less than points to the north and thus had a high probability of rupturing within the next few decades. Lynn Sykes and Stu Nishenko further refined this forecast in 1984, and I weighed in with further analysis in 1985; these subsequent forecasts appeared in refereed journals. I stated that the imminence of rupture in this sector (Loma Prieta) was greater than for any part of the San Andreas except Parkfield.

In the last few years before the earthquake occurred, this issue was debated in several forums, and the USGS was urged to instrument the area for detecting precursors. Dissenting opinion within the USGS and organizational inertia prevented any action. Aside from electromagnetic signals picked up by Stanford's A.C. Fraser-Smith, who had been serendipitously operating a ULF receiver in the vicinity, no precursors were detected for the Loma Prieta earthquake, for no other instruments had been deployed, and a great opportunity was lost.

What Became of the Predictions?

Meanwhile, 1993 has come and gone at Parkfield with no earthquake, while two other damaging earthquakes have struck in California, both on little-known faults in the south. The error in the Parkfield timetable has been explained in two ways. Jim Savage of the USGS has argued that the original premise was fallacious, since it considered only a single, non-unique recurrence scenario. On the other hand, Steve Miller of ETH Zurich has suggested that a nearby thrust earthquake in Coalinga in 1983 reset the clock at Parkfield. Whatever the explanation for the tardiness of the next Parkfield earthquake, the wait goes on.

The 1995 Kobe earthquake disaster has given the Japanese a much bigger hangover, exposing the lopsidedness of their program. Not only did their much-vaunted earthquake-resistant construction prove fallible, but the emergency response to the earthquake was scandalous - the relief corps did not enter the city until almost 10 hours after the earthquake.

Kobe was not an unexpected place for such a disaster to occur. In 1981, the causative fault of the Kobe earthquake had been designated by geologists as one of 12 "precautionary" faults in Japan - active faults in a late stage of their seismic cycle. The trouble is, slow-moving intraplate faults like the one at Kobe and those that caused the Landers and Northridge earthquakes in southern California have earthquake recurrence times of thousands of years, so even very good long-range forecasts for such faults would have very large uncertainties on human time scales.

In contrast, it is possible to estimate earthquake probabilities on a time scale of decades for rapidly moving faults like the San Andreas, where recurrence times are on the order of 100 years. Such estimates have been made for all of the main faults of the San Andreas system. This procedure has clear social benefits, for it shows us where to concentrate mitigation efforts, such as retrofitting substandard buildings and planning emergency response.
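
One common way to turn a recurrence estimate into a decadal probability is a conditional calculation: given that a fault has been quiet for some number of years, what is the chance it ruptures in, say, the next 30? The sketch below assumes a Gaussian recurrence-time distribution and invented numbers; real hazard analyses use more elaborate renewal models, but the structure of the calculation is the same.

# Hypothetical sketch: conditional probability of rupture in the next 30 years,
# given the quiet time since the last earthquake. All numbers are assumed.
from statistics import NormalDist

mean_recurrence = 100.0   # years, San Andreas-style recurrence time (assumed)
sigma = 30.0              # years, assumed scatter in recurrence times
elapsed = 80.0            # years since the last rupture (assumed)
window = 30.0             # forecast window in years

dist = NormalDist(mean_recurrence, sigma)
p = (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / (1.0 - dist.cdf(elapsed))
print(f"P(rupture within {window:.0f} yr | quiet for {elapsed:.0f} yr) is about {p:.0%}")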

So where are we with short-term prediction? The Kobe earthquake was found retrospectively to have been preceded by a host of precursors, as pointed out by Paul Silver and Hiroshi Wakita in the July 5, 1996, issue of Science (v. 273, p. 77). There is no doubt in my mind that many earthquakes are preceded by real precursors, but their causative processes remain murky, mainly because we lack good observations. Whether or not we can ever follow the three-step procedure necessary to predict earthquakes from such precursors is an open question. But for me, the scientific challenge in earthquake prediction lies in understanding the mechanisms behind those precursory phenomena.