WEBVTT
00:00:00.000 --> 00:00:03.340
OK, I'm gonna start the recording now.
00:00:05.120 --> 00:00:11.150
Alright, thank you all for attending the Earthquake Science Center weekly seminar series.
00:00:11.160 --> 00:00:12.760
If you are new, welcome.
00:00:12.940 --> 00:00:17.410
If you would like to be added to our email distribution group, please send us an email.
00:00:17.820 --> 00:00:18.810
Seminars are recorded.
00:00:18.820 --> 00:00:27.670
Most talks are posted on the USGS Earthquake Science Center web page, and closed captioning can be turned on by clicking on the CC icon in the
00:00:27.680 --> 00:00:30.370
More tab at the top of the page.
00:00:30.800 --> 00:00:37.620
Attendees, please mute your mics and turn off your cameras until the Q&A session at the end of the talk, and submit your questions via the chat at any time.
00:00:37.670 --> 00:00:41.650
Or you can wait and turn on your camera and ask your question at the end of the talk.
00:00:41.660 --> 00:00:42.450
During the Q&A session.
00:00:45.070 --> 00:01:06.140
So, announcements for today, November 1st: the SSA Eastern Section honors Christine Goulet with the Jesuit Seismological Association Award. The Eastern Section of the Seismological Society of America honors outstanding contributions to observational seismology with this award, and this year
00:01:06.150 --> 00:01:13.140
Christine Goulet, director of the USGS Earthquake Science Center, was honored at a ceremony held in Dallas, TX.
00:01:13.270 --> 00:01:14.640
Congratulations, Christine.
00:01:20.390 --> 00:01:31.260
There is a USGS town hall tomorrow, November 2nd, from 2:30 to 4:00 Eastern Time on Microsoft Teams. Join USGS senior leaders for updates related to the budget, human capital, and more.
00:01:31.270 --> 00:01:33.840
See your email for the link and additional information.
00:01:34.270 --> 00:01:37.900
Also tomorrow, November 2nd, is the live event Guide to Peer Review
00:01:37.910 --> 00:01:44.350
at 1:00 PM Eastern Time, and it's scheduled for 1 1/2 hours with time for questions and answers.
00:01:45.080 --> 00:01:48.770
Attendees must pre-register via DOI Talent for the course.
00:01:48.780 --> 00:01:50.800
See your email from Shane for more information.
00:01:51.900 --> 00:02:03.790
The USGS media training is being offered to Building 19 postdocs at Moffett Field on November 9th from 3 to 4 during the Not Ready for Prime Time colloquium.
00:02:03.840 --> 00:02:16.650
Typical timing: the training will be given by Paul Laustsen of the Office of Communications during the Not Ready for Prime Time colloquium, which occurs weekly at 3:00 PM on Thursdays in the Yosemite Room in Building 19.
00:02:17.560 --> 00:02:20.730
And please note that Not Ready for Prime Time is in-person only.
00:02:21.120 --> 00:02:31.500
And finally, NSF is holding a webinar on that same day, November 9th, at 3:00 PM on their solicitation for the new geophysical facility.
00:02:32.230 --> 00:02:35.990
the functions currently operated by the EarthScope Consortium. For the link,
00:02:36.000 --> 00:02:44.240
see your email from Shane. That wraps up the announcements for today, so I will hand it over to Annemarie, who's gonna introduce our speaker today.
00:02:44.340 --> 00:02:45.100
Alright, I'm on here.
00:02:45.710 --> 00:02:46.300
Thanks Curtis.
00:02:46.310 --> 00:02:51.420
Alright, so I'm really excited to have Jamie virtually here today for a seminar.
00:02:51.890 --> 00:02:58.760
And Jamie did his undergrad at Bowdoin College in Maine, and I believe, if I recall, hails from New England originally.
00:02:59.370 --> 00:02:59.860
That's correct.
00:03:01.220 --> 00:03:04.890
And spent some time at Penn State getting a master's.
00:03:05.840 --> 00:03:10.580
He also spent some time at the University of Michigan doing some research, and has made his way around and got his PhD
00:03:11.300 --> 00:03:18.570
at Northwestern, and also a master's in statistics while at Northwestern, which is great.
00:03:19.060 --> 00:03:34.550
And Jamie's received many awards and honors, and we tried to lure him here for a postdoc, but he is doing an NSF postdoc at the University of Chicago. Jamie and I are working on a lot of stress drop stuff.
00:03:34.560 --> 00:03:38.430
He's got some really innovative research on stress drop uncertainties and synthetic earthquakes.
00:03:38.440 --> 00:03:43.640
But today he's going to talk about a totally different topic, which I am not really aware of.
00:03:43.650 --> 00:03:48.940
So you'll see sort of the breadth of his research.
00:03:49.450 --> 00:03:52.370
So I think yeah, thanks so much Jamie.
00:03:52.380 --> 00:03:53.240
And take it away.
00:03:54.030 --> 00:03:54.650
Thanks, Annemarie.
00:03:54.660 --> 00:03:55.580
Thanks for the introduction.
00:03:55.590 --> 00:03:58.600
Can everyone hear me alright and see the slides?
00:03:58.610 --> 00:03:59.230
Everything looking good?
00:03:59.860 --> 00:04:00.850
Slides look great.
00:04:00.860 --> 00:04:01.430
We can hear you.
00:04:01.720 --> 00:04:02.370
Alright, perfect.
00:04:02.500 --> 00:04:02.790
Yeah.
00:04:02.800 --> 00:04:06.330
So thanks for the introduction, and thank you for the invitation to speak today.
00:04:07.120 --> 00:04:14.990
So today I'm going to be talking about, as Annemarie mentioned, something not related to stress drop, but something that came out of my PhD at Northwestern.
00:04:15.260 --> 00:04:20.710
It's called the generalized long-term fault memory model and applications to paleoseismic records.
00:04:21.120 --> 00:04:25.090
This has been a collaborative effort: I was working on it at Northwestern.
00:04:25.300 --> 00:04:29.150
Leah Salditch, one of the other PhD students at Northwestern, was also working on it.
00:04:29.320 --> 00:04:30.770
She was a Mendenhall
00:04:30.780 --> 00:04:31.930
Fellow at the USGS as well,
00:04:32.030 --> 00:04:34.320
in Denver; she's now at Guy Carpenter.
00:04:34.670 --> 00:04:38.780
Bruce Spencer, a statistician at Northwestern, and my advisor Seth Stein.
00:04:40.060 --> 00:04:59.750
So really, just to hit home the big points right now: the generalized long-term fault memory model is a new large-earthquake probability model that mimics the underlying strain processes that drive earthquakes and can produce forecasts that explicitly consider the impact of residual strain on earthquake probability.
00:04:59.940 --> 00:05:03.260
If those are the two take-home points that you remember from the rest of this hour-
00:05:03.270 --> 00:05:05.430
long talk, those are the key things to remember.
00:05:07.280 --> 00:05:12.260
So I kind of like this quote from G.K. Gilbert, sort of motivating the research we have.
00:05:12.270 --> 00:05:28.770
So this is after the 1906 San Francisco earthquake; he said, must the citizens of San Francisco and the Bay District face the danger of experiencing, within a few generations, a shock equal to or even greater than the one to which they have just been subjected?
00:05:29.030 --> 00:05:34.010
Or have they earned, by the recent calamity, a long immunity from violent disturbance?
00:05:34.140 --> 00:05:36.600
Essentially, is another big one going to happen tomorrow?
00:05:36.930 --> 00:05:39.540
Or is it 100 years from now, 200 years from now, et cetera?
00:05:39.580 --> 00:05:50.610
So this question of when the next large earthquake is going to occur has been around for a long time, but it's the 1906 earthquake that really got people thinking about, well, what's the science behind this?
00:05:50.830 --> 00:06:01.080
Obviously, as you are familiar with, the big idea that came out of the 1906 earthquake and the studies about it was elastic rebound theory by Harry Reid.
00:06:01.290 --> 00:06:11.230
And the idea, as the famous offset fence post shows here, is that between earthquakes, in the quiescent period, as the plates are moving, strain builds up and accumulates.
00:06:11.440 --> 00:06:12.330
You have an earthquake.
00:06:12.340 --> 00:06:18.490
It releases it, and then the process starts over again. And this fundamental idea, that
00:06:18.500 --> 00:06:31.370
you have strain or stress accumulation between the earthquakes as the plates move and then release, really drives our very simple conceptual models of fault mechanics on large faults.
00:06:31.620 --> 00:06:34.230
I mean, you have your idea, as the bottom right shows.
00:06:34.240 --> 00:06:52.330
It's a schematic showing the idea of a strictly periodic, time-predictable, or slip-predictable earthquake model. And so obviously, if we knew exactly what the stress state was on the fault right now, and we knew the exact failure stress, it would be pretty simple to calculate,
00:06:52.340 --> 00:06:52.520
OK.
00:06:52.530 --> 00:06:56.770
Well, when's the next earthquake gonna be, based on just the accumulated motion rates?
00:06:57.180 --> 00:06:58.620
Obviously, we don't have that information.
00:06:58.630 --> 00:07:00.920
We don't know the stress state at which an earthquake is going to occur.
00:07:00.930 --> 00:07:07.570
We don't know the current stress on the fault and so obviously trying to forecast these large earthquakes is not so simple.
00:07:08.500 --> 00:07:12.040
Umm, so as you are all familiar with, obviously the idea is:
00:07:12.050 --> 00:07:18.340
OK, let's do these probabilistic forecasts, and these numbers are used in a wide range of places.
00:07:18.350 --> 00:07:20.960
Obviously they inform our seismic hazard analyses.
00:07:21.370 --> 00:07:25.070
I mean, the public often picks up on them.
00:07:25.130 --> 00:07:41.080
Newspapers write about these numbers in different areas around the world, like obviously the Bay Area and the Pacific Northwest; obviously The New Yorker article in the late 2000s really sparked a lot of interest in when's the next big one going to happen there,
00:07:41.210 --> 00:07:42.520
in the Pacific Northwest.
00:07:42.960 --> 00:07:49.880
And then recently, there's been some controversy in Japan with the Nankai Trough and the megathrust zone there.
00:07:49.990 --> 00:08:04.840
The official line is sort of an 80% probability in the next 30 years, but when you dig deeper into it, there's really a large uncertainty there, and depending on what you assume and what models you use, you're essentially anywhere between a 6 and 80% probability of an earthquake
00:08:04.850 --> 00:08:06.200
in the next 30 years.
00:08:06.290 --> 00:08:17.100
And so there's been a lot of recent interest, especially in Japan, about what these numbers really mean and whether there's a way that we can really improve these forecasts for these large earthquakes.
00:08:18.460 --> 00:08:25.710
Uh, I assume that most people are probably familiar with this, but just in case there are folks that don't do this sort of work or haven't done it in a while,
00:08:25.720 --> 00:08:30.090
here's just a quick refresher on how to calculate these large earthquake probabilities.
00:08:30.100 --> 00:08:32.870
There's a lot of different approaches that go into it.
00:08:33.160 --> 00:08:43.650
This is sort of the baseline, simplest approach, but it gives you the quick and dirty version of what your probability is. Generally, what you do is take a paleoseismic record.
00:08:43.700 --> 00:08:50.190
This is a paleoseismic record of large earthquakes from the Pallett Creek site along the southern San Andreas Fault.
00:08:50.940 --> 00:08:58.470
The two most recent earthquakes that we have any real historical information about are the 1812 and 1857 Fort Tejon earthquakes.
00:08:58.900 --> 00:09:02.190
The rest of the earthquakes are purely from geologic evidence.
00:09:02.700 --> 00:09:09.770
You can take the times of these earthquakes, in which case you have about 11 earthquakes, and you have inter-event times between the earthquakes.
00:09:10.000 --> 00:09:14.210
And then what you do is you fit a probability density function to these earthquakes.
00:09:14.450 --> 00:09:30.340
Now, the probability density function is simply some sort of probability model that you're going to fit to the mean, or the mean and standard deviation, of these inter-event times.
00:09:30.390 --> 00:09:37.970
Here you can have the Brownian passage time or BPT model, a lognormal model, or an exponential model as well.
00:09:38.800 --> 00:09:48.170
Obviously you have very limited data, so trying to distinguish the models can be kind of difficult and there's a variety of models you can choose, but once you've selected your model, you fit it to the data.
00:09:48.360 --> 00:09:54.150
You can then actually calculate an estimate for the 30-year earthquake probability simply using probability theory.
00:09:54.540 --> 00:10:15.670
Pretty standard equations. And so here, on the far right in panel C, we have the probability of an earthquake in the next 30 years starting after 1857, and you can see the exponential model is flat with time, inherent by its nature, whereas the Brownian passage time or BPT and lognormal models increase with time.
00:10:16.460 --> 00:10:18.590
Today they are about flattening out.
00:10:18.640 --> 00:10:24.150
The BPT stays flat and decreases a little bit, whereas the lognormal actually now begins to steadily decrease.
00:10:24.210 --> 00:10:33.150
So even if the current quiescent period continues, by design these probability models will actually start decreasing the probability of an earthquake along the fault.
00:10:33.160 --> 00:10:37.240
And that's just inherent to the functional form of those probability models.
00:10:37.650 --> 00:10:38.770
So that's sort of the simple way.
00:10:38.780 --> 00:10:39.850
these are calculated.
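NOTE
A minimal sketch of that 30-year conditional-probability calculation, assuming a lognormal renewal model; the inter-event times and elapsed time below are illustrative placeholders, not the actual Pallett Creek data.
# Sketch: fit a renewal PDF to paleoseismic inter-event times and compute the
# conditional probability of an earthquake in the next 30 years.
# All numbers are hypothetical placeholders.
import numpy as np
from scipy import stats
intervals = np.array([44, 63, 67, 100, 112, 134, 200, 210, 220, 330])  # years (hypothetical)
t_open = 166.0                      # years elapsed since the most recent earthquake (hypothetical)
# Fit a lognormal model using the mean and standard deviation of the log inter-event times
mu, sigma = np.log(intervals).mean(), np.log(intervals).std(ddof=1)
dist = stats.lognorm(s=sigma, scale=np.exp(mu))
# P(event within the next 30 yr | no event for t_open years)
p30 = (dist.cdf(t_open + 30) - dist.cdf(t_open)) / dist.sf(t_open)
print(f"30-year conditional probability: {p30:.2f}")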
00:10:40.560 --> 00:10:40.950
Umm.
00:10:41.500 --> 00:10:52.740
But obviously your choice of probability model actually leads to assumptions about what you think the relationship is between probability and strain accumulation and release.
00:10:52.900 --> 00:11:02.530
Now remember, based on the elastic rebound model, we have a very simple notion of how strain should accumulate along a fault and then how it's released.
00:11:02.880 --> 00:11:08.720
And so obviously, whatever probability model is used, how does that map to those understandings of how strain accumulates and is released?
00:11:09.800 --> 00:11:18.770
Now, the memoryless or time-independent exponential or Poisson model assumes that the earthquake probability is constant with time.
00:11:18.900 --> 00:11:24.670
So you have this panel here, with panel A and panel B.
00:11:24.950 --> 00:11:30.110
The stick figures show earthquakes, so these are earthquakes that occur at various times.
00:11:30.120 --> 00:11:44.620
This is just a synthetic paleoseismic record, and the corresponding earthquake probability with time is on the lower panel; you can see here the probability for the exponential or Poisson sequence is constant with time, even when an earthquake occurs.
00:11:44.790 --> 00:11:46.620
This is why we call it time independent.
00:11:46.690 --> 00:11:49.600
It doesn't matter when an earthquake occurs; the probability stays constant.
00:11:50.810 --> 00:11:55.760
The other class of models, the short term memory models or the time dependent models.
00:11:56.180 --> 00:11:59.400
This is your lognormal, Brownian passage time, Weibull.
00:11:59.410 --> 00:12:04.380
Essentially, any two parameter probability density function here.
00:12:04.390 --> 00:12:17.710
What you have is that the probability will change with time since the last earthquake, but when an earthquake occurs, the probability drops to zero, or renews or resets, and so this is sort of the standard time-dependent model that's used today.
00:12:18.050 --> 00:12:24.400
And if you see an application of the Brownian passage time or lognormal, this is what's going on here now.
00:12:24.410 --> 00:12:29.780
But when we look at real earthquake histories, and I'm not a field geologist,
00:12:29.780 --> 00:12:29.980
I'm
00:12:29.990 --> 00:12:39.340
not out digging trenches, but take the field geologists: they go out in the field, and when they look at these paleoseismic earthquakes, they sort of paint a more complex history of what's going on.
00:12:39.510 --> 00:12:44.380
And so what they'll see is these complex sort of patterns of clusters and gaps in there.
00:12:44.670 --> 00:12:50.760
And they're interpreting those clusters and gaps as sort of these complex strain accumulation and release histories.
00:12:51.010 --> 00:12:58.410
And so the idea being that when you have a long period between earthquakes, you've built up a lot of strain and then you have an earthquake.
00:12:58.480 --> 00:13:00.290
You only have a partial strain release.
00:13:00.300 --> 00:13:08.530
You release some of that strain, but there's a lot of strain leftover that the next earthquake can tap into, which is why it may actually happen sooner than you would normally expect.
00:13:08.620 --> 00:13:20.040
So the idea being that you get these gaps and clusters because an earthquake only releases some of the accumulated strain that is there, and then subsequent earthquakes can release the additional strain until eventually you've released
00:13:20.080 --> 00:13:31.960
most of the accumulated strain and you start over again. And so this complex release of strain and these clusters isn't really captured by the current probability models.
00:13:31.970 --> 00:13:32.620
They're sort of:
00:13:32.630 --> 00:13:38.460
you look at the probability models, you assume the probability changes with time, you have an earthquake, and then you start over again.
00:13:38.810 --> 00:13:44.560
But the interpretation of the paleoseismic records says, well, we don't actually start over in terms of strain.
00:13:44.570 --> 00:13:51.030
We still have some strain in the system and that's making the next earthquake more likely to occur, which is why we see the clusters.
00:13:52.110 --> 00:14:00.260
And so here what we're proposing, the subject of my talk today, is this new long-term fault memory model, and it's really conceptually very simple.
00:14:00.410 --> 00:14:07.410
And the idea here is that you have probability increasing with time until you have an earthquake, and then when we have an earthquake,
00:14:07.420 --> 00:14:08.940
You don't necessarily restart again.
00:14:09.110 --> 00:14:20.440
You have some probability drop, but there's still some residual probability after the earthquake, which essentially allows the next earthquake to tap into and then occur sooner than you'd expect.
00:14:20.570 --> 00:14:27.870
So here LTFM essentially is creating the gaps and clusters that you see in real paleoseismic records.
00:14:27.880 --> 00:14:31.550
And so it's trying to mimic the underlying residual strain processes.
00:14:31.700 --> 00:14:33.310
It's still a probability model.
00:14:33.320 --> 00:14:40.530
There's no actual strain, no strain being calculated, but it's sort of mimicking what we think is going on in the strain models.
00:14:41.770 --> 00:14:44.600
And so why would you want to choose the long term fault memory model?
00:14:44.790 --> 00:14:55.140
Well, the first is the idea that the earthquake clusters that we're seeing in paleoseismic records actually reflect some real physical state of the fault and aren't just due to random chance.
00:14:55.390 --> 00:14:58.680
The current probability models that are used.
00:14:58.690 --> 00:15:01.080
So the Brownian passage time, lognormal, exponential:
00:15:01.530 --> 00:15:09.810
You can get clusters in those models, but they're all purely due to chance and don't really reflect anything in the underlying physical state of the fault system.
00:15:11.430 --> 00:15:29.490
The next reason we'd want to use LTFM would be that you want to have a forecast that considers the timing of recent earthquakes, the idea being that, well, if we've had a lot of earthquakes recently, maybe we've released a lot of strain and we aren't likely to have one again soon, or if it's been a long time since we had an earthquake, maybe you're more likely to have one.
00:15:30.130 --> 00:15:30.440
Umm.
00:15:30.930 --> 00:15:34.980
And really, what gets at the heart of that is that the earthquake inter-event time order matters.
00:15:35.050 --> 00:15:39.440
So the actual ordering of the sequence matters for what your forecast is going to produce.
00:15:40.210 --> 00:15:47.400
This is a plot from one of the early papers we wrote looking at the LTFM model, and here we have stick figures.
00:15:47.410 --> 00:15:55.920
Once again, these are purely sort of thought exercises for paleoseismic records, but panel A and panel B are showing potential earthquake histories.
00:15:56.210 --> 00:16:01.530
Both of these have the same inter-event times; they've just been reordered. The top one,
00:16:01.540 --> 00:16:04.580
you can look at it and say, well, there's a lot of gaps and clusters here.
00:16:04.590 --> 00:16:14.110
So you'd say the earthquake record looks quite clustered, the bottom one not at all, but these actually still have the same inter-event times, which means they still have the same mean and standard deviation.
00:16:14.420 --> 00:16:28.350
But if you're forecasting the top versus forecasting the bottom record, you may want to say, well, maybe our forecast should look a little different because we're seeing different patterns even though the overall statistics of the record look the same.
00:16:28.560 --> 00:16:32.180
And this is something that our new long-term fault memory model can do.
00:16:33.580 --> 00:16:44.620
So we've had lots of iterations of LTFM in the past, and you probably have seen talks; Leah Salditch, when she was a PhD student, gave a lot of talks on the initial formulation of it.
00:16:44.630 --> 00:16:58.280
And we have a paper in 2020 that discussed this in more detail, but I wanted to kind of walk through a little bit of the evolution of this model to give you a better understanding of sort of why we like it and where we think it can go and why it can be useful.
00:16:59.520 --> 00:17:18.750
So as it stands, the original LTFM model was very, very simple, and it was really just a forward model that showed that with two simple parameters, A, which is the rate at which probability accumulates along the fault, and R, which is the amount of probability that an earthquake releases,
00:17:18.760 --> 00:17:21.150
essentially, you can mimic sort of the
00:17:23.280 --> 00:17:32.120
strain processes that we see for earthquakes, and we can also create these gaps and clusters, mimicking the idea of partial strain release.
00:17:32.260 --> 00:17:36.170
So here we have a synthetic earthquake history and
00:17:36.420 --> 00:18:01.700
the corresponding probability state as we travel through time. You can see that the probability will increase at some value A, so it's at a slope A, and then when you have an earthquake, we have a drop in the probability, but not necessarily to zero; the idea being that the next earthquake in the process sort of taps into that remaining probability, which means it's more likely to occur soon.
00:18:01.710 --> 00:18:02.900
So that's why you get clusters.
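NOTE
A rough forward-model sketch of that idea; the accumulation rate A, drop size R, and simulation length are hypothetical choices, not the parameters used in the published LTFM work.
# Sketch: LTFM-style forward simulation. Probability builds at rate A per year;
# an earthquake drops it by R, floored at zero, so residual probability can
# carry over and produce clusters. Values are hypothetical.
import numpy as np
rng = np.random.default_rng(0)
A, R, n_years = 0.001, 0.02, 5000   # accumulation rate (1/yr), drop size, years simulated
p, quakes = 0.0, []
for t in range(n_years):
    p = min(p + A, 1.0)             # probability accumulates between events
    if rng.random() < p:            # an earthquake occurs with the current probability
        quakes.append(t)
        p = max(p - R, 0.0)         # partial release: leftover probability mimics residual strain
intervals = np.diff(quakes)
print(len(quakes), "events; mean interval", round(float(intervals.mean()), 1), "years")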
00:18:03.970 --> 00:18:18.060
And so what we showed is that with this very, very simple parameterization of the process, just using these two parameters, A and R, we could recreate a wide variety of earthquake behavior.
00:18:18.590 --> 00:18:18.990
Umm.
00:18:19.460 --> 00:18:29.140
On the top right of the figure, we show some schematic figures that show potential results, and these are essentially just forward simulations.
00:18:29.310 --> 00:18:45.580
But we looked at different ratios between the size of the drop R and the rate at which probability accumulates A, and so when the ratio between those two parameters is high, you get sort of periodic behavior similar to what we would observe with a renewal model.
00:18:45.890 --> 00:18:49.310
You have strain or probability building between earthquakes.
00:18:49.550 --> 00:18:56.960
We have an earthquake and then because the amount of probability released is large, it drops back to zero and you essentially renew again.
00:18:57.120 --> 00:19:03.010
So this would be really no different than say if you applied a Weibull or BPT model, et cetera to the record.
00:19:03.740 --> 00:19:04.130
Umm.
00:19:04.530 --> 00:19:12.100
If you decrease the relative size of those parameters, essentially you make the size of the probability drop smaller.
00:19:12.310 --> 00:19:19.920
You start to get more residual strain apparent, and so here in the top right you see that one or so earthquakes now have residual strain.
00:19:20.110 --> 00:19:25.960
But as we decrease that and move to the bottom row, we can see that R over A is 80, then R over A is 50.
00:19:26.230 --> 00:19:37.650
We see that after each earthquake, you're more likely to have residual probability leftover, meaning more residual strain, and you start to get more real gaps and clusters in the record.
00:19:38.200 --> 00:19:43.330
And as that value gets smaller, on the far right, you can see that you never actually really drop back to zero.
00:19:43.640 --> 00:19:53.850
You have some drop, but the drop size is really small when the earthquake occurs, so there's always some probability left over, meaning that the next earthquake is likely to occur sooner.
00:19:54.180 --> 00:19:58.550
So here we can see that we're creating these sort of gaps and clusters in earthquake occurrence.
00:19:58.600 --> 00:20:02.180
And while there's a statistical component to it, there's probability that goes into it.
00:20:02.190 --> 00:20:07.690
I mean, this is a probabilistic model, so it's not deterministic when an earthquake will occur or not.
00:20:08.230 --> 00:20:09.330
You're more likely to.
00:20:09.340 --> 00:20:18.100
There is some physics-informed, or I guess more sort of physically based, understanding of the system, where you have higher probabilities,
00:20:18.110 --> 00:20:22.890
you're more likely to get an earthquake, and if you don't drop to zero, you're still more likely to get another cluster to occur.
00:20:24.300 --> 00:20:25.750
Umm, we've also.
00:20:25.760 --> 00:20:33.350
This is a very simple model, but you can sort of start adding more parameters to it, and obviously when you're dealing with paleoseismic records, you have relatively little data.
00:20:33.360 --> 00:20:41.890
So the more parameters you add, the more likely you are to overfit, but we just wanted to show what sort of the exciting possibilities could be with this type of approach.
00:20:42.020 --> 00:20:51.270
And here we can see what we've done is essentially just add in the idea of variable strain release, as well as variable thresholds for when seismicity turns on and off.
00:20:51.380 --> 00:21:01.940
And just adding this couple of simple additional parameters, a variable drop size, you can start to create some really interesting clustered behavior, and I just want to highlight the one on the far right.
00:21:02.150 --> 00:21:05.160
We sort of term this our Tohoku-style cluster.
00:21:05.170 --> 00:21:16.640
The idea being that you might have a series of magnitude 8s, and these are the smaller drops; they are dropping probability a little bit, but not enough to really significantly change the overall state of the system.
00:21:16.870 --> 00:21:23.610
But when you have a big magnitude 9, you have a large probability drop and it sort of restarts the process all over again.
00:21:24.220 --> 00:21:29.100
And so here we're just showing conceptually how you can create these different types of models.
00:21:31.210 --> 00:21:38.360
So the original formulation of the LTFM model was really to show what could be done by incorporating this history.
00:21:38.530 --> 00:21:40.560
But the big drawback was that it was a forward model.
00:21:40.570 --> 00:21:46.540
We didn't really have an easy way to feed it a synthetic record or an actual real paleoseismic record
00:21:46.550 --> 00:21:48.940
and then produce a forecast.
00:21:49.390 --> 00:21:55.200
You'd have to run thousands and thousands of simulations and try to figure out, well, which simulation best matches what we've seen.
00:21:55.210 --> 00:22:08.530
It's a really difficult thing to do, and so what we've been working on subsequently, and what we'll talk about next, is how we actually tried to come up with analytic expressions to be able to say: here's my paleoseismic record, plug it into LTFM,
00:22:08.660 --> 00:22:09.730
what's my forecast?
00:22:10.110 --> 00:22:14.860
And so in the first paper we came up with, the first approach we took was using a hidden Markov model.
00:22:15.630 --> 00:22:27.720
And so the idea here, if anyone is familiar with hidden Markov models or Markov models, is that you essentially take whatever your system is and you discretize it into states.
00:22:27.850 --> 00:22:35.700
And so here we took our original probability state, and then discretized it into different numbers of states.
00:22:35.910 --> 00:22:41.440
I'm showing a very simple example here with just 11 states, from 1 to 11.
00:22:42.830 --> 00:23:03.020
But obviously, if you're trying to actually model it, you want to include, you know, thousands of states. Essentially, each state corresponds to a probability level, and then once you have your states, you can create a state transition matrix, on the far right, which just tells you the probability that, say, if you're in state two,
00:23:03.030 --> 00:23:10.090
so you look at row two, what's the probability that you transition up to the next state, which would be state three, or you drop down to a lower state.
00:23:10.570 --> 00:23:23.570
And so by doing this discretization and formulating a hidden Markov model, you can actually create this state transition matrix and you end up creating this very, very large, very, very sparse matrix.
00:23:23.580 --> 00:23:28.880
As you can see here in this simple example, essentially you either move up a state or you drop down to some lower state
00:23:28.890 --> 00:23:42.380
when you have an earthquake. So you move up a state if you have no earthquake, and you move down a state if you do have an earthquake. You can create this large matrix, and then using Markov chain theory you can actually come up with analytic expressions for earthquake probabilities.
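NOTE
A toy sketch of that discretization, assuming a fixed drop of d states per earthquake; the state count and drop size are hypothetical, not the published formulation.
# Sketch: sparse state-transition matrix for a discretized probability state.
# From state i (probability level p[i]): move up one state with probability
# 1 - p[i] (no earthquake), or drop d states with probability p[i] (earthquake).
import numpy as np
from scipy.sparse import lil_matrix
n_states, d = 200, 50                 # hypothetical number of states and drop size
p = np.linspace(0.0, 1.0, n_states)   # probability level assigned to each state
T = lil_matrix((n_states, n_states))
for i in range(n_states):
    up = min(i + 1, n_states - 1)     # no earthquake: climb one state
    down = max(i - d, 0)              # earthquake: drop d states, floored at the lowest state
    T[i, up] += 1.0 - p[i]
    T[i, down] += p[i]
T = T.tocsr()                         # each row sums to 1, so this is a valid Markov chain
print(T.shape, "with", T.nnz, "nonzero entries")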
00:23:42.730 --> 00:23:52.040
And so we published this work earlier this year, and we provide all the analytic expressions to actually do these calculations.
00:23:52.290 --> 00:23:55.660
And I just want to show some of the power of this analysis here.
00:23:56.050 --> 00:24:08.110
And so here, using this hidden Markov model approach, first we show that once again, just by varying the drop size, you can create a variety of earthquake behaviors.
00:24:08.260 --> 00:24:14.530
And so here we have our state space; this sawtooth graph is showing the state space from a simulation.
00:24:14.820 --> 00:24:23.090
And here, in the top row, panel A, the drop size is very small, and if you're in state one, essentially it's the lowest state.
00:24:23.100 --> 00:24:37.960
Probability is very low, but you can see here that generally you're bouncing around a lot, and you end up getting lots of gaps and clusters in the record because your probability rarely resets to zero. As you increase the drop size,
00:24:38.060 --> 00:24:40.270
so D is now 50 or 250,
00:24:40.440 --> 00:24:45.850
you see more of the renewal process: you move up in the states, you have an earthquake, and then you drop back down to zero.
00:24:46.280 --> 00:24:50.170
And so this is very similar to what we saw with the original forward modeling approach.
00:24:50.400 --> 00:24:57.030
But what's beautiful with the Markov approach is that you can actually take the analytic equations and come up with probability density functions.
00:24:57.040 --> 00:25:06.880
So this is the probability density function for the inter-event times that we should expect based on the various drop sizes in the previous simulation.
00:25:07.170 --> 00:25:10.300
So here we have simulations one, two, and three.
00:25:10.390 --> 00:25:15.140
The line thickness corresponds to the different simulations; the thinnest line
00:25:15.470 --> 00:25:31.550
corresponds to the smallest drop size here, and you can see that it expects mostly short inter-event times, whereas as you increase the drop size you get longer inter-event times. And so the idea here is that when you have residual strain in the system, you should expect a short time
00:25:31.560 --> 00:25:35.180
to the next earthquake, because the earthquake probability remains high,
00:25:35.290 --> 00:25:42.540
but when you have released a lot of your strain, you'd expect a longer time to the next earthquake. And these are, overall, what we'd expect
00:25:42.550 --> 00:25:47.990
the probability density function, essentially the distribution of the inter-event times from our simulations, to look like.
00:25:50.320 --> 00:25:53.430
To harp on this point a little bit more, I want to show you a specific example.
00:25:53.440 --> 00:26:00.690
So here what we have are two synthetic records that have the exact same paleoseismic inter-event times in them.
00:26:00.700 --> 00:26:02.190
They've just been reorganized.
00:26:02.300 --> 00:26:21.000
So on the left, the most recent inter-event times are the shortest, so you can see now the inter-event times are quite short, whereas on the right we've put the longest inter-event times most recently. And in the example I showed earlier, we talked about how a traditional model wouldn't necessarily be able to handle these differences.
00:26:21.010 --> 00:26:39.060
But we want to say, OK, well, shouldn't our forecast look different based on this ordering of the sequence, if we assume that the clustering in these inter-event times actually reflects something real going on on the fault? And so on the bottom here we have the forecast for the next inter-event time for the different models.
00:26:39.210 --> 00:26:48.040
And so the exponential, BPT, and lognormal, because they don't care about the order of the inter-event times, give you the exact same forecast.
00:26:48.170 --> 00:26:57.120
So they predict about 95 or 96 years until the next earthquake on average, and it doesn't matter whether it's the one on the left or the one on the right; order doesn't matter.
00:26:57.290 --> 00:27:02.040
The forecast is the same, but the long-term fault memory model does care.
00:27:02.150 --> 00:27:10.650
It knows, on the left, that you've had a lot of earthquakes recently, which means that you've probably released a lot of strain.
00:27:10.660 --> 00:27:19.740
There's probably not much residual strain left in the system, so it forecasts the next earthquake would occur on average in about 117 years, so a much longer
00:27:19.750 --> 00:27:20.220
period,
00:27:20.790 --> 00:27:30.220
you know, about 20 years more than the others; whereas on the right, it looks at the record and says, OK, well, you've actually had a long time between earthquakes.
00:27:30.230 --> 00:27:48.970
There's been really these long gaps here, which means you probably have accumulated a lot of residual strain, which means that even after the most recent earthquake, we still probably have a lot of residual strain in the system, which means that an earthquake is more likely to occur soon, so it would forecast the next earthquake to occur in 55 years.
00:27:49.200 --> 00:27:51.310
So you can see there's a big difference between the two.
00:27:51.360 --> 00:28:02.500
The LTFM forecasts differ by more than a factor of two, and that reflects LTFM's knowledge of the actual ordering of the inter-event times and its assumption about what that means for residual strain.
00:28:03.910 --> 00:28:12.800
And so this hidden Markov model approach, which we published in BSSA earlier this year, really gave us the first step to actually come up with analytical equations.
00:28:13.110 --> 00:28:26.980
If anyone has seen that paper, they will notice that the appendix has about 30 pages of derivations to actually go through all the probability theory, and so it's kind of a bit of a bear to implement.
00:28:27.170 --> 00:28:34.870
And because you're also dealing with these large sparse matrices, it can be kind of computationally challenging.
00:28:35.260 --> 00:28:35.580
Uh.
00:28:36.120 --> 00:28:41.870
And so we were running it on the cluster at Northwestern just for speed.
00:28:42.160 --> 00:28:49.070
It can be run on a laptop, but I mean it would just take a while to actually complete some of the calculations and so it's not the most portable.
00:28:49.400 --> 00:29:05.560
So even as we were finishing up this initial approach, we were trying to come up with a way of saying, well, what's a more efficient way to implement LTFM so that users can actually get into it, get their hands dirty, and play around with it without having large computational costs.
00:29:06.330 --> 00:29:24.360
And so to do that, what we've done is come up with this new generalized LTFM, or GLTFM, and this paper is currently in review, but we do have the code available for folks to actually play with and start poking around with a little bit. It relies on a very, very simple conceptual idea.
00:29:24.650 --> 00:29:34.140
And the idea is that we have our standard probability models for what the probability density function will look like
00:29:34.200 --> 00:29:45.120
if there's no residual strain, and that's this black curve here in panel A; that's the standard probability density function, assuming there's no residual strain in the system.
00:29:45.490 --> 00:29:52.720
So to model the residual strain, all we're really going to do is just time-shift the probability density function.
00:29:52.730 --> 00:29:55.830
So let's say we want to model 50 years of residual strain.
00:29:56.330 --> 00:30:03.560
We time-shift our PDF to the left, and then we renormalize it because part of it is now below zero.
00:30:03.650 --> 00:30:08.300
And so we come up with this new red curve, which is the probability density function.
00:30:08.690 --> 00:30:28.490
if you started off with 50 years of residual strain. And so now, because the curve is shifted to the left, we would expect earthquakes to occur sooner, and then using just simple probability theory we can calculate the cumulative distribution function and then, lastly, the hazard rate function.
00:30:28.750 --> 00:30:31.420
And so this hazard rate function is different than what a lot of times
00:30:31.430 --> 00:30:36.800
the hazard rate refers to, the annual hazard rate, which is often used in hazard analysis.
00:30:36.810 --> 00:30:54.200
This is the hazard rate used in terms of the probability theory, but you can think of it as sort of the instantaneous probability that an earthquake will occur at a given moment given that one has not occurred prior to that time, and so there are very simple equations.
00:30:54.210 --> 00:31:05.860
Now, these top three equations are what we have used to construct the generalized LTFM and calculate any probability of interest.
00:31:05.870 --> 00:31:13.900
So all we're doing now is just time-shifting a PDF, and then we can really implement this quite easily computationally.
00:31:13.910 --> 00:31:21.800
And the other beauty is we can do this in continuous time versus discrete time, which was one of the limitations of the Markovian approach.
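NOTE
A minimal sketch of that time-shift-and-renormalize step for a Weibull base distribution; the shape, scale, and the 50 years of residual strain are placeholder values.
# Sketch: GLTFM-style shift of a base PDF by z0 years of residual strain.
#   shifted PDF    f_z(t) = f(t + z0) / (1 - F(z0))
#   shifted CDF    F_z(t) = (F(t + z0) - F(z0)) / (1 - F(z0))
#   hazard rate    h_z(t) = f(t + z0) / (1 - F(t + z0))
import numpy as np
from scipy import stats
shape, scale, z0 = 2.0, 150.0, 50.0           # Weibull parameters (years) and residual strain (hypothetical)
base = stats.weibull_min(c=shape, scale=scale)
t = np.linspace(0, 400, 401)                  # years since the last earthquake
pdf_shift = base.pdf(t + z0) / base.sf(z0)    # renormalized after discarding the part below zero
cdf_shift = (base.cdf(t + z0) - base.cdf(z0)) / base.sf(z0)
hazard = base.pdf(t + z0) / base.sf(t + z0)
print("P(event within 30 yr | 50 yr residual strain):", round(float(cdf_shift[30]), 2))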
00:31:23.450 --> 00:31:31.040
And so here we can show you how we can actually issue these dynamic forecasts based on estimated residual strain.
00:31:31.410 --> 00:31:35.580
This plot here is showing sort of the hazard rate history of an earthquake.
00:31:35.590 --> 00:31:41.320
We can see the earthquakes occur and then we have a long gap now where we're building up earthquake probability.
00:31:41.610 --> 00:31:53.650
Then we have an earthquake that occurs, and now we get a cluster of earthquakes; the numbers beneath each earthquake correspond to the amount of residual strain that the GLTFM model is essentially estimating
00:31:53.660 --> 00:32:01.140
after the earthquake, and then you can see here that we have a series of earthquakes and we drop back to close to zero.
00:32:01.150 --> 00:32:03.340
Then we build up again and start over again.
00:32:03.830 --> 00:32:07.340
So this is gonna run one more time, and I want you to watch the bottom here.
00:32:07.350 --> 00:32:10.040
This is the inter-event time probability density function.
00:32:10.110 --> 00:32:16.980
This is the forecast for the subsequent inter-event time in the sequence, and you can see for the first four they've all been identical.
00:32:16.990 --> 00:32:24.180
Nothing has changed because we have no residual strain, but now we have a long gap and then we have now a lot of residual strain.
00:32:24.250 --> 00:32:36.860
And so you can see the PDF has shifted to the left, and as we release more residual strain, it eventually returns back to its initial state with no residual strain left over.
00:32:37.070 --> 00:32:47.620
And so you can see here that as residual strain increases in the model, the forecast is for shorter inter-event times, and as it decreases, it's for longer inter-event times.
00:32:47.630 --> 00:32:58.000
So this is a dynamic approach to actually estimating the inter-event time probability density functions. And so how do we fit it? What
00:32:58.010 --> 00:33:08.390
really goes into the GLTFM approach? In GLTFM, the first thing you have to do is figure out what your underlying probability density function should be.
00:33:09.150 --> 00:33:09.490
Umm.
00:33:09.670 --> 00:33:22.800
We assume a two-parameter Weibull model, and the reason we assume that is because the Weibull model can be constructed so that as strain essentially increases with time,
00:33:22.870 --> 00:33:30.660
so does earthquake probability, and so as the quiescent period increases, the probability of an earthquake will always increase as well.
00:33:31.050 --> 00:33:40.250
As I mentioned before, the lognormal and the BPT models by just their nature will eventually level off and either decrease or flatten out with time.
00:33:40.260 --> 00:33:44.950
That's just how they're designed, but we chose the Weibull just because we thought
00:33:44.990 --> 00:33:55.230
it made sense, at least theoretically, to us that it should increase with time. That's the approach we took here, but GLTFM is constructed so that you can actually use any probability density function,
00:33:55.240 --> 00:34:01.210
any two-parameter PDF that you want, in the actual modeling, so you could use the BPT if you wanted to.
00:34:01.680 --> 00:34:03.280
So first, you have
00:34:03.310 --> 00:34:05.500
two parameters for your probability density function.
00:34:06.160 --> 00:34:17.040
The next one is R, which is the amount of strain that an earthquake releases, and we're defining this in terms of years. So let's say that, before the earthquake,
00:34:17.050 --> 00:34:20.300
there have been 100 years of accumulated strain.
00:34:20.610 --> 00:34:23.220
The earthquake will then release 100 years of accumulated strain.
00:34:23.230 --> 00:34:27.880
So everything is done in terms of years, not in terms of actual strain values.
00:34:28.200 --> 00:34:29.980
And so that's R.
00:34:29.990 --> 00:34:34.940
So you could have R = 100 years, or you could have R = 25, so you would have more residual strain
00:34:34.950 --> 00:34:47.620
if you have a smaller R value. And the last parameter is Z naught, which is the initial residual strain, once again defined in terms of years, after the first earthquake in the sequence.
00:34:49.360 --> 00:34:56.870
And so, in that case, this is how we can actually fit GLTFM to a real paleoseismic record.
00:34:57.160 --> 00:35:00.130
And so here we have the San Andreas Fault.
00:35:00.140 --> 00:35:02.510
This is from Scharer in 2011.
00:35:03.340 --> 00:35:11.590
So first, what we want to do is estimate the initial probability density function parameters, assuming that there's no residual strain in the system.
00:35:11.720 --> 00:35:14.750
So here we use a maximum likelihood estimation approach.
00:35:14.840 --> 00:35:28.470
This is exactly how you would do it if you were just working with a standard short-term memory model, where you essentially find the parameters that most likely give you the inter-event times that you've seen, plus the current quiescent period that we are observing.
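NOTE
A sketch of that maximum-likelihood step, treating the current open quiescent interval as a censored (survival) term; the interval values are placeholders, not a real record.
# Sketch: MLE of Weibull parameters from closed inter-event times plus the
# current open interval, which enters the likelihood through the survival function.
import numpy as np
from scipy import stats, optimize
intervals = np.array([45.0, 80.0, 110.0, 140.0, 200.0, 300.0])  # closed inter-event times (hypothetical)
open_interval = 166.0                                           # years since the last earthquake (hypothetical)
def neg_log_like(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf                                           # keep the optimizer in the valid region
    dist = stats.weibull_min(c=shape, scale=scale)
    return -(np.sum(dist.logpdf(intervals)) + dist.logsf(open_interval))
res = optimize.minimize(neg_log_like, x0=[1.5, float(intervals.mean())], method="Nelder-Mead")
shape_hat, scale_hat = res.x
print("Weibull shape and scale:", round(float(shape_hat), 2), round(float(scale_hat), 1))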
00:35:30.220 --> 00:35:32.980
And then the second piece is, now, here's something we do a little bit differently.
00:35:32.990 --> 00:35:37.500
Because these paleoseismic records contain so few earthquakes,
00:35:38.810 --> 00:35:58.920
we want to find additional ways to constrain these parameters if possible. So here what we're going to do is actually constrain the strain drop parameter R based on the average slip in the earthquakes, and to do that we can bring in tectonic information: we can look at the slip rate, so we know how fast strain should be accumulating along the fault.
00:35:59.530 --> 00:36:07.310
We can look at the size of the earthquakes that we think we have here, and then based on the size of the earthquake and the slip rate, we can say what the average displacement is.
00:36:08.160 --> 00:36:19.680
And then based on the average displacement and the slip rate, we can calculate R: essentially, how many years did it take for that earthquake to accumulate the amount of slip that it released? And that's how we can come up with our R value.
00:36:19.690 --> 00:36:23.530
So here we're bringing in additional information to actually constrain the drop size.
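NOTE
A back-of-the-envelope version of that calculation; the slip rate and average slip below are hypothetical numbers, not the values adopted for any particular fault in the study.
# Sketch: constrain the strain-drop parameter R (in years) from tectonic information:
# R = average coseismic slip per event / long-term loading rate.
slip_rate_mm_per_yr = 34.0      # fault loading rate (hypothetical)
avg_slip_m = 6.0                # average slip in a typical recorded event (hypothetical)
R_years = avg_slip_m * 1000.0 / slip_rate_mm_per_yr
print(f"R is roughly {R_years:.0f} years of accumulated strain released per event")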
00:36:24.270 --> 00:36:26.380
In the rest of this talk,
00:36:26.390 --> 00:36:31.040
I'm just going to be presenting a fixed strain drop value for
00:36:31.080 --> 00:36:36.600
each paleoseismic record, so we're going to assume that every earthquake in a paleoseismic record has the same drop size.
00:36:36.880 --> 00:36:39.670
Now obviously you can vary that.
00:36:39.680 --> 00:36:50.030
It's very easy to actually change that information in the paleoseismic record or in the GLTFM approach, but oftentimes we don't have enough information to constrain that particularly well.
00:36:51.400 --> 00:36:59.630
And so next, what we do is make an initial estimate for Z naught using our initial estimate for the probability density function parameters
00:36:59.780 --> 00:37:02.560
and that R value that I just described.
00:37:02.720 --> 00:37:16.390
And so what we do once again is a maximum likelihood approach, but this time using the conditional probabilities based on the initial residual strain; this uses the equations that I showed a couple of slides ago.
00:37:16.640 --> 00:37:21.120
So then we have an initial estimate for our PDF parameters and for our initial residual strain.
00:37:22.380 --> 00:37:34.270
We then iterate the process again and re-estimate our initial PDF parameters, but this time we actually adjust the inter-event times that we're plugging into the equation.
00:37:34.420 --> 00:37:40.450
So now we adjust each inter-event time by the residual strain at the start of that inter-event time.
00:37:40.580 --> 00:37:45.350
So let's say the inter-event time was 100 years long, but there were 50 years of residual strain to start.
00:37:45.520 --> 00:37:51.490
We now say that we actually refit our parameters using 150 years, and so we adjust it.
00:37:51.500 --> 00:38:15.880
We adjust each subsequent inter-event time by the residual strain at its start, and so now we get a new estimate for our probability density function parameters. And then once again we go back to our original inter-event times and, now with our new PDF parameters, get a final estimate for the initial residual strain, once again using the original inter-event times.
00:38:15.890 --> 00:38:19.830
So the unmodified ones, but conditioned on the initial residual strain.
00:38:20.840 --> 00:38:22.450
So it's just an iterative approach.
00:38:22.560 --> 00:38:24.820
You could keep iterating it over and over again if you want to.
00:38:24.830 --> 00:38:35.530
We just did this initial two-step approach just to show how it would work, but you could iterate again until you get some stable answer as well. But let me show you now what it looks like,
00:38:35.540 --> 00:38:39.640
actually, in practice, when we apply this to a real record.
00:38:39.650 --> 00:38:47.280
So I think, as people are probably well aware, the Pallett Creek section is sort of an interesting one to examine.
00:38:47.290 --> 00:38:52.070
First of all, it's relatively long and it has some interesting clustered behavior to it.
00:38:52.430 --> 00:39:02.650
So what we're going to do here is, I have fed the Pallett Creek section into GLTFM, and then applied the algorithm on the prior slide to actually estimate the parameters.
00:39:03.220 --> 00:39:07.790
And so we get an estimate of sort of what the underlying parameters should be for the model.
00:39:07.920 --> 00:39:10.110
And then we also have here, let me just walk through.
00:39:10.120 --> 00:39:20.770
We're showing, so this stick figure here is showing when the earthquakes occurred, and the dotted line is showing how essentially the hazard rate builds between the earthquakes.
00:39:20.780 --> 00:39:24.750
You can think of it as the probability that an earthquake is going to occur building between the earthquakes.
00:39:25.300 --> 00:39:27.590
Here we have an R value of 176.
00:39:27.600 --> 00:39:36.520
So each earthquake has a drop size of 176 years, and the number below the earthquake is the estimated residual strain.
00:39:36.530 --> 00:39:46.720
So at the beginning of the sequence, it estimates about 158 years of residual strain, but you can see that strain is released rather quickly over the subsequent earthquakes.
00:39:47.110 --> 00:39:56.290
And then we have a long period where GLTFM estimates that there was really no residual strain in the system, up until 1508.
00:39:56.300 --> 00:39:57.320
Still no residual strain.
00:39:57.770 --> 00:40:07.160
We then have a very long 304-year quiescent period, where between 1508 and 1812 no large seismicity is happening.
00:40:07.430 --> 00:40:16.550
And then we have a bigger earthquake in 1812, and GLTFM estimates that after the 1812 earthquake we have residual strain of 128 years.
00:40:17.290 --> 00:40:23.790
And then again, we have another earthquake in 1857, with just a short 44- or 45-year gap between the two.
00:40:24.990 --> 00:40:26.640
But GLTFM
00:40:26.650 --> 00:40:28.920
estimates no residual strain in the system today.
00:40:30.520 --> 00:40:34.150
Now, what we want to know though is, well, this is what we think happened in the past.
00:40:34.160 --> 00:40:35.890
What's it gonna look like going forward?
00:40:36.180 --> 00:40:45.350
And so here on the bottom left I have a probability density function that is showing essentially the forecast for the current quiescent period.
00:40:45.720 --> 00:40:47.310
I just want to highlight a couple things.
00:40:47.660 --> 00:41:00.670
The first is, if we look here, we can see that the lognormal, BPT, and Weibull distributions all estimate the next earthquake would occur in about 145, 146, 147 years, which we're all slightly past now.
00:41:00.680 --> 00:41:05.130
But obviously there's a large standard deviation there; GLTFM is a little bit larger.
00:41:05.140 --> 00:41:32.740
It's about 164, and the reason why GLTFM estimates a longer quiescent period than the other models is because GLTFM assumes that the short past inter-event times that we've seen are a product of residual strain, and not necessarily something to do with the underlying probability density function.
00:41:33.300 --> 00:41:43.590
GLTFM essentially, when there is no residual strain present, will actually estimate the next inter-event time to be longer than the average of the past earthquakes.
00:41:44.300 --> 00:41:56.690
And that's because when there is a relatively short inter-event time in the past, it attributes that to residual strain, not necessarily a shorter average recurrence time.
00:41:58.920 --> 00:42:04.140
And then the other thing of interest that people really care about is the 30-year forecast.
00:42:04.150 --> 00:42:10.530
And so here, for GLTFM, we have the estimated probability of an earthquake in the next 30 years.
00:42:10.600 --> 00:42:13.670
And today, the models really don't disagree that much.
00:42:13.680 --> 00:42:21.260
I mean, it's really hard to distinguish between 35 and 31%, but they will diverge quite strongly in the future.
00:42:22.670 --> 00:42:35.940
They will diverge quite strongly in the future because GLTFM assumes, based on the Weibull, that as the quiescent period continues, earthquake probabilities should continue to increase, whereas BPT and lognormal flatten out or decline.
00:42:37.620 --> 00:42:43.950
One other thing we wanted to look at was, OK, how well does GLTFM actually forecast that
00:42:43.960 --> 00:42:47.230
very, very short inter-event time after 1812?
00:42:47.320 --> 00:42:56.130
So we essentially plugged in the Pallett Creek record up to and including 1812, but gave it no knowledge of 1857.
00:42:56.460 --> 00:43:08.860
And then here on the bottom left, you can see that GLTFM is the only model that forecasts a very, very short inter-event time, of about 68 years, compared to the others, which are closer to 150.
00:43:09.240 --> 00:43:17.650
And the reason it gives that short forecast is because it knows that there is residual strain in the system and therefore an earthquake is likely to occur sooner.
00:43:19.260 --> 00:43:35.130
Another way to look at it is to look at the estimated 45-year forecast directly after 1812, and GLTFM says there's a 38% chance that an earthquake is going to occur within 45 years, whereas the lognormal, BPT, and Weibull are less than 5%.
00:43:36.720 --> 00:44:00.260
We can also look at the uncertainties of these forecasts as well, and so here what I did is, for each model, I took the best-fit parameters and simulated 1,000 synthetic records, then estimated new parameters for each of those synthetic records, then applied those new parameters to the real Pallett Creek record and asked, what's the probability of an earthquake occurring in 45 years?
00:44:00.430 --> 00:44:08.310
So these histograms are showing the results of those simulations and the subsequent forecast for each of the models.
00:44:09.200 --> 00:44:14.170
The more the histogram is to the left, the less likely an 1857 earthquake is.
00:44:14.180 --> 00:44:15.750
Essentially, the probability is quite low.
00:44:15.860 --> 00:44:25.460
More points to the right means it's more likely, and you can see here that the lognormal, BPT, and Weibull are pretty confident that 1857 wasn't going to happen.
00:44:25.470 --> 00:44:29.790
They are pretty confident that you were not going to have an earthquake within 45 years.
00:44:30.220 --> 00:44:41.160
The bottom right here is showing the GLTFM model, and you can see a large spread here, but most of the results are essentially saying that, yeah, an earthquake in the next 45 years, well, it's not a given,
00:44:41.670 --> 00:44:48.220
but it's certainly not something that could be excluded; you know, most of the results are between about 20 and 80%, which is a pretty good chance.
00:44:48.230 --> 00:44:52.100
I mean, rolling a die and getting a six is less than 20% chance.
00:44:52.110 --> 00:44:56.400
So I mean it's a pretty good chance that you would get an earthquake in the next 45 years.
00:44:56.570 --> 00:45:08.920
So GLTFM is essentially confident that an earthquake occurring soon is certainly a real possibility, and so we also looked at other paleoseismic records.
00:45:08.930 --> 00:45:23.380
Obviously the title of this talk is about applying GLTFM to other paleoseismic records, so here we plugged in the Hayward fault, and to calculate the size of the strain drop for each earthquake, we took the loading rate information and factored in the creep rate.
00:45:23.390 --> 00:45:25.550
Essentially, some of that loading is being released by creep.
00:45:25.560 --> 00:45:40.400
We basically looked at the average slip you would get from a magnitude 6-point-something earthquake to come up with an R value here, and you can see that you really only have residual strain for the first few earthquakes in the series.
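As a back-of-the-envelope sketch of that R-value calculation (all numbers below are illustrative assumptions, not the values actually used for the Hayward fault):

```python
loading_rate_mm_yr = 9.0        # assumed long-term loading rate (mm/yr)
creep_rate_mm_yr = 5.0          # assumed aseismic creep rate (mm/yr)
avg_coseismic_slip_mm = 600.0   # assumed average slip in a magnitude-6.x event (mm)

# Only the non-creeping part of the loading is stored as elastic strain between events.
effective_accumulation_mm_yr = loading_rate_mm_yr - creep_rate_mm_yr
R_years = avg_coseismic_slip_mm / effective_accumulation_mm_yr
print(f"R is roughly {R_years:.0f} years of accumulated strain released per event")
```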
00:45:40.530 --> 00:45:55.580
And then today you have no residual strain, and if we look down at the bottom right, at our estimated 30-year forecast today, GLTFM is smaller, but in the next 50 years or so it will surpass the BPT and lognormal models.
00:45:55.590 --> 00:45:59.610
And that is purely based on the functional form of the probability density functions.
00:46:01.590 --> 00:46:03.490
We looked at Cascadia as well.
00:46:04.200 --> 00:46:08.210
Here we're just looking at the full margin ruptures in Cascadia.
00:46:08.220 --> 00:46:12.230
And so these essentially would be the magnitude nines from the Goldfinger data set.
00:46:12.720 --> 00:46:22.070
There are data available for magnitude eights as well for smaller sections which we haven't looked at yet, but like I said, you can incorporate a variable magnitude size in there.
00:46:22.320 --> 00:46:24.820
Here we can see we have some residual strain in the middle.
00:46:25.840 --> 00:46:29.570
But today, once again, the model assumes there's no residual strain left in the system.
00:46:30.260 --> 00:46:32.630
The probability in the next 30 years is actually quite low.
00:46:32.640 --> 00:46:36.290
It's about 5% and it takes a long time.
00:46:36.300 --> 00:46:41.630
It's not going to surpass the BPT and lognormal models for the next 300 years.
00:46:41.640 --> 00:46:46.650
Essentially, it will be over 300 years before it overcomes those models as well.
00:46:49.030 --> 00:46:51.100
Lastly, we took a look at the Nankai Trough.
00:46:51.110 --> 00:46:53.590
So this is the AB section of the Nankai Trough.
00:46:53.600 --> 00:47:02.620
As I mentioned, there's a lot of recent discussion in Japan about what the hazard should be here, and depending on the approach, it can be quite different.
00:47:02.930 --> 00:47:09.500
GLTFM essentially estimates that there's been no residual strain in the system at any point for the recent time series.
00:47:10.090 --> 00:47:19.490
Obviously that's based on some assumptions: one, that we have these dates well known, which we probably do in the case of Japan; but two, that the sizes of the earthquakes are also properly constrained.
00:47:20.420 --> 00:47:25.520
And so here we're treating all the earthquakes as the same size, and it assumes that there's no residual strain here.
00:47:26.400 --> 00:47:36.210
And if we look at this in a little more detail, we can see that once again GLTFM has essentially the same probability today, but that will change in the future.
00:47:36.460 --> 00:47:39.770
Now, obviously, residual strain could emerge for future interevent times.
00:47:39.780 --> 00:47:44.990
So if you have a really, really long interevent time, the next earthquake could have some residual strain, so it's something to consider as well.
00:47:46.740 --> 00:47:51.380
If folks are interested in playing around with any of this code, it's all on GitHub.
00:47:51.420 --> 00:47:52.330
It's in Python.
00:47:52.340 --> 00:47:53.370
It's all open source.
00:47:54.120 --> 00:47:58.180
I provided some sample data to work with.
00:47:58.190 --> 00:48:04.470
There's also some, hopefully enough documentation to get people up and running with it, but yeah, please, please, please check it out.
00:48:04.480 --> 00:48:11.790
Download it, play around with it, and if you find something that you think doesn't work particularly well, or something you'd like to see in it, let me know.
00:48:11.860 --> 00:48:13.090
Happy to modify it.
00:48:13.300 --> 00:48:15.120
I really want this to be a resource for the community.
00:48:16.910 --> 00:48:33.400
So, in summary, just to wrap up everything here: the reason why you'd want to use the GLTFM model, and why it's beneficial, is that it assumes that the clusters we see for earthquakes really reflect the state of the fault system and aren't just occurring by chance, which is what you assume with a conventional model.
00:48:33.660 --> 00:48:39.200
And the reason why GLTFM can do that is because it includes the effect of residual strain in its forecasts.
00:48:40.090 --> 00:48:55.760
You can also, as I showed here when we calculated the strain drop size, include additional tectonic information, and the GLTFM framework can also be modified to include other things like variable earthquake magnitudes.
00:48:55.770 --> 00:49:01.580
So if you actually have information that maybe this one is a 9 and that one is an 8, you can include that information for the different earthquakes.
00:49:01.850 --> 00:49:03.080
You can look at stress transfer.
00:49:03.090 --> 00:49:09.880
You can look at how 2 faults would interact with each other and look at how aseismic slip may play a role as well.
00:49:10.630 --> 00:49:12.570
Obviously there's a lot of uncertainties in all this.
00:49:12.580 --> 00:49:15.140
There's a lot of uncertainties in the earthquake magnitudes.
00:49:15.150 --> 00:49:20.940
We don't have great constraints on paleoseismic magnitudes, and obviously there are also the dates.
00:49:20.990 --> 00:49:30.000
The earthquake dates also have uncertainties to them, and one of the things we're going to explore more is how that impacts the forecasts that GLTFM produces.
00:49:30.830 --> 00:49:42.410
But beyond the specifics of GLTFM, we really want to stress that this indicates a way we can actually improve our recurrence models and try to incorporate this long-term memory into the forecasts that we create.
00:49:43.700 --> 00:49:53.970
I'll just leave this last slide up here, but if you're interested in any of the papers we've published previously, we have a Tectonophysics one and the recent one that just came out in BSSA.
00:49:54.360 --> 00:50:11.910
And then lastly, if you want to access the GLTFM GitHub code, these two QR codes will take you to those various resources, and if you're interested in seeing, as I mentioned, the current GLTFM manuscript, which describes the recent approach in more detail, it is currently in review.
00:50:12.520 --> 00:50:15.570
But if you would like to see it, I'd be happy to send you a preprint.
00:50:15.580 --> 00:50:16.570
Just send me an email.
00:50:16.840 --> 00:50:17.450
Thank you so much.
00:50:28.100 --> 00:50:33.320
I think it's time we open it up for questions online or in the room.
00:50:38.810 --> 00:50:39.140
Tom.
00:50:43.240 --> 00:50:45.460
Thank you for a very interesting talk.
00:50:46.860 --> 00:50:49.440
No, it's splashing.
00:50:49.450 --> 00:50:50.860
Well, my comment is:
00:50:52.590 --> 00:50:56.780
there will be some more fuzziness that you've got to include in the model.
00:50:56.670 --> 00:50:56.860
Yeah.
00:50:58.970 --> 00:50:59.550
That's OK.
00:51:02.320 --> 00:51:08.550
For the smaller earthquakes, or the magnitude 7 earthquakes here relative to the 8s,
00:51:19.750 --> 00:51:19.950
Yeah.
00:51:12.300 --> 00:51:23.560
the strain drop will be associated with a fairly small part of the fault compared to where the larger ones are going on.
00:51:24.080 --> 00:51:24.380
Umm.
00:51:38.520 --> 00:51:39.140
Yes, yes.
00:51:25.650 --> 00:51:41.450
So you really need to know — it's not just a matter of strain drop, it's a strain drop in this particular place, on this particular part of the fault.
00:51:42.370 --> 00:51:42.880
Yeah.
00:51:42.950 --> 00:51:44.340
No, that's a good question, sir.
00:51:44.350 --> 00:51:49.440
Yeah, treating it as a point source versus an area source, or the fault area.
00:51:49.450 --> 00:51:50.070
Yeah, that's an issue.
00:51:52.160 --> 00:51:55.610
That means you need to know the location of these earthquakes.
00:52:09.040 --> 00:52:09.300
Umm.
00:51:57.430 --> 00:52:12.220
Even more so for the bigger earthquakes, because, well, the 1857 earthquake overprinted a lot of the faulting evidence from this 1812 earthquake.
00:52:14.130 --> 00:52:18.440
Not quite sure where the geological dating sits over there.
00:52:22.610 --> 00:52:22.850
Yeah.
00:52:19.180 --> 00:52:25.590
So the faulting offsets were probably overridden by the 1857 earthquake, but —
00:52:27.190 --> 00:52:31.920
So you don't really know where to put the 1857 in the data.
00:52:32.270 --> 00:52:36.970
So anyway, you kind of get the drift of what I'm talking about.
00:52:37.100 --> 00:52:37.810
Yeah.
00:52:37.860 --> 00:52:40.140
Oh no, that's certainly a limitation.
00:52:40.150 --> 00:53:01.780
I think that ideally what you would do is essentially correlate — you would have multiple paleoseismic sites along different sections of the fault that would then give you that insight, saying, well, was the earthquake here or was it 50 kilometers away on the fault, and that would give you a relative sense of the size of the earthquakes.
00:53:02.180 --> 00:53:03.640
And that gets to the last point.
00:53:03.650 --> 00:53:04.450
Well, then you could.
00:53:04.500 --> 00:53:23.320
The idea being that, OK, along the different sections, you essentially could ask, well, how would a 7 in this one section increase things — and that's sort of looking back at the Coulomb stress interaction there — you could say, OK, we think it increases the stress there, and then you have to ask, well, what does that do to the probability of an earthquake nearby?
00:53:23.330 --> 00:53:24.840
And that's something we want to explore.
00:53:24.850 --> 00:53:34.020
We haven't gotten into it yet, but it would be something interesting, and obviously that's the fundamental idea of how Coulomb stress transfer really impacts earthquake probabilities.
00:53:34.190 --> 00:53:40.490
But we could incorporate those approaches into this approach of looking at the full, long histories.
00:53:42.300 --> 00:53:44.140
Yeah, maybe you can update me here.
00:53:45.800 --> 00:53:47.520
Information I've simply forgotten.
00:53:51.320 --> 00:53:56.830
The size of the 1812 — is that estimate mainly coming from felt data?
00:53:56.840 --> 00:54:01.520
I think — or do they have a geological estimate of the size?
00:54:02.590 --> 00:54:07.580
I think that it's mainly felt reports; the geologic evidence is fuzzy.
00:54:08.050 --> 00:54:15.390
I think there may be some there, but I think it's a little fuzzy, and I know that the magnitude constraint is not —
00:54:15.990 --> 00:54:21.590
The magnitude estimates I've seen are a little bit smaller.
00:54:21.650 --> 00:54:37.020
So if you say 1857 is a 7.9, I think that maybe 1812 is 7.5-ish, but obviously, you know, with these magnitude relationships there's a lot of uncertainty trying to work with intensity.
00:54:37.030 --> 00:54:41.740
So using felt reports to estimate magnitude, and even using paleoseismic data, it's probably smaller.
00:54:41.790 --> 00:54:42.720
How much smaller?
00:54:42.760 --> 00:54:44.230
That's a tough question.
00:54:45.280 --> 00:54:47.790
I'll defer to the folks who are doing the actual
00:54:47.800 --> 00:54:48.670
studies themselves.
00:54:50.190 --> 00:54:52.070
Thanks again for very interesting talk.
00:54:53.090 --> 00:54:58.480
And you know you don't wanna work yourself out of a job at a young age.
00:54:59.950 --> 00:55:02.780
It will keep you busy for the next 30 or 40 years.
00:55:03.970 --> 00:55:09.050
Thank you very much for your time and effort, so thanks again.
00:55:09.670 --> 00:55:10.130
Oh, thank you.
00:55:12.620 --> 00:55:15.640
It looks like Alex Hatem has a question online.
00:55:15.650 --> 00:55:17.520
Alex, you wanna unmute yourself and ask your question?
00:55:18.620 --> 00:55:19.030
Yeah.
00:55:21.720 --> 00:55:22.250
No problem.
00:55:19.040 --> 00:55:22.840
Hi Jamie, thanks for a really interesting talk, quite enjoyed it.
00:55:23.320 --> 00:55:25.150
There's a lot of questions I could ask.
00:55:31.950 --> 00:55:32.190
Yeah.
00:55:26.040 --> 00:55:50.360
One of them was actually the question in the room about incorporating a magnitude-frequency distribution into your work, but I guess another question I could ask is about using the paleo-earthquake chronologies as a test case. As you showed, aside from 1812 and 1857, there's a lot of uncertainty in the actual ages of the earthquakes.
00:55:50.780 --> 00:55:50.940
Yeah.
00:55:53.230 --> 00:55:53.450
Yeah.
00:55:50.370 --> 00:55:53.980
And I noticed you were depicting them all as kind of vertical lines without uncertainty.
00:55:54.640 --> 00:55:57.400
Totally understanding that's complicated to get into.
00:55:57.690 --> 00:55:59.380
How to incorporate that uncertainty?
00:55:59.390 --> 00:56:03.470
I'm wondering if you've tried to implement maybe your generalized model.
00:56:04.250 --> 00:56:11.380
With maybe a probability density function for the actual age of the earthquake, and sampling from within that uncertainty.
00:56:11.870 --> 00:56:12.280
Yeah, I know.
00:56:12.290 --> 00:56:17.790
We actually haven't done that yet, but that's essentially, like, two-weeks-from-now work to be done.
00:56:18.230 --> 00:56:19.620
No, that's where we were going.
00:56:19.630 --> 00:56:32.050
We want to see how stably it really performs when you have that uncertainty in there, because obviously your interpretation of clustering changes depending on where things fall in time.
00:56:32.210 --> 00:56:40.910
So yeah, running some sort of Monte Carlo simulation where we sample from those probability density functions for the actual underlying dates is something that needs to be done.
00:56:41.980 --> 00:56:49.210
I think that's where we're going to go next, because it's clearly something that needs to be shown: this is the uncertainty we're looking at.
00:56:49.220 --> 00:57:00.920
This is really the full uncertainty of your forecast moving forward, not just assuming the earthquakes are points in time, because although they are, we don't have great confidence in those points in time for the really old ones.
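A minimal sketch of that Monte Carlo idea, assuming (hypothetically) that each event age can be described by a normal distribution; the ages and uncertainties below are placeholders, not a real site chronology.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical event years (CE) and dating uncertainties (yr), not a real chronology:
mean_ages = np.array([650., 850., 1050., 1350., 1550., 1800.])
sigmas = np.array([60., 50., 40., 30., 20., 5.])

for _ in range(3):                                 # a few sample chronologies
    ages = np.sort(rng.normal(mean_ages, sigmas))  # draw one plausible set of event ages
    interevent = np.diff(ages)                     # these interevent times feed the recurrence model
    print(np.round(interevent))
```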
00:57:03.050 --> 00:57:03.680
I don't think you.
00:57:04.140 --> 00:57:04.460
Thank you.
00:57:08.400 --> 00:57:10.290
Other questions in the room or online?
00:57:12.510 --> 00:57:13.940
Ohh, there's one in the chat.
00:57:13.950 --> 00:57:14.760
Give me just a second.
00:57:16.680 --> 00:57:17.490
Umm.
00:57:18.340 --> 00:57:20.250
Let's see a more general question.
00:57:20.360 --> 00:57:28.590
We are obviously interested in plumbing the other uncertainties in age and displacement, and some will want variable strain accumulation.
00:57:28.600 --> 00:57:32.290
How implementable will all these changes be?
00:57:32.380 --> 00:57:33.380
Thanks for an interesting talk.
00:57:34.150 --> 00:57:34.640
Yes.
00:57:34.650 --> 00:57:45.350
So right now, the code as it is up there actually can deal with the displacements being different; that works and is in the code currently.
00:57:45.360 --> 00:57:47.020
So we actually have that running, which is nice.
00:57:47.130 --> 00:57:56.460
So you could say, if you want, that the most recent earthquake releases 100 years of accumulated strain and the prior one releases 75.
00:57:56.530 --> 00:57:57.360
You can plug that in.
00:57:57.370 --> 00:57:58.120
That actually works.
00:57:58.130 --> 00:57:59.140
That works right now.
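As a toy illustration of that variable-release bookkeeping (not the actual GLTFM implementation, which, as discussed next, is formulated in terms of probability rather than strain directly), with made-up interevent times and per-event releases, and assuming residual strain is floored at zero:

```python
# Made-up interevent times (yr) and per-event releases (in years of accumulated strain):
interevent_yr = [150.0, 60.0, 180.0, 90.0]
release_yr = [100.0, 75.0, 120.0, 100.0]

residual = 0.0  # residual strain carried over from before the first listed event
for dt, r in zip(interevent_yr, release_yr):
    accumulated = residual + dt            # strain (in years of loading) available at the event
    residual = max(accumulated - r, 0.0)   # whatever the event does not release carries forward
    print(f"event after {dt:5.0f} yr: released {r:5.0f} yr-equivalents, residual now {residual:5.0f}")
```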
00:57:59.150 --> 00:58:06.540
Now, what we're assuming in terms of how strain accumulates —
00:58:06.550 --> 00:58:13.010
we're assuming that over the timeframe of our paleoseismic records you kind of have a constant strain accumulation.
00:58:13.220 --> 00:58:16.830
So what we're assuming right now is that obviously we're not.
00:58:16.840 --> 00:58:18.520
We're not specifically calculating strain.
00:58:18.530 --> 00:58:27.840
What we're actually modeling is how probability accumulates, and we're saying that how probability changes over the course of the entire record is going to essentially stay the same.
00:58:28.010 --> 00:58:29.990
Now the rate of probability change.
00:58:30.000 --> 00:58:32.190
I'll show you that you can kind of see it in some of these.
00:58:32.230 --> 00:58:37.980
So you can see here that the rate of probability change actually is changing with time.
00:58:37.990 --> 00:58:46.570
Essentially, initially after an earthquake, in this case, the probability accumulates rather slowly, but then as the quiescent period continues, that probability increases.
00:58:46.700 --> 00:59:02.380
But this overall shape does not change over the course of the paleoseismic record, and one of the questions that we really have is: what's the appropriate probability density function to actually describe how that probability changes?
00:59:03.120 --> 00:59:04.070
Why are we assuming the
00:59:04.080 --> 00:59:04.550
Weibull? Just
00:59:04.560 --> 00:59:12.870
because it's sort of a neat one to work with, because you can design it so that as the quiet period continues, the probability increases.
00:59:13.150 --> 00:59:15.420
But we don't necessarily know if that's the right model.
00:59:15.550 --> 00:59:22.900
And so the question we have is well, as time increases after an earthquake, how should that probability really be changing?
00:59:22.950 --> 00:59:24.450
And that's sort of one of the outstanding questions.
00:59:24.460 --> 00:59:25.080
We don't know.
00:59:25.090 --> 00:59:30.000
Obviously there's a lot going on — you have postseismic relaxation going on.
00:59:30.010 --> 00:59:32.120
You have all these aseismic processes going on.
00:59:32.310 --> 00:59:46.910
How does it really change between earthquakes? The Weibull is nice because it's flexible and you can get these different sorts of shapes, and you can do the same thing with BPT, but we really don't know how that probability should be changing with time.
00:59:47.370 --> 00:59:50.380
And that's sort of one of the things we would like to get a better handle on.
00:59:51.220 --> 00:59:54.890
And that wasn't quite directly to the question, but it sort of sparks something we've been thinking about a lot.
00:59:58.710 --> 01:00:01.390
Other questions? Anne Marie: I can ask a —
01:00:01.430 --> 01:00:03.000
Perhaps a naive question, because I don't —
01:00:03.270 --> 01:00:09.700
I haven't thought about paleoseismic records, but could you test your model with an instrumental record and just sort of scale everything down to
01:00:11.780 --> 01:00:12.540
Yeah, you could.
01:00:13.310 --> 01:00:13.810
Yeah.
01:00:12.030 --> 01:00:14.810
the strain, and put in the magnitudes?
01:00:15.260 --> 01:00:16.230
Yeah, you could do it.
01:00:16.240 --> 01:00:25.790
You could do it on a really small scale if you wanted to, like if you found a place with, I mean, repeating earthquakes.
01:00:26.020 --> 01:00:34.610
The tricky thing is, to really do any powerful statistical test you would need like 50 or 100 at a minimum.
01:00:34.620 --> 01:00:40.750
If you really wanted to do any traditional statistical test, you're going to need a lot more earthquakes.
01:00:40.760 --> 01:00:46.290
And even in terms of repeating earthquakes, probably more than we have.
01:00:46.740 --> 01:00:48.610
So that's that's sort of one of the limitations.
01:00:48.620 --> 01:00:48.910
So.
01:00:48.920 --> 01:01:01.490
So that's been a trouble with working with any of these forecasts: the time-dependent models have been shown to work better than the exponential model, which assumes a constant probability with time, and that makes sense.
01:01:01.560 --> 01:01:04.290
You have a large earthquake — say you have a magnitude
01:01:04.300 --> 01:01:04.570
nine.
01:01:04.580 --> 01:01:11.390
You're not going to have another magnitude 9 on the same fault the next day, like there's no reason you should have that, and so the time dependent makes more sense.
01:01:11.800 --> 01:01:19.590
But beyond that, it's really, really hard to distinguish which model performs better, and you just really don't have enough data to work with.
01:01:20.020 --> 01:01:30.110
And that's just kind of a limitation with testing GLTFM. I mean, I'll be the first to acknowledge I showed you one sequence where it works really nicely, and that's great.
01:01:30.280 --> 01:01:47.190
It works fine with the rest of them, but it works on par with the other models; we just don't have enough data to say yes, this is definitely the right model. But it does try to give some physical meaning to what is being observed, which is something the current models don't do.
01:01:52.330 --> 01:01:54.680
Way cool talk.
01:01:55.270 --> 01:02:00.400
As I understood your presentation, you incorporated 2 main things.
01:02:02.790 --> 01:02:03.060
Umm.
01:02:00.710 --> 01:02:21.550
One is residual strain — that is, for example, if a really big earthquake occurs, like a magnitude 9 in a subduction zone, as in Tohoku, then that changes the way in which we would do the normal probabilistic recurrence models with Weibull or lognormal.
01:02:21.740 --> 01:02:21.920
Yeah.
01:02:21.660 --> 01:02:35.570
And second, you account for strain accumulation, where the longer the interevent time is, the more the probability increases rather than flattening out,
01:02:35.900 --> 01:02:36.080
Yeah.
01:02:35.580 --> 01:02:38.070
as these lognormal distributions would show.
01:02:41.330 --> 01:02:42.210
That said.
01:02:54.760 --> 01:02:55.040
Umm.
01:02:44.240 --> 01:03:01.210
And then you touched on this as well: uncertainties, first of all, in the prediction, say, of a lognormal or Weibull distribution are very large given the short period of the recurrence record.
01:03:01.340 --> 01:03:01.540
Yeah.
01:03:01.380 --> 01:03:12.390
So say there's a 67% chance of a magnitude 6.9 in the Bay Area occurring over the next 30 years; at the 95% confidence level,
01:03:12.400 --> 01:03:15.320
it could be 5% or 98%.
01:03:16.350 --> 01:03:16.840
Yeah, no.
01:03:15.330 --> 01:03:16.960
I'm making up the numbers, but you know.
01:03:19.250 --> 01:03:20.790
Yeah, and.
01:03:19.180 --> 01:03:27.240
And similarly for your approach, which I really like,
01:03:27.250 --> 01:03:34.730
there are similarly large uncertainties because the data is just way, way too limited.
01:03:34.960 --> 01:03:35.120
Yeah.
01:03:35.080 --> 01:03:37.510
So this is a comment so far.
01:03:37.960 --> 01:03:49.730
My question is: we at the USGS are charged with actually making these probabilistic assessments and distributing them to the public.
01:03:57.170 --> 01:03:57.490
Mm-hmm.
01:03:50.000 --> 01:03:58.490
And my view is we don't emphasize the uncertainties as much as we perhaps should because they're very wide.
01:03:59.160 --> 01:04:13.440
How ready is what you're doing for prime time, and applicable to the USGS mission to inform the public,
01:04:12.100 --> 01:04:13.740
to get the full uncertainty?
01:04:14.640 --> 01:04:16.160
Yeah, I mean, so, right, right now.
01:04:16.170 --> 01:04:16.750
I mean, that's a good.
01:04:16.760 --> 01:04:19.870
So right now, as the code is — I'll pull this back up.
01:04:19.880 --> 01:04:33.120
So as the code is right now, I would not use it for the full uncertainties of your forecast, because of what you would really need to get — I mean, I've shown one of the examples of uncertainties here.
01:04:33.130 --> 01:04:37.340
So I can use simulations to create what your range of forecasts is.
01:04:37.590 --> 01:04:38.670
The next step would be, second:
01:04:38.680 --> 01:04:40.720
what are the uncertainties of the dates?
01:04:40.730 --> 01:04:43.490
And then you would suddenly start getting a full suite in there.
01:04:43.500 --> 01:04:44.120
And so it's close.
01:04:44.130 --> 01:04:49.180
I'd say it's like 75% there in terms of the uncertainties that you would need to get that in.
01:04:49.330 --> 01:04:56.510
But yeah, the next steps to add in are sort of saying, hey, let's take in uncertainties with the actual ages of the earthquakes.
01:04:56.520 --> 01:04:56.920
Like what?
01:04:56.930 --> 01:04:57.930
When do they actually occur?
01:04:57.940 --> 01:05:14.400
Like, is it a 100-year interevent time or is it 75? And then the last piece would be thinking about the actual earthquake sizes, the idea being that there's going to be uncertainty in these as well —
01:05:14.410 --> 01:05:14.760
so, what
01:05:14.770 --> 01:05:21.380
these drop sizes are, and so you can inform that — you can take a probably more informed approach on this.
01:05:21.390 --> 01:05:31.480
And so one approach would be to say, OK, let's say in Pallett Creek here everything we're probably observing is a 7 or above.
01:05:32.010 --> 01:05:39.810
And then you could sample from a probability density function that covers magnitude 7 and above earthquakes.
01:05:39.820 --> 01:05:43.260
So then you could say, OK, well, what is the actual probability density function for these early ones?
01:05:43.270 --> 01:05:45.950
You could assign some random sizes as well to it.
01:05:45.960 --> 01:05:49.720
The ones you have well constrained you could use, but then you could also.
01:05:49.730 --> 01:05:52.400
There's a lot of additional uncertainty that can be added in.
01:05:52.990 --> 01:05:59.380
Some of it is your standard uncertainty with the times of the earthquakes, but the actual sizes are something to consider as well.
01:05:59.390 --> 01:06:07.590
And that's another piece that we're going to explore, but the framework has been set up so that it's not hard to essentially just add a small module in there.
01:06:07.600 --> 01:06:17.270
Essentially, in the code, it's set up so you can say, OK, well, instead of having a fixed R value, sample it here, and it's just like three lines of code to add, and it'll sample from whatever distribution you want to use.
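A tiny sketch of that "sample R instead of fixing it" idea; the lognormal distribution and its parameters are arbitrary placeholders, not anything from the actual code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_events = 10
# Instead of a fixed R (say 100 yr of strain released per event), draw a value per event:
R_samples = rng.lognormal(mean=np.log(100.0), sigma=0.3, size=n_events)
print(np.round(R_samples))
```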
01:06:18.060 --> 01:06:29.410
But I think that would get you closer to what you would actually want out of it, rather than essentially a one-line forecast — like, this is our forecast. You want a suite, and it's not far from that.
01:06:30.220 --> 01:06:34.480
But there are a couple more things to include in the code to actually do that.
01:06:36.030 --> 01:06:36.430
Thanks.
01:06:39.680 --> 01:06:42.740
Alrighty, last call on questions, anybody?
01:06:45.400 --> 01:06:48.860
I think with that then we will thank our speaker one last time.
01:06:52.910 --> 01:06:53.580
Thank you everyone.
01:06:53.590 --> 01:06:54.400
Appreciate the invitation.
01:06:55.260 --> 01:06:56.070
Thanks for speaking.
01:06:56.080 --> 01:06:57.130
It was a great talk.