WEBVTT 00:00:00.000 --> 00:00:07.000 Good morning, everyone. Welcome back to the Northern California Earthquake Hazards Workshop. 00:00:07.000 --> 00:00:13.000 [sound made] That's my impression of a muppet. So today we have another one of the sessions that Josie Nevitt suggested. 00:00:13.000 --> 00:00:19.000 We're going to take a location and we're going to look at it from as many different disciplines as possible. 00:00:19.000 --> 00:00:30.000 And where are we going to go? Well, as that song just told you, we are going to head to San Jose, if you know the way, and to get us going 00:00:30.000 --> 00:00:35.000 we have our wonderful moderators, Vicki Langenheim and Heidi Stenner. 00:00:35.000 --> 00:00:38.000 [makes sound] woohoo. 00:00:38.000 --> 00:00:42.000 Hi! Thanks, Sarah! My name's Vicki Langenheim. I'm with the USGS 00:00:42.000 --> 00:00:46.000 and I will be co-moderating this session on 00:00:46.000 --> 00:00:59.000 "Do you know the way to San Jose?" It will consist of four recorded talks, 15 min each, and you're welcome to use the chat to ask questions during the talks, 00:00:59.000 --> 00:01:18.000 but please try and limit them to clarification questions that the speaker can answer, so that we can have a vigorous and exciting discussion at the end of the four talks. So I'll now let Heidi introduce herself and the first speaker. 00:01:18.000 --> 00:01:19.000 Thanks, Vicki. Yeah, I'm Heidi Stenner, 00:01:19.000 --> 00:01:20.000 everybody, nice to see you, those of you who I can see. 00:01:20.000 --> 00:01:28.000 I'm Heidi Stenner and I'm a geologist at GeoHazards 00:01:28.000 --> 00:01:37.000 International. I don't know if you've heard of it, but it's a small nonprofit working to reduce the risks from geologic hazards. 00:01:37.000 --> 00:01:41.000 So let's get started. I'll introduce the first talk, which is by Julien Marty from UC Berkeley and David 00:01:41.000 --> 00:02:00.000 Oppenheimer of the USGS, and it's called "2022 M5.1 Alum Rock Event – Background and Analysis." 00:02:00.000 --> 00:02:10.000 Good morning. In this presentation we will give a brief overview of the magnitude 5.1 Alum Rock event that occurred at the end of last year, 00:02:10.000 --> 00:02:22.000 on October 25th, but before I start I really wanted to thank David for all the help he provided me 00:02:22.000 --> 00:02:30.000 in preparing this presentation, and especially for all the historical information about the Calaveras. 00:02:30.000 --> 00:02:48.000 So this presentation will be divided into two main parts: the first one will be more about the earthquake process itself, and the second part will be more about the early warning alert that was issued for this event. 00:02:48.000 --> 00:02:53.000 So I'm pretty sure that all of you are already very familiar with the San Andreas 00:02:53.000 --> 00:03:12.000 Fault System in the Bay Area, but quickly: the Calaveras fault is a major branch of the San Andreas Fault System, it is shown in purple on this map, and it extends over about 120 kilometers, I think, and as you can see in this figure 00:03:12.000 --> 00:03:22.000 it starts from the San Andreas fault near Hollister and it terminates close to Danville at its northern end. 00:03:22.000 --> 00:03:41.000 So, in fact, most of the earthquakes that were widely felt in the south of the San Francisco Bay Area over the last century have occurred along the Calaveras fault, I mean, apart from the 1989 Loma Prieta earthquake and all its 00:03:41.000 --> 00:03:47.000 foreshocks and aftershocks. So that's a very active fault.
00:03:47.000 --> 00:04:11.000 And so along this Calaveras fault there are some persistent seismic fault areas, and David and his colleagues, in 1990 and then later in 2010, identified in particular six stuck patches numbered from one to six 00:04:11.000 --> 00:04:33.000 from south to north, as you can see on the figure, where magnitude 5 to 6 earthquakes occurred, with the most recent event before the 2022 event being the 2007 magnitude 5.4 Alum Rock event in Zone 5. So in the case of the 00:04:33.000 --> 00:04:44.000 2022 event, I mean, this event falls in Zone 3, and as you can see in this zone, there are two 00:04:44.000 --> 00:05:05.000 known historical events: one in 1911, with a magnitude estimated to be slightly over 6, and one in 1984, the Morgan Hill event, with a magnitude of 6.2. As you can see on the map on the right, the epicenter of 00:05:05.000 --> 00:05:07.000 the 2022 event is almost identical 00:05:07.000 --> 00:05:14.000 to that of the magnitude 6.2 Morgan Hill event. 00:05:14.000 --> 00:05:29.000 So what we are going to look at in the next slides is whether the 2022 event can be considered a repeat of the 1911 and 1984 events. 00:05:29.000 --> 00:05:43.000 But first we can start by looking at some InSAR measurements made over the last 15 years, and we can see that, in fact, there is indeed a stuck patch along Zone 3. 00:05:43.000 --> 00:05:52.000 So if you look at the bottom panel, which represents measurements along the Calaveras fault, we can see that at the red star location, which corresponds to the 00:05:52.000 --> 00:06:18.000 epicenter of the 2022 earthquake, the fault is indeed stuck, at least at the surface, so the fault is not creeping at that location, and that's the location where the earthquake 00:06:18.000 --> 00:06:25.000 occurred. So now let's look at the hypocenters, on the top figure first. 00:06:25.000 --> 00:06:40.000 For the two stars in the middle, the blue star represents the 1984 magnitude 6.2 earthquake and the red star represents the 2022 event. 00:06:40.000 --> 00:06:45.000 So we can see that those two hypocenters are not exactly coincident; 00:06:45.000 --> 00:06:53.000 I think they are about 560 meters apart, but they are probably close enough to be considered similar. 00:06:53.000 --> 00:06:57.000 The bottom plot is basically the same plot, 00:06:57.000 --> 00:07:02.000 but with information about time, and the other large yellow star that can be seen on the left corresponds to the 2007 00:07:02.000 --> 00:07:19.000 magnitude 5.4 Alum Rock event in Zone 5, which was itself a repeat of a magnitude 5.5 earthquake in 1955.
00:07:19.000 --> 00:07:39.000 So if we now look at the source directivity, in this figure, which was made by Taka'aki just after the earthquake, we can see the source function that was estimated empirically, and the source directivity is pretty clear, with all the 00:07:39.000 --> 00:07:58.000 energy arriving as a single packet in the southeast, as [indiscernible], whereas in the northwest, as [indiscernible], it can be seen that there are different [indiscernible] of energy arriving one after another, so the rupture direction is really the same 00:07:58.000 --> 00:08:16.000 as for the 1984 earthquake, but because the magnitude is lower than for the 1984 event, we can assume that the 2022 earthquake is probably a partial rupture of the 1911 and 1984 events, 00:08:16.000 --> 00:08:21.000 and, of course, with a shorter repeat interval. 00:08:21.000 --> 00:08:26.000 So this raises the question of why this was only a partial rupture, and when talking to David, he suggested two possibilities. 00:08:26.000 --> 00:08:40.000 First, of course, the rupture could have ended early, and so might have been stopped by an asperity before it could continue farther, 00:08:40.000 --> 00:09:06.000 for example. But alternatively, it could also be that the amount of slip in the 1984 earthquake was somehow incomplete in this area relative to the displacement elsewhere on the fault, and this could be supported by some inversion results that were shown in David's 1990 and 2010 00:09:06.000 --> 00:09:11.000 papers, which show, in fact, a peak of 80 cm 00:09:11.000 --> 00:09:17.000 displacement in the area of the 2022 quake compared to 100 cm 00:09:17.000 --> 00:09:29.000 further to the south. If so, then the 2022 rupture could, in fact, be addressing a slip deficit from the 1984 event. 00:09:29.000 --> 00:09:47.000 So, yes, in conclusion of this part of the presentation, this latest earthquake in 2022 highlighted how difficult it is to predict when the next quake will be, and what magnitude it will be, even in areas 00:09:47.000 --> 00:09:54.000 where we have information and records about historical earthquakes. 00:09:54.000 --> 00:10:00.000 Another aspect of this earthquake that we wanted to highlight 00:10:00.000 --> 00:10:11.000 is that because it happened in the Bay Area, it produced a very large data set for the seismic and engineering community. 00:10:11.000 --> 00:10:12.000 And another interesting fact is that the closest station, MHC, 00:10:12.000 --> 00:10:37.000 which is part of the Berkeley Digital Seismic Network, was just about 4 km away from the epicenter, and what's interesting is that, in fact, we had just upgraded that station a few weeks before this earthquake with new 00:10:37.000 --> 00:10:41.000 infrastructure and equipment, including a new STS- 00:10:41.000 --> 00:10:47.000 7 sensor. So this upgrade was very timely, and it led to a very high quality recording at that site 00:10:47.000 --> 00:11:03.000 for this event. So, coming back to the data available, for example, to the engineering community, this slide was shared by Lisa, 00:11:03.000 --> 00:11:25.000 but this is just an example of strong motion data that was collected, in this specific case in the San Francisco Transamerica Tower, and, as you can see, there are very clear recordings of the event at many different locations and levels across the tower.
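As a rough plausibility check on the slip-deficit reasoning above, here is a minimal back-of-the-envelope sketch in Python. The rigidity and patch dimensions are assumptions chosen only for illustration; they are not values from the talk.

# Rough check (assumed values, not from the talk): is a ~20 cm slip deficit
# (the ~80 cm vs. ~100 cm difference mentioned above) over a modest patch
# roughly consistent with a magnitude ~5 earthquake?
import math

mu = 3.0e10        # assumed shear modulus, Pa
length = 5.0e3     # assumed rupture length, m
width = 5.0e3      # assumed rupture width, m
slip = 0.20        # slip deficit, m (~100 cm minus ~80 cm)

M0 = mu * length * width * slip                 # seismic moment, N*m
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)       # Hanks & Kanamori moment magnitude
print(f"M0 = {M0:.2e} N*m  ->  Mw ~ {Mw:.1f}")  # ~5.4 with these assumptions, in the
                                                # same range as the observed M5.1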
00:11:25.000 --> 00:11:34.000 So now we'll move to the second part of the presentation, which is more about the response of ShakeAlert 00:11:34.000 --> 00:11:46.000 and the Earthquake Early Warning System. So you can see that the ShakeAlert location, in green on the map on the right, is very close to the final CISN/ 00:11:46.000 --> 00:12:02.000 ANSS location in yellow, and, in fact, the magnitude was also very close, because after 5 seconds you can see that the magnitude was estimated to be around 4.8, 00:12:02.000 --> 00:12:25.000 but, in fact, about 3.5 seconds later it was revised to 5.1, which means that within 8 seconds the final magnitude was reached, and this final magnitude estimated by ShakeAlert also perfectly matched the CISN magnitude that was reviewed 00:12:25.000 --> 00:12:29.000 later on and was also estimated to be 5.1. 00:12:29.000 --> 00:12:35.000 So ShakeAlert really performed very well for this event. 00:12:35.000 --> 00:12:45.000 So there is not much to highlight, except the fact that ShakeAlert had already performed very well in the past for more than 60 events. 00:12:45.000 --> 00:12:55.000 But what made this event maybe more special is that it occurred close to a very, very densely populated area. 00:12:55.000 --> 00:13:03.000 So millions of people received an alert, and many of them shared their experience and feedback 00:13:03.000 --> 00:13:11.000 online, and as you can see on the map, almost the entire Bay Area was alerted because it was inside the MMI 3 area. 00:13:11.000 --> 00:13:19.000 And so those are just examples of some tweets that were collected by our USGS colleagues. 00:13:19.000 --> 00:13:22.000 We also collected many more that are very interesting. 00:13:22.000 --> 00:13:32.000 For example, the last one, I really liked it, where the person says that getting an alert on your phone right before an earthquake is one of the most futuristic experiences they've ever had. 00:13:32.000 --> 00:13:45.000 So I think that shows that there really was a significant impact on the people who received those alerts. 00:13:45.000 --> 00:13:54.000 We also received, of course, lots and lots of media requests for this event as well. 00:13:54.000 --> 00:14:01.000 And so it was really a great advertising opportunity for ShakeAlert and Earthquake 00:14:01.000 --> 00:14:13.000 Early Warning in general. So as most of you might know, the official app for delivering alerts in California, Washington, and Oregon is MyShake. 00:14:13.000 --> 00:14:32.000 So, in line with what I just said, in the months following the event the app was downloaded more than 300,000 times, and this was, in fact, the largest increase in terms of downloads since the launch of the app. In addition, about two months later we got another event up north, the 00:14:32.000 --> 00:14:50.000 Ferndale magnitude 6.4 earthquake on December 22nd, and due to the high magnitude of this event a part of the Bay Area was also alerted, mainly the northern part, and so again, after that event, I think over the last month we got 00:14:50.000 --> 00:15:02.000 more than 100,000 additional downloads. So, thanks to, or because of, I don't know, both of these events, we had more than 400,000 00:15:02.000 --> 00:15:14.000 users over the last 3 months, which is great news for the Earthquake Early Warning project, and also for public safety in general. 00:15:14.000 --> 00:15:20.000 So yes, this event was a real success, I think, in terms of alerting.
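To make the timing concrete, here is a minimal sketch (with assumed values, not ShakeAlert's actual latencies) of the warning-time budget: the S-wave travel time to a user minus the time needed to issue and deliver the alert. It uses the roughly 5 seconds to the first magnitude estimate mentioned above and an assumed phone delivery delay.

# Minimal warning-time sketch. The S-wave speed and delivery delay are assumptions
# for illustration only; the ~5 s alert-issue time is taken from the talk.
vs_km_s = 3.5            # assumed S-wave speed, km/s
alert_issue_s = 5.0      # time from origin to first alert (per the talk)
delivery_s = 2.0         # assumed app/phone delivery delay, s

for dist_km in (5, 15, 30, 60, 100):
    s_arrival = dist_km / vs_km_s                       # S-wave arrival time, s
    margin = s_arrival - (alert_issue_s + delivery_s)   # positive = warned before shaking
    note = "alert arrives first" if margin > 0 else "shaking arrives first"
    print(f"{dist_km:>3} km: S-wave at {s_arrival:5.1f} s, margin {margin:+5.1f} s ({note})")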
00:15:20.000 --> 00:15:21.000 And at the same time, it was not a damaging earthquake. 00:15:21.000 --> 00:15:31.000 Because, of course, it could have been a different story if this event had been a repeat of the 1911 and 1984 00:15:31.000 --> 00:15:49.000 earthquakes, with a magnitude around 6.2. It was said after that event that, in fact, those kinds of earthquakes, like the 2022 Alum Rock event, are somehow good earthquakes for Earthquake Early Warning purposes, because they are large enough 00:15:49.000 --> 00:15:55.000 to raise attention, but small enough not to produce significant damage. 00:15:55.000 --> 00:16:05.000 This slide is just about the timeline, about how long it took for MyShake to deliver alerts to the different users. 00:16:05.000 --> 00:16:12.000 So you can see that 1 second after the ShakeAlert alert was issued, 00:16:12.000 --> 00:16:13.000 already 20% of the MyShake users had received the alert. 00:16:13.000 --> 00:16:18.000 And then 1 second after that it reached 50% of the users, and 3 seconds... 2 seconds after that 00:16:18.000 --> 00:16:40.000 it reached 80% of the users. So within 8-9 seconds after the event, about 80% of the users had already received the alert on their mobile device. 00:16:40.000 --> 00:17:05.000 And if we look at those timelines for a subset of individual devices, in this figure the horizontal black line in the middle represents when the S-wave arrives at the user location, and the users in purple are those who did not receive the alert before they felt the S-wave, 00:17:05.000 --> 00:17:06.000 and you can see that most of them are very close to the epicenter. 00:17:06.000 --> 00:17:14.000 So the shaking reached them before they could be alerted. 00:17:14.000 --> 00:17:34.000 However, you can see that about 67% of the users did receive the alert before they felt the S-wave, and an interesting feature that can be observed in this figure is the different sizes of the alerted areas, because, as we saw, initially the magnitude was 00:17:34.000 --> 00:17:46.000 estimated after 5 seconds to be 4.8, so the area that was alerted extended up to 60 kilometers, and then it was revised a few seconds later. 00:17:46.000 --> 00:18:02.000 The magnitude was revised to 5.1, so the area was extended to 100 kilometers, and this is why you see the second patch of users that were alerted right after, a little bit further away from the epicenter. 00:18:02.000 --> 00:18:04.000 So this was my last slide. And yeah, thank you for your attention, and I'm looking forward to questions 00:18:04.000 --> 00:18:18.000 and discussion following this session. Thank you. 00:18:18.000 --> 00:18:22.000 Well, thank you very much, super Julien Marty [giggle]. 00:18:22.000 --> 00:18:27.000 Oh, I think we're going to move on to the next talk, so I'll leave that to Vicki. 00:18:27.000 --> 00:18:32.000 Yeah. So the next talk is by Rob Skoumal of the USGS, who's going to talk about using machine learning techniques to improve focal mechanisms. 00:18:32.000 --> 00:18:42.000 So roll it. 00:18:42.000 --> 00:18:47.000 Hello, everyone! My name is Rob Skoumal. I'm a research geophysicist at the USGS 00:18:47.000 --> 00:18:54.000 here at Moffett Field. Today I would like to talk about some of our efforts to improve focal mechanisms using machine learning techniques. 00:18:54.000 --> 00:19:02.000 So this work was done in collaboration with Jeanne Hardebeck and Dave Shelly, and certainly wouldn't have been possible without their contributions.
00:19:02.000 --> 00:19:15.000 So today I'll primarily be talking about our work in the southeastern San Francisco Bay Area, focusing on the major strike-slip faults within our study area. Shown here on the right, we considered over 30,000 relocated earthquakes that were cataloged over 00:19:15.000 --> 00:19:28.000 the past four decades. This catalog has a magnitude of completeness somewhere in the lower magnitude 1 range. Since 1980 there have been seven earthquakes larger than magnitude 5 in this area, including the recent magnitude 5.1 Alum Rock earthquake that was just talked 00:19:28.000 --> 00:19:41.000 about in the previous presentation. So our ultimate goal here is to improve our ability to determine mechanisms for these earthquakes, particularly for smaller magnitude or otherwise poorly recorded events. 00:19:41.000 --> 00:19:55.000 So in this area we primarily expect to see these northwest-southeast trending right-lateral strike-slip faults; the major exception is earthquakes along the Mount Lewis fault zone, highlighted there in purple. Debbie Kilb's [?] previous work demonstrated that 00:19:55.000 --> 00:20:04.000 this broad north-south oriented right-lateral structure of the Mount Lewis trend also consists of numerous left-lateral, east-west oriented faults as well. 00:20:04.000 --> 00:20:18.000 So because the mechanisms given by these two different fault orientations are essentially coincident, determining which nodal plane represents the true fault plane in this area can be a challenge, so we thought it would be a good test of our method here. 00:20:18.000 --> 00:20:19.000 So this presentation will be focused on our efforts to improve the P- 00:20:19.000 --> 00:20:26.000 wave first motion polarities that we will then use to calculate our focal mechanisms. 00:20:26.000 --> 00:20:34.000 So imagine this cartoon map here. Alright, we have an earthquake in the middle, and it's surrounded by a ring of seismometers. In a perfect world, 00:20:34.000 --> 00:20:39.000 we would have confident first motion polarity measurements for all stations. 00:20:39.000 --> 00:20:43.000 Now using these polarities, let's say we use them to calculate 00:20:43.000 --> 00:20:58.000 this particular mechanism. Now, in reality not all stations will have polarity measurements; missing polarity measurements could be due to a wide variety of causes. We're looking at 40 years of data here, and stations have come and gone during this time. Perhaps stations were deployed after this particular 00:20:58.000 --> 00:21:05.000 earthquake occurred; those stations didn't record the earthquake, so it would be impossible to measure the polarities on those stations. 00:21:05.000 --> 00:21:19.000 Also, perhaps the signal-to-noise ratio was too low at a given station, and so a confident measurement could not be made. There are plenty of reasons why we have missing measurements, but a goal of this work is to see if we can make educated 00:21:19.000 --> 00:21:26.000 guesses for those missing polarities, such that we could produce a mechanism as if we had actual polarity measurements. 00:21:26.000 --> 00:21:29.000 Now this might sound like a pretty large challenge, so I'd like to walk through 00:21:29.000 --> 00:21:35.000 how we've approached this problem. So ultimately, what we're doing is imputation, right? 00:21:35.000 --> 00:21:53.000 So imputation is just a fancy word which means filling in missing data.
If you've ever done a crossword puzzle, a sudoku, or a Wordle, you've done imputation before; in all of those puzzles you made some hopefully informed guess about what letter or number goes in 00:21:53.000 --> 00:21:56.000 those squares. So inspired by those puzzles, 00:21:56.000 --> 00:21:57.000 I've created a puzzle of my own here. 00:21:57.000 --> 00:22:03.000 There are both red and blue squares, with an unknown square colored in black. 00:22:03.000 --> 00:22:08.000 So, do you think this unknown square should be red or blue? 00:22:08.000 --> 00:22:12.000 I think nearly all of us would say blue, and I would agree. 00:22:12.000 --> 00:22:22.000 But why did we think it should be blue? You might respond with something along the lines of, well, because there's this alternating pattern of the rows, to which I'll respond, 00:22:22.000 --> 00:22:30.000 how did you know there was that alternating pattern? There's something happening in our brains that's allowing us to recognize these patterns. 00:22:30.000 --> 00:22:35.000 But it can be difficult to describe what our brain is actually doing. 00:22:35.000 --> 00:22:46.000 So that was a rather simple example. If we had any kind of complexity like what's shown here, it becomes even more difficult to describe why we think a square should be a given color. 00:22:46.000 --> 00:22:51.000 But if I gave everyone a minute I think most of us would come to some solution similar to this. 00:22:51.000 --> 00:23:05.000 While slightly more difficult, our brains are still very good at this scale of problem, but we need to understand what our brain is actually doing, because we wish to instruct a computer to try to replicate our thought process. 00:23:05.000 --> 00:23:10.000 This is because we wish to tackle problems that are much, much larger, with, say, millions of squares. 00:23:10.000 --> 00:23:14.000 And the problem we have has a more complex set of rules. 00:23:14.000 --> 00:23:18.000 Perhaps, you know, columns have different levels of importance and different weighting. 00:23:18.000 --> 00:23:22.000 And that's just not a task that our brains are set up to handle. 00:23:22.000 --> 00:23:26.000 But a computer would be ideally suited for this kind of task. 00:23:26.000 --> 00:23:27.000 To solve those puzzles, what we were ultimately doing is forming decision 00:23:27.000 --> 00:23:35.000 trees. We form decision trees every day of our lives, even if we're not aware of it. 00:23:35.000 --> 00:23:39.000 So take the simple question: should I bring an umbrella when I leave the house? For those of us in California this past month, 00:23:39.000 --> 00:23:50.000 perhaps a better question would be should I bring a boat. But, regardless, to answer this question we could consider different features about this problem. 00:23:50.000 --> 00:24:00.000 We could check the weather forecast, or look outside at the sky, or consider how long we'd be outdoors. Depending on which features we select and how we order and give importance to those different features, 00:24:00.000 --> 00:24:07.000 our answers will vary. Now an important question is, how do we determine which features to consider? 00:24:07.000 --> 00:24:17.000 For example, the question "is today Wednesday?" is a totally valid feature, but with my limited knowledge of meteorology, I don't think the day of the week influences the weather. 00:24:17.000 --> 00:24:29.000 So here, in this application, we approach this problem using bootstrap aggregation, more commonly referred to as "bagging."
You can imagine that we randomly select a feature out of a hat and we evaluate that feature. 00:24:29.000 --> 00:24:30.000 Then we pull the next feature out of the hat and evaluate that. 00:24:30.000 --> 00:24:49.000 We repeat that process a bunch of times, so it's possible that in this example of umbrellas we could end up evaluating the day of the week as a part of that bagging process. So to address that problem, we consider a whole bunch of different bagged decision trees, and altogether these 00:24:49.000 --> 00:24:52.000 form what's called a random forest. An analogy for how this works is the saying "wisdom of the crowds." 00:24:52.000 --> 00:25:02.000 If I were to ask Sarah Minson how much an ox weighs or how many gumballs are in this container, 00:25:02.000 --> 00:25:18.000 she may or may not give an accurate answer. Sarah is a brilliant seismologist, but I don't know how much of that knowledge translates to gumballs. A better approach would be to ask everyone at this workshop those questions, as the mean of our responses is likely more accurate 00:25:18.000 --> 00:25:29.000 than a randomly selected individual's. So the takeaway from this analogy is that the consensus of the random forest is more dependable than the results from any individual decision 00:25:29.000 --> 00:25:31.000 tree. 00:25:31.000 --> 00:25:35.000 Now let's incorporate our polarity measurements into this concept. 00:25:35.000 --> 00:25:43.000 So we took our data set of manually identified first motion polarities and we selected different stations to serve as our features. 00:25:43.000 --> 00:25:49.000 We then grew a decision tree by considering the manual polarities recorded at those given stations. 00:25:49.000 --> 00:25:53.000 We repeated this process a bunch of times, forming a bunch of different decision 00:25:53.000 --> 00:26:08.000 trees. At this point each tree could then provide a guess about the value for a missing polarity, but instead of asking questions about the weather, like in that umbrella example, we evaluate whether the polarity at a given station is positive or negative. So let's say for a particular earthquake 00:26:08.000 --> 00:26:14.000 station A here is negative, so we go to the next node, and let's say station B is positive; based on this example 00:26:14.000 --> 00:26:20.000 cartoon decision tree, it would conclude that because station A is negative and station B is positive, 00:26:20.000 --> 00:26:35.000 the unknown polarity is also positive. So our random forest here is a perfect little democracy, with each decision tree casting its vote for whether the missing polarity should be positive or negative, and the side with the most votes wins. 00:26:35.000 --> 00:26:39.000 So this is an overly simplified cartoon overview of our random forest. 00:26:39.000 --> 00:26:47.000 I don't have time to get into all the details, but I hope this is able to give you some general understanding of what's actually going on under the hood. 00:26:47.000 --> 00:26:53.000 Now a really important component of imputation is estimating the accuracy of our guesses. 00:26:53.000 --> 00:27:00.000 We use a few different approaches for estimating this accuracy; I'm just going to briefly touch on the least statistically robust, but also the easiest to explain, methodology that we used. 00:27:00.000 --> 00:27:08.000 I'm more than happy to talk about the other methods that we used later.
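As a rough illustration of the voting idea described above (not the authors' implementation), here is a minimal Python sketch using scikit-learn's RandomForestClassifier, with made-up polarities (+1 = up, -1 = down) at three stations used to impute the missing polarity at a target station.

# Minimal sketch of random-forest polarity imputation with invented data; the
# station set, values, and model settings are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Rows = earthquakes where the target station's polarity was actually measured;
# columns = polarities observed at stations A, B, C for those earthquakes.
X_train = np.array([[ 1,  1, -1],
                    [ 1, -1, -1],
                    [-1,  1,  1],
                    [-1, -1,  1],
                    [ 1,  1, -1],
                    [-1,  1,  1]])
y_train = np.array([1, -1, -1, 1, 1, 1])    # measured polarities at the target station

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# An earthquake where A, B, C were measured but the target station was not:
x_missing = np.array([[1, 1, -1]])
print("imputed polarity at the target station:", forest.predict(x_missing)[0])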
00:27:08.000 --> 00:27:16.000 So for the simple approach, we can take our data set of real polarity measurements and omit a certain percentage of those real polarity measurements. 00:27:16.000 --> 00:27:28.000 We then build our random forest using this partial data set, and we finally run the random forest and evaluate how many of the omitted polarities were correctly imputed. So it's fairly straightforward, 00:27:28.000 --> 00:27:43.000 but I find it quite helpful. We've used this simple approach to inform our decisions when selecting the confidence thresholds in our polarity data set: if we only consider real polarity measurements with a high dependability, we get higher imputation accuracy because our measurements are higher 00:27:43.000 --> 00:27:49.000 quality, but, as a consequence, we're only able to consider the fewer earthquakes with those high quality measurements 00:27:49.000 --> 00:27:58.000 as a result. On the other hand, the lower we set the confidence threshold, the larger the number of earthquakes we get to consider, but the imputation accuracy is also lower. 00:27:58.000 --> 00:28:07.000 So this is always a balance, but we try to select thresholds that produce imputation accuracies similar to the accuracy of manual polarity measurements. 00:28:07.000 --> 00:28:22.000 Let's now take a look at some actual results. So here's an earthquake that occurred along the Mount Lewis trend, and the stations with real, manually selected polarity measurements are represented with these up and down oriented triangles. So, given my background dealing 00:28:22.000 --> 00:28:38.000 with very limited data in a lot of cases, from my perspective I would call this focal mechanism very well constrained; lots of manually selected polarities from different stations went into the solution, and most of the polarities look pretty good. But I selected this example to hopefully show 00:28:38.000 --> 00:28:44.000 that even for a well-constrained mechanism, imputation will still be useful. 00:28:44.000 --> 00:28:55.000 So on the right here, the darker red and blue triangles represent the imputed values. Again, this earthquake occurred in the 1980s, and many of these imputed stations 00:28:55.000 --> 00:29:00.000 didn't even exist yet, right? So it'd be impossible to make measurements at those stations. 00:29:00.000 --> 00:29:17.000 But with imputation you can still make some reasonable educated guesses. And as a result of having these imputed guesses, the nodal plane uncertainties have decreased, and the fault plane solution is more in line with what we would expect for this given earthquake along the Mount Lewis trend. 00:29:17.000 --> 00:29:22.000 We can also start to look at the focal mechanism solutions throughout our study area. 00:29:22.000 --> 00:29:33.000 So here are ternary diagrams of the faulting types for the 'A' quality mechanisms that we produced. The figure on the left shows the mechanisms produced using the manually selected polarities from the NCEDC. 00:29:33.000 --> 00:29:41.000 It shows, you know, what we expect: that this area is dominated by strike-slip faulting. On the right are the mechanisms produced using imputation. 00:29:41.000 --> 00:29:46.000 We see an even tighter concentration of these strike-slip solutions. 00:29:46.000 --> 00:29:51.000 An important note is that the color scale differs by an order of magnitude, right.
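Here is a compact sketch of the accuracy check just described: hide a fraction of the real measurements, impute them, and score the result. The data are synthetic (the target polarity is a simple function of two stations), so the printed number only illustrates the bookkeeping, not real performance.

# Omit-and-score accuracy estimate, with synthetic data standing in for real
# polarity measurements; not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
A = rng.choice([-1, 1], size=2000)
B = rng.choice([-1, 1], size=2000)
target = np.where(A * B > 0, 1, -1)       # "true" polarities at the station of interest
X = np.column_stack([A, B])

omitted = rng.random(2000) < 0.2          # pretend 20% of the real measurements are missing
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X[~omitted], target[~omitted]) # build the forest from the kept measurements

recovered = forest.predict(X[omitted])    # impute the omitted ones
print("fraction of omitted polarities correctly imputed:",
      (recovered == target[omitted]).mean())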
00:29:51.000 --> 00:29:59.000 So imputation greatly increased the number of focal mechanisms that we had in the southeastern San Francisco Bay Area. 00:29:59.000 --> 00:30:05.000 Perhaps an easier way to view the number of well-constrained mechanisms is with this sort of plot. 00:30:05.000 --> 00:30:16.000 So again, we have this magnitude of completeness somewhere in the low magnitude 1's, shown here by this purple vertical line. The black curve shows the number of earthquakes in our catalog above a given magnitude. 00:30:16.000 --> 00:30:20.000 The red line below it shows the number of earthquakes for which we've produced either an A or B quality 00:30:20.000 --> 00:30:40.000 mechanism. So we conclude that we produced a well-constrained mechanism for almost all earthquakes above the completeness, and we do a much better job at characterizing the smaller magnitude earthquakes than with just the manually selected polarities. 00:30:40.000 --> 00:30:43.000 We then took those mechanisms and did a stress inversion. 00:30:43.000 --> 00:30:50.000 So the pink lines are SH-max measurements from Provost and Houston (2003), who did focal mechanism stress inversions throughout central California. Our SH-max 00:30:50.000 --> 00:31:01.000 results are shown with the blue lines. Our solutions are in great agreement with these previous measurements, but largely because we have an order of magnitude more focal mechanisms, 00:31:01.000 --> 00:31:06.000 we can characterize the stresses at a finer spatial and temporal resolution. 00:31:06.000 --> 00:31:19.000 We then looked at the deviation of our SH-max directions and compared them with the strikes of the major faults in our study area. The interpreted faults along the Mount Lewis fault zone that hosted the magnitude 5.7 earthquake are well-oriented for 00:31:19.000 --> 00:31:20.000 failure in a traditional sense, approximately 30 degrees away from SH-max. 00:31:20.000 --> 00:31:24.000 However, the major seismogenic faults in this area are poorly oriented to 00:31:24.000 --> 00:31:33.000 SH-max, ranging from about 50 to 70 degrees away from SH-max. 00:31:33.000 --> 00:31:43.000 Once again, this result is consistent with previous work. Faults have been demonstrated to be poorly oriented in this area, but they become better aligned further to the north and towards the south. 00:31:43.000 --> 00:31:50.000 The advantage here is that we can use these measurements and make interpretations on a finer scale. 00:31:50.000 --> 00:31:56.000 So in conclusion, we've found that imputation can aid our ability to constrain focal mechanisms. 00:31:56.000 --> 00:32:02.000 We produced a six-fold increase in the number of well-constrained mechanisms in the southeastern San Francisco Bay Area 00:32:02.000 --> 00:32:08.000 over the past 40 years. The mechanisms and stress inversions were all consistent with previous work; 00:32:08.000 --> 00:32:12.000 we're just able to analyze things at a much finer resolution. 00:32:12.000 --> 00:32:18.000 So this technique here is ideally suited for long duration catalogs with varying stations coming online and going away, 00:32:18.000 --> 00:32:21.000 and for improving the characterization of small magnitude 00:32:21.000 --> 00:32:27.000 earthquakes. One thing I didn't have time to touch on is using this sort of approach for earthquake response. 00:32:27.000 --> 00:32:33.000 So, following a large magnitude earthquake, we'll commonly go out and deploy additional seismometers near the epicenter.
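As a worked check on why roughly 30 degrees from SH-max counts as well oriented in the classical Coulomb sense, here is a one-line calculation; the friction coefficient is an assumed value, not one given in the talk.

# The optimal angle between a fault plane and the maximum compressive stress is
# theta = 0.5 * arctan(1 / mu) in simple Coulomb theory; mu = 0.6 is assumed here.
import math

mu = 0.6
theta_opt = 0.5 * math.degrees(math.atan(1.0 / mu))
print(f"optimal fault-to-SH-max angle ~ {theta_opt:.0f} degrees")
# ~30 degrees, so faults 50-70 degrees from SH-max, like the major faults
# discussed above, are comparatively poorly oriented.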
00:32:33.000 --> 00:32:45.000 Previously those newly deployed stations could only be used to study earthquakes that occurred after they were installed. With imputation, provided those newly deployed stations record enough earthquakes to train our models, 00:32:45.000 --> 00:32:51.000 we can sort of go back in time and study earthquakes that occurred before the stations were deployed. 00:32:51.000 --> 00:33:01.000 So I've talked primarily about focal mechanisms here, but I'm willing to bet that everyone in this workshop has had to deal with missing or low confidence measurements at some point in their career. 00:33:01.000 --> 00:33:08.000 I think imputation and the approach I've described here have tremendous potential throughout the entire field of seismology. 00:33:08.000 --> 00:33:14.000 So with that, thank you very much, and I look forward to our discussion. 00:33:14.000 --> 00:33:22.000 That was great. Thanks, Rob, and I think Heidi now will introduce the next talk. 00:33:22.000 --> 00:33:26.000 That's right. So Heather Shaddox is up next from 00:33:26.000 --> 00:33:33.000 UC Berkeley, talking about creepmeter data on the San Andreas and Calaveras faults. 00:33:33.000 --> 00:33:42.000 Hello! I'm Heather. I'm an NSF postdoc with Roland Burgmann at the Berkeley Seismology Lab, and today I'm talking about creepmeter data on the San Andreas and Calaveras faults, specifically in central California 00:33:42.000 --> 00:33:54.000 near San Juan Bautista, mostly on the San Andreas but a bit on the Calaveras, and this work was done in close collaboration with Roger Bilham to install new creepmeters and analyze the data. Now, in terms of the bigger picture, we're really interested in 00:33:54.000 --> 00:34:01.000 the relationship between seismic and aseismic slip and the locking transition of the San Andreas fault near San Juan Bautista. 00:34:01.000 --> 00:34:04.000 And one aspect of that is the relationship between deep and shallow aseismic slip. 00:34:04.000 --> 00:34:10.000 And to get at this, we're combining seismic and geodetic information, and this area is very well instrumented. 00:34:10.000 --> 00:34:14.000 However, in terms of creepmeters, which are shown as these pink diamonds, 00:34:14.000 --> 00:34:19.000 well, these were the ones that were operating when I started my postdoc in October of 2021. 00:34:19.000 --> 00:34:23.000 And so there are definitely significant data gaps, here and here. [pointing on map] 00:34:23.000 --> 00:34:24.000 And so we're interested in shallow creep. 00:34:24.000 --> 00:34:35.000 We wanted more creepmeters, and so we worked with Roger and put in a significant effort to install more creepmeters right in these data gaps. 00:34:35.000 --> 00:34:38.000 So the new creepmeters that we've installed are shown as these green diamonds. 00:34:38.000 --> 00:34:39.000 We also put one in to the north on the Calaveras fault, which I'll show later. 00:34:39.000 --> 00:34:46.000 So, yeah, we talked to a lot of landowners for permission; 00:34:46.000 --> 00:34:53.000 that was the biggest hurdle, right? But we have installed some creepmeters, and so here's a schematic of a creepmeter.
00:34:53.000 --> 00:35:02.000 This is Roger's instrument, and essentially you bury a carbon fiber rod across the fault to a [indiscernible] 00:35:02.000 --> 00:35:18.000 at an angle of about 30 degrees, and both ends are anchored in, and when you have, in this case, right-lateral motion on the fault, that registers as extension on the instrument, which you can then convert to fault slip. And these instruments are very sensitive; you can 00:35:18.000 --> 00:35:29.000 record on the order of, you know, the micrometer scale. I want to give a special thank you and shout out to my victims, I mean volunteers, who helped us install these by hand. That includes Ricky Garza-Girón, who is 00:35:29.000 --> 00:35:31.000 a postdoc at Colorado State University, 00:35:31.000 --> 00:35:32.000 Yuankun Xu, who's also a postdoc with Roland, and Litong Huang, who's a grad student at UCSC. 00:35:32.000 --> 00:35:40.000 So thank you guys so much. This is a lot of work, 00:35:40.000 --> 00:35:47.000 and you made it really fun. Alright, so we can jump right into the different creepmeter sites and the data. 00:35:47.000 --> 00:35:49.000 Now, this is still really preliminary, it's new data, 00:35:49.000 --> 00:35:52.000 but I want to run through each site and just show some interesting things. 00:35:52.000 --> 00:35:56.000 So first is Fox Creek, where we actually have a pair, or rather three, creepmeters. 00:35:56.000 --> 00:36:04.000 So here's a drone picture of the site. Here's the ranch. Here's an offset fence that we're able to use to map the fault really 00:36:04.000 --> 00:36:08.000 well, and here's where we put one of the creepmeters, or two of the creepmeters. 00:36:08.000 --> 00:36:11.000 If we zoom in, here's that offset fence. 00:36:11.000 --> 00:36:15.000 Here's an oblique creepmeter that we put in, FCR-1a, 00:36:15.000 --> 00:36:21.000 and at the same time we put in an orthogonal creepmeter to measure opening and closing of the fault, and we have a field picture of that here. 00:36:21.000 --> 00:36:22.000 So here's the offset fence. Here's the fault 00:36:22.000 --> 00:36:27.000 coming in. Here's the orthogonal creepmeter and the oblique one, and the owner was great. 00:36:27.000 --> 00:36:31.000 He said we could actually put in another creepmeter if we wanted to, and so we did. 00:36:31.000 --> 00:36:33.000 A few months later we went back and installed this one 130 meters to the north. 00:36:33.000 --> 00:36:41.000 And so we actually have a pair of creepmeters with which we can try and look at propagation of creep events. 00:36:41.000 --> 00:36:54.000 And here's the data. So in black is the oblique creepmeter in the south, which we installed in June of 2022. In brown is the orthogonal creepmeter that's co-located with it, and its signal is blown up 00:36:54.000 --> 00:37:07.000 three times so you can really see the signals. In pink is the northern creepmeter that we put in in September of 2022, and then at the end of November we put in a rain gauge at the southern site, and so in blue this is just 00:37:07.000 --> 00:37:21.000 plotting cumulative rain, and you can really see the large rain event we just had; we put that in to try and really quantify effects on the creepmeters from rain. And gray is just the local seismicity rate. 00:37:21.000 --> 00:37:35.000 And we've had three large events since we first put in the southern creepmeter, and one since we've had both installed, and we're hoping to have another big creep event soon.
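For reference, here is a minimal sketch of how the measured rod extension converts to fault-parallel slip for an oblique creepmeter installed at about 30 degrees to the fault, as described above. It assumes pure right-lateral strike-slip with no fault-normal motion; the example extension value is hypothetical.

# Convert oblique-creepmeter extension to fault-parallel slip, assuming pure
# strike-slip motion; the extension value below is made up for illustration.
import math

angle_deg = 30.0          # rod angle relative to the fault trace (from the talk)
extension_mm = 0.10       # hypothetical measured extension of the rod, mm

slip_mm = extension_mm / math.cos(math.radians(angle_deg))
print(f"{extension_mm} mm extension -> ~{slip_mm:.2f} mm of right-lateral slip")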
The first thing I want to talk about is the smaller creep events that we saw 00:37:35.000 --> 00:37:38.000 across both creepmeters, 00:37:38.000 --> 00:37:50.000 as well as other interesting observations. So first, I want to point out that the October 2022 magnitude 5.1 Alum Rock earthquake triggered slip on both creepmeters at the same time, as shown here. The 00:37:50.000 --> 00:37:55.000 triggered slip was about a tenth of a millimeter, and the fact that this happened at the same amplitude and time 00:37:55.000 --> 00:37:59.000 really builds confidence that the creepmeters are synced in time and operating correctly, 00:37:59.000 --> 00:38:05.000 and I think we can then push the observations of differences between the creepmeters a little farther. 00:38:05.000 --> 00:38:20.000 I also want to point out that the temperature at the northern site, shown as the red dashed line, lagged 4 hours behind the temperature in the south, likely because the northern site is located under all these trees and in the shade. There's also a clear diurnal signal on the 00:38:20.000 --> 00:38:26.000 orthogonal creepmeter, and the peak dilation, which is a positive signal 00:38:26.000 --> 00:38:31.000 here, is correlated with the maximum rate of temperature change. 00:38:31.000 --> 00:38:36.000 And we see, at this short timescale, propagation between the creepmeters 00:38:36.000 --> 00:38:45.000 for several small events, which I'm going to zoom in on. So first, this figure is very similar to before; I've just added in local earthquakes plotted against their magnitude. 00:38:45.000 --> 00:39:00.000 But at this time here, if we zoom in even a little farther, there is a small creep event that starts in the north, and then, 20 minutes later, we see it in the south, both on the oblique creepmeter and on the orthogonal one, and based on their 00:39:00.000 --> 00:39:20.000 distance and the lag, this is a southward propagation of about 0.4 kilometers per hour. And then there are two other times right after it, or a few days after, where you have a tiny creep event starting in the south, and then we see it in the north. 00:39:20.000 --> 00:39:26.000 Now I want to look at a larger creep event and the difference between the northern and southern creepmeters. 00:39:26.000 --> 00:39:40.000 So at this time in November we have this large event, where it looks like possibly an onset around the same time, but with a really steep signal on the northern creepmeter and definitely less so on the southern. 00:39:40.000 --> 00:39:43.000 They definitely have a different shape to them, or it may also just be that the southern one lags behind 00:39:43.000 --> 00:39:53.000 and that this little initial step is something else. So it's definitely worth looking into, when we have subsequent events, 00:39:53.000 --> 00:40:05.000 whether this is consistently observed, whether there is a lag between them, or whether this is just a different shape. So far I have shown you a comparison between the creepmeters and how they're very consistent, 00:40:05.000 --> 00:40:11.000 but there are definitely times where there are signals on both creepmeters but in apparently opposite slip directions, which is very weird.
00:40:11.000 --> 00:40:20.000 And so here, in this case, you have some right-lateral motion on this southern creepmeter, and then you have a signal, 00:40:20.000 --> 00:40:27.000 but it is apparently left-lateral, and we don't have an orthogonal creepmeter in the north to kind of see what's going on with the fault 00:40:27.000 --> 00:40:34.000 there, but it's definitely a puzzling signal, and we see it a couple of times, and it's worth pursuing. 00:40:34.000 --> 00:40:40.000 Now I want to talk a bit more about the orthogonal creepmeter and things to be aware of, and also why I think it's so important. 00:40:40.000 --> 00:40:41.000 So here are power spectra of the temperature in the south 00:40:41.000 --> 00:40:57.000 in red, the oblique creepmeter in black, and the orthogonal creepmeter in brown, and they are all very similar: they have peak periods at 24 hours and 12 hours, as well as 8 and 6, and so we think this is, of 00:40:57.000 --> 00:41:07.000 course, temperature related. The peak dilation in the orthogonal creepmeter correlates with the maximum rate of temperature change, which I think makes sense, 00:41:07.000 --> 00:41:11.000 but it's good to be aware of this and correct for it too, 00:41:11.000 --> 00:41:14.000 so we can identify really tiny tectonic signals. 00:41:14.000 --> 00:41:18.000 And then rain is another big one, right? So here's a time: 00:41:18.000 --> 00:41:23.000 just a couple of weeks ago we had a significant rain event, which you can see in the cumulative rain in blue here, and at that time you have an apparent fault 00:41:23.000 --> 00:41:31.000 closure, or contraction, on the orthogonal creepmeter. 00:41:31.000 --> 00:41:33.000 At that same time you have apparent left-lateral motion on the oblique creepmeter, 00:41:33.000 --> 00:41:52.000 and we've seen this in other places too: when you have rain, you have an apparent reversal in motion. And so if anyone saw Roger's talk a few days ago, he actually goes into the potential reasons why in some detail. 00:41:52.000 --> 00:42:03.000 But here I just want to show the observation and talk about how we can maybe correct for it. Something else that is interesting is that on the orthogonal creepmeter, on this creep event and several others, 00:42:03.000 --> 00:42:09.000 you see, at the start of it, this little jog in the signal, which maybe could be used to differentiate tectonic versus other processes. 00:42:09.000 --> 00:42:24.000 Now once again, if you saw Roger's talk a few days ago, you would have seen this figure, and it's really just comparing the orthogonal slip to the dextral slip and trying to figure out, you know, how they're correlated and what that could mean, and in this case 00:42:24.000 --> 00:42:29.000 on the y-axis positive change is expansion and negative is contraction. 00:42:29.000 --> 00:42:44.000 So first off we have the three big creep events, and their timing is shown as gray dashed lines, and in general this is all colored according to time, and we see that each has kind of a similar signal, but leading up to them there's this time period where the fault is 00:42:44.000 --> 00:42:50.000 apparently closing, and there looks to be an apparent reversal in slip direction.
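To make the spectral observation above reproducible in spirit, here is a minimal sketch of computing a power spectrum of a creepmeter or temperature record and reading off the dominant periods. The sampling interval and the synthetic series are assumptions for illustration, not the actual data.

# Power spectrum of a synthetic record with diurnal harmonics, assuming
# 10-minute sampling; the real records and sampling are different.
import numpy as np

dt_hours = 1.0 / 6.0                        # assumed 10-minute sampling
t = np.arange(0, 60 * 24, dt_hours)         # 60 days of data, time in hours
rng = np.random.default_rng(0)
series = (0.001 * t                               # slow creep-like trend
          + 0.05 * np.sin(2 * np.pi * t / 24.0)   # 24-hour component
          + 0.02 * np.sin(2 * np.pi * t / 12.0)   # 12-hour component
          + 0.005 * rng.normal(size=t.size))      # noise

detrended = series - np.polyval(np.polyfit(t, series, 1), t)
power = np.abs(np.fft.rfft(detrended)) ** 2
freq = np.fft.rfftfreq(detrended.size, d=dt_hours)   # cycles per hour

top = np.argsort(power[1:])[::-1][:2] + 1            # two strongest non-zero-frequency peaks
print("dominant periods (hours):", np.round(1.0 / freq[top], 1))   # ~24 and ~12 here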
00:42:50.000 --> 00:43:01.000 So we see that at these two different times, and then, during this really large rain event, there appears to be fault closure and, at the same time, a significant reversal in slip direction, 00:43:01.000 --> 00:43:07.000 and I think because we have this data, we have the orthogonal creepmeter there, and we can relate it to opening and closing of the fault, 00:43:07.000 --> 00:43:14.000 we can actually likely correct for the signal during rain events, and possibly during other events. 00:43:14.000 --> 00:43:15.000 Okay. So now I want to talk a bit about the other creepmeters that we installed. 00:43:15.000 --> 00:43:31.000 There was this one, XSJ, historically; it stopped operating a few years ago after operating for a long time under what is normally a vineyard here, and we couldn't get permission from these owners, but we got permission from the landowner across the street to put in 00:43:31.000 --> 00:43:36.000 a creepmeter, and this is one of the only times we had a backhoe for the installation, which made it really, really fast. 00:43:36.000 --> 00:43:40.000 And then at St. Francis Retreat, 4 kilometers to the southeast, 00:43:40.000 --> 00:43:50.000 we got permission to install a creepmeter; at both of these sites we just put in one creepmeter, just one oblique one. At St. Francis we also installed a rain gauge, and the data from that is shown here in blue; this is cumulative rain 00:43:50.000 --> 00:43:52.000 at St. Francis, and in black is the oblique creepmeter signal, and in green is the data from XSJ. 00:43:52.000 --> 00:44:00.000 And I want to first point out that these dashed gray lines are the timing of local earthquakes that are above magnitude 00:44:00.000 --> 00:44:03.000 2.5, and just last week there were two back-to-back earthquakes a little above magnitude 3, 00:44:03.000 --> 00:44:12.000 just a couple [indiscernible] from my sites, and I was actually there downloading data. 00:44:12.000 --> 00:44:16.000 I was in my car, and it started shaking, and for some reason I thought I was being attacked by a bear, and I was like, oh, well, I'm gonna die. 00:44:16.000 --> 00:44:23.000 Then I quickly realized, no, this is an earthquake, and so then I got really excited, and I actually waited a couple days to download the data in case anything really interesting came out of it. 00:44:23.000 --> 00:44:24.000 It hasn't yet, but it was still pretty fun. 00:44:24.000 --> 00:44:30.000 The other thing I want to point out is that at St. Francis 00:44:30.000 --> 00:44:35.000 there's also a really clear signal when you have a rain event: 00:44:35.000 --> 00:44:40.000 you have an apparent reversal in fault motion. We don't have an orthogonal creepmeter there, 00:44:40.000 --> 00:44:44.000 but it would be really nice if we did, so we could really see if the fault is also closing. 00:44:44.000 --> 00:44:48.000 Maybe that's why we have an apparent reversal in slip direction. 00:44:48.000 --> 00:44:49.000 Now, I promised to talk about the Calaveras fault, so I will a little bit. 00:44:49.000 --> 00:44:55.000 We installed one creepmeter here on the Calaveras, and we did it 00:44:55.000 --> 00:45:11.000 a few days after the Alum Rock earthquake, and we were really hoping to capture maybe some long-term afterslip and some shallow creep from this event, and we installed it here at the Mendoza Ranch, which is a public park.
00:45:11.000 --> 00:45:13.000 And we were able to locate the fault because there's an alignment array 00:45:13.000 --> 00:45:17.000 that was operating, and we put the creepmeter just across it. 00:45:17.000 --> 00:45:21.000 And here's the data so far; you know, we have what look like creep events, 00:45:21.000 --> 00:45:26.000 but we definitely need a longer time series. This is us installing the creepmeter, and we were really happy, having a good time. 00:45:26.000 --> 00:45:32.000 But our good time has since been ruined, because here's the creepmeter 00:45:32.000 --> 00:45:38.000 now, or just a couple of days ago, when I went to download data, and it turns out this is in a sag pond, which we kind of knew. 00:45:38.000 --> 00:45:52.000 We thought it might be fine, but no, it was completely flooded, and so we're still in the process of trying to force an offload and download the data; hopefully we can recover some of the data between the end of November and now. Now I want to briefly give a kind of wish list 00:45:52.000 --> 00:45:56.000 of creepmeter sites that would also be really, really nice to have. 00:45:56.000 --> 00:45:57.000 The first one is at Thomas Road, where there was an alignment array. 00:45:57.000 --> 00:46:05.000 This would be a really great site. We haven't been able to get in contact with the owners. 00:46:05.000 --> 00:46:09.000 They always have their gates locked, so it's really hard to pull up and ask for permission. 00:46:09.000 --> 00:46:10.000 But we're working on it. Another creepmeter that was operating that isn't now is the site here, 00:46:10.000 --> 00:46:21.000 the Frank Lewis long site, and we could potentially revamp that one. It would be nice to have another creepmeter at Fox Creek, another orthogonal one, and then it would be nice to have one or two more at XSJ or at St. 00:46:21.000 --> 00:46:33.000 Francis. I think both owners are open to the idea of doing that, which is really great. 00:46:33.000 --> 00:46:34.000 Now I want to just briefly tie this back into the bigger picture. 00:46:34.000 --> 00:46:40.000 And so we've also been doing a large seismicity analysis here. 00:46:40.000 --> 00:46:43.000 And so this is a cross-section parallel to the San Andreas fault. 00:46:43.000 --> 00:46:46.000 Here's San Juan Bautista. Here's XSJ and St. 00:46:46.000 --> 00:46:47.000 Francis and Fox Creek, just for reference, and in this location 00:46:47.000 --> 00:46:55.000 here, we think that the fault is at least more locked than the fault around it. 00:46:55.000 --> 00:47:01.000 We think these areas are creeping more. And ultimately, we want to try and compare this deeper 00:47:01.000 --> 00:47:05.000 seismicity and what we think is creep to shallow signals. 00:47:05.000 --> 00:47:09.000 And so we're in the process of doing that, and hopefully, long term, we can. 00:47:09.000 --> 00:47:10.000 But I think to do that it's really important to, yeah, have these, 00:47:10.000 --> 00:47:24.000 have more of these creepmeters. So to conclude, I think sites with multiple creepmeters and types of observations are really valuable, and there are lots of informative signals to tease out. 00:47:24.000 --> 00:47:25.000 So I showed some preliminary observations, but I think we have a lot, and it's very interesting.
00:47:25.000 --> 00:47:30.000 And so I would say that we need more creepmeters, and I think there's a lot of momentum right now, especially near 00:47:30.000 --> 00:47:46.000 San Juan Bautista, after talking to multiple landowners, and they've been talking to some of their friends about it. I think there's a lot of momentum there that maybe we, as a community, should take advantage of. 00:47:46.000 --> 00:47:55.000 Great, thanks, Heather. That's very interesting, and we're looking forward to future results. 00:47:55.000 --> 00:47:57.000 So now we're going to hear from Alex Sarmiento and Steve Thompson about a probabilistic fault displacement 00:47:57.000 --> 00:48:10.000 hazard map of California: approach and first results. Take it away. 00:48:10.000 --> 00:48:23.000 Good morning. My name is Alex Sarmiento, and today I'll be sharing this talk with Steve Thompson to tell you about a new probabilistic fault displacement hazard map for the State of California. 00:48:23.000 --> 00:48:29.000 This work is part of the California Energy Commission's natural gas infrastructure safety and integrity project, and 00:48:29.000 --> 00:48:43.000 the key outcome of this project will be a statewide seismic risk analysis for gas systems, including the risk from fault displacement where distribution lines cross surface-rupturing faults. Our 00:48:43.000 --> 00:48:50.000 focus here today is on the hazard, but of course the hazard results are combined with fragility and vulnerability functions 00:48:50.000 --> 00:49:07.000 to describe the risk. As some of you are probably aware, a new set of fault displacement models has just been released. We'll be using those models in this project, so I'll briefly talk about those new models, and then I will pass it over to Steve for an overview of 00:49:07.000 --> 00:49:13.000 the seismic source characterization. 00:49:13.000 --> 00:49:31.000 The new models were developed through the Fault Displacement Hazard Initiative project, which is a community-based research program coordinated by the University of California, and it's modeled after the NGA programs, which a lot of you are probably familiar with. 00:49:31.000 --> 00:49:45.000 The key deliverables of this project are, first, the development of a new database with fault rupture maps and displacements from historical earthquakes, and then, second, a set of new fault displacement models. 00:49:45.000 --> 00:49:52.000 The project website is shown here on the right, but you can also just search for UCLA fault displacement, 00:49:52.000 --> 00:50:06.000 and this should be the first thing that pops up. Importantly, all of the data and reports from the FDHI project are available for free under the reports tab. 00:50:06.000 --> 00:50:14.000 I don't have time today to go over all of the details of the database, but I will try to give you a few of the highlights here. 00:50:14.000 --> 00:50:17.000 There are 75 earthquakes spanning magnitudes 00:50:17.000 --> 00:50:28.000 roughly 5 through 8 and all styles of faulting; there are over 40,000 displacement measurements, and there is geospatial control for everything. 00:50:28.000 --> 00:50:43.000 We also have an event-specific coordinate system that is really useful for constructing slip profiles, and all of the data are classified as principal or distributed faulting. 00:50:43.000 --> 00:50:54.000 We're wrapping up the first phase of model development with new models from four different teams, and all of the model developers used the FDHI database.
00:50:54.000 --> 00:50:59.000 And so these new models are based on a larger database than the previous models. 00:50:59.000 --> 00:51:04.000 Three of the new models predict what we call an aggregated displacement, 00:51:04.000 --> 00:51:05.000 and that's just the total discrete displacement 00:51:05.000 --> 00:51:22.000 summed across all of the multi-strand ruptures. Two of the new models also have a formulation to predict distributed displacements. 00:51:22.000 --> 00:51:31.000 Overall, there are some really big improvements in these new models relative to the existing models, which are now over 10 years old. 00:51:31.000 --> 00:51:50.000 For example, the magnitude scaling in most of the new models is bilinear or trilinear, and we have found that that does a better job capturing the magnitude dependence of the median predictions, which can be particularly important at the extremes, like the low magnitudes and 00:51:50.000 --> 00:51:54.000 the high magnitudes. 00:51:54.000 --> 00:52:12.000 Another key improvement is in the statistical modeling, where the aleatory variability is handled much more thoroughly than it was in earlier models, and the upshot of all of this is that we have better constraints on the tails of the distribution, and particularly the 00:52:12.000 --> 00:52:21.000 upper tail, which is important in both probabilistic and deterministic analysis. 00:52:21.000 --> 00:52:29.000 And lastly, I do want to mention that all of the new models include some form of modeling epistemic uncertainty as well. 00:52:29.000 --> 00:52:48.000 I know that was a whirlwind overview, but some of the model reports are already published on the UCLA fault displacement web page, and very soon all of the reports, including a report that compares all the models, will be posted there as well. So with that, 00:52:48.000 --> 00:52:54.000 I will go ahead and pass it over to Steve. 00:52:54.000 --> 00:52:59.000 Thanks, Alex. The next several slides will continue our introduction of the Fault 00:52:59.000 --> 00:53:03.000 Displacement Hazard Mapping project for the State of California. 00:53:03.000 --> 00:53:21.000 The goals and objectives of the project include showcasing PFDHA and the new fault displacement models generated under the Fault Displacement Hazard Initiative. We also want to develop a tool for understanding fault displacement hazard and risk, particularly to systems of distributed 00:53:21.000 --> 00:53:40.000 or linear infrastructure. And then, if we can interest or upset people just enough, the project can stimulate discussion about fault displacement hazard mapping and hazard mitigation. So this map is not intended to be used as a tool for zoning or planning, nor as a 00:53:40.000 --> 00:53:50.000 basis for engineering design, and certainly not to be a replacement for site-specific assessment. 00:53:50.000 --> 00:53:54.000 In putting together the source model for fault rupture hazard, 00:53:54.000 --> 00:53:59.000 we needed to rely on statewide community models and data sets. 00:53:59.000 --> 00:54:12.000 So for earthquake magnitudes and rates, we're using the UCERF3 rupture model, including the ruptures, the model's fault sources, and the fault polygons that are shown here in blue.
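As a rough illustration of the bilinear magnitude scaling described above, a toy median-displacement relation might look like the sketch below. The functional form, breakpoint, and all coefficients are invented for illustration only; they are not the FDHI model coefficients.

```python
import numpy as np

def log10_median_displacement(mag, m_break=7.0, slope_lo=0.8, slope_hi=0.3, intercept=-5.5):
    """Toy bilinear magnitude scaling for log10(median displacement, meters).

    Invented coefficients: below m_break the median grows quickly with magnitude,
    and above it the slope flattens, which is the qualitative behavior described in the talk.
    """
    mag = np.asarray(mag, dtype=float)
    below = intercept + slope_lo * mag
    above = intercept + slope_lo * m_break + slope_hi * (mag - m_break)
    return np.where(mag <= m_break, below, above)

if __name__ == "__main__":
    for m in (5.5, 6.5, 7.5):
        print(f"M{m}: ~{10 ** float(log10_median_displacement(m)):.2f} m median displacement")
```

A trilinear form of the kind mentioned in the talk would simply add a second breakpoint and slope.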
The Quaternary fault database is providing the 00:54:12.000 --> 00:54:27.000 detailed locations of surface fault rupture hazard, and those Quaternary fault traces that are contained within the UCERF3 fault polygons, and can therefore be tied to earthquake magnitudes and rates, become the subset of fault traces 00:54:27.000 --> 00:54:31.000 in the source model. 00:54:31.000 --> 00:54:47.000 I'll provide an example of the rupture source model using a portion of the Northern Calaveras fault, which is within the bounding box shown here. So 00:54:47.000 --> 00:54:52.000 this map is rotated about 30 degrees from north to align with the approximate direction of Pacific-Sierra Nevada motion. The Northern Calaveras 00:54:52.000 --> 00:54:54.000 fault branches from the central Calaveras- 00:54:54.000 --> 00:55:02.000 Hayward fault trend at or below the lower end of this map, near Calaveras Reservoir, and striking north-northwest, 00:55:02.000 --> 00:55:11.000 the Northern Calaveras fault bounds the margins of Sunol Valley in the south and San Ramon Valley farther north. 00:55:11.000 --> 00:55:24.000 As with most faults in the Bay region, the Northern Calaveras is crossed by important infrastructure, including natural gas transmission pipelines, which are the focus of our CEC project, and 00:55:24.000 --> 00:55:41.000 water conveyance systems, such as the South Bay Aqueduct, which transports water from the Delta to Santa Clara Valley. Not shown here, but also in the same vicinity, passes the Hetch Hetchy Aqueduct. 00:55:41.000 --> 00:55:47.000 Here we show the UCERF3 model overlain on the Quaternary fault traces. The fault 00:55:47.000 --> 00:55:55.000 sources include the higher slip-rate faults, such as the Hayward, Northern Calaveras, Greenville, and the Mount Diablo blind thrust. Mainly for tracking purposes, 00:55:55.000 --> 00:56:15.000 the UCERF3 program also generated these fault polygons, shown in red, that are associated with each of the rupture sources. These polygons help to define a three-dimensional volume containing faults that the program believed to be associated with the rupture 00:56:15.000 --> 00:56:24.000 sources. Similarly, we are using these polygons to group the Qfault traces with each of the seismic sources. 00:56:24.000 --> 00:56:41.000 Now, while the UCERF3 model includes some of the lower slip-rate faults in the area, such as the Las Positas and Silver Creek, the model is not exhaustive. Examples of Holocene or late Pleistocene faults not represented directly in the UCERF3 model include 00:56:41.000 --> 00:56:45.000 the Verona-Williams fault zone near the west end of the Las Positas 00:56:45.000 --> 00:56:50.000 fault, and the Pleasanton fault, which is located on the east side of San Ramon Valley. 00:56:50.000 --> 00:56:56.000 We'll discuss a little bit how we treat these faults in our model in a moment. 00:56:56.000 --> 00:56:59.000 The last feature of the UCERF3 model to note is the division of fault sources into these approximately 7-kilometer- 00:56:59.000 --> 00:57:07.000 long subsections or tiles. We're highlighting the polygon for subsection 00:57:07.000 --> 00:57:24.000 203 here. The earthquake rupture event sets provide most of the needed source information for each of these subsections to perform our PFDHA. 00:57:24.000 --> 00:57:29.000 The plots here show data from one of the available event sets in UCERF3. 00:57:29.000 --> 00:57:34.000 Each circle represents one of thousands of ruptures, each with a specific magnitude, 00:57:34.000 --> 00:57:37.000 recurrence rate, and rupture
location that passes through subsection 00:57:37.000 --> 00:57:52.000 203. The magnitude-rate relationship is contained in the plot on the left. The right-hand plot, which shows rupture length versus magnitude, is representative of the rupture location information 00:57:52.000 --> 00:58:07.000 we need. So for each rupture we can calculate, from the event set data, the relative position of a tile along the entire rupture length, and the normalized rupture location, which is commonly represented as x over 00:58:07.000 --> 00:58:09.000 L, is an explanatory variable in the fault 00:58:09.000 --> 00:58:14.000 displacement models. 00:58:14.000 --> 00:58:21.000 So returning now to map view, we can see more detail of the Quaternary fault traces within subsection 00:58:21.000 --> 00:58:22.000 203. The UCERF3 event sets and the fault displacement models 00:58:22.000 --> 00:58:43.000 are sufficient to generate total or principal fault displacement hazard curves, but the results need to be applied to actual mapped fault traces, which include a combination of primary fault traces and secondary traces or splays, and in some cases other named 00:58:43.000 --> 00:59:01.000 faults, such as the Pleasanton fault here. A non-trivial effort in this project has been to classify fault traces across the state as either primary or secondary, and in this example, the main through-going trace of the Northern Calaveras fault is classified as 00:59:01.000 --> 00:59:19.000 primary. Several branching faults are classified as secondary, and we've decided to classify other faults, such as the Pleasanton fault, as secondary in the model, rather than excluding them altogether from the hazard mapping. 00:59:19.000 --> 00:59:22.000 To account for uncertainty in fault rupture location, 00:59:22.000 --> 00:59:27.000 we apply a buffer to the classified fault traces and generate these fault 00:59:27.000 --> 00:59:45.000 trace polygons of primary and secondary hazard for each subsection. The selected buffer width of 200 meters is based on earlier work by Petersen and others in 2011 that evaluated distances between several historic surface ruptures and the fault 00:59:45.000 --> 00:59:48.000 trace mapping that was available prior to the rupture. 00:59:48.000 --> 00:59:57.000 This 200-meter width roughly represents a 95% confidence interval for an average rupture condition. 00:59:57.000 --> 01:00:15.000 The intention for the final fault displacement hazard map is to provide hazard curve results for each of these fault trace polygons for each subsection. We're likely to include a more qualitative hazard statement as well within each of these broader 01:00:15.000 --> 01:00:20.000 UCERF3 fault source polygons. 01:00:20.000 --> 01:00:24.000 Okay, this is the final slide, where I list some limitations. 01:00:24.000 --> 01:00:28.000 So at a high level, we're adopting statewide community data sets. 01:00:28.000 --> 01:00:44.000 These are continuously in development, and they're imperfect. The fault displacement hazard map, we hope, will be viewed with these same attributes: a work in progress, and imperfect as well. For the Quaternary faults, 01:00:44.000 --> 01:00:49.000 we recognize that these traces are just one version of a possibly correct set of faults. 01:00:49.000 --> 01:01:00.000 So here we show fault trace mapping from Kelson and Sundermann, which was generated in 2005 for input to the San Francisco Bay Regional Fault Map 01:01:00.000 --> 01:01:01.000 Project.
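Pulling together the pieces described above, a simplified sketch of the per-subsection calculation might look like the following: take the event-set magnitudes, annual rates, and normalized positions x/L for ruptures passing through one subsection, combine them with a displacement model, and sum to a hazard curve, which would then be attached to the 200-meter buffered fault trace polygons. The lognormal displacement model and every coefficient here are invented placeholders, not the FDHI models, and a real PFDHA would also carry the probability of surface rupture, primary versus secondary partitioning, and epistemic branches.

```python
import numpy as np
from dataclasses import dataclass
from scipy.stats import norm

@dataclass
class Rupture:
    magnitude: float
    annual_rate: float   # mean annual occurrence rate of this rupture
    x_over_L: float      # normalized position of the subsection along the rupture (0 to 1)

def ln_median_displacement_m(mag: float, x_over_L: float) -> float:
    """Toy median model: grows with magnitude and tapers toward the rupture ends."""
    taper = np.sin(np.pi * np.clip(x_over_L, 0.0, 1.0))  # 0 at the ends, 1 at the center
    return -7.0 + 1.0 * mag + np.log(max(float(taper), 1e-3))

def prob_exceed(d_m: float, rup: Rupture, sigma_ln: float = 0.9) -> float:
    """P(D > d | rupture), assuming lognormal aleatory variability in displacement."""
    mu = ln_median_displacement_m(rup.magnitude, rup.x_over_L)
    return float(norm.sf(np.log(d_m), loc=mu, scale=sigma_ln))

def hazard_curve(ruptures, test_displacements_m):
    """Annual rate of exceeding each test displacement at the subsection."""
    return [sum(r.annual_rate * prob_exceed(d, r) for r in ruptures)
            for d in test_displacements_m]

# Example with made-up ruptures through one subsection:
event_set = [Rupture(6.5, 1e-3, 0.5), Rupture(7.2, 3e-4, 0.2), Rupture(7.8, 1e-4, 0.6)]
for d, lam in zip([0.1, 0.5, 1.0], hazard_curve(event_set, [0.1, 0.5, 1.0])):
    print(f"rate of D > {d:.1f} m: {lam:.2e} per year")
```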
Most, but not all, of these traces are equivalent to traces in the current 01:01:01.000 --> 01:01:11.000 Qfault database, and even more of these traces are captured in our fault 01:01:11.000 --> 01:01:16.000 trace polygons. There are several other traces, however, that fall outside of the fault 01:01:16.000 --> 01:01:28.000 trace polygons. Unfortunately, for at least the preliminary version of this fault hazard map, such additional fault mapping and its uncertainty can't be incorporated. 01:01:28.000 --> 01:01:35.000 Other limitations of the map include our fault classification, which was performed very efficiently, 01:01:35.000 --> 01:01:43.000 I guess we'll say, over the entire State, and still other limitations are related to the UCERF3 set of fault sources, 01:01:43.000 --> 01:01:49.000 and again, the recognition that there are several active faults throughout the State that are not represented in the model 01:01:49.000 --> 01:01:58.000 currently. In a few cases, like the Pleasanton fault, we can provide some estimate of fault displacement hazard by treating them as 01:01:58.000 --> 01:02:12.000 secondary structures. Lastly, the fault displacement hazard map will include limited and highly imperfect information about fault displacement direction and how to partition displacement within and between fault trace polygons. 01:02:12.000 --> 01:02:19.000 So, okay, that's all the time we have. Thank you very much for listening. 01:02:19.000 --> 01:02:27.000 Wonderful. Thank you very much, Alex and Steve. So let's move into our discussion period. 01:02:27.000 --> 01:02:36.000 There's been fantastic discussion in the chat, and that's been great, because the quick responses are very handy, I think. 01:02:36.000 --> 01:02:40.000 Now maybe we can get into some more of that rich discussion, too. 01:02:40.000 --> 01:02:55.000 So who would like to ask a question? Please feel free to either raise your hand or unmute yourself and speak up. 01:02:55.000 --> 01:02:57.000 Everyone's shy now. 01:02:57.000 --> 01:03:04.000 Well, I'm just gonna read a question from Keith Knudsen, addressed to Steve. 01:03:04.000 --> 01:03:10.000 Through this project, have you identified places where we really need to do a better job with mapping of the fault surface traces? 01:03:10.000 --> 01:03:16.000 That sort of info would be useful, and that was actually a question that I had highlighted as well. 01:03:16.000 --> 01:03:28.000 That's great. If I could have planted a question, it would have been that. So the Quaternary fault database is a wonderful resource. 01:03:28.000 --> 01:03:32.000 We all use it for making any kind of site-specific assessment. 01:03:32.000 --> 01:03:35.000 We always know that it's just a very first start. 01:03:35.000 --> 01:04:00.000 And if more resources could be directed to the Quaternary fault database, allowing CGS folks and folks in other States to have more time to review more recent maps and maybe engage with the people doing detailed fault mapping in the community, if somehow we can get more resources 01:04:00.000 --> 01:04:22.000 to collect a better set of Quaternary fault traces and get them incorporated in the national Qfault database, that would go a very long way towards making a product like this more useful. Right now, the usefulness of exercises like this is really limited 01:04:22.000 --> 01:04:27.000 because of the accuracy of the fault mapping in the existing Qfault data. 01:04:27.000 --> 01:04:36.000 But have you?
Are there certain areas, certain faults, that you see as really lacking in information, that might be a priority? 01:04:36.000 --> 01:04:51.000 Well, you could look at that a few ways. 01:04:51.000 --> 01:04:52.000 Cool. 01:04:52.000 --> 01:05:01.000 You can look at the faults that pose the greatest hazard and risk. Even taking the Hayward fault, which we would think we've all nailed and have the exact locations for, we have excellent data products from Jim Lienkaemper showing the 01:05:01.000 --> 01:05:09.000 principal creeping trace, but a lot of his mapping isn't in the current version of the Qfaults 01:05:09.000 --> 01:05:21.000 database, and simply updating and replacing the data set for Hayward with Jim's mapping would miss some of the locked traces that still represent a hazard. 01:05:21.000 --> 01:05:25.000 So to answer the question, all faults need improvement. 01:05:25.000 --> 01:05:34.000 Starting with the highest-risk faults makes sense, as does completing the inventory and updating areas like up in the northeast part of the State, around the Hat Creek 01:05:34.000 --> 01:05:37.000 graben; that's another sort of gap that we have in our data. 01:05:37.000 --> 01:05:42.000 So there's a lot of work. 01:05:42.000 --> 01:05:43.000 Great, thanks. 01:05:43.000 --> 01:05:47.000 Hmm, okay, it looks like we have a hand raised by Don from DWR. 01:05:47.000 --> 01:06:08.000 Hi, good morning, thanks. That was a great presentation, you guys. Steve, with sympathetic fault movement, like, for example, what we saw on the Las Positas from the 1980 Greenville earthquake, is there any accounting, you know, for that potential in the model? 01:06:08.000 --> 01:06:22.000 We aren't trying to specifically model phenomena like triggered slip. Remotely triggered slip tends to involve very small 01:06:22.000 --> 01:06:29.000 displacement amounts, particularly if those displacements occur on faults that are interpreted to be cycling in their own right. 01:06:29.000 --> 01:06:46.000 The hazard will really be dominated by slip events from faults when they are producing their own earthquakes, when they slip as principal ruptures or have more robust secondary ruptures. Those are the kinds of 01:06:46.000 --> 01:06:51.000 events that are going to dominate the hazard at most displacement amounts of interest to engineers. 01:06:51.000 --> 01:07:01.000 So we're ignoring things like sympathetic or triggered slip, 01:07:01.000 --> 01:07:07.000 unless people can show examples of, you know, tens of centimeters or meters of triggered slip. 01:07:07.000 --> 01:07:12.000 Then please let us know. 01:07:12.000 --> 01:07:19.000 I think we have, well, we have you, Steve, and Alex. I think we have a number of questions about the width of the zone, 01:07:19.000 --> 01:07:23.000 the 200 meters. Do you want to talk a little bit more about that? 01:07:23.000 --> 01:07:24.000 Or... 01:07:24.000 --> 01:07:30.000 Yeah, to keep things brief. Chris... 01:07:30.000 --> 01:07:35.000 no, I can't keep things brief on this topic. Chris Madugo gave a talk yesterday. 01:07:35.000 --> 01:07:58.000 He mentioned that PG&E
is helping to fund a group that's led by Chelsea Scott down at Arizona State University, and she and Ramon Arrowsmith, as well as folks at the University of Nevada, Reno, are trying to put 01:07:58.000 --> 01:08:15.000 a project together to look much more closely at the uncertainty in rupture location, from the viewpoint of what a pre-rupture fault map looks like. So for principal ruptures, what's our uncertainty in location? For more distributed faulting, 01:08:15.000 --> 01:08:31.000 what's our ability to use just traditional fault mapping to predict where the bulk of the deformation is going to be, or the bulk of the surface rupture is going to be? The 200-meter choice, again, is from Petersen et al. 01:08:31.000 --> 01:08:41.000 It's an initial offering. It doesn't try to account for epistemic uncertainty or uncertainty in the Qfault traces. 01:08:41.000 --> 01:08:52.000 And so again, this whole exercise is supposed to upset people just enough, you know; if we can upset people and they're interested, then we can continue and improve the map. 01:08:52.000 --> 01:08:57.000 So that's kind of the goal. So please be upset and tell us why you're upset. 01:08:57.000 --> 01:09:05.000 You're getting some good comments in the chat for you to read later. 01:09:05.000 --> 01:09:06.000 Alright! Do we have any other hands raised? I don't see any. 01:09:06.000 --> 01:09:11.000 You're welcome to keep putting them in chat. 01:09:11.000 --> 01:09:13.000 Now for Chris. 01:09:13.000 --> 01:09:17.000 John, John just unmuted me, so. We were just having a little conversation about how wide is the fault, and I'm coming from like a deep fault-zone perspective. 01:09:17.000 --> 01:09:25.000 But I've been talking to Alex about this a lot. 01:09:25.000 --> 01:09:42.000 When we started to map fault zones from seismicity, from the seismogenic zone, like 5, 10 kilometers deep, where we could identify sort of simultaneously active kinds of zones, they're on the order of maybe tens of meters for any single 01:09:42.000 --> 01:09:45.000 event, and then, over geologic time, we get this cumulative zone. 01:09:45.000 --> 01:09:52.000 That's kind of hundreds of meters wide, and I would say like 300 meters to a kilometer wide, that represents the fault core over geologic time. 01:09:52.000 --> 01:10:03.000 And that's actually a really nice match, not to any instantaneous measurement like a creepmeter measurement, 01:10:03.000 --> 01:10:14.000 but if you look at the axial trough of the San Andreas, or if you look at some of these long-term areas where we have faults that we lump together, like the San Gregorio fault, that width of like 300 to 1,000 01:10:14.000 --> 01:10:17.000 meters, that number pops up over and over, and this isn't just in California. 01:10:17.000 --> 01:10:33.000 This is in lots of places. So if I was gonna just, back of the envelope, pick a width to treat as the fault zone where most of the rupture is gonna happen, I would pick 300 to 1,000 meters for that, but I 01:10:33.000 --> 01:10:34.000 can inform that with some plots, if you want. 01:10:34.000 --> 01:10:40.000 I'm working on that. 01:10:40.000 --> 01:10:41.000 That's my comment. 01:10:41.000 --> 01:10:49.000 Alright. Thank you. We also have the three other papers, so if you want to ask any questions about the other papers, too, we can 01:10:49.000 --> 01:10:55.000 let Alex and Steve off the spot.
But if there's anything else for them, yes. 01:10:55.000 --> 01:11:02.000 If I could, I've got a further comment about the most recent presentation. 01:11:02.000 --> 01:11:18.000 Question: are there many examples of earthquakes that have hit parts of faults that had already been mapped, and whether the new traces or displacement 01:11:18.000 --> 01:11:27.000 maps are consistent with what was already drawn before? 01:11:27.000 --> 01:11:31.000 Let's see, I think I understand. Thank you for the question, David. 01:11:31.000 --> 01:11:58.000 I think I understand it. So the buffer width that we chose, 200 meters, comes from work that Tim Dawson and others did; that's in the Petersen et al. paper, and there Tim and Rui Chen and others specifically looked at the separation distance between 01:11:58.000 --> 01:12:01.000 traces that were mapped and traces that ruptured. 01:12:01.000 --> 01:12:04.000 So that's what we incorporated in our decision to do 01:12:04.000 --> 01:12:10.000 the buffer width there. So was that your question, or is your question a little different? 01:12:10.000 --> 01:12:20.000 Which is really about prospective evaluation of the accuracy of the maps. 01:12:20.000 --> 01:12:23.000 Are there earthquakes that have occurred after the maps were drawn? 01:12:23.000 --> 01:12:24.000 Oh, gosh! No, no, no, no! We're just finishing this mapping right now. 01:12:24.000 --> 01:12:41.000 I mean, basically, I guess another question would be, you know, you could look at the Ridgecrest earthquake sequence, and then you could compare it to the Quaternary fault mapping 01:12:41.000 --> 01:12:42.000 Right. 01:12:42.000 --> 01:12:43.000 that was done prior to it, and there's a glaring mismatch. 01:12:43.000 --> 01:12:56.000 But I think that reflects more the incompleteness of the mapping, and that gets to the first question of the resources that ideally would go into efforts like the Qfault database. 01:12:56.000 --> 01:13:14.000 Right, there might be an opportunity such that an earthquake occurs on one of those Qfaults, and it wasn't yet drawn in the format, in the way that you're drawing the fault width. 01:13:14.000 --> 01:13:21.000 But the data are there, and it could have been drawn if you just had the resources and the time to map that fault. 01:13:21.000 --> 01:13:29.000 So it's a way to do a kind of quasi-prospective evaluation of the earthquake. 01:13:29.000 --> 01:13:33.000 Yeah. 01:13:33.000 --> 01:13:34.000 Yeah. 01:13:34.000 --> 01:13:36.000 Certainly big earthquakes, like, you know, the Ridgecrest earthquake, 01:13:36.000 --> 01:13:37.000 Yeah. 01:13:37.000 --> 01:13:39.000 they're not gonna fit in what you'd previously mapped. 01:13:39.000 --> 01:13:40.000 Yeah. 01:13:40.000 --> 01:13:45.000 But maybe there are smaller ones that might be usable in an almost prospective approach. 01:13:45.000 --> 01:13:52.000 Well, that program I mentioned, that's a collaboration with Arizona State and the University of Nevada, Reno, with funding from PG&E. 01:13:52.000 --> 01:14:15.000 There we are trying to use pre-rupture data sets from historical ruptures globally, and we're trying to do fault mapping exercises using geomorphology, to try to understand where there are geomorphic indicators and what sort of fault map 01:14:15.000 --> 01:14:31.000 we would produce.
And then we're able to turn on the actual surface fault rupture that was mapped post-earthquake, and we're trying to be fairly careful in the experiment to make sure that the people doing the mapping have no knowledge of those specific earthquake ruptures, so we're trying to do 01:14:31.000 --> 01:14:32.000 Yeah. 01:14:32.000 --> 01:14:35.000 those tests as best we can. 01:14:35.000 --> 01:14:40.000 Okay, great. That's a great response, and also a great presentation, 01:14:40.000 --> 01:14:47.000 I think, and perhaps long overdue. It's really great to see this effort proceeding. 01:14:47.000 --> 01:14:49.000 Yeah, thank you. 01:14:49.000 --> 01:14:50.000 Alright. Thank you. 01:14:50.000 --> 01:14:54.000 So, are there any questions for some of the other speakers? 01:14:54.000 --> 01:15:08.000 For the Alum Rock earthquake, to Julien, or to Heather for the creepmeters, or for the machine learning efforts on focal mechanisms, which there was a lot of discussion about in the chat. 01:15:08.000 --> 01:15:16.000 Please feel free to raise your hand in the virtual sense, or unmute yourself. 01:15:16.000 --> 01:15:17.000 I actually like Dave Shelly's suggestion of back to the future. 01:15:17.000 --> 01:15:31.000 I mean, could we use this machine learning effort to try and characterize the 1911 earthquake on the Calaveras? 01:15:31.000 --> 01:15:37.000 I mean, that might be pushing things a little bit. But if you have observations going back, 01:15:37.000 --> 01:15:43.000 you probably could do that, but yeah, you need to have a large enough data set to actually train these models on. 01:15:43.000 --> 01:15:50.000 But maybe that's a task that's better for our human brains, because it's a more manageable-size problem. 01:15:50.000 --> 01:15:51.000 Yeah, in general, I trust our brains more than the computer. 01:15:51.000 --> 01:15:57.000 But okay, there are pros and cons to both techniques. 01:15:57.000 --> 01:15:58.000 Oh, I see we have a couple of hands up. There's a question by Diane. 01:15:58.000 --> 01:16:04.000 You wanna go first? 01:16:04.000 --> 01:16:20.000 Okay, my question is rather incoherent, but this is for Heather. You're finding so many interesting things about creep and doing corrections for the data at your very well instrumented site. 01:16:20.000 --> 01:16:38.000 I'm just wondering about how well you might be able to apply what you're learning in your study to older creep data, for cleaning up older data that don't have all the data sets that you do. Would that be possible? 01:16:38.000 --> 01:16:43.000 Gosh, okay. Great question. Thank you. I mean, it's difficult. 01:16:43.000 --> 01:16:48.000 The very hopeful part of me would say, that's the goal. 01:16:48.000 --> 01:16:51.000 And maybe if we start with sites where we have orthogonal creepmeters and rain gauges, and really try to quantify what the relationship is, 01:16:51.000 --> 01:17:03.000 for example, if we can do this at multiple sites and we can see a pattern, then maybe. 01:17:03.000 --> 01:17:13.000 But it definitely would be very, very difficult. But I'm gonna say hopefully, maybe. 01:17:13.000 --> 01:17:24.000 Okay? Well, maybe people who are studying creeping sections right now and listening to this will start putting in more instruments. 01:17:24.000 --> 01:17:33.000 Yeah, so I mean, I think that,
yeah, I hope at least that came across in the talk, that these instruments, I think, are super useful. 01:17:33.000 --> 01:17:38.000 And then more data that's co-located with them. 01:17:38.000 --> 01:17:39.000 Oh, yeah. 01:17:39.000 --> 01:17:52.000 And people were saying, like, soil moisture too, which I also think would be great, and I know Roger has also installed different gas sensors, which I think is also very intriguing and useful. And ultimately, I mean, these instruments aren't very expensive compared to others, right? 01:17:52.000 --> 01:18:00.000 So I mean, it depends. But they're relatively inexpensive, and you can do it with a shovel. 01:18:00.000 --> 01:18:01.000 Yeah. 01:18:01.000 --> 01:18:02.000 I mean, of course, having a backhoe is better, but you can do it with a shovel, you know, in a day, with a few unfortunate souls. 01:18:02.000 --> 01:18:11.000 So, yeah, I think it would be really nice. I mean, the biggest issue, or holdup, is site permissions. 01:18:11.000 --> 01:18:22.000 But you know, if there are study sites people already have and have access to, I think it's really useful to have a few of these instruments, at least. 01:18:22.000 --> 01:18:23.000 Thank you. 01:18:23.000 --> 01:18:24.000 Okay. Thank you. 01:18:24.000 --> 01:18:28.000 Alright! We've got a couple more hands, lots of hands going up. This is great. 01:18:28.000 --> 01:18:31.000 How about Bell? Let's go to you. 01:18:31.000 --> 01:18:37.000 Hi! My question is for Rob; that was a really interesting talk and technique. 01:18:37.000 --> 01:19:03.000 And my question is kind of to make sure I'm thinking about the process correctly, which is, would it be correct to say that, according to your method, there's more information in the few stations that existed for earthquakes in the past than we readily recognize as humans, and that this sort of 01:19:03.000 --> 01:19:05.000 pattern matching technique is a way of extracting more information from those records 01:19:05.000 --> 01:19:11.000 than we as humans could? 01:19:11.000 --> 01:19:12.000 Yeah, I would say yes, and then I'll also add on to that. 01:19:12.000 --> 01:19:22.000 It's more that we're looking at other stations to learn what that station might look like, right? 01:19:22.000 --> 01:19:26.000 So if station A and station B are always positive, right, and then for one earthquake you're missing station B, right? 01:19:26.000 --> 01:19:33.000 You know you can guess that if station A is positive, station B would be positive. 01:19:33.000 --> 01:19:37.000 It's more, yeah, you're making those informed guesses. I wouldn't really say it's information, because we are just making these guesses. 01:19:37.000 --> 01:19:45.000 It's not actual data. Yes. 01:19:45.000 --> 01:19:46.000 Thanks. 01:19:46.000 --> 01:19:49.000 Yep, thank you. 01:19:49.000 --> 01:19:51.000 And I see John Langbein's hand is up. 01:19:51.000 --> 01:19:58.000 Yeah, I have a couple of comments on Diane's query and Heather's response. 01:19:58.000 --> 01:20:09.000 I guess a couple of items: one, not all creepmeters slow down in rate, or go left-lateral, or whatever, in the rainy season. 01:20:09.000 --> 01:20:23.000 So there's a diversity of motion and timing, so I'm a little skeptical about being able to correct the older creep data, or whatever you want to call it, 01:20:23.000 --> 01:20:30.000 due to rain. You just have to put up with it. 01:20:30.000 --> 01:20:38.000 I guess
the other comment, too, is that a long time ago I installed some soil moisture sensors. 01:20:38.000 --> 01:20:50.000 They really only told me, sort of, that it rained; it was sort of a binary signal, wet or dry, so they weren't that informative. 01:20:50.000 --> 01:20:57.000 I could have sort of looked at a rain gauge, or the weather, and figured that out. 01:20:57.000 --> 01:21:03.000 Maybe a better sensor would have gotten a finer scale. 01:21:03.000 --> 01:21:11.000 But anyway, it was sort of an old experiment. So with that I'll turn it back over. Thanks. 01:21:11.000 --> 01:21:16.000 Cool. Thanks. So, yeah, having the rain gauges there probably gives a better chance 01:21:16.000 --> 01:21:22.000 of really trying to correlate the signals with the rain. 01:21:22.000 --> 01:21:23.000 Wonderful. 01:21:23.000 --> 01:21:33.000 And I was more thinking of situations where you have the creep data and other data at a site, 01:21:33.000 --> 01:21:45.000 for some reason, whether you could kind of go back to the records and clean up the noise, rather than randomly applying corrections. 01:21:45.000 --> 01:21:50.000 Right. Let's go to Chris's question first. Go ahead. 01:21:50.000 --> 01:21:52.000 Hi, this question for Heather may have been answered in the chat, but there's a lot going on there. 01:21:52.000 --> 01:22:01.000 So how do you decide how wide to make your creepmeter installation? 01:22:01.000 --> 01:22:09.000 Many of the sites that I've worked on have multiple fault strands and can be really wide zones of deformation. 01:22:09.000 --> 01:22:17.000 Ben Brooks, a few years ago, was showing how creep could be on one strand and then move to another strand over time. 01:22:17.000 --> 01:22:28.000 So I guess question one is, I know you use the maps, which could be off, as we learned from the fault location uncertainty, and then the second question is, if you have multiple 01:22:28.000 --> 01:22:29.000 strands, the creep signal could be partitioned. 01:22:29.000 --> 01:22:39.000 You could have an area of focused creep, or really distributed creep, depending on the surface materials. 01:22:39.000 --> 01:22:43.000 It seems like the way you install those creepmeters now, 01:22:43.000 --> 01:22:56.000 you wouldn't capture that nuance, which we're interested in for our underground infrastructure, and I'm assuming if you did a bunch of parallel creepmeters across the fault zone you could capture some more of that nuance. So could 01:22:56.000 --> 01:22:58.000 you speak to those issues, please? 01:22:58.000 --> 01:23:03.000 Yeah, absolutely. So my first response is gonna be a not very scientific one. 01:23:03.000 --> 01:23:09.000 But how we decide where to put it in, and on what strands, right now is budget. 01:23:09.000 --> 01:23:13.000 We're like, okay, we think we're at the most creep here. 01:23:13.000 --> 01:23:17.000 And in terms of being in the field, yes, we map it. 01:23:17.000 --> 01:23:31.000 And then we try and figure it out from, like, road cracks, or if we have offset fences. Like at the Fox Creek site, we had a really good idea of where the fault was, which was offset from the mapped fault, not by much, but enough to where we wouldn't have caught 01:23:31.000 --> 01:23:32.000 any creep if we had gone with what was mapped, and so we had some field mapping.
01:23:32.000 --> 01:23:43.000 We put the first creepmeter right by the offset fence, and in that case I guess we did, it's about 30 feet long. 01:23:43.000 --> 01:23:51.000 We thought that we would get most of the fault creep. There are other sites, though, like St. 01:23:51.000 --> 01:23:53.000 Francis, which I showed, where it looks like there are multiple strands. 01:23:53.000 --> 01:23:58.000 And so we would love to go back and put creepmeters across multiple strands. 01:23:58.000 --> 01:24:06.000 So right now we put it across the strand that we think has the most creep, and based on what we're seeing in the creep measurements already, 01:24:06.000 --> 01:24:20.000 I think we're right. But if you really have, you know, the funding and the resources to really try and catch all the creep along the fault and see how it changes, then absolutely, you can do longer creepmeters, or, like you're saying, kind of have a 01:24:20.000 --> 01:24:25.000 few overlapping, maybe parallel creepmeters, and in that case I think it would be great. 01:24:25.000 --> 01:24:33.000 You could really see the distribution of creep across strands, and maybe how wide the zone is. 01:24:33.000 --> 01:24:39.000 And that would be, I think, very worth doing. 01:24:39.000 --> 01:24:40.000 Yes. 01:24:40.000 --> 01:24:45.000 I'm wondering, have you considered some simple geophysics, electrical resistivity, or something like that, that you could do 01:24:45.000 --> 01:24:48.000 very simply and just pinpoint where the fault likely is? 01:24:48.000 --> 01:24:52.000 I mean, that seems like it would be a cheap add-on, just thinking out loud. 01:24:52.000 --> 01:24:53.000 Yeah, yeah, definitely, I think that is a really great, you know, cheap 01:24:53.000 --> 01:25:05.000 add-on. And it's kind of hard, where we're putting the creepmeters, only like a foot and a half deep, right? 01:25:05.000 --> 01:25:07.000 And so I've found it's hard, when it's that shallow, to use surface 01:25:07.000 --> 01:25:17.000 geophysics to get a better estimate than what we have, and we've just been out there mapping it. 01:25:17.000 --> 01:25:30.000 So that's the only holdup. But I think that if you own the instruments, then absolutely go out there and do it, or if you have cheap access to them, of course, especially for sites where maybe it's not as easy to map. We've been 01:25:30.000 --> 01:25:37.000 lucky with everything we've put in, where we have really good field indicators of where the fault is or where most of the creep is. 01:25:37.000 --> 01:25:38.000 Thank you. 01:25:38.000 --> 01:25:41.000 Yeah, thank you. 01:25:41.000 --> 01:25:50.000 We have another hand up from Felipe, and I think we may have to start wrapping this up in the next 5 minutes or so. 01:25:50.000 --> 01:25:51.000 Hey, thank you. I'm Felipe Kolin from Chile. 01:25:51.000 --> 01:26:04.000 So my question is for Rob. I'm wondering if the imputation method can be used with other measurements of the Earth's surface displacement, like GPS, InSAR, or things like that. 01:26:04.000 --> 01:26:15.000 Or if you can even combine that, leveraging it for older earthquakes, for example. 01:26:15.000 --> 01:26:24.000 So I wonder if you can use the same principles for other kinds of metrics, such as surface displacement captured by GNSS, InSAR, and so on. Thanks. 01:26:24.000 --> 01:26:29.000 Yeah. Great question. Yeah. So I think we can.
So in my example here, right, 01:26:29.000 --> 01:26:37.000 we just imputed a binary data set, whether it's up or down; you can impute actual numbers, right? 01:26:37.000 --> 01:26:38.000 Sure. 01:26:38.000 --> 01:26:45.000 And values, you need to be careful with that, right. But I think we can do that. 01:26:45.000 --> 01:26:50.000 I think that's kind of the future step, where we're headed. I'm not personally at that stage yet, 01:26:50.000 --> 01:26:58.000 still working on that code. But yes, I mean, I think that's the natural next step in general. 01:26:58.000 --> 01:27:03.000 We always need to be careful with how we're using those data, those guessed values. 01:27:03.000 --> 01:27:04.000 Hmm. 01:27:04.000 --> 01:27:12.000 But as long as we're always aware of that limitation, I think, you know, there's lots of promise in this throughout the entire field of seismology. 01:27:12.000 --> 01:27:15.000 And yeah, in my literature review, I couldn't find anyone that had ever used imputation in geophysics before. 01:27:15.000 --> 01:27:22.000 It's possible that I missed something, but I think it's a problem that we all have, right? 01:27:22.000 --> 01:27:33.000 We have missing measurements, missing stations. It's inevitable. So I think if we can try to address some of those limitations, you know, we can have a huge impact. 01:27:33.000 --> 01:27:34.000 Thank you. 01:27:34.000 --> 01:27:36.000 Thank you. 01:27:36.000 --> 01:27:41.000 Great. I think we have time for another question. Is there a question for Julien? 01:27:41.000 --> 01:27:47.000 If not, I'm gonna ask a potentially loaded question. 01:27:47.000 --> 01:27:51.000 Given that we have all these magnitude 5s and 6s on the Calaveras, 01:27:51.000 --> 01:27:59.000 does that influence the size of a major earthquake on the Calaveras? 01:27:59.000 --> 01:28:02.000 Yeah, in fact, that's a good question. 01:28:02.000 --> 01:28:04.000 There was a prediction made, with a timeline, for recurring magnitude 5 and magnitude 01:28:04.000 --> 01:28:17.000 6 events along the Calaveras fault, and now we got this kind of partial rupture. 01:28:17.000 --> 01:28:20.000 So that shows that, yeah, it's quite challenging to make those predictions. 01:28:20.000 --> 01:28:27.000 But I don't know if David would like to add something, maybe. 01:28:27.000 --> 01:28:43.000 Yes, you have to go into this with a bit of humility, because I think this earthquake demonstrated that we thought we understood this section where the Morgan Hill earthquake occurred in 1984 and in 1911, 01:28:43.000 --> 01:28:47.000 but it behaved differently. I think Parkfield is the same thing. 01:28:47.000 --> 01:28:54.000 So I think we have a lot to learn. We're looking at a very small sample of geologic time. 01:28:54.000 --> 01:29:02.000 Yeah, so given that ShakeAlert did so well, should we be investing more of our efforts towards that 01:29:02.000 --> 01:29:11.000 than other investigations, just to be provocative? 01:29:11.000 --> 01:29:24.000 I mean, of course, there is the public safety aspect, and in this particular example, what was great is that the epi- 01:29:24.000 --> 01:29:25.000 center was not in a very populated area. 01:29:25.000 --> 01:29:37.000 It was very close to a populated area.
So basically, that gave some time for people to receive an alert before they felt 01:29:37.000 --> 01:29:38.000 the main shaking. That could be different in the case of an earthquake, 01:29:38.000 --> 01:29:44.000 you know, in San Francisco along the Hayward fault. 01:29:44.000 --> 01:29:48.000 But I think these are two different strategies. 01:29:48.000 --> 01:29:54.000 I mean, one is really about public safety and mitigating damage, and the other one is about better understanding 01:29:54.000 --> 01:30:00.000 earthquake processes and mechanisms. 01:30:00.000 --> 01:30:02.000 Alright. We have a question from Roland. 01:30:02.000 --> 01:30:09.000 Yeah, just really quickly, following up on the previous question: do we know enough about the Cala- 01:30:09.000 --> 01:30:31.000 veras fault to know if a repeat of a 1984 event, which would be right adjacent to where we have the recent 5.1, is plausible, just in terms of slip rates and any information about recurrence intervals? 01:30:31.000 --> 01:30:37.000 So in the paper that we published in 2010, we tried to tackle that question. 01:30:37.000 --> 01:30:43.000 Tom Parsons tried to estimate the likelihood, 01:30:43.000 --> 01:31:01.000 the probability, of future earthquakes in each of these segments, but it was predicated on assuming a magnitude of 6.2, and this recent earthquake showed that that model wasn't quite correct. So, if you think that's going to be the magnitude of the next 01:31:01.000 --> 01:31:05.000 earthquake, using the technique that Tom described in that paper, 01:31:05.000 --> 01:31:16.000 you can come up with probabilities for future earthquakes, and the mean recurrence time, if I remember, for that segment is like 60 years. 01:31:16.000 --> 01:31:20.000 So we wouldn't expect one, probably, at least in my lifetime. 01:31:20.000 --> 01:31:25.000 Hmm, thanks. 01:31:25.000 --> 01:31:40.000 Okay. Well, I'm getting a message here that it's probably about time to wrap up now, now that we've gone into our break time, and I just really wanna thank all the speakers and all the questions we've gotten; it's been great. 01:31:40.000 --> 01:31:43.000 Thank you very much, and we do know the way to San Jose. 01:31:43.000 --> 01:31:45.000 Now, to San Jose! 01:31:45.000 --> 01:31:47.000 So thanks. 01:31:47.000 --> 01:31:48.000 And thank you so much, Heidi and Vicki, for being wonderful 01:31:50.000 --> 01:31:52.000 moderators.