WEBVTT 00:00:04.000 --> 00:00:13.000 Hello everyone, the time is 1:45 p.m. on the West Coast and it is time to come back for a second round of Thunder Talks. 00:00:13.000 --> 00:00:22.000 Second verse, same as the first, and to bring us through the singalong, we have the fabulous Josie Nevitt. [WooHoo] 00:00:22.000 --> 00:00:31.000 Alright, thanks everybody for joining us for the second session of Thunder Talks. We will have 11 4-minute talks given back-to-back 00:00:31.000 --> 00:00:41.000 and live. This should leave about 10 to 15 minutes of discussion time at the end, so please feel free to ask questions in the chat or you can save them 00:00:41.000 --> 00:00:49.000 for the longer discussion period at the end. And so with that, we'll welcome our first speaker, Cliff Thurber. 00:00:49.000 --> 00:00:55.000 Thanks. Good afternoon. I just want to spend a few minutes telling you about work 00:00:55.000 --> 00:01:08.000 I'm doing with Taka'aki Taira, Hao Guo, and Avinash Nayak. Hao Guo is doing the heavy lifting on the work, and my dog wants to join the meeting. 00:01:08.000 --> 00:01:18.000 So we're basically updating a pre-existing tomographic model using new data and a different method and seeing if we can fit waveforms better than the original 00:01:18.000 --> 00:01:24.000 method. 00:01:24.000 --> 00:01:30.000 It's not advancing. 00:01:30.000 --> 00:01:42.000 Thanks. Okay, so the elements are, as I say, we want to update a tomographic 3D velocity model as a sort of alternative to the geologically-based 3D model the USGS has been working with, 00:01:42.000 --> 00:01:52.000 and update it using new data and applying joint body-wave and surface-wave inversion, hopefully to improve resolution, especially of the shear-wave model. 00:01:52.000 --> 00:01:57.000 The starting model is my old 2009 Vp model and we're just using 00:01:57.000 --> 00:02:06.000 an arbitrary Vp/Vs of 1.732 to create the starting S model. Both the P-wave and S-wave arrival time data sets have been expanded 00:02:06.000 --> 00:02:11.000 and we have a massive surface wave data set from ambient noise, courtesy of Avinash Nayak. 00:02:11.000 --> 00:02:17.000 And so we're trying to assess the performance of the initial model, the Thurber et al. (2009) model, 00:02:17.000 --> 00:02:32.000 versus the tomographically updated model and compare it to the original USGS 3D model. I'll just point out that Evan Hirakawa has done work to update that model and we will be incorporating some of his results as we go along. 00:02:32.000 --> 00:02:41.000 And so this last part, that's really just begun, is the validation. So just a quick look at the model: 00:02:41.000 --> 00:02:51.000 it covers the Bay Area plus some, so the greater Bay Area. The left is showing where the two cross-sections on the right are located; they show P-wave velocity 00:02:51.000 --> 00:02:58.000 both in color and with some overlay contours. The dashed line is where it's well resolved above there, 00:02:58.000 --> 00:03:05.000 with some gaps. You can clearly see both the San Andreas and Hayward faults, 00:03:05.000 --> 00:03:12.000 and the deep part of the Calaveras in the top slice. In the bottom slice you can see the dipping part of the San Andreas near Loma Prieta 00:03:12.000 --> 00:03:16.000 and a beautifully aligned Calaveras fault. 00:03:16.000 --> 00:03:33.000 This one is the S-wave velocity model along the same cross-sections, with the same features. I don't really have time to go into any detail about the structure, but in particular the Great Valley, the GV up in the upper right, is reasonably well resolved.
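As an aside on the starting model described above: deriving a starting S-wave model from a Vp model with a fixed Vp/Vs of 1.732 (a Poisson solid) is a simple scaling. A minimal sketch in Python, assuming the Vp grid is already available as a NumPy array; the file names are placeholders, not the actual project files:

```python
import numpy as np

# Placeholder file name; assume a 3D grid of P-wave velocities in km/s.
vp = np.load("starting_vp_model.npy")

# Starting S-wave model using a uniform Vp/Vs of sqrt(3) ~ 1.732,
# i.e. the arbitrary Poisson-solid ratio mentioned in the talk.
vp_vs = np.sqrt(3.0)
vs = vp / vp_vs

np.save("starting_vs_model.npy", vs)
```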
00:03:33.000 --> 00:03:41.000 One of the ways we can assess quality that doesn't require the waveform validation, which is next, is just to look at how we're doing fitting the data. 00:03:41.000 --> 00:03:47.000 The gray bars in the histograms are the initial misfit. 00:03:47.000 --> 00:03:55.000 And the blue bars are the final residuals, and you can see the distributions are much more sharply peaked 00:03:55.000 --> 00:04:01.000 afterwards and the misfit has basically dropped by a factor of 2 or more. In the case of the surface wave data, we had a bias initially, which has been corrected. 00:04:01.000 --> 00:04:13.000 So that's very close to 0 mean with much reduced residuals, so from that measure, things are 00:04:13.000 --> 00:04:23.000 going in the right direction. What we're really just getting going on is the simulation work; that's just one example for one station for one event. 00:04:23.000 --> 00:04:28.000 It's a magnitude 4.7 near Berkeley and a station near Fremont down at the bottom. 00:04:28.000 --> 00:04:33.000 And I've got 30 seconds to wrap up. We've got the initial model in green 00:04:33.000 --> 00:04:39.000 and you can see that the red and the blue are both fitting the data better than the green model. 00:04:39.000 --> 00:04:50.000 So we are improving the initial model significantly, but we still have work to do to try to fit the coda, which is much more energetic than either model. 00:04:50.000 --> 00:04:58.000 But it's better in the USGS model. Thanks. 00:04:58.000 --> 00:05:17.000 Oh, I missed a slide, that's okay. 00:05:17.000 --> 00:05:19.000 You're muted, Artie. 00:05:19.000 --> 00:05:27.000 Okay, let's try that again. This is Artie. I'm coming to you tonight from Zurich, Switzerland, where it's almost tomorrow, 00:05:27.000 --> 00:05:34.000 and this talk will summarize some work I'm doing on waveform tomography for central and Northern California 00:05:34.000 --> 00:05:43.000 to improve the model of wave speeds, and I'm grateful for computing assets at Livermore and data from UC Berkeley 00:05:43.000 --> 00:06:13.000 and support from ETH. The goal of this work is obviously to create a 3D model of seismic wave speeds for numerical simulations of earthquake ground motions, and adjoint tomography images subsurface structure using the data we record on seismographs rather than derived measurements, and by design it improves the fidelity of simulations. 00:06:16.000 --> 00:06:27.000 So we're applying this to the region of central and Northern California and asking the question, you know, can we improve the structure starting with a simple model? 00:06:27.000 --> 00:06:50.000 Can we fit periods of relevance for earthquake hazards and engineering? So the data come from moderate events, magnitude 4 to 6, that have moment tensors from Doug Dreger and coworkers at UC Berkeley. 00:06:50.000 --> 00:07:05.000 And we collect all the available broadband waveforms, shown on the map on the right. The first experiments we've done are using the GIL7 model from Doug that he uses for the moment tensors in most of the region, 00:07:05.000 --> 00:07:16.000 although we're exploring some other starting models, and we're using the Salvus software to simulate and invert the waveforms. 00:07:16.000 --> 00:07:24.000 So, waveform inversion requires an iterative multiscale approach. We start with the longest periods and we 00:07:24.000 --> 00:07:34.000 iterate until we converge and then we progressively add in shorter periods.
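In practice, each stage of a multiscale inversion like this works with data filtered to the current minimum period before misfits are computed. A generic sketch of that filtering step with SciPy (illustrative only; the actual workflow uses Salvus, and the synthetic trace below is just a stand-in):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_min_period(trace, dt, min_period_s, order=4):
    """Low-pass a seismogram so it retains only periods >= min_period_s."""
    fs = 1.0 / dt                      # sampling rate (Hz)
    fmax = 1.0 / min_period_s          # highest frequency kept (Hz)
    sos = butter(order, fmax, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, trace)

# Toy record sampled at 20 Hz, filtered for two inversion stages:
# first a 15 s minimum period, then 8 s as shorter periods are added.
dt = 0.05
trace = np.random.randn(12000)         # stand-in for an observed seismogram
stage1 = lowpass_min_period(trace, dt, 15.0)
stage2 = lowpass_min_period(trace, dt, 8.0)
```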
In this case, we started with a 15-second minimum period. 00:07:34.000 --> 00:07:49.000 We reduced that to 8 seconds, with 45 total iterations, and this animation shows the wave speed at the surface, the shear wave speed at the surface, and how it evolves over the 45 iterations. 00:07:49.000 --> 00:08:07.000 The minimum shear wave speed is reduced from 1,500 meters per second to 800 meters per second, and we see low wave speeds in the Great Valley sequence and the Delta east of the Calaveras, Hayward, and Rodgers Creek faults and higher Vs in the Franciscan rocks west of the San Andreas 00:08:07.000 --> 00:08:25.000 and also in the Sierra Nevada. This plot shows waveform fits for an event, in the map in the upper right here, from eastern California propagating to the Bay Area, for the initial model. The data are in black and the simulated 00:08:25.000 --> 00:08:32.000 seismograms are in blue, and the fits are progressively worse, especially at the largest distance. 00:08:32.000 --> 00:08:37.000 But after we run the inversions we get this fit, so we see we can improve, we go from this 00:08:37.000 --> 00:08:50.000 to this. We improve the fits. So, this is very much work in progress. Some of the next steps are to consider other starting models, including different representations of the Moho. 00:08:50.000 --> 00:09:03.000 What I showed is for a flat 1D model, but we're looking at models that conform to the Moho. We'll come up with several models and we'll have to then 00:09:03.000 --> 00:09:12.000 quantitatively evaluate the performance, you know, with an independent validation data set, and look at the resolution and such. 00:09:12.000 --> 00:09:17.000 And I believe we'll have to update the source parameters as we start to go to shorter periods. 00:09:17.000 --> 00:09:24.000 That means updating the moment tensors and the depths and moments that are required to fit the waveforms. 00:09:24.000 --> 00:09:32.000 And we'll consider adding more data to improve path coverage, especially if we zoom in on the sub-region around the Bay Area. 00:09:32.000 --> 00:09:49.000 Thanks. 00:09:49.000 --> 00:09:55.000 Hey, sorry, I didn't realize I was next. Can you hear me? 00:09:55.000 --> 00:09:57.000 Yep, you're great. 00:09:57.000 --> 00:10:03.000 Thank you. Okay, so I'm just gonna describe something that I've been working on for the last few years, 00:10:03.000 --> 00:10:11.000 which is a means of looking in detail at fault zones using earthquakes and their aftershocks 00:10:11.000 --> 00:10:17.000 after a large event. So this method [I don't believe that advanced] is called the virtual seismometer method. 00:10:17.000 --> 00:10:47.000 There was a paper that went into detail about the mathematics of it by Curtis, around 2009 [can somebody else advance, I don't seem to have control over this, thank you]. So what I'm focusing on here is the Japanese earthquake that happened on January 1st, a large magnitude 7.5, and immediately after that there were thousands of smaller events, aftershocks, that occurred. The map 00:10:48.000 --> 00:11:02.000 that I'm showing here shows roughly the 70 largest, best-categorized ones that were present as of maybe 3 or 4 days after the initial event, 00:11:02.000 --> 00:11:09.000 and when you get this nice collection of seismicity, you can do interferometry on them to get an estimate of the Green's function, 00:11:09.000 --> 00:11:20.000 and then you can use that Green's function to measure the physical properties of the fault zone beneath it.
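As a rough illustration of the interferometric idea (cross-correlating the records of two nearby earthquakes at common stations and stacking over stations, so that one event acts as a virtual seismometer for the other), here is a generic NumPy sketch. It is not the actual implementation of the virtual seismometer method, and the inputs are assumed to be pre-processed, equal-length waveform arrays:

```python
import numpy as np

def inter_event_correlation(records_a, records_b):
    """
    Stack station-by-station cross-correlations of two events' waveforms,
    a crude proxy for the inter-event Green's function estimate used in
    source interferometry. Inputs have shape (n_stations, n_samples) with
    the same stations and sampling for both events.
    """
    n_sta, n_samp = records_a.shape
    stack = np.zeros(2 * n_samp - 1)
    for a, b in zip(records_a, records_b):
        a = (a - a.mean()) / (np.linalg.norm(a) + 1e-12)   # normalize traces
        b = (b - b.mean()) / (np.linalg.norm(b) + 1e-12)
        stack += np.correlate(a, b, mode="full")           # full lag range
    return stack / n_sta

# Example with random stand-in data for 10 stations and 2000 samples:
ccf = inter_event_correlation(np.random.randn(10, 2000), np.random.randn(10, 2000))
lags = np.arange(-1999, 2000)   # lag axis in samples
```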
[Go to the next slide.] Thank you. 00:11:20.000 --> 00:11:27.000 So this is just a snapshot of an hour's worth of data after the initial earthquake. 00:11:27.000 --> 00:11:33.000 So about 20 minutes after the initial earthquake there's just a series of aftershocks that come in, and this continued on for quite a while. 00:11:33.000 --> 00:11:37.000 I came in and grabbed data about 3 or 4 days into this. So I got some of the early data 00:11:37.000 --> 00:11:55.000 that was well categorized, and I looked at it using the Japanese [indiscernible] data to do the actual waveform measurements. 00:11:55.000 --> 00:11:59.000 [So if we go to the next slide, 00:11:59.000 --> 00:12:03.000 thank you.] So what you're seeing here is a map view at about 12 km depth, which is where the bulk of the seismicity is. 00:12:03.000 --> 00:12:10.000 It's been smoothed to 8 km resolution because that's roughly what the resolution is across this entire image. 00:12:10.000 --> 00:12:27.000 The resolution varies dramatically depending on how much seismicity there is in the subsurface. So as you get closer and closer to the fault, you can get higher and higher resolution. 00:12:27.000 --> 00:12:31.000 But this is a decent representation. You can see the major fault axis. You can see how the seismicity 00:12:31.000 --> 00:12:38.000 aligns to it, and the green circle on the rightmost end, 00:12:38.000 --> 00:12:44.000 that is the initial, the origin point for the magnitude 7.5. [So if we go to the next slide.] 00:12:44.000 --> 00:12:50.000 So here we're just zooming in a little bit to focus on that large event and the region around it. 00:12:50.000 --> 00:13:03.000 You can see all the black filled circles are earthquakes that I used data from; there are some open circles which are other aftershocks that I didn't use, but they represent where the fault zone is. 00:13:03.000 --> 00:13:14.000 So here you can see it's fairly smooth at this scale and you don't see a lot of detail at the fault, but like I say you can enhance the resolution [so if we go to the next slide.] 00:13:14.000 --> 00:13:22.000 Thank you. That's 4 km resolution and now you can start to see details of the actual fault zone 00:13:22.000 --> 00:13:26.000 and you can see there's some separation, there's some variability in the structure, 00:13:26.000 --> 00:13:28.000 and if we go to the next one. This is now 2 km resolution, which is about the highest that I do for this one. 00:13:28.000 --> 00:13:44.000 So here we can now see some intricate details, different fracture patterns, that these earthquakes are slipping on slightly different fault zones or slightly different fractures. 00:13:44.000 --> 00:13:54.000 And that presumably is affecting the rupture pattern and why larger earthquakes occur at some points and then smaller ones spread out. 00:13:54.000 --> 00:13:58.000 But those are details to be looked into. So this is the type of imaging, the details that you can start to see right along the trace of the fault zone from a map view. 00:13:58.000 --> 00:14:12.000 But if we go to the next slide. To me the more interesting thing is to look at a vertical profile through the trend of the major fault zone. 00:14:12.000 --> 00:14:21.000 So here we're seeing the top 35 km beneath the Noto Peninsula along the fault trace, 00:14:21.000 --> 00:14:30.000 and this is a bit wider than what we were looking at before.
And this is just a representation of Vp over Vs 00:14:30.000 --> 00:14:35.000 in the subsurface, and I picked this particular one to show because it conforms pretty nicely to the trend of the seismicity. 00:14:35.000 --> 00:14:52.000 Seismicity tends to fall in the zones of Vp/Vs that are less than 1.7, and the higher Vp/Vs zones tend to be aseismic, 00:14:52.000 --> 00:14:56.000 and we're seeing that represented here for this particular fault zone, and you can start to see structural 00:14:56.000 --> 00:14:59.000 details as well, which are really interesting for this particular fault zone. 00:14:59.000 --> 00:15:04.000 And I'll stop there. 00:15:04.000 --> 00:15:16.000 And that's just a Qp/Qs layer. Thank you. 00:15:16.000 --> 00:15:40.000 Hello everybody. This is Guoliang Li from the University of Southern California and the SCEC community. Today I'm going to share with you building a multi-scale velocity model in the Ridgecrest region with full-wave inversion of regional 2D and dense 1D seismic datasets. This figure shows that in the Ridgecrest region, after the two big events, a 2D 00:15:40.000 --> 00:15:41.000 array and 1D linear arrays were deployed in this region. 00:15:41.000 --> 00:15:51.000 The interstation distance for the 2D array is about 5 to 8 km, 00:15:51.000 --> 00:15:55.000 and for those linear arrays crossing the Ridgecrest fault, 00:15:55.000 --> 00:16:03.000 it's only about 100 meters. You know, basically we use the 2D array 00:16:03.000 --> 00:16:11.000 to build a regional-scale velocity model and also further use those linear arrays to resolve the detailed fault zone structure. 00:16:11.000 --> 00:16:14.000 Well. 00:16:14.000 --> 00:16:27.000 Here I'm just showing the regional-scale velocity models. There is too much information here, so I'm trying to summarize for you: the most important feature is the low 00:16:27.000 --> 00:16:35.000 Vs and high Vp/Vs anomaly along the Garlock fault and also the Ridgecrest fault 00:16:35.000 --> 00:16:48.000 at shallow depth. And besides, if you look carefully, you can also see the bimaterial phenomenon along the Ridgecrest region. 00:16:48.000 --> 00:17:02.000 In our model, the anomalies beneath those fault zones can extend to a depth of about 6 kilometers. 00:17:02.000 --> 00:17:16.000 There is also a lot of other information, like the basin in the B area, where the velocities show low Vp and Vs and high Vp/Vs anomalies, and in the marked area like C, 00:17:16.000 --> 00:17:23.000 it shows high Vp and high Vs velocities and relatively low Vp/Vs anomalies. 00:17:23.000 --> 00:17:29.000 Our model also resolved the velocity contrast along the Garlock fault. Here the velocity contrast is defined as the velocity on one side of the fault divided by the velocity on the other side of the fault. 00:17:29.000 --> 00:17:41.000 Here is the map of the velocity contrast built from our model, 00:17:41.000 --> 00:17:58.000 and we can see that the velocity contrast varies along the strike, and there are some variations both along the strike and also in different depth ranges.
00:17:58.000 --> 00:18:05.000 Next I'm going to show you the detailed fault zone structures. 00:18:05.000 --> 00:18:15.000 In this map, the black line here shows the surface-mapped rupture. 00:18:15.000 --> 00:18:23.000 And here it shows the detailed locations of the B4, B1, and B2 linear arrays. 00:18:23.000 --> 00:18:42.000 And here in this map, the white dotted line shows the location of the surface-mapped rupture, and we can see that beneath those surface-mapped ruptures there are low Vs and high Vp/Vs anomalies. 00:18:42.000 --> 00:18:54.000 And besides, we also resolve a very clear bimaterial 00:18:54.000 --> 00:19:23.000 phenomenon here, a low-velocity structure in the southwest part and a high velocity in the northeast part, and we also resolve several localized Vp/Vs anomalies. Also, here it shows the B4 array, where the rupture bifurcated into two branches. 00:19:23.000 --> 00:19:32.000 Here is the first branch and here another branch, and along the branches we resolve low Vs and high Vp/Vs ratio 00:19:32.000 --> 00:19:46.000 anomalies and also localized high Vp/Vs. Okay, those are the results we just got. Okay, that's all. 00:19:46.000 --> 00:20:00.000 Thank you. 00:20:00.000 --> 00:20:11.000 Hello everyone, I'm Weiqiang Zhu from Berkeley. So in 2018 I did my first public talk during graduate school to present this 00:20:11.000 --> 00:20:20.000 at the Northern California earthquake workshop. So the PhaseNet model is a deep-neural-network-based model used to pick P- and S-waves. 00:20:20.000 --> 00:20:28.000 Now this year, we try to propose an improved version of the PhaseNet model. 00:20:28.000 --> 00:20:34.000 So in the title, you can also see the improvements that we make: we add more tasks to the model. 00:20:34.000 --> 00:20:43.000 One is not only picking the arrival times, but also picking polarities and also estimating source parameters. 00:20:43.000 --> 00:20:56.000 To give an example, this is what the PhaseNet model is doing: given three components of waveforms, the model is used to predict the P-wave arrivals and S-wave arrivals. 00:20:56.000 --> 00:21:10.000 So this model was pretty good after [indiscernible] on Northern California earthquakes. It has been used to study many dense earthquake sequences and also volcanic earthquakes and induced seismicity. 00:21:10.000 --> 00:21:21.000 In the updated PhaseNet model, we added two tasks. One is the polarity picking part: for the first-motion polarity, the model also predicts if it's up or down. 00:21:21.000 --> 00:21:29.000 And also we added a task to detect the earthquake event. So here the blue color is for the earthquake 00:21:29.000 --> 00:21:34.000 event detection time; the orange lines are used to predict the origin time. 00:21:34.000 --> 00:21:45.000 So this has a later origin time and this has an earlier origin time. By adding these two tasks, they can mainly be used to serve two additional tasks. 00:21:45.000 --> 00:21:49.000 So with the event detection and the origin time prediction part, we mainly try to solve the association task 00:21:49.000 --> 00:22:05.000 and also make it easier to count how many events there are.
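To make that concrete: once every pick carries a predicted origin time, grouping picks into events reduces to one-dimensional clustering of those origin times. A minimal sketch with scikit-learn's DBSCAN (illustrative only; the pick values below are made up, and the real associator may differ):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical predicted origin times (seconds) attached to individual P/S picks.
origin_times = np.array([10.2, 10.5, 10.3, 55.0, 55.4, 55.1, 120.7, 121.0])

# Group picks whose predicted origin times agree to within ~2 s;
# each cluster label then corresponds to one candidate earthquake.
labels = DBSCAN(eps=2.0, min_samples=3).fit_predict(origin_times.reshape(-1, 1))
print(labels)   # [0 0 0 1 1 1 -1 -1]; -1 marks picks left unassociated
```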
So the association task is, given a sequence of P and S picks, which can be generated by either [indiscernible] or [indiscernible], 00:22:05.000 --> 00:22:12.000 we want to cluster them into separate earthquakes. Previously we used a different algorithm called GaMMA to do this work. 00:22:12.000 --> 00:22:22.000 But now we have added the origin time prediction: the P- and S-wave picks themselves are not well aligned, but if we predict the origin times they can be easily aligned. 00:22:22.000 --> 00:22:33.000 So then, based on this predicted origin time, you can imagine we can easily do a clustering to group them into different earthquakes, shown by different colors. 00:22:33.000 --> 00:22:41.000 So by adding these tasks, we make the phase association task very easy to solve. 00:22:41.000 --> 00:22:47.000 For the polarity picking part, we can directly use it for focal mechanism inversion. 00:22:47.000 --> 00:22:48.000 Because all these deep learning models, after training well, can be very sensitive to very weak signals, 00:22:48.000 --> 00:23:16.000 they can increase the number of polarity picks and give us more focal mechanism solutions. And the first-arrival time picking part we mainly use for detecting earthquakes and getting higher-resolution earthquake locations, but we can also use the first arrival times to do tomography studies. 00:23:16.000 --> 00:23:27.000 So here we show some examples where we tested all these first-arrival pickings against the rest of the models in the Bay Area. 00:23:27.000 --> 00:23:37.000 So with all these models, we try to solve the perception part, measuring physical parameters, as accurately as possible. 00:23:37.000 --> 00:23:45.000 So the perception part is mainly to measure, from the raw data, all the parameters that we want to measure. 00:23:45.000 --> 00:24:00.000 So in this part we have the [indiscernible] model. We want to work on the raw waveforms to measure the phase arrival times and the phase polarity, and also hopefully make the phase association task easy. 00:24:00.000 --> 00:24:06.000 And for the DAS data, we also have a model called PhaseNet-DAS, which is used to pick P and S arrivals on DAS data. 00:24:06.000 --> 00:24:18.000 And we are also exploring, like, a multi-station-based method that tries to use not only single stations but multiple stations to do the phase picking and phase association tasks. 00:24:18.000 --> 00:24:27.000 Then after we get all these accurate measurements, we can use them to [indiscernible] constrain the accuracy of the parameters. I'll stop here. 00:24:27.000 --> 00:24:40.000 Thank you. 00:24:40.000 --> 00:24:50.000 Alright, good afternoon, everyone. So my name is Rob Skoumal. I'd like to talk just very briefly about a new algorithm in Python for computing focal mechanisms. 00:24:50.000 --> 00:24:56.000 So, part of the motivation for this new algorithm comes from our improvements in earthquake detection, right? 00:24:56.000 --> 00:25:07.000 So, with techniques like template matching or, as we just heard about, deep learning methods, we can commonly increase the number of detected earthquakes by about an order of magnitude when compared to traditional techniques. 00:25:07.000 --> 00:25:12.000 But although we can detect these events, characterizing these events has proven to be a little bit more challenging.
00:25:12.000 --> 00:25:19.000 So here in Northern California, we can commonly produce moment tensors for earthquakes larger than about magnitude 4.0. 00:25:19.000 --> 00:25:24.000 For smaller earthquakes, in the kind of 2 to 3 range, we can usually produce a focal mechanism, 00:25:24.000 --> 00:25:30.000 but for earthquakes that are smaller than that, we're usually out of luck when we're relying on traditional techniques. 00:25:30.000 --> 00:25:40.000 So I've been fortunate over the past couple years to work with Dave Shelly and Jeanne Hardebeck, and we've been trying to think of new techniques to improve our ability to resolve these focal mechanisms. 00:25:40.000 --> 00:25:53.000 So this new algorithm has been designed with these techniques in mind. And then finally, whether or not you want to use these new techniques, the goal is to be able to produce focal mechanisms as easily and quickly as possible. 00:25:53.000 --> 00:26:02.000 So I just want to run through some of the new features of this code. So hopefully as the name of this algorithm implies, a lot of the framework is based off of HASH. 00:26:02.000 --> 00:26:12.000 So, with HASH, I think one of the coolest ideas is the idea to vary your input parameters to produce different suites of solutions representing a single event. 00:26:12.000 --> 00:26:15.000 So with HASH one of the things you can vary is the hypocentral depth of the earthquake. 00:26:15.000 --> 00:26:19.000 Right, by varying that depth, you can get a variety of different takeoff angles. Now with HASH, though, the source-receiver azimuth is fixed in place. 00:26:19.000 --> 00:26:29.000 So if you look at the takeoff angles you can get, you'll get that little green line there on that mechanism. 00:26:29.000 --> 00:26:36.000 Now with SKHASH, we've expanded this to the third dimension, right. So now we can vary that hypocentral location 00:26:36.000 --> 00:26:42.000 according to your areal [indiscernible] and as a result our piercing points are much more reflective of the true uncertainty. 00:26:42.000 --> 00:26:44.000 Another feature that we added is the ability to consider weighted polarity measurements. 00:26:44.000 --> 00:26:59.000 So traditionally, using a human analyst, those polarity measurements are pretty qualitative. So with HASH, you can kind of imagine that under the hood the polarities are treated as categorical variables, right. 00:26:59.000 --> 00:27:06.000 Either the polarity is down or it's up. With SKHASH, we treat these polarities as floats, right. 00:27:06.000 --> 00:27:13.000 So if it's a down polarity, it's represented by a negative sign. If it's a positive or upwards, it's represented by a positive value. 00:27:13.000 --> 00:27:20.000 And the absolute value of that polarity measurement represents the weight, right. So this is really important, right. 00:27:20.000 --> 00:27:24.000 If you have a collection of polarity measurements, some that you're very confident about, some that you're not as confident about, 00:27:24.000 --> 00:27:29.000 you don't have to weight them the same, because that could hurt your solution. At the same time, you don't want to exclude all the measurements 00:27:29.000 --> 00:27:42.000 you're not completely confident about, because there is value in those measurements. So especially when you're using these new techniques, where we're able to quantitatively determine the uncertainties, considering these weights is really important.
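To illustrate the signed-float convention just described (the sign encodes up or down, the absolute value encodes the weight), here is a generic sketch of how weighted polarities could enter a trial mechanism's misfit. This is not SKHASH's actual code, just the idea:

```python
import numpy as np

def weighted_polarity_misfit(observed, predicted):
    """
    observed  : signed floats, e.g. +0.9 (confident up), -0.3 (uncertain down)
    predicted : +1/-1 polarities predicted at the same stations by a trial
                focal mechanism (from its radiation pattern)
    Returns the weight-normalized fraction of misfitting polarities.
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    weights = np.abs(observed)
    mismatch = np.sign(observed) != np.sign(predicted)   # True where signs disagree
    return float(np.sum(weights * mismatch) / np.sum(weights))

# Two confident picks fit, one low-weight pick misfits:
print(weighted_polarity_misfit([+0.9, -0.8, +0.2], [+1, -1, -1]))   # ~0.105
```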
00:27:42.000 --> 00:27:44.000 Another technique is the ability to consider, sort of, missing data, right. So the idea is that 00:27:44.000 --> 00:27:54.000 you have some kind of catalog of earthquakes, hopefully you have lots of up polarities and lots of down polarities, 00:27:54.000 --> 00:28:06.000 but you're not gonna have polarities at every source-receiver pair, most likely. So I actually talked about this last year at this workshop, and the idea was that we could use random forests to make educated guesses for what those missing values would be. 00:28:06.000 --> 00:28:12.000 Now with SKHASH, you can at this point either determine individual mechanisms for each of your earthquakes, 00:28:12.000 --> 00:28:22.000 or you can cluster them into families and then produce composite mechanisms. So really the goal with SKHASH is, you know, kind of no matter what your workflow is and what kind of data you're considering, 00:28:22.000 --> 00:28:28.000 the idea is that hopefully it's flexible enough to handle those different workflows. 00:28:28.000 --> 00:28:29.000 So in conclusion, if you ever need to compute focal mechanisms in the future, 00:28:29.000 --> 00:28:40.000 I hope you'll consider giving this code a try. Everything is publicly available online. If you have any questions or suggestions, feel free to email me at any time. 00:28:40.000 --> 00:28:49.000 Alright, thanks. 00:28:49.000 --> 00:28:56.000 Hi everyone. I'm Yifang Cheng from the University of California, Berkeley. 00:28:56.000 --> 00:29:01.000 Today I will share with you our recent work on the stress map of California. Here is my view of the major tectonic units in California. 00:29:01.000 --> 00:29:16.000 The relative motion between the Pacific Plate and North America Plate is mainly accommodated by the San Andreas fault and partially accommodated by the Walker Lane shear zone and Eastern California shear zone. 00:29:16.000 --> 00:29:26.000 And in Southern California the Transverse Ranges and the Big Bend developed in association with the opening of the Gulf of California. 00:29:26.000 --> 00:29:34.000 The right side shows the corresponding strain field. The geodetic strain field shows the temporal fluctuation of the crustal deformation, 00:29:34.000 --> 00:29:41.000 but for the stress field, it is the stress accumulated across the long-term crustal deformation. 00:29:41.000 --> 00:29:51.000 So it provides a comprehensive perspective on the deformation. So how can we derive the stress map of California? 00:29:51.000 --> 00:30:05.000 Here we use over 1.5 million earthquakes from both the Northern and Southern California Seismic Networks, and we apply the REFOC method to calculate the focal mechanisms. 00:30:05.000 --> 00:30:21.000 This method not only uses the polarities and the S-wave to P-wave amplitude ratios from the individual earthquakes, but also uses the inter-event P-wave amplitude ratios and S/P amplitude ratios to further constrain the focal mechanisms. 00:30:21.000 --> 00:30:26.000 In the end, we have obtained high-quality focal mechanisms for over 50% of all cataloged earthquakes. 00:30:26.000 --> 00:30:49.000 And we use these focal mechanisms to perform the stress inversion, and here is the result. The Walker Lane shear zone and northern San Andreas fault area are dominated by transtensional stress, and the central San Andreas fault area is dominated by transpressional stress, 00:30:49.000 --> 00:30:56.000 while the southern San Andreas fault system is mainly dominated by a strike-slip stress regime.
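Resolving a stress tensor onto a fault plane is the basic operation behind both the regime classification above and the question, discussed next, of whether a fault is optimally oriented to fail. A generic sketch (illustrative only, not the study's actual code; compression is taken as positive and the example stress values are arbitrary):

```python
import numpy as np

def resolved_tractions(stress, normal):
    """
    Resolve a 3x3 symmetric stress tensor onto a plane with the given unit normal.
    Returns (normal stress, shear stress magnitude, slip tendency tau / sigma_n).
    """
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)
    traction = stress @ normal                  # traction vector on the plane
    sigma_n = float(normal @ traction)          # normal component
    shear_vec = traction - sigma_n * normal     # shear component in the plane
    tau = float(np.linalg.norm(shear_vec))
    return sigma_n, tau, tau / sigma_n

# Example: a strike-slip style stress field (most compressive along x, least along y,
# intermediate vertical) and a vertical plane whose normal is 30 degrees from x.
stress = np.diag([100.0, 40.0, 70.0])
normal = [np.cos(np.radians(30)), np.sin(np.radians(30)), 0.0]
print(resolved_tractions(stress, normal))
```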
00:30:56.000 --> 00:31:02.000 And we also observe very strong stress heterogeneity, and the local stress rotations show clear association with the major faults, 00:31:02.000 --> 00:31:18.000 which might be related to the inter-fault interactions, the slip partitioning, and the major fault geometry. We further combine the stress tensor 00:31:18.000 --> 00:31:22.000 we obtained with the major fault orientations to compute whether these major faults are optimally oriented to fail. 00:31:22.000 --> 00:31:37.000 We found that in most of the area the major faults are optimally oriented to fail, but some regions have major faults that are not optimally oriented. 00:31:37.000 --> 00:31:50.000 In some regions this may be caused by the local stress rotation, like the region northwest of the Clear Lake volcanic area and the southern San Andreas fault area. 00:31:50.000 --> 00:31:57.000 Some areas have recent major ruptures that have already released a large portion of the stress, like the Eastern California shear zone. 00:31:57.000 --> 00:32:02.000 And some major faults, like the Garlock fault and the White Wolf fault, are at a really high angle to the relative plate motion, 00:32:02.000 --> 00:32:17.000 so they are just not really optimally oriented to fail under the current plate motion direction. So that's all I want to share with you today, and I welcome 00:32:17.000 --> 00:32:30.000 further discussion by contacting me. Thanks! 00:32:30.000 --> 00:32:36.000 Hi everybody! This is Felix Waldhauser, I'm at the Lamont-Doherty Earth Observatory at Columbia University. 00:32:36.000 --> 00:32:47.000 I'm presenting a summary of our recent developments in Northern California and a brief overview of our current plans, 00:32:47.000 --> 00:32:57.000 and this is work with Theresa Sawi, a student who was at Lamont and is now a post-doc at the USGS 00:32:57.000 --> 00:33:12.000 at Moffett Field, Ben Holtzman, and David Schaff. So the overall objective of our current work is to detect and then monitor repeating earthquake sequences in real time across Northern California, 00:33:12.000 --> 00:33:19.000 so that we can track changes in parameters like recurrence times and size 00:33:19.000 --> 00:33:31.000 with respect to some baseline characteristics, and then monitor unusual behavior of individual faults. So we can take advantage now of three recent developments, which I'm briefly 00:33:31.000 --> 00:33:43.000 presenting here. One is we have finished a major update of the real-time double-difference relocation system DD-RT for Northern California. 00:33:43.000 --> 00:33:50.000 So this comes with a new high-resolution base catalog, which goes from 1984 to 2021, 00:33:50.000 --> 00:33:57.000 with almost a million earthquakes, and the locations for new events are real-time after that. 00:33:57.000 --> 00:34:04.000 So the most significant difference to the previous catalogs is that the hypocenter depths are now referenced to the geoid 00:34:04.000 --> 00:34:12.000 as opposed to some baseline station elevation, adapting to the changes that the USGS made a few years ago.
00:34:12.000 --> 00:34:19.000 So both catalogs, the base catalog and the real-time solutions, are available on a new server now that's called DD 00:34:19.000 --> 00:34:31.000 [indiscernible]. The second development, on the left in yellow dots, is a catalog of repeating earthquakes 00:34:31.000 --> 00:34:51.000 that we published in JGR in 2021, and it includes over 7,000 sequences with almost 28,000 repeaters. It goes from 1984 to 2014 and it will be used as the base catalog. 00:34:51.000 --> 00:34:58.000 So the third development is a new method to detect repeating earthquakes using unsupervised machine learning 00:34:58.000 --> 00:35:06.000 to harness variations in spectral features. So Theresa Sawi has applied that tool 00:35:06.000 --> 00:35:18.000 to a [indiscernible] long segment of the San Andreas fault, and we detect about double the number of repeaters compared to the ones in the catalog [indiscernible] shown on the left, 00:35:18.000 --> 00:35:25.000 both finding new repeater sequences and also filling gaps in existing sequences. 00:35:25.000 --> 00:35:43.000 And so the current work right now is the plan to move the base catalog and the new detection workflow into the real-time system so that we can update the repeating earthquake sequences in real time 00:35:43.000 --> 00:36:00.000 and expand on the current catalog. Thank you. 00:36:00.000 --> 00:36:10.000 Hi there, my name is Scott Marshall. I'm at Appalachian State University, and on behalf of my co-authors Andreas Plesch and John Shaw I just wanted to tell everybody about the SCEC Community Fault Model, 00:36:10.000 --> 00:36:18.000 with special emphasis on the fact that the Community Fault Model is now going statewide, because SCEC is now the Statewide California Earthquake Center. 00:36:18.000 --> 00:36:25.000 So let's see if I can change the slide here. There's this area now that's of great interest, which is Northern California. 00:36:25.000 --> 00:36:35.000 It always was of great interest, but now it's of specific interest because the SCEC region is now expanded to Northern California as well, and so is the CRESCENT earthquake center, along with this group. 00:36:35.000 --> 00:36:45.000 So what we're trying to do is make sure that our fault model is as accurate as possible. So what I wanted to do is give you a quick overview of what the CFM is. 00:36:45.000 --> 00:36:53.000 What it is, it's a hierarchically organized set of faults that currently covers Southern California, but you'll see it's about to cover Northern California. 00:36:53.000 --> 00:36:58.000 The preferred version has about 450 faults. There are somewhere around 40 alternative representations included there. 00:36:58.000 --> 00:37:13.000 This is all accessible through the website. This is just the traces of the model, but the model itself is fully three-dimensional, with variations in dip and strike with depth on the surfaces. 00:37:13.000 --> 00:37:28.000 So on this home page, which is linked down in the bottom right-hand corner, you get simple maps or you can have fancy maps, but the model itself is fully three-dimensional, so we encourage people to actually download it in its 3D format. 00:37:28.000 --> 00:37:39.000 In case you're familiar with other large-scale fault models: there's the USGS Qfaults on the left, which is basically a collection of highly detailed surface traces, 00:37:39.000 --> 00:37:56.000 so this is just a two-dimensional model.
The CFM is sort of situated in the middle here resolution-wise, and then the UCERF3 or National Seismic Hazard Model faults are on the right, and those are sort of two-and-a-half-D: they have a trace and a dip, but there are no variations in the down-dip direction. 00:37:56.000 --> 00:38:04.000 Whereas the Community Fault Model has all kinds of variations wherever the data justify it. So just to compare these 3 side by side: they're not the same. 00:38:04.000 --> 00:38:12.000 They're all different. And they have their own pros and cons. 00:38:12.000 --> 00:38:21.000 Let's see if this works. This should be a 3D movie showing you one such fault, to give you an idea of the complexity in the Community Fault Model compared to some of these other fault models. 00:38:21.000 --> 00:38:24.000 This is the Palos Verdes fault in Southern California from Franklin Wolfe and others' paper in 2022. 00:38:24.000 --> 00:38:35.000 The plot on the bottom here is showing the dip angle of all the triangles in the mesh, and the rose diagram up here is showing you the strikes of those triangles. 00:38:35.000 --> 00:38:41.000 And the color scale is going to show you the dip angles here. So let's see what happens. 00:38:41.000 --> 00:38:49.000 Hey, it works. Awesome. So as we're rotating around, and by the way, the model is distributed in GOCAD format, but you don't need GOCAD to use the Community Fault Model. 00:38:49.000 --> 00:38:57.000 Again, this is made using MATLAB. You could do the same thing with Python or Generic Mapping Tools, whatever you want. 00:38:57.000 --> 00:39:07.000 So you can see this is quite the complicated fault structure. And one of the advantages of the Community Fault Model is we have a full three-dimensional representation. 00:39:07.000 --> 00:39:15.000 The amount of intersections here cannot simply be represented by a fault trace and a dip angle. So that's one of the strengths of this model. 00:39:15.000 --> 00:39:21.000 The downside is it's complicated. 00:39:21.000 --> 00:39:22.000 I wanted to let everybody know we have an association service that runs automatically on the Caltech server. 00:39:22.000 --> 00:39:32.000 Every time there's an earthquake, magnitude 3.0 or greater, detected by the USGS network, I think the ComCat catalog, 00:39:32.000 --> 00:39:40.000 automatically our code runs on that server and associates it with whichever CFM fault is closest. 00:39:40.000 --> 00:39:50.000 So we're working to expand that statewide as we move the model statewide. And in my remaining seconds, what I just wanted to put out a call for is that we are moving statewide. 00:39:50.000 --> 00:39:53.000 So the northern portion of the model is in progress. It's close to done, but before we have an official release we wanted to form these work groups. 00:39:53.000 --> 00:40:07.000 So if you're interested in joining one of these work groups to help us assess the Northern California portion and tell us what we're missing and what we've done right, 00:40:07.000 --> 00:40:13.000 please reach out to me, marshallst@appstate.edu, and we'll get you hooked up with that process. 00:40:13.000 --> 00:40:25.000 So thanks very much. 00:40:25.000 --> 00:40:45.000 Hi, I'm Scott Callahan from SCEC, and I just wanted to take a couple of minutes today to give you an update on upcoming CyberShake studies that are planned for Northern California. The idea is to essentially take the same methodology that we recently applied in Southern California for CyberShake Study 00:40:45.000 --> 00:40:52.000 2212.
So this was at the end of 2022 into 2023, and we'll use that same approach here in Northern California. 00:40:52.000 --> 00:40:56.000 It's been a while since our last Northern California study; it was back in 2018. 00:40:56.000 --> 00:41:03.000 This time we're focusing on kind of a smaller area, kind of the greater San Francisco Bay Area, 315 sites. 00:41:03.000 --> 00:41:10.000 And then for each of these sites, we're planning to simulate seismograms for about 250,000 events. 00:41:10.000 --> 00:41:17.000 This will be a broadband PSHA study, just like the one that we ran in Southern California. 00:41:17.000 --> 00:41:30.000 So we'll be doing deterministic calculations up to 1 Hz and then combining those with stochastic simulations using the modules in the SCEC Broadband Platform up to 25 Hz. 00:41:30.000 --> 00:41:37.000 So what's new and different with this upcoming CyberShake study? So here are a few things. 00:41:37.000 --> 00:41:48.000 One is that we are adding a vertical component. In the past, because CyberShake uses reciprocity, we've done calculations for both horizontal components but not the vertical, 00:41:48.000 --> 00:41:54.000 and we think it's time to add that. Another change is that we're reducing our minimum Vs. In the past 00:41:54.000 --> 00:42:02.000 it's been 500 m per second. We're bringing it down to 400 m per second because we know there are a lot of low-velocity structures in the Bay Area. 00:42:02.000 --> 00:42:07.000 We're also adding some new data products. We're going to calculate frequency-dependent durations, 00:42:07.000 --> 00:42:16.000 Fourier spectra as well as response spectra. We're going to stop calculating the geometric mean because it seems like nobody's really interested in it anymore. 00:42:16.000 --> 00:42:21.000 And as I mentioned, since we have a vertical component, we'll have three-component seismograms instead of just two. 00:42:21.000 --> 00:42:41.000 Another big change is a modification to the events that we're considering. So traditionally with CyberShake, the way we decide what events we want to include in our simulations is we take our site of interest, we go out 200 km, and we include any fault surfaces that are within 200 km of that site. 00:42:41.000 --> 00:42:53.000 We then include any events that occur on that fault surface. So what that practically means here is, for the sites that are south of the red line in the map over here on the right, 200 km south will get you to Parkfield. 00:42:53.000 --> 00:42:59.000 And so that means that we needed to include any events which ruptured the Parkfield section of the San Andreas, 00:42:59.000 --> 00:43:06.000 and so in some cases, right, that is a wall-to-wall San Andreas event that starts down at the Salton Sea and ruptures all the way up. 00:43:06.000 --> 00:43:12.000 We took a look at the hazard that those events contribute, both in the last CyberShake study up here and in attenuation relationships, and the contributions are pretty low. 00:43:12.000 --> 00:43:21.000 So we're removing all those events this time around, and the effect is that it reduces the number of events we need to simulate by about half 00:43:21.000 --> 00:43:28.000 and reduces our simulation volume size by over half, so computationally it makes it much easier. 00:43:28.000 --> 00:43:41.000 So right now we're in the process of getting this study ready to go. So we're evaluating different regional 3D models that we want to use, as well as background models, since many of our simulation volumes extend out into Nevada.
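The site-based event selection described above is essentially a distance filter between each site and the fault surfaces in the rupture set. A toy sketch of that rule (the data structures here are made up for illustration and are not the actual CyberShake code):

```python
import numpy as np

def ruptures_for_site(site_xy, ruptures, cutoff_km=200.0):
    """
    Keep any rupture whose fault surface comes within cutoff_km of the site.
    site_xy  : (x, y) site coordinates in km
    ruptures : list of dicts like {"id": ..., "surface_xy": Nx2 array of points, km}
    """
    selected = []
    for rup in ruptures:
        dists = np.linalg.norm(rup["surface_xy"] - np.asarray(site_xy), axis=1)
        if dists.min() <= cutoff_km:        # any point of the surface within range
            selected.append(rup["id"])
    return selected

# Toy usage: a nearby rupture is kept, a distant one is dropped.
ruptures = [
    {"id": "nearby_scenario", "surface_xy": np.array([[0.0, 10.0], [0.0, 60.0]])},
    {"id": "distant_scenario", "surface_xy": np.array([[500.0, 0.0], [550.0, 0.0]])},
]
print(ruptures_for_site((0.0, 0.0), ruptures))   # ['nearby_scenario']
```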
00:43:41.000 --> 00:43:47.000 So right now we're looking at kind of profiles and slices to make sure that they pass the eye test. 00:43:47.000 --> 00:43:55.000 Once that's done, we're going to do the same thing we did in Southern California, which is to do validation using some of the SCEC Broadband Platform central and Northern California events, 00:43:55.000 --> 00:44:00.000 so we're looking at Loma Prieta, Alum Rock, and San Simeon. We're also verifying the vertical component 00:44:00.000 --> 00:44:11.000 to make sure all the processing pipelines are working correctly with that. And our timeline is that we're planning to start the study in the next few months. 00:44:11.000 --> 00:44:18.000 So, if any of you have any feedback you'd like to provide, things you want to make sure we include or don't include, or data products you'd like to see, 00:44:18.000 --> 00:44:26.000 or any of that, now is a great time to let us know, so please get in touch with me. 00:44:26.000 --> 00:44:34.000 Thank you very much. 00:44:34.000 --> 00:44:40.000 Hello everyone, I'm Mark Benthien. I lead SCEC's education and outreach programs 00:44:40.000 --> 00:44:47.000 and I'm here to give an update on the HayWired Scenario Exercise Toolkit that many partners have been involved with, that came out last year 00:44:47.000 --> 00:44:53.000 with support from Anne Wein at the USGS and a strong partnership with Monica Stoffel with 00:44:53.000 --> 00:45:12.000 the California Resiliency Alliance. We developed this toolkit to provide an ability for businesses and other organizations to use the USGS HayWired Scenario in discussion-based exercises of many different types. 00:45:12.000 --> 00:45:25.000 Of course, if you're not familiar with it, the HayWired Scenario is a comprehensive set of resources and reports about what might happen in a M7.0 earthquake centered near Oakland on the Hayward fault. 00:45:25.000 --> 00:45:33.000 And it's really useful for exercising because there's so much great content, all the different impacts associated with multiple hazards 00:45:33.000 --> 00:45:44.000 and the cascading impacts of such an event, and there are many insights that can be drawn from these in planning not just for that earthquake but for any that might happen, 00:45:44.000 --> 00:45:51.000 and so it's been designed for any organization, even without exercise experience, to plan, lead, and learn from such exercises. 00:45:51.000 --> 00:46:05.000 Again, many partners were involved with creating it and providing feedback, and we thank them all, and many, many more to come who are using this now; we're doing trainings about the toolkit 00:46:05.000 --> 00:46:14.000 and rolling it out. What's interesting is we've identified basically 45 ideas across 30 themes that are 00:46:14.000 --> 00:46:28.000 presented in the scenario that are ideal for these discussion-based exercises. We currently have 17 tools for 17 of those ideas, and soon three more. 00:46:28.000 --> 00:46:36.000 The tools have a discussion guide with how to run an exercise, who to involve, and what questions to ask, 00:46:36.000 --> 00:46:48.000 which really makes it easy, as well as an imagery slide set pulled from the scenario reports.
The three new ones that we're going to be adding are about economic impacts 00:46:48.000 --> 00:46:59.000 and how those may affect customers and businesses' financial recovery; about food access and insecurity, the ability for employees and customers to even access food after an earthquake; 00:46:59.000 --> 00:47:03.000 and also payroll, in terms of the ability to pay employees. So those are discussions that businesses of many sizes might want to 00:47:03.000 --> 00:47:30.000 engage in. All the materials you can find on EarthquakeCountry.org/haywired, and we are rolling out a program of trainings and outreach this spring, starting with the city of Newark coming up, but we are looking for other locations, venues, and hosts for these workshops, so please email me. There's my address: benthien@usc.edu 00:47:30.000 --> 00:47:38.000 if you want to host a workshop or know someone who might be interested. Businesses and business associations, Chambers of Commerce, 00:47:38.000 --> 00:47:49.000 nonprofits, and many others might work. And I also just want to end with: we also lead the Earthquake Country Alliance statewide, with support from FEMA and Cal OES 00:47:49.000 --> 00:48:02.000 as part of NEHRP, and our next ECA Bay Area workshop is on February 21st, thanks to Richard Allen and Roland Bürgmann and others at the UC Berkeley Seismological Lab. 00:48:02.000 --> 00:48:06.000 We're going to be having our meeting there, 00:48:06.000 --> 00:48:08.000 and if you aren't a member of ECA, you can go to EarthquakeCountry.org/join 00:48:08.000 --> 00:48:26.000 and get on our list and learn about that and other activities. Thank you. 00:48:26.000 --> 00:48:38.000 Alright, well thank you Thunder Talkers for a great session. And we will have about 12 minutes now for discussion. 00:48:38.000 --> 00:48:44.000 So, I'm gonna kind of highlight a couple questions from the chat that weren't answered. 00:48:44.000 --> 00:48:53.000 But please, if you want to ask a question, feel free to turn on your camera and raise your hand and we'll get to you next. 00:48:53.000 --> 00:49:04.000 So there is a question for Felix from Cliff Thurber, who asked if there's any idea of the cause of the previously missed repeaters. 00:49:04.000 --> 00:49:18.000 Yeah, I think I answered that, maybe. So many of the repeaters we missed from our initial catalog were at shallow depth, and they have larger relative location errors. 00:49:18.000 --> 00:49:25.000 And so the spatial criteria that we had at the beginning were, I guess, too strict. 00:49:25.000 --> 00:49:36.000 And so with the unsupervised feature extraction method we essentially have a space-independent 00:49:36.000 --> 00:49:42.000 tool to go through the whole catalog, so we can compare those fingerprints much more quickly than 00:49:42.000 --> 00:49:52.000 doing a cross-correlation. But then there are other reasons. There's a paper by Theresa in The Seismic Record that discusses 00:49:52.000 --> 00:49:55.000 some of the causes. 00:49:55.000 --> 00:50:00.000 Great. Yeah, and the link to that paper, I believe, is in the chat. 00:50:00.000 --> 00:50:01.000 Right. Correct. 00:50:01.000 --> 00:50:07.000 I can't put it in the chat. 00:50:07.000 --> 00:50:18.000 Great. All right, Emery. 00:50:18.000 --> 00:50:25.000 We cannot hear you. 00:50:25.000 --> 00:50:28.000 Yes. Just join over here. 00:50:28.000 --> 00:50:34.000 Yeah. Yes.
00:50:34.000 --> 00:50:40.000 No. 00:50:40.000 --> 00:50:41.000 Hi, we can hear your infinite echo. 00:50:41.000 --> 00:50:45.000 Alright, can you hear us in here? All right, how's that? 00:50:45.000 --> 00:50:46.000 Good. 00:50:46.000 --> 00:50:47.000 Go forward. 00:50:47.000 --> 00:50:51.000 Sorry, we're trying here. Great. Yeah, thanks so much for all these Thunder Talks. 00:50:51.000 --> 00:50:57.000 We saw a lot of different talks about velocity models across different scales, and I was wondering if folks who mentioned velocity models, 00:50:57.000 --> 00:51:11.000 use of velocity models, or development of velocity models wanted to comment on how we should be merging the different models together, thinking about different scales and resolutions, and how we can 00:51:11.000 --> 00:51:20.000 incorporate the variety of models into our uncertainty and use them for simulations. 00:51:20.000 --> 00:51:31.000 I can provide my thoughts on that. You know, it's clear to me that, even including the surface wave data, we can't resolve the near surface well enough. 00:51:31.000 --> 00:51:41.000 And so, like other people, I want to be merging models, so you know the USGS model, especially their most recent one, does really well. 00:51:41.000 --> 00:51:53.000 And so I can imagine carving out basin structures down to the sediment-basement interface, embedding them in our tomographically based model and then updating it, 00:51:53.000 --> 00:52:00.000 keeping the basin part fixed, or trying to let it vary, although I don't think we'd resolve it. So that would make the tomographic model 00:52:00.000 --> 00:52:13.000 consistent with having the new basin stuck in it, and fit the data with that perturbation to the model 00:52:13.000 --> 00:52:23.000 added in. There are more clever ways of doing it that could also be explored, these sorts of model merging approaches that are being pursued by a variety of groups. 00:52:23.000 --> 00:52:28.000 So we'll try both: just plain simple stick-it-in, and doing a more sophisticated merging. 00:52:28.000 --> 00:52:42.000 Basically take advantage of what people have done that has really nailed down the near surface, the top kilometer or so. 00:52:42.000 --> 00:52:56.000 Yeah, could I add to that? I'm a big advocate now of waveform tomography because it inverts the data we're trying to predict, the actual ground shaking 00:52:56.000 --> 00:53:08.000 waveforms, and it's very sensitive to shear waves. But the downside is it's difficult to fit the high frequencies and resolve the near surface. 00:53:08.000 --> 00:53:20.000 So an approach is to use large-scale waveform tomography: Claire Doody has a model of California and Nevada that fits periods of 12 s, and then we can use that as a starting model for a smaller region, 00:53:20.000 --> 00:53:31.000 as I'm doing, as I presented, and we can reduce that to, you know, hopefully the low single digits 00:53:31.000 --> 00:53:38.000 of seconds of period. And then that provides a model of the deep structure that will predict 00:53:38.000 --> 00:53:54.000 the arrivals of the surface waves that are so key at large scales. And I agree with Cliff that the challenge is going to be the near surface, and we might have to work on a layering approach, you know, to emplace 00:53:54.000 --> 00:54:10.000 the geotechnical layer and, you know, the upper few kilometers of the crust.
And then test it and invert on it to make sure we get something that's consistent with the data. 00:54:10.000 --> 00:54:22.000 So one aspect is gonna have to be having essentially a multi-resolution grid. You can't invert all of Northern California, say, with 10-meter node spacing everywhere. 00:54:22.000 --> 00:54:28.000 So there has to be some cleverness built into that. 00:54:28.000 --> 00:54:37.000 Yeah, but if Scott is gonna run CyberShake, you know, with a 300-kilometer or 500-kilometer long rupture, it would be nice to have a single model 00:54:37.000 --> 00:54:43.000 that, you know, is consistent across the domain for which you're trying to do the ground motion. 00:54:43.000 --> 00:54:52.000 And I think waveform tomography gives us an opportunity to do that. 00:54:52.000 --> 00:54:58.000 Great, thank you. We've got a couple more hands raised. Charles Scawthorn was first. 00:54:58.000 --> 00:55:05.000 Thank you. My question is for Mark Benthien. Mark, the toolkit looks very useful for mitigation. 00:55:05.000 --> 00:55:13.000 Thank you for that. I noticed in your table that it was blank across the board for fire following earthquake. 00:55:13.000 --> 00:55:21.000 So I'm just curious if you can comment on that. Thank you. 00:55:21.000 --> 00:55:35.000 We actually do have a mitigation suggestion idea on there. It may have been covered in what you saw, or you may not have seen it, and we have a discussion tool for it, 00:55:35.000 --> 00:55:36.000 Thank you. 00:55:36.000 --> 00:55:39.000 for fire after the earthquake. So if you go to EarthquakeCountry.org/haywired, you can look for that tool. 00:55:39.000 --> 00:55:43.000 Thanks. Sorry, I missed it. Thank you so much. 00:55:43.000 --> 00:55:44.000 No worries. 00:55:44.000 --> 00:55:48.000 Thanks, and Elizabeth Cochran has her hand up. 00:55:48.000 --> 00:56:00.000 Yeah, thanks, Josie. I had just a question about, I noticed there was a lot of small-scale spatial variability in the stress orientations and the fault slip stability. 00:56:00.000 --> 00:56:10.000 I was wondering how much of that do you think is sort of scatter in the results and how much of that is real, and what might that be attributed to? 00:56:10.000 --> 00:56:13.000 Thanks for a great talk, too. 00:56:13.000 --> 00:56:16.000 Thank you for your question. Yes, we did check that by perturbing the focal mechanisms 00:56:16.000 --> 00:56:30.000 based on their uncertainty and performing the stress inversion again, and most of the grids have an uncertainty of less than 10 degrees. 00:56:30.000 --> 00:56:36.000 And in the meantime, we also calculated, for those focal mechanisms we used in the stress inversion, 00:56:36.000 --> 00:56:50.000 whether those focal mechanisms are all optimally oriented or not, the misfit angle between the best slip direction and the slip direction of the focal mechanism. 00:56:50.000 --> 00:56:58.000 And we find the mean slip deviation is about 30 degrees. Most of them are around 20 to 50 degrees, which is caused by real processes like the local spatial and temporal stress heterogeneity in that grid. 00:56:58.000 --> 00:57:19.000 So which means, yeah, the real process perturbation may affect it more, and even though the focal mechanisms have large uncertainty, once we have a large group of them and do the stress inversion, 00:57:19.000 --> 00:57:27.000 yeah, the uncertainty is less than 10 degrees, which is pretty reliable. I hope that answers your question.
00:57:27.000 --> 00:57:30.000 Thanks. Yeah. 00:57:30.000 --> 00:57:38.000 I have another question for you, which is, you mentioned stress release being reflected in sort of the Eastern California Shear Zone. 00:57:38.000 --> 00:57:50.000 I was curious if it would be, you know, plausible, and maybe if you've tried this, to look at different time intervals before and after major earthquakes to see that transition. 00:57:50.000 --> 00:58:00.000 Yes, we really hope to, but actually these major earthquake sequences have limited foreshocks, especially in the several years before the mainshock. 00:58:00.000 --> 00:58:11.000 So it's really hard to compare, but indeed people do find this stress rotation when comparing with the earthquakes after the mainshocks. 00:58:11.000 --> 00:58:37.000 But this kind of phenomenon will be more obvious when monitoring at a small scale, like with an increased seismicity rate; it can be easier to see before and after fluid injection, with a lot of microearthquakes, so it can be more obvious. But for these kinds of major earthquakes, the foreshocks are quite limited during the interseismic time period. 00:58:37.000 --> 00:58:40.000 Yeah, some of them are pretty quiet. 
00:58:40.000 --> 00:58:49.000 That makes sense. Okay, I also wanted to ask Guo Yang Lee a question. Ruth Harris 00:58:49.000 --> 00:58:58.000 had asked whether the earthquakes were relocated, and I think you answered that the ones that you had shown were not. 00:58:58.000 --> 00:59:19.000 I was also curious if you had compared the fault orientations, mostly their dip, that you've seen in the tomography with other models or inversions of fault geometry, such as from Plesch et al., just to see if you're seeing the same sort of changes in dip along strike 00:59:19.000 --> 00:59:21.000 for the Ridgecrest faults. 00:59:21.000 --> 00:59:26.000 You mean the dip angle of the fault? 00:59:26.000 --> 00:59:29.000 Yes. In the dip direction. 00:59:29.000 --> 00:59:40.000 Yes, so I think for the Ridgecrest question, the dip angle, you know, with our model we still need to do that kind of work. 00:59:40.000 --> 00:59:45.000 So no, I haven't yet. 
00:59:45.000 --> 00:59:51.000 Okay. So we are just about at time. I thought I would just 00:59:51.000 --> 01:00:06.000 also ask Scott Marshall, as we're kind of extending the community models statewide, if there are, you know, lessons learned from doing this in Southern California that could maybe expedite the process, 01:00:06.000 --> 01:00:13.000 kind of help us produce these community models a little more effectively up here. 01:00:13.000 --> 01:00:20.000 Hmm, lessons learned. Interesting question. Well, I guess there are a couple of lessons. 01:00:20.000 --> 01:00:25.000 One of them is pretty obvious, which is don't duplicate efforts. We already have a lot of data that exist for Northern California. 01:00:25.000 --> 01:00:30.000 For example, the Bay Area has some three-dimensional fault surfaces. We have the Qfaults database. 01:00:30.000 --> 01:00:33.000 We have the National Seismic Hazard Model. So we're definitely gonna be using those resources as we build these 01:00:33.000 --> 01:00:45.000 faults northward. The other thing I would say that's been a really useful but complicated lesson learned is the importance of community evaluation. 
01:00:45.000 --> 01:00:53.000 A lot of models that people produce are considered community models because lots of people worked on the code or something like that. 01:00:53.000 --> 01:01:00.000 But the Community Fault Model, I mean, the metadata for it has hundreds and hundreds of references. 01:01:00.000 --> 01:01:18.000 So there's not one single person that can really say they know every fault in California. So what we do periodically is community evaluation: just about a year and a half ago we had a community evaluation of the model, and it's clear that there's not uniform agreement on all faults in Southern California, surprise. 01:01:18.000 --> 01:01:25.000 But it's important to have these evaluations, so when there is disagreement, we can make alternative representations. 01:01:25.000 --> 01:01:29.000 So hopefully that kind of answers your question. 01:01:29.000 --> 01:01:36.000 Yeah, it's great. We're looking forward to more community models up here, so that's great. 01:01:36.000 --> 01:01:43.000 Yeah, so we need more expertise in the North. If people are interested in helping out, we're forming evaluation groups to sort of look at small pieces of the Northern California model that we've put together. 01:01:43.000 --> 01:01:53.000 So, email me, let me know: marshallst@appstate.edu. 
01:01:53.000 --> 01:01:57.000 Great. Well, thanks again to our speakers. We are, looks like, a couple of minutes over, so I'll turn it back to Sarah, I believe, 01:01:57.000 --> 01:02:06.000 who will release us for a break. Alright. 01:02:06.000 --> 01:02:14.000 Thank you so much, Josie, and all of our wonderful speakers in Thunder Talks, 2 of 3. 01:02:14.000 --> 01:02:15.000 So, right now we get to have another little break, unless you speak in our next session. 01:02:15.000 --> 01:02:21.000 So everybody, please enjoy 15 minutes, unless you are Hoshiva, Husco, Cochran, Edwards, Boston, McBride, or Sandals, in which case stick around and let's get you set up. 01:02:21.000 --> 01:02:35.000 See the rest of you after the break.