WEBVTT 00:00:00.000 --> 00:00:04.000 [noise] 00:00:04.000 --> 00:00:12.000 Hello, I hope you all had a nutritious and fortifying lunch, or whatever meal works for the time zone that you are in, and welcome back. 00:00:12.000 --> 00:00:16.000 It is now time to start the Thunder Dome of Thunder Talks. 00:00:16.000 --> 00:00:22.000 These are the rapid back-to-back-to-back-to-back-to-back 4-minute talks, and then we will do Q&A at the end, but please feel free to start putting comments or clarifications in the chat as we get going. 00:00:22.000 --> 00:00:35.000 Otherwise, everything will be at the end. So please let us welcome the magnificent Grace Parker to be our moderator. 00:00:35.000 --> 00:00:40.000 Thanks, Sarah. Thanks, everyone, for attending our workshop's first "Thunder Talk" session. 00:00:40.000 --> 00:00:53.000 This session will have 12 4-minute talks that will be delivered live. And if all goes as planned, there'll be about 10 minutes left for Q&A at the end, but as Sarah mentioned, please take advantage of the chat. 00:00:53.000 --> 00:01:02.000 And just note that Dominique has to jet to teach a class and will not be attending the Q&A session. 00:01:02.000 --> 00:01:16.000 So if you have questions for her, put them in the chat or email her directly. With that, I think we can get started. 00:01:16.000 --> 00:01:21.000 Okay, thank you everyone. I'm Arben Pitarka at Lawrence Livermore National Lab. 00:01:21.000 --> 00:01:28.000 I'm really glad to participate in this workshop; I always enjoy these kinds of interactions. 00:01:28.000 --> 00:01:35.000 So as the title shows, I'm gonna just quickly introduce you to recent work that we are 00:01:35.000 --> 00:01:49.000 currently doing. The objective of this study is to test the performance of the deterministic approach that we're using to simulate strong ground motion in the Bay Area, 00:01:49.000 --> 00:02:06.000 using the 3D models of the Bay Area developed by the USGS, including topography. So the objective here is not only to kind of convince ourselves that the technique is working, 00:02:06.000 --> 00:02:15.000 but also to test the performance of the models that we are using for generating a rupture model and also the wave propagation models. 00:02:15.000 --> 00:02:28.000 So all these pictures that I'm showing here are just to demonstrate to you the major steps in the technique, so we start with the Graves and Pitarka rupture model. 00:02:28.000 --> 00:02:41.000 We use SW4, which is a finite difference simulation code that uses a deterministic approach in terms of handling the wave propagation. In these simulations we use 00:02:41.000 --> 00:02:47.000 a curvilinear mesh and mesh refinement to model the near-surface topography 00:02:47.000 --> 00:03:05.000 and the complexities in the velocity model. For this study we used the slip model proposed by Dave Wald as a target to generate our rupture model, which as you can see is dominated by two large slip patches. 00:03:05.000 --> 00:03:21.000 We use the most recent version of the velocity model proposed by the USGS, version 21.1, and in previous analyses we have shown that this model works relatively well up to 5 Hz when we try to model local earthquakes. 00:03:21.000 --> 00:03:31.000 And this is the first time for us to test it on an extended fault, and we're lucky to have several types of supercomputers in our lab; 00:03:31.000 --> 00:03:35.000 in this case, I am showing one of them.
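As a rough illustration of the Fourier-amplitude goodness-of-fit comparison described in the next part of the talk, a minimal Python sketch (assuming hypothetical arrays of recorded and simulated accelerograms at the 30 stations; not the actual SW4 post-processing) might look like:

    import numpy as np

    def fourier_amplitude(acc, dt):
        """Fourier amplitude spectrum of an acceleration time series."""
        freqs = np.fft.rfftfreq(len(acc), d=dt)
        amps = np.abs(np.fft.rfft(acc)) * dt
        return freqs, amps

    def fas_bias(recorded, simulated, dt):
        """Mean log residual ln(simulated/recorded), averaged over stations.

        recorded, simulated: lists of equal-length acceleration arrays,
        one pair per station (hypothetical inputs). A value near zero at
        a given frequency means negligible bias."""
        residuals = []
        for rec, sim in zip(recorded, simulated):
            _, a_rec = fourier_amplitude(rec, dt)
            _, a_sim = fourier_amplitude(sim, dt)
            residuals.append(np.log(a_sim / a_rec))
        return np.mean(residuals, axis=0)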
00:03:35.000 --> 00:03:43.000 The last slide in my presentation is just to demonstrate how well we do in fitting the recorded motions. 00:03:43.000 --> 00:03:56.000 I want to remind you that we were also using the surface topography. So the synthetics are plotted against the recorded motions, and I have shown here time series and the Fourier amplitude spectra on the left 00:03:56.000 --> 00:04:26.000 for certain stations, and on the right I show the goodness-of-fit (GOF) of the Fourier amplitude spectra averaged over 30 stations. On the left we have how we do when we include topography, and as you can see, for all three components the model for these 30 stations works relatively well, which means that the bias we get is very negligible. We were also curious to know what happens if we neglect the topography, and as you can see on the right, 00:04:28.000 --> 00:04:40.000 by removing topography, we kind of diminish a little bit the amplitude of the simulated ground motion, but that only happens at maybe frequencies beyond 3 to 4 Hz. 00:04:40.000 --> 00:04:58.000 So overall, the conclusion is that, except for certain areas where other people at the USGS or our point-source simulations have shown that the velocity model needs some improvement, this is an extremely good velocity model. 00:04:58.000 --> 00:05:16.000 It's impressive how well it works over a broad frequency range. This simulation also reveals that there may be some room for improvement, not only at the basin edges but also downtown, where we know that the complexities are not very well represented in the current model. 00:05:16.000 --> 00:05:25.000 So we'll use these kinds of simulations to guide the people that we're working with at the USGS on future improvements of the model. 00:05:25.000 --> 00:05:39.000 Thank you. 00:05:39.000 --> 00:05:44.000 Hello, I am Michael Barrall from the USGS. 00:05:44.000 --> 00:05:53.000 In 2021, we published dynamic rupture simulations of large scenario earthquakes on the Hayward, Rogers Creek, and Calaveras faults 00:05:53.000 --> 00:06:02.000 using 3D velocity structure and fault geometry from the Bay Area velocity model. For simplicity, we used linear elastic material properties. 00:06:02.000 --> 00:06:08.000 But elastic models can have unrealistically high slip rates and ground motions. 00:06:08.000 --> 00:06:16.000 For example, this simulated earthquake starts on the Rogers Creek fault and propagates southward through the Hayward and Calaveras faults. 00:06:16.000 --> 00:06:28.000 On the top is a plot of the rupture front contours at one-second intervals. On the bottom, we are plotting PSA on the vertical axis as a function of distance from the rupture on the horizontal axis 00:06:28.000 --> 00:06:35.000 for periods of 3 and 7.5 seconds. Each blue dot represents one point on the Earth's surface. 00:06:35.000 --> 00:06:43.000 The three curves are the PSA predicted by the BSSA14 ground motion model: the median and plus or minus one standard deviation. 00:06:43.000 --> 00:06:51.000 At the 3-second period, the simulated PSA is much higher than the predicted values. At 7.5 seconds, it is somewhat higher. 00:06:51.000 --> 00:06:57.000 These high ground motions occur because elastic materials cannot dissipate energy. 00:06:57.000 --> 00:07:04.000 One way to add energy dissipation is to use viscoplasticity, but that creates many complications. 00:07:04.000 --> 00:07:13.000 Last year we proposed a much simpler way to add energy dissipation.
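A schematic of what such a rate-dependent damping term added to a fault friction law could look like, in Python, is sketched here as context for the description that follows (a hedged sketch only; the slip-weakening friction, damping coefficient c_damp, and exponent p are placeholders, not the formulation used in the study):

    def fault_strength(slip, slip_rate, mu_s=0.7, mu_d=0.5, d_c=0.4,
                       sigma_n=50e6, c_damp=5e6, p=2.0):
        """Frictional strength (Pa) with an added nonlinear damping term.

        The first part is ordinary linear slip weakening; the second part,
        c_damp * slip_rate**p, is negligible at low slip rates but opposes
        rapid slip at high slip rates (placeholder functional form)."""
        mu = max(mu_d, mu_s - (mu_s - mu_d) * slip / d_c)
        return mu * sigma_n + c_damp * slip_rate**p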
It works by adding a nonlinear radiation damping term to the friction law, as shown here. 00:07:13.000 --> 00:07:25.000 The radiation damping is a nonlinear function of slip rate. At low slip rates this term doesn't do much, but at high slip rates it produces a force that opposes rapid slip. 00:07:25.000 --> 00:07:32.000 We can demonstrate the effectiveness of nonlinear radiation damping in this example with a planar vertical strike-slip fault. 00:07:32.000 --> 00:07:38.000 Here we are plotting slip rate as a function of time at the four locations on the fault marked by yellow stars. 00:07:38.000 --> 00:07:46.000 Figure 1 has elastic material properties. The four colors, black, red, green, and blue, correspond to the four locations. 00:07:46.000 --> 00:07:54.000 The peak slip rates increase almost linearly with increasing distance from the hypocenter. Figure 2 uses viscoplasticity. 00:07:54.000 --> 00:08:05.000 The peak slip rates are lower and tend toward leveling off rather than continuing to increase. Figure 3 is elastic again, but now with nonlinear radiation damping added to the friction law. 00:08:05.000 --> 00:08:12.000 It looks almost identical to Figure 2, demonstrating that radiation damping can produce effects similar to viscoplasticity, 00:08:12.000 --> 00:08:23.000 but without the complications. Figure 4 also has nonlinear radiation damping. It uses stronger parameters and so reduces the slip rates even more. 00:08:23.000 --> 00:08:31.000 Finally, we return to our simulated Hayward Fault earthquake, this time with nonlinear radiation damping added to the friction law. 00:08:31.000 --> 00:08:37.000 Now the PSA values are reduced so they agree well with the BSSA14 ground motion model. 00:08:37.000 --> 00:08:45.000 We had to make the radiation damping depth dependent, with strong damping near the Earth's surface and weak damping at depth. 00:08:45.000 --> 00:08:55.000 This lets us suppress the 3-second period more so than the 7.5-second period. This works because the 3-second period is more sensitive to the shallow part of the fault. 00:08:55.000 --> 00:09:10.000 Thank you. 00:09:10.000 --> 00:09:23.000 So I will show you recent work on data-driven nonlinear site response. This work is led by my PhD student Flora Xia in collaboration with other members of my group [excuse me], 00:09:23.000 --> 00:09:42.000 my postdoc Grigorios (Greg) Lavrentiadis and my grad student, Yaozhong Shi. 00:09:42.000 --> 00:09:45.000 So, funded by SCEC and the USGS over the past 10 years, we have developed PySeismoSoil, 00:09:45.000 --> 00:10:05.000 which is an open-source modeling environment for nonlinear site response: given a ground motion time series on rock, Vs30, and the Z1.0 bedrock depth, we get as output the ground motion with nonlinear site response, produced in the time domain. 00:10:05.000 --> 00:10:13.000 Now the environment comprises three modules, and these are, in detail: the Sediment Velocity model, which is a statistical model developed using geotechnical data from sedimentary sites in California 00:10:13.000 --> 00:10:26.000 that takes as input the Vs30 and the Z1.0 and produces a smooth shear-wave velocity profile that stops at Z1.0. 00:10:26.000 --> 00:10:30.000 The second module is 00:10:30.000 --> 00:10:37.000 the Hybrid Hyperbolic model, which, given a velocity profile, computes basically all the nonlinear input parameters.
00:10:37.000 --> 00:10:48.000 And the third module is a finite difference nonlinear site response solver that discretizes the velocity profile and solves the nonlinear site response in the time domain. 00:10:48.000 --> 00:10:58.000 Now we have used PySeismoSoil to train a Fourier Neural Operator to solve the wave equation for soft soil sites that experience nonlinearity. 00:10:58.000 --> 00:11:23.000 And we have done so by generating realizations of the Sediment Velocity model, using the Hybrid Hyperbolic model to automatically generate the nonlinear model parameters, and then using a series of input ground motions on rock that span a wide range of magnitudes and distances to generate thousands of nonlinear site response analyses. 00:11:23.000 --> 00:11:42.000 Neural operators generalize classic neural networks to maps between infinite-dimensional spaces, and as such they can be trained on coarse spatiotemporal problems, here for example 64x64, but they can be evaluated at much finer spatiotemporal resolution, 00:11:42.000 --> 00:12:05.000 for example, on finer meshes and at higher frequencies in our case. So given a Neural Operator, given an input Vs30 and Z1.0, which implies through the Sediment Velocity model a shear-wave velocity profile that stops at Z1.0, and an input time series, 00:12:05.000 --> 00:12:15.000 it can produce the output ground motion with weak nonlinearity, but also with very strong nonlinearity, with very good accuracy up to about 10 Hz. 00:12:15.000 --> 00:12:33.000 And this is very, very recent work, which is also in progress. We believe that this work can be applied to large-scale simulators that do not account for nonlinear site response yet, and we're hoping to test our model and evaluate improvements in the 1 to 10 Hz regime compared to linear 00:12:33.000 --> 00:12:38.000 elastic simulations, with respect to historical events as well as ground motion models. PySeismoSoil 00:12:38.000 --> 00:12:57.000 is available to download. And stay tuned for the machine learning features. Thank you! 00:12:57.000 --> 00:13:12.000 Hi, I'm Irene Liou from UC Davis, and I'm presenting on aleatory variability and epistemic uncertainty for real-world application of ground-motion simulations in physics-based probabilistic seismic hazard analysis (PSHA). 00:13:12.000 --> 00:13:20.000 So for the purpose of this research, numerical simulations are increasingly used as ground motion models in PSHA. 00:13:20.000 --> 00:13:33.000 And it's important that aleatory and epistemic components are identified in the overall context of seismic hazard, which is the annual rate of ground motion exceedance from modeling random future earthquakes. 00:13:33.000 --> 00:13:47.000 So to provide some background, in the context of ground motion simulations for PSHA, aleatory variability is variability from physical effects unmodeled in the hazard that are treated as randomness in the model. 00:13:47.000 --> 00:14:01.000 Epistemic uncertainty is scientific uncertainty in the earthquake effects modeled in the hazard. So aleatory and epistemic uncertainty require different treatment and are reduced in different ways. 00:14:01.000 --> 00:14:11.000 Aleatory variability is reduced through selection of a more complex model, and epistemic uncertainty is reduced through additional data collection and improved modeling.
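As a schematic numerical illustration of that distinction, using the hypocenter sampling and alternative velocity models discussed later in the talk as the parametric examples (made-up ln(SA) values, not results from the study):

    import numpy as np

    # Hypothetical ln(SA) values: rows are two alternative velocity models,
    # columns are forward simulations that sample hypocenter location and slip.
    ln_sa = np.log([[0.31, 0.42, 0.28, 0.55, 0.37],   # velocity model A (placeholder)
                    [0.26, 0.35, 0.24, 0.47, 0.31]])  # velocity model B (placeholder)

    # Parametric aleatory variability: spread from inputs that are sampled in
    # the simulations but not explicitly modeled in the hazard calculation.
    parametric_aleatory_sigma = ln_sa.std(axis=1).mean()

    # Parametric epistemic uncertainty of the median: shift in the median
    # ln(SA) between the alternative velocity models.
    parametric_epistemic_of_median = abs(np.median(ln_sa[0]) - np.median(ln_sa[1]))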
00:14:11.000 --> 00:14:22.000 So there are two components I'll talk about in reference to the framework. The model component, in orange, is in reference to the algorithm or basic assumptions. 00:14:22.000 --> 00:14:31.000 Model aleatory variability is unexplainable variation from unmodeled effects as well as from simplifications in the methodology. 00:14:31.000 --> 00:14:43.000 It's calculated through comparing predictions and observations, while model epistemic uncertainty is scientific uncertainty in the center and range from use of the methodology. 00:14:43.000 --> 00:14:50.000 The parametric component, here in the green column, is explainable variability calculated through forward realizations 00:14:50.000 --> 00:15:00.000 outside of the hazard. It is explainable because it comes from inputs that are simulated in the ground motion model but not explicitly modeled in the hazard calculation. 00:15:00.000 --> 00:15:12.000 So I'll provide examples below. Parametric aleatory variability is calculated by forward modeling that samples the range of parameters for the simulated effects not in the hazard. 00:15:12.000 --> 00:15:24.000 Parametric epistemic uncertainty of the median and parametric epistemic uncertainty of the aleatory variability are differences in the median and standard deviation due to different models of inputs. 00:15:24.000 --> 00:15:34.000 So next I'll provide examples of use of the framework. The model aleatory variability, in the darker orange, results from the methodology. 00:15:34.000 --> 00:15:42.000 So for a deterministic calculation, this would be assumed to be 0 because there is no variability from unmodeled effects, 00:15:42.000 --> 00:15:52.000 so all earthquake effects would be modeled. However, for a stochastic calculation, the model aleatory variability would need to be calculated as non-zero 00:15:52.000 --> 00:16:04.000 and calculated through comparisons with data. So for parametric component examples, the hypocenter location can be an input to simulations; however, it is not a part of the hazard. 00:16:04.000 --> 00:16:16.000 So the distribution of the hypocenter location we need to define and sample, which would lead to parametric aleatory variability and parametric epistemic uncertainty of the aleatory variability in the light green, 00:16:16.000 --> 00:16:27.000 and the same situation applies for the slip distribution. For the 3D velocity structure, this can be an input to simulations, but it's also not modeled in the hazard, 00:16:27.000 --> 00:16:30.000 so this would lead to parametric epistemic uncertainty of the median in dark green, because there is a true velocity model that is unchanging, 00:16:30.000 --> 00:16:46.000 so alternative velocity structures should be sampled. To conclude, it's important to identify the aleatory variability and epistemic uncertainty components of numerical simulations because they require different treatment. 00:16:46.000 --> 00:16:54.000 And the parametric category becomes relevant and required because earthquake effects are modeled in the simulation but not in the overall hazard. 00:16:54.000 --> 00:17:06.000 Thank you. 00:17:06.000 --> 00:17:15.000 Good afternoon. I'm Albert Kottke at PG&E. Today I'm gonna talk about probabilistic earthquake scenarios, ground shaking, and ground deformation in the SF Bay Region. 00:17:15.000 --> 00:17:23.000 So before we jump into that, I want to talk about risk attitude.
Risk is how we measure the impact of earthquakes on our infrastructure, and not all losses are the same; 00:17:23.000 --> 00:17:31.000 it depends on the risk attitude. At PG&E, we quantify risk using a multi-attribute risk score, which considers safety, financial, and reliability impacts. 00:17:31.000 --> 00:17:45.000 And we use risk aversion scaling, which means that larger consequences have an inflated risk score. Risk aversion scaling changes the impact of earthquakes because larger earthquakes have a larger risk score associated with them, given their larger spatial 00:17:45.000 --> 00:17:55.000 extent. We've been working on a way to incorporate spatially distributed damage into our assessments, and we do that using the following approach. 00:17:55.000 --> 00:18:03.000 First, we develop a regional probabilistic seismic hazard analysis to quantify the seismic hazard over a region at grid points. 00:18:03.000 --> 00:18:09.000 And from that we select a series of seismic sources that are significant to that hazard. 00:18:09.000 --> 00:18:16.000 From those sources we develop a suite of, say, 300 maps of ground shaking and rates of occurrence. 00:18:16.000 --> 00:18:23.000 So every map is associated with a specific rate of occurrence, and these maps can share spatial and intensity-measure correlation. 00:18:23.000 --> 00:18:30.000 And at any point within the region, the maps and the rates of occurrence approximate the hazard curve. 00:18:30.000 --> 00:18:40.000 These maps are then also adjusted for site-specific conditions using Vs30. The maps can then be used in combination with secondary hazards like landslides and liquefaction. 00:18:40.000 --> 00:18:47.000 And they're also used in assessing asset performance, whether it's buildings or linear assets like pipelines 00:18:47.000 --> 00:19:00.000 or transmission lines. If we combine the rates of occurrence and the maps with the performance of the assets in them, we can then get annualized loss estimates. 00:19:00.000 --> 00:19:07.000 If we look at the impact of considering spatial correlation on our building stock, 00:19:07.000 --> 00:19:12.000 this is our corporate real estate buildings, we see a doubling in the risk score by considering spatial 00:19:12.000 --> 00:19:15.000 correlations. 00:19:15.000 --> 00:19:20.000 We've been looking at combining these maps with secondary effects like liquefaction and landslides, 00:19:20.000 --> 00:19:25.000 and to do that we need simple models over the region of interest. We've been working on liquefaction deformation maps, 00:19:25.000 --> 00:19:38.000 where we have proprietary and public databases of CPT and SPT tests. We use those in conjunction with maps of 3D geologic deposits to come up with 10x10 meter 00:19:38.000 --> 00:19:47.000 cells with unique geology, groundwater, and topographic conditions. For each of those cells, we come up with simple, spatially varying coefficients for use in the scenario-based evaluations. 00:19:47.000 --> 00:19:59.000 So here we have an example of the probability of liquefaction versus PGA for our geology-based model compared to the Hazus and Holzer 00:19:59.000 --> 00:20:02.000 models. 00:20:02.000 --> 00:20:10.000 We have that over the Bay Area, and we're working to expand that to larger areas, collecting more information as we go.
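A minimal Python sketch of the idea that a suite of scenario maps with assigned rates of occurrence approximates the hazard curve at any site (hypothetical shaking values and rates, not PG&E's actual implementation):

    import numpy as np

    def exceedance_rate(site_values, rates, threshold):
        """Annual rate that shaking at a site exceeds a threshold,
        approximated by summing the rates of the scenario maps whose
        value at that site exceeds the threshold."""
        site_values = np.asarray(site_values)
        rates = np.asarray(rates)
        return rates[site_values > threshold].sum()

    # Hypothetical suite of 300 maps: shaking at one site and per-map rates.
    rng = np.random.default_rng(0)
    site_values = rng.lognormal(mean=-1.5, sigma=0.6, size=300)  # e.g., PGA in g
    rates = np.full(300, 1.0e-4)                                 # annual rates
    # Sweeping the threshold traces out an approximate hazard curve at the site.
    hazard_curve = [(x, exceedance_rate(site_values, rates, x)) for x in (0.1, 0.2, 0.4)]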
00:20:10.000 --> 00:20:29.000 We use a similar process for the Bay Area Landslide Deformation Model, where we're using published values of strengths, but then back-calculating strengths based on performance during previous earthquakes as well as lack of movement during periods of high water or low water. 00:20:29.000 --> 00:20:41.000 And so this process is maximizing our posterior strength model, shown here, so that we're best fitting our observations. 00:20:41.000 --> 00:20:47.000 And finally, I'd like to thank the contractors and researchers that have been working on this with us. 00:20:47.000 --> 00:20:53.000 The earthquake landslide and liquefaction work has been done by Mike Greenfield and Chris Hitchcock. 00:20:53.000 --> 00:21:03.000 The spatially correlated probabilistic ground motion scenarios have been worked on by Melanie Walling and Nico Kuehn, and the building inventory risk assessment by Alidad Hashemi. 00:21:03.000 --> 00:21:18.000 Eventually we want to make these tools available for everyone to use when we get there. Thank you. 00:21:18.000 --> 00:21:33.000 Hi! Glad to be here. My name is Evan Reis. I'm the director of science for Safehub, and I'll be talking about how to use low-cost instrumentation for real-time earthquake damage assessment notifications. 00:21:33.000 --> 00:21:45.000 There's been a clear need for many years for a better tool to rapidly assess damage to buildings in the minutes, hours, and even days before engineers can actually get to sites. 00:21:45.000 --> 00:21:57.000 We've had cases where buildings have collapsed after people went back into them because they didn't see any obvious damage, only for the building to collapse in an aftershock. 00:21:57.000 --> 00:21:59.000 We've had cases where hospitals have evacuated patients, which in itself is very risky, 00:21:59.000 --> 00:22:19.000 when the damage hasn't been that severe, and most recently in Turkey, we've had cases where doctors and nurses refused to go back into buildings because they see damage that isn't necessarily structural. 00:22:19.000 --> 00:22:26.000 So, you know, MEMS-based accelerometers have obviously been around for many years; they're used in phones, 00:22:26.000 --> 00:22:33.000 they're used for shake table testing, but they haven't traditionally been installed in buildings for damage assessment. 00:22:33.000 --> 00:22:42.000 That's usually been the purview of much more expensive, more complex sensors that are very difficult to install. 00:22:42.000 --> 00:22:53.000 But now with the advent of these low-cost sensors, we're able to decrease the cost by an order of magnitude and increase by an order of magnitude the number of buildings that can be instrumented. 00:22:53.000 --> 00:23:02.000 For example, the California Geological Survey has instrumented around 200 buildings through its SMIP program over about 50 years. 00:23:02.000 --> 00:23:09.000 Safehub and other companies, I'm sure as well, have now instrumented about 1,000 buildings in 5 years because of that simplicity, 00:23:09.000 --> 00:23:22.000 and so that provides a lot more information for building owners directly. For example, in the Bay Area we have one client, a property manager with limited resources after an earthquake, who has instrumented all 60 or so of their buildings 00:23:22.000 --> 00:23:40.000 so that they know which ones they need to send their limited engineers and contractors to afterwards.
And in Southern California a pool of about 150 small cities have 00:23:40.000 --> 00:23:50.000 participated in instrumenting their buildings so that regional resources can be allocated where there is specific damage. 00:23:50.000 --> 00:24:06.000 The way the program works is basically that the sensors record the shaking and send that to the platform, where it's overlaid on a vulnerability function that's developed specifically for each building based on its age, structural system, and other parameters, 00:24:06.000 --> 00:24:17.000 and then an alert is generated based on the approximate estimate of building damage so that the building owners can decide whether to stay in the building, evacuate, or bring an engineer on. 00:24:17.000 --> 00:24:25.000 And so far, to date, Safehub has sent out over 1,000 alerts for earthquakes around the world. 00:24:25.000 --> 00:24:39.000 In addition, information such as the waveform data and floor accelerations, when there are sensors placed at different floors, can be of high use to structural engineers as they come 00:24:39.000 --> 00:25:00.000 out, especially to larger buildings, to figure out where to look first. And one of the things that's really exciting is collaboration with structural engineering firms and their business occupancy resumption programs to develop programs by which they can get the data and make better-informed decisions for their clients about whether to stay in buildings. 00:25:00.000 --> 00:25:06.000 We've been doing some testing and research with the University of California San Diego, which is very exciting, 00:25:06.000 --> 00:25:17.000 just to verify the quality of the sensors. And what we're most excited about is the ability to use these in parts of the world that have no ability to create a dense network on their own. 00:25:17.000 --> 00:25:26.000 So for example, Mexico City now has 60 sensors deployed, which will build a very dense network, and we're working to do that in other cities around the world as well. 00:25:26.000 --> 00:25:41.000 So thank you very much for the opportunity. 00:25:41.000 --> 00:25:46.000 Good afternoon, everybody. I'm Mike Greenfield, principal engineer of Greenfield Geotechnical. 00:25:46.000 --> 00:25:54.000 And I'd like to talk to you this afternoon about a remarkable absence of liquefaction from the 2014 South Napa earthquake. 00:25:54.000 --> 00:26:06.000 So following the GEER reconnaissance in the region surrounding the strong shaking of the earthquake, there was a comment in the report about a remarkable absence of liquefaction. 00:26:06.000 --> 00:26:13.000 Liquefaction or lateral spreading was only observed in two locations following the earthquake, both of which were along the Napa River. 00:26:13.000 --> 00:26:28.000 So comments like this tend to give me pause about why this would have occurred. Shaking from this earthquake was relatively intense, certainly sufficient to trigger liquefaction, 00:26:28.000 --> 00:26:36.000 and the geology around much of the area of strong shaking is mapped as soils that could potentially liquefy in an earthquake. 00:26:36.000 --> 00:26:45.000 So in conjunction with a number of owners around the area (PG&E, with Albert Kottke, is one of them), 00:26:45.000 --> 00:26:55.000 we've conducted multiple projects in recent years investigating regional-scale liquefaction hazards in the San Francisco Bay Area, 00:26:55.000 --> 00:27:01.000 and my question was, can these regional-scale analyses provide some insight into these historical observations?
So first, regional-scale analyses are different from site-specific studies; 00:27:01.000 --> 00:27:21.000 they incorporate a much greater amount of uncertainty than site-specific studies do. First, groundwater is uncertain. We need to know about groundwater in order to assess soils that could be susceptible to liquefaction. 00:27:21.000 --> 00:27:34.000 Likewise, soil behavior and cyclic resistance are very uncertain at a regional scale. These are the parameters used to determine if saturated soils are susceptible to liquefaction and then, if they are susceptible to liquefaction, the intensity of shaking that could trigger 00:27:34.000 --> 00:27:54.000 liquefaction. So by developing a comprehensive subsurface database, looking at engineering mechanics, and developing statistical models, we've been able to develop liquefaction models 00:27:54.000 --> 00:27:59.000 on a regional scale that match observations from recent earthquakes. 00:27:59.000 --> 00:28:02.000 The first step of this is to develop a groundwater model, so we did that using over 400,000 well observations. 00:28:02.000 --> 00:28:14.000 The basis is a simple finite difference model, something analogous to a homework problem in graduate school, that provides a mean function. 00:28:14.000 --> 00:28:22.000 And then we interpolate between the wells using Gaussian process regression. This includes both spatial and temporal dimensions, 00:28:22.000 --> 00:28:40.000 such that we can generate a continuous groundwater function at the time of the earthquake. So based on the well observations, groundwater was approximately 1.8 meters below the median at the time of the earthquake. 00:28:40.000 --> 00:28:53.000 Likewise, the data indicate that the soils in the vicinity of strong shaking tend to exhibit less sand-like behavior than they do elsewhere in the San Francisco Bay Area. 00:28:53.000 --> 00:29:02.000 So we collected data from over 22,000 SPT samples and 100,000 CPT observations and found that the Holocene alluvium of the North Bay 00:29:02.000 --> 00:29:10.000 typically contains a much smaller fraction of sand-like soils than elsewhere in the Bay Area. 00:29:10.000 --> 00:29:23.000 So based on these observations of very low groundwater and a higher-than-typical fraction of clay-like soils in the recent alluvium 00:29:23.000 --> 00:29:30.000 in the vicinity of the Napa earthquake, the observation that liquefaction was relatively limited 00:29:30.000 --> 00:29:34.000 was not actually that surprising. So even during normal conditions with median groundwater, the hazard is modest 00:29:34.000 --> 00:29:58.000 due to the prevalence of a lot of clay-like soils. However, comparing the groundwater conditions at the time of the earthquake to the median conditions, if groundwater had been at its average level, an additional 1.2 square kilometers of land area would have been subjected to moderate or higher liquefaction hazards and there would have been about 7.5 times more 00:29:58.000 --> 00:30:15.000 liquefaction features. Thank you for your time, and I'll hand it over. 00:30:15.000 --> 00:30:21.000 Hi everybody, I'm Ken Hudson with Hudson Geotechnics and UCLA, 00:30:21.000 --> 00:30:30.000 and I'm gonna talk to you guys now about uncertainty in the site-specific versions of these liquefaction models that Mike Greenfield 00:30:30.000 --> 00:30:42.000 was talking about; he covered the regional component before.
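A minimal sketch of the groundwater interpolation step Mike Greenfield described, fitting a Gaussian process in space and time to well observations after removing a mean function (synthetic data and generic kernel choices for illustration, not the actual regional model):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    # Synthetic well observations: easting (km), northing (km), time (yr).
    X = rng.uniform(0.0, 50.0, size=(200, 3))

    def mean_fn(x):
        # Stand-in for the simple finite-difference mean function.
        return 2.0 + 0.02 * x[:, 0]

    depth_obs = mean_fn(X) + rng.normal(0.0, 0.3, size=200)

    # Fit the GP to residuals about the mean function, with separate
    # spatial and temporal length scales plus observation noise.
    kernel = RBF(length_scale=[10.0, 10.0, 1.0]) + WhiteKernel(noise_level=0.05)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, depth_obs - mean_fn(X))

    # Continuous groundwater depth = mean function + GP residual, evaluated
    # at a location and date of interest (e.g., the time of the earthquake).
    x_new = np.array([[25.0, 25.0, 0.5]])
    depth_at_event = mean_fn(x_new) + gp.predict(x_new)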
So we've been developing new liquefaction triggering and manifestation models as part of the Next Generation 00:30:42.000 --> 00:30:48.000 Liquefaction, or NGL, project. And so first of all, to examine this, we want to look at how previous liquefaction models 00:30:48.000 --> 00:31:04.000 were developed and how the case history data sets are put together. So, after an earthquake, there are observations of liquefaction, 00:31:04.000 --> 00:31:21.000 whether they're sand boils or lateral spreading or settlements, and we go out and observe those in the field, and then we also put some in-situ tests, like borings or CPTs, next to them and also next to areas that didn't have liquefaction. 00:31:21.000 --> 00:31:33.000 But one of the biggest questions always is, well, which layer in the subsurface did these manifestations we see at the ground surface come from? 00:31:33.000 --> 00:31:40.000 And that's a big question that comes with a lot of judgment, and previous case history data sets were assembled 00:31:40.000 --> 00:31:50.000 in a way that the authors look at individual layers and do a few passes. First of all, you look at susceptibility, and susceptibility 00:31:50.000 --> 00:32:08.000 in itself is distinguishing between clay-like and sand-like behavior, and then once you identify all the sand-like behaving layers, you also want to look at, well, will it actually trigger given the amount of shaking that occurred and the expected resistance of that soil, 00:32:08.000 --> 00:32:09.000 and then you can use that. In previous models, it's always just one representative layer 00:32:09.000 --> 00:32:39.000 that is used and associated with the one surface manifestation, and then those are used to regress models. But this becomes very problematic when you have profiles that have many eligible candidates for a critical layer, and particularly when they might not agree with the observations at the site. For instance, this is a profile that did not have any evidence of liquefaction at the surface. 00:32:46.000 --> 00:32:49.000 And yet, if we look at the CPT and do some layering on it, we find multiple layers within it 00:32:49.000 --> 00:33:16.000 that actually plot well above historical triggering curves for liquefaction. So given the strong shaking at this site, we should have seen liquefaction, but we know from judgment that these are deep layers, so they probably wouldn't produce enough of a head to get through to the ground surface and create sand boils, and they're very thin, so they don't really create a lot of displacement when they're 00:33:16.000 --> 00:33:26.000 straining under that liquefied load. So what we can do is change it from being, well, we saw manifestation, so that means triggering occurred, and vice versa. 00:33:26.000 --> 00:33:37.000 We can go with a probabilistic approach and we can assign probabilities of layers triggering and then conditional probabilities of manifestation occurring given it does or doesn't trigger, 00:33:37.000 --> 00:33:44.000 and vice versa. And so, for an example from this profile, we can say that a lot of these deep layers, 00:33:44.000 --> 00:33:52.000 while they do have some probability of not triggering when no manifestation is observed, 00:33:52.000 --> 00:34:00.000 well, we observed no manifestation, but we can actually assign them a probability of triggering that is high, even though it didn't manifest. 00:34:00.000 --> 00:34:27.000 And so, that's just using Bayesian statistics.
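As a worked illustration of that Bayesian step, with made-up probabilities for a deep, thin layer like the ones just described (not values from the actual models):

    # Hypothetical inputs for one candidate layer:
    p_trigger = 0.8            # prior probability of triggering from the CSR/CRR comparison
    p_no_manif_given_t = 0.3   # deep, thin layer: may trigger yet show nothing at the surface
    p_no_manif_given_nt = 1.0  # an untriggered layer produces no manifestation

    # Bayes' rule: P(triggered | no manifestation observed)
    posterior = (p_no_manif_given_t * p_trigger) / (
        p_no_manif_given_t * p_trigger + p_no_manif_given_nt * (1.0 - p_trigger))
    print(round(posterior, 2))  # ~0.55: triggering remains plausible despite no manifestation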
So then, we can also move this from just a critical-layer framework into a profile-based framework, where we can consider the entire profile all at once and we can consider all of the factors at one time: susceptibility, triggering conditioned on susceptibility, and manifestation conditioned on triggering, with some saturation factors and some thickness corrections, 00:34:27.000 --> 00:34:37.000 and we can assemble it all for the entire profile at once. We can come up with laboratory-based priors for susceptibility and triggering, which we've done here, 00:34:37.000 --> 00:34:53.000 and then we can update all those in a Bayesian updating regression, where we come up with new triggering, new susceptibility, and new manifestation-given-triggering models out of the case history data sets, which is what we've done now. 00:34:53.000 --> 00:35:00.000 Thank you. 00:35:00.000 --> 00:35:13.000 Hello, I'm Emrah Yenier from Haley & Aldrich. I'm going to be talking about prediction of ground-motion mean period for Cascadia subduction earthquakes. 00:35:13.000 --> 00:35:20.000 Mean period is a scalar measure of ground-motion frequency content. It's calculated based on Fourier amplitudes. 00:35:20.000 --> 00:35:38.000 It was originally proposed by Rathje et al. in 1998. It's generally utilized for estimation of seismic demand in engineering applications such as slope stability, seismic earth pressures, or seismic response of structures. 00:35:38.000 --> 00:35:46.000 In this study we develop a ground motion model for estimation of mean period for Cascadia subduction earthquakes. 00:35:46.000 --> 00:36:01.000 For this we use Cascadia ground motions from the NGA-Sub database. There were 17 intraslab events with magnitudes generally between 4 and 7, and there were two interface events with magnitudes 4.7 and 4.9. 00:36:01.000 --> 00:36:07.000 Because of the sparse data, especially for interface events, we included in this study M9 00:36:07.000 --> 00:36:16.000 simulations, which included 30 alternative rupture scenarios of the Cascadia interface. 00:36:16.000 --> 00:36:42.000 Our model accounts for the effects of source, path, and site on mean period. Source and path effects are modeled as a function of moment magnitude and rupture distance; site effects have two components, the near-surface effects based on Vs30 and basin effects based on Z2.5, which is the depth to a shear-wave velocity of 2.5 km per 00:36:42.000 --> 00:36:44.000 second. 00:36:44.000 --> 00:36:59.000 So after performing the regression analysis, we examined the residuals for the data and the model for validation of the model performance. 00:36:59.000 --> 00:37:02.000 In the interest of time, I'm just going to focus on the comparison of median mean period estimates for hazard-significant scenarios for the 00:37:02.000 --> 00:37:15.000 Pacific Northwest. So we are looking at these comparisons for non-basin and basin sites 00:37:15.000 --> 00:37:26.000 in the left figure and the right figure, respectively. For crustal and intraslab events we consider magnitude 7, and for interface events magnitude 9 earthquakes. 00:37:26.000 --> 00:37:38.000 The mean period estimates for crustal events are based on a model proposed by Du in 2017, and the mean periods for intraslab and interface events are based on our model. 00:37:38.000 --> 00:37:49.000 So for non-basin sites, in the left figure, we observe that the mean periods are similar and comparable for all three event types.
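For reference, the mean period of Rathje et al. (1998) is commonly computed from the Fourier amplitude coefficients Ci at frequencies fi between about 0.25 and 20 Hz as Tm = sum(Ci^2 / fi) / sum(Ci^2); a minimal Python sketch (a generic implementation, not the study's code):

    import numpy as np

    def mean_period(acc, dt, fmin=0.25, fmax=20.0):
        """Mean period Tm (Rathje et al., 1998) of an acceleration time series."""
        freqs = np.fft.rfftfreq(len(acc), d=dt)
        amps = np.abs(np.fft.rfft(acc))
        band = (freqs >= fmin) & (freqs <= fmax)
        c2 = amps[band] ** 2
        return np.sum(c2 / freqs[band]) / np.sum(c2)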
00:37:49.000 --> 00:37:56.000 This also holds for basin sites for crustal and intraslab events. 00:37:56.000 --> 00:38:05.000 However, for interface events, we observe a significant influence of deep sediments on mean period. 00:38:05.000 --> 00:38:23.000 So the solid red line shows the mean period estimates for interface events at moderate sediment depths, and the dashed red line shows the same information for deeper sediment thicknesses. 00:38:23.000 --> 00:38:38.000 As you can see, the mean period estimates increase from about 0.5 seconds for crustal and intraslab events to between 1 and 2 seconds for interface events. 00:38:38.000 --> 00:38:59.000 This highlights the significance of basin effects on mean period for engineering applications. That concludes my slides. Thank you. 00:38:59.000 --> 00:39:05.000 Alright, good afternoon, everyone, and thank you for the opportunity to present today. My name is John Downs. 00:39:05.000 --> 00:39:10.000 I'm a PhD student at the Daniel J. Evans School of Public Policy and Governance at the University of Washington. 00:39:10.000 --> 00:39:16.000 [noise] I'll present just a few initial findings regarding the association between social vulnerability and outcomes which I theorized may be influenced by social vulnerability, 00:39:16.000 --> 00:39:27.000 which include past harm from earthquake experience, earthquake preparedness, and perceived self-efficacy through earthquake early warning systems. 00:39:27.000 --> 00:39:34.000 The data used in the study are from the 2021 and 2023 ShakeAlert baseline surveys for the U.S. West Coast. 00:39:34.000 --> 00:39:41.000 My advisor, Dr. Bostrom, and I are grateful to acknowledge funding for this work from the USGS through the National Science Foundation 00:39:41.000 --> 00:39:51.000 and excellent survey implementation by NORC. I'll skip a bit on the methodological details today, but I'm happy to answer any questions related to the methodology that you might have. 00:39:51.000 --> 00:39:57.000 I'd like to preface this presentation by saying that these findings are initial and subject to revision, 00:39:57.000 --> 00:40:02.000 and so I welcome any and all comments and feedback. 00:40:02.000 --> 00:40:22.000 The overall research question guiding this analysis was which aspects of social vulnerability affect earthquake impacts. From a policy perspective, I'm interested in to whom and where preventative resources may be most efficiently and justly allocated to prevent cascading downstream effects. 00:40:22.000 --> 00:40:27.000 I'd like to acknowledge the considerable literature and the debate surrounding social vulnerability and how it's measured. 00:40:27.000 --> 00:40:38.000 I acknowledge that some communities have been made vulnerable by historical and embedded systemic processes. For ease of analysis, I adopt the widely used CDC/ATSDR Social Vulnerability Index 00:40:38.000 --> 00:40:43.000 for this study, which is the basis of social vulnerability for the FEMA National Risk Index. 00:40:43.000 --> 00:40:52.000 Through a literature review, I theorize that the three questions from the ShakeAlert survey may be impacted by dimensions of social vulnerability, 00:40:52.000 --> 00:40:57.000 and so, as I previously mentioned, these are harm from past earthquakes, whether to oneself, one's family, or one's community, each individually 00:40:57.000 --> 00:41:06.000 specified; earthquake preparedness; and perceptions of self-efficacy from earthquake early warning.
00:41:06.000 --> 00:41:21.000 To complete this analysis, I conducted regressions on pooled February 2021 and 2023 survey data from California, Oregon, and Washington. We ended up with a sample size of over 7,000 observations, and these 00:41:21.000 --> 00:41:29.000 samples are representative of all three states, using NORC's AmeriSpeak sample supplemented with a Lucid sample. 00:41:29.000 --> 00:41:31.000 And at the end of the presentation, I'll provide a citation for the paper 00:41:31.000 --> 00:41:45.000 from 2022, which has a full, detailed presentation of the method. 00:41:45.000 --> 00:41:49.000 And so I'll go through this quickly for the sake of time, but these are 00:41:49.000 --> 00:41:59.000 some descriptive statistics that we use for the dependent variables, and I just want to point out the different scales used 00:41:59.000 --> 00:42:11.000 for these different measures. And so we see that, of those responding to having experienced an earthquake, relatively few had actually experienced harm at a personal, community, or family level. 00:42:11.000 --> 00:42:24.000 Preparing for an earthquake was also pretty uncommon. You can see that over 50% had prepared essential items, but this drops significantly for other activities, and 00:42:24.000 --> 00:42:31.000 households could select multiple activities, so the most popular response was actually having done none of these. I see 00:42:31.000 --> 00:42:34.000 I'm running short on time, so I'll move a bit quickly. And so these are some of the descriptive statistics of the independent variables 00:42:34.000 --> 00:42:43.000 from the SVI. And so we were able to construct, using the survey data, our own index basically. 00:42:43.000 --> 00:42:53.000 I'll move along to the findings since I'm running a bit short on time, but basically we used principal component analysis to 00:42:53.000 --> 00:43:06.000 construct our own index based on the SVI, and our initial findings indicate that vulnerability is actually negatively correlated with earthquake preparedness and household size is positively correlated. 00:43:06.000 --> 00:43:17.000 And so we'll get to this. So there's a positive correlation between ShakeAlert allowing others to help, 00:43:17.000 --> 00:43:23.000 or being able to help others nearby, and vulnerability. So we see this as evidence of social cohesion. Sorry for the rush. 00:43:23.000 --> 00:43:34.000 I'm happy to answer any questions, and thank you for the time here. 00:43:34.000 --> 00:43:50.000 Hi everyone, I'm Max Schneider. I'm a Mendenhall Postdoc at the USGS Earthquake Science Center in Moffett Field, and this is work I did with a summer intern, Bianca Artigas, on improving how we make aftershock forecast maps based on user needs. 00:43:50.000 --> 00:43:54.000 So let me introduce you to such a forecast map. So this map on the right shows the probability of strong shaking, 00:43:54.000 --> 00:44:08.000 so MMI level 6 or greater, in the week following a major earthquake, in this case the 2010 El Mayor-Cucapah earthquake in northern Mexico. 00:44:08.000 --> 00:44:19.000 We designed this map with a humanitarian response application in mind, and we made some specific choices with that in mind. 00:44:19.000 --> 00:44:32.000 So, we chose a color palette that starts with yellow and goes to brown. We chose to classify this continuous probability scale into 10 classes, specifically with that application in mind. 00:44:32.000 --> 00:44:44.000 We are forecasting for the next week; that's the duration.
And there are no additional map layers or additional information in the map title or caption. 00:44:44.000 --> 00:44:58.000 But we could make a million other choices for all of those different attributes of the map. And so we wanted to understand what users of this aftershock forecast information could really benefit from. 00:44:58.000 --> 00:45:02.000 And we wanted to understand how this might differ by country as well, in three different country contexts. 00:45:02.000 --> 00:45:11.000 So we held three workshops last year, in the U.S. in Menlo Park, in Mexico, and in El Salvador, with 25 to 30 participants per workshop, 00:45:11.000 --> 00:45:27.000 some of whom are at this workshop, so big thanks to them. We recruited participants from professions who are current or prospective users of aftershock forecasts. 00:45:27.000 --> 00:45:41.000 And so there's a wide range of professions that were at our workshops. What we learned is that user needs for our forecast products vary quite a bit by profession because there are different use cases. 00:45:41.000 --> 00:45:54.000 So the decisions that an emergency manager might make, informed by aftershock forecasts, are different from those that a critical infrastructure operator or the media, for example, might make after a bigger earthquake. 00:45:54.000 --> 00:46:05.000 User needs also varied with time since the mainshock. So they were different for the first day versus one month after the mainshock, and they vary to a smaller degree by country as well. 00:46:05.000 --> 00:46:20.000 And I encourage folks to look at the full results in our visualization-for-communication paper. So how do we then take those user needs that we elicited and translate them into a redesigned forecast map? 00:46:20.000 --> 00:46:29.000 So we've been prototyping different versions of such redesigned maps that use specific user needs that we learned in the workshops. 00:46:29.000 --> 00:46:34.000 On the right is an example of such a prototype. You'll notice first off that it's showing the forecast for a shorter duration, for the next day, 00:46:34.000 --> 00:46:46.000 as requested by multiple user groups. We were encouraged by most of our user groups to simplify our maps, and one way of doing this is by having a scale with fewer colors. 00:46:46.000 --> 00:47:00.000 So we went down to 8 colors in this prototype, and we're experimenting with even fewer colors in other maps. 00:47:00.000 --> 00:47:10.000 We're using shades of red to signify risk here, as requested by several user groups. We also changed some of the text on the map. 00:47:10.000 --> 00:47:23.000 So in the title, we added the maximum value from this map, so the maximum probability of strong shaking, because some user groups wanted worst-case-scenario information up front. 00:47:23.000 --> 00:47:29.000 We made the dates for the forecast explicit on the map. So this forecast is actually for 3 days into the sequence, and that's listed out here. 00:47:29.000 --> 00:47:51.000 And we added another layer of information on the map: the previous aftershocks that have occurred up until that third date of the sequence, as well as a gold star for the mainshock location and some temporal information about the mainshock too. 00:47:51.000 --> 00:47:54.000 This is all work in progress. We're prototyping different versions of this kind of forecasted shaking map aligned with specific user needs 00:47:54.000 --> 00:48:05.000 that we learned about in the workshops.
And so we're very open to your comments or any tomatoes you want to lob. 00:48:05.000 --> 00:48:16.000 So please send them in by email or in the chat. Thank you so much. 00:48:16.000 --> 00:48:27.000 Hi everyone, my name is Zahraa Saiyed, and I will be giving you a snapshot of my dissertation results on equity and social vulnerability in earthquake and wildfire mitigation policy. 00:48:27.000 --> 00:48:33.000 This research is motivated by the fact that built environments and social systems are deeply interconnected, 00:48:33.000 --> 00:48:51.000 creating risk and often disproportionate outcomes for certain populations following disasters. This was practitioner-centered research, based on some recent executive orders on embedding equity in federal government programs for disaster risk reduction and on local and state initiatives on the same, 00:48:51.000 --> 00:49:07.000 and really just a way to understand the challenges and opportunities we have in increasing equity and decreasing social vulnerability through the policies and programs we create. 00:49:07.000 --> 00:49:17.000 The research questions are: are equity and social vulnerability addressed in current disaster mitigation policy for earthquakes and wildfires, and are the concerns of socially vulnerable populations 00:49:17.000 --> 00:49:26.000 represented and considered when policy goals are defined, when the policy is enacted and implemented, and within the measurement and evaluation of a policy? 00:49:26.000 --> 00:49:41.000 This was a qualitative research study that interviewed experts and academics from around the country, from government, nonprofit, private, academic, and policy realms. 00:49:41.000 --> 00:50:01.000 We had a total of 37 participants for the study in the contiguous United States, with most being in the Western region and then the Southeast, Southwest, and Midwest, which was nice because we were able to overlap with where we have the highest wildfire and earthquake risk 00:50:01.000 --> 00:50:16.000 in the country. Moving on to the findings. These are aggregate findings from all the participants that were in this study, which found that there are no archetypal examples of equity-centered mitigation programs and policies 00:50:16.000 --> 00:50:21.000 for earthquake and wildfire risk reduction in the U.S. at the state, local, and federal level. 00:50:21.000 --> 00:50:34.000 There is a lack of attention to the concerns of socially vulnerable communities within the policy development, implementation, and evaluation processes, and many gaps and barriers exist in achieving equitable mitigation 00:50:34.000 --> 00:50:42.000 for community resilience, and it was expressed that although some jurisdictions, states, and the federal government are attempting to embed equity into mitigation, it is not happening at the scale that is needed, 00:50:42.000 --> 00:50:52.000 unfortunately, and the results were also kind of specified in six different categories listed here, but I won't go through them. 00:50:52.000 --> 00:51:02.000 I'm happy to share information if you're interested.
Policy recommendations that arose out of this research, and practitioner recommendations, 00:51:02.000 --> 00:51:19.000 include that local and state governments should create equity plans for disaster mitigation, guided by federal recommendations; that there should be better understanding of local and state level vulnerabilities 00:51:19.000 --> 00:51:30.000 to prioritize; prioritizing increased funding for community engagement prior to mitigation program development; and investing in community leaders and civic champions 00:51:30.000 --> 00:51:41.000 that could help bottom-up understandings of disaster risk reduction, as well as of the overlap with other factors that lead to community vulnerability, 00:51:41.000 --> 00:51:51.000 and also, ideally, mandating earthquake retrofits for the highest-risk buildings with a dedicated funding source, which I'm sure many of us would wish for as well. 00:51:51.000 --> 00:51:59.000 And with that I will end the presentation. The full research is available if you're interested; you can look it up or you can email me. 00:51:59.000 --> 00:52:04.000 And it includes explanatory factors, expanded policy recommendations, a design and equity evaluation framework for policies and programs, and a theory of change for equitable resilience. 00:52:04.000 --> 00:52:25.000 I thank you for your time. 00:52:25.000 --> 00:52:37.000 Thanks for your great talks. So it's 1:23 p.m. now, and we have until 1:30 p.m., about 7 minutes, for a Q&A session. 00:52:37.000 --> 00:52:43.000 And so, if you just gave a talk in that session, I invite you to turn on your cameras 00:52:43.000 --> 00:52:56.000 and be prepared to answer some questions. So, for the audience, please raise your hand or turn on your camera 00:52:56.000 --> 00:53:01.000 if you have any questions. 00:53:01.000 --> 00:53:09.000 And I have a whole list of questions for speakers. So maybe, while the audience is 00:53:09.000 --> 00:53:20.000 getting ready, I can ask one to Mike Greenfield, if he's still on. 00:53:20.000 --> 00:53:21.000 Yeah, hi. 00:53:21.000 --> 00:53:28.000 Hey, yeah, I really liked your comparison of the two liquefaction probability maps 00:53:28.000 --> 00:53:35.000 for the median groundwater level versus the level at the time of the Napa earthquake. 00:53:35.000 --> 00:53:47.000 Did you also make a comparison of that map for cases when the groundwater was higher, so the worst-case scenario, and what did that look like? 00:53:47.000 --> 00:53:53.000 We have not done that yet. That is something we would like to pursue in the future, absolutely. 00:53:53.000 --> 00:54:01.000 Another interesting observation would be what's the intersection of sea level rise due to climate change 00:54:01.000 --> 00:54:15.000 and the liquefaction hazard. In this case, the increase is significant, but it's relatively modest as the groundwater gets higher, due to the prevalence of a lot more clay soils 00:54:15.000 --> 00:54:24.000 in the North Bay than elsewhere in the San Francisco Bay Area, but certainly elsewhere, looking at different groundwater elevations 00:54:24.000 --> 00:54:32.000 is a key driver, or can be a key driver, of liquefaction hazards and I think really needs to be evaluated in greater detail. 00:54:32.000 --> 00:54:45.000 Thank you. It looks like we have a question, go ahead. 00:54:45.000 --> 00:54:52.000 I was just gonna ask, it seems to me, for the earthquakes we have up here on the North Coast, 00:54:52.000 --> 00:55:04.000 duration of shaking is one of the more important triggers of liquefaction.
It seems the shaking from a magnitude 6 00:55:04.000 --> 00:55:13.000 doesn't produce very much; even though it's intense shaking, it doesn't produce much liquefaction. 00:55:13.000 --> 00:55:17.000 Just a comment. 00:55:17.000 --> 00:55:22.000 Mike and maybe Ken? You wanna both respond to that comment? 00:55:22.000 --> 00:55:31.000 Yeah, I'd like to respond to that, specifically with respect to the loosest of soils. 00:55:31.000 --> 00:55:42.000 PGA and earthquake magnitude are the two key 00:55:42.000 --> 00:55:52.000 inputs into a liquefaction hazard analysis. With the loosest of soils, the duration of shaking is less of a driver, but it can be very significant. 00:55:52.000 --> 00:56:00.000 So yes, it's incorporated into our analyses and can be somewhat informative. 00:56:00.000 --> 00:56:11.000 Yeah. I'll say that it is very important and it is considered: the cyclic stress ratio is corrected for magnitude to a normalized magnitude of 7.5. 00:56:11.000 --> 00:56:16.000 So that's how all these liquefaction models are computed. 00:56:16.000 --> 00:56:24.000 You can compare that magnitude-corrected cyclic stress ratio against the cyclic resistance of the soil. 00:56:24.000 --> 00:56:32.000 Now, there's tons of uncertainty in those models and there's a lot more work to be done with them, but it should be accounted for. 00:56:32.000 --> 00:56:37.000 I don't know exactly how that's computed at the regional scale, but yeah. 00:56:37.000 --> 00:56:51.000 We actually utilize the site-specific equations and just increase the uncertainty using closed-form solutions. 00:56:51.000 --> 00:56:54.000 Thanks, Mike and Ken. Josie, do you have a question? 00:56:54.000 --> 00:57:06.000 Yeah, I have a question for Michael. I really enjoyed your talk, and I'm excited about the new modeling with the nonlinear damping. 00:57:06.000 --> 00:57:21.000 I was curious whether you could comment on potential limitations of the approach, basically if we're interested in the distribution of deformation, and specifically plastic deformation, at the surface. 00:57:21.000 --> 00:57:33.000 You know, whether the elastic model with nonlinear damping can capture anything that's different from the elastic model in terms of the deformation field. 00:57:33.000 --> 00:57:43.000 Yeah, I haven't really looked at the deformation field. What I'll say is that the radiation damping acts at the fault surface only. 00:57:43.000 --> 00:57:58.000 It's only gonna be capturing effects that occur near the fault, so, you know, plastic yielding that happens far away from the fault is not going to be captured by radiation damping. 00:57:58.000 --> 00:58:05.000 Okay, thank you. 00:58:05.000 --> 00:58:15.000 I have a question for Max. I know some of these comments are in the chat. Max, can you 00:58:15.000 --> 00:58:19.000 comment on 00:58:19.000 --> 00:58:35.000 the relative merits of trying to combine a bunch of different users' needs into one product, versus the benefits or confusion of trying to issue multiple products or types of products, and whether you'd thought about that and 00:58:35.000 --> 00:58:39.000 kind of the trade-off happening there. 00:58:39.000 --> 00:58:51.000 Yeah. So one thing that we were able to study in our workshops, because we had so many different professions represented,
00:58:51.000 --> 00:59:01.000 is what the trade-offs could be when providing a product designed around, let's say, an engineering set of user needs 00:59:01.000 --> 00:59:07.000 versus those for emergency management versus those for public information officials and media, and that's something that we can explore in this more hypothetical space 00:59:07.000 --> 00:59:27.000 in the research literature. When we're talking about making operational products as an agency or, you know, more broadly, we're probably not gonna be able to make 00:59:27.000 --> 00:59:33.000 products for every user group, and so the other benefit of this kind of research is to understand where the overlaps are, 00:59:33.000 --> 00:59:47.000 and I think you need to understand how people are going to use, or have people say they would use, your products to best argue for effective overlaps that you could then use to design, 00:59:47.000 --> 01:00:00.000 and at least argue that it could be more universal across different user groups. So I do think that we should not have, you know, a different stripe and shape of forecast product for every possible user group, 01:00:00.000 --> 01:00:16.000 but I do think we can think of categories of product types that would be useful for multiple groups and have a small set of those products available. 01:00:16.000 --> 01:00:17.000 I think that's a balance. 01:00:17.000 --> 01:00:23.000 Thank you. Yeah. Okay, so it's 1:30 p.m., and unless there is a 01:00:23.000 --> 01:00:33.000 burning question from the audience, I think I will hand it back over to Sarah. 01:00:33.000 --> 01:00:45.000 Hello everyone, and thank you so much for an excellent, excellent, excellent first round of Thunder Talks. We are going to take a brief 15-minute intermission and then come back to 01:00:45.000 --> 01:00:52.000 have yet more Thunder Talks. So everyone, please take a few minutes, unless you are in the next group of Thunder Talks, in which case, please stick around so we can get you set up. 01:00:52.000 --> 01:00:58.000 So that means [indiscernible list of speaker names], you know who you all are, please stay. 01:00:58.000 --> 01:01:11.000 Everyone else