``Simulation Tools to Support Performance-Based Earthquake Engineering Design''
Prof. George Turkiyyah
University of Washington, Department of Civil Engineering
Seattle, WA 98195
E-mail: george@ce.washington.edu
URL: http://www.ce.washington.edu/~george

Prof. Gregory L. Fenves
University of California, Berkeley, Pacific Earthquake Engineering Research Center
Berkeley, CA 94720-1710
E-mail: fenves@ce.berkeley.edu
URL: http://www.ce.berkeley.edu/~fenves/

The push towards more rational and mechanistic procedures for assessing the behavior of structures subjected to seismic loads has fueled the need for software tools that let engineers develop the simulation models used in such assessments. We briefly describe some characteristics of these simulations as they relate to distributed intelligent repositories and scientific problem-solving environments.

The simulations are multi-domain. Earthquake engineering simulations use information from basin-scale simulations, generally performed by geophysicists; detailed local nonlinear soil simulations, generally performed by geotechnical engineers; and nonlinear simulations of the elements of the structural system, generally performed by structural engineers. These simulations may be coupled, or the results of one may serve as a partial set of inputs and boundary conditions for the next. There is therefore a great need for tools that readily translate the output of one model into a form the next model can use. Such intelligent translators would go a long way towards more realistic, collaborative models.
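As a minimal sketch of what such a translator must do, consider passing a ground-motion record from a basin-scale simulation (coarse time step) to a site-response model that expects a finer, uniform time step. The function and data below are illustrative assumptions, not the interface of any existing tool:

```python
# Hypothetical translator sketch: resample a basin-scale ground-motion
# record onto the finer time step expected by a site-response model.
# Function name and record layout are invented for illustration.

def resample_history(times, values, dt_out):
    """Linearly interpolate (times, values) onto a uniform grid with step dt_out."""
    t_end = times[-1]
    out_t, out_v = [], []
    t, i = times[0], 0
    while t <= t_end + 1e-12:
        # Advance to the source interval containing t.
        while i + 1 < len(times) and times[i + 1] < t:
            i += 1
        if i + 1 < len(times):
            w = (t - times[i]) / (times[i + 1] - times[i])
            v = (1 - w) * values[i] + w * values[i + 1]
        else:
            v = values[-1]
        out_t.append(t)
        out_v.append(v)
        t += dt_out
    return out_t, out_v

coarse_t = [0.0, 0.1, 0.2]        # basin simulation output times (s)
coarse_a = [0.0, 0.05, -0.02]     # accelerations (g), illustrative values
fine_t, fine_a = resample_history(coarse_t, coarse_a, 0.05)
```

A production translator would also need to handle coordinate transformations, unit conversions, and metadata about the fidelity of the source model, which is where the "intelligence" of the translator lies.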

The data needed to generate these models often resides in different locations and may not always be readily available. For example, soil and hazard information is stored in GIS databases maintained by various organizations (USGS, utility companies, departments of transportation, local agencies, etc.). However, the data may be at different resolutions, may be very sparse in some spatial regions and much denser in others, and may carry different levels of reliability. A common complaint from engineers trying to access the necessary data is that it is difficult to find information at the scale and quality needed, not to mention the heterogeneous formats in which the data is available. Technologies being developed to search large digital libraries may provide the query mechanisms needed to search, merge, and extract the required data from distributed national repositories. Associated with this access problem are also some public policy questions: how much information should be available? Access restrictions may have to be imposed depending on the quality and level of detail of the data, its intended uses, etc.
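The merge/extract step could, in its simplest form, reconcile overlapping records by their associated reliability. The sketch below assumes hypothetical repositories and record fields (none correspond to an actual USGS or agency schema):

```python
# Hypothetical merge sketch: combine soil-class records for the same grid
# cell from multiple repositories, keeping the higher-reliability entry.
# Repository contents and field names are invented for illustration.

def merge_records(*sources):
    merged = {}
    for source in sources:
        for rec in source:
            cell = rec["cell"]
            if cell not in merged or rec["reliability"] > merged[cell]["reliability"]:
                merged[cell] = rec
    return merged

# Two notional sources covering overlapping grid cells.
regional = [{"cell": (12, 40), "soil_class": "D", "reliability": 0.6}]
local = [{"cell": (12, 40), "soil_class": "C", "reliability": 0.9},
         {"cell": (12, 41), "soil_class": "D", "reliability": 0.5}]
sites = merge_records(regional, local)
```

Real repositories would of course require spatial interpolation between differently gridded datasets and a richer model of data provenance than a single reliability score.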

The simulations are rarely one-shot processes. An engineer will often build a model and perform a large number of parametric studies to evaluate the sensitivity of the system response to various factors and their relative importance. With today's modeling and analysis tools, the generation of these parametric models and the exploration of model response is done at a fairly low level, with most engineers developing their models ``from scratch.'' This is obviously inefficient. We envision a software library of reusable model fragments---``component models''---that represent the behavior of the various entities of earthquake simulations at various levels of granularity. A user could then quickly build a model by plugging together the components needed. A high-level scripting language would provide an interpreted environment for ``gluing'' these reusable components together and generating the needed models efficiently.
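A scripting-level sketch of this ``gluing'' idea, with invented class and parameter names, might look as follows. The point is that a parametric study becomes a short loop rather than a rebuilt model:

```python
# Sketch of the ``component model'' idea: reusable fragments composed by a
# scripting layer. Classes and parameters are hypothetical.

class Component:
    def __init__(self, name, **params):
        self.name, self.params = name, dict(params)

class Model:
    def __init__(self):
        self.components, self.links = [], []
    def add(self, comp):
        self.components.append(comp)
        return comp
    def connect(self, a, b):
        self.links.append((a.name, b.name))

def build(stiffness):
    """Assemble a soil-plus-frame model for one value of frame stiffness."""
    m = Model()
    soil = m.add(Component("soil", vs30=260.0))
    frame = m.add(Component("frame", k=stiffness))
    m.connect(soil, frame)
    return m

# Parametric study: three stiffness values, no model rebuilt from scratch.
models = [build(k) for k in (1.0e4, 2.0e4, 4.0e4)]
```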

The simulations are generally large scale, requiring significant computing resources. Many engineers who wish to perform them may not have the necessary resources locally and will need access to remote servers. Such ``analysis servers'' would accept a model and a specification of the resources needed, configure the hardware to run the simulations, and return the results to the user. The visualization and interpretation of model results is another challenging area where interaction with intelligent repositories may be needed. For example, model interpretation requires checking analysis results against standard specifications in building codes. The presentation of results at a high level and the identification and checking of the applicable specifications from ``specification servers'' are interesting tasks that have not been explored. They may be difficult because they require access not only to the model results but also to the CAD models and GIS maps from which the model was generated.
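The client side of such an exchange could be as simple as a model reference plus a resource specification, with the server selecting a suitable configuration. The request fields and selection rule below are illustrative assumptions about what an analysis server might accept:

```python
# Hypothetical ``analysis server'' interaction: the client submits a model
# plus a resource specification; the server selects a machine configuration.
# All field names and the cost-based rule are invented for illustration.

def select_configuration(resources, available):
    """Return the cheapest configuration meeting the requested resources."""
    feasible = [c for c in available
                if c["cpus"] >= resources["cpus"]
                and c["memory_gb"] >= resources["memory_gb"]]
    return min(feasible, key=lambda c: c["cost"]) if feasible else None

request = {"model": "frame_model_input",
           "resources": {"cpus": 16, "memory_gb": 32}}
cluster = [{"name": "small", "cpus": 8,  "memory_gb": 16,  "cost": 1},
           {"name": "mid",   "cpus": 16, "memory_gb": 64,  "cost": 3},
           {"name": "large", "cpus": 64, "memory_gb": 256, "cost": 10}]
chosen = select_configuration(request["resources"], cluster)
```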