
Frequency Enhancements for Visualizing 3D Seismic Data

Cheng-Kai Chen

Carlos Correa

Kwan-Liu Ma

Department of Computer Science

University of California at Davis

ABSTRACT
This application paper introduces a suite of enhancement techniques for visualizing seismic data. These techniques provide a
better understanding of the underlying propagation process in the
complex time-dependent seismic data. Traditional techniques that use the accumulated displacement as a scalar or vector field for volume rendering fail to capture the dynamic frequency variations, which are essential for seismic study. We show that using multiband signal filters to separate the frequency components of the data can explicitly highlight different frequency bands in the visualization. The end result is a combined view of different displacement components, such as the drift and the horizontal and vertical shaking motions that perturb the accumulated drift. We have implemented a GPU-based
raycasting renderer to handle unstructured meshes. We also employ
deformable textures for effectively composing multiple frequency
components. Our analysis and visualization techniques provide a
new way for seismic scientists to study their data.
Index Terms: I.3.3 [Computer Graphics]: Picture/Image Generation – Viewing algorithms; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism – Color, shading, shadowing, and texture; I.4.3 [Image Processing and Computer Vision]: Enhancement – Filtering; J.2 [Physical Sciences and Engineering]: Earth and Atmospheric Sciences
1 INTRODUCTION
As the simulation and exploration of seismic data become a major part of the work of many seismic scientists, visualization continues to play an important role in data analysis and interpretation. In the past few years, real-time volume rendering has become possible and has been widely utilized in scientific visualization as the programmable graphics processing unit (GPU) continues to advance.
With the help of an interactive seismic interpretation system, geological studies and predictions can be made from the seismic data.
More precise predictions of natural disasters such as earthquakes
can be obtained by computer-aided visual analysis. Although there
are many interactive volume rendering toolkits available for seismic
data visualization, most of them only support rendering the data of
all frequencies as a whole. This treatment, however, does not provide insights into the unique aspect of dynamic frequency variations
of 3D seismic data, which is critical for analyzing and understanding seismic data. Surprisingly, little work has been done to improve
this situation.
In our study, the seismic data are available as a continuous function sampled at regular or irregular mesh points. Since the data
usually consist of low- and high-frequency components, including
noise, obtaining the essential frequency-domain features from the
data becomes an important aspect of data analysis and visualization.
We employ frequency-time (F-T) analysis, a general and effective
e-mail: ckchen@ucdavis.edu
e-mail: correac@cs.ucdavis.edu
e-mail: ma@cs.ucdavis.edu

technique for studying seismic data. F-T analysis includes delineation of sequence interface and determination of seismic sequence
cycles. In F-T analysis, spatial-temporal data are first transformed
into the frequency domain. Then, for the frequency component
at each sample point, the corresponding data are separated using
multiband filters, such as low-pass or high-pass filters. Low-pass filters remove the components above a specified cutoff frequency, while high-pass filters keep only the components above it.
After applying F-T analysis, we can obtain the interior structure of
seismic data in a meaningful way by studying the F-T structure.
Once the data are separated into different frequency components,
visualizing seismic data becomes a multivariate data visualization problem. In this paper, we highlight separate frequency bands
in the visualization so that the scientists can better observe the intrinsic nature of seismic propagation. Mixing different frequency
bands together without visual cluttering and ambiguity is achieved
through several enhancement techniques including unsharp masking and deformable textures. For example, when each isolated band
is rendered using a different illustrative style, the observers can intuitively identify different frequency bands and clearly analyze their
relationships over time. To our knowledge, we are the first to utilize frequency-enhanced visualization techniques for effective understanding of the seismic data.
This paper is structured as follows. In the next section, we discuss related work. In Section 3, an overview of the proposed rendering process is given. The implementations of frequency analysis
and seismic data visualization are discussed in detail, along with several enhancement techniques. In
Section 4, we demonstrate the proposed methods on earthquake seismic data. The paper concludes in Section 5 with a short summary of the presented concepts.
2 RELATED WORK

In this section, we first review work in the field of seismic data visualization. Then, we discuss related work dealing with time-varying and frequency data analysis and visualization. Finally, we
review the use of deformation, color, and texture in visualization.
2.1 Seismic Data Visualization
There have been extensive research efforts in the area of seismic
data visualization, and a large collection of major developments
were captured in several surveys. A pioneering work was presented
by Wolfe et al. [19]. Since the data they studied contain a considerable amount of ringing, it is necessary to remove the noise and
shape the pulse waveform. The demonstrated interactive 3D visualization approach interprets seismic data with a volumetric scheme,
and filters out the noisy part by using deconvolution filters. Hence,
users get clearer pictures of the underground structures
in seismic data.
Castanie et al. [1] described a high quality volume rendering
algorithm for visualizing 3D seismic data based on pre-integrated
transfer functions. Chourasia et al. [3] proposed an iterative refinement of the visualization incorporating feedback from scientists. Combined with the existing visualization techniques, such
as volumetric and topographic deformations, the proposed system
creates more meaningful visual results from the data sets. Patel et

al. [10] introduced techniques for visualizing interpreted and uninterpreted seismic data. The non-photorealistic rendering technique
was adopted to render the interpreted data as geological illustrations, while the uninterpreted data was rendered as a color-coded volume. They also discussed how to combine the two representations
together so that the users can control the balance between the two
visualization styles accordingly. The concept of focus+context visualization metaphors was presented by Ropinski et al. [11] where
interactive exploration of volumetric subsurface data is supported.
By using specialized 3D interaction metaphors, the user is able to
switch between different lens shapes as well as visual representations, such as emphasizing or removing arbitrary parts of a data set.
To visualize massive data from large-scale earthquake simulations, Ma et al. [7] presented a parallel adaptive rendering algorithm
for visualizing time-varying unstructured volume data. Their goal
was to come up with a scalable, high-fidelity visualization solution
which allows scientists to explore in the temporal, spatial, and visualization domain of their data. Yu et al. [21] proposed a parallel
visualization pipeline for studying the terascale earthquake simulation. Their solution is based on a parallel adaptive rendering algorithm coupled with a new parallel I/O strategy which effectively
reduces interframe delay by dedicating some processors to I/O and
preprocessing tasks.
2.2 Time-Varying and Frequency Data Visualization
Fang et al. [6] proposed a method to visualize and explore time-varying volumetric medical images based on the temporal characteristics of the data. The basic idea is to consider a time-varying
data set as a 3D array where each voxel contains a time-activity
curve (TAC). Matching TACs based on a given template TAC essentially classifies voxels with similar temporal behaviors. Our work
is similar to this work in the sense that the F-T analysis operates on
each individual TAC.
In frequency data analysis in visualization, Neumann et al. [9]
presented a feature-preserving volume filtering method. The basic idea is to minimize a three-component global error function
penalizing the density and gradient errors and the curvature of the
unknown filtered function. The optimization problem leads to a
large linear system that can be efficiently solved in the frequency domain using the fast Fourier transform (FFT). Wu et al. [20]
introduced the 3D F-T analysis of seismic profile using the wavelet
transform, which provides an effective way for the subsequent analysis. Erlebacher et al. [5] also studied a wavelet toolkit for visualization and analysis of large earthquake data sets.
2.3 Deformation, Texture, and Color in Visualization
Chen et al. [2] introduced the concept of spatial transfer functions
as a unified approach to volume modeling and animation. A spatial
transfer function is a function that defines the geometrical transformation of a scalar field in space, and is a generalization and
abstraction of a variety of deformation methods. They proposed
methods for modeling and realizing spatial transfer functions, including simple procedural functions, operational decomposition of
complex functions, large-scale domain decomposition and temporal spatial transfer functions.
Effective utilization of color and texture is the main theme of
several research efforts. Sigfridsson et al. [13] combined scalar
volume rendering with glyphs. They presented a method for visualizing data sets containing tensors in 3D using a hybrid technique
which integrates direct volume rendering with glyph-based rendering. Interrante et al. [12, 15] described new strategies for effective utilization of colors and textures to represent multivariate data.
They provided a comprehensive overview of strategies to represent
multiple values at a single spatial location, and presented a new
technique for automatically interweaving multiple colors through
the structure of an acquired texture pattern. Wang et al. [17] described a knowledge-based system that captures established color design rules in a comprehensive interactive framework, aiming to
aid users in their selections of colors for scene objects by incorporating individual preferences, importance functions, and overall
scene composition.
3 SEISMIC DATA ANALYSIS

In recent years, seismic analysis has been widely applied, from exploration to development and exploitation, to many problems such as reservoir characterization, enhanced oil recovery strategies, and earthquake investigation. The interpretation
of seismic data involves not only the concepts of geology, seismology, and signal processing, but also various techniques in computer
graphics and visualization.
In general, seismic data are a series of data chunks and are available as a continuous function sampled at regular or irregular mesh
points. Such data are acquired directly from simulation results, or
indirectly from wave propagation such as earthquake shockwaves
through the subsurface. The data contain displacements as well as coherent and incoherent noise signals. The displacement of
each node is a multidimensional vector representing the movement
caused by shockwaves under the surface, such as shaking or drift.
F-T analysis is commonly used to separate the data into different frequency components, and has become an important technique for interpreting seismic data. F-T analysis is a technique for
manipulating signals whose frequency components vary
in time. It includes delineation of sequence interface and determination of seismic sequence cycles. All the F-T methods map the
sequence of data into the frequency domain first, and then perform
the analysis.
There exist many F-T methods for seismic interpretation, each with different properties. In this paper, we use
the fourth order Butterworth filter to separate the different bandpass data. The Butterworth filter [14] is one of the most commonly
used digital filters in motion analysis. It is designed to flatten a
frequency response in the passband. Compared to other filters, the
Butterworth filter rolls off more slowly around the cutoff frequency,
but exhibits no ripples, which is desirable for our analysis. A typical
Butterworth filter is the low-pass filter, and it can also be modified
to be used as a high-pass filter.
A low-pass filter passes relatively low frequency components in
the signal and stops the high frequency components. In other words,
the frequency components higher than the cutoff frequency will be
dropped by a low-pass filter. The behavior of a filter can be summarized by the so-called frequency response function. The following
equation gives the frequency response function of the Butterworth
filter:

$$|H_c(j\omega)|^2 = \frac{1}{1 + (\omega/\omega_c)^{2N}}, \qquad (1)$$

where $j = \sqrt{-1}$, $\omega$ is the frequency (rad/s), $\omega_c$ is the cutoff frequency (rad/s), and $N$ is the order of the filter.
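As a concrete illustration, the squared magnitude response of Eq. (1) is easy to evaluate numerically. The sketch below is our own (the cutoff value and function name are illustrative, not taken from the paper):

```python
import numpy as np

def butterworth_gain_sq(omega, omega_c, N=4):
    """Squared magnitude response of Eq. (1): 1 / (1 + (omega/omega_c)^(2N))."""
    omega = np.asarray(omega, dtype=float)
    return 1.0 / (1.0 + (omega / omega_c) ** (2 * N))

omega_c = 0.3  # illustrative cutoff frequency (rad/s)
print(butterworth_gain_sq([0.03, 0.3, 3.0], omega_c))
# Well below the cutoff the power gain is close to 1; at the cutoff it is
# exactly 0.5 (the -3 dB point); above it the gain rolls off as omega^(-2N).
```

The fourth-order choice (N = 4) makes the roll-off steep enough to separate bands while keeping the passband maximally flat.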
3.1 Earthquake Dataset
The seismic data used in this work were generated by earthquake simulation of the Humboldt Bay Middle Channel (HBMC)
bridge [23] in the Department of Structural Engineering at the University of California, San Diego. The simulation dataset consists of
a finite element (FE) model created with the software framework
OpenSees (Open System for Earthquake Engineering Simulation)
[8]. The model represents the river channel and banks as a hexahedral mesh. The elements discretize an effective-stress, cyclic-plasticity constitutive

model, able to represent layers of soil of different materials and liquefaction properties. The simulation then obtains the displacements
at each node by solving the FE system:
$$M\ddot{U} + \int_{\Omega} B^{T} \sigma' \, d\Omega + Q\,p = f_s, \qquad (2)$$

$$Q^{T}\dot{U} + S\,\dot{p} + H\,p = f_p, \qquad (3)$$

where $M$ is the mass matrix, $U$ is the displacement vector, $B$ the strain-displacement matrix, $\sigma'$ the effective stress vector (determined by the model), $Q$ the discrete gradient operator, $p$ the pore pressure vector, $H$ the permeability matrix, $S$ the compressibility matrix, and $f_s$ and $f_p$ the body forces and prescribed boundary conditions, respectively. The first equation models the motion of the
system, while the second represents mass conservation constraints.
The HBMC bridge seismic data contains complex incident wave
motions, including drift, or permanent components, and shaking
motions.
In addition, the dataset provides a 3D structure of the HBMC
bridge (including the superstructure, piers, and supporting piles).
The displacements are obtained using a 2D nonlinear mesh for the
bridge piers and linear elastic beam-column elements for the superstructure and piles. The mesh and a cross-section view of the bridge
are shown in Figure 1.
For efficient visualization, we converted the hexahedral elements
in the soil to a tetrahedral mesh. To preserve the interpolation and
avoid introducing artifacts due to the splitting operation, we use
barycentric interpolation of the displacement vectors for each element.
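The barycentric interpolation used for the split elements can be sketched as follows; this is our own minimal illustration (vertex positions, displacement values, and function names are hypothetical), assuming a non-degenerate tetrahedron and the standard 4x4 linear system:

```python
import numpy as np

def barycentric_weights(p, verts):
    """Barycentric coordinates of point p with respect to a tetrahedron.

    verts is a 4x3 array of vertex positions; the weights w solve
    sum_i w_i * v_i = p together with sum_i w_i = 1.
    """
    # Build the 4x4 system [[x0..x3], [y0..y3], [z0..z3], [1 1 1 1]] w = [p, 1].
    A = np.vstack([verts.T, np.ones(4)])
    b = np.append(p, 1.0)
    return np.linalg.solve(A, b)

def interpolate(p, verts, values):
    """Interpolate per-vertex values (e.g., displacement vectors) at p."""
    w = barycentric_weights(p, verts)
    return w @ values

# Unit tetrahedron with a hypothetical displacement vector at each vertex.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
disp = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 2]], float)
print(interpolate(np.array([0.25, 0.25, 0.25]), verts, disp))
```

Because the weights are affine in p, splitting a hexahedron into tetrahedra and interpolating this way reproduces the vertex values exactly at the shared faces, which is what avoids artifacts at the split boundaries.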

[In-figure labels: Samoa Channel (North-West); Eureka Channel (South-East); X/Y/Z axes; Drift motion (permanent deformation); Shaking motion (cyclic component).]
Figure 1: A 3D finite element structure and a cross-section view along the X-Z direction for the HBMC bridge. The simulated structure includes the superstructure, piers, and supporting piles. The cross-section view also illustrates that different motions, such as drift and shaking, exist at different frequency bands under the surface.

3.2 Seismic Displacement Components


The earthquake dataset, similar to other seismic data, contains
3D displacements that combine two different components. The drift is the permanent deformation component, usually in the low-frequency band, and is the product of slumping and settlement/heave of the system. The shaking is the cyclic component
of seismic displacements existing in high frequency band, seen as
a back and forth movement. At the beginning of the simulation,
P-waves arrive early and correspond to the initial displacements.

Once the cyclic components become apparent, these correspond to S-waves (shear waves). The dynamic shaking motions are superposed on the accumulated drift values, and therefore it is difficult to
distinguish the impact from different motions.
In order to extract these components and provide a better understanding of the simulation, we use high-pass and low-pass filters
along the temporal dimension, where the suggested cutoff frequencies from scientists are 0.3 and 0.1, for extracting the high-pass and
low-pass filtered data, respectively. To provide an intuitive visualization of the simulation, we perform the analysis on the displacement magnitude, since most of the permanent components occur
in the vertical direction, while the cyclic components occur on the
horizontal directions (parallel to the ground). Later on, we show
how to incorporate direction on the visualization.
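A minimal sketch of this per-node band separation, assuming SciPy's Butterworth design and treating the suggested cutoffs as fractions of the Nyquist rate (the paper does not state their units), applied to an entirely synthetic drift-plus-shaking signal:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Synthetic time series for one node: a slowly accumulating drift plus a
# higher-frequency shaking component (frequencies are illustrative only).
t = np.arange(1024)
drift = 0.002 * t                              # permanent component
shaking = 0.5 * np.sin(2 * np.pi * 0.2 * t)    # cyclic component
s = drift + shaking

# Fourth-order Butterworth filters with the suggested cutoffs
# (0.1 for the low band, 0.3 for the high band, as fractions of Nyquist).
b_lo, a_lo = butter(4, 0.1, btype='low')
b_hi, a_hi = butter(4, 0.3, btype='high')

# Zero-phase filtering so the components are not shifted in time.
s_low = filtfilt(b_lo, a_lo, s)     # recovers the drift
s_high = filtfilt(b_hi, a_hi, s)    # recovers the shaking
```

In the actual pipeline this filtering would run over the displacement-magnitude time series of every mesh node; the two filtered fields then drive the separate visualization channels.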
4 MULTI-BAND ENHANCED VISUALIZATION

In this paper, we study the enhancement of direct volume rendering for representing multiple bands of a single scalar or vector field. For
the case of seismic data, we have 3D displacements. Mapping these
to color and opacities is a difficult task. One alternative is to map
the magnitude of the vector field to color and opacity via a one-dimensional transfer function. An example is shown in Figure 2 (a).
In this case, cool colors represent low magnitude while warm
colors represent high magnitude displacements. Figure 3 shows the
colormap and data value ranges. Although direction is not encoded,
it provides important cues about the distribution of magnitude and
change over time. For seismic data, however, the displacement
is the product of two main components, one being the permanent
component due to settlement or slumping, and the shaking component, which measures the actual ground movement due to seismic activity. In Figure 2 (b) and (c) we show the magnitude of the
permanent and cyclic components for the same two frames. When
compared to those in Figure 2 (a), it becomes evident that it is not
possible to extract the two components easily. This happens as it
becomes increasingly difficult to distinguish small movements as
the permanent component grows.
Instead, we propose a novel approach that displays the two components of the displacement as separate entities. Therefore, our
problem becomes that of visualizing time-varying multivariate three-dimensional data. An example is shown in Figure 2 (d). Notice how the different components can be easily identified, and now
it is possible to quantify the amount of shaking (high-frequency
component), according to the color map. For seismic data, high-frequency components become increasingly small compared to the
permanent components as the shaking dissipates. For this reason,
we present a number of methods to enhance a component of interest.
These methods, inspired by both image-processing operators and volume rendering techniques, enhance different components in a multi-band volume, namely optical operations, temporal unsharp masking, and deformation.
4.1 Temporal Unsharp Masking
Unsharp masking is an image processing technique that increases
the contrast of edges in 2D images. It works by computing an unsharp mask of a signal, usually by subtracting a low-pass filtered version of the signal from the signal itself, and then adding a scaled version of this unsharp mask to the original signal. It has the effect of
enhancing the high-frequency components of the image, which correspond to the edges, making it look sharper. In the case of seismic
data, we derive the components as frequencies in the time dimension; therefore, we call this technique temporal unsharp masking.
Let us define a scalar field S(x,t) computed as the magnitude of the
displacement, and $S_L(x,t)$ and $S_H(x,t)$ the low-pass and high-pass filtered scalar fields, respectively. The newly enhanced field $S'(x,t)$

(a) Original. (b) Permanent (low band). (c) Cyclic (high band). (d) Combined visualization.

Figure 2: Volume visualization of the HBMC bridge for time steps t = 380, 480. (a) shows the original view of the unfiltered seismic data. (b) and (c) show the low-pass permanent deformation and the high-pass cyclic components, shown with warm and cool colors, respectively. (d) shows the combined visualization of both the low- and high-pass data, in which the different components can be easily identified.

Figure 3: Color map used for the images in this paper and the range of values. The high-frequency components are given more granularity than the permanent component.

combination can be additive (in RGB, HSV, or CIELAB space) or can be obtained with the over operator, used for compositing colors in a front-to-back fashion. The latter simulates the effect of having two scalar fields intertwined in a single volume. For example, our previous enhancement can be performed optically to achieve post-classification temporal unsharp masking. The resulting color and opacity of a sample are:

can be computed as:

$$S'(x,t) = S(x,t) + \lambda\,\bigl(S(x,t) - S(x,t) \ast G(t)\bigr) \qquad (4)$$

$$= S_L(x,t) + (1 + \lambda)\,S_H(x,t) \qquad (5)$$

where $G(t)$ is a low-pass filter in the time dimension, which is convolved with the original signal ($f \ast g$ denotes convolution). The parameter $\lambda$ indicates the degree of enhancement. Figure 4 shows the
result of applying unsharp masking for three consecutive frames.
Note the appearance of regions of higher magnitude due to the addition of high frequency magnitude. These appear in the images as
purple-ish regions. Because the enhancement occurs before classification, it may be difficult to extract the actual magnitudes of the
low and high frequency components, and it becomes increasingly
difficult as the permanent component overcomes the cyclic components. For this reason, we turn to optical operations, as described in
the following section.
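Equation (4) amounts to a few lines of array code per time-activity curve. The sketch below is our own (the box kernel, the signal, and the value of the enhancement parameter, called lam here, are illustrative, not from the paper):

```python
import numpy as np

def temporal_unsharp(S, G, lam):
    """Temporal unsharp masking of one per-voxel time series (Eq. 4).

    S   -- 1D array, displacement magnitude over time for one sample point
    G   -- 1D low-pass kernel, normalized to unit sum
    lam -- degree of enhancement
    """
    S_low = np.convolve(S, G, mode='same')    # S convolved with G
    return S + lam * (S - S_low)              # boost the high-frequency residual

# Hypothetical signal: slow drift plus fast shaking.
t = np.arange(256)
S = 0.01 * t + 0.2 * np.sin(t)
G = np.ones(15) / 15.0                        # simple box low-pass kernel
S_enh = temporal_unsharp(S, G, lam=1.2)
```

With the box kernel wide enough to average out the shaking, the residual S - S_low is essentially the cyclic component, so the enhanced field amplifies the shaking while leaving the drift untouched.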
4.2 Optical Enhancement
This method combines the different components and performs enhancement in the optical domain, i.e., after classification of the filtered data. In general, classification is a mapping from a scalar field, in our case the magnitude, to a color $C$ and an opacity $\alpha$. If we defer enhancement until after classification, for the low and high components we obtain colors and opacities $C_L, \alpha_L$ and $C_H, \alpha_H$, respectively. Therefore, enhancement can be defined as a combination of these. The

$$C' = C_L \oplus C_H \oplus \lambda C_H \qquad (6)$$

$$\alpha' = \alpha_L \oplus \alpha_H \oplus \lambda \alpha_H \qquad (7)$$

where $\oplus$ represents an optical operator. In our examples, we use the over operator used for front-to-back volume rendering. This resembles the composition of equally spaced samples comprising each of the components (i.e., low, high, and enhancement). To
achieve a comparable per-sample intensity to those images with no
enhancement, we make sure that the sample opacity is modulated
to accommodate the extra attenuation per sample. That is, we modulate the sample opacity to simulate the composition of two extra
samples per sample interval.
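One way to realize Eqs. (6)-(7) with the over operator is sketched below. This is our own reading, using premultiplied colors; in particular, clamping the scaled opacity of the enhancement term is our assumption, not the paper's stated rule:

```python
import numpy as np

def over(c_front, a_front, c_back, a_back):
    """Front-to-back 'over' operator on premultiplied (associated) colors."""
    c = c_front + (1.0 - a_front) * c_back
    a = a_front + (1.0 - a_front) * a_back
    return c, a

def enhanced_sample(C_L, a_L, C_H, a_H, lam):
    """Optical enhancement of one sample (Eqs. 6-7): compose the low band,
    the high band, and a lam-scaled copy of the high band."""
    c, a = over(C_L * a_L, a_L, C_H * a_H, a_H)
    # Clamp the scaled opacity so the composite stays a valid opacity.
    c, a = over(c, a, lam * C_H * a_H, min(1.0, lam * a_H))
    return c, a

# A sample where the low band is semi-transparent red and the high band blue.
c, a = enhanced_sample(np.array([1.0, 0, 0]), 0.4,
                       np.array([0, 0, 1.0]), 0.5, lam=1.2)
```

A fully opaque low band hides the high band entirely, which is exactly the occlusion behavior the over operator is meant to model; the per-sample opacity modulation described above would then rescale a to match the unenhanced attenuation.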
Figure 5 shows the result of combining the two components in
the optical domain. The superposition of cool and warm colors indicates the overlapping of the two displacement components,
i.e., drift and shaking. We can see that, although the drift accumulates, the shaking remains fixed within an interval (blue color).
Figure 6 shows an example of postclassification unsharp masking
with $\lambda = 1.2$. When compared to the result in Figure 5, we can
now clearly see the distribution of shaking, especially on the first
timestep. For timestep t = 580, the enhancement helps identify
more isosurfaces in the shaking component. Compare for example the difference with the pre-classification enhancement in Figure
4. Similar structures can be found (for example the waving motion
in t = 580), but the presence of the two intervals makes it possible to identify and quantify the regions where cyclic components
contribute to the total displacement.

4.3 Deformable Textures


In the above techniques, we mapped the displacement magnitude to
color and opacity and used optical operations to combine the high
and low frequency components. However, due to the additive nature
of color in the composition process, it becomes increasingly difficult to visualize patterns of interest as both components contribute
to occlusion. As an alternative, we propose the use of 3D textures
to modulate the opacity of one of the components. The texture can
be considered as a scalar field $T: \mathbb{R}^3 \to \mathbb{R}$, where each sample indicates a density value. These can be obtained using texturing mechanisms such as Perlin noise or procedurally defined primitives, such
as spheres, tubes and thin plates. The advantage of these textures is
that it is now possible to control the degree of occlusion of one of
the components so that the other component is visible without the
additive operation of colors and opacities. A similar idea has been
exploited in the form of screen-door transparency [16].
One issue with simply using a texture to modulate opacity is that it appears static while the isosurfaces of interest move within the textured space. This effect was rather confusing and of
little help. Instead, we want to move the texture in such a way that
it follows the vector field. Therefore, we incorporate deformable
textures, where the opacity is mapped according to the 3D pattern
that would be displayed if the original texture were deformed by the
vector field. We can incorporate this directly in the rendering process by warping the coordinates of the 3D texture with an inverse
displacement. In our case, we use deformable textures to modulate
the opacity of the high frequency components. The opacity of the
high frequency component is found using the following expression:

$$\alpha_H(x) = T(x - D(x,t)) \qquad (8)$$

where $T$ is the scalar field representing the texture and $D$ is the displacement vector field, which varies over time. When combined with a low
frequency component, the result achieves a better mix of the two
components that minimizes inter-component occlusion but provides
a lot of information about the vector field. In our experiments, we experimented
with several textures, including semi-transparent spheres and tubular structures. Since most of the high frequency movement occurs
in the horizontal planes (while vertical movement is associated with
the permanent displacements), a horizontal texture is appropriate.
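Equation (8) with a procedural tube texture might look as follows; the texture parameters and the displacement field here are illustrative stand-ins, not values from the paper:

```python
import numpy as np

def tube_texture(p, axis=1, radius=0.1, spacing=0.5):
    """Procedural tube texture along one horizontal axis: opacity is high
    inside a periodic grid of tubes, low elsewhere (a stand-in for T)."""
    # Distance to the nearest tube center in the plane orthogonal to `axis`.
    q = np.delete(np.asarray(p, float), axis)
    d = np.linalg.norm((q + spacing / 2) % spacing - spacing / 2)
    return 1.0 if d < radius else 0.05

def deformed_opacity(x, D, t):
    """Opacity of the high band at x (Eq. 8): sample the texture at the
    back-warped position x - D(x, t)."""
    return tube_texture(x - D(x, t))

# Hypothetical displacement field: horizontal shaking along X.
D = lambda x, t: np.array([0.2 * np.sin(t), 0.0, 0.0])
```

Sampling the texture at the back-warped coordinate makes the tubes appear to move with the vector field, so a point that starts inside a tube stays visually attached to it as the shaking progresses.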
Figure 8 shows an example of using a texture pattern of a tube
along one of the horizontal directions. Since the shaking often occurs on these planes, the deformable texture helps us identify the
back and forth motion of the cyclic waves. In addition, the horizontal stripes are reminiscent of the horizontal layers in the soil,
facilitating understanding for geologists.
Depending on the density of the texture, the cyclic component
may occlude the permanent component of the displacement. In Figure 9 we show the result of applying a texture pattern of a sphere,
which gives more visibility to the drift component. This texture
emulates the rendering of semi-transparent probes in the soil that
move with the displacement. Since each sphere appears deformed,
this method provides insight on the local direction of displacement.
To fully appreciate the effect of the deformable textures, please see
the accompanying video.
5 IMPLEMENTATION DETAILS

To test these techniques, we implemented a GPU-based renderer for unstructured meshes. To obtain high-quality rendering, we use raycasting, following an implementation similar to that proposed by
Weiler et al. [18]. We use 2D textures to encode the mesh vertices, the corresponding scalar values and connectivity information
to be able to march through the tetrahedral mesh during rendering.
Unlike the original implementation by Weiler et al., we encode indexed vertices rather than unrolling the mesh. This is necessary

as we need to store several scalar fields for a single vertex, corresponding to the different components of the displacement magnitude. This method also results in a more compact encoding that allows us to store more timesteps in GPU memory. In our implementation, the different enhancement techniques are processed during rendering, which allows us to change their parameters on the fly.
The addition of the different techniques makes the rendering process
slower than simple volume rendering due to extra texture fetches.
In addition, barycentric interpolation within the tetrahedron may
be costly when we need to interpolate additional quantities (e.g.,
high-frequency component, displacements). For the case of deformable textures, we decided to define it procedurally rather than
adding an extra fetch, since for this case pixel processing power is
in general faster than texture fetches. Table 1 summarizes the extra
Technique                          | Extra texture fetches and interpolations                 | Total                   | Average cost (fps)
Unsharp masking                    | Fetch high-pass                                          | 4 + n·t_i               | 16.66
Postclassification unsharp masking | Fetch high-pass + classify high-pass                     | 4 + n·(t_i + t_f)       | 14.3
Displacement                       | Fetch high-pass + deform coordinate + classify high-pass | 4 + n·(t_i + t_f + t_b) | 5.0

Table 1: Cost of the enhancement techniques in terms of extra texture fetches and interpolation operations, and overall performance in frames per second (volume rendering without lighting). t_i refers to the time it takes to perform scalar value interpolation, t_f is the time of a texture fetch, and t_b is the cost of barycentric interpolation for 3D displacements.

texture fetches and interpolations required by the enhancement techniques. $t_i$ refers to the time it takes to perform scalar value interpolation, $t_f$ is the time of a texture fetch, and $t_b$ is the cost of barycentric interpolation for 3D displacements. Scalar
value interpolation can be efficiently implemented using linear approximation of the cell gradient. Applying the same technique to
approximate the barycentric interpolation of displacements would
cost roughly three times as much. As a baseline comparison, the
rendering algorithm traverses the tetrahedral mesh element by element. For each tetrahedron, the four scalar values are fetched from
a texture and the scalar field is sampled at uniform sample points
within the cell (to obtain high quality rendering). Assuming that
the algorithm traverses n samples in a tetrahedron, the number of
operations in a traditional unstructured mesh renderer per tetrahedron is $4 + n(t_i + t_f)$, where the first term refers to the fetching of the
scalar values and each sample requires a texture fetch and an interpolation. We run our system on an Intel Core 2 Duo with an
nVidia GeForce 280 card. At high quality resolution, our system
performs at about 20 and 6.66 fps without and with lighting, respectively, at a 512×512 image size. To provide interactive rates,
we provide a low-quality mode that runs at much higher speeds. We
notice that optical operators do not compromise performance considerably, while deformation is costlier. Most of the cost is due to
the barycentric interpolation of displacements. We are currently seeking more efficient encoding and interpolation mechanisms.
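The extra-cost column of Table 1 can be captured in a small helper for back-of-the-envelope estimates. This is our own sketch; the time constants are symbolic, and the baseline renderer itself already costs 4 + n·(t_i + t_f) per tetrahedron:

```python
def extra_cost(technique, n, t_i, t_f, t_b):
    """Extra per-tetrahedron operations for each enhancement (Table 1).

    n   -- samples taken along the ray inside the tetrahedron
    t_i -- cost of one scalar value interpolation
    t_f -- cost of one texture fetch
    t_b -- cost of one barycentric interpolation of a 3D displacement
    """
    extras = {
        'unsharp': 4 + n * t_i,                     # fetch + interpolate high band
        'postclassify': 4 + n * (t_i + t_f),        # ...plus classify it per sample
        'displacement': 4 + n * (t_i + t_f + t_b),  # ...plus warp texture coords
    }
    return extras[technique]
```

The constant 4 is the per-tetrahedron fetch of the four extra vertex values; everything else scales with the number of samples n, which is why the deformable-texture path (the only one paying t_b) dominates the measured slowdown.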
6 CONCLUSIONS
In this paper, we have presented a number of techniques for enhancing frequencies of 3D displacement data. Some of these techniques operate in the data domain while others operate in the optical
domain after classification. Although enhancement of frequencies
can be addressed with a signal-processing approach using sharpening filters, such a visualization was not effective at conveying the
Figure 4: Volume visualizations of the HBMC bridge with temporal unsharp masking applied, for time steps t = 380, 480, 580 (panels a–c). One problem with this approach is that the unsharp masking exaggerates the magnitude of the original displacement, making it difficult to extract the actual shaking from the low-frequency component.
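The magnitude exaggeration of temporal unsharp masking can be seen in a minimal 1-D sketch. The box low-pass, the gain of 1.2, and the synthetic drift-plus-shaking signal below are all assumptions chosen for illustration, not the paper's actual filter:

```python
import numpy as np

def temporal_unsharp(signal, window=5, gain=1.2):
    """Unsharp masking along time: boost the signal by a scaled
    copy of its high-frequency residual."""
    kernel = np.ones(window) / window            # box low-pass (assumed)
    low = np.convolve(signal, kernel, mode="same")
    high = signal - low                          # shaking-like residual
    return signal + gain * high                  # enhanced displacement

t = np.linspace(0, 10, 500)
drift = 0.5 * t                                  # accumulated drift
shaking = 0.2 * np.sin(8 * np.pi * t)            # cyclic component
disp = drift + shaking

enhanced = temporal_unsharp(disp)
# The peak magnitude grows, which is the exaggeration noted above.
print(np.abs(disp).max(), np.abs(enhanced).max())
```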

Figure 5: Volume visualizations of the HBMC bridge combining low- and high-pass components in the optical domain for three time steps t = 380, 480, 580 (panels a–c). The superposition of cool and warm colors indicates the overlap of the two displacement components, i.e., drift and shaking. Although the drift accumulates, the shaking remains fixed within an interval (blue color).

Figure 6: Examples of post-classification unsharp masking with a gain of 1.2 for three time steps t = 380, 480, 580 (panels a–c). Compared to Figure 5, we can now clearly see the distribution of shaking, especially in the first time step. For time step t = 580, the enhancement helps identify more isosurfaces in the shaking component.

Figure 7: Texture fetches in a tetrahedral mesh renderer. For each tetrahedron, we must fetch the four scalar values from the scalar texture, and for each of the n samples within a tetrahedron we must fetch the color and opacity from a transfer function. Enhancement operators on entire tetrahedra, such as extracting the high-frequency component, do not affect performance much. Per-sample enhancement operators (post-classification or deformable textures) imply additional fetches per sample, which increases the cost.

relationships between the permanent and cyclic components. We believe that this is due to a fundamental perceptual and cognitive problem that prevents us from understanding the addition of colors (due to classification) as a metaphor for arithmetic addition. In contrast, optical combination, which simulates the effect of intertwining two sets of surfaces, seems to work better, since the color ranges of the two components can be more easily discerned (e.g., cool colors for high frequency and warm colors for low frequency). When these intervals overlap there may be problems, and careful design of color maps will prove important. We will address these issues in our future work. In our experiments, we found that optically highlighting two components was useful, but highly dependent on the opacity transfer function. When the opacities of both components become high, they may occlude each other such that their relationship can no longer be discerned. The use of deformable textures solves this problem in an elegant manner. The particular choice of texture also seems important for effective visualization. In our case, horizontal textures prove effective since they are reminiscent of geological layers and thus have a physical counterpart. Spheres also emulate a series of deformable probes in the ground and help reveal displacement along different directions.

With the ability to extract and enhance the different components of seismic data, we are providing unprecedented capabilities for scientists to understand the superposition of displacements in complex time-varying phenomena. We believe that these ideas can be extended to other domains, such as flow visualization, where we replace displacements with velocities. Although our techniques extend to vector field data in general, they need to be adapted according to the nature of the data to create more meaningful decompositions and enhancements. For example, textures may need to be oriented in such a way that they capture the predominant component of the flow.

Figure 8: Examples of deformable texture enhancement for time steps t = 380, 480, 580. The deformable tubes along one of the horizontal directions represent the corresponding cyclic components, helping us identify the back-and-forth motion of the cyclic waves. To better appreciate the effect of the deforming textures, please refer to the accompanying video.

Figure 9: The texture is replaced with deformable spheres. They provide more visibility into the drift motions, and the shape of the spheres helps identify the local direction of displacement. To better appreciate the effect of the deforming textures, please refer to the accompanying video.
