Cheng-Kai Chen
Carlos Correa
Kwan-Liu Ma
ABSTRACT
This application paper introduces a suite of enhancement techniques for visualizing seismic data. These techniques provide a
better understanding of the underlying propagation process in the
complex time-dependent seismic data. Traditional techniques using
the accumulated displacement as a scalar or vector field for volume
rendering fail to capture the dynamic frequency variations, which
are essential for seismic study. We show that using multiband signal
filters to separate frequency components of the data can highlight
different frequency bands explicitly in visualization. The end result is a combination of different displacements, such as drift and shaking along the horizontal and vertical directions, where the shaking perturbs the accumulated drift. We have implemented a GPU-based
raycasting renderer to handle unstructured meshes. We also employ
deformable textures for effectively composing multiple frequency
components. Our analysis and visualization techniques provide a
new way for seismic scientists to study their data.
Index Terms: I.3.3 [Computer Graphics]: Picture/Image Generation - Viewing algorithms; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Color, shading, shadowing, and texture; I.4.3 [Image Processing and Computer Vision]: Enhancement - Filtering; J.2 [Physical Sciences and Engineering]: Earth and Atmospheric Sciences
1 INTRODUCTION
(e-mail: ckchen@ucdavis.edu, correac@cs.ucdavis.edu, ma@cs.ucdavis.edu)
technique for studying seismic data. F-T analysis includes delineation of sequence interfaces and determination of seismic sequence
cycles. In F-T analysis, spatial-temporal data are first transformed
into the frequency domain. Then, for the frequency component
at each sample point, the corresponding data are separated using
multiband filters, such as low-pass or high-pass filters. A low-pass
filter removes the components above a specified cutoff frequency, while
a high-pass filter keeps only the components above that cutoff.
After applying F-T analysis, we can obtain the interior structure of
seismic data in a meaningful way by studying the F-T structure.
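As an illustration, the band separation described above can be sketched with a brick-wall split in the frequency domain. This is a simplification of the multiband filtering the paper describes, and the sampling rate, cutoff, and signal components below are hypothetical:

```python
import numpy as np

fs = 100.0  # sampling rate (Hz), hypothetical
fc = 2.0    # cutoff frequency (Hz), hypothetical
t = np.arange(0, 10, 1 / fs)

# Synthetic displacement: a slow "drift" plus higher-frequency "shaking".
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)
shaking = 0.2 * np.sin(2 * np.pi * 8 * t)
signal = drift + shaking

# Transform to the frequency domain, keep one band, transform back.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
low = np.fft.irfft(np.where(freqs <= fc, spectrum, 0), n=signal.size)
high = np.fft.irfft(np.where(freqs > fc, spectrum, 0), n=signal.size)
```

By construction the two bands sum back to the original signal, and here each band recovers the corresponding synthetic component.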
Once the data are separated into different frequency components,
visualizing seismic data becomes a multivariate data visualization problem. In this paper, we highlight separate frequency bands
in the visualization so that the scientists can better observe the intrinsic nature of seismic propagation. Mixing different frequency
bands together without visual cluttering and ambiguity is achieved
through several enhancement techniques including unsharp masking and deformable textures. For example, when each isolated band
is rendered using a different illustrative style, the observers can intuitively identify different frequency bands and clearly analyze their
relationships over time. To our knowledge, we are the first to utilize frequency-enhanced visualization techniques for effective understanding of the seismic data.
This paper is structured as follows. In the next section, we discuss related work. In Section 3, we give an overview of the proposed rendering process and discuss the implementation of the frequency analysis and seismic data visualization in detail, along with several enhancement techniques. In Section 4, we demonstrate the proposed method on earthquake seismic data. Section 5 concludes the paper with a short overview of the presented concepts.
2 RELATED WORK
al. [10] introduced techniques for visualizing interpreted and uninterpreted seismic data. The non-photorealistic rendering technique
was adopted to render the interpreted data as geological illustrations, while the uninterpreted data was rendered in color-coded volume. They also discussed how to combine the two representations
together so that the users can control the balance between the two
visualization styles accordingly. The concept of focus+context visualization metaphors was presented by Ropinski et al. [11] where
interactive exploration of volumetric subsurface data is supported.
By using specialized 3D interaction metaphors, the user is able to
switch between different lens shapes as well as visual representations, such as emphasizing or removing arbitrary parts of a data set.
To visualize massive data from large-scale earthquake simulations, Ma et al. [7] presented a parallel adaptive rendering algorithm
for visualizing time-varying unstructured volume data. Their goal
was to come up with a scalable, high-fidelity visualization solution
that allows scientists to explore the temporal, spatial, and visualization domains of their data. Yu et al. [21] proposed a parallel
visualization pipeline for studying the terascale earthquake simulation. Their solution is based on a parallel adaptive rendering algorithm coupled with a new parallel I/O strategy which effectively
reduces interframe delay by dedicating some processors to I/O and
preprocessing tasks.
2.2 Time-Varying and Frequency Data Visualization
Fang et al. [6] proposed a method to visualize and explore time-varying volumetric medical images based on the temporal characteristics of the data. The basic idea is to consider a time-varying
data set as a 3D array where each voxel contains a time-activity
curve (TAC). Matching TACs based on a given template TAC essentially classifies voxels with similar temporal behaviors. Our work
is similar to this work in the sense that the F-T analysis operates on
each individual TAC.
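The TAC matching idea can be sketched as a normalized-correlation scan over per-voxel time series. The volume shape, template curve, and planted match below are illustrative assumptions, not data from either paper:

```python
import numpy as np

# Hypothetical 4D data: a 3D grid of voxels, each holding a TAC of 32 steps.
rng = np.random.default_rng(0)
volume = rng.normal(size=(4, 4, 4, 32))
template = np.sin(np.linspace(0, 2 * np.pi, 32))
volume[2, 2, 2] = template + 0.01 * rng.normal(size=32)  # planted match

def tac_similarity(volume, template):
    """Normalized correlation of every voxel's TAC against the template."""
    v = volume - volume.mean(axis=-1, keepdims=True)
    t = template - template.mean()
    num = (v * t).sum(axis=-1)
    den = np.linalg.norm(v, axis=-1) * np.linalg.norm(t)
    return num / den

scores = tac_similarity(volume, template)
best = np.unravel_index(np.argmax(scores), scores.shape)
```

Thresholding `scores` would classify voxels with similar temporal behaviors, which is the essence of the TAC-matching approach.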
In frequency data analysis in visualization, Neumann et al. [9]
presented a feature-preserving volume filtering method. The basic idea is to minimize a three-component global error function penalizing the density and gradient errors and the curvature of the unknown filtered function. The optimization problem leads to a large linear system that can be efficiently solved in the frequency domain using the fast Fourier transform (FFT). Wu et al. [20]
introduced the 3D F-T analysis of seismic profile using the wavelet
transform, which provides an effective way for the subsequent analysis. Erlebacher et al. [5] also studied a wavelet toolkit for visualization and analysis of large earthquake data sets.
2.3 Deformation, Texture, and Color in Visualization
Chen et al. [2] introduced the concept of spatial transfer functions
as a unified approach to volume modeling and animation. A spatial
transfer function is a function that defines the geometrical transformation of a scalar field in space, and is a generalization and
abstraction of a variety of deformation methods. They proposed
methods for modeling and realizing spatial transfer functions, including simple procedural functions, operational decomposition of
complex functions, large-scale domain decomposition and temporal spatial transfer functions.
Effective utilization of color and texture is the main theme of
several research efforts. Sigfridsson et al. [13] combined scalar
volume rendering with glyphs. They presented a method for visualizing data sets containing tensors in 3D using a hybrid technique
which integrates direct volume rendering with glyph-based rendering. Interrante et al. [12, 15] described new strategies for effective utilization of colors and textures to represent multivariate data.
They provided a comprehensive overview of strategies to represent
multiple values at a single spatial location, and presented a new
technique for automatically interweaving multiple colors through
the structure of an acquired texture pattern. Wang et al. [17] de-
|H_c(jω)|^2 = 1 / (1 + (ω/ω_c)^(2N)),      (1)
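As a quick sanity check of Equation 1, the squared magnitude response of the Butterworth low-pass filter can be evaluated directly; the function name and the sample orders below are illustrative:

```python
def butterworth_magnitude_sq(omega, omega_c, N):
    """|H_c(j*omega)|^2 = 1 / (1 + (omega/omega_c)^(2N))."""
    return 1.0 / (1.0 + (omega / omega_c) ** (2 * N))

# At the cutoff frequency the squared magnitude is exactly 1/2 (-3 dB),
# independent of the order N.
half_power = butterworth_magnitude_sq(1.0, 1.0, N=4)

# A higher order gives a sharper rolloff above the cutoff.
low_order = butterworth_magnitude_sq(2.0, 1.0, N=2)
high_order = butterworth_magnitude_sq(2.0, 1.0, N=6)
```

The flat passband and monotonic rolloff are what make the Butterworth family a common choice for this kind of multiband separation.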
model, able to represent layers of soil of different materials and liquefaction properties. The simulation then obtains the displacements
at each node by solving the FE system:
M d^2U/dt^2 + B^T d + Q p = f_S,      (2)
Q^T dU/dt + S dp/dt + H p = f_p,      (3)
[Figure 2 panel annotations: Samoa Channel (North-West), Eureka Channel (South-East); X, Y, Z axes; drift motion (permanent deformation) and shaking motion (cyclic component).]
Figure 2: Volume visualization of the HBMC bridge for time steps t = 380, 480. (a) shows the original view of the unfiltered seismic data. (b) and (c) show the low-pass permanent deformation and the high-pass cyclic components in warm and cool colors, respectively. (d) shows the combined visualization of both low- and high-pass data, in which the different components can be easily identified.
Figure 3: Color map used for the images in this paper and the range of values. The high-frequency components are given more granularity than the permanent component.
f_L = f * G,                 (4)
f' = f + λ (f − f_L),        (5)

where G(t) is a low-pass filter in the time dimension, which is convolved with the original signal (f * g denotes convolution). The parameter λ indicates the degree of enhancement. Figure 4 shows the
result of applying unsharp masking for three consecutive frames.
Note the appearance of regions of higher magnitude due to the addition of the high-frequency magnitude. These appear in the images as purplish regions. Because the enhancement occurs before classification, it may be difficult to extract the actual magnitudes of the low- and high-frequency components, and it becomes increasingly difficult as the permanent component dominates the cyclic component. For this reason, we turn to optical operations, as described in
the following section.
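A minimal sketch of temporal unsharp masking on a single time series, assuming a Gaussian temporal low-pass G and an enhancement parameter λ (`lam`); the signal and parameter values are illustrative:

```python
import numpy as np

def temporal_unsharp(signal, sigma, lam):
    """f' = f + lam * (f - f*G): boost what the temporal low-pass removes."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                               # normalized Gaussian kernel G
    low = np.convolve(signal, g, mode="same")  # f * G, the temporal low-pass
    return signal + lam * (signal - low)

t = np.linspace(0, 1, 200)
f = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 30 * t)
enhanced = temporal_unsharp(f, sigma=5.0, lam=1.2)
```

With λ = 0 the signal is unchanged; with λ > 0 the high-frequency (shaking) part is amplified relative to the slowly varying drift.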
4.2 Optical Enhancement
This method combines the different components and performs enhancement in the optical domain, i.e., after classification of the filtered data. In general, classification is a mapping from a scalar field, in our case magnitude, to color C and opacity α. If we defer enhancement until after classification, for the low and high components we obtain colors and opacities (C_L, α_L) and (C_H, α_H), respectively. Therefore, enhancement can be defined as a combination of these. The
combined color and opacity are then

C = C_L (1 − α_H) + C_H α_H,      (6)
α = α_L (1 − α_H) + α_H.          (7)
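One plausible way to combine the classified components in the optical domain is standard over-compositing of the high-pass layer on the low-pass layer. This sketch is an assumption about the exact blend, with hypothetical colors and opacities:

```python
import numpy as np

def composite(c_low, a_low, c_high, a_high):
    """Blend classified components: high-pass layer over low-pass layer."""
    c = c_high * a_high + c_low * (1.0 - a_high)
    a = a_high + a_low * (1.0 - a_high)
    return c, a

# Warm color for the permanent (low-pass) component,
# cool color for the cyclic (high-pass) component.
warm = np.array([1.0, 0.5, 0.2])
cool = np.array([0.2, 0.4, 1.0])
c, a = composite(warm, 0.8, cool, 0.5)
```

A transparent high-pass layer leaves the low-pass classification untouched, while a fully opaque one replaces it, so the two extremes behave as expected.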
H(x) = T(x − D(x, t))             (8)
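The deformed texture lookup of Equation 8 can be sketched in 1D, with a hypothetical stripe texture standing in for the procedural pattern:

```python
import numpy as np

def deformed_sample(texture, x, displacement):
    """H(x) = T(x - D(x, t)): look up the texture at the displaced position."""
    coord = (x - displacement) % 1.0            # wrap the coordinate into [0, 1)
    idx = int(coord * texture.size) % texture.size
    return texture[idx]

# 1D stripe texture: alternating dark/bright texels.
texture = np.array([0.0, 1.0] * 8)
sample_still = deformed_sample(texture, 0.25, 0.0)      # no displacement
sample_moved = deformed_sample(texture, 0.25, 0.0625)   # shifted by one texel
```

Because the displacement field D varies over time, the stripes appear to move with the underlying motion, which is what makes the texture "deformable".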
IMPLEMENTATION DETAILS
as we need to store several scalar fields for a single vertex, corresponding to the different components of the displacement magnitude. This method also results in a more compact encoding that allows us to store more timesteps in GPU memory. In our implementation, the different enhancement techniques are processed during rendering, which allows us to change their parameters on the fly. The addition of the different techniques makes the rendering process slower than simple volume rendering due to extra texture fetches. In addition, barycentric interpolation within the tetrahedron may be costly when we need to interpolate additional quantities (e.g., the high-frequency component, displacements). For the case of deformable textures, we decided to define the texture procedurally rather than adding an extra fetch, since in this case pixel processing is in general faster than texture fetches. Table 1 summarizes the extra cost of each technique.
Technique                           | Total Cost                                                                        | Average (fps)
Unsharp masking                     | Fetch high-pass: 4 + n·t_i                                                        | 16.66
Post-classification unsharp masking | Fetch high-pass + classify high-pass: 4 + n(t_i + t_f)                            | 14.3
Displacement                        | Fetch high-pass + deform coordinate + classify high-pass: 4 + n(t_i + t_f + t_b) | 5.0

Table 1: Cost of enhancement techniques in terms of extra texture fetches and interpolation operations, and overall performance in frames per second (volume rendering without lighting). t_i refers to the time it takes to perform scalar value interpolation, t_f is the time of a texture fetch, and t_b is the cost of barycentric interpolation for 3D displacements.
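The barycentric interpolation cost t_b corresponds to computing per-sample weights inside a tetrahedron and blending the per-vertex quantities; a minimal sketch (the vertex positions and scalar values are illustrative):

```python
import numpy as np

def barycentric_weights(p, v):
    """Barycentric coordinates of point p in the tetrahedron with vertices v (4x3)."""
    # Solve [v1-v0, v2-v0, v3-v0] * w = p - v0 for the last three weights;
    # the first weight is whatever remains so that all four sum to one.
    T = np.column_stack([v[1] - v[0], v[2] - v[0], v[3] - v[0]])
    w = np.linalg.solve(T, p - v[0])
    return np.concatenate([[1.0 - w.sum()], w])

verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
scalars = np.array([0.0, 1.0, 2.0, 3.0])   # per-vertex values, e.g. magnitude
p = np.array([0.25, 0.25, 0.25])           # the centroid of this tetrahedron
w = barycentric_weights(p, verts)
value = float(w @ scalars)
```

Each extra quantity interpolated per sample repeats the weighted sum, which is why adding the high-frequency component and displacements increases the per-sample cost.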
Figure 4: Volume visualizations of the HBMC bridge with temporal unsharp masking applied, for time steps (a) t = 380, (b) t = 480, (c) t = 580. One of the problems with this approach is that the unsharp masking exaggerates the magnitude of the original displacement, and it is difficult to extract the actual shaking from the low-frequency component.
Figure 5: HBMC bridge volume visualization with combinations of low- and high-pass components in the optical domain for three consecutive time steps, (a) t = 380, (b) t = 480, (c) t = 580. The superposition of cool and warm colors indicates the overlapping of the two displacement components, i.e., drift and shaking. Hence, although the drift accumulates, the shaking remains fixed within an interval (blue color).
Figure 6: Examples of post-classification unsharp masking with λ = 1.2 for three time steps, (a) t = 380, (b) t = 480, (c) t = 580. Compared to Figure 5, we can now clearly see the distribution of shaking, especially in the first timestep. For timestep t = 580, the enhancement helps identify more isosurfaces in the shaking component.
[Figure: ray sampling inside a tetrahedron with vertices v1-v4; the n samples along the ray are looked up in the scalar texture and mapped through the transfer function.]