
Scaling Siebel CRM Solutions with Oracle 10g RAC

James Qiu, Anda Zhao


Siebel Systems, Inc.
2207 Bridgepointe Parkway
San Mateo, CA 94404
USA

Larry Chen
SRS2, Inc.
One Market St, Spear Tower, Suite 2260
San Francisco, CA 94105
USA

November 15, 2004

Executive Summary
Customers using Siebel Customer Relationship Management (CRM) solutions are
increasingly demanding greater scalability and high availability to support mission-
critical operations and continued business growth. Such environments are characterized
by 24x7 operations and several thousand concurrent users. In addition, many large
Siebel customers are interested in leveraging the Linux operating system to decrease the
cost of deployment for mission-critical enterprise software.

Increasing the overall capacity of a Siebel CRM environment requires scaling in both the
middle tier and database tier of Siebel's N-tier architecture. The middle tier is designed to
scale up by adding more CPUs to a single Siebel Application Server and to scale out
easily through the addition of Siebel Application Servers. Traditionally, the database tier
could only be scaled up by replacing one proprietary computing platform with another
more powerful platform to get more database performance. The availability of Oracle 10g
Real Application Clusters (RAC) changes this paradigm, allowing the database tier to
scale out through the addition of lower cost servers, including those running Linux.

Siebel partnered with SRS2 in collaboration with Oracle and several hardware vendors to
test the viability of this approach in Siebel environments under real-world conditions. In
benchmarks utilizing a single server, a two-node Oracle 10g RAC cluster and a four-node
Oracle 10g RAC cluster, the results show that Oracle 10g RAC offers 80% scalability up
to 4 nodes. A single medium-sized database node supported up to 2,500 users, a 2-node
cluster supported 4,000 users, and a 4-node cluster supported 8,000 users (all servers at
75% CPU utilization), proving the viability of the scale-out approach.

This paper discusses the details of this investigation carried out using Egenera
BladeFrame servers and Network Appliance Unified Storage.

I. Introduction
Siebel customer relationship management (CRM) solutions enable organizations to
utilize an integrated set of customer-driven best practices across their sales, marketing,
customer service, and partner organizations. Employees can manage, coordinate, and
synchronize all customer touch-points, including web, call center, field, retail, and
distribution channels using comprehensive Siebel CRM software.

The operation of Siebel CRM solutions is highly dependent on the underlying IT
infrastructure, which must scale as a business grows to enable Siebel software to handle a
larger number of users and increased transaction rates. The infrastructure must also
provide the necessary level of availability to support business objectives.

Siebel software uses an N-tier architecture as illustrated in Figure 1. Business logic is
implemented in the middle tier by Siebel Application Servers tailored to provide the
specific functionality required by each customer. Specific software functionality can be
executed on any Siebel Application Server to provide necessary services for both
interactive and batch operations. Siebel software also has the flexibility to group services
on dedicated servers or execute them in parallel across servers to optimize the
performance of particular functions. The middle tier is designed to scale up by adding
more processors to a Siebel Application Server or to scale out easily through the
addition of Siebel Application Servers as needed, providing both scalability and high
availability.

Figure 1. Siebel N-tier architecture.


On the back end, Siebel utilizes a standard third-party relational database platform such as
Oracle, IBM DB2, or Microsoft SQL Server to provide a centralized data repository. The
scalability and availability of the backend database play a critical role in the overall
scalability and availability of a Siebel environment.

Because databases have traditionally been constrained to run only on a single server,
Siebel customers have typically followed a scale-up strategy for the database part of
the Siebel IT infrastructure. Whenever the database server becomes a bottleneck to
overall application performance, the server is replaced with a larger, faster machine.
While this approach is well understood, it can be highly disruptive to ongoing business

operations and expensive to implement.

Oracle 10g Real Application Clusters (Oracle 10g RAC) provides a potential alternative
approach for scaling database performance for Siebel applications. Since Oracle 10g
RAC is supported on Linux and Microsoft Windows 2003 Advanced Server, lower-cost,
industry-standard hardware platforms can be used. Oracle 10g RAC is designed to scale
through the addition of server hardware to a database cluster. Each server runs against the
same database, allowing the database infrastructure to be scaled out as needs grow
while also providing high availability. This approach promises to be less disruptive to
ongoing business operations, more reliable, and less expensive to implement.

To investigate the scalability of this approach, Siebel worked with SRS2
(http://www.srs2.com), an Oracle-certified partner specializing in the deployment and
tuning of grid applications and infrastructures. Siebel and SRS2 have worked together to
benchmark the performance of Oracle 10g RAC with loads generated from Siebel
Applications using a single node, two nodes, and four nodes in a RAC cluster running
Red Hat Linux.

This paper explores the results of these studies, proving the viability of the scale-out
approach and providing a set of best practices for implementing Oracle 10g RAC for use
with Siebel Applications.

II. Test Infrastructure and Configuration


The reference architecture used for testing was chosen to provide a level of scalability
and availability commensurate with the needs of typical Siebel customers to help ensure
that the test results would be relevant to real customer environments. Additional
objectives included:

- Minimal IT support
- Easy to deploy/provision/re-configure
- Flexibility to allow new servers to be added to any tier
- Small footprint
- Centralized administration
- Ability to run multiple tests against different databases in parallel

The test infrastructure also had to meet the requirements of Oracle 10g RAC, which
utilizes clustered hardware to run multiple nodes (Oracle instances) against a single
database. If one cluster node fails, the other nodes continue to provide uninterrupted
access to the database. Database files are stored on shared storage that is physically or
logically connected to each node. To maintain the consistency of the database, Oracle
10g RAC software coordinates all database modifications between cluster nodes. A
cluster interconnect enables database instances to pass control information and data to
each other.
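In practical terms, clustered operation is enabled through a handful of instance parameters.
The following excerpt, drawn from the full init.ora listed in Appendix B, shows the
RAC-specific settings used in this project (instance_number and thread are per-instance
values; the listing in Appendix B appears to correspond to node 1):

    cluster_database = TRUE
    cluster_database_instances = 4
    instance_number = 1
    thread = 1
    remote_listener = LISTENERS_R10G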

Processing Hardware, Cluster Interconnect, and Networking
Because of the coordination between nodes, Oracle 10g RAC requires a clustered
hardware platform with a high-speed cluster interconnect. Egenera
(http://www.egenera.com/) was selected to provide the processing hardware for these
tests. Egenera's BladeFrame has been pre-qualified by Oracle for use with both Oracle
RAC versions. A BladeFrame system consists of a set of Egenera Processing Blade
resources. Individual blades are tightly linked through the Egenera BladePlane
backplane and management software system to enhance the operation of clustered
applications. A central management console simplifies scaling, monitoring and managing
the individual servers that make up the BladeFrame. Since the BladeFrame consolidates
all servers in a single cabinet with an integrated cluster interconnect, the selection of
Egenera substantially simplified the test configuration.

Two Control Blades (C Blades) in the Egenera BladeFrame were used for external I/O
and IP networks, while two Switch Blades (S Blades) were used for point-to-point
connections between Processing Blades and Control Blades over an integrated switched
fabric network inside the BladePlane running at 2.5 Gigabits/sec.

The BladeFrame architecture was designed for mission critical applications and has built-
in High Availability with N+1 failover. With no local disk, Processing Blades have no
permanent identity. This proved ideal for testing since individual blades could be
assigned different tasks and re-assigned as testing conditions were altered.

The individual blades in the Egenera BladeFrame were allocated to support one of four
different functions:

1. Load Generators: Configured with Windows 2003 Advanced Server and used
during testing to simulate user loads, thereby taking the place of the client tier in
Siebel's N-tier architecture. Each blade was capable of simulating up to 2,000
users.
2. Application Servers: Configured as Siebel Application servers running Siebel
7.7 software on top of Windows 2003 Advanced Server. These Application
servers were divided into two clusters of 5 servers each for high availability.
3. Web servers: Configured as web servers running Windows 2003 Advanced
Server. All requests from the load generators are sent over HTTP to the web servers,
which in turn make calls to the Application Servers.
4. Database servers: Configured with Oracle 10g RAC and Red Hat Linux
Advanced Server 2.1 (kernel 2.4.9-e.25). Each database server was
configured with 4 network interfaces, two for message passing between nodes,
and two for external network connections to Siebel Application Servers. In each
case, one interface served as primary and the second as backup to ensure high
availability.

Machine Configuration      OS            Quantity  Comment
2-way 3.06GHz, 6GB RAM     Linux         4         DB Server: 4-node RAC
4-way 2GHz, 12GB RAM       Windows 2003  10        Siebel Apps Server
2-way 2.4GHz, 2GB RAM      Windows 2003  2         Web Server
2-way 2.4GHz, 6GB RAM      Windows 2003  6         LoadRunner

Table 1. Distribution of functions across various Egenera blades.


Storage Systems
Storage systems were chosen to deliver a level of flexibility, reliability and availability
consistent with the Egenera processing hardware. For this project, Network Appliance
(http://www.netapp.com) unified storage systems were selected. NetApp storage systems
are capable of supporting fibre channel SAN, iSCSI SAN, and traditional NAS data
services, such as NFS and CIFS, simultaneously.

NetApp provided a highly available FAS980 cluster for Siebel testing. Each FAS980 has
a maximum disk capacity of 32 TB, dual 2.8GHz Intel Pentium 4 CPUs with 2MB of
Level 3 cache, 8GB of system memory, and 512MB of NVRAM (nonvolatile RAM). The
cluster consisted of two FAS980 systems in an active-active configuration. Both storage
systems are active during normal operation. Should one system fail, the other takes over
the workload and physical storage of the failed system.

The NetApp storage cluster was connected to the Egenera BladeFrame via a fibre channel
SAN. The NetApp storage cluster provided block-level storage for each of the Egenera
blades, serving as operating system "boot" devices. Block-level storage for the Oracle
databases and Siebel Application Servers used during testing was also provided by the
NetApp cluster.

The NetApp storage cluster provided a highly scalable, highly reliable, easily managed
virtual storage environment that facilitated the testing through its unique feature set. As
Siebel and Oracle application services were added, deleted, or moved between various
processing resources on the Egenera BladeFrame, the storage system was able to easily
accommodate changes by cloning, deleting, and transparently moving LUNs. This rapid
LUN cloning capability, accomplished using NetApp Snapshot technology, is particularly
useful when applied to blade computing. These processes are described more completely
in a NetApp white paper entitled Applications for Writeable LUNs and LUN Cloning in
Oracle Environments at http://www.netapp.com/tech_library/3266.html.

Enterprise-class data protection capabilities, including on-demand backup and restore, are
key features of NetApp storage in mission-critical, high-transaction environments like
those often found in Siebel and Oracle deployments. Testing was greatly simplified and
accelerated using NetApp Snapshot and SnapRestore. Snapshot was used to create a data
consistency "checkpoint". After testing and data gathering, the test configuration was
instantly returned to the earlier "checkpointed" state using SnapRestore to ready it for the
next test iteration. The advanced regression testing capabilities offered by NetApp were
key to the test team's ability to deliver the results of this project ahead of schedule.

SAN Configuration
A pair of Brocade Silkworm 4100 2Gbit/sec fibre channel switches was used to provide
the SAN fabric between the NetApp cluster and the BladeFrame. The SAN was designed
to ensure no single points of failure for high availability.

Load Generation Software


Mercury Interactive LoadRunner software was used to simulate the required number of
concurrent users for each test cycle. LoadRunner is designed to emulate hundreds or
thousands of concurrent users to put an application through real-life user loads while
maintaining performance metrics on every transaction. LoadRunner is certified to work
with Siebel.

Figure 2 shows how the hardware was deployed at Siebel's data center. The twenty-four
servers required for the testing were housed, neatly organized, within the BladeFrame,
significantly reducing the wiring and data center space required.

Figure 2. Physical layout of Egenera and NetApp infrastructure for testing.

The following diagram shows the logical layout of Egenera blades and NetApp storage
systems.

Figure 3. Logical layout of the Egenera/NetApp/Brocade infrastructure. The C Blades indicated in the
figure are Egenera Control Blade modules, which are used for external I/O.

III. Test Methodology and Design
Siebel selected the test methodology to closely represent a real customer operating
environment such as a call center. Siebel CRM applications include a rich array of
component modules. These modules fall into two general classes:

- Object Manager (OM) based components that typically run interactively.
- Server components that run in the background or in batch mode.

Two specific Siebel CRM components were chosen for testing:

- Siebel CallCenter is Siebel's call center management module, which enables an agent
  to handle service, support, and sales interactions seamlessly. With CallCenter, an
  agent can support a broader range of products and services from a single integrated
  tool.
- Siebel eChannel is Siebel's Partner Relationship Management module, designed to
  allow companies to effectively manage partner relationships. It provides partners
  with a broad range of capabilities, including configuring, ordering, etc., while
  maintaining a high level of security. Typically, Siebel eChannel is deployed by
  large companies to service several thousand partners.

Most Siebel customers deploy only one of these two modules. Combining both Siebel
CallCenter and Siebel eChannel produces heavier workloads than most customers will
see in their deployments. Both are OM-based products that provide a broad array of
services and functionality and have a large installed user base, typically with heavy user
loads.

Test Database
A Siebel engineering scalability database was used for all tests. This database is a
representation of the cross-functional nature of Siebel Industry Solutions; the data shape
is as close to production shapes as can be simulated with a synthetically-generated
database, and is used for Siebel Performance/Scalability/Reliability testing. The total size
of the database was 200GB. The main tables involved in the test (S_SRV_BU, S_OPTY,
etc.) contain 4-6 million rows of data each.

Database Parameters and Tuning


A limited effort was made to tune Oracle 10g RAC for optimal performance as part of the
testing. A complete discussion of the issues encountered during testing and the solutions
applied is provided in Appendix A. The complete init.ora file is provided in
Appendix B for anyone wishing to replicate these tests. Note that much of the tuning
required was specific to the characteristics of the workload under test, and may not be
necessary or appropriate for other workloads.

Tested Oracle 10g RAC Configurations and Test Criteria


All tests used standard client scalability benchmark runs against an Oracle 10g database
without batch transactions. The following test configurations were used:

1. A single active RAC node. A second RAC node was configured but inactive.
Simulated user load was added to drive the active node to 60% to 70% CPU
utilization on average. This test provided the baseline against which the scalability
of multi-node configurations was judged.
2. Two active RAC nodes. Simulated user load was added until both nodes reached 60%
to 70% CPU utilization on average.
3. Four active RAC nodes. Simulated user load was added until all nodes reached 75%
to 80% CPU utilization on average with acceptable performance.

Note that similar testing was performed with Oracle 9i RAC on an HP cluster. Those
results are not reported here. In general, customers can expect similar performance with
Oracle 9i RAC.

Simulated User Load


User load was simulated using scripts executed with Mercury Interactive's LoadRunner
software. The scripts represented the activity of a total of 39 different transactions: 33
from Siebel CallCenter (using a service request scenario with medium-level eScripting
customizations) and 6 from Siebel eChannel (using eScripting invoking workflows). The
read/write ratio was 70%/30%. Each simulated user executed the transactions in its use
case a set number of times to produce the overall transaction totals detailed in Table 2.

Type of LoadRunner User    Transactions / Use Case
CallCenter                 33
eChannel                   6

Table 2. Distribution of simulated user workload between applications.


Test Execution
Each test cycle was conducted by applying a user load to the configuration under test,
measuring the results, increasing the user load, measuring again, and continuing in this
fashion until the average CPU usage on each database node reached 65% to 70%.

Each run included a ramp-up phase, a steady-state phase, and a ramp down phase, as
shown in Figure 4.

Figure 4: Workload Characteristics.

Database CPU usage was gathered during the steady-state phase using the Linux vmstat
command. Oracle Statspack snapshots were also taken for database analysis and tuning.

Mercury Interactive LoadRunner gathered client-side data during each test. Scalability
was calculated based on the number of users and validated by the following metrics:

- Transactions per second for CallCenter and eChannel
- The overall average response time for new service requests and new opportunities
- Verification that the run completed with no failures

At the end of each run, the data was analyzed and the workload was deemed to have
passed or failed. To pass, a workload had to meet the following criteria:

- Overall, 99 percent of transactions passed
- The average response time did not exceed 1 second

IV. Benchmark Results


Table 3 shows the total number of simulated users supported during steady state test
execution.

Number of RAC Nodes  Avg. DB Server CPU (%)  Total Number of Users
1                    65%-70%                 2,500
2                    65%-70%                 4,000
4                    65%-70%                 8,000

Table 3. Total number of simulated users supported.

Adding nodes to the Oracle 10g RAC configuration results in 80% scalability. In other
words, 2 RAC nodes support 4,000 users and 4 RAC nodes support 8,000 users, compared
with the theoretical maximums extrapolated linearly from the single-node result:

- 2-node theoretical max. = 2 x 2,500 = 5,000 users; 4,000/5,000 = 80%
- 4-node theoretical max. = 4 x 2,500 = 10,000 users; 8,000/10,000 = 80%

Table 4 summarizes the total number of transactions completed per second for CallCenter
and eChannel in each configuration and details the average response time for new service
requests and new opportunities.

Number of   Transactions/sec  Transactions/sec  Avg. Response Time (s)    Avg. Response Time (s)
RAC Nodes   for CallCenter    for eChannel      for New Service Requests  for New Opportunities
1           53.60             35.70             0.01                      0.01
2           46.75             38.25             0.05                      0.01
4           44.70             32.40             0.15                      0.07

Table 4. Total number of transactions per second and average response times.

Note that the average response times are very low (less than 0.2 seconds), indicating that
interactive user response should not degrade noticeably as the Oracle RAC configuration
is scaled out.

V. Conclusion
The results presented in Section IV indicate that the scale-out strategy is a viable solution
for the database component of Siebel CRM solutions. Scaling from a single database
node to a two-node configuration results in 80% scaling with minimal change in average
response time per transaction. The 4-node configuration delivers the same 80% scaling
versus the single-node configuration.

Based on these results, Oracle 10g RAC provides Siebel users an alternative to the
traditional scale-up database strategy. A customer can start with a modest configuration
consisting of 1 or 2 nodes and expect good scaling with each node added. Please note that
Siebel Remote requires ORDERED sequences which conflict with Oracle 10g RAC
scalability requirements. Therefore, Siebel does not support Siebel Remote with Oracle
10g RAC.

Our testing also derived substantial benefits from the hardware architecture described
above. The Egenera BladeFrame made it simple to deploy and manage not only database
servers, but also the Siebel Application servers, web servers, and load generators needed
to create a complete environment and to ensure high availability for all components.
Individual blade servers could be easily re-purposed as needs changed during testing, with
a minimum of cabling and configuration alterations. This flexibility should be of benefit
in almost any dynamic IT environment.

The highly available NetApp storage cluster complemented the Egenera BladeFrame,
providing great flexibility, simplified management, and innovative features that made
testing easier. NetApp unified storage allowed us to support SAN and NAS connections
from the same storage system. During testing the test team was able to quickly and easily
reconfigure the SAN storage as needs changed. The ability to clone, delete and move
LUNs and file systems as necessary simplified the team's work, and should provide
similar benefits in any busy Siebel environment.

Appendix A. Oracle 10g RAC Tuning
In general, a RAC database does not require significantly more tuning than a single Oracle
instance. In most cases, traditional single-instance tuning techniques, such as identifying
and addressing hot spots and contention, are equally applicable to RAC.

The tuning performed for this project was based on data from Statspack reports taken at
regular intervals during steady-state testing, and on close monitoring of CPU and I/O
activity on each database node. The first two pages of a Statspack report briefly
describe the workload. The statistics and wait events presented there were monitored over
time to proactively detect performance issues and to ensure performance was optimal.
sar and vmstat provided measures of CPU usage and I/O performance during the
benchmark.
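As a minimal sketch of this data-gathering approach (assuming Statspack is installed under
its default PERFSTAT schema), a snapshot pair bracketing a steady-state interval can be
captured and reported on as follows; spreport.sql then prompts for the begin and end
snapshot IDs:

    -- Connect as the Statspack owner (PERFSTAT by default).
    CONNECT perfstat

    -- Snapshot at the start of the steady-state measurement interval.
    EXECUTE statspack.snap;

    -- ... workload runs for the measurement interval ...

    -- Snapshot at the end of the interval.
    EXECUTE statspack.snap;

    -- Generate the report; spreport.sql prompts for the begin/end
    -- snapshot IDs and an output file name.
    @?/rdbms/admin/spreport.sql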

The tuning changes described here apply to the workload created by this particular
benchmark. The workload consisted of approximately 30% inserts and 70% selects. Care
should be taken to verify that your workload is similar before applying these changes.

Tuning Recommendations

SEQUENCES

Sequences are used in the Siebel application to create sequential numbers.
Sequences have two performance-related properties: caching and ordering. For optimal
performance in RAC, sequences should use the CACHE clause with a reasonably large
cache size, and NOORDER, unless gapless or ordered sequences are required.

In the benchmark, high wait time was observed on the SQ enqueue. The insert rate was
very high, with multiple nodes inserting data simultaneously, causing leaf block contention
on the index.

Tuning:
The sequences were changed to use a large CACHE size (10,000) and NOORDER.
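A change of this kind can be applied with a statement such as the one below; the sequence
name shown is a placeholder rather than an actual Siebel object, and the sequences to alter
should be identified from the SQ enqueue waits in the Statspack report:

    -- Increase the per-instance cache and drop ordering for a hot sequence
    -- (placeholder name; apply to the sequences identified as contended).
    ALTER SEQUENCE siebel.my_hot_sequence CACHE 10000 NOORDER;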

HASH PARTITIONING

The benchmark experienced some contention on index blocks subject to high insertion
rates. This contention shows up in Statspack as high wait times for the following wait
events and affects response times:

- enq: TX - index contention
- gc buffer busy
- gc current block busy
- buffer busy waits

Tuning:
The tables and indexes under contention were identified and hash partitioned. A right-
growing index is a characteristic hot spot for OLTP applications, due mainly to the fact
that keys are generated in a more or less ordered, increasing sequence; as a result, the
right-most leaf blocks heat up. Distributing access to the leaves over multiple index
partitions alleviates the hot spots. Apart from a significant reduction in contention, local
cache affinity improves because leaf blocks are retained in a local cache longer and are
more available for local use.

It should be noted that SQL execution can be affected when partitioning tables and
indexes, because index range scans may need to access all index partitions. This can be
addressed by ensuring that the partition key is used in the WHERE clause of the query so
that partition elimination is performed.
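As an illustration only, a contended right-growing index can be rebuilt as a hash-partitioned
global index along the following lines (table, index, and column names are placeholders, and
the partition count should be sized to the number of RAC nodes and the observed contention):

    -- Recreate the hot index as a global hash-partitioned index so that
    -- concurrent inserts are spread across multiple sets of leaf blocks.
    DROP INDEX siebel.my_hot_table_idx1;

    CREATE INDEX siebel.my_hot_table_idx1
        ON siebel.my_hot_table (row_id)
        GLOBAL PARTITION BY HASH (row_id) PARTITIONS 16;

If the underlying tables are also hash partitioned, including the partition key in the
WHERE clause allows partition elimination, as noted above.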

_gc_defer_time:

The parameter _gc_defer_time represents the time a block is deferred (if a cleanout is
pending for the block) before being shipped to the requesting node. The default is 30ms.
The parameter generally increases the local affinity for a particular block.

In the current benchmark, waits were observed on the "global cache null to x" event, along
with a high ratio of current block defers (greater than 0.3).

This event is waited on when an instance wants to modify a current block and does not
find it in its local cache in the appropriate access mode. After sending the request to the
other node, the session waits for the current block transfer from the remote cache. The
latency of this operation is strongly influenced by the time it takes the serving instance to
release the block, and one of the main delaying factors is _gc_defer_time.

Tuning:
Turn off _gc_defer_time by setting it to 0. The current block is not deferred and is
shipped immediately to the requesting node. Note that this tuning is specific to the
current workload and might not work for all workloads.
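In an spfile-managed environment, this kind of change could be made with a statement like
the following (a sketch only; in this benchmark the value was simply set in the init.ora, as
shown in Appendix B, and underscore parameters should normally only be changed under
guidance from Oracle Support):

    -- Disable block deferral on all instances; takes effect after restart
    -- when set only in the spfile.
    ALTER SYSTEM SET "_gc_defer_time" = 0 SCOPE=SPFILE SID='*';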

Hyperthreading

Hyper-Threading Technology is a form of simultaneous multi-threading in which
multiple threads can be run simultaneously on one processor. This is achieved by
duplicating the architectural state on each processor while sharing one set of processor
execution resources. Hyper-Threading Technology also delivers faster response times for
multi-tasking workloads.

Oracle runs without modification on any OS that recognizes a hyperthreading-enabled
system, and it will take full advantage of the logical CPUs (assuming the OS reports that
hyperthreading is enabled).

Tuning:
During the benchmark, enabling hyperthreading helped eliminate some high run
queues.

HW Enqueue Contention

Statspack showed significant contention on the HW enqueue (enq: HW - contention). In
Oracle, the high-water mark (HWM) of a segment is a pointer to the data block up to which
blocks are formatted and available for inserting new data. If data is inserted at a high rate,
new blocks may have to be made available after a search of the freelists is unable to
return any space. This involves formatting the blocks, inserting them into a segment
header or bitmap block, and raising the HWM.

Tuning:
The fast-growing segments were identified. Uniform, large extent sizes were used for the
locally managed, automatic segment space managed segments that are subject to high-
volume inserts. This alleviated some of the HW enqueue contention.
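A sketch of this approach is shown below; the tablespace name, datafile path, extent size,
and segment names are illustrative assumptions rather than the actual benchmark objects:

    -- Locally managed tablespace with large uniform extents and automatic
    -- segment space management (ASSM) for fast-growing segments.
    CREATE TABLESPACE siebel_hot_data
        DATAFILE '/u02/oradata/siebel/siebel_hot_data01.dbf' SIZE 8G
        EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M
        SEGMENT SPACE MANAGEMENT AUTO;

    -- Move a fast-growing table into the new tablespace; its indexes must
    -- be rebuilt afterward because the move marks them UNUSABLE.
    ALTER TABLE siebel.my_hot_table MOVE TABLESPACE siebel_hot_data;
    ALTER INDEX siebel.my_hot_table_idx1 REBUILD TABLESPACE siebel_hot_data;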

Appendix B. Oracle 10g init.ora Settings


aq_tm_processes 0
cluster_database TRUE
cluster_database_instances 4
compatible 10.1.0.3.0
db_block_size 8192
db_cache_size 1912602624
db_file_multiblock_read_count 16
disk_asynch_io TRUE
fast_start_mttr_target 3600
hash_area_size 1024000
instance_number 1
job_queue_processes 1
local_listener LISTENER_SDCEGENR-ORACLE1
log_buffer 6144000
open_cursors 1000
optimizer_index_cost_adj 1
pga_aggregate_target 0
processes 1000
remote_listener LISTENERS_R10G
remote_login_passwordfile EXCLUSIVE
shared_pool_size 402653184
sort_area_size 1024000
thread 1
timed_statistics TRUE
undo_management AUTO
undo_retention 900
undo_tablespace UNDOTBS1
_gc_defer_time 0

References:

[1] NetApp White Paper, Creating a UNIX-based Database Software Testing Environment
    Using a NetApp Filer
[2] NetApp White Paper, Applications for Writeable LUNs and LUN Cloning in Oracle
    Environments, June 2003
[3] NetApp White Paper, Using Oracle Database 10g Automatic Storage Management with
    Network Appliance Storage, June 2004
[4] NetApp White Paper, Oracle 10g Real Application Clusters Release 1: Installation
    with a Filer in a Red Hat Enterprise Linux (RHEL 3) Environment, September 2004
[5] Egenera Benchmark, Demonstrating Oracle9i Real Application Clusters (RAC) on the
    Egenera BladeFrame System: The Calling Circle Problem, 2003
[6] Egenera White Paper, Advantages of Running Oracle 9i Database with Real
    Application Clusters on the Egenera BladeFrame System, 2002
[7] Egenera Product Brief, Oracle 9i RAC on the Egenera BladeFrame System, 2003

Acknowledgements
The authors would like to acknowledge the following individuals for their contributions
to this project:

Egenera: Kathy Yenke, Blaine Lincoln and Joe Gorski
Network Appliance: Eric Melvin, Antonio Robinson, Lane Spiers and Philip Trautman
Oracle: Kavitha Raghunathan, Anna Leyderman and Michael Zoll
Siebel: Frank Lu, Francisco Casa, Mark Farrier and Kelly Lawler
SRS2: Eric He, Fang Su and Peter Wang

2004, Siebel Systems Inc.

