Technical Report
Trading Networks Performance
January 2006
Copyright
© 2006 webMethods, Inc. All rights reserved.
Trademarks
The webMethods logo and Get There Faster are trademarks or registered trademarks of
webMethods, Inc.
Statement of Conditions
webMethods, Inc. may revise this publication from time to time without notice. Some
states or jurisdictions do not allow disclaimer of express or implied warranties in certain
transactions; therefore, this statement may not apply to you.
All rights reserved. No part of this work covered by copyright herein may be reproduced
in any form or by any means—graphic, electronic or mechanical—including
photocopying, recording, taping, or storage in an information retrieval system, without
prior written permission of the copyright owner.
This technical report is one of a series that defines and measures synthetic benchmarks
representative of how webMethods 6.5 is used in the field. These benchmarks
were designed to be readily applied to various real-world deployments. This report
focuses on Trading Networks.
The intended audiences are application architects, developers and managers involved in
capacity planning. This document will not, by itself, enable capacity planning. It shows the
relative performance of several Trading Networks configurations, which can be used to
weigh design-time choices and to provide input into the hardware selection process.
For all tests, two servers were used: one ran Trading Networks and the other ran
Microsoft SQL Server 2000. The same SQL Server system was used for all tests, but one
of two different systems was used for Trading Networks.
The TN1 and DB systems were running Windows 2000 Advanced Server with service
pack 4 (build 2195), and the IBM 1.4.2 JVM was used for all tests. The TN2 system was
running Solaris 9 and the Sun 1.4.2 server JVM was used for all tests.
All the servers used Gigabit Ethernet (over copper). The load generation systems used a
mixture of gigabit over copper and 100-megabit Ethernet. The systems were on an
isolated network connected via a Cisco switch.
The TN Server, unless otherwise noted, was left in its default configuration except for the
following changes:
watt.debug.level=1 in server.cnf
watt.server.auditLog=off in server.cnf
watt.server.auditGuaranteed=false in server.cnf
watt.server.auditLog.session=off in server.cnf
JAVA_MIN_MEM=1500M in server.bat or server.sh
JAVA_MAX_MEM=1500M in server.bat or server.sh
The Trading Networks database pool was set to a minimum of 30 connections with a
maximum of 100 connections.
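The pool behavior described above (a minimum number of connections held open, with growth capped at a maximum) can be sketched generically. This is an illustrative sketch, not webMethods code; the class name and single-threaded design are assumptions for clarity.

```python
import queue


class ConnectionPool:
    """Generic sketch of a min/max connection pool (illustrative only).

    `minimum` connections are opened up front; the pool never grows
    beyond `maximum`. Not thread-safe -- a real pool would guard
    `size` with a lock.
    """

    def __init__(self, connect, minimum=30, maximum=100):
        self.connect = connect
        self.maximum = maximum
        self.size = 0
        self.idle = queue.Queue()
        # Pre-open the minimum number of connections.
        for _ in range(minimum):
            self.idle.put(connect())
            self.size += 1

    def acquire(self):
        try:
            return self.idle.get_nowait()   # reuse an idle connection
        except queue.Empty:
            if self.size < self.maximum:    # grow up to the maximum
                self.size += 1
                return self.connect()
            return self.idle.get()          # at the cap: block for a return

    def release(self, conn):
        self.idle.put(conn)
```

Keeping a minimum of 30 connections warm avoids paying connection-setup cost under steady load, while the cap of 100 bounds the demand placed on the database server.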
The database system was running Microsoft SQL Server 2000. The DB system was
dedicated to that task.
The heart of the SilkPerformer environment is the Workbench. It provides the user
interface where test scenarios are defined and run. The load generator systems, a
heterogeneous pool of systems running Windows 2000 Server, inject the load into the
test system under the control of the Workbench. The load generators send their
performance metrics back to the Workbench for correlation and display.
TN Test
This test measures the ability of Trading Networks to route documents. The
SilkPerformer load generator submits an XML document via HTTP POST to the
wm.tn:receive service. Depending on the test, the document was 1KB, 10KB, or 100KB.
TN recognizes the document; the matching processing rule persists the document to the
database and synchronously calls a no-op service.
The client ignored the IS session and did not use keep-alives, so each request mimicked
a new user submitting a request.
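A submission like the one described can be sketched as a plain HTTP POST that disables keep-alive and reuses no session state. The host, port, and `/invoke/wm.tn/receive` URL path are illustrative assumptions; only the `wm.tn:receive` service name comes from the report.

```python
import http.client


def post_document(host, port, xml_bytes):
    """POST one XML document to TN, mimicking a brand-new client:
    no session cookie is reused and keep-alive is disabled."""
    conn = http.client.HTTPConnection(host, port)
    headers = {
        "Content-Type": "text/xml",
        # "Connection: close" means each request behaves like a new user.
        "Connection": "close",
    }
    # Assumed URL path for invoking a service by folder/name.
    conn.request("POST", "/invoke/wm.tn/receive",
                 body=xml_bytes, headers=headers)
    resp = conn.getresponse()
    status = resp.status
    conn.close()
    return status
```

Because every request opens a fresh TCP connection, the measured throughput includes connection-setup overhead, which is the intended "new user per request" behavior.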
The purpose of the test was to measure the effect of document size, the number of
document types, the number of partners, and the number of processing rules on throughput.
The load generation added one client thread every two minutes and ramped up load
beyond the point of peak throughput.
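The ramp-up pattern described above can be sketched as a simple step-load loop. This is an illustrative sketch, not SilkPerformer code; the function name and threading model are assumptions.

```python
import threading
import time


def ramp_up(worker, max_users, step_seconds=120):
    """Start one new worker thread every `step_seconds` (two minutes in
    the tests above), ramping load until `max_users` are running."""
    threads = []
    for user_id in range(max_users):
        t = threading.Thread(target=worker, args=(user_id,), daemon=True)
        t.start()
        threads.append(t)
        time.sleep(step_seconds)  # hold the step before adding the next user
    return threads
```

Stepping the load this slowly lets throughput stabilize at each concurrency level, so the point of peak throughput can be read off the resulting curve.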
A no-op service was used so that TN itself could be measured and to minimize the effect
of any workload. Real-world usage matching these same processing rules would be
somewhat slower depending on the actual service invoked. These numbers should be
treated as best-case in that only TN-related work was performed. However, they are not
best-case in the sense that the rules or data were not skewed to produce higher
throughput.
The matching rules were placed at the bottom of the rules list. This means that TN had
to check each rule in the list before it found the matching rule. For each non-matching
rule, a check was made for two attributes that were guaranteed not to exist in the sample
data. In real-world usage, assuming each rule is equally likely to match, these test cases
would be roughly equivalent to having twice as many processing rules, since on average
half the rules would need to be checked before a match is found.
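The equivalence argued above follows from simple expected-value arithmetic, sketched here (the function name is illustrative):

```python
def expected_checks(n_rules):
    """Average number of rules evaluated before a match, assuming the
    matching rule is equally likely to sit at any position in the list."""
    return sum(range(1, n_rules + 1)) / n_rules  # equals (n_rules + 1) / 2


# With the matching rule forced to the bottom, a test with n rules always
# evaluates all n rules -- approximately the average cost of a uniform
# workload with 2n rules, since (2n + 1) / 2 is roughly n.
```

For example, a test that always checks 100 rules costs about the same as a 200-rule deployment where matches are spread evenly, since the latter averages 100.5 checks per document.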
These tests do not exhaust the variability possible in Trading Networks configuration, but
they are representative of what occurs in many Trading Networks deployments.
100 Partners
This series of tests held the number of partners at 100. The number of document types,
number of processing rules and document size were varied to see what effect they had
on throughput.
[Figure: throughput in documents/second for 1KB, 10KB, and 100KB documents, with one series each for 10 doctypes/10 rules, 100 doctypes/10 rules, and 100 doctypes/100 rules]
Figure 1: Throughput with 100 Partners
All of these tests were processor-bound on the Trading Networks server.
From these tests, we can conclude that document size is the single largest factor
determining throughput. The number of document types is the next largest factor, with
the number of processing rules having more impact on smaller documents.
[Figure: throughput in documents/second with 10 partners, 100 partners, and 1,000 partners]
In an absolute sense, varying the number of partners had a small but growing effect as
the number of partners was increased.
Solaris Performance
A series of tests on Solaris 10 was run using 100 partners, 100 document types and 10
rules. Document sizes of 1KB, 10KB and 100KB were run.
[Figure: Solaris throughput in documents/second for 1KB, 10KB, and 100KB documents]
For the 10KB and 100KB documents the Trading Networks system was processor-bound.
For the 1KB documents it was database-bound, although the Trading Networks system
was not far from being processor-bound.
ABOUT WEBMETHODS, INC.
webMethods is a leading provider of business integration software to the Global 2000
and large government agencies. Our technology lets our customers integrate, assemble
and optimize available IT assets to drive business process productivity. Currently,
more than 1,200 customers are meeting customer demands, reducing costs, creating new
revenue opportunities and reclaiming ROI. Faster. webMethods is headquartered in
Fairfax, Va., with offices throughout the United States, Europe, Asia Pacific and
Japan. More information about the company can be found at www.webMethods.com.
NASDAQ: WEBM.

US West Coast
432 Lakeside Drive
Sunnyvale, CA 94088
Tel: 408 962 5000
Fax: 408 962 5329

European Headquarters
Barons Court
20 The Avenue
Egham, Surrey
TW20 9AU
United Kingdom
Tel: 44 0 1784 221700
Fax: 44 0 1784 221701

Asia-Pacific Headquarters
Level 15
Philips Building
15 Blue Street
North Sydney NSW 2060
Australia
Tel: 61 2 8919 1111
Fax: 61 2 8920 2917

webMethods Japan KK
Izumi Garden Tower 30F
1-6-1 Roppongi, Minato-ku
Tokyo 106-6030
Japan
Tel: 81 3 6229 3700
Fax: 81 3 6229 3701

www.webMethods.com