
Introduction to the WebSphere Message Broker global cache
James Hart
Software Developer, WebSphere Message Broker Development team, IBM

19 December 2012

The new global cache feature in WebSphere Message Broker V8.0.0.1 enables you to share
an in-memory cache between processes and brokers. This article describes the global cache,
shows you how to use it, and answers some frequently asked questions about it.

Introduction
This article provides an overview of the new global cache capability in IBM WebSphere
Message Broker V8.0.0.1. The global cache uses WebSphere eXtreme Scale technology to
provide an in-memory data cache that can be shared across execution groups and brokers. This
article has three sections:
Overview of global cache
Frequently asked questions
Cache topology considerations
Programming considerations will be covered in a future article.

Overview of global cache


Why was this capability needed in WebSphere Message Broker?
A longstanding WebSphere Message Broker requirement has been a mechanism for sharing
data between different processes. This requirement is most easily explained in the context of
an asynchronous request/reply scenario. In this kind of scenario, a broker acts as intermediary
between a number of client applications and a back-end system. Each client application sends, via
the broker, request messages that contain correlating information to be included in any subsequent
replies. The broker forwards the messages on to the back-end system, and then processes the
responses from that system. To complete the round-trip, the broker has to insert the correlating
information back into the replies and route them back to the correct clients.
When the flows are contained within a single broker, there are a few options for storing the
correlating information, such as a database, or a store queue where an MQGet node is used to
retrieve the information later. If you need to scale this solution horizontally and add brokers to
handle an increase in throughput, then a database is the only reasonable option.
However, if a single in-memory cache is available to the request and response message flows
regardless of which brokers they are running in, then the request flow can store the correlation
information in the cache, and the reply flow can retrieve (and delete) it. Regardless of whether the
reply message is routed through the same broker as the original request, or through a different
broker, the reply still ends up with the correct client, as shown below:

Request/reply scenario with global cache

Other common scenarios for the global cache include maintaining an in-memory route table
shared between execution groups or brokers, or reducing latency to back-end systems by keeping
a façade of frequently queried data in the cache.

How does the global cache work?


The WebSphere Message Broker global cache is implemented using embedded WebSphere
eXtreme Scale (WebSphere XS) technology. By hosting WebSphere XS components, the JVMs
embedded within WebSphere Message Broker execution groups can collaborate to provide
a cache. For a description of WebSphere XS, see Product overview in the WebSphere XS
information center. Here are some of the key components in the WebSphere XS topology:
Catalog server
Controls placement of data and monitors the health of containers.
Container server
A component embedded in the execution group that holds a subset of the cache data.
Between them, all container servers in the global cache host all of the cache data at least
once.
Map
A data structure that maps keys to values. One map is the default map, but the global cache
can have several maps.
Each execution group can host a WebSphere XS catalog server, container server, or both.
Additionally, each execution group can make a client connection to the cache for use by message
flows. The global cache works out of the box, with default settings, and no configuration -- you just
switch it on! You do not need to install WebSphere XS alongside the broker, or any other additional
components or products.

How do you control the scope of the cache?


By default, the scope of a cache is a single broker. To enable this, switch the broker-level policy
property on the GlobalCache tab of Message Broker Explorer to Default and restart the broker. This causes
each execution group to assume a role in the cache dynamically on startup. The first execution
group to start will be a catalog and container server, using the first four ports from the supplied port
range (a port range will have been generated for you, but you can modify this). For more details
on the port range, see Frequently asked questions below. The second, third, and fourth execution
groups (if present) will be container servers, each using three ports from the range. Any execution
groups beyond the fourth one will not host cache components, but will connect as clients to the
cache hosted in execution groups 1-4. The diagram below shows the placement of servers, and
the client connections, for a single-broker cache with six execution groups:

Single-broker cache with default policy

You can extend the cache to multiple brokers by using a cache policy file. Three sample policy files
are included in the product install, in the sample/globalcache directory. You can simply alter the
policy file to contain all the brokers you want to collaborate in a single cache, then point the broker-level cache policy property at this file. Here is this setting in Message Broker Explorer:

Configuring a cache policy file in Message Broker Explorer

The file lets you nominate each broker to host zero, one, or two catalog servers, and the port
range that each broker should use for its cache components. The following diagram shows a two-broker cache, with both brokers configured to contain catalog servers:

Two-broker cache controlled by policy file

You can also configure specific cache roles for each execution group, rather than having them
dynamically assigned on startup, by changing the broker cache policy property to none, which
then enables a set of detailed execution group properties. For details, see Cache topology
considerations below.
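If you prefer scripting to Message Broker Explorer, the broker-level policy property can also be set from the command line through the broker's cachemanager component. The invocations below are a sketch only: the broker name and file path are placeholders, and the exact parameter values are documented in the Parameter values for the cachemanager component topic listed under Resources.

mqsichangeproperties MB8BROKER -b cachemanager -o CacheManager -n policy -v default
mqsichangeproperties MB8BROKER -b cachemanager -o CacheManager -n policy -v /path/to/cache_policy.xml
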
As the cached data is all hosted inside the execution group processes, the global cache is a
better fit for some solutions than others. For instance, storing transient correlation information or
routing tables is clearly a good use for this technology. Its suitability as a database façade depends
largely on the amount of data to be kept in the cache. Hosting several gigabytes of data in the
JVM heaps of a small number of execution groups is not recommended. Another consideration is
the requirement for advanced features of WebSphere XS. If you have identified a need to exploit
APIs other than the simple ObjectMap interface, or a requirement to customize the underlying
WebSphere XS grid configuration XML files, then a separate WebSphere XS or XC10 installation
may be more appropriate. Connectivity to a remote WebSphere XS installation from WebSphere
Message Broker is not covered by this article.

How do you interact with the cache?


The message flow developer has new, simple artifacts for working with the global cache, and
is not immediately aware of the underlying WebSphere XS technology or topology. Specifically,
the Java Compute node interface has a new MbGlobalMap object, which provides access to
the global cache. This object handles client connectivity to the global cache, and provides a
number of methods for working with maps in the cache. The methods available are similar to
those you would find on regular Java maps. Individual MbGlobalMap objects are created by using
a static getter on the MbGlobalMap class, which acts as a factory mechanism. You can work with
multiple MbGlobalMap objects at the same time, and create them either anonymously (which uses a
predefined default map name under the covers in WebSphere XS), or with any map name of your
choice. In the examples below, defaultMap will work with the system-defined default map within
the global cache. myMap will work with a map called myMap, and will create this map if it does not
already exist in the cache.

Sample MbGlobalMap objects


MbGlobalMap defaultMap = MbGlobalMap.getGlobalMap();
MbGlobalMap myMap = MbGlobalMap.getGlobalMap("myMap");
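
To show how these objects are typically used inside a JavaCompute node, here is a minimal sketch that stores, reads, checks, and removes an entry. The map name, key, and value are purely illustrative, and error handling is kept to the minimum.

Working with a map from a JavaCompute node (illustrative sketch)

public class CacheExample extends MbJavaComputeNode {
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        // Connect to (or create) a named map in the global cache
        MbGlobalMap routeTable = MbGlobalMap.getGlobalMap("routeTable");

        // Store a value, then read it back
        routeTable.put("customer42", "QUEUE.EAST");
        String destination = (String) routeTable.get("customer42");
        // ... use destination to route the message (omitted here)

        // Check for, and remove, an entry
        if (routeTable.containsKey("customer42")) {
            routeTable.remove("customer42");
        }

        // Pass the incoming message through unchanged
        getOutputTerminal("out").propagate(assembly);
    }
}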

How do you administer the cache?


Interactions with the global cache for a given execution group are tracked using resource statistics,
and an activity log that records the start of the catalog or container server in that execution group.
It will then record each put, get, update, remove or containsKey call made within that execution
group, as well as which flow and node made the call, which map and key were used, and details
of any exceptions in processing. The figure below shows the narrative of connecting to the global
cache, checking whether an entry exists, and then putting some data in the cache.

Example activity trace output

Since resource statistics and activity traces can provide only an isolated view from each execution
group, there is also the mqsicacheadmin command, which provides information about the entire
global cache. It has a number of subcommands. The listHosts and showPlacement commands
help you validate the condition of your global cache -- for example, whether it covers the number of
hosts you expect, whether it has the right number of containers, and whether the shards are spread
evenly across those containers. showMapSizes shows the overall size of your global cache, map by
map, while clearGrid lets you clear the data from a specific map, which is useful for purging data
when a particular map gets too large.
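
For illustration, the following invocations report the hosts and map sizes for the cache owned by a broker named MB8BROKER, and then clear a map called myMap. The names here are placeholders, and the exact flags are described in the mqsicacheadmin topic in the information center.

mqsicacheadmin MB8BROKER -c listHosts
mqsicacheadmin MB8BROKER -c showMapSizes
mqsicacheadmin MB8BROKER -c clearGrid -m myMap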

Frequently asked questions


Q: Can one global cache be shared across multiple brokers?
A: Yes! Although the default cache policy provides a single-broker cache, you can use a cache
policy file to create a multi-broker cache. Three sample policy files are included in the product
install. Simply alter the policy file to contain all the brokers you want to collaborate in a single
cache, and then point the broker-level cache policy property at this file. The file lets you nominate
each broker to host zero, one, or two catalog servers, and the port range that each broker should
use for its cache components. Ensure that the listenerHost attribute for each broker in the policy
file matches that broker's listenerHost property. This is used, along with the broker name, by each
broker to identify itself in the policy file.
Q: What happens if I shut down an execution group that hosts a catalog server?
A: If it is the only catalog server, then your cache is gone. However, if you have configured your
cache to have more than one catalog server, and the catalog servers completed their startup
handshake successfully, then the remaining catalog server (or servers) will continue to run the
cache without interruption. When you restart the execution group, the catalog server will rejoin the topology.
Q: In a multi-broker cache, if one of my brokers fails, what happens to my data?
A: As long as you are running with multiple catalog servers on different machines, your data is still
available to the remaining brokers. Each piece of data in the cache has a primary version (primary
shard), and a replica. If you have multiple catalogs on different machines then the replica of any
piece of data is guaranteed to reside on a different machine from the primary. If the broker hosting
the primary shard of some data fails, the replica shard automatically becomes the primary, and a
new replica is created elsewhere. This behaviour is enabled by APAR IC88062, which guarantees
that primary and replica shards are placed on different machines. Without this APAR, it is possible
to lose both shards if a broker fails. If there is only one catalog server, or only one host containing
catalog servers, then primary and replica shards can be placed on the same machine.
Q: Is there a way to configure the cache to expire data after a certain amount of time?
A: Currently, there is no way to enable expiration in the embedded cache. The life cycle of all data
must be coded for in your message flows.
Q: Can I ensure that a specific execution group always hosts a catalog server?
A: Yes, by moving from a broker policy of default (or a policy file) to none, and by setting each execution
group's cache properties individually. For more details on this topic, see Cache topology
considerations below.
Q: I'm trying out the global cache on my laptop and having problems when my IP address
changes. What is the solution?
A: For a single machine topology, you can change the listenerHost property of your broker (on the
broker's GlobalCache tab in Message Broker Explorer) to localhost. Using localhost will not work if
you expand the cache to cover multiple machines, but it is suitable for this kind of simple,
laptop-oriented DHCP usage.
Q: Does the global cache work for multi-instance brokers?
A: If an active broker fails, and a standby broker starts up to take its place on a different machine,
the catalogs and containers in that broker will fail to start (as they are binding to the wrong
hostname). But all execution groups will be able to make client connections to the global cache,
assuming that a catalog is still running in another broker. The original active broker will need to be
restored in order for the cache components (catalogs and containers) within that broker to restart
and rejoin the cache.
Q: Is there a way to persist data to the file system or a database?
A: Currently there is no automatic way to do this with the global cache.
Q: Are interactions with the global cache transactional?


A: Each interaction with the global cache is a transaction in its own right. WebSphere XS
pessimistic locking is used during each action, and control is returned to the user only after that
action has been committed to the primary and replica versions of the data. But the global cache
interactions are not integrated with the message flow transaction, so if a message flow rolls back
after some data has been put into the cache, the data will not be removed. Keep this fact in mind
when designing message flows, and try to make cache puts, updates, and removes occur as late
in the flow as possible. The Coordinated Request Reply sample in the product demonstrates this
technique by storing correlation information in the cache only after the flow's MQOutput node has
successfully put a message.
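As a rough illustration of that ordering (not taken from the sample itself; the map name and message tree paths are assumptions), a JavaCompute node wired after the MQOutput node in the request flow might store the reply-routing details like this:

Storing correlation details after the request has been put (illustrative sketch)

public class StoreReplyDetails extends MbJavaComputeNode {
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        // This node runs after the MQOutput node, so a rollback earlier in the
        // flow cannot leave an orphaned entry in the cache
        MbElement root = assembly.getMessage().getRootElement();

        // Use the MQ message ID as the correlation key (hex encoding would be
        // more robust than a raw String conversion)
        byte[] msgId = (byte[]) root.getFirstElementByPath("/MQMD/MsgId").getValue();
        String key = new String(msgId);

        // Remember where the eventual reply must be routed
        String replyQueue = root.getFirstElementByPath("/MQMD/ReplyToQ").getValue().toString();

        MbGlobalMap correlation = MbGlobalMap.getGlobalMap("replyRouting");
        correlation.put(key, replyQueue);

        getOutputTerminal("out").propagate(assembly);
    }
}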
Q: Can the global cache be accessed outside of a Java Compute node?
A: Yes. You can create static Java wrappers for common global cache tasks, such as getting a
String value from a given map. You can then call these Java methods from within ESQL and the
Mapping node; a minimal wrapper is sketched below. Making the cache available natively across
all of the programming interfaces is a long-term goal.
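A wrapper of that kind can be as small as the following sketch; the class and method names are hypothetical, and the Java project simply needs to be referenced by the flow that calls it. Each method can then be declared in ESQL as an external Java function, or called from a Mapping node custom Java transform.

Static wrapper for calling the cache from ESQL (illustrative sketch)

public class CacheAccessor {
    // Returns the String stored under the given key, or null if it is absent
    public static String getFromCache(String mapName, String key) throws MbException {
        MbGlobalMap map = MbGlobalMap.getGlobalMap(mapName);
        return (String) map.get(key);
    }

    // Stores a String value under the given key
    public static void putInCache(String mapName, String key, String value) throws MbException {
        MbGlobalMap.getGlobalMap(mapName).put(key, value);
    }
}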
Q: Are there any implications for the size of execution group processes when using the
global cache?
A: With a catalog or container server enabled, the heap size of the JVM within an execution
group will increase by at least 30 MB. There is no impact on the performance of flows within
that execution group as a result of hosting a cache component, but all of the data in the global
cache will be hosted within your execution group processes, and the size of these processes
will grow accordingly. If you create flows that continually write to the cache and never remove
data, the size of your processes will keep growing until they cause out-of-memory errors. You
should consider how much data is likely to be placed in the cache, and refer to the WebSphere XS
product documentation for guidance on JVM heap size settings.
Q: Why doesn't the default cache policy allow for more than one catalog server?
A: Although running with multiple catalog servers is recommended, there is some overhead
involved. When you have multiple catalogs, ensure that a majority of them are started at around
the same time, and allow 60-90 seconds for them to handshake before the cache becomes
available. A default single-broker cache with one catalog still provides sharing of data across
execution groups, and starts up quickly.
Q: Do I need to restart all my brokers if I modify the cache policy file?
A: It depends on what you have modified. If you have added, removed or modified a broker that
hosts a catalog server, then restart all of the brokers. All of the catalog servers need to know about
each other before starting, and adding a new catalog server dynamically whilst the others are still
running is not possible. However, if you add a broker that does not contain a catalog server, then
you only need to restart this one broker. The container servers and client connections within this
broker will connect to the grid using the information already present in the policy file, and the other
components do not need to be made aware of this change.

Q: Why is a port range required, and how large should it be?


A: The catalog server and container server components within the cache all require a number
of ports to communicate. Execution groups that are hosting catalog servers (or catalogs and
containers) require four ports. Execution groups that host only container servers require three
ports. Execution groups that act purely as clients do not require any ports. WebSphere Message
Broker requires that you provide a range of at least 20 ports. Currently, a maximum of either 13
ports (for the default cache policy) or 14 ports (for a two-catalog broker defined via the policy file)
is actually used per broker. The additional ports are reserved for future use.
Q: Does every container have a full copy of the cache?
A: No. Each container hosts a subset of data in the cache. Each piece of data has a primary copy
and a replica. The WebSphere XS components in the cache ensure that those primary and replica
copies (shards) are spread across the available containers.

Cache topology considerations


What is cache topology?
Cache topology is the set of catalog servers, containers, and client connections that collaborate to
form a global cache. When using the default policy, the first execution group to start will perform
the role of catalog server and container server (call this Role 1). The next three execution groups
to start perform the role of container servers (Roles 2, 3, and 4). No other execution groups will host
catalogs or containers, but all execution groups (including those performing Roles 1, 2, 3, and 4)
host client connections to the global cache. When you restart the broker, the execution groups may
start in a different order, in which case different execution groups might perform Roles 1, 2, 3, and
4. The cache topology still contains one catalog server, up to four containers, and multiple clients,
but the distribution of those roles across your execution groups will vary.

How does a cache policy file help with this?


The policy file lets you define the shape of your cache topology across one or more brokers. If
you create a policy file with two brokers, each of which is to host one catalog server, then you are
actually just joining together two default caches. Each broker will have the same Roles 1, 2, 3, and
4 described above, but the execution groups know they are part of a multiple-catalog cache, and
will set up their properties accordingly.
However, you can also use the policy file to define a single-broker cache with two catalog servers.
In this case, Role 2 described above becomes a second catalog server as well as a container server.
You can also specify that a particular broker should contain zero catalog servers, which is useful in
a cache that spans multiple brokers. A small, prime number of catalog servers is ideal for optimum
availability and performance. So in a cache spread across six brokers, you might want only three
of them (for example) to host catalogs. For a discussion of when you have to restart brokers after a
change to the policy file, see Frequently asked questions above.
When should you use policy none and control each execution group
individually?
When you want to know exactly which role each execution group is playing in the topology,
and have this fixed across broker restarts.
When you are using the cache heavily and need to tune the JVM heap size, and therefore
need to know which execution groups to tune.
When you want to isolate the catalog server, so that you can manage that execution group's
life cycle individually.
When you are using the cache heavily, and do not want the catalog server coexisting in a
JVM with a container, since the work performed by the catalog could affect the performance of
that container.
When you want to create more container servers or catalog servers in a broker than any of
the policy options provide.
When you want to configure each port individually, rather than having the broker pick ports
from a range.

Having set the policy to none, what do you do next?


When you choose the broker policy setting of none, a set of execution group properties that were
previously read-only become available. These are visible in Message Broker Explorer on the
GlobalCache tab of the execution group's properties, and also from the command line via the
ComIbmCacheManager resource within the execution group:
mqsireportproperties <broker> -e <EG> -o ComIbmCacheManager -r

When you switch from a policy (either the default or a policy file) to none, the most recent values
of these properties that were dynamically set by the policy are retained as a starting point for
customization. Customizing properties for execution groups that you do not want to host catalog
servers is relatively straightforward:
Three ports are required: listenerPort, haManagerPort, and jmxServicePort.
You can disable JMX (set enableJMX to false), but the mqsicacheadmin command will then be
unable to reach components in this execution group.
connectionEndPoints must be set to a comma-separated list of catalog servers, with each
catalog server described in the format listenerHost:listenerPort. This property is used
by the container server (if enabled) in this execution group, and by the client connection, to
communicate with the cache (an example value follows this list).
The catalogClusterEndPoints property does not need to be set.
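As an example (the host names and ports here are hypothetical), an execution group connecting to a cache whose catalog servers listen on two machines might use:

connectionEndPoints = mybrokerbox:2800,otherbrokerbox:2820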
Customizing properties for execution groups that you want to host catalog servers is a little more
difficult. In addition to the steps above, you must also set the catalogClusterEndPoints property:
Each execution group that is to host a catalog server must have the same value for
catalogClusterEndPoints.
This property contains details of not only this execution group's catalog, but also every other
catalog server in the topology.
Each endpoint is in the format ServerName:listenerHost:CatalogPeerPort:haManagerPort.


The CatalogPeerPort is a fourth port, for catalog servers only, and is not referenced
elsewhere in the properties.
listenerHost and haManagerPort are the same values as specified in this execution group's
properties.
The server name is BrokerName_listenerHost_listenerPort. To give an example:
You want this execution group to host a catalog server, and you have set the
listenerPort to 2800, the haManagerPort to 2801, and the jmxServicePort to 2802.
You want the fourth port (CatalogPeerPort) to be 2803.
The broker is called MB8BROKER.
The listenerHost is set to mybrokerbox.
The catalogClusterEndPoint for this catalog server will be:

MB8BROKER_mybrokerbox_2800:mybrokerbox:2803:2801

catalogClusterEndPoints for this execution group needs to be set to a comma-separated list of the
endpoint above and all other catalog servers, as illustrated below.
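
Continuing the example, if a second catalog server were hosted by a hypothetical broker MB8BROKER2 on host otherbrokerbox, with listenerPort 2820, haManagerPort 2821, and CatalogPeerPort 2823, then both catalog-hosting execution groups would set:

catalogClusterEndPoints = MB8BROKER_mybrokerbox_2800:mybrokerbox:2803:2801,MB8BROKER2_otherbrokerbox_2820:otherbrokerbox:2823:2821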

Due to the complexity of setting the catalog server properties, here are some alternative
approaches for setting individual execution group properties:
Start with a cache policy. Bring up execution groups in the correct order based on the roles
you want them to take. Switch to none to fix these roles. You can then also tweak the settings
without having to craft them from scratch (for example, switch enableContainerService to
false to achieve an isolated catalog server).
Again, start with a cache policy. Based on the discussion above about roles within the
topology, you can consider each set of execution group properties to constitute a role
definition, which is not tightly coupled to any one execution group. Use mqsireportproperties
to create a snapshot of these role definitions. You can now switch to none and use
mqsichangeproperties to assign these roles to the specific execution groups (see the example
commands after this list). For example, EG1 may have taken Role 1 under the default policy,
but you actually want a dedicated execution group called CATCONT1 to perform that role. You
can report EG1's properties and use those precise values to set up CATCONT1. If you use this
approach, make sure that you revisit EG1's properties to ensure that it now has the correct role
(even if that just involves setting enableCatalogService and enableContainerService to false).
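
As a sketch of that second approach, with placeholder broker and execution group names, the snapshot and assignment steps look like this (the values you assign would come from your own report output, and port 2800 is used purely as an example):

mqsireportproperties MB8BROKER -e EG1 -o ComIbmCacheManager -r > eg1_role.txt
mqsichangeproperties MB8BROKER -e CATCONT1 -o ComIbmCacheManager -n enableCatalogService -v true
mqsichangeproperties MB8BROKER -e CATCONT1 -o ComIbmCacheManager -n listenerPort -v 2800
mqsichangeproperties MB8BROKER -e EG1 -o ComIbmCacheManager -n enableCatalogService -v false
mqsichangeproperties MB8BROKER -e EG1 -o ComIbmCacheManager -n enableContainerService -v false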

Resources
WebSphere Message Broker resources
Global cache overview
Topic in the WebSphere Message Broker V8.0.0.1 information center.
Product overview
Topic in the WebSphere eXtreme Scale information center.
Parameter values for the cachemanager component
Topic in the WebSphere Message Broker V8.0.0.1 information center.
WebSphere Message Broker V8 information center
A single Web portal to all WebSphere Message Broker V8 documentation, with
conceptual, task, and reference information on installing, configuring, and using your
WebSphere Message Broker environment.
WebSphere Message Broker developer resources page
Technical resources to help you use WebSphere Message Broker for connectivity,
universal data transformation, and enterprise-level integration of disparate services,
applications, and platforms to power your SOA.
WebSphere Message Broker product page
Product descriptions, product news, training information, support information, and more.
Download free trial version of WebSphere Message Broker
WebSphere Message Broker is an ESB built for universal connectivity and
transformation in heterogeneous IT environments. It distributes information and
data generated by business events in real time to people, applications, and devices
throughout your extended enterprise and beyond.
WebSphere Message Broker documentation library
WebSphere Message Broker specifications and manuals.
WebSphere Message Broker forum
Get answers to technical questions and share your expertise with other WebSphere
Message Broker users.
WebSphere Message Broker support page
A searchable database of support problems and their solutions, plus downloads, fixes,
and problem tracking.
IBM Training course: WebSphere Message Broker V8 Development
This course from IBM Training shows you how to use the components of the WebSphere
Message Broker development and runtime environments to develop and troubleshoot
message flows that use ESQL, Java, and PHP to transform messages.
YouTube tutorial: Integrating Microsoft .NET code in a WebSphere Message Broker V8 message flow
This five-minute YouTube tutorial shows you how simple it is to use WebSphere Message
Broker V8 to build a message flow that includes Microsoft .NET code. Microsoft Visual
Studio is used to build .NET code in C#, which is then integrated into a message flow
using Message Broker and an HTTP RESTful interface.
WebSphere resources
developerWorks WebSphere developer resources
Technical information and resources for developers who use WebSphere products.
developerWorks WebSphere provides product downloads, how-to information, support
resources, and a free technical library of more than 2000 technical articles, tutorials,
best practices, IBM Redbooks, and online product manuals.
developerWorks WebSphere application integration developer resources
How-to articles, downloads, tutorials, education, product info, and other resources to
help you build WebSphere application integration and business integration solutions.
Most popular WebSphere trial downloads
No-charge trial downloads for key WebSphere products.
WebSphere forums
Product-specific forums where you can get answers to your technical questions and
share your expertise with other WebSphere users.
WebSphere on-demand demos
Download and watch these self-running demos, and learn how WebSphere products and
technologies can help your company respond to the rapidly changing and increasingly
complex business environment.
WebSphere-related articles on developerWorks
Over 3000 edited and categorized articles on WebSphere and related technologies by
top practitioners and consultants inside and outside IBM. Search for what you need.
developerWorks WebSphere weekly newsletter
The developerWorks newsletter gives you the latest articles and information only on
those topics that interest you. In addition to WebSphere, you can select from Java,
Linux, Open source, Rational, SOA, Web services, and other topics. Subscribe now and
design your custom mailing.
WebSphere-related books from IBM Press
Convenient online ordering through Barnes & Noble.
WebSphere-related events
Conferences, trade shows, Webcasts, and other events around the world of interest to
WebSphere developers.
developerWorks resources
Trial downloads for IBM software products
No-charge trial downloads for selected IBM DB2, Lotus, Rational, Tivoli, and
WebSphere products.
developerWorks business process management developer resources
BPM how-to articles, downloads, tutorials, education, product info, and other resources
to help you model, assemble, deploy, and manage business processes.
developerWorks blogs
Join a conversation with developerWorks users and authors, and IBM editors and
developers.
developerWorks tech briefings
Free technical sessions by IBM experts to accelerate your learning curve and help you
succeed in your most challenging software projects. Sessions range from one-hour
virtual briefings to half-day and full-day live sessions in cities worldwide.
developerWorks podcasts
Listen to interesting and offbeat interviews and discussions with software innovators.
developerWorks on Twitter
Check out recent Twitter messages and URLs.
IBM Education Assistant
A collection of multimedia educational modules that will help you better understand IBM
software products and use them more effectively to meet your business requirements.

About the author


James Hart
James Hart is a Software Developer on the WebSphere Message Broker
Development team at the IBM Software Lab in Hursley Park, United Kingdom. He has
worked there since 1999 in a variety of technical and customer-facing roles, and he is
currently Technical Lead for caching technology in WebSphere Message Broker.

© Copyright IBM Corporation 2012 (www.ibm.com/legal/copytrade.shtml)
Trademarks (www.ibm.com/developerworks/ibm/trademarks/)
