
Viewpoints on Latency - Thought Leadership at The Cutting Edge of Electronic Trading


November 2012

Sponsored by:

The Low-Latency Lowdown
By Pete Harris, Editor, Low-Latency.com

Low Latency. Cloud Computing. Big Data. These separate business and technology endeavours continue to evolve, and also converge, in ways that will underpin the next generation of financial trading.

This time, let's concentrate on low latency and big data, and how the two are connected. But first, it's worth defining what big data is, especially as it relates to the financial markets, where extremes of data variety, velocity and volume are the norm, and where data value can be huge. Our sister web community BigDataForFinance.com has developed this useful definition:

"Datasets whose characteristics - size, data type and frequency - are beyond efficient, accurate and secure processing, as well as storage and extraction, by traditional database management tools."

Recent research conducted by BigDataForFinance.com - and soon to be published in partnership with SAP - has investigated how big data technologies and approaches can be leveraged by trading and risk management applications. That research determined that big data as it relates to trading covers a number of different types and collections of data:

- Time series price data, as granular as every single tick (trade report) and every order to buy or sell, over an extended period of hours, days, weeks or years. Such data is used to design algorithmic trading models (through back testing), and for transaction cost analysis.

- Structured price and associated data for each transaction. This drives trading systems, both manual and automated, leading to trade executions, which generate more data. Records of transactions need to flow into matching, risk and settlement systems, and need to be archived for compliance.

- Unstructured real-time data, including news stories and social media updates. These can be parsed to extract information, which can then be used to generate sentiment analytics that feed into trading algorithms.

- Both structured and unstructured reference data relating to transactions, varying from records of corporate actions to counterparty and legal entity information, contracts and income flows related to derivatives and structured products.

- Audio recordings of transactions negotiated and executed via phone.

Of the above, time series data is probably the best understood, and several vendor solutions exist to support its storage and analysis. For the most part, though, its use is directed mostly towards designing algorithms and determining their efficacy, and not as part of the trade execution process.

Of greater relevance to trade execution is the analysis of unstructured data, including news feeds and social media, such as Twitter. Here, the goal is to leverage information on market-moving events before they move the markets, in order to trade for advantage. As such, it represents a real combination of big data and low latency technologies.

Not covered in the research is the collection and analysis of operational data from the technology infrastructure underpinning trading. That includes latency data from measurement systems such as those from Corvil and TS-Associates. In that respect, Corvil recently introduced its CNE-7300 appliance, capable of storing 60 terabytes of price, timestamp and latency data - enough to store 50 days of options tick data. Such storage allows for deep analysis of how infrastructure is performing over an extended period.

Also adding to the operational data soup is data derived from network switches, reporting the status of internal buffers and message queues, which is vital for designing network architectures that cope with microburst traffic. Both Cisco Systems and Arista Networks have added such functionality to their latest offerings, and they will no doubt serve up masses of diagnostic analytics to help networks run better.

In overview, while low latency and big data might seem to be opposite ends of the technology spectrum, in reality they overlap and combine to make trading strategies more effective, and to reduce the cost of operating the infrastructure that executes them.
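The sentiment pipeline described above - parse unstructured text, score it, and feed a signal into a trading algorithm - can be sketched in a few lines of Python. The word lists, scoring and threshold below are purely illustrative, not a production model.

```python
# Minimal sketch: score unstructured headlines and map the score to a
# trading signal. Word lists and threshold are illustrative only.
POSITIVE = {"beats", "upgrade", "record", "growth", "surge"}
NEGATIVE = {"misses", "downgrade", "halt", "probe", "default"}

def sentiment(headline: str) -> int:
    """+1 per positive token, -1 per negative token."""
    tokens = headline.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def signal(headline: str, threshold: int = 1) -> str:
    """Turn a sentiment score into a signal for a downstream algorithm."""
    score = sentiment(headline)
    if score >= threshold:
        return "BUY"
    if score <= -threshold:
        return "SELL"
    return "HOLD"

print(signal("Acme beats estimates, analysts upgrade"))  # BUY
print(signal("Regulator opens probe into Acme"))         # SELL
```

Real systems replace the word lists with trained models and low-latency feed parsers, but the shape - extract, score, signal - is the same.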

2 | News, Thought Leadership and Opinion from Low-Latency.com November 2012



Beyond Latency into Sustained Performance
By Dave Malik, Senior Director, Solutions Architecture, Cisco Systems

As competition in financial markets continues to rise, the desire for speed and ultra-low latency is relentless. Across the global trading environment, emerging business opportunities highlight an increasing need for speed with greater visibility, flexibility, and manageability of the trading fabric. The Cisco High-Performance Trading Fabric architecture dramatically reduces latency while providing the increased visibility and instrumentation firms need to gain a competitive edge and capture alpha.

Superior Speed and Flexibility

Using Cisco's innovative Algo Boost application-specific integrated circuit (ASIC) technology, the new Cisco Nexus 3548 Switch platform is the fastest and most flexible, full-featured Layer 2 and Layer 3 switch, with three performance modes:

In normal mode, the forwarding latency of the Cisco Nexus 3548 is an industry-leading 250 nanoseconds (ns). In Warp mode, latency is further reduced to 190 ns as Layer 2 and Layer 3 operations are combined on a single Ternary Content Addressable Memory (TCAM) region. A special mode called Warp SPAN
lowers latency to 50 ns for traffic
traversing from a single ingress port
and replicated to N egress ports.
Warp SPAN is critical for market
data distribution scenarios where
accelerated price discovery is
required.
Figure 1 demonstrates the Cisco
Nexus 3548 Switch's flexibility; a
market data feed is received and
replicated to Ports 1-4 and 25-28
in Warp SPAN mode, while orders
are simultaneously sent in normal or
Warp mode to the exchange using
other ports on the device.

Proactive Congestion
Management

Visibility in a trading environment


is essential for analysing latency
characteristics and sources of
congestion in application flows.
Short-lived congestion, known as a
microburst, is a common occurrence
that should be closely examined. The
active buffer monitoring feature in
Cisco Algo Boost provides granular
visibility of buffer occupancy in
switch ports at up to 10 ns resolution.
A histogram of this data, together with
various export methods, helps applications
proactively detect rising congestion
that may cause suboptimal
performance. Buffer utilisation is also ideal for troubleshooting increasing latency and packet drops. A triage can be quickly conducted to correlate this data with latency monitoring tools and thereby detect sources of congestion, the duration and exact time of microbursts, and peak levels of buffer utilization at a specific interval.

Figure 1: Warp SPAN Mode on the Cisco Nexus 3548 Switch

Efficient Traffic Analysis

In high-performance trading environments, real-time data capture and analysis for forensics, latency monitoring, and data correlation purposes is critical. This volume of data often requires significant storage infrastructure. The Cisco Nexus 3548 Switch's intelligent traffic mirroring capabilities help manage the data deluge. Switch Port Analyzer (SPAN) capabilities allow selection, filtering, and routing of traffic by type to monitoring devices using access control lists (ACLs).

SPAN also allows for selective truncation, such that the switch can be configured to send only a specific portion of the packet. For example, a market data packet from an exchange could be truncated after the sequence number and sent to a monitoring device. The device can then search for gaps in the ticks by correlating the sequence numbers in the packets, presenting the results in a dashboard that displays real-time market data fan-out latency to the engines building an order book. The firm may choose to swing to an alternate exchange feed where data is being delivered in a more consistent fashion. Encapsulated Remote Switch Port Analyzer (ERSPAN) further enhances these capabilities, sending mirrored traffic to remote analysers across a Layer 3 network without requiring local analyser tools connected to monitored devices.

Accurate Time Synchronization

Time synchronization in the trading fabric is important not only for trading engines, but also for data analytics tools. IEEE 1588 Precision Time Protocol (PTP) is becoming more prevalent, replacing Network Time Protocol (NTP) on Ethernet networks and providing nanosecond clock accuracy. The Cisco Nexus 3548 Switch provides hardware-based PTP clock synchronization and a timing distribution method in a scale-out trading fabric consisting of hundreds of hosts. The hosts can receive clocking from the network and keep their applications synchronized. If traffic needs to be analyzed, ERSPAN with PTP capability can also be used to insert a nanosecond-level timestamp in captured packets. Furthermore, a pulse-per-second (PPS) port on the Cisco Nexus 3548 Switch can be connected to an oscilloscope to determine if there is any drift in the switch's PTP clock accuracy compared to a grandmaster clock signal.

Summary

Trading environments are becoming more complex, as many components in the trading lifecycle work as a complete system. Ultra-low latency, granular data visibility, intelligent traffic monitoring, and precision clock synchronization are increasingly vital for a high-performance trading fabric. Cisco's commitment to financial market innovations such as Algo Boost ASICs continues to enhance these capabilities while extending them to additional platforms in the future.
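The gap-detection workflow described in the Efficient Traffic Analysis section - truncate each mirrored packet after its sequence number, then scan the stream for missing ticks - might look like this in outline. The packet layout (a 64-bit big-endian sequence number at offset zero) is an assumption for illustration; real exchange feeds define their own headers.

```python
# Sketch of feed gap detection over truncated packets. Assumes each mirrored
# packet was cut just after a 64-bit big-endian sequence number at offset 0;
# real exchange feeds define their own (different) headers.
import struct

def detect_gaps(packets: list[bytes]) -> list[tuple[int, int]]:
    """Return (expected, received) pairs wherever the sequence jumps."""
    gaps, expected = [], None
    for pkt in packets:
        (seq,) = struct.unpack_from(">Q", pkt, 0)
        if expected is not None and seq != expected:
            gaps.append((expected, seq))
        expected = seq + 1
    return gaps

stream = [struct.pack(">Q", n) for n in (1, 2, 3, 7, 8)]  # ticks 4-6 lost
print(detect_gaps(stream))  # [(4, 7)]
```

A dashboard built on this output is what would prompt the swing to an alternate exchange feed.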


New Imperatives in Trade Monitoring: Speed, Safety and Business Value
By Donal O'Sullivan, VP Product Management, Corvil

Reducing latency has been a high priority in electronic trading since at least the introduction of Reg NMS and MiFID. The rise of new technologies such as wireless and FPGAs shows that the low-latency trend will continue. But over the past year, perspectives have broadened to give increased weight to technology cost, business value, and risk as well as speed. This emerging, more circumspect approach is driving new requirements for trading system monitoring. As participants look for solutions to help them trade not only faster, but also more safely and more intelligently, they are seeking more comprehensive visibility into trade activity across their platforms and all layers of their technology stack. Successful monitoring for electronic trading must provide the scale and breadth of analysis to meet these needs, as well as the precision and performance needed to operate in high-speed environments.

In addition to seeking more integrated views of trading activity, firms are also looking for greater independence between trading and monitoring functions. The need for independence is driven by the important role that monitoring plays in managing technology risks and regulatory compliance. Isolating monitoring from trading improves its resilience during error or outage conditions, and makes it less likely to share common weaknesses with the trading platform. Isolation also ensures that monitoring produces an independent interpretation of trading activity that does not rely on derived inputs. These properties are highly desirable in a system used both to ensure compliance and to guard against technology failures.

The requirements for comprehensive and independent monitoring are contributing to the rise of large-scale network-attached monitoring systems that use raw data from the network to analyse activity within and between trading systems. Since all meaningful trading activity ultimately crosses the network, and the network naturally aggregates streams of activity within the firm, network-attached monitoring provides efficient access to the integrated view of trading activity that firms seek. In addition, this style of monitoring can be fully independent of trading at both the hardware and software levels, using an architecture where network traffic is passively copied to dedicated hardware loaded with independently developed analysis software.

Network-attached systems have two further advantages that make them ideal for monitoring high-speed trading environments. Firstly, the use of dedicated hardware allows monitoring to deploy the computational firepower needed for real-time analysis of ever-growing message volumes, without burdening the trading system. Secondly, network-attached monitoring can use highly accurate hardware-based time-stamping, ensuring the nanosecond-level precision that is needed for low latency measurements. Hardware time-stamping is provided as a built-in feature in monitoring appliances and is also spreading into data aggregation devices and switches, making it easier than ever to instrument large-scale networks.

In summary, network-attached monitoring promises advantages of comprehensive visibility, independence, performance and precision that are compelling for today's complex, high-speed trading environments. Delivering these benefits will however require an advanced monitoring system capable of leveraging rich, but raw, network data. The system must provide a multi-layered approach that can work its way from network-layer activity and performance through to trading-layer behavior and results. Achieving broad visibility across large-scale trading systems requires decoding of multiple order-flow, market data and middleware protocols in real-time at very high message rates. Support for precision clock synchronization and correlation of event data across distributed environments is also required to support high-speed trading performance analysis.

These are the demanding requirements that have driven the development of Corvil's hardware and software trade monitoring products. CorvilNet 8.0, announced in October 2012, is the latest major release of Corvil's monitoring platform and is at the forefront of the change in how the industry monitors


trading. CorvilNet is an appliance-based, network-attached solution that decodes raw data from the wire to analyse activity and performance at the network, application and trading business layers. Monitoring workflows are supported at all three layers via real-time dashboards and alerts, and large-scale data capture for historical analysis. Our 8.0 release extends trading-layer visibility with a new TradeLens module powered by configurable business metrics, and adds a management reporting facility that eases high-level access to monitoring results. This release also introduces a new range of appliances based on Intel's Sandy Bridge architecture, pushing processing and storage capacity to new limits to support larger scale monitoring deployments.

Today's trading environment is a highly demanding one, as the speed, scale and complexity of trading technology continue to increase. Leveraging advanced and comprehensive trade monitoring can help participants negotiate this environment safely and successfully. If you are interested in hearing more about Corvil's solutions for trade visibility, please contact Sheila.carroll@corvil.com or visit our website at www.corvil.com.
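The appliance-based monitoring model described above ultimately reduces to correlating the same message at two capture points and differencing hardware timestamps. A toy version of that correlation and reporting step, with invented message IDs and nanosecond values:

```python
# Toy correlation of one message stream captured at two points, using
# hardware timestamps in nanoseconds. Message IDs and values are invented.
def one_way_latencies(ingress: dict[str, int], egress: dict[str, int]) -> list[int]:
    """Match messages by ID across the two capture points; return ns deltas."""
    return [egress[mid] - ts for mid, ts in ingress.items() if mid in egress]

def percentile(samples: list[int], p: float) -> int:
    """Nearest-rank percentile - good enough for a monitoring dashboard."""
    ordered = sorted(samples)
    rank = min(len(ordered) - 1, round(p * (len(ordered) - 1)))
    return ordered[rank]

ingress = {"ord-1": 1_000, "ord-2": 2_000, "ord-3": 3_000}   # wire-in, ns
egress  = {"ord-1": 5_200, "ord-2": 6_900, "ord-3": 7_400}   # wire-out, ns
deltas = one_way_latencies(ingress, egress)
print(deltas)                   # [4200, 4900, 4400]
print(percentile(deltas, 0.5))  # 4400
```

Production systems do this decoding and matching at millions of messages per second across many protocols, which is exactly why dedicated hardware and synchronized clocks matter.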


Low-Latency Roundtable

Dave Malik, Senior Director, Solutions Architecture, Cisco Systems
Fergal Toomey, Chief Scientist and Co-Founder, Corvil

How would you characterise the current state of play of how market participants are actually using analytics - data - from latency and network monitoring devices?

Malik: Achieving and maintaining low latency in today's fast-moving, high-performance trading environments is a constant challenge. Typical analytics usage involves latency metric collection at each hop of the infrastructure where the trade flow occurs. Real-time packet capture in co-location and market data distribution segments, order matching, and routing environments are common areas of interest to participants. More advanced analytics usage can influence trading strategies, where data mining is used to refine, test and tune algorithms. Real-time modification of execution strategies - based on applications being fed data from analytics tools - is on the leading edge. However, timely event correlation between tick databases, FIX logs, network traffic, and application logs is not a trivial task.

Toomey: On the infrastructure management side, typical usage is to identify actual problems and resolve them quickly, so fairly reactive. Slightly more advanced is to identify impending problems, and adjust infrastructure before the problem materialises.

On the business side, well understood latency metrics (e.g. order response time) are used to influence routing decisions.

And most recently, everyone is looking to see if instrumentation can help manage or minimise technology and business risk. The reputational and financial damage that highly automated trading systems can inflict in a very short space of time is foremost in the minds of senior management. And most companies have initiated new programs to assess how they can manage this risk. Monitoring has a significant role to play in this. The fact that monitoring systems can tell you independently and in real-time what is happening in your trading system can be very valuable.

What are the differences between the ways that trading firms are using such analytics, versus exchanges?

Malik: Trading firms are using analytics to understand delays, microburst visibility, gaps in market data from the venues, and hop-by-hop latency between their trading engines and the exchanges' matching engines. This allows firms to understand certain order fill rates across multiple liquidity providers while allowing them to conduct latency arbitrage. Firms are using analytics to understand what changes need to be made to trading strategies, especially those that are intra-day. All data is recorded in real-time for historical analysis and regulatory purposes. Predictive latency scenario modeling is used for what-if scenarios such as: "What impact will a 50% latency increase in the trading environment have on my fill rates, application performance, and quotes/trades transmitted and received during a period of time?"

The exchanges leverage latency analytics to understand if they are in compliance with Service Level Agreements for market data delivery to firms, to ensure fair access. Constant checks examine round-trip latency through the matching engine and to the perimeter of client connectivity, which can be in remote POPs (Points of Presence) or within a co-location hall. Some of the analytics are used to adjust load-balancing algorithms to multiple exchange gateways for different financial products as well. The Order to Quote latency measurements are


critical, since they provide insight into the elapsed time from when an order enters the exchange's network to the time the same order appears on the data feed.

Toomey: On the infrastructure management side, the use of monitoring and analytics is actually quite similar. Both sides are trying to manage increasing data rates and business challenges while keeping control of spending. This is a tough nut to crack, so knowing where you are really hurting and where you really must invest is critical. Monitoring and analytics tell you this.

For order routing and trade decision making, the participants have more to gain and are correspondingly more active in exploring how the analytics can feed their trading decisions and give them a competitive edge.

On the risk management side, again, the participants have potentially more to lose, at least financially, so they are extremely active. Systems that were deemed acceptable from a risk viewpoint six months ago are no longer deemed to be so. They are building new systems to oversee all trading activity and provide an independent view to senior management as to what is really happening.

How are market participants using pre-recorded analytics, collected over a period of time?

Malik: Market participants leverage pre-recorded analytics to test existing and new strategies to determine how they would behave in certain market environments (e.g. Triple Witching, interest rate change announcements, key economic indicator news, earnings announcements, etc.). Event correlation is predominantly used to understand the interaction between a market event or trading infrastructure event that is causing overall application performance degradation. Market data replay is used for testing and benchmarking new upcoming applications, with some exchanges offering historic market data from the cloud. Key data constructs are also leveraged for capacity planning exercises, which is a key area of concern in our client base. A prime example is the need to understand available headroom in the infrastructure across several applications' infrastructures for a given service under specific quote and trade volumes, pricing history, market feed latency, spread generation and FIX gateway responses.

Toomey: This is a very interesting question. When developing an algorithm, traders will test against a historical database, and tune the algorithm before putting it live. The more complete the data and analytics in the historical database, the more realistic this tuning can be, and the better the chance that the algorithm will replicate its test bed performance when it goes live.

More broadly, if you are monitoring your trading activity and something goes red - say your response time to a particular market - then the better you understand what that red light means, the more appropriate your response can be. So, if it's just one gateway that's the problem, switch your trading to another. If it's an entire venue, the response will be different. But you typically will not know this the first time you see it, so we see participants studying historical analytics to better understand the various signals they see and how these will impact trading outcomes.

And how are they using analytics collected in real time?

Malik: Analytics are collected in real-time with precise instrumentation that is highly synchronized with granular time stamping. Most firms utilise network taps and Switch Port Analyser (SPAN) capability in low-latency switches to send the real-time traffic to analysers. All traffic in the network is synchronised with precision timing. On Cisco's Nexus 3548 platform, the Encapsulated Remote Switch Port Analyser (ERSPAN) with Precision Time Protocol feature can be used to consolidate all of the analysers into a central analytic farm, sending monitored traffic to a central location. This saves the cost of allocating an analyser's physical port directly to every monitored device. Real-time analytics are used for functions such as conducting risk checks, adjusting strategies, and troubleshooting potential client or exchange order flow issues.

Toomey: I think we've covered much of this above. Real-time analytics are used to influence routing decisions, and to indicate problems with infrastructure. They are also used to report on current trading behaviour and risk exposure. However, creating the correct signals and metrics is not trivial, and can take some time. We see participants putting this effort in up front, so that when that light goes red they know exactly what it means and can react immediately.

Not all trading strategies require the lowest latencies. Are latency analytics still relevant for those? How so?

Malik: Latency analytics are still relevant for trading strategies that do not require the absolute lowest latency. Congestion can occur at any time during the trading day, regardless of the strategy or the trading instrument. Congestion results in queuing, which can cause massive delays and potential packet drops. This could result in a firm being out of the market for a period of time. Constant feedback mechanisms from the trading fabric are important. For example, several of Cisco's clients are excited about the Active Buffer Monitoring feature in the Nexus 3548 platform. This feature allows them to have full visibility of rising congestion on the infrastructure ports where trading systems are connected. Early detection based on congestion thresholds is important to systems that can then proactively take appropriate measures to minimise application performance degradation.

Toomey: There is a significant trend that we are seeing here. Many trading strategies have traditionally not been latency sensitive. However, various developments over the last few years have resulted in a much faster, more automated trading environment. And these strategies have to live in this faster, more automated world. So while latency analytics may not be critical as inputs to these strategies,


the participants who run these strategies want to understand the latency landscape that their strategy inhabits. They are demanding more transparency and, yes, lower latency, from their brokers and venues.

What part is regulation playing in the need for market participants to collect, store and analyse analytics?

Malik: Regulatory pressures are causing a heightened sense of urgency within firms to increase transparency in their daily trading operations. Firms should be able to produce a highly accurate record of every transaction in which they are involved. Several outages at a few global exchanges in the past months have raised the regulatory focus and scrutiny at the transaction and record level. Long-term data retention and the requirement for timely access and tracing of historical transactions have increased the burden on certain infrastructures. Risk and compliance officers within a firm are also increasing their stringent requirements to be able to view data in many different forms.

Toomey: Certainly, regulation is driving greater demands for collection and storage for participants, but also on the venues. The regulators are demanding more transparency, and many of the traditional methods of providing this are not adequate. For example, if your system performance is measured in microseconds, which it typically is, then providing reports based on software logs is no longer good enough. So the regulators are demanding the transparency, and the sheer speed of the infrastructure is determining that on-the-wire high-speed solutions are the means of providing it.

How do you see the future development of intelligent network switches, and of specialist latency monitoring appliances, and how will they work together to provide complete analytics tools for trading systems?

Malik: Innovation continues to demonstrate that there is still room for improvement in how trading fabrics interact with business applications and analysis tools. As analytics data on vital performance metrics is further exposed to applications and tools via northbound APIs, the entire trading system becomes a competitive differentiator. For example, Cisco's latest platforms, such as the Nexus 3548, allow Python scripting and XML/NETCONF capabilities for architects to leverage via northbound applications. In the future, Software Defined Networking (SDN) constructs will be utilised by business applications and analytical tools to program the network fabric based on certain business policies and the real-time health of the infrastructure. The Open Network Environment (ONE) is just one example where Cisco is innovating by making the infrastructure more programmable. Analytical tools will gain an entirely new opportunity to provide higher-level analytics and business metrics dashboards, enabling the trading value chain to interact in real-time with the trading fabric in a more meaningful way.

Toomey: Currently, the infrastructure is fairly dumb when it comes to analytics, and all the timestamping and analytics are performed by the monitoring appliances. The first thing that will change is that the timestamping will be performed in the infrastructure, and the monitoring appliance will base its analysis on that timestamp. (Corvil already supports this external timestamp model with a few specialist vendors, and we will add support for three or four more by end 2012.) Beyond this timestamping, we see that some vendors are starting to put in some queue length reporting, and this is useful also, because the infrastructure is the right place to do that. (Again, Corvil consumes and reports on Arista's LANZ feed.) We support all these developments because they ultimately reduce the cost of monitoring and analysis for the customer, and we think this is necessary to support the broader deployment of monitoring solutions. However, once you get to full trading decodes, multi-hop analysis, fill rates etc., it gets trickier. Functionally, this will probably remain separate from the network for some time, even if physically it is performed by a processing blade in a switch. Probably worthy of a longer conversation...

How will analytics be used - what analytics will be needed - in the future to provide better execution of trading strategies?

Malik: The world of analytics is growing rapidly and causing a data deluge in many of our clients' environments. Converting big data into actionable intelligence for competitive benefit will require further interaction and correlation with latency analytics, arbitrage algorithms, and real-time monitoring systems. Understanding relationships between key components and order fill rates is an important aspect of driving more profitable trading. Aggregating data from various sources of latency - network, compute, storage, applications and the analyser/capture devices themselves - for data mining becomes essential. Detailed graphical views of quote and trade latency through hardware and software platforms, with each financial instrument in various trading locations analysed at a firm-wide level, can assist in providing an overall view of certain risk exposures which are not broadly visible today.

Toomey: If I knew that... Seriously, we think that the process of automation will continue, and the analytics that will be needed are those required to support that automation. To further remove humans from the decision loop, machines will need to be able to parse more of the complex signals that humans are good at absorbing. So, better intelligent parsing of news feeds to extract reliable trading signals is one area that will continue to develop. TV image processing, Facebook and Twitter integration - all of these offer potential for the future.
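The early-warning pattern the panelists describe - watch buffer-occupancy samples and flag rising congestion before queues overflow and packets drop - can be sketched as follows; the polling values and threshold are invented for illustration.

```python
# Sketch: flag rising congestion from per-interval buffer-occupancy samples
# (bytes queued on a switch port). The threshold and samples are invented.
def congestion_alerts(samples: list[int], threshold: int) -> list[int]:
    """Indices where occupancy crosses the threshold from below."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] < threshold <= samples[i]]

occupancy = [800, 1200, 5400, 9100, 3000, 700, 6200]
print(congestion_alerts(occupancy, threshold=5000))  # [2, 6]
```

In a real deployment the samples would come from a feature like the switch's buffer monitoring export, and an alert would trigger a proactive action such as rerouting flows or throttling a strategy.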


Low-Latency Packets
News from Low-Latency.com


Telx Lowers Latency to NYSE Mahwah

Proximity data centre operator Telx is tapping into new fibre routes from Cross River Fiber to lower latency to NYSE Euronext's Mahwah, NJ data centre.

Cross River's new fibre runs directly through Clifton, NJ, where Telx operates its existing NJR2 data centre, with NJR3 under construction. The fibre then runs on to Secaucus, NJ (where Equinix has facilities) and Weehawken, NJ (where a Savvis facility is located). The Mahwah to Clifton route is the most direct one into Mahwah, offering competitive latency to existing Mahwah to Secaucus connections.

The NYSE's decision to open up access to Mahwah is driving the construction of fibre into it from data centres housing other markets, in order to facilitate cross-market trading.

TIBCO Taps Pluribus For FTL Message Switch

Tibco Software has added to its messaging arsenal with the introduction of the Tibco FTL Message Switch, the result of a partnership with Pluribus Networks, a startup focused on low-latency virtualised networks. Founded in 2010 in part by former Sun Microsystems executives, Pluribus is located close to Tibco's headquarters in Palo Alto, CA. Its funding to date has come from Mohr Davidow Ventures and from NEA.

According to Tibco's Denny Page, the partnership will see Tibco offering messaging that is truly a part of the networking infrastructure. According to Pluribus product literature, its F64 Server-Switch gives customers the "ability to deploy application logic directly onto the switch, improving performance and reducing network latency." The F64 features 48 ports for 10GbE connectivity and four ports for 40GbE, providing Layer 2 and 3 switching functionality with latency of less than 400 nanoseconds.

Nasdaq OMX Introduces FPGA Data Feed

Nasdaq OMX has gone live with a version of its TotalView Itch full depth equities feed driven by FPGA technology. The
feed is being directed at trading firms that are particularly latency sensitive, with no queuing during data bursts that
occur at peak trading periods. The TotalView Itch 4.1 FPGA feed is identical in format to the standard feed, which is
processed and distributed using software-only code.

Because there is no queuing, messages on the FPGA feed can contain multiple price updates and so message sizes
could be larger than the standard feed, up to 9,000 bytes in total. Recipients need to connect via 10G or 40G links,
and should be able to handle data bursts up to 2,000 messages in a millisecond.
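As a rough sanity check on those burst figures, burst bandwidth scales linearly with message size. The average sizes below are assumptions for illustration (the 9,000-byte figure quoted above is a per-message maximum, not a typical size):

```python
# Back-of-the-envelope burst bandwidth for the quoted feed figures:
# 2,000 messages in a millisecond; average message size is an assumption.
def burst_gbps(msgs_per_ms: int, avg_msg_bytes: int) -> float:
    """Instantaneous burst rate in gigabits per second."""
    bytes_per_s = msgs_per_ms * 1000 * avg_msg_bytes
    return bytes_per_s * 8 / 1e9

print(round(burst_gbps(2000, 500), 1))   # 8.0  -> fits a 10G link
print(round(burst_gbps(2000, 1500), 1))  # 24.0 -> needs the 40G option
```

This kind of arithmetic is why recipients are told to provision 10G or 40G connectivity rather than 1G.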

Ullink Partners With Azul For Low Latency

Ullink is continuing its focus on squeezing microseconds of latency from its trade execution and pre-trade risk systems, by partnering with Azul Systems, which offers a low and predictable latency Java virtual machine (JVM).

Azul's Zing JVM is 100% compatible with the standard distribution but is optimised for the Linux operating system, reducing the latency and jitter often associated with the standard JVM. Applications - such as Ullink's trading system suite - that are written in Java require a JVM to execute on host operating system and hardware platforms.

Hibernia Connects to Amazon Cloud

Hibernia Atlantic is connecting its global network into Amazon's AWS cloud platform, via the latter's AWS Direct Connect facility, which provides higher bandwidth and possibly lower costs compared to internet connectivity. AWS Direct Connect is available at several Amazon platform instances, including those in New York City, Virginia and California, plus London in the UK and São Paulo in Brazil.

This connectivity could be of interest to financial markets firms looking to run applications on cloud infrastructure,
such as back testing and creation of quantitative models. Such applications generally require access to large volumes
of time series data.




