
WAN Speak Musings Volume V

Over the last months, Quocirca has been blogging for Silver Peak Systems' independent blog site, http://www.WANSpeak.com. Here, the blog pieces are brought together as a single report.
January 2014

This volume continues the series of aggregated WAN Speak blog articles from the Quocirca team, covering a range of topics.

Clive Longbottom Quocirca Ltd Tel: +44 118 948 3360 Email: Clive.Longbottom@Quocirca.com

Bernt Ostergaard Quocirca Ltd Tel: +45 45 50 51 00 Email: Bernt.Ostergaard@Quocirca.com

Copyright Quocirca 2014


WAN Speak Musings Volume V


Over the last months, Quocirca has been blogging for Silver Peak Systems' independent blog site, http://www.WANSpeak.com. Here, the blog pieces are brought together as a single report.

Why The Future Needs Local CDN
It is pointless moving vast amounts of static data around the internet, which is why content distribution networks (CDNs) have evolved. However, the same technology can, and probably should, be used within an organisation's own network.

Our Chief Weapon Is
Understanding the dependencies between the various parts of a technology platform (the server, storage and network components) is important, so as to make sure that solving one problem doesn't just appear as a different problem elsewhere.

It's Here: The Mid-Range Enterprise Storage Tsunami
EMC made a raft of announcements around its storage systems, and there was plenty of focus on mid-range systems. However, it looks like something may have been omitted.

Can Vendors Help Your Technical Strategy?
Vendors get a lot of stick (including from me) for misleading their prospects and customers. Maybe it is time that the vendors upped their game and worked with them in order to help define a suitable future strategy that works for all concerned?

SDN: An Apology
SDN has been touted as the silver bullet to end all silver bullets. However, as implementations start to be seen, issues have come to the fore and it is time to apologise for being a full-on SDN supporter. Possibly.

Three Pitfalls to Cloud On-Ramping
You've decided that cloud is for you. All your apps are now virtualised, and you are ready to flip the switch. Except for that pesky problem of all that data.

The Future For CoLo
Co-locational data centres seem to be doing well, which shouldn't be much of a surprise. However, not all data centres are the same, and the future will see the evolution and maturation of a new beast: the cloud broker and aggregator.

The Future Is Crisper And Clearer
The good news is that computer and consumer displays are getting better, with higher definitions. The bad news is that this could have major impacts on an organisation's networks. How? If the quality is there, users will utilise it.

Wires? So Last Year, Darling!
As EE launches super-high speed wireless broadband in the UK, does this start to challenge all that copper and fibre that has been laid down? Could super-high speed wireless be the future, at least for specific situations?

NFV & SDN: You Can Go Your Own Way?
As problems with the idea of software defined networks (SDN) come to the surface, service providers have decided that they need something a little more focused on their needs. Hence network function virtualisation (NFV). Is this a binary battle to the death?

How the Dark Side is Creating the Notwork
Spam email may just appear to be an easily dealt with problem, where device- or server-based software can eliminate a large percentage of the problem. However, nipping spam in the bud, closer to the point of creation, could have major beneficial impacts on the internet itself.

Hub a Dub Dub 3 Things in a Tub?
The Internet of Things (IoT) wants to break free of constrained pilot projects, but could find itself a victim of its own success. Three things need to be considered first, and by getting these right the IoT stands a much better chance of success.


Why The Future Needs Local CDN


Content delivery (or distribution) networks (CDNs) are generally perceived as being of concern only for wide area networks (WANs). The idea is that relatively static content can be distributed and held as close to a user as possible, so as to reduce the stress on a single server that would result from videos, music and other files all being streamed from a single place. The CDN provides stores of the files in multiple different places around the world, leading to a more balanced network load with less latency for those needing to access the files.

However, there is nothing to stop a CDN being implemented in a more local environment, and indeed this may be a good idea. Although local area network (LAN) bandwidths are growing at a reasonably fast rate, the traffic traversing these networks is growing at a faster rate. High-definition video conferencing, high-definition voice over IP (VoIP), increasing use of images in documents and other large single-file activities have moved traffic volumes up by orders of magnitude in some organisations.

The future will bring more data as we move toward an Internet of Things, with machine-to-machine (M2M) and other data migrating from proprietary networks to the standardised realm of the corporate IP network. Combine this with the need to deal with the world of big data, using sources not only within the organisation but also from along its extended value chain and information sources directly from the web, and it is apparent that LAN bandwidth may remain insufficient to support the business's actual needs. Sure, priority and quality of service based on packet labelling can help, but this requires a lot of forethought and continuous monitoring to get right.

Much of the M2M and other Internet of Things data will be small packets pushed through the network at frequent intervals, and this sort of chatter can cause problems. Some of this data will need to be near real time; some can be more asynchronous. The key is to ensure that it does not impact the traffic that has to be as close to real time as possible, such as the video conferencing and voice traffic.

A large proportion of an organisation's data is relatively unchanging. Reports, contracts, standard operating procedures, handbooks, user guides and other information used on a relatively constant basis by users will change only occasionally, and this is ideal for a CDN to deal with. These files can be pushed out once to a data store close to the user, so that access does not result in a long chain of data requests across the LAN and WAN to the detriment of other, more changeable or dynamic data assets. Only when the original file changes will a request from the user trigger the download of the file from the central repository, at which point it is placed in the local store again, such that the next user requesting the file gets that copy.

Many of the WAN acceleration vendors already have the equivalent of a CDN built into their systems through the use of intelligent caching. In many cases, there is little to do to set up the capability: the systems are self-learning and understand what changes and what doesn't. In some cases, there may be a need for some simple rules to be created, but this should be a once-only activity. Extending the use of WAN acceleration into the LAN could bring solid benefits to organisations that are looking to move towards a more inclusive network, and now may be the best time to investigate implementing such an approach, before it gets too late.
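Purely as an illustration of the local caching pattern described above, here is a minimal sketch of a store close to the users that re-fetches a file only when the original has changed. The simulated origin, the checksum-based change test and the file names are assumptions for the sake of a runnable example, not a description of any vendor's product; a real system would use a lightweight change check (ETag or modification time) rather than reading the origin copy.

```python
import hashlib

# Hypothetical "central repository": path -> current file content (bytes).
# In a real deployment this would be a file server or HTTP origin, and the
# "has it changed?" test would be a cheap ETag/mtime lookup, not a full read.
ORIGIN = {
    "handbooks/hr-policy.pdf": b"version 1 of the HR policy",
}

# Local store sitting close to the users: path -> (checksum, content)
local_store: dict[str, tuple[str, bytes]] = {}

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def get_file(path: str) -> bytes:
    """Serve a file locally, pulling from the central repository only when
    the original has changed since it was last cached."""
    origin_data = ORIGIN[path]
    origin_sum = checksum(origin_data)

    cached = local_store.get(path)
    if cached and cached[0] == origin_sum:
        return cached[1]              # unchanged: served from the local store

    local_store[path] = (origin_sum, origin_data)   # changed or never seen: refresh once
    return origin_data

print(len(get_file("handbooks/hr-policy.pdf")))      # first request populates the store
ORIGIN["handbooks/hr-policy.pdf"] = b"version 2, now with remote-working rules"
print(len(get_file("handbooks/hr-policy.pdf")))      # change detected, copy refreshed once
```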


Our Chief Weapon Is


Monty Python, that most British of eccentric comedies, had a sketch about nobody expecting the Spanish Inquisition. A scarlet-befrocked cardinal bursts in through the door and declares: "Our chief weapon is surprise. Fear and surprise. Our two main weapons are fear and surprise and ruthless efficiency. Err, amongst our chief weapons are..."

This seems to me to be a pretty good analogy for what we are seeing in the IT industry these days. The role of specialists in a world where everything is dependent on everything else leads to some interesting, and often unexpected, results.

IT is still being pushed to the buyer using speeds and feeds: this server can do more GFlops than that one; this storage array can do more IOPS than that one; this network can carry more bits per second than that one. However, this is meaningless where the world's fastest server is attached to a wax cylinder for storage, or where wet string and aluminium cans are being used as networks. The world's latest sub-photonic multi-lambda network is of no use at all if all your storage can manage is to free up data at a few bytes per second.

Users are getting confused: they identify the root cause of a performance issue and invest in solving it, only to find that the solution doesn't give the benefits they expected. All that has happened is that the problem has been moved from one place to another. The paucity of disk performance has been solved, but the server CPU performance can't keep up with the data; the servers have been updated, but the network is now jittering all over the place.

Even worse is where the three main variables of server, storage and networks are all dealt with, and everything looks great. Green markers on all sysadmin screens; everything is pointing toward absolutely fantastic performance from the IT platform. The IT manager is preparing a shelf at their home for the Employee of the Millennium award that is surely coming their way. Except that the help desk manager has red flags on their screens. Users are calling in with application performance issues, unintelligible VoIP calls, connectivity drop outs. Sure, the data centre is working like a super-charged V12 engine; the problem is that the wide area network connectivity is working far more like a 1940s John Deere tractor.

To get that special award, the IT manager has to take a more holistic view of the entire ITC (IT and comms) environment. As virtualisation and cloud continue to become more the norm, focusing on specialisms will not provide the overall optimisation of an organisation's IT platform that is required to make sure that IT does what it is there for: supporting the business. End-to-end application performance monitoring is needed, along with the requisite tools to identify the root cause of an issue, alongside tools that can then play the "what if?" scenarios. If I solve the problem of a slow server here, what does this do to the storage there? If the storage is upgraded, how will my LAN and WAN deal with the increased traffic?

Only by understanding the contextual dependencies between the constituent parts of the total platform can IT be sure of its part in the future of the organisation's structure. However, dressing up as a cardinal and charging in to management meetings shouting "Nobody expects the effective IT manager!" may at least get you noticed...


It's Here: The Mid-Range Enterprise Storage Tsunami


I went to the virtual races with Lotus Renault in Milan in early September to partake in EMC's global mid-range storage tech announcements, and you just have to give EMC an A for effort: making storage racks look sexy really requires determination and deep marketing pockets. The event was located in a hangar-size television studio, which the upward of 500 participants entered through an umbilical cord tunnel embellished with race track motifs, accompanied by the roar of high-octane racing cars, one of which was parked in black and gold Lotus colours to greet the guests as they emerged from the tunnel.

This was an all-day, worldwide event going out to some twenty EMC event sites in Asia, the US and several other locations in Europe. In the centre of the studio was a news desk where a TV host duo was continuously interviewing launch luminaries. The news desk fronted the stage and product presentation area, which was flanked by multiple 6-foot cabinets hidden under satin drapes, waiting to be unwrapped.

David Goulden, EMC president and COO, presently took the stage and launched into the story of EMC and the four major trends: mobile, cloud, big data and social media. Together, these trends are driving WAN data volumes along an exponential growth curve. It took EMC 20 years to sell one exabyte (that's a 1 with 18 zeroes) of data storage. The following year, in 2010, EMC sold one exabyte in a single year, and earlier this year the company sold one exabyte of storage in a single month.

And it's not because the boxes have gotten bigger. Memory's physical size is shrinking, just as the systems' capacity to store and retrieve blocks, files and objects continues to speed up, and flash storage keeps the latency issues at bay. So, in a roundabout way, our capacity to store and retrieve data is expanding to keep pace with our growing need to generate data that needs storage. Correspondingly, the physical footprint of our storage facilities remains the same, and what we pay for our enhanced storage capabilities is also pretty stable, with the 2013 corporate storage budget delivering 5 times more storage capacity than it did just a year ago.

Data centre virtualisation and the ascent to private/hybrid/public cloud environments is another innovation growth zone that EMC wants to play in, and the customer hook, apart from storage density and price, is ease of use. EMC wants to make private cloud solutions as easy and flexible as public cloud offerings. This is at the heart of the Nile project: an elastic cloud storage device for private clouds and online service providers. It provides business users with a simple interface to their corporate cloud store and lets them pay by the drink. They can select block, file or object storage type, then set performance parameters and capacity, and finally the amount of elastic block storage required. In the mid-range VNX environment the monthly price for 500GB of storage is typically $25. Nile solutions will be on the market in early 2014.

An interesting omission from all the goodies announced in the mid-range storage market, where EMC reigns supreme, is built-in anti-virus solutions. This is an often-requested feature, which EMC, with its RSA security subsidiary, should be eminently well positioned to deliver. Given the multiple entry points into corporate networks, the multitude of employee BYOD devices, and the value that stored data represents, security needs to reside on the platforms as well as in the applications. Need another box with that order, sir?


Can Vendors Help Your Technical Strategy?


Talking with end-user organisations, it is becoming apparent that the world I grew up in, the world of companies having 5-year plans, is changing. No-one really saw the financial collapse coming, and the impact of speculators on world markets can change situations on what seems to be a day-to-day basis. This has made corporate strategising difficult, and it is apparent that organisations are struggling to figure out much true planning beyond a few months, and in many cases just to the 12-week cyclical horizon forced upon them by their shareholders.

At the IT level, this is compounded by the pace of change in today's technology. Few saw cloud computing, fabric networks, converged computing or big data coming along at an early enough stage to make them part of their strategy, and many are only now beginning to move some of these technologies into mainstream use.

If the end-user organisation is struggling to make longer-term strategic decisions, then can it make shorter-term ones that will still contribute toward a valid vision of a future state? My belief is that it can, but at the technology level it will need help, and this help should come from those who are responsible for technology changes: the vendors. The vendors should be able to present not only what they are doing now, but also provide the longer-term vision to their customers and prospects for them to evaluate and buy into, if they believe in the proposition.

Figure 1

OK, it is equally difficult for a vendor to see the future with any clarity as it is for an end-user organisation, but they can at least provide a view on what they believe could happen. Figure 1 shows Quocirca's approach to creating a vision, which we call a Hi-Lo Road Map. This works in the following way.

If an end user asks a vendor what functionality they have right now, the vendor can only offer what they have on their books at the moment. This gives a base point to work from of functionality against time, or point 0,0 on the graph. However, in six months' time, the vendor has the capability to change the functionality of their products. Depending on the financial climate, the vendor may have little money to invest in R&D; should there be more money available and more pull from customers, then they could invest more. This leads to a possible spread in functionality, shown by the dark blue segment of the graph. As time goes further out, the possible spread of functionality becomes larger.

However, the vendor is at least providing limits to their vision and giving the customer (or prospective customer) a vision to buy into. The model allows the vendor to say to an organisation that it will ensure that the functionality it provides will never drop below a certain point, as this would show that the vendor was being too cautious in its approach. Neither will it push too hard into being overly ambitious and cause the customer to back out of a technology approach to something more mainstream.

Recently, I attended a couple of events by essentially similar vendors. One was completely open about where it was going and its vision of the future. It fell easily into this model, and the end-user organisations present, along with the partners, were firmly bought in to its approach. The other peppered its presentations with non-disclosure warnings. Therefore, I cannot state to its prospects or customers exactly what the vendor sees as the future, even though it has a good portfolio of products.

Which would you choose as a partner? The one with the open, long-term vision, or the one who says "we can help you, but we can't tell you where we're going"?

SDN: An Apology
OK. I admit it. When I first started looking into SDN, I fell for it. Hook, line, sinker; in fact, the fishing rod and the fisherman as well. The simplicity of it all was so dazzling that I suspended my normal cynicism and went for it. SDN was going to change the world: Cisco, Juniper and other intelligent switch manufacturers were dead, and we'd all be using $50 boxes within months as all the intelligence went to the software layer. Well, hopefully I wasn't that bad, and I have raised plenty of issues along the way, but I did miss one tiny little problem.

I'm OK with the idea of multiple planes, and I'm OK with two of them being as defined within the SDN environment. However, there is one that still niggles: the control plane.

The data plane has to stay down at the hardware level. If it wasn't there, then the data couldn't be moved around. This could be filed under a statement of the obvious. The management layer can, and should, be moved to the software layer. The rules and policies to do with how data should be dealt with are better served by being up in the software layer, as a lot of the actions are dependent on what is happening at that level. Great, we have two layers sorted.

Now the thorny issue of the control plane. With SDN this is meant to be abstracted to the software layer too, but it just won't work unless there is a great deal of intelligence at the hardware layer. However, the hardware layer is meant to have little to no intelligence in an SDN environment. The problem is that if every packet has to be dealt with via the control plane, then it needs to jump from the hardware to the software layer and back down again, introducing large amounts of latency into the system. This may not be a problem for many commercial data centres, but it is a complete no-no for service providers.

So, to get around the latency issue, only those packets that need action taking on them should be sent up to the software layer. This could work, but something has to decide what packets should go up to the software layer. This would be something along the lines of, oh, I don't know, how about a switch operating system and some clever ASICs or FPGAs? And while we're at it, we may as well get rid of that last problem of SDN latency by not sending any of the packets to the software layer and do everything at the hardware layer: it is far more efficient and effective. In other words, a switch as we already know it.
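To make that decision concrete, here is a minimal sketch loosely modelled on the OpenFlow idea that only flow-table misses are punted up to the software control plane, while subsequent packets of the same flow stay on the hardware fast path. The flow-table structure, the controller stand-in and the port names are illustrative assumptions, not any vendor's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    dst_ip: str
    dst_port: int

# Hardware flow table: match -> forwarding action (held in ASIC/TCAM on a real switch)
flow_table: dict[FlowKey, str] = {}

def controller_decide(key: FlowKey) -> str:
    """Stand-in for the software control plane: the slow path, consulted rarely.
    In a real SDN deployment this would be a round trip to an external controller."""
    action = "out_port_2" if key.dst_port == 443 else "out_port_1"
    flow_table[key] = action          # install the rule so later packets stay in hardware
    return action

def forward(packet: dict) -> str:
    key = FlowKey(packet["dst_ip"], packet["dst_port"])
    if key in flow_table:             # fast path: handled entirely at the "hardware" layer
        return flow_table[key]
    return controller_decide(key)     # slow path: only flow-table misses pay the latency cost

# Only the first packet of a flow takes the slow path
print(forward({"dst_ip": "10.0.0.5", "dst_port": 443}))  # goes to the controller
print(forward({"dst_ip": "10.0.0.5", "dst_port": 443}))  # served from the flow table
```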



Does this mean that SDN is dead? By no means. Getting the management plane as an abstraction makes a great deal of sense. Ensuring that the messages the management plane sends down to the hardware, and the messages the hardware sends back to the management plane, are streamlined and effective will be key. All switch manufacturers will need to support a common set of standards such that the entire network estate appears as a single resource pool and everything can be driven from the software management layer. Hardware manufacturers will still be able to differentiate on capabilities within their operating systems and silicon, which may be good for the market anyway, as this will still drive network innovation.

SDN isn't dead. It may be different to what I envisaged originally, but it will still change the world through driving a more standardised network, and in allowing the network to be integrated into the overall IT platform with greater ease.

Three Pitfalls to Cloud On-Ramping


You've decided that you're going to move to an external cloud platform. You've provisioned the platform with the applications you need, and you're ready to go. Except for the few quazillion bytes of existing data that are sitting in your existing data centre and need to be moved over to the new platform. A while back, I discussed how a logistics company and a physical storage device is probably the best way to get large amounts of data onto an external cloud platform. But what about just using the internet to do this? Can it be done effectively? There are three pitfalls to watch for.

1) Volume. The first issue is just the volume of data involved and the bandwidth available for moving it. The first task, therefore, has to be minimising the amount of data that needs to be transferred. Data cleansing followed by deduplication can reduce volumes by up to 80%, and transfer times will be proportionately shorter. Data compression, packet shaping and other network acceleration techniques can also help in reducing the time required to move the data from the old data centre to the cloud. However, this then brings us on to point two.

2) Network issues. Even a few terabytes would take a long time to move over anything other than a high-speed link dedicated to the organisation, and most would be sharing this link with all their other internet traffic. Therefore, it is important to ensure that the correct priorities are applied to the various types of data being transferred across the connection. Voice and video will probably already be receiving high priority, with most other data being allocated a standard class of transport. The temptation is to look at the mass data of the transfer as being the same as a backup, and as such to allocate it a low priority. This will mean that the data will only transfer when there is little to nothing else happening, and the transfer times will be extended. So, if possible, the best option is a dedicated physical link. If not, use a dedicated virtual portion of the connection, so that you can calculate pretty accurately how long the transfer is going to take. Only as a last choice should the data be transferred over the same shared connection as your organisation's other data. Why? Because of point three.

3) Synchronisation. Even when the data transfer is happening as fast as possible, it still takes a finite amount of time. During this time, the data at the source continues to change: new transactions take place, new data is created. Therefore, what you end up with at the cloud side is not the same as what you have at the source side, which is a bit of a problem.

A couple of ways around this include the unlikely one of shutting down the application that is creating the data for the period of time that the data transfer is taking place. This has the obvious, and unwelcome, side effect of halting the business. No, the main way of dealing with this is to iterate the transfer at a delta level. Assume that you start with 10TB of data, with each day creating 1% more data (so, 100GB). Assume that the original transfer of data to the cloud takes 1 day. The original data transfer therefore is of the original 10TB, but this leaves you with yesterday's data, not today's. You now have 100GB of delta data to transfer across. This is not as fast a task per byte as the original transfer, as comparisons have to be carried out against what is already there and what has changed. For the sake of argument, let's say that this 1% of extra data takes 10% of the time to move across.



Now, we have moved closer to our end result. However, there has been a change to the source data again: the hour or so that it took to move the delta data across has resulted in a further 10GB of new data being created. The iterations could go on for a long time, but at the 10GB level you are probably at the point where the last synchronisation of data can be carried out while the planned switchover of the application is taking place, with minimum impact to the business. The key is to plan how many iterations will be required to get to a point where a final synchronisation can be more easily done; a rough calculation of this kind is sketched below.

So there are three main areas to deal with: data volume can be dealt with through cleansing, data deduplication and wide area network optimisation technologies; network issues can be dealt with through virtual data links and/or prioritisation; and data synchronisation can be handled through planned iterations of synchronisation, followed by a final off-line synchronisation.

Or you could use a logistics company...
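Purely as a planning aid, the iteration arithmetic above can be sketched in a few lines. The figures are the article's illustrative ones (10TB of initial data, 1% daily growth, a one-day bulk copy, deltas roughly ten times slower per gigabyte), not measurements from any real migration.

```python
# Back-of-the-envelope sketch of delta-iteration planning for a cloud migration.
# Assumptions (taken from the worked example above, purely illustrative):
#   - 10 TB of data to start with, growing 1% per day
#   - the full initial copy takes 1 day
#   - each delta pass moves data at one tenth of the bulk-copy rate per GB
#     (deltas are slower per byte because of the compare work)

initial_gb = 10_000                         # 10 TB expressed in GB
daily_growth = 0.01                         # 1% new data per day
bulk_days_per_gb = 1 / initial_gb           # full copy: 1 day for 10 TB
delta_days_per_gb = 10 * bulk_days_per_gb   # deltas ~10x slower per GB

cutover_threshold_gb = 10    # small enough to sync during the planned switchover

delta_gb = initial_gb * daily_growth * 1.0  # data created during the 1-day bulk copy
iteration = 0
while delta_gb > cutover_threshold_gb:
    iteration += 1
    transfer_days = delta_gb * delta_days_per_gb
    # new data created while this delta was in flight becomes the next delta
    next_delta_gb = initial_gb * daily_growth * transfer_days
    print(f"iteration {iteration}: move {delta_gb:.0f} GB in "
          f"{transfer_days * 24:.1f} h, leaving {next_delta_gb:.0f} GB behind")
    delta_gb = next_delta_gb

print(f"after {iteration} iteration(s), {delta_gb:.0f} GB remains for the final cut-over sync")
```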

The Future For CoLo


Times are looking good for co-location (co-lo) facilities. As commercial entities realise that building yet another data centre for their own use is not exactly cost-effective, they look towards the value of using an external one for some or all of their needs.

For some, the use of the XaaS (whatever-it-may-be as a service) model will be the way they go. However, there will always be the server huggers: those who realise that owning the facility is now counter-productive, but who still want to own the hardware and software stack within the facility. For these people, letting someone else worry about power smoothing and distribution, environmental monitoring, auxiliary power generation, facility cooling and connectivity makes a lot of sense.

And then, of course, there are those who want to be in the XaaS market, but as a provider, not as a user. Providing services to the general public and commercial organisations can be fraught with danger, particularly if you are new to market with a relatively new idea. Will everyone go for The Next Great Idea? If so, going it alone and building a full data centre requires a crystal ball of the largest, clearest magnificence: get it wrong and too many customers appear, and you find yourself needing to splash out on another facility far too early. Get it wrong the other way, and not enough customers come to you, and you find yourself trying to keep a vast, empty data centre going with low cash flows. No, better to go for co-lo, and start at the size you need, with the knowledge that you can grow (or shrink) as your needs change. Funding the relatively small amount of money needed up front to get off the ground can then mean that cash flows come through the hockey-stick curve rapidly, and the positive, profitable cash flow can then be used to fund the incremental increases in space that the service provider needs as customer volumes increase.

So far, hopefully, all pretty obvious. Co-lo does, however, offer both users and service providers further advantages. As there will be many customers within a single facility, they are all capable of interacting at data centre speed. Therefore, a service provider housed in the same co-lo facility as its own customer can pretty much forget about any discussions on data latency: core network speeds will mean that this will be down in the microseconds for a well-architected platform, rather than the milliseconds. Even where the service provider is in a different physical facility than the customer, but with the same co-lo provider, the interconnectivity between the facilities will minimise latency far below what an organisation could hope to get through the use of its own WAN connectivity. Even between co-lo data centres owned by different companies, the amount and quality of connectivity in place between the two facilities will still outperform the vast majority of in-house data centres. Combine all of this with judicious use of WAN acceleration from the co-lo facility to the end user in the headquarters and/or remote offices of the end customer, and a pretty well-performing system should be possible.

For co-lo providers like Interxion and Equinix, this points to a need to be not just a facility, nor even a fully managed, intelligent facility. What is needed is what Interxion calls a community of customers, partners and suppliers. By working as a neutral party between all three, the provider can help advise all its customers on how best to approach putting together a suitable platform. It can also advise on which of its partners can provide services that could make life a lot easier for a customer who may, through no fault of their own apart from lack of time to check into everything available out there, be intent on re-inventing the wheel. In some cases, this may also mean advising on what services are provided from outside its facilities, such as the use of Google or Bing Maps in mash-ups, rather than buying in mapping capabilities.

This breeds a new type of professional service: the co-lo provider which does not provide technology services per se over and beyond the facility itself, nor provides systems integration services, but does provide the skills to mix and match the right customers with the right service providers. Backing this up with higher-level advice on how to approach architecting the end customer's own equipment, to make the most of the data centre interconnect speed and minimise latencies to provide the best possible end-user experience, should be of real value to all co-lo customers, particularly commercial companies struggling to fully understand the complexities of next-generation platforms.

If you are looking for co-lo, Quocirca heavily recommends that you ask prospective providers whether they plan to offer such a neutral brokerage service, and if not, walk away.

The Future Is Crisper And Clearer


I've attended a few events lately with vendors in the PC, laptop, tablet and display markets. In their attempts to drive a desire in users to upgrade or change their devices, these vendors are finding it a struggle to push the speeds and feeds as they have done of old. Whereas there are still the technogeeks who will pay for faster processing, better graphics and a humungous great fast hard disk drive, the majority of users are now more like magpies: if it catches their eye, then it's a good start. They are looking for something that looks nice and enables them to make a personal statement in the bling stakes.

Therefore, more vendors are making greater style statements in their new offerings. Ultrabooks are thinner and more stylish; tablets are lighter and sleeker; smartphones are glossier. However, the biggest move seems to be on the display front. With Apple having pushed the Retina display for a while now, others have also gone for increasing pixel density to match or exceed standard HD (1920x1080 pixels). However, there seems to be an increasing push now to go for either of the ultra-high definition video standards, 4K (3840x2160 pixels) or 4Q (2560x1440 pixels). On top of this is deep colour depth, generally 8- or 10-bit, the latter giving around 1.07 billion possible colours, far more than the human eye can actually perceive.

On the face of it, this is a pretty simple evolution that is becoming more of a requirement. Many screens are getting larger: some professionals in the media markets will be using 30in or 40in screens for their work, and at that size standard HD can look a little grainy. Cameras and videos can now create images that run to multiple tens of millions of pixels, so even a 4K display's 8.3 million pixels will only be displaying a cut-down version of the real image. However, for a smartphone or a 10in tablet, it does seem a little on the overkill side.

The problem for many, though, will be the impact this could have on the network. Textual documents using TrueType fonts will not be a problem: their size will stay the same. However, with pin-sharp resolution on their screens, many content creators will now move from 72dpi graphics to 300 or even 600dpi images, with a massive impact on document size. Images, even those being posted on Facebook, will be sized to impress those on such high-resolution screens. Videos will be streamed at the higher resolutions by those with the capability to show them on their screens.

This will have an impact on the underlying network, unsurprisingly. A reasonably well-compressed, visually lossless HD film currently requires around 3.5Mb/s of bandwidth. Move this to 4K, and you are looking at 14Mb/s: a good means of bringing a network to a halt if ever I have seen one. Sure, an organisation could prevent such large files from being accessed, but such a negative approach will only be putting off the inevitable. Technologies will be required that allow such high definitions and larger files to be embraced and encouraged. More efficient compression codecs; incremental viewing of files using a low-resolution first pass, building up to high resolution; content delivery networks; and other caching and network acceleration techniques will all have a part to play.

From the noises I'm hearing from the likes of Dell, Fujitsu, Lenovo, ViewSonic and Iiyama, 2014 will be the year of introducing 4K/4Q displays. This will lead to an increasing network load of higher-definition files through 2015 and beyond. Maybe it is time to start planning for this now.
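The jump from 3.5Mb/s to around 14Mb/s follows from simple pixel arithmetic: 3840x2160 carries four times the pixels of 1920x1080, so at similar compression quality the stream needs roughly four times the bandwidth. A back-of-the-envelope sketch of that scaling (the 3.5Mb/s HD baseline is the figure quoted above; the assumption of linear scaling with pixel count is a simplification, since codec efficiency varies):

```python
# Rough bandwidth scaling by pixel count (a simplification: real codecs do not
# scale perfectly linearly, but it is close enough for capacity planning).
hd_pixels = 1920 * 1080
uhd_pixels = 3840 * 2160
hd_bitrate_mbps = 3.5            # quoted figure for visually lossless HD

uhd_bitrate_mbps = hd_bitrate_mbps * (uhd_pixels / hd_pixels)
print(f"estimated 4K bitrate: {uhd_bitrate_mbps:.0f} Mb/s")   # ~14 Mb/s
```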

Wires? So Last Year, Darling!


EE, a mobile operator in the UK, has just started trialling a small, controlled LTE-A 4G network capable of providing bandwidth speeds of up to 300Mb/s. This is possible through aggregating different spectrum, with 20MHz of 1800MHz and 20MHz of 2.6GHz spectrum being used to provide a combined ultrafast network speed.

EE has also chosen its first roll-out location carefully. It would have been easy to choose some low-traffic environment in a quiet backwater somewhere. Instead, EE has decided to carry out the trial in the UK's Tech City environment, the centre for technology start-ups and support companies in London's East End. Full of techies stuffed to the gills with gadgets and demanding the latest and greatest of everything, Tech City will be a lightning rod for any problems of bandwidth contention, packet jitter and collisions. With many of the companies in the area combining data with voice and video, the testing of this ultrafast service should be severe, and it will be interesting to see how EE manages to deal with it all.

300Mb/s is a great deal more than the average business connection speed in the UK at the moment, with the majority of small and medium businesses using ADSL or ADSL2 connections giving connection speeds averaging out around the 14Mb/s level for downloads, with much smaller capacity on upload speeds. However, there are headline speeds and there are realistic speeds. Although Virgin Media states that 66% of its customers can expect just over 94Mb/s from its "up to 100Mb/s" fibre to the home (FTTH), an independent site says its research, polled from real-world measurement of people's service, shows that 100Mb fibre tends to give closer to 40Mb/s. When other connectivity methods from other providers are taken into account, such as fibre to the cabinet (FTTC) and copper ADSL/ADSL2/ADSL+/SDSL, the UK's overall average connection speed is just shy of 15Mb/s.

The problem here is that a lot of wired systems (including optical wire, i.e. fibre) are very dependent on the distance the signal has to travel within the wire. Therefore, if the FTTC cabinet just happens to be 2 metres from the exchange, you will get blazingly fast speeds; if it is a mile from the exchange, then your speeds will be mediocre in comparison. Then there is contention: having multiple different users on the same wire at the same time will impact just how much of your data can be carried at any one time. At 3am, you may find that you are the only one on the line, and everything is zipping along. At 10am, as other businesses and consumers start downloading songs, videoconferencing and generally clogging up the system, problems will occur.



And you will find it very difficult if you live out in the sticks: the investment is only there to provide fast services where there is money to be made. If you are not deemed worthy, then you will have the technical equivalent of a wet piece of string to send your data down.

4G does away with the wires and makes it easier to provision connectivity in out-of-the-way places. What it doesn't necessarily do is provide unlimited bandwidth, and it still suffers from proximity issues. The EE pilot will identify what it can do around the first issue, and should give pointers as to how dense the radio masts will have to be to give adequate cover for an area without creating too much of a noise level. However, should 4G LTE-A prove to be successful, it will offer another tool in the consumer and business connectivity toolbox, and for many it could prove to be the force behind dropping the need for a wire at all.

NFV & SDN: You Can Go Your Own Way?


SDN is maturing: many network vendors now offer variations on a theme of OpenFlow or other SDN-based devices. However, the latest launches from Cisco via its Insieme group show how the market may well evolve, with no big surprises. Insieme is nominally an SDN-compliant architecture, with interoperability with OpenStack cloud platforms. However, as is Cisco's wont, it only provides its best capabilities in a Cisco-only environment, as the Application Policy Infrastructure Controller (APIC) is dependent on there being specific Nexus switches in place. And Cisco isn't the only one causing problems; more on this later. So SDN starts to get a bit dirty, with only parts of the promised abstraction of the control layer being possible in a heterogeneous environment.

As SDN and OpenFlow mature and hit some problems, service providers have come up with a different group to try and meet their own problems. Network Function Virtualisation (NFV) tries to create a standardised manner of dealing with network functions within a service provider's environment, aimed at attempting to control the tail-chasing that they have to do in trying to keep up with the continuous changes from the technical and vendor markets. From the original NFV white paper, we can see what the service providers are trying to do:

"Network Functions Virtualisation aims to address these problems (ranging from acquisition of skills, increasing heterogeneity of platform, need to find space and power for new equipment, the capital investment required and the speed to end-of-life of hardware appliances) by leveraging standard IT virtualisation technology to consolidate many network equipment types onto industry standard high volume servers, switches and storage, which could be located in Datacentres, Network Nodes and in the end user premises. We believe Network Functions Virtualisation is applicable to any data plane packet processing and control plane function in fixed and mobile network infrastructures."

This sounds an admirable and suitable aim, but the mention of data and control planes seems to place this well in the face of SDN. Are we seeing a battle to the death between one standard founded in academia and pushed by the vendors (SDN) and one founded and championed by those having to deal with all the problems at the coal face (NFV)?

Probably not. NFV is aimed at dealing with specific cases, and is really looking at how a service provider can collapse certain data functions down into a more standardised, flexible and longer-lived environment. SDN is looking at layering on top of this a more complex and rich set of data functions aimed at providing applications and users with a better overall data experience. The two can and should work together in harmony, to each other's benefit. However, there will be an ongoing need to ensure that there is not a bifurcation of aim, that there remain adequate touch points between each approach, and that the standards put in place by each group work well with each other.



This then brings us back to the original part of the piece: although bifurcation of SDN/NFV would be bad enough, a forking of SDN by the vendors would be worse. Cisco's and other vendors' oft-trodden path of providing nominal support for a standard, but in such a way as to try to tie people to their own kit, is not good for SDN, nor for the end user. Other vendors making similar noises with their own SDN projects and software, such as Juniper with Contrail, Alcatel-Lucent with Nuage, and VMware with Nicira, may have better intentions and a more open approach, but the overall messaging does mean that we are entering a fairly typical vendor market, where a good idea is becoming bogged down in efforts to ensure that no vendor has to cannibalise its existing approach too much.

In actuality, as organisations are becoming more aware of the need for data agnosticism, and of how the world of hybrid cloud is, by its very nature, one of heterogeneity, this could rebound (and looks like it already is rebounding) on some of these vendors. With Cisco in particular, the outlook for the future, presented by CEO John Chambers, is poor. Predicting a revenue drop of between 8% and 10% in this quarter, based on year-on-year figures, it sees emerging markets showing a marked collapse in orders. With many of these markets being a prime target for approaches such as SDN and NFV, due to many projects being green field, or full replacements of old kit ill-suited to new markets, it looks like Cisco is being bypassed in favour of those who can offer a more standardised approach to a heterogeneous environment.

Alongside APIC, Cisco also has its Open Network Environment (ONE) platform, and is heavily involved in the OpenDaylight project with the Linux Foundation, to which it has donated part of its ONE code. If it is going to be an ongoing force in the new network markets, it will need to provide a stronger message in both the completely open SDN and NFV markets.

It is to be hoped that the vendors do not strangle this market before it has really taken off. SDN and NFV need to work together: the vendors need to create this new network market first, and then fight over who does what best in a heterogeneous environment. Only through this will the end user be the winner.

How the Dark Side is Creating the Notwork


Despite the massive strides being made in the bandwidth available to users through improved consumer and business connectivity, often the internet just doesn't seem responsive enough. Sure, poorly crafted web sites with poor code can be part of the problem, but just a small part of a larger overall problem.

According to Kaspersky Labs, averaged across 2012, spam accounted for 72.1% of all emails, of which around 5% contained a malicious attachment. Depending on which source you trust, you will be told that between 150 and 300 billion emails are sent every day, or up to 110 trillion emails per year, of which 80 trillion will be essentially just taking up bandwidth (and people's time, if the messages get through to them). Although spam volumes are falling, as standard advertising emailers find it less worth their while to use email for these activities, the organised, more malicious spammers are now taking over, with more worrying possible outcomes.

For the organised blackhat using phishing (the sending of what looks like a valid email with a targeted message and links to external code or other means of getting a user hooked), or using emails as a means of introducing a Trojan or other payload, email is still a tool of choice. These messages are often harder to pick up as spam, as they are more targeted, but tools are there to try and identify them based on behaviour modelling and pattern matching. With the rise of emails introducing ransomware code such as CryptoLocker, more people and organisations are finding that spam is not just time consuming, but that it can also be very expensive to deal with.

The network hit of a single email is relatively low; however, unless the masses of spam and malicious emails are stopped at source, millions of such messages will take up a horrendous amount of overall bandwidth. With the growth in image and even video based spam, the average email size is increasing, slowing down the internet to a point where a large email botnet spewing out billions of emails per day can make certain areas of the internet a notwork, just too slow for any real work, particularly in emerging economies.

There are benefits to service providers, telecoms companies and end users alike in moving from the mindset of "it's a packet of data and we'll shift it" to "it's a packet of useless data and we'll dump it". The main one is simply freeing up the internet from a large chunk of the data volume that is slowing it down. Removing 80 trillion useless messages from the global internet would help to speed up slow areas, and make the management of more real-time traffic, such as voice and video, easier. It would also help prevent malicious spam with links to external code, and phishing messages, from being acted upon by less-aware users. Also, with the internet of things (IoT) becoming an increasing blob on the radar, the freed-up bandwidth allows the chattiness of IoT traffic to be embraced more easily, without massive new investments in yet more bandwidth.

Although proactive filtering and blocking of spam as close to the wire as possible seems to make sense to everyone except the blackhat, there seems little appetite for a concerted approach to trying to stop spam. It all seems piecemeal, and this still allows the blackhats to carry on, with the impact that can have on networks, individuals and organisations. If service providers and telecoms operators would work together, rather than in an uncoordinated manner, and attempt to stop spam as close to the source as possible, it would start to turn the tables. Active filtering of email streams by all the major players would drop spam volumes immediately. Increased blacklisting of service providers who allow spamming, as well as better policing of IP address blocks being used for spamming from private servers, would help to stamp out the volumes at source.

Access device-based anti-spam tools, such as those from Kaspersky, Norton and McAfee, will do little to ameliorate the impact on the overall network, as they only work against traffic that has already traversed the network. What is required is something closer to the wire. This was put forward by Trend Micro many years ago, but at the time the capability and cost to deal with line-speed treatment of information streams was not quite up to the task. Symantec and Dell, amongst others, have brought out appliances that enterprises and service providers can use. Symantec, GFI and Wedge Networks have cloud-based systems that can act as filters to remove spam for organisations with their own high-volume email servers and for service providers offering hosted email. Using such systems across a geography such as the UK, or better still across Europe, would start to really hurt the spam merchants.

The blackhats would start to see that any mass mailing approach would be less effective, and would either have to give in or move to using a different approach. They could use different vectors, such as trying to depend more on, for example, port 80 data transfers and hooking people via malicious code on web sites, but it becomes harder to lure people there if your emails can't get through to them. They could go for extreme targeting (spear phishing), moving from a numbers game of depending on one being born every minute to trying to identify the high-worth targets and concentrating on them with single emails showing no content or behavioural pattern.
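To illustrate the difference between filtering at the access device and filtering closer to the wire, here is a deliberately simple sketch of a relay-side check that scores a message before it is ever forwarded downstream. The scoring rules, threshold and domain names are illustrative assumptions only; a real deployment would combine DNS blocklists, sender reputation and behavioural models rather than a handful of static patterns.

```python
import re

# Illustrative scoring rules; real filters use DNSBLs, sender reputation,
# and statistical/behavioural models rather than a short static list.
SUSPECT_PATTERNS = [
    (re.compile(r"click here", re.I), 2),
    (re.compile(r"\.exe(\b|$)", re.I), 3),
    (re.compile(r"urgent.*account.*verify", re.I | re.S), 4),
]
REJECT_THRESHOLD = 4
BLOCKLISTED_DOMAINS = {"known-bad.example"}   # stand-in for a reputation lookup

def spam_score(sender: str, subject: str, body: str) -> int:
    score = 0
    text = f"{subject}\n{body}"
    for pattern, weight in SUSPECT_PATTERNS:
        if pattern.search(text):
            score += weight
    if sender.split("@")[-1] in BLOCKLISTED_DOMAINS:
        score += 5
    return score

def relay_decision(sender: str, subject: str, body: str) -> str:
    """Decide at the relay, before the message consumes downstream bandwidth."""
    if spam_score(sender, subject, body) >= REJECT_THRESHOLD:
        return "550 rejected as probable spam"   # dropped close to the source
    return "250 accepted for delivery"

print(relay_decision("promo@known-bad.example", "URGENT: verify your account",
                     "Click here to verify"))
print(relay_decision("colleague@example.com", "Minutes from today's call",
                     "Attached as agreed."))
```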
As the IoT grows and its traffic increases, dealing with spam should be more of a focus for service providers and telecoms companies. Unless steps are taken, more areas of the internet will become notworks: time to fight the blackhats and recover the internet.

Hub a Dub Dub 3 Things in a Tub?


The Internet of Things (IoT) seems to have been around the corner for the last few years, with discussions of how the toaster will be able to talk to the bread bin to see if there is enough sourdough to meet the human's breakfast needs, and for the car to be able to report back to the manufacturer that it could really do with some new gear oil, please, as it is feeling a little dirty.


The main thing holding the IoT back has been a mix of cost (who wants to embed a $10 sensor into a $5 piece of equipment?) and standards (how should data be formatted in order that the billions of connected items can all speak a lingua franca?). However, costs are falling, and XML and other data formatting standards are chipping away at the latter issue. Could we finally be ready for the IoT to become a reality? Certainly, with the likes of Google Glass and other wearable technology, we seem to be well on the way to the IoT being there in some form today. Is there something that will prevent it from accelerating and becoming ubiquitous? Here is what Quocirca's analysts believe are the three most pressing issues:

1) Chattiness. Consider a smart electrical grid network. Every electrical item within a house or business premises is part of the IoT, reporting usage information back to utility providers. This data can also be used by remote monitors, who can advise when an item is about to break down based on monitoring current draw, for example, or by the house or business owner, who wants to be able to see just what power is being drawn at any one time and apply controls. This is great, except for the volumes of small data packets flying around all over the place. Such high-volume chatter could bring networks to their knees.

2) Security. Opening up so many connected devices to the internet could provide more vectors of attack for blackhats. You probably wouldn't be that bothered if someone broke into the data from your fridge and found that you had a secret stash of chocolate bars, but you may be a bit more worried if the data from your CCTV security systems was compromised.

3) Value. Although pretty much anything can be connected to the internet, is there any real value in doing so? Lighting, heating, entertainment systems, cookers and so on, which a householder can get set just as needed while they return from work, may make sense; an internet-connected toilet seat (as can be obtained from Japan) may not offer quite so much obvious value.

If the IoT is to be implemented as an anything-connected-to-anything mesh, we can expect it to be a complete mess, and for problems to occur all over the place. If instead we hub the IoT, with a house having intelligent machines that aggregate and filter the data to identify what is a real event and what is just noise, then far less traffic will need to be transported over the public internet, and security can be applied at a more value-based level, against only the data that needs it. This may involve the use of programmable micro-computers, such as the Raspberry Pi, and will also require embedded data filtering along the lines of data leak prevention (DLP), as seen from the likes of Symantec and CA; contextually aware security from the likes of LogRhythm and EMC/RSA can also help in dealing with IoT data in a more local environment. Combine this with intelligent routing and WAN compression, and the data volumes start to look a little more controllable.

If left uncontrolled, the IoT will be a case of pouring more data into a sea of similar data and trying to make sense of it. By applying intelligent hubs, the data is more akin to being added to a small tub: this is easier to deal with, and only what is really important then needs to be let out through the plug into the greater internet sea.
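A minimal sketch of the hub idea described above: a local aggregator that swallows the chatty per-reading traffic and forwards only what looks like a real event over the wider network. The deviation rule, the window size and the forward-to-cloud placeholder are purely illustrative assumptions, not a description of any product.

```python
from statistics import mean

class HomeHub:
    """Hypothetical local IoT hub: aggregates chatty sensor readings and
    forwards only significant events, rather than every raw packet."""

    def __init__(self, window: int = 10, deviation_threshold: float = 0.5):
        self.window = window
        self.deviation_threshold = deviation_threshold
        self.readings: dict[str, list[float]] = {}
        self.outbound: list[tuple[str, float]] = []   # stands in for traffic sent upstream

    def ingest(self, device: str, value: float) -> None:
        history = self.readings.setdefault(device, [])
        if len(history) >= self.window:
            baseline = mean(history)
            # forward only readings that deviate markedly from the recent baseline
            if abs(value - baseline) > self.deviation_threshold * max(abs(baseline), 1.0):
                self.forward_to_cloud(device, value)
        history.append(value)
        del history[:-self.window]          # keep a bounded local window

    def forward_to_cloud(self, device: str, value: float) -> None:
        self.outbound.append((device, value))   # placeholder for a real upstream call

hub = HomeHub()
for reading in [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 2.1, 2.0, 2.1, 9.5]:
    hub.ingest("fridge-power-watts", reading)
print(hub.outbound)   # only the anomalous 9.5 reading left the house
```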


About Silver Peak Systems


Silver Peak software accelerates data between data centres, branch offices and the cloud. The company's software-defined acceleration solves network quality, capacity and distance challenges to provide fast and reliable access to data anywhere in the world. Leveraging its leadership in data centre-class wide area network (WAN) optimisation, Silver Peak is a key enabler for strategic IT projects like virtualisation, disaster recovery and cloud computing. Download Silver Peak software today at http://marketplace.silver-peak.com.

REPORT NOTE: This report has been written independently by Quocirca Ltd to provide an overview of the issues facing organisations seeking to maximise the effectiveness of today's dynamic workforce. The report draws on Quocirca's extensive knowledge of the technology and business arenas, and provides advice on the approach that organisations should take to create a more effective and efficient environment for future growth.

About Quocirca
Quocirca is a primary research and analysis company specialising in the business impact of information technology and communications (ITC). With world-wide, native-language reach, Quocirca provides in-depth insights into the views of buyers and influencers in large, mid-sized and small organisations. Its analyst team is made up of real-world practitioners with first-hand experience of ITC delivery who continuously research and track the industry and its real usage in the markets. Through researching perceptions, Quocirca uncovers the real hurdles to technology adoption: the personal and political aspects of an organisation's environment and the pressures of the need for demonstrable business value in any implementation. This capability to uncover and report back on the end-user perceptions in the market enables Quocirca to provide advice on the realities of technology adoption, not the promises.

Quocirca research is always pragmatic, business orientated and conducted in the context of the bigger picture. ITC has the ability to transform businesses and the processes that drive them, but often fails to do so. Quocirca's mission is to help organisations improve their success rate in process enablement through better levels of understanding and the adoption of the correct technologies at the correct time.

Quocirca has a pro-active primary research programme, regularly surveying users, purchasers and resellers of ITC products and services on emerging, evolving and maturing technologies. Over time, Quocirca has built a picture of long-term investment trends, providing invaluable information for the whole of the ITC community. Quocirca works with global and local providers of ITC products and services to help them deliver on the promise that ITC holds for business. Quocirca's clients include Oracle, IBM, CA, O2, T-Mobile, HP, Xerox, Ricoh and Symantec, along with other large and medium-sized vendors, service providers and more specialist firms.

Details of Quocirca's work and the services it offers can be found at http://www.quocirca.com.

Disclaimer: This report has been written independently by Quocirca Ltd. During the preparation of this report, Quocirca may have used a number of sources for the information and views provided. Although Quocirca has attempted wherever possible to validate the information received from each vendor, Quocirca cannot be held responsible for any errors in information received in this manner. Although Quocirca has taken what steps it can to ensure that the information provided in this report is true and reflects real market conditions, Quocirca cannot take any responsibility for the ultimate reliability of the details presented. Therefore, Quocirca expressly disclaims all warranties and claims as to the validity of the data presented here, including any and all consequential losses incurred by any organisation or individual taking any action based on such data and advice.

All brand and product names are recognised and acknowledged as trademarks or service marks of their respective holders.
