
The Edge is Not the End: Why Cloud and Mobile Make CDNs Obsolete


By Paddy Ganti

Recently Shane Lowry, our VP of Engineering, wrote a blog post on how the next disruption in
application delivery is about eliminating human middleware. I wanted to provide some more
context and also share some data nuggets to expand on the facts laid out in that article.
It's no surprise that mobile adoption and the advent of cloud computing are the two biggest
disruptions we have seen in the Internet service delivery space. In this post, we consider the
implications of these disruptions for both the client and server side. We will also show that
content sizes are increasing, device diversity is exploding, and the new choke point for
application delivery is now the Radio Resource Controller (RRC) and Radio Access Network
(RAN). These challenges dictate a solution space that's different from the previous
approaches we have seen, and Instart Logic is specifically focused here.
First, let's start by talking about the two key disruptions: mobile and its impact on the
client/device front, and cloud-based computing and what that means for the back end.

Mobile
Globally, mobile traffic is about 30% of all Internet activity today and is increasing rapidly,
with an additional 6% of activity generated from tablets. The Cisco Visual Networking Index
(VNI) provides the following quantitative estimates of mobile data growth, showing an
expected 18x increase over 5 years (2011-2016).

This growth is fueled by demand for better applications and more content (mostly video) from
a variety of mobile devices. This growth differs from what we've seen historically: desktops
primarily consisted of Wintel-based platforms using wired access to the Internet, which made
it easy to optimize for a homogeneous workload. Today's plethora of smartphones and tablets
makes it an entirely different ballgame.
While it's tempting to bundle all mobile growth into a single bucket, in reality the demand for
content emanates from a wide variety of devices. The variety starts with platforms. Let's
consider the following treemap of Android devices that are out there (Android owns 72% of
the market, while iOS accounts for 26%).

From our own logs, we see the following distribution of device platforms:

To add to that, we also need to consider the screen size diversity, which ranges from 320×480
pixels (smartphones) all the way up to 1920×1080 pixels (HD displays).
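One practical consequence of this screen diversity is that a single image asset no longer fits all devices. A minimal sketch of how a server might pick an image variant per device follows; the breakpoint widths and function names here are illustrative assumptions, not any particular product's API:

```python
# Hypothetical helper: pick the smallest pre-rendered image variant that
# still covers the device's viewport. Breakpoints below are assumed, not
# taken from any real service.
VARIANT_WIDTHS = [320, 480, 768, 1080, 1440, 1920]

def pick_variant(viewport_width, device_pixel_ratio=1.0):
    """Return the narrowest variant wide enough for the physical pixels
    the device actually needs (CSS width times pixel ratio)."""
    needed = viewport_width * device_pixel_ratio
    for w in VARIANT_WIDTHS:
        if w >= needed:
            return w
    return VARIANT_WIDTHS[-1]  # fall back to the largest variant we have

print(pick_variant(320))       # small phone -> 320
print(pick_variant(320, 2.0))  # high-DPI phone needs 640 physical px -> 768
print(pick_variant(1920))      # HD display -> 1920
```

Serving a 320-pixel-wide variant to a phone instead of the 1920-pixel original is exactly the kind of byte saving that matters on constrained mobile links.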
The bottom line is that mobile data is growing exponentially and is being consumed by a
greater variety of screens and device platforms.

Cloud service adoption


While the client side is exploding, on the server side we see a trend towards cloud adoption.
For web pages, cloud computing manifests as a large number of third-party components:
widgets doing A/B testing, providing feedback via beacons, and tracking user behavior, apart
from providing analytics. This increases the number of components on a given web page while
not contributing much to the overall payload. We saw that roughly 48% of the requests in the
HTTP Archive are classified as third-party.
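To make the first-party vs. third-party distinction concrete, here is a simplified classifier sketch. Real studies (including the HTTP Archive's) use the Public Suffix List to find the registrable domain; this version just compares the last two host labels, and all the URLs are made up for illustration:

```python
from urllib.parse import urlparse

def is_third_party(resource_url, page_host):
    """Crude first/third-party test: does the resource share the page's
    registrable domain? Simplified to comparing the last two host labels
    (a Public Suffix List lookup would be needed for e.g. .co.uk)."""
    host = urlparse(resource_url).hostname or ""
    base = lambda h: ".".join(h.split(".")[-2:])
    return base(host) != base(page_host)

requests = [
    "https://www.example.com/app.js",
    "https://cdn.example.com/hero.jpg",        # same registrable domain
    "https://widgets.tracker.io/beacon.gif",   # analytics beacon
    "https://abtest.vendor.net/variant.json",  # A/B testing widget
]
third = sum(is_third_party(u, "www.example.com") for u in requests)
print(f"{third}/{len(requests)} requests are third-party")  # 2/4
```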

With the explosion of mobile devices, the consolidation of cloud services, and the perennial
expectation that compute and networks just keep getting better and faster, one might
conclude that the mobile web is getting faster. But the reality is quite different.
The Mobile Web is in fact getting slower over time
When we say faster, we mean visually/perceptually faster. So the question boils down to:
what metric best correlates with the visual perception of a page load? OnLoad isn't a good
metric, since the page load event can be artificially triggered by sites even when no visual
content is present; neither is Start Render, which can be triggered after onLoad. So we finally
settled on Speed Index, a WebPagetest measurement of how quickly the screen paints
(perceived load time). The faster you paint the whole screen, the lower the score. A Speed
Index below 1,000 (roughly a one-second perceived load) is the holy grail in web performance.
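For readers unfamiliar with the metric, Speed Index is defined as the area above the page's visual-completeness curve: the integral over time of the fraction of the screen not yet painted. A small sketch of that computation (with made-up sample points):

```python
def speed_index(samples):
    """Speed Index per the WebPagetest definition: sum over time of
    (1 - visual completeness), in milliseconds. `samples` is a sorted
    list of (time_ms, percent_visually_complete) points; completeness
    is treated as a step function between samples."""
    si, prev_t, prev_pct = 0.0, 0, 0
    for t, pct in samples:
        si += (t - prev_t) * (1 - prev_pct / 100.0)
        prev_t, prev_pct = t, pct
    return si

# A page that paints 80% of the screen at 500 ms and finishes at 1,000 ms:
fast = speed_index([(500, 80), (1000, 100)])
# The same finish time, but nothing visible until the very end:
slow = speed_index([(1000, 100)])
print(fast, slow)  # 600.0 1000.0
```

Both pages "finish" at the same moment, but the one that paints earlier gets the lower (better) score, which is why Speed Index tracks perceived speed where onLoad does not.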
Let's now look at how Speed Index has evolved over time, as collected through the HTTP
Archive mobile data set (note that for any of the side-by-side graphs in this article, you can
click on an individual graph to see a larger version):

We tracked Speed Index for the top 1,000, top 10,000, and top 100,000 sites as cohorts to
check whether any apparent trend is uniform or differs across the groupings. From what we
can see, the trend is uniform: mobile websites over the last 2 years are getting slower, not
faster, despite all the advances that have been made.
(Note: the data collection changed a bit in the middle, when the throughput of the mobile
device measurement was altered to use an emulated 3G network in June 2013. However, these
changes do not affect our conclusion in any meaningful way.)
So why is the Mobile Web getting slower?
Content is getting fatter
The first fairly obvious reason is the growth in richer and more content-intensive web sites.
To substantiate this claim, we took a look at the Page weight metric.

As you can see, the uniform trend across all cohorts is a marked increase in page bytes.
Next we wanted to see if we could pin this increase to particular types of web traffic, so we
separated out the Page weight data by content types:

Again the uniform trend shows that content sizes are bloating across all content types,
ranging from a few percent in HTML to a near-doubling of Image bytes.
Network latency of the access medium
A quantitative study performed by Mike Belshe (one of the creators of the SPDY protocol) on
the impact of varying bandwidth vs. latency on page load times for some of the most popular
destinations on the Web showed the following:

Looking at this graph, one would question any provider touting bandwidth increases as a
panacea for web page performance.
"As you can see from the data above, if users double their bandwidth without reducing their
RTT significantly, the effect on web browsing will be a minimal improvement. However,
decreasing RTT, regardless of current bandwidth, always helps make web browsing faster. To
speed up the Internet at large, we should look for more ways to bring down RTT. What if we
could reduce cross-Atlantic RTTs from 150 ms to 100 ms? This would have a larger effect on
the speed of the internet than increasing a user's bandwidth from 3.9 Mbps to 10 Mbps or
even 1 Gbps." (Mike Belshe)
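Belshe's point can be sketched with a toy model: page load time is roughly the serialization time of the page's bytes plus the cost of the sequential round trips a page incurs. The constants below (1.7 MB page, ~20 round trips) are illustrative assumptions, not measured values:

```python
def page_load_time(rtt_ms, bandwidth_mbps, page_bytes=1_700_000, round_trips=20):
    """Toy model of page load time: byte transfer time plus the latency
    cost of sequential request/response round trips. Constants are
    illustrative, chosen only to show the shape of the trade-off."""
    transfer_s = page_bytes * 8 / (bandwidth_mbps * 1e6)
    latency_s = round_trips * rtt_ms / 1000.0
    return transfer_s + latency_s

base      = page_load_time(rtt_ms=100, bandwidth_mbps=10)  # ~3.36 s
double_bw = page_load_time(rtt_ms=100, bandwidth_mbps=20)  # ~2.68 s
half_rtt  = page_load_time(rtt_ms=50,  bandwidth_mbps=10)  # ~2.36 s
print(f"baseline {base:.2f}s, 2x bandwidth {double_bw:.2f}s, half RTT {half_rtt:.2f}s")
```

Even in this crude model, halving RTT beats doubling bandwidth, because past a few Mbps the load time is dominated by the latency term rather than the transfer term.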
So we ask: what is the trend in RTTs across the world? Let's consult an active measurement
database (the PingER project) maintained by Les Cottrell to see the trend there.

As you can see, in the last couple of years there has been a small improvement in RTTs, but
by and large nothing meaningful.
Since the majority of e-commerce and hosting providers happen to be in the US, let's look at
FCC reports on latencies across DSL, cable, and fiber.

In 2014, fiber-to-the-home services provided 24 ms round-trip latency on average, while


cable-based services averaged 31 ms, and DSL-based services averaged 48 ms. Compare this
to 2013, where fiber-to-the-home services provided 18 ms round-trip latency on average,
while cable-based services averaged 26 ms, and DSL-based services averaged 43 ms.
Overall, latency is not getting any better; if anything, it's getting worse. The average RTT to
Google is pretty much the same as it was in 2010, despite all the innovations brought to us by
this awesome company. An alternate study by M-Lab stresses this point of latency
degradation due to interconnections between providers.
So far all the above data is desktop-only, so let's focus on latency numbers from AT&T:

AT&T core network latency by technology:

  LTE:    40-50 ms
  HSPA+:  50-200 ms
  HSPA:   150-400 ms
  EDGE:   600-750 ms
  GPRS:   600-750 ms

To put those latencies in context, also consider the bandwidth available by technology:

  Generation   Data rate
  2G:          100-400 Kbit/s
  3G:          0.5-5 Mbit/s
  4G:          1-50 Mbit/s
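Combining the two tables gives a feel for what these numbers mean for a page load. The sketch below uses rough midpoints of the data-rate ranges, the AT&T latencies above, and an assumed 1 MB page:

```python
# Rough transfer-time comparison across mobile generations, using
# approximate midpoints of the data-rate ranges above plus AT&T's stated
# core network latency for a representative technology in each generation.
GENERATIONS = {            # (Mbit/s, round-trip latency in ms) -- approximate
    "2G (EDGE)": (0.25, 675),
    "3G (HSPA)": (2.75, 275),
    "4G (LTE)":  (25.0, 45),
}

PAGE_BYTES = 1_000_000  # an assumed 1 MB page

for gen, (mbps, rtt_ms) in GENERATIONS.items():
    seconds = PAGE_BYTES * 8 / (mbps * 1e6) + rtt_ms / 1000.0
    print(f"{gen}: ~{seconds:.1f}s for a 1 MB page")
```

The spread is dramatic: tens of seconds on 2G versus well under a second of transfer time on LTE, which is why the access technology dominates the mobile experience.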

Since we are talking about mobile data, let's see the overall path a packet has to traverse to
get service over the Internet:

As you can see, it's the confluence of a lot of technologies that helps bring information to
your fingertips.
While the middle mile was the bottleneck in the desktop world, in the mobile world the Radio
Access Network (RAN) is the new bottleneck for mobile browsing. More specifically, let's take
a look at the capacity of a typical cell tower:

Typically these towers are provisioned to operate at 75% utilization, which means we have
only 16.2 Mbps to use. The average voice call takes 12 Kbps, which means a maximum of
1,350 calls are supported before degrading. Add the average fat webpage to this mix and you
are looking at a maximum of 8 concurrent page loads holding the tower at capacity. This is
the new bottleneck in the whole mobile user experience, and there is not much a user or
content publisher can do about it except send the most important bits of the application in
the first few packets.
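The back-of-envelope math above can be reproduced directly. The 21.6 Mbps sector capacity is inferred from the figures in the text (16.2 Mbps at 75% utilization), and the ~2 Mbps of sustained demand per concurrent page load is an assumption consistent with the stated 8-page ceiling:

```python
# Back-of-envelope reproduction of the cell-tower capacity math.
SECTOR_CAPACITY_MBPS = 21.6   # inferred: 16.2 Mbps usable / 0.75 utilization
UTILIZATION = 0.75
usable_mbps = SECTOR_CAPACITY_MBPS * UTILIZATION   # 16.2 Mbps to share

VOICE_CALL_KBPS = 12
max_calls = round(usable_mbps * 1000 / VOICE_CALL_KBPS)  # 1,350 calls

PAGE_MBIT = 2.0  # assumed sustained demand of one "fat" page load
max_pages = int(usable_mbps / PAGE_MBIT)                 # 8 concurrent pages

print(f"{usable_mbps:.1f} Mbps usable, {max_calls} calls or {max_pages} page loads")
```

Eight simultaneous page loads saturating a tower that otherwise serves over a thousand voice calls is the clearest way to see why the RAN, not the middle mile, is the new choke point.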

Conclusions
We've talked about a lot of different elements in this article. To summarize: web content is
getting richer while device diversity is exploding, and we cannot pin our hopes on faster
lanes, given that network access times have been stagnant for over a decade (and will likely
continue to be so in the near future). All these forces combine to create a new pressure point
at the RAN, which is already at capacity.
While I have mostly dwelt on the problems in this post, the solution space for mobile web
applications is to:

- make things smaller (without losing quality of experience)
- move them closer to the user (in the browser, not some server in the cloud, given the RTT)
- cache them as long as we can (existing solutions do not)
- load application resources intelligently (most significant resources first)
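The last point, loading the most significant resources first, can be sketched as a simple prioritization pass over a page's resources. The priority ranking and the sample resources below are illustrative, not a published algorithm:

```python
# Sketch of "most significant resources first": order a page's resources so
# render-critical bytes go out in the earliest packets. Ranking is assumed.
PRIORITY = {"html": 0, "css": 1, "font": 2, "js": 3, "image": 4, "beacon": 5}

def prioritize(resources):
    """Sort (name, type, size_bytes) tuples: render-critical types first,
    and smaller resources first within a type, so the first packets carry
    what the user needs to see something paint."""
    return sorted(resources, key=lambda r: (PRIORITY.get(r[1], 99), r[2]))

page = [
    ("hero.jpg",   "image",  400_000),
    ("app.js",     "js",     120_000),
    ("index.html", "html",    30_000),
    ("style.css",  "css",     45_000),
    ("track.gif",  "beacon",      35),
]
print([name for name, _, _ in prioritize(page)])
# ['index.html', 'style.css', 'app.js', 'hero.jpg', 'track.gif']
```

The analytics beacon, which contributes nothing to what the user sees, goes last; the HTML and CSS that unblock the first paint go first.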

Sounds easy enough, yet it requires a very different approach to application delivery, one
that we at Instart Logic, with our Software-Defined Application Delivery platform, are focused
on.
References
1. HTTP Archive
2. Cisco VNI
3. Ilya Grigorik's blog
4. Android Fragmentation
5. Why Mobile Apps are Slow
6. More Bandwidth Doesn't Matter (Much)
7. M-Lab Interconnection Study
8. FCC Measuring Broadband America
9. Netflix ISP Speed Index
10. PingER Project
11. High Performance Browser Networking
12. Bessemer Cloud
