
Proceedings published by International Journal of Computer Applications (IJCA)

International Conference on Computer Communication and Networks CSI-COMNET-2011

Performance Analysis of IPv4 v/s IPv6 in Virtual Environment Using UBUNTU

Savita Shiwani, Computer Science, Gyan Vihar University, Rajasthan, India
G.N. Purohit, AIM & ACT, Banasthali University, Banasthali 302204, India
Naveen Hemrajani, Computer Science, Gyan Vihar University, Rajasthan, India

ABSTRACT
The IPv4 address space is about to vanish. Networks will need to transition to IPv6, which allows for a larger address space; however, IPv6 has limitations that hinder its growth. IPv6 addresses inherent problems in the earlier version of the protocol and provides new opportunities too. However, due to the increased overhead in IPv6 and its interaction with the operating system that hosts this communication protocol, there may be network performance issues. This paper focuses on the considerations that affect network performance analysis for IPv4- and IPv6-based networks for the Ubuntu 10.04 open source Linux operating system deployed on top of a virtual infrastructure. Here, Ubuntu is configured with the two versions of IP and empirically evaluated for performance differences. Performance-related metrics like throughput, delay, and jitter are measured on a test-bed implementation.

Keywords
IPv4, IPv6, Performance, Analysis, Ubuntu, Virtual, TCP, Bandwidth, Jitter

1. INTRODUCTION
VMware virtual infrastructure has many network configuration options and allows us to implement a wide variety of network architectures. Virtualization provides an environment where multiple virtual machines can run on a single physical machine, with each virtual machine sharing the resources of that one physical computer across multiple environments. Different virtual machines can run different operating systems and multiple applications on the same physical computer. VMware virtualization works by inserting a thin layer of software directly on the computer hardware or on a host operating system. This layer contains a virtual machine monitor that allocates hardware resources dynamically and transparently. Multiple operating systems run concurrently on a single physical computer and share hardware resources with each other. Because it encapsulates an entire machine, including CPU, memory, operating system, and network devices, a virtual machine is completely compatible with all standard x86 operating systems, applications, and device drivers. Each virtual machine runs its own operating system and applications. The virtual machines are isolated from each other; they cannot communicate with each other or leak data, other than via the networking mechanisms used to connect separate physical machines. This isolation leads many users of VMware software to build internal firewalls or other network isolation environments, allowing some virtual machines to connect to the outside while others are connected only via virtual networks through other virtual machines. A virtual environment lets us run multiple operating systems on a single computer, creating a virtual PC environment. It also reduces capital costs by increasing energy efficiency and requiring less hardware, while increasing the server-to-administrator ratio and ensuring that enterprise applications perform with the highest availability and performance.

The structure of this article is as follows. Section 2 briefly discusses the research background of this study, Section 3 explains the experimental setup, and Section 4 describes the test scenario. Results from the test scenario are presented in Section 5, and Section 6 discusses the test results. Finally, the conclusion of this article and proposed future work are in Section 7.

The network benchmarking tool Iperf 2.0.2 is used for all the experiments. Iperf measures unidirectional and bidirectional network performance for TCP and UDP traffic. It includes support to measure TCP and UDP throughput, using bulk transfers, and end-to-end latencies. Iperf has a client-server model and comprises the following:
the Iperf client, which acts as a data sender;
the Iperf server process, which acts as a receiver.

2. RESEARCH BACKGROUND
Previous work has shown measurable performance differences between IPv4 and IPv6 networks, especially for smaller packet sizes [5, 6]. As IPv6's header size is double that of IPv4 [3, 4], throughput is expected to scale down with respect to the average overall packet size and the maximum transmission unit (MTU) of the physical-layer network topology. The MTU of a network is the maximum size a packet can have on that transmission medium. Any payload plus header must fit within the MTU, or it will be broken into two or more full-size packets plus an additional packet for the remainder, each needing its own header. For example, Ethernet has an MTU of 1500 bytes. For IPv6 this means that 40 bytes are reserved for the IP header and the rest is available for the payload. For large packets and/or a large MTU, this overhead is a small percentage of the overall transmitted bytes. For example, consider a packet that is 1400 bytes in size: the header as a percentage of the overall packet size is 40 divided by 1400, or about 2.8% of the total transmitted bytes. For smaller packets, or payloads slightly larger than the MTU (causing a small remainder packet), this overhead increases dramatically. Consider a small 64-byte IPv6 packet: 62.5% of its size is taken up by its header, compared with 31.3% for a similar IPv4 packet. Obviously this is a contrived comparison, but it serves to illustrate the difference in header vs. payload size for a single small packet.
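The percentage figures above can be checked with a short calculation. This is an illustrative sketch (not code from the paper), assuming the standard 20-byte IPv4 header without options and the fixed 40-byte IPv6 base header:

```python
# Illustrative sketch: recompute the header-overhead percentages quoted in
# Section 2, assuming a 20-byte IPv4 header (no options) and the fixed
# 40-byte IPv6 base header.

IPV4_HEADER = 20  # bytes
IPV6_HEADER = 40  # bytes

def header_overhead_pct(packet_size: int, header_size: int) -> float:
    """Header bytes as a percentage of the total packet size."""
    return 100.0 * header_size / packet_size

# 1400-byte packet: the IPv6 header is ~2.86% of the total
# (the paper rounds this to 2.8%).
print(f"{header_overhead_pct(1400, IPV6_HEADER):.2f}%")

# 64-byte packet: 62.5% header overhead for IPv6 vs ~31.3% for IPv4.
print(f"{header_overhead_pct(64, IPV6_HEADER):.2f}%")
print(f"{header_overhead_pct(64, IPV4_HEADER):.2f}%")
```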


3. EXPERIMENTAL SETUP
All virtual machines in the experiments used a uni-processor hardware abstraction layer (HAL)/kernel and were configured with one virtual CPU. All virtual machines were configured with 512 MB of RAM and used the Workstation 6.5-7.x virtual device.

HOST MACHINE
Hardware Configuration
ACPI Multiprocessor PC
Intel Pentium Dual Core CPU T3200 @ 2GHz Processor
3 GB RAM
NVIDIA GeForce 8200M G Display Adapter
HDAUDIO Soft Data FAX Modem with SmartCP
Network Adapters
o NVIDIA nForce 10/100/1000 Mbps Ethernet
o RT73 USB wireless LAN card
Software Configuration
Microsoft Windows XP Professional Service Pack 2

VIRTUAL MACHINE OS
Ubuntu 10.04

VIRTUAL MACHINE HARDWARE
1 Virtual CPU
512 MB RAM
Virtual device: Workstation 6.5-7.x VMware

In our experiment, we initially loaded the virtual machines and their required software, and configured the network system comprising the VMware Workstation 6.5-7.x environment. Several other pieces of software were installed on the experimental virtual machines:
Java JRE version 6 Update 2
Iperf version 2.0.4
JPerf version 2.0.2

Throughput was measured using JPerf. JPerf is simply a Java interface for the command-line version of Iperf, simplifying testing and analysis. Iperf is a network testing tool that is useful for measuring the maximum throughput of a network link. Advantages of JPerf are:
It is easy to use with a GUI.
Less time is required for the setup process.
Bandwidth calculation is automatic and can be shown at a configurable interval.
Sequential or concurrent test upload and download.

The test station software setup consisted of the Ubuntu 10.04 Enterprise Edition operating system installed on a single hard drive partition with the default installation settings. Both the IPv4 and IPv6 stacks are installed by default. The two network stacks were not left enabled concurrently: either the IPv4 or the IPv6 network stack was disabled via the network console, depending on the experiment being run, to ensure that additional traffic from the unused protocol would not alter or pollute the test results. IP addressing was configured statically, using a private class C address range for IPv4 and a complementary private network range for IPv6.

To measure IPv6 overhead in a real-world test, we configured two User-Mode Ubuntu virtual machines in VMware Workstation 6.5-7.x with a virtual serial connection between them that can support PPP. We used the Iperf program to transfer data streams of various sizes over TCP in both IPv4 and IPv6, and recorded the total number of transferred bytes. Performing real-world measurements was somewhat redundant, since the theoretical overhead can easily be calculated from the packet header sizes and payload lengths. However, it was interesting to see the protocol in action. As expected, the additional outbound overhead from our TCP sending node is quite negligible. However, the return traffic, mostly small packets of TCP ACKs with empty payloads, contains a significant amount of additional overhead.

4. TEST SCENARIO
We carried out the experiment first to compute bandwidth utilization performance. Data were transmitted from one machine to another using the Iperf tools on the open source Linux platform (Ubuntu) at various data sizes ranging from KBytes to GBytes, for 30 seconds each. The experiment was carried out on a wireless architecture for the computation of bandwidth utilization and round-trip time under the platforms for the variable data sizes mentioned above. We used a wireless test lab with bridged networking in the VMware environment for the two Ubuntu virtual machines.

Fig.1 Virtual VMware Environment

Each of the experimental stations had all three pieces of software installed. JPerf was used to test the maximum throughput of the systems. Iperf was used to test the statistical properties of the traffic, focusing on delay and jitter over various payload sizes. Experiments were executed systematically, with each experiment run three times on the sender and statistical data recorded on the receiver station. Each run was executed for thirty seconds.

The experiments focused on three important statistics: maximum throughput, delay, and jitter. Maximum throughput is defined as the maximum amount of data that can be passed between two hosts. Delay is defined as the time it takes for a packet to traverse between two hosts on a network. Jitter is the difference or change in the delay over time. Each of these statistics is important in measuring the performance of computer networks and will be analyzed for both IPv4 and IPv6.
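The theoretical overhead mentioned above can be sketched in a few lines. The calculation below is our illustration, assuming a 1500-byte Ethernet MTU and a 20-byte TCP header without options:

```python
# Sketch of the theoretical-overhead calculation mentioned above, assuming a
# 1500-byte Ethernet MTU, a 20-byte TCP header (no options), a 20-byte IPv4
# header and a 40-byte IPv6 header. Not code from the paper.

MTU = 1500
TCP_HEADER = 20

def max_tcp_payload(ip_header: int, mtu: int = MTU) -> int:
    """Maximum TCP payload (MSS) that fits in one packet of size `mtu`."""
    return mtu - ip_header - TCP_HEADER

mss_v4 = max_tcp_payload(20)  # 1460 payload bytes per full packet under IPv4
mss_v6 = max_tcp_payload(40)  # 1440 payload bytes per full packet under IPv6

# Per-packet efficiency: IPv6 carries ~98.6% of the payload IPv4 carries,
# which is why bulk-transfer throughput differences stay small at large MTUs.
print(mss_v4, mss_v6, f"{100 * mss_v6 / mss_v4:.1f}%")
```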


5. PERFORMANCE RESULTS
This section presents the output results from the test network in the Ubuntu virtual machine environment. As mentioned earlier, multiple file sizes from KBytes to GBytes were used for TCP and UDP testing under IPv4 and IPv6.

iperf -c 192.168.6.130 -P 1 -i 1 -p 5001 -f K -t 30
Fig.2 Bandwidth Utilization in KBytes under TCP and IPv4

iperf -c 192.168.6.130 -P 1 -p 5001 -f M -t 30
Fig.3 Bandwidth Utilization in MBytes under TCP and IPv4

iperf -c 192.168.6.130 -P 1 -i 1 -p 5001 -f g -t 30
Fig.4 Bandwidth Utilization in GBits under TCP and IPv4

Fig.5 Bandwidth Utilization in KBytes under TCP and IPv6

iperf -c 192.168.6.130 -P 1 -i 1 -p 5001 -V -f M -t 30 -S 0x04
Fig.6 Bandwidth Utilization in MBytes under TCP and IPv6

iperf -c 192.168.6.130 -P 1 -i 1 -p 5001 -V -f g -t 30 -S 0x04
Fig.7 Bandwidth Utilization in GBits under TCP and IPv6

iperf -s -u -P 0 -i 1 -p 5001 -f K
iperf -s -u -P 0 -i 1 -p 5001 -V -f K
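The throughput in the figures above was read from the JPerf GUI; as an alternative sketch, iperf 2 can also emit machine-readable reports with its -y C (CSV) flag. The sample line and the exact field positions used below are assumptions for illustration, not measurements from the paper:

```python
# Sketch: consuming an iperf 2 CSV report line (`-y C`) instead of reading the
# JPerf GUI. The sample line is hypothetical, and the assumed field layout is
# ...,transferID,interval,transferred_bytes,bits_per_second.

def parse_iperf_csv(line: str) -> dict:
    fields = line.strip().split(",")
    return {
        "interval": fields[6],                        # e.g. "0.0-30.0" seconds
        "bytes": int(fields[7]),                      # total bytes transferred
        "kbytes_per_sec": int(fields[8]) / 8 / 1024,  # bits/sec -> KBytes/sec
    }

# Hypothetical report line for a 30-second run at 734 KBytes/sec.
sample = "20110101120000,192.168.6.128,52000,192.168.6.130,5001,3,0.0-30.0,22548480,6012928"
report = parse_iperf_csv(sample)
print(report["bytes"], round(report["kbytes_per_sec"], 1))  # 22548480 734.0
```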


6. DISCUSSION
As can be seen from Table 1, on average 734.00 KBytes/sec throughput is achieved in KBytes under IPv4, compared to 1220.00 KBytes/sec under IPv6, in the Ubuntu virtual machine environment for TCP. Similarly, the average throughput is 0.85 MBytes/sec under IPv4 compared to 0.96 MBytes/sec under IPv6 for TCP. In GBits, the bandwidth utilization and throughput achieved for IPv4 and IPv6 are the same, i.e. 0.01 GBits/sec. Most of the throughput for the IPv6 address configuration is higher compared with the IPv4 throughput. The throughput also seems to increase under IPv6 with the size of the data packets sent.

Table.1 Bandwidth Utilization under Transmission Control Protocol

Throughput achieved in TCP | Maximum     | Minimum     | Average
IPv4                       | 1200 KBytes | 500 KBytes  | 734.00 KBytes
                           | 1.70 MBytes | 0.40 MBytes | 0.85 MBytes
                           | 0.02 GBits  | 0.01 GBits  | 0.01 GBits
IPv6                       | 4000 KBytes | 525 KBytes  | 1220.00 KBytes
                           | 2.00 MBytes | 0.50 MBytes | 0.96 MBytes
                           | 0.02 GBits  | 0.01 GBits  | 0.01 GBits

Table.2 Bandwidth Utilization and Jitter under UDP

Throughput achieved in UDP | Average Bytes/sec | Average Jitter
IPv4                       | 117 KBytes        | 1.48 ms
                           | 0.12 MBytes       | 0.60 ms
                           | 0.01 GBits        | 0.15 ms
IPv6                       | 121 KBytes        | 0.73 ms
                           | 0.12 MBytes       | 1.84 ms
                           | 0.01 GBits        | 0.16 ms

Fig.8 Bandwidth and Jitter in KBytes under UDP

iperf -s -u -P 2 -i 1 -p 5001 -V -f M
iperf -c 192.168.6.130 -u -P 1 -i 1 -p 5001 -V -f M -b 1.0M -t 30 -T 1 -S 0x10
Fig.9 Bandwidth and Jitter in MBytes under UDP

iperf -s -u -P 0 -i 1 -p 5001 -V -f g
iperf -c 192.168.6.130 -u -P 1 -i 1 -p 5001 -V -f g -b 1000.0M -t 30 -T 1 -S 0x10
As evident from the graphs and Table 2, with higher throughput the delays come down, from 1.84 ms to 0.15 ms. Interestingly, we find from our experimental results that the bandwidth utilization and latency (jitter) parameters of IPv4 are superior to those of IPv6. For this case, we infer that the IPv6 results are somewhat poorer than those of IPv4 due to the larger header overhead of IPv6. It is important to monitor these metrics because they determine the accuracy of the test and affect the performance results. The test setup was designed in such a way as to obtain an accurate comparative result of IPv4 and IPv6 network performance under a controlled virtual network environment.
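The headline differences quoted later in the Conclusion follow directly from the Table 1 averages. A minimal check, with the values copied from Table 1 (the dictionary layout is just for illustration):

```python
# Recompute the IPv4 vs IPv6 comparison from the Table 1 TCP averages.
# Values are taken from the paper's measurements; the structure is ours.

avg_tcp_throughput = {
    "IPv4": {"KBytes/sec": 734.00, "MBytes/sec": 0.85, "GBits/sec": 0.01},
    "IPv6": {"KBytes/sec": 1220.00, "MBytes/sec": 0.96, "GBits/sec": 0.01},
}

for unit, v4 in avg_tcp_throughput["IPv4"].items():
    v6 = avg_tcp_throughput["IPv6"][unit]
    print(f"IPv6 - IPv4 = {v6 - v4:.2f} {unit}")
# 1220.00 - 734.00 = 486.00 KBytes/sec and 0.96 - 0.85 = 0.11 MBytes/sec,
# matching the ~486 KBytes/sec and 0.11 MBytes/sec figures in the Conclusion.
```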
Fig.10 Bandwidth and Jitter in GBits under UDP

7. CONCLUSION
Results from the tests show that the difference in performance between IPv4 and IPv6 for the test bed is around 486 KBytes/sec in KB and 0.11 MBytes/sec in MB. A small TCP window size will result in lower throughput for both IPv4 and IPv6.
During the tools evaluation process we discovered that the average throughput is unchanged even when the tool (JPerf) reaches a steady-state condition or a large file size such as 8 GB is transferred with a high queue/repetition count. Finally, we also discovered using


Wireshark that the actual IPv4 and IPv6 maximum throughput for a 100 Mbps link will not reach the full 100 Mbps, due to Transmission Control Protocol (TCP) overhead during the file transfer process, ARP traffic, broadcast/multicast traffic, and the nature of the JPerf tool itself.
Ongoing and future research that we will embark on includes a test scenario involving a test bed with a multi-service router, and a live experiment on IPv4 and IPv6 network performance in virtual clouds. Once all data from the test scenarios have been collected and analyzed, the detailed characteristics will be applied in the next simulation process.
The IPv6 header overhead begins to add up when using smaller packet sizes. The total bandwidth consumed by applications with small bits of sporadic data or low-latency requirements (e.g. VoIP) might be significantly higher on an IPv6 network. All in all, we think the additional protocol overhead of IPv6 is quite manageable in most cases, and we hope network operators begin upgrading their IP networks soon.

8. REFERENCES
[1] M.K. Sailan, R. Hassan and A. Patel, "A Comparative Review of IPv4 and IPv6 for Research Test Bed," in ICEEI'09, 2009, NW-05, p. 427.
[2] M.K. Sailan, R. Hassan, "Design of Accurate End-to-End IPv4 and IPv6 Performance Test," in ATUR'09, 2009.
[3] http://code.google.com/p/xjperf/
[4] Deering, S. and Hinden, R., "Internet Protocol, Version 6 (IPv6) Specification," RFC 2460, December 1998.
[5] "The Design and Implementation of an IPv6/IPv4 Network Address and Protocol Translator," Department of Computer Science and Engineering, University of Washington, Seattle, Washington 98195. http://www.cs.princeton.edu/~mef/research/napt/reports/usenix98/
[6] Ioan Raicu, "IPv6 Performance Results," cs.wayne.edu.
[7] Yi Wang, Shaozhi Ye, Xing Li, "Understanding Current IPv6 Performance: A Measurement Study," Department of Electronic Engineering, Tsinghua University, Beijing 100084, P. R. China. http://doi.ieeecomputersociety.org/10.1109/ISCC.2005.151

