
History

Tulip Telecom Limited (BSE: 532691), formerly Tulip IT Services Ltd., is an Indian telecommunications services provider. The company was started by Lt Col Hardeep Singh Bedi, an ex-army officer. After serving in the Indian Army for 22 years, Lt Col H S Bedi took voluntary retirement and founded the company in 1994 as Tulip Software Pvt Ltd; it later became Tulip IT Services and is today known as Tulip Telecom Ltd. Lt Col H S Bedi is a well-recognized business leader in the IT and telecom industry with extensive experience in the field.

Tulip Telecom Ltd. (BSE: 532691 / NSE: TULIP) is India's leading enterprise communications service provider. The Company's data network has the largest reach, covering over 2,000 locations globally. The Company has a global presence with over 3,250 employees and more than 1,800 customers. Tulip designs, implements and manages the communication networks of large enterprises on long-term contracts, including enterprise communications connectivity, network integration, and managed and value-added services.

The founder, Lt Col H S Bedi, started the company with four employees in New Delhi as a software reseller. The company later moved into network integration and implemented the world's largest wireless network in Malappuram, Kerala,[3] a key milestone in Tulip's history. It went on to build India's largest MPLS VPN network and is today India's leading enterprise communications service provider.

Tulip Today
Today Tulip Telecom Ltd has a fibre-optic network across all the metros and a wide wireless footprint across India, and has established itself as one of the strongest data connectivity players in the industry. Tulip won the Frost & Sullivan Growth Leadership Award for Enterprise Data Services (MPLS VPN) in 2009 and the Market Leadership Award for MPLS VPN in 2007 and 2008.[4] The company has achieved enviable success within a short time span, with revenues of over Rs 1,600 crore and an employee strength of over 2,500. Its data network has the largest reach, covering over 2,000 locations in India, with partnerships to reach every part of the world.

Tulip Telecom Building World's Third Largest Data Centre


The company recently acquired a data centre facility in Bengaluru by purchasing 100 per cent of the shares of SADA IT Parks Private Ltd for Rs 230 crore, a move it said will further strengthen its end-to-end data services offering. Tulip Telecom announced that it has chosen IT major IBM and data centre consultant Schnabel to establish a data centre in Bengaluru which, the company said, will be India's largest and the world's third largest. According to a recent press release from Tulip Telecom, IBM will provide design consultancy services for the overall data centre space along with turnkey execution to build the first phase of the data centre. IBM's consulting and design services will cover a range of data centre technologies, including power, cooling, rack layout, chillers, UPS, DG sets and more. Tulip, through its wholly owned subsidiary Tulip Data Centre Pvt Ltd, acquired the facility, spread across 9 lakh square feet, earlier this year for Rs 230 crore. The data centre will be built with an approximate investment of Rs 900 crore, spread over three years. Schnabel will carry out peer-review consulting with Tulip on the data centre build. This is Tulip's fifth data centre in the country and will increase the combined space under its data centres to approximately 10 lakh square feet. Tulip already has one data centre each in Delhi and Bengaluru and two in Mumbai.

Multiprotocol Label Switching


Multiprotocol Label Switching (MPLS) is a mechanism in high-performance telecommunications networks which directs and carries data from one network node to the next with the help of labels. MPLS makes it easy to create "virtual links" between distant nodes, and it can encapsulate packets of various network protocols. MPLS is a highly scalable, protocol-agnostic, data-carrying mechanism. In an MPLS network, data packets are assigned labels, and packet-forwarding decisions are made solely on the contents of this label, without the need to examine the packet itself. This allows one to create end-to-end circuits across any type of transport medium, using any protocol. The primary benefit is to eliminate dependence on a particular Data Link Layer technology, such as ATM, frame relay, SONET or Ethernet, and to eliminate the need for multiple Layer 2 networks to satisfy different types of traffic.

MPLS belongs to the family of packet-switched networks. It operates at an OSI Model layer that is generally considered to lie between traditional definitions of Layer 2 (Data Link Layer) and Layer 3 (Network Layer), and is thus often referred to as a "Layer 2.5" protocol. It was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram service model, and it can carry many different kinds of traffic, including IP packets as well as native ATM, SONET and Ethernet frames.

A number of technologies with essentially identical goals, such as frame relay and ATM, were previously deployed, and MPLS technologies have evolved with the strengths and weaknesses of ATM in mind. Many network engineers agree that ATM should be replaced with a protocol that requires less overhead while providing connection-oriented services for variable-length frames. MPLS is currently replacing some of these technologies in the marketplace, and it may well replace them completely in the future.[1]

In particular, MPLS dispenses with the cell-switching and signaling-protocol baggage of ATM. MPLS recognizes that small ATM cells are not needed in the core of modern networks, since modern optical networks (as of 2008) are so fast (at 40 Gbit/s and beyond) that even full-length 1500-byte packets do not incur significant real-time queueing delays (the need to reduce such delays, e.g. to support voice traffic, was the motivation for the cell nature of ATM). At the same time, MPLS attempts to preserve the traffic engineering and out-of-band control that made frame relay and ATM attractive for deploying large-scale networks. While the traffic management benefits of migrating to MPLS are quite valuable (better reliability, increased performance), there is a significant loss of visibility and access into the MPLS cloud for IT departments.

How MPLS works


MPLS works by prefixing packets with an MPLS header, containing one or more "labels". This is called a label stack. Each label stack entry contains four fields:
- A 20-bit label value.
- A 3-bit Traffic Class field for QoS (quality of service) priority (formerly the "experimental" field) and ECN (Explicit Congestion Notification).
- A 1-bit bottom-of-stack flag. If this is set, it signifies that the current label is the last in the stack.
- An 8-bit TTL (time to live) field.
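
As an illustration of the four-field layout above, the short Python sketch below packs and unpacks a single 32-bit label stack entry. The function names and the example values (label 16, TTL 64) are chosen here purely for illustration.

# Sketch: packing and unpacking one 32-bit MPLS label stack entry.
# Field widths follow the layout described above: 20-bit label,
# 3-bit Traffic Class, 1-bit bottom-of-stack flag, 8-bit TTL.

def pack_label_entry(label: int, traffic_class: int, bottom_of_stack: bool, ttl: int) -> int:
    """Pack the four fields into one 32-bit label stack entry."""
    assert 0 <= label < 2**20 and 0 <= traffic_class < 8 and 0 <= ttl < 256
    return (label << 12) | (traffic_class << 9) | (int(bottom_of_stack) << 8) | ttl

def unpack_label_entry(entry: int) -> dict:
    """Split a 32-bit label stack entry back into its four fields."""
    return {
        "label": (entry >> 12) & 0xFFFFF,             # bits 31..12
        "traffic_class": (entry >> 9) & 0x7,          # bits 11..9
        "bottom_of_stack": bool((entry >> 8) & 0x1),  # bit 8
        "ttl": entry & 0xFF,                          # bits 7..0
    }

# Example: label 16, Traffic Class 0, bottom of stack, TTL 64 (illustrative values)
entry = pack_label_entry(16, 0, True, 64)
print(hex(entry))                  # 0x10140
print(unpack_label_entry(entry))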

These MPLS-labeled packets are switched after a label lookup or label switch instead of a lookup into the IP routing table. As mentioned above, when MPLS was conceived, label lookup and label switching were faster than a routing table or RIB (Routing Information Base) lookup because they could take place directly within the switched fabric rather than in the CPU.

The entry and exit points of an MPLS network are called label edge routers (LERs), which, respectively, push an MPLS label onto an incoming packet and pop it off the outgoing packet. Routers that perform routing based only on the label are called label switching routers (LSRs). In some applications, the packet presented to the LER may already have a label, so that the new LER pushes a second label onto the packet. For more information see penultimate hop popping.

Labels are distributed between LERs and LSRs using the Label Distribution Protocol (LDP).[6] Label switching routers in an MPLS network regularly exchange label and reachability information with each other using standardized procedures in order to build a complete picture of the network, which they can then use to forward packets. Label switched paths (LSPs) are established by the network operator for a variety of purposes, such as to create network-based IP virtual private networks or to route traffic along specified paths through the network. In many respects, LSPs are no different from PVCs in ATM or frame relay networks, except that they are not dependent on a particular Layer 2 technology.
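
The following Python sketch gives a rough picture of what an ingress LER might do when it receives an unlabeled packet: match the destination address against prefixes for which it has learned label bindings (for example via LDP) and push the corresponding label. The prefixes, label values and next-hop names are hypothetical.

# Sketch of ingress LER behaviour: classify an unlabeled IP packet by
# destination prefix and push the label bound to that prefix.
import ipaddress

# Label bindings the LER might have learned via LDP (invented values).
FEC_TABLE = {
    ipaddress.ip_network("10.1.0.0/16"): (100, "LSR-A"),
    ipaddress.ip_network("10.2.0.0/16"): (200, "LSR-B"),
}

def ingress(dst_ip: str):
    """Return (label stack, next hop) for an unlabeled packet, or None."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in FEC_TABLE if dst in net]   # candidate prefixes
    if not matches:
        return None                                      # no LSP for this destination
    best = max(matches, key=lambda net: net.prefixlen)   # longest-prefix match
    label, next_hop = FEC_TABLE[best]
    return [label], next_hop                             # push one label onto a new stack

print(ingress("10.1.5.9"))   # ([100], 'LSR-A')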

In the specific context of an MPLS-based virtual private network (VPN), LERs that function as ingress and/or egress routers to the VPN are often called PE (Provider Edge) routers. Devices that function only as transit routers are similarly called P (Provider) routers. See RFC 4364.[7] The job of a P router is significantly easier than that of a PE router, so P routers can be less complex and may be more dependable because of this.

When an unlabeled packet enters the ingress router and needs to be passed on to an MPLS tunnel, the router first determines the forwarding equivalence class (FEC) the packet should be in, and then inserts one or more labels in the packet's newly created MPLS header. The packet is then passed on to the next-hop router for this tunnel.

When a labeled packet is received by an MPLS router, the topmost label is examined. Based on the contents of the label, a swap, push (impose) or pop (dispose) operation can be performed on the packet's label stack. Routers can have prebuilt lookup tables that tell them which kind of operation to perform based on the topmost label of the incoming packet, so they can process the packet very quickly. In a swap operation the label is swapped with a new label, and the packet is forwarded along the path associated with the new label. In a push operation a new label is pushed on top of the existing label, effectively "encapsulating" the packet in another layer of MPLS. This allows hierarchical routing of MPLS packets; notably, this is used by MPLS VPNs. In a pop operation the label is removed from the packet, which may reveal an inner label below. This process is called "decapsulation". If the popped label was the last on the label stack, the packet "leaves" the MPLS tunnel. This is usually done by the egress router, but see Penultimate Hop Popping (PHP) below.

During these operations, the contents of the packet below the MPLS label stack are not examined. Indeed, transit routers typically need only examine the topmost label on the stack. Forwarding is done based on the contents of the labels, which allows "protocol-independent packet forwarding" that does not need to look at a protocol-dependent routing table and avoids the expensive IP longest-prefix match at each hop.

At the egress router, when the last label has been popped, only the payload remains. This can be an IP packet or any of a number of other kinds of payload packet. The egress router must therefore have routing information for the packet's payload, since it must forward it without the help of label lookup tables. An MPLS transit router has no such requirement.

In some special cases, the last label can also be popped off at the penultimate hop (the hop before the egress router). This is called Penultimate Hop Popping (PHP). This may be useful in cases where the egress router has many packets leaving MPLS tunnels and thus spends an inordinate amount of CPU time on them. By using PHP, transit routers connected directly to this egress router effectively offload it by popping the last label themselves.
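
To make the swap, push and pop operations concrete, here is a minimal Python sketch of per-label forwarding at an LSR using a prebuilt lookup table. The label values, table entries and router names are invented for illustration and do not correspond to any real deployment.

# Minimal sketch of per-label forwarding at an LSR. The label stack is a
# Python list with the topmost label last; all values are hypothetical.

FORWARDING_TABLE = {
    # incoming label: (operation, parameter, next hop)
    100: ("swap", 200, "router-B"),   # swap 100 -> 200, forward to router-B
    200: ("push", 300, "router-C"),   # push 300 on top, forward to router-C
    300: ("pop",  None, "router-D"),  # pop the top label, forward to router-D
}

def process(label_stack, table):
    """Apply the operation associated with the topmost label."""
    top = label_stack[-1]
    operation, parameter, next_hop = table[top]
    if operation == "swap":
        label_stack[-1] = parameter
    elif operation == "push":
        label_stack.append(parameter)
    elif operation == "pop":
        label_stack.pop()            # may expose an inner label, or empty the stack
    return label_stack, next_hop

stack = [100]
stack, hop = process(stack, FORWARDING_TABLE)   # swap: [200], next hop router-B
stack, hop = process(stack, FORWARDING_TABLE)   # push: [200, 300], next hop router-C
stack, hop = process(stack, FORWARDING_TABLE)   # pop:  [200], next hop router-D
print(stack, hop)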

MPLS can make use of existing ATM network or frame relay infrastructure, as its labeled flows can be mapped to ATM or frame relay virtual circuit identifiers, and vice versa.

Contention ratio

In computer networking, the contention ratio is the ratio of the potential maximum demand to the actual bandwidth. The higher the contention ratio, the greater the number of users that may be trying to use the actual bandwidth at any one time and, therefore, the lower the effective bandwidth offered, especially at peak times.[1] A contended service is a service which offers (or attempts to offer) the users of the network a minimum statistically guaranteed contention ratio, while typically offering peaks of usage of up to the maximum bandwidth supplied to the user. Contended services are usually much cheaper to provide than uncontended services, although they only reduce the backbone traffic costs for the users and do not reduce the costs of providing and maintaining the equipment for connecting to the network.

In the UK, an RADSL (Rate-Adaptive Digital Subscriber Line) connection usually has a contention ratio between 20:1 and 50:1 per BT guidelines, meaning that 20 to 50 subscribers, each assigned or sold a bandwidth of "up to" 8 Mbit/s for instance, may be sharing 8 Mbit/s of uplink bandwidth.[2] In the US and on satellite Internet connections, the contention ratio is often higher, and other formulas are used, such as counting only those users who are actually online at a particular time.[citation needed] It is also less often divulged by ISPs than it is in the UK.[citation needed] The connection speed seen by each user will therefore vary with the number of computers using the uplink connection at the same time, because the uplink (where all the lower-bandwidth connections join) can only carry the speed provisioned on that line.
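
A worked example of the ratio described above, using illustrative figures rather than data from any particular provider:

# Illustrative contention ratio calculation; the subscriber count and
# speeds are made-up example figures.

subscribers = 50
sold_speed_mbit = 8          # "up to" speed sold to each subscriber
uplink_capacity_mbit = 8     # actual shared uplink bandwidth

potential_demand = subscribers * sold_speed_mbit             # 400 Mbit/s
contention_ratio = potential_demand / uplink_capacity_mbit   # 50.0, i.e. 50:1

# If, say, 10 subscribers are active at once and share the uplink evenly,
# each sees roughly:
active_users = 10
per_user_mbit = uplink_capacity_mbit / active_users          # 0.8 Mbit/s
print(f"{contention_ratio:.0f}:1 contention, ~{per_user_mbit} Mbit/s per active user")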

Data center

Data centers have their roots in the huge computer rooms of the early ages of the computing industry. Early computer systems were complex to operate and maintain, and required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, elevated floors, and cable trays (installed overhead or under the elevated floor). Also, a single mainframe required a great deal of power and had to be cooled to avoid overheating. Security was important: computers were expensive and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.

During the boom of the microcomputer industry, and especially during the 1980s, computers started to be deployed everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, companies grew aware of the need to control IT resources. With the advent of client-server computing during the 1990s, microcomputers (now called "servers") started to find their places in the old computer rooms. The availability of inexpensive networking equipment, coupled with new standards for network cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center," as applied to specially designed computer rooms, started to gain popular recognition about this time.

The boom of data centers came during the dot-com bubble. Companies needed fast Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet. Installing such equipment was not viable for many smaller companies, so many companies started building very large facilities, called Internet data centers (IDCs), which provide businesses with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward private data centers and were adopted largely because of their practical results.

As of 2007, data center design, construction, and operation is a well-known discipline. Standard documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data center design. Well-known operational metrics for data center availability can be used to evaluate the business impact of a disruption. There is still a great deal of development under way in operational practice and in environmentally friendly data center design. Data centers are typically very expensive to build and maintain; for instance, Amazon.com's new 116,000 sq ft (10,800 m2) data center in Oregon is expected to cost up to $100 million.
