
Ethernet

Introduction

Over the course of the last 20 years, Ethernet has become the dominant LAN (local area network) technology used throughout the world. As the computerized workplace has become more network based, with the rise of e-mail and the Internet, Ethernet has become even more prevalent. Almost all companies now use Ethernet to connect their computers to each other, and indeed, if you are at work, you are probably reading this website on a computer connected to the Internet through an Ethernet network. This document covers the basic operation of Ethernet, as well as its history and possible future.

History

In 1973 Xerox Corporation's Palo Alto Research Center began the development of a bus topology LAN. Later, in 1976, carrier sensing was added, and Xerox built a 2.94 Mbps network to connect over 100 personal workstations on a 1 km cable. This network was called the Ethernet, named after the ether, the single coaxial cable used to connect the machines. Xerox Ethernet was so successful that in 1980 Digital Equipment Corporation, Intel Corporation and Xerox released a de facto standard for a 10 Mbps Ethernet, informally called DIX Ethernet (from the initials of the three companies). This specification defined Ethernet II and was used as a basis for the IEEE 802.3 specification. Strictly speaking, "Ethernet" refers to a product which predates the IEEE 802.3 standard, but nowadays any 802.3 compliant network is referred to as an Ethernet. Over the years Ethernet has continued to evolve: 10Base5, using thick coaxial cable, and 10Base2, using cheaper thin coaxial cable, were approved in 1986; twisted pair wiring was used in 10BaseT, approved in 1991; and fibre was used in 10BaseF, approved in 1994-95. In 1995 100 Mbps Ethernet was released, increasing the speed of Ethernet, which has since been further increased with the release of Gigabit Ethernet in 1998-99. Ethernet will continue to increase in speed, with 10 Gigabit Ethernet recently ratified, 40 Gigabit Ethernet arriving soon and 100 Gigabit Ethernet technology demonstrations currently occurring.

Broadcast Network Operation

Ethernet, by its very nature, is a broadcast network. This means that the hosts are connected to the network through a single shared medium. This has the advantage that messages do not have to be routed to their destination, as all hosts are present on the shared medium, but it brings its own set of problems. The main problem which needs to be addressed is that of Media Access Control (MAC), that is, giving fair access to a shared medium to multiple nodes.

Collisions

When a number of nodes are connected to a single shared medium, one of the first things that needs to be considered is what happens if two or more nodes try to broadcast at the same time. This is called a collision, and it prevents any information passing along the network, because the overlapping messages corrupt each other and both are destroyed. There are two main methods for reducing the effect of collisions: collision avoidance and collision resolution. Collision avoidance covers systems which prevent any collisions occurring in the first place, such as polling or token passing. Collision resolution, or contention MAC strategies, rely on the fact that collisions will occur and try to cope with them as well as possible. Ethernet uses collision resolution, so I shall focus on this strategy for the rest of this document.

ALOHA

The most basic form of collision resolution is to simply allow any station to send a message (or packet) whenever it is ready to send one.
This form of transmission was first used in a prototype packet radio network, ALOHANET, commissioned in Hawaii in 1970, and has been known ever since as unslotted or pure ALOHA. In pure ALOHA, packets contain some form of error detection which is verified by the receiver. If the packet is received correctly, the destination returns an acknowledgment. If a collision occurs and the message is destroyed or corrupted, no acknowledgment will be sent. If the sender does not receive an acknowledgment after a certain delay, it re-sends the message.

Carrier Sense Multiple Access

The next stage in collision resolution after ALOHA was to add the ability for devices to detect whether the shared medium is idle. This is called Carrier Sense Multiple Access, or CSMA. It does not completely eliminate collisions, since two devices could detect the medium as idle and then attempt to send at approximately the same time. CSMA is actually a family of protocols which vary in the way they wait for the medium to become idle, known as the persistence strategy. Some of the major strategies are explained below.

1-Persistent CSMA - In this strategy, when a device wants to send a message, it first listens to the medium. If the medium is idle, the message is sent immediately; if it is busy, the device continues to listen until the medium becomes idle and then sends the message immediately. The problem with 1-persistent CSMA is that if a number of devices attempt to send during a busy period, they will all send as soon as the medium becomes idle, guaranteeing a collision.

Nonpersistent CSMA - This strategy attempts to reduce the greediness of 1-persistent CSMA. The device again listens to the medium first and, if it is idle, sends immediately. If the medium is busy, instead of continuing to listen and transmitting as soon as the medium becomes idle, the device waits a random period and then tries again. This means that in high load situations there is less chance of collisions occurring.
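To make the difference concrete, here is a minimal simulation sketch of the two persistence strategies. It is illustrative only: the medium model and the helper names (channel_busy, send_frame) are assumptions, not part of any real driver API.

```python
import random
import time

# Hypothetical helpers -- stand-ins for whatever the real hardware/driver provides.
def channel_busy() -> bool:
    """Return True while a carrier is sensed on the shared medium (assumed helper)."""
    return random.random() < 0.3   # placeholder: ~30% chance the medium is busy

def send_frame(frame: bytes) -> None:
    """Transmit a frame onto the medium (assumed helper)."""
    print(f"sending {len(frame)} bytes")

def csma_1_persistent(frame: bytes) -> None:
    # Listen continuously; transmit the instant the medium goes idle.
    # If several stations are waiting, they all transmit at once -> collision.
    while channel_busy():
        pass                      # keep listening ("persist")
    send_frame(frame)

def csma_nonpersistent(frame: bytes, max_backoff: float = 0.01) -> None:
    # If the medium is busy, back off for a random period before sensing again,
    # so waiting stations are unlikely to all transmit at the same instant.
    while channel_busy():
        time.sleep(random.uniform(0, max_backoff))
    send_frame(frame)

if __name__ == "__main__":
    csma_1_persistent(b"\x00" * 64)
    csma_nonpersistent(b"\x00" * 64)
```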

Collision Window

We have talked about a collision occurring if two devices send at approximately the same time, but how long does a device have to wait until it knows that its message has not been corrupted by a collision? Messages take a certain amount of time to travel from the device to the end of the signaling medium, which is known as the propagation delay. It would seem that a device only needs to wait for one propagation delay, until the message reaches the last receiver, to know whether a collision has occurred. This, however, is not the case. Take the following situation. A device sends a message, which takes one propagation delay to reach the last device on the medium. That last device could send a message of its own just before the original message reaches it (i.e. just before one propagation delay has passed). This new message would take a further propagation delay to travel back to the original device, which means that the original device would not know that a collision had occurred until two propagation delays after it started sending.

Collision Detection

Now that we know how long a device needs to wait to discover whether a collision has occurred, we can use this to increase the effectiveness of CSMA. CSMA behaves inefficiently when a collision occurs, since both stations continue to send their full packets even though they will be corrupted. A simple enhancement to CSMA is the addition of collision detection (CSMA/CD): a simple check is made that the signal present on the medium is the same as the outgoing message. If it is not, a collision is occurring and the transmission can be aborted, so the time that would have been spent sending the doomed message can be put to more productive use.

Ethernet Protocol

The Ethernet protocol is made up of a number of components, such as the structure of Ethernet frames, the Physical Layer and its MAC operation. This page details the fundamental structure of the Ethernet protocol.

Frame Structure

Information is sent around an Ethernet network in discrete messages known as frames. The frame structure is quite simple, consisting of the following fields:

The Preamble - This consists of seven bytes, all of the form "10101010". It allows the receiver's clock to be synchronised with the sender's.

The Start Frame Delimiter - This is a single byte ("10101011") which is used to indicate the start of a frame.

The Destination Address - This is the address of the intended recipient of the frame. Addresses in 802.3 are globally unique, hardwired 48 bit addresses.

The Source Address - This is the address of the source, in the same form as above.

The Length - This is the length of the data in the Ethernet frame, which can be anything from 0 to 1500 bytes.

Data - This is the information being sent in the frame.

Pad - An 802.3 frame must be at least 64 bytes long, so if the data is shorter than 46 bytes the pad field must make up the difference. The reason for the minimum length lies with the collision detection mechanism. In CSMA/CD the sender must wait at least twice the maximum propagation delay before it knows that no collision has occurred; if a station sent a very short frame, it might release the ether without ever learning that the frame had been corrupted. 802.3 sets an upper limit on the propagation delay, and the minimum frame size is the amount of data which can be sent in twice this time.

Checksum - This is a CRC used for error detection.
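To illustrate the length and padding rules, here is a rough sketch of how an 802.3 frame body might be assembled in software. It is a simplification (the preamble and start frame delimiter are normally generated by the hardware, and bit/byte ordering details are glossed over), and the function name build_frame is just an illustrative choice.

```python
import struct
import zlib

MIN_FRAME = 64          # minimum 802.3 frame (excluding preamble/SFD) = 512 bits,
                        # i.e. 51.2 us of transmission at 10 Mbps (the slot time)
HEADER_LEN = 14         # destination (6) + source (6) + length (2)
FCS_LEN = 4             # 32-bit checksum
MIN_DATA = MIN_FRAME - HEADER_LEN - FCS_LEN   # = 46 bytes of data + pad

def build_frame(dst: bytes, src: bytes, data: bytes) -> bytes:
    """Assemble a simplified 802.3 frame: addresses, length, data, pad and checksum."""
    if len(dst) != 6 or len(src) != 6:
        raise ValueError("addresses are 48 bits (6 bytes)")
    if len(data) > 1500:
        raise ValueError("data field is limited to 1500 bytes")

    length = struct.pack("!H", len(data))          # length of the data, not the pad
    pad = bytes(max(0, MIN_DATA - len(data)))      # pad short frames up to 46 data+pad bytes
    body = dst + src + length + data + pad

    fcs = struct.pack("<I", zlib.crc32(body))      # CRC-32 over the frame contents
    return body + fcs

frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"hello")
print(len(frame))   # 64 -- the short payload has been padded up to the minimum frame size
```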

Ethernet vs 802.3

Although the Ethernet and 802.3 standards are effectively the same thing, there are some subtle differences between Ethernet II and 802.3. The IEEE 802.3 standard was part of a bigger standard, 802, which contains a number of different network technologies, such as token ring and token bus, as well as Ethernet. These technologies are brought together by a layer on top of their MAC layers called Logical Link Control (LLC), as shown in the figure below. Ethernet II, however, does not use this LLC layer.

Another protocol, known as SNAP (Subnetwork Access Protocol), was defined by the IEEE. This protocol is carried by LLC and provides compatibility with the pre-802 Ethernet II standard.

Physical Layer

The Physical Layer is concerned with the low level electronic way in which the signals are transmitted. In Ethernet, signals are transmitted using Manchester encoding. This encoding ensures that clocking information is sent along with the data, so that the sending and receiving device clocks stay in sync. The logic levels are transmitted along the medium using voltage levels of 0.85 V.
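To illustrate the idea, here is a small, hypothetical Manchester encoder. Each bit becomes two half-bit voltage levels, so there is a transition in the middle of every bit that the receiver can use to recover the clock. The polarity convention used here (1 as high-then-low) is an assumption; descriptions of Manchester encoding differ on which direction represents which bit.

```python
# Illustrative Manchester encoder: every bit is sent as two half-bit levels,
# guaranteeing a mid-bit transition that carries the sender's clock.
HIGH, LOW = +0.85, -0.85   # signal levels, as described in the text

def manchester_encode(bits: str) -> list[float]:
    levels = []
    for bit in bits:
        if bit == "1":
            levels += [HIGH, LOW]   # assumed convention: 1 = high-to-low
        elif bit == "0":
            levels += [LOW, HIGH]   # assumed convention: 0 = low-to-high
        else:
            raise ValueError("bits must be '0' or '1'")
    return levels

print(manchester_encode("1010"))
# [0.85, -0.85, -0.85, 0.85, 0.85, -0.85, -0.85, 0.85]
```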
MAC Operation

Ethernet is a CSMA/CD network (see Broadcast Network Operation for more information). To send a frame, a station on an 802.3 network first listens to check whether the medium is busy. If it is, the station uses the 1-persistent strategy and transmits after only a short fixed delay (the inter-frame gap) once the medium becomes idle. If there is no collision, the message is sent normally. If the station detects a collision, however, the frame transmission stops and the station sends a jamming signal to alert other stations of the situation. The station then decides how long to wait before re-sending using a truncated binary exponential backoff algorithm: it waits for some multiple of 51.2 us slots. After the first collision the station waits for either 0 or 1 slots before transmitting. If there is another collision, it waits for 0, 1, 2 or 3 slots before transmitting. This continues, with the station choosing to wait a random number of slots from 0 to 2^k - 1 after k collisions in the current transmission, until k = 10, after which the range stops growing. After 16 consecutive collisions the MAC layer gives up and reports a failure to the layer above.
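The backoff rule is easy to express in code. The sketch below is a minimal illustration of truncated binary exponential backoff, not a real driver: the transmission itself is left abstract behind an assumed try_transmit callback.

```python
import random
import time

SLOT_TIME = 51.2e-6   # seconds; 512 bit times at 10 Mbps
MAX_BACKOFF_EXP = 10  # the random range stops growing after 10 collisions
MAX_ATTEMPTS = 16     # give up and report an error after 16 collisions

def backoff_delay(collisions: int) -> float:
    """Delay before the next attempt, after `collisions` collisions on this frame."""
    k = min(collisions, MAX_BACKOFF_EXP)
    slots = random.randint(0, 2 ** k - 1)   # choose uniformly from 0 .. 2^k - 1 slots
    return slots * SLOT_TIME

def send_with_backoff(try_transmit) -> bool:
    """try_transmit() is an assumed callback returning True on success, False on collision."""
    for collisions in range(1, MAX_ATTEMPTS + 1):
        if try_transmit():
            return True
        time.sleep(backoff_delay(collisions))   # wait the chosen number of slots, then retry
    return False   # 16 consecutive collisions: report failure to the layer above

# Example with a fake medium that collides three times and then succeeds:
attempts = iter([False, False, False, True])
print(send_with_backoff(lambda: next(attempts)))   # True
```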
Ethernet at 10 Mbps

The first Ethernet standard (Ethernet II) worked at a speed of 10 Mbps. This section describes the technology used at that point in time.

10Base5 and 10Base2

The initial Ethernet implementations used coaxial cable to connect the stations to each other. Two forms of coaxial cable were used: 10Base5 cable, known as thick Ethernet, and a thinner coaxial cable, 10Base2, also known as thin Ethernet. 10Base5 uses 10 mm wide coaxial cable, which allows up to 100 nodes over a maximum distance of 500 m. The hardware used in thick Ethernet is divided into two major parts. One is the network interface card (NIC), which handles the digital aspects of communication. The other is an analog electronic device, called a transceiver, which handles the analog signals. The transceiver is attached directly to the Ethernet cable, and a separate cable, known as an Attachment Unit Interface (AUI) cable, connects the transceiver to the NIC in the computer. 10Base2 uses 5 mm wide coaxial cable and allows up to 30 nodes over a maximum distance of 185 m. Thin Ethernet generally costs less than thick Ethernet. The hardware that performs the transceiver function is built into the NIC, so no external transceivers are needed. Thin Ethernet also doesn't use an AUI cable to attach the NIC to the communication medium; instead it attaches directly to the back of each computer using a BNC connector, and the medium is a flexible cable that connects from the NIC on one computer directly to the NIC on another.

10BaseT with hubs

A newer form of Ethernet, called 10BaseT, has become much more popular. It uses twisted pair wiring instead of coaxial cable. 10BaseT doesn't have a shared physical medium like the other wiring schemes. Instead, it has an electronic device, called an Ethernet hub, which serves as the centre of the network. Each computer is connected to the hub by twisted pair wiring using RJ-45 connectors. Electronic components in the hub simulate a physical cable, making the entire system operate like a conventional Ethernet: the hub repeats any signal from an input to all of its outputs, thus replacing the broadcast feature of the cable bus. Hubs have several advantages over a single cable bus. Firstly, hubs allow a centralised monitoring and maintenance infrastructure for the network. Secondly, 10BaseT can use Category 3 twisted pair wire, which is commonly already present in office buildings for the telephone system.

Switches

A hub works in much the same way as a single bus (as it is intended to), but this means that if two stations transmit at the same time there is a collision, even if the messages are going to different destinations. If, instead of connecting all of the stations to a single backplane (effectively a single shared medium), we provide multiple paths, then messages from different sources to different destinations can travel in parallel. This is the principle behind an Ethernet switch, which breaks from the basic CSMA/CD concept of Ethernet and effectively produces a switched network. A switch looks just like a 10BaseT hub to the NIC, but there will be fewer collisions in the network (although collisions can still occur, for example if two messages have the same destination).

It is also possible to connect a switch to a network interface card using two twisted pairs to create a full duplex 10 Mbps connection, i.e. a connection which can send and receive at the same time at full speed.

Bridges

A bridge is an electronic device that connects two Ethernet segments. It handles complete frames and uses the same network interface as a computer. When the bridge receives a frame from one segment, it verifies that the frame is valid and, if necessary, forwards a copy of the frame to the other segment. Two Ethernet segments connected by a bridge therefore behave like a single Ethernet: any pair of computers on the extended Ethernet can communicate. A bridge also performs frame filtering: it does not forward a frame if it knows that the destination address is on the same side as the source of the frame. It learns this by building up a list of the computer addresses on each segment from the source addresses of the frames sent by that segment.
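The learning and filtering behaviour just described can be sketched in a few lines. This is a simplified illustration under assumed names (forward_to is a stand-in for whatever actually transmits on a segment); a real bridge would also age out old table entries.

```python
# Simplified learning bridge connecting two segments, "A" and "B".
# It learns which segment each source address lives on, and only forwards a
# frame to the other segment when the destination is not known to be local.
class LearningBridge:
    def __init__(self):
        self.table: dict[bytes, str] = {}    # MAC address -> segment it was last seen on

    def handle_frame(self, segment: str, src: bytes, dst: bytes, frame: bytes) -> None:
        self.table[src] = segment            # learn: src is reachable via this segment
        other = "B" if segment == "A" else "A"
        if self.table.get(dst) == segment:
            return                           # destination is on the same side: filter
        self.forward_to(other, frame)        # otherwise forward a copy to the other segment

    def forward_to(self, segment: str, frame: bytes) -> None:
        print(f"forwarding {len(frame)} bytes onto segment {segment}")   # stand-in

bridge = LearningBridge()
bridge.handle_frame("A", src=b"\x02\x00\x00\x00\x00\x01", dst=b"\x02\x00\x00\x00\x00\x02", frame=b"...")
bridge.handle_frame("A", src=b"\x02\x00\x00\x00\x00\x02", dst=b"\x02\x00\x00\x00\x00\x01", frame=b"...")
# The second frame is filtered: the bridge has already learned that its destination is on segment A.
```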

Broadcast and collision domains

The broadcast domain represents the set of nodes which can be addressed in a LAN; separate LANs have different broadcast domains. The collision domain of a network is the domain in which a collision will occur if two nodes attempt to transmit data at the same time. A LAN comprising a single segment, or multiple segments connected by repeaters, functions as a single collision domain. A switch, router or bridge will not forward a collision signal, and therefore separates collision domains.

Ethernet at 100 Mbps

10 Mbps was soon not enough bandwidth for many networks, and so in 1992 the IEEE started defining a standard for a faster LAN.

Challenges

Instead of creating a completely new protocol, the IEEE decided to keep all the old packet formats, interfaces and procedural rules and simply reduce the bit time from 100 ns to 10 ns. This effectively increased the bandwidth to 100 Mbps. The protocol was officially known as 802.3u, but was more commonly called Fast Ethernet. The change in bit time presented a number of challenges to the designers of Fast Ethernet. Reducing the bit time increases the number of bits sent within a set time period, but those bits still take the same amount of time to travel across a length of wire (i.e. the propagation delay over a given length is unchanged). This means that more bits are sent during the 2 * propagation delay used as the collision window (see Broadcast Network Operation); the short calculation after the media type list below illustrates the effect. Either the minimum frame size needed to be increased, or the propagation delay (i.e. the cable length) needed to be reduced. Changing the minimum frame size would have broken backward compatibility with the existing standard, so the maximum network size was reduced, with typical maximum cable lengths of 100 m.

Media Types

Fast Ethernet supports three main wiring schemes, described below. All of these systems use hubs or switches to connect the network; there is no shared medium scheme like the coaxial cable of 10Base5. This is mainly because of the decrease in maximum cable length, which made it impractical to connect a network with a single cable.

100Base-T4 - This uses Category 3 twisted pairs, which are normally already present in offices for the telephone network. These wires can handle clock rates up to 25 MHz, so to achieve 100 Mbps four twisted pairs are required. Of these four pairs, one always carries traffic to the hub, one always carries traffic from the hub, and the other two are switched to the current transmission direction.

100Base-TX - This uses Category 5 twisted pairs, which are more expensive than Category 3 cables. In this scheme the design is easier, because these wires can handle clock rates of 125 MHz and beyond. Only two twisted pairs per station are used, one to the hub and one from it. This scheme allows full duplex communication: stations can transmit at 100 Mbps and receive at 100 Mbps at the same time.

100Base-FX - This uses two strands of multimode fibre, one for each direction, so it too is full duplex with 100 Mbps in each direction. In addition, the distance between a station and a hub can be up to 2 km.
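As promised above, here is a back-of-the-envelope calculation of why the higher bit rate forces a smaller network. The propagation speed and cable length used are illustrative assumptions; only the ratio between the two bit rates matters.

```python
# How many bits does a sender put on the wire during the collision window
# (2 x propagation delay)?  The window grows with bit rate while the minimum
# frame (512 bits) stays fixed, so the cable length has to shrink instead.
PROPAGATION_SPEED = 2e8        # m/s, a typical assumption for signals in copper cable
CABLE_LENGTH = 500             # metres, illustrative

prop_delay = CABLE_LENGTH / PROPAGATION_SPEED           # one-way propagation delay (s)
for bitrate in (10e6, 100e6):                            # 10 Mbps vs 100 Mbps
    bits_in_window = 2 * prop_delay * bitrate
    print(f"{bitrate / 1e6:.0f} Mbps: {bits_in_window:.0f} bits sent in the collision window")
# 10 Mbps:   50 bits -- comfortably below the 512-bit minimum frame
# 100 Mbps: 500 bits -- a minimum-size frame barely covers the window on the same cable
```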

Changes in the Ethernet Stack

The various layers of the Fast Ethernet protocol architecture are shown below. The MII (Media Independent Interface) is the interface between the MAC layer and the Physical Layer; it allows any physical layer to be used with the MAC layer. The MII provides two media status signals: one indicates the presence of a carrier, and the other indicates the absence of collision. The Reconciliation Sublayer maps these signals to the Physical Signaling (PLS) primitives understood by the existing MAC.

Below the MII, the Physical Layer is divided into three sublayers:

Physical Coding Sublayer (PCS) - This sublayer provides a uniform interface to the Reconciliation layer for all physical media. Carrier Sense and Collision Detect indications are generated by this sublayer. It also manages the auto-negotiation process, by which the NIC (network interface) communicates with the network to determine the network speed (10 or 100 Mbps) and the mode of operation (half-duplex or full-duplex).

Physical Medium Attachment (PMA) - This sublayer provides a medium-independent means for the PCS to support various serial bit-oriented physical media. It serialises code groups for transmission and deserialises bits received from the medium into code groups.

Physical Medium Dependent (PMD) - This sublayer maps the physical medium to the PMA. It defines the physical layer signalling used for the various media.

1000 Mbps and Above

With the huge rise in multimedia technologies and the network-centric workplace, the demand for higher bandwidth has continued to rise. This has been met by a new Ethernet standard, called Gigabit Ethernet, which runs at 1000 Mbps but is still compatible with standard Ethernet and Fast Ethernet nodes.

Effect on Network Size

As we have already seen with Fast Ethernet, if the minimum frame size is kept the same, an increase in speed brings a decrease in network size because of collision detection. The speed increase to 1000 Mbps would have meant a maximum cable length of around 10 metres, which is hardly practical. To deal with this, Gigabit Ethernet uses a bigger slot size of 512 bytes. To maintain compatibility with Ethernet, the minimum frame size is not increased; instead the "carrier event" is extended. If the frame is shorter than 512 bytes, it is padded with extension symbols, which are special symbols that cannot occur in the payload. This process is called Carrier Extension.
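A rough sketch of the idea: after the frame has been transmitted, the sender keeps the carrier busy with extension symbols until the 512-byte slot has been filled. The symbol value and function name here are placeholders, since real extension symbols are physical-layer code groups rather than ordinary data bytes.

```python
SLOT_BYTES = 512          # Gigabit Ethernet slot size
EXTENSION_SYMBOL = "R"    # placeholder for a physical-layer extension code group

def carrier_extend(frame: bytes) -> list:
    """Return the on-wire sequence for one frame: frame bytes plus any extension symbols."""
    symbols = list(frame)
    if len(frame) < SLOT_BYTES:
        # keep the carrier active until a full slot time has elapsed
        symbols += [EXTENSION_SYMBOL] * (SLOT_BYTES - len(frame))
    return symbols

short_frame = bytes(64)                    # a minimum-size (64 byte) frame
on_wire = carrier_extend(short_frame)
print(len(on_wire), on_wire.count(EXTENSION_SYMBOL))   # 512 448 -- 448 symbols of extension
```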

Carrier Extension is a simple solution, but it wastes bandwidth: up to 448 padding bytes may be sent for a small packet, which results in low throughput. In fact, for a large number of small packets, the throughput is only marginally better than Fast Ethernet. Packet Bursting attempts to rectify this. Packet Bursting is "Carrier Extension plus a burst of packets": when a station has a number of packets to transmit, the first packet is padded to the slot time, if necessary, using carrier extension, and subsequent packets are transmitted back to back, with the minimum inter-packet gap (IPG), until a burst timer (of 1500 bytes) expires.

Media Types

Gigabit Ethernet has defined four media types with which it operates:

1000Base-LX - This long-wavelength option supports duplex links of up to 550 m of 62.5-micron or 50-micron multimode fibre, or up to 5 km of 9-micron single-mode fibre. Wavelengths are in the range of 1270 to 1355 nm.

1000Base-SX - This short-wavelength option supports duplex links of up to 275 m using 62.5-micron multimode fibre, or up to 550 m using 50-micron multimode fibre. Wavelengths are in the range of 770 to 860 nm.

1000Base-CX - This option supports 1 Gbps links among devices located within a single room or equipment rack, using copper jumpers (specialised shielded twisted-pair cable that spans no more than 25 m). Each link uses a separate shielded twisted pair running in each direction.

1000Base-T - This option makes use of four pairs of Category 5 unshielded twisted-pair copper wire to support devices over a range of up to 100 m.

Changes in the Ethernet Stack

The Gigabit Ethernet layer stack is shown below. The GMII is an extension of the MII (Media Independent Interface) used in Fast Ethernet. It uses the same management interface as the MII and can support 10, 100 and 1000 Mbps data rates.

Future of Ethernet

10 Gigabit Ethernet has already been ratified and is being used for parts of the Internet backbone and for large scientific clusters. 1 Gigabit Ethernet is now a common network interface on new PCs, replacing Fast Ethernet. The next standard will be 40 Gigabit Ethernet, which has already been demonstrated on switching platforms.
