
Introduction
The VIO Server is currently based on a subset of AIX 5.3 that includes additional packages and services.

It comes with the IBM optional packages called the Advanced POWER Virtualization (APV) feature for pSeries p5 machines or the Advanced OpenPower Virtualization (AOPV) packages for OpenPower machines. As these two versions have identical features and functions, I refer to them as the VIO Server throughout this article. The IBM VIO Server provides a complete environment for virtual I/O, full support from IBM, and high levels of performance.

Virtualization is a hot topic in the computing industry, with many widely different technologies and solutions being recommended, developed, and used. The POWER5-based machines have inherited the know-how of the IBM mainframes to provide opportunities for a significant reduction in operating costs for complex environments. Unlike software solutions available from other vendors, the POWER5 implementation uses advanced processor features, firmware (also known as the Hypervisor), and hardware features to create efficient and flexible virtualization capabilities. Uniquely, these capabilities are offered from the top to the bottom of the server range -- from a powerful 64-way SMP (symmetric multiprocessor) machine down to a two-way, desk-side system. The key to this virtualization is the VIO Server.

This article:
Explains the VIO Server concepts and how it works between logical partitions (LPARs) for disk access and networks.
Covers the advantages of using a VIO Server and typical usage scenarios.
Shows, by example, how to set up the IBM VIO Server and VIO clients.

What is a VIO Server?
pSeries servers from IBM have, since October 2001, allowed a machine to be divided into LPARs, with each LPAR running a different OS image -- effectively a server within a server. You achieve this by logically splitting up a large machine into smaller units with CPU, memory, and PCI adapter slot allocations.

The new POWER5 machines (pSeries p5 and OpenPower servers) can also run an LPAR with less than one whole CPU -- up to ten LPARs per CPU. So, for example, on a four-CPU machine, 20 LPARs can easily be running. With each LPAR needing a minimum of one SCSI adapter for disk I/O and one Ethernet adapter for networking, the example of 20 LPARs would require the server to have at least 40 PCI adapters. This is where the VIO Server helps.

The VIO Server owns real PCI adapters (Ethernet, SCSI, or SAN), but lets other LPARs share them remotely using the built-in Hypervisor services. These other LPARs are called Virtual I/O client partitions (VIO clients). And because they don't need real physical disks or real physical Ethernet adapters to run, they can be created quickly and cheaply.

VIO Server implementations
There are different VIO Server implementations:
Both the APV and AOPV versions of the VIO Server are special-purpose, single-function appliances and are not intended to run general applications.
The Linux VIO Server for pSeries p5 or OpenPower hardware first became available with the SUSE SLES 9 distribution. Unlike the IBM VIO Server, this is just a copy of the Linux operating system. This means it can run other central services such as NFS, network installation, DNS, an Apache Web site, or Samba services. Some care should be taken that these functions do not interfere with the performance of the VIO Server service. This software is also available in the Debian Linux for POWER distribution.

There are different implementations for VIO clients.
Actually, these are just the regular operating systems, but they include the device drivers for running as a VIO client:
AIX 5.3 (only supported by the APV or AOPV VIO Server)
Linux -- SUSE SLES 9
Linux -- Red Hat EL 3 update 3 onwards and Red Hat EL 4
Linux -- Debian for POWER

This article covers the VIO Server and the AIX and Linux VIO clients.

Virtual SCSI disks
The VIO Server provides a virtual SCSI disk service, as shown in Figure 1 below.

Figure 1. Virtual SCSI disk service

With the VIO Server, data is moved directly between the VIO client partition and the real disk device using Remote DMA (RDMA) protocols. In the SUSE VIO Server implementation, data moves from the VIO client to the VIO Server, and thence to the disk; this is "double buffering".

Figure 1 shows a single VIO Server providing virtual SCSI services to multiple VIO client partitions. Each VIO client operates as if it had a dedicated SCSI device but, in fact, each client device is a real disk partition (logical disk partition) on the VIO Server. Alternatively, the VIO Server could use a complete disk (hdisk). The VIO Server and VIO client communicate using the internal pSeries Hypervisor firmware (PHYP) feature, which efficiently allows disk I/O requests to be transferred between the LPARs using a message-passing protocol.

In Figure 1 above, the VIO Server has a few disks that could be SCSI or Fibre Channel storage area network (SAN) disks. The disk subsystem hardware or a RAID 5 SCSI adapter can provide data protection. The VIO clients use the VIO client device driver just as they would a regular local disk to communicate with the matching server VIO device driver, and the VIO Server then actually does the disk transfers on behalf of the VIO client. There is a strict client/server relationship between the VIO client and the VIO Server.

Virtual Ethernet
The LPARs in the machine can use the virtual Ethernet switch service (in the Hypervisor) in a number of different ways.

Case one: Internal only networks
You can use the Virtual Ethernet to allow TCP/IP (Transmission Control Protocol/Internet Protocol) communication between the LPARs, as shown in Figure 2 below. This provides high-speed data transfer without any hardware adapters, starting at roughly one Gbit per second (it can be much higher), especially when using larger block sizes. Figure 2 also shows that there is no client/server relationship between the LPARs -- all use the Virtual Ethernet as equals. There can be many Virtual Ethernets in one machine, where groups of LPARs can communicate only within the Virtual Ethernet they're connected to, allowing fast communication and complete security without buying additional Ethernet adapters, cables, hubs, or routers.

Figure 2. Virtual Ethernet -- Private/internal only networks

Case two: Routing to a physical LAN
One LPAR on the Virtual Ethernet can also communicate externally to other machines using a real physical network on behalf of all the LPARs. In this case, this special LPAR is used to route Ethernet packets between the internal Virtual Ethernet and the external physical Ethernet network. This works well, but it involves setting up TCP/IP routes between the two networks (internal and external) and can take time to set up. Figure 3 below shows one LPAR with a real physical Ethernet adapter providing standard network routing between the two Ethernets. Note that this does not use any VIO Server features.

Figure 3. Internal Virtual Ethernet with a bridge to the external LAN

Case three: Shared Ethernet Adapter (SEA) to a physical LAN
Here, the VIO Server is used to bridge Ethernet packets between the internal Virtual Ethernet and the external physical Ethernet network, so that all the LPARs appear as regular machines on the physical network. This is simple to set up and is the option used in the example in this article.
In Figure 4, the VIO Server is used to join the two networks using the SEA. Strictly speaking, the adapter is not shared: it's owned and controlled by the VIO Server; however, it provides shared access to the real physical network.

Figure 4. Internal Virtual Ethernet with a SEA to the external LAN

Case four: Bridging with virtual LANs (VLANs)
This particular scenario is almost the same as Case three. The only difference is the number of VLANs within the machine using Virtual Ethernet. These are connected to VLANs on the external network with a bridging LPAR and a network router that supports VLANs. This complex scenario is beyond the scope of this article, but some hints are included and it's supported.

Why use the VIO Server?
You can use a VIO Server in any number of scenarios. Below are five typical examples that would make good use of a VIO Server.

Small machine with limited PCI slots
You have one set of internal SCSI disks, or you can split the SCSI disks into two 4-packs on the OpenPower 720 or p5-550. This gives you two LPARs (at most) using the internal disks. So, you might run a VIO Server to support the other LPARs. For example, try a VIO Server (0.5 of a CPU) with four to six clients (0.1 to 1 CPU each). Typically, clients might be small -- four to 16 GB virtual SCSI disks and one Virtual Ethernet for the whole machine. Figure 5 shows multiple LPARs running on a single disk pack.

Figure 5. Multiple LPARs

Mid-range machines with extra small workloads
This might be an eight or 16 CPU machine with large partitions for production use. But many system administrators also want a small number of extra LPARs. Rather than buy an extra machine, a VIO Server can easily host a half dozen smaller LPARs. For example, larger production LPARs might each have one to four larger dedicated CPUs, dedicated disk I/O, and dedicated networks. The VIO Server is used for "bits and bobs" LPARs like test, development, training, practice, new application trials, and so on. Typically, VIO clients might have a couple of four GB to eight GB virtual SCSI disks and one or two Virtual Ethernets.

In Figure 6, three large production LPARs are running (they would have dedicated disks and Ethernet) with a few extra small VIO clients and one VIO Server on the machine using spare capacity. This "spare" capacity could be demanded by the production LPARs during peaks in their workload.

Figure 6. Three large production LPARs

Ranch or server farm style
Lots of small server consolidation workloads from smaller or older machines, or many small servers, are required, but they are unlikely to peak at the same time. The machine is to run lots of LPARs -- for example, 10 to 20 clients on a four-way machine, or many times that on larger machines. Each LPAR is for small applications, but not high demand (0.2 or 0.5 CPU up to 2 CPUs). This could be server consolidation or, for example, a collection of small Web servers where data isolation is important. The VIO Server has one or two CPUs, possibly RAID 5 SCSI disks, or SAN disks. Typically, clients have one or more four GB virtual SCSI disks each and might have different groups of LPARs around different Virtual Ethernets. Figure 7 shows dozens of VIO clients with a medium-sized VIO Server supporting them on what might be several disk packs.

Figure 7. Different groups of LPARs

Serious I/O setup only once (to reduce setup and management)
The VIO Server has SAN disks connected by two to four Fibre Channel adapters and two Ethernet adapters running EtherChannel for redundancy and additional bandwidth. The VIO Server provides load balancing and failover, but the VIO clients have a much simpler disk and Ethernet setup. The VIO Server could have one to three CPUs, but the VIO clients are larger, too: for example, one to eight CPUs running quite large applications. Typically, VIO clients could have hundreds of GB of virtual SCSI disks and many Virtual Ethernets. This complex setup is not covered in this article. Figure 8 shows two regular LPARs (they would have dedicated disks) and a fully configured, large VIO Server with multiple paths to disks and Ethernet, supporting some large VIO client LPARs.

Figure 8. Regular LPAR

Serious with high availability backup
Same as above, but with a second VIO Server for availability/throughput. There are arguments that for very high availability you should spread your access to virtual SCSI and Virtual Ethernet across two VIO Servers in order to continue running in case one VIO Server goes down. The counter argument is that the VIO Server is only running a few device drivers, and device drivers are extremely reliable; also, anything that would crash one VIO Server could also crash the second one. Figure 9 shows that instead of using local physical device drivers, the VIO client uses the virtual resource device drivers to communicate with the VIO Server, which does the real I/O. Apart from the virtual VIO Server device drivers and the physical resource device drivers, there is very little code running on the VIO Server, so little can go wrong on the VIO Server side.

Figure 9. VIO Server

I'm not going to cover duplicated VIO Servers. Further details are in the Advanced POWER Virtualization on IBM p5 Servers redbook (see Resources).

Prerequisites
This section describes the software, hardware, skills, and type of network you'll need.

Software
Where do I get the VIO Server?
For a pSeries p5 machine, the software is included in the APV feature. This runs AIX and Linux VIO clients.
For an OpenPower machine, the software is included in the AOPV feature. This will only run the Linux VIO clients (AIX does not run on these machines).

Hardware
You'll need:
An OpenPower or pSeries p5 machine with spare resources:
o Some CPU resources -- can be less than one CPU
o Memory -- 512 MB per LPAR (if necessary, just 256 MB)
o A real Ethernet adapter
o Time with the CD drive -- unless network installation is preferred
o A SCSI adapter and a SCSI disk -- could equally use a SAN disk
The hardware virtualization feature activated, which is needed for the LPAR and VIO Server features, but optional on some POWER5 machines.
The VIO Server software on CD-ROM. Network Installation Manager (NIM) is possible too, but not covered here.

Skills
This article doesn't show you a screen-by-screen level of detail with each input field. It's assumed you already understand the following:
Though the IBM VIO Server is based on AIX, it's quite different. You only use the VIO Server commands; some standard AIX features are not present (for example, smitty) and some typical AIX setups are not allowed, such as LVM (Logical Volume Management) level mirroring of VIO client logical volumes. Don't think of the VIO Server as AIX, but AIX experience can be useful in understanding the terms used.

Basic AIX systems administration, such as installing from an mksysb image, configuring networks, and AIX-style volume group and logical volume terms.
If you intend to use a Linux VIO client, then you need:
o Basic Linux systems administration, such as installing an RPM (rpm -Uvh <package>.rpm), configuring an Ethernet network adapter (ifconfig eth0 <ipaddress> netmask 255.255.255.0), and managing a filesystem (mount /dev/sda5 /mnt). These tasks are identical to working on the Intel platform, and there are many books, training courses, and Internet materials covering the regular system administration commands and tasks.
o How to install SUSE Linux in either text mode (on a dumb/ASCII screen) or with a VNC (Virtual Network Computing) session. Once you've installed Linux a couple of times, this becomes a routine task. For the VNC install, the extra boot prompt command is vnc=1 password=abc123. Note that the password is six characters. The system prompts you for the other details.
Hardware Management Console (HMC):
o How to install the HMC hardware and software
o How to set it up (it's assumed this has already been done)
o How to use the HMC to create and start a simple LPAR and its profiles
The internals of the pSeries p5 and OpenPower range of machines, such as the names of the adapter positions -- for example, the Tn names for internal adapters and Cn for real adapters in a PCI slot, where n is the number of the slot. You need to create the VIO Server LPAR with the right SCSI disk and Ethernet resources on the HMC, with and without the CD. Details are in the hardware manuals, redbooks, or on the large sticker on the outside of the machine covers.

Network
The VIO Server must be able to communicate directly with the HMC for advanced functions and error reporting. Since this is easily forgotten, I recommend a network like the one in Figure 10.

Figure 10. The network

Many sites also have other dedicated networks in addition to the above -- for example, a network for remote backup, or a network dedicated to systems administration, in addition to the networks in Figure 11 below.

Getting started with VIO Server
This section covers the steps and three extra common tasks to get started with the VIO Server.
1. Step 1. Logical diagram of the example setup
2. Step 2. Planning your setup
3. Step 3. Create the SUSE SLES 9 VIO Server LPAR
4. Step 4. Install SUSE SLES 9 VIO Server
5. Step 5. HMC defining the VIO Server -- Virtual Ethernet
6. Step 6. HMC defining the VIO Server -- virtual SCSI
7. Step 7. HMC create the VIO client LPARs
8. Step 8. Clean up the HMC
9. Step 9. VIO Server preparing for the clients:
1. Step 9a. Virtual Ethernet
2. Step 9b. Virtual SCSI using a logical volume disk partition for Client X
3. Step 9c. Virtual SCSI using a whole disk for Client Y
10. Step 10. VIO client LPAR installations
11. Step 11. *Backing up a VIO Server and VIO client
12. Step 12. *Cloning a client
13. Step 13. *Linux dynamic LPARs (DLPARs) and RAS
*These particular tasks are useful and recommended.
Please note that it takes longer to describe some steps than to actually implement them!

________________________________________ Back to top 1. Logical diagram of the example Figure 11 shows the SUSE VIO Server in an LPAR and two VIO client LPARs that are going to be set up for this article. It's a logical diagram of the example setu p, which is explained in the rest of this article. Figure 11. The SUSE SLES 9 Server LPAR Ethernet For simplicity, the VIO client LPARs are given Ethernet IP addresses within the address range of the regular physical Ethernet network in this computer room. Th e VIO Server bridges between physical and virtual networks, meaning that the cli ent LPARs will appear like any other computer to users. This is the most likely option to be implemented and hides the Virtual Ethernet network completely from users in order to allow simple access to the client LPARs. Disks For the disks, let's use the internal SCSI adapter in the VIO Server and one dis k. The first client's (Client X) virtual disk connects to a logical volume (disk partition) on the VIO Server. The second client's (Client Y) virtual disk is su pported by a whole disk on the VIO Server. This shows all of the common types of setup -- the SEA network and disk partitions, plus allocating a whole disk. In practice, most people use a logical volume. ________________________________________ Back to top 2. Planning your setup First, do some planning of the VIO Server and client logical partitions. Experie nce has shown that just creating LPARs without some planning causes problems and can waste a lot of time. Table 1 shows the planning I've done for this example, which is an OpenPower 720. Except for the references to PCI slots like C3, T6, and T14, which are machine dependant, the references could be for any pSeries p5 or OpenPower machine. In this example, the VIO client logical partitions are go ing to be Linux, but equally they could be running AIX. Notes are included where AIX VIO clients would be different. Table 1. Planning VIO Server Client A Client B Hostname op34 op36 op37 Ethernet adapter C3 bridging Virtual Virtual IP address 9.137.62.34 9.137.62.36 9.137.62.37 Virtual LAN ID (port) 1 1 1 Mask 255.255.255.0 255.255.255.0 255.255.255.0 Gateway 9.137.62.1 9.137.62.1 9.137.62.1 DNS 9.137.62.2 9.137.62.2 9.137.62.2 CD adapter T6 for install only T6 for install only T6 for install o nly SCSI adapter T14 Virtual Virtual Disk size hdisk0 is 36 GB 4 GB hdisk1 is 36 GB for client Y 73 GB Device on VIO Server lv00 hdisk1 Virtual SCSI adapters Slot 3 for Client X Slot 3 to server slot 3 Slot 4 for Client Y Slot 3 to server slot 4 Profile names Normal Normal Normal Normal with CD Normal with CD Normal with CD CPU values: Dedicated/shared CPU Shared Shared Shared CPU desired 0.4 0.3 0.3 CPU min 0.2 0.1 0.1 CPU max 1 2 2 Virtual processors 1 2 2

Memory values: Memory 512 MB 2048 MB 256 MB ________________________________________ Back to top 3. Creating the VIO Server LPAR Next, you need to create the VIO Server LPAR. You do this on the HMC and create a special VIO Server LPAR, but initially with no extra virtualization features. You'll add the virtual features later. The only feature that is different from a regular Linux LPAR is the LPAR Partition Environment feature on the first panel of the Create Logical Partition Wizard. Here you must not select the AIX or Lin ux option, but must select VIO Server. See Figure 12 below. Figure 12. The SUSE SLES 9 Server LPAR Create the LPAR and the first profile as above, using the details in Table 1. (T his article assumes you are familiar with the HMC and creating LPARs. If you are not, see Resources for documents that describe how to create LPARs.) I call the LPAR profile that is normally used with a name of "Normal". Further hints: A VIO Server LPAR can use dedicated CPUs, which is a good idea if you have plent y of CPUs or are expecting to do lots of I/O for many VIO client LPARs; it avoid s any delay in starting the I/O on the real adapters. Dedicated CPUs are running the VIO Server all the time. A VIO Server LPAR can use shared CPUs, which is a good idea if you don't have wh ole CPUs that can be assigned. This also means unused CPU cycles are given back to the shared pool for other LPARs to use. If the machine becomes heavily loaded , it can introduce tiny delays in starting the I/O on real adapters. Shared CPU partitions are time-sliced onto the CPU, along with other LPARs. Setting the VIO Server partition to Uncapped and with a high weight is generally a good idea. A simple CPU rule of thumb: Assign at least ten percent of those CPUs to the VIO Server for CPUs that are going to be used for the VIO Server and client partiti ons. For example, for five CPUs in the shared pool being used for both VIO Serve r and VIO clients, allocate 0.5 of a CPU to the VIO Server. A simple memory rule of thumb: Use 512 MB of memory. It's recommended to have an LPAR profile with the adapter connected to the CD dr ive included to make installing the IBM VIO Server from CD straightforward. Copy the Normal profile and rename it Normal with CD. Then change the new profile pr operties to include the CD SCSI adapter. This will be used to initially boot the LPAR with a DVD/CD drive for installing the VIO Server. If this is a new machine and you are the only user, installations go much faster if you assign the LPAR a whole CPU or more. If the LPAR is going to be assigned less than this in production, it can always be reduced later, but this simple t rick might save you ten minutes per LPAR installation. ________________________________________ Back to top 4. Installing the VIO Server Now install the VIO Server into this partition as a recover from mksysb image me thods. AIX systems administration experts will be familiar with this. The basic steps are: On the HMC in the Activate LPAR dialog, boot the LPAR into the System Management Services (SMS) menu by selecting both Open a Terminal and the Advanced button a nd then Boot Mode = SMS. Once in the SMS menus, choose the Boot Options and sele ct Install/Boot. Then choose List all Devices and carefully select the CD-ROM dr ive, Normal Boot, and Yes to leave the SMS menus. Read the instructions carefully and, if free, elect to install the VIO Server on hdisk0. (This is assumed in the rest of the article.) 
Warning: If you want to use a whole disk for your VIO client, then you need to make sure the recovery of the VIO Server mksysb is not spread across all the disks.

Assuming you now have the VIO Server up and running, you can now add and set up the VIO Server virtualization features.
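As a quick first check after the installation boots, log in as the padmin user and confirm the command environment works. A minimal sketch (the level shown is illustrative only; on some levels you are prompted to accept the license automatically at first login):

$ license -accept       <- accept the license, if you were not already prompted
$ ioslevel
1.1.1.0                 <- your installed level will differ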

Important hints
If you have the DLPAR change software installed and working, it's possible to dynamically add Virtual Ethernet and virtual SCSI adapters. In practice, I recommend you shut down your VIO Server and VIO client LPARs during this initial setup to make sure it works the first time. If you set up DLPAR later on, you can then experiment, but remember that DLPAR changes also have to be made identically in your LPAR profile -- otherwise you won't have the same configuration the next time you reboot your LPAR.

In this article, I take a simple and ultra-safe approach: shut down the VIO Server, make changes to the VIO Server profile, and restart it, to avoid any confusion and complications. So on the VIO Server, use shutdown.

If you make changes to an LPAR profile, the LPAR must be shut down and then restarted from the HMC to pick up those changes. If you use shutdown -restart in the VIO Server LPAR, then you'll have only the same resources that were available when the LPAR was previously started from the HMC. You need to completely stop the LPAR to get the new resources.

5. Defining the VIO Server -- Virtual Ethernet
On the HMC, you can now define the Virtual Ethernet. First shut down the VIO Server LPAR (as root, run: shutdown). On the HMC, change your Normal profile properties by right-clicking the Profile and selecting Properties. You also need to select the VIO tab. Select Ethernet at the bottom and click Create. By default, this will be allocated slot number 2 and a Port virtual LAN ID of 1. Any LPARs with the same Port virtual LAN ID will be able to communicate with each other. This adapter is going to be set up as the SEA, which will later let you log in to the VIO Server over the network. For the SEA, set these two options:
Select Trunk adapter.
Leave the IEEE 802.1Q compatible adapter unchecked -- this is only needed if you are using VLANs internally.

If you want different Virtual Ethernet LANs so that different groups of LPARs can communicate with each other, all you need to do is use different Port virtual LAN ID numbers. These more complex configurations are not covered in this article.

In Figure 13, you should see the VIO Server in the lower half and a VIO client in the top half. It shows that if the Port virtual LAN IDs are the same, then the LPARs can communicate. It also shows the additional settings for the VIO Server (Trunk adapter is selected and IEEE 802.1Q is not). These additional settings are really for the bridging feature, as Virtual Ethernet does not really have a client/server relationship -- all LPARs are equal on the network.

Figure 13 also shows the Virtual Ethernet settings. At the bottom is the VIO Server (or any LPAR that will be doing the bridging to the real Ethernet adapter) and at the top is the VIO client, or any LPAR that only uses the Virtual Ethernet.

Figure 13. Virtual Ethernet settings

On the other Virtual Ethernet LPARs, you can use the ifconfig command to set up your network just as you would any network. If it's AIX, you can use smitty or websm. If it's SUSE, you can use the YaST tool. Whichever tool you select, it finds the Virtual Ethernet adapter just like any other. Figure 14 shows a SUSE example, and a command-line sketch follows below.

Figure 14. Non-bridging Virtual Ethernet LPARs
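For example, on a Linux VIO client you could configure the virtual adapter by hand. A minimal sketch using the Client X values from Table 1 (your addresses will differ):

# ifconfig eth0 9.137.62.36 netmask 255.255.255.0 up
# route add default gw 9.137.62.1
# ping -c 3 9.137.62.1       <- confirm the gateway is reachable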
6. Defining the VIO Server -- virtual SCSI
On the HMC, you can now define the two different virtual SCSI devices. These two types of virtual disks (a logical volume or a whole hdisk) appear identical on the HMC; only on the actual VIO Server LPAR are they set up differently.

If not done already, shut down the VIO Server LPAR (as root, run: shutdown).
On the HMC, select the VIO Server and change your Normal profile properties by right-clicking the Profile and selecting Properties. You also need to select the Virtual I/O tab. Select SCSI at the bottom and click Create.
This is the VIO Server, so select Adapter Type: Server.
Select Any Remote Partition and Slot can connect. Ideally, this should name the specific LPAR and slot to eliminate the risk of a wrong connection between server and client, but at this point you have not created the client partition, so you can't name it yet. This is fixed up later; see the Clean up the HMC section for details.
Select OK.
Do this a second time for the second virtual SCSI adapter.

The client LPARs are going to use the VIO Server slots 3 and 4. Any more SCSI adapters are optional in this example. In practice, the writer typically sets up a handful of extra virtual devices so they can be used in the future without stopping the VIO Server or having to make dynamic changes. Unused virtual adapters cost very little, so it's not a waste.

In Figure 15, you have the eventual configuration, showing how the VIO Server at the bottom and the client at the top both explicitly refer to each other to eliminate errors. You'll reach this configuration in Step 11. I have not covered it yet, but the client LPAR is, of course, shown here too.

Figure 15. Configuration of VIO Server and client

7. Creating the VIO client LPARs
Now you can create the two VIO client LPARs for the two different types of virtual SCSI used in the example. It's assumed you already know the procedure to create a regular LPAR; this section covers the additional things you need to consider.
This might be obvious, but you don't need real adapters for your disks or Ethernet connection, because you're going to use virtual resources for these.
I recommend you install the client LPAR using CD because it's simple, so you'll want to have the CD SCSI adapter within your LPAR. Once installed, this can be removed from the LPAR profile.
It's recommended that you create two identical LPAR profiles -- one with and one without the CD. Once installed, the writer uses NFS to remotely mount a filesystem containing the AIX and Linux CDs, so the CD drive is not needed from then on to install additional LPP or RPM packages.
Add the Virtual Ethernet adapter on the VIO screen with the same Port virtual LAN ID, which is 1 in this example, but do not select the Trunk adapter or IEEE 802.1Q compatible adapter options.
Add the virtual SCSI adapter as follows:
Set the Adapter Type: Client.
Explicitly name the Remote Partition (the LPAR in which you have the VIO Server).
Explicitly name the Remote Partition Virtual Slot Number, which is slot 3 for the first client LPAR (Client X) and slot 4 for the second client LPAR (Client Y).
Don't forget, you have two client LPARs to create, with two different SCSI Remote Partition virtual slot numbers but the same Virtual Ethernet Port virtual LAN ID.

8. Cleaning up the HMC
Now that you've created the client LPARs, you can go back to the VIO Server LPAR and connect the virtual SCSI adapters explicitly to their virtual client LPARs and slots. This ensures that only the right client LPAR connects to the right virtual SCSI disk. It's a safety precaution and worth doing. Reference Figure 15 to make sure everything is in order.

On the HMC, highlight the VIO Server LPAR profile and bring up its properties. In the VIO tab, select each Server SCSI resource and the Properties button. You also need to set:
The Only selected Remote Partition and Slot can connect option
The correct Remote Partition name
The correct Remote Partition Virtual Slot number
In this example, you have two virtual SCSI adapters on the VIO Server to "clean up". This is very easy to get wrong, and this is why I planned it in advance (see Table 1).

9. VIO Server -- preparing for the clients
You now have all the connections set up for the VIO Server and virtual clients, but you still have to connect each virtual SCSI disk to a piece of real disk space, and the virtual and real Ethernets using the SEA. This is done on the VIO Server only, as follows. First, start the VIO Server LPAR again from the HMC.

9a. Virtual Ethernet
Once the VIO Server is running, and assuming no network is set up, you need to find out the names of the virtual resources you have to work with using:

$ lsdev -virtual
name      status      description
ent2      Available   Virtual I/O Ethernet Adapter (l-lan)
vhost0    Available   Virtual SCSI Server Adapter
vhost1    Available   Virtual SCSI Server Adapter
vhost2    Available   Virtual SCSI Server Adapter
vhost3    Defined     Virtual SCSI Server Adapter
vsa0      Available   LPAR Virtual Serial Adapter
clientY   Available   Virtual Target Device - Logical Volume
clientZ   Available   Virtual Target Device - Logical Volume

To see the real adapters, use:

$ lsdev -type adapter
name      status      description
ent0      Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent1      Available   2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent2      Available   Virtual I/O Ethernet Adapter (l-lan)
ide0      Defined     ATA/IDE Controller Device
lai0      Defined     GXT135P Graphics Adapter
sisioa0   Defined     PCI-X Dual Channel U320 SCSI RAID Adapter
sisioa1   Available   PCI-X Dual Channel U320 SCSI RAID Adapter
. . .

If your real Ethernet adapter has more than one port, it can be confusing, since your Virtual Ethernet will then have a higher number. In the case of a two-port Ethernet card, the Virtual Ethernet name might be en2, as en1 can be the second port on the real adapter. Also, make sure you plug the Ethernet cable into the right port. On your machine, the resource names might be slightly different, so be careful in following this example.

Create the SEA between the real and Virtual Ethernets with (note: do not type the arrows or comments when you try this):

$ mkvdev -sea ent0     <- this is the real Ethernet
  -vadapter ent2       <- this is the Virtual Ethernet
  -default ent2        <- in this simple setup there is only one, so it's the default
  -defaultid 1         <- this is the Port virtual LAN ID from the HMC

This returned the following results:

ent3 Available
en3
et3
$

This created the SEA with the name ent3. Take a look with:

$ lsdev -dev ent3
name   status      description
ent3   Available   Shared Ethernet Adapter

This new SEA adapter is used in the mktcpip command below. Now program the TCP/IP details on the SEA adapter. This command is used instead of smitty (which is not available on the VIO Server) or ifconfig (not allowed for the SEA). You'll, of course, have to decide your own hostname and address:

mktcpip -hostname op34       <- use your own hostname
  -inetaddr 9.137.62.34      <- use your IP address
  -interface en3             <- from the mkvdev command
  -netmask 255.255.255.0     <- normal TCP/IP meaning
  -gateway 9.137.62.1        <- normal TCP/IP meaning

You should now be able to ping your gateway: ping 9.137.62.1.
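You can also confirm the SEA mapping between the virtual and physical adapters with the lsmap command. A minimal sketch (output details vary by level, and the location codes shown are illustrative assumptions):

$ lsmap -all -net
SVEA   Physloc
------ --------------------------------------------
ent2   U9124.720.10033EA-V1-C2-T1

SEA                 ent3
Backing device      ent0
Physloc             U787A.001.DNZ00XY-P1-T9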

9b. Virtual SCSI using a logical volume for Client X
Once the VIO Server is running and before the virtual clients are started, you need to create the disk space and connect it to the virtual SCSI resource that the VIO client will try to attach to. Check your disks:

$ lspv
hdisk0   00c033eaf709961e   rootvg   active
hdisk1   none               None
hdisk2   none               None
$

Here you see the VIO Server is using the first disk and the others are currently unused. Next, take a look at the free space on that first disk and the primary volume group, called rootvg:

$ lsvg rootvg
VOLUME GROUP:       rootvg                 VG IDENTIFIER:   00c033ea00004c000000010104ffa3fc
VG STATE:           active                 PP SIZE:         128 megabyte(s)
VG PERMISSION:      read/write             TOTAL PPs:       271 (34688 megabytes)
MAX LVs:            256                    FREE PPs:        151 (19328 megabytes)
LVs:                14                     USED PPs:        120 (15360 megabytes)
OPEN LVs:           12                     QUORUM:          2
TOTAL PVs:          1                      VG DESCRIPTORS:  2
STALE PVs:          0                      STALE PPs:       0
ACTIVE PVs:         1                      AUTO ON:         yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                   MAX PVs:         32
LTG size (Dynamic): 256 kilobyte(s)        AUTO SYNC:       no
HOT SPARE:          no                     BB POLICY:       relocatable

$ lsvg -pv rootvg
rootvg:
PV_NAME   PV STATE   TOTAL PPs   FREE PPs   FREE DISTRIBUTION
hdisk0    active     271         55         22..09..00..00..24
$

Look at the disk view, too:

$ lspv hdisk0
PHYSICAL VOLUME:    hdisk0                  VOLUME GROUP:     rootvg
PV IDENTIFIER:      00c033eaf709961e        VG IDENTIFIER:    00c033ea00004c000000010104ffa3fc
PV STATE:           active
STALE PARTITIONS:   0                       ALLOCATABLE:      yes
PP SIZE:            128 megabyte(s)         LOGICAL VOLUMES:  14
TOTAL PPs:          271 (34688 megabytes)   VG DESCRIPTORS:   2
FREE PPs:           151 (19328 megabytes)   HOT SPARE:        no
USED PPs:           120 (15360 megabytes)   MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  22..09..00..00..24
USED DISTRIBUTION:  33..45..54..54..30
$

Important things to note here are:
The TOTAL PPs entry shows the disk is a 36 GB drive -- actually 34688 MB, but remember that disk sizes are quoted in millions and billions of bytes, not the binary numbers used here.
The FREE PPs entry shows there is approximately 19 GB of free space on the disk.
The PP SIZE entry shows that the VIO Server allocates disk space in multiples of this amount.

To create a logical volume in the rootvg volume group, try:

$ mklv -lv lv00 rootvg 4G
lv00

The lv00 output confirms the name used to create the logical volume. To connect this to the VIO client resource, the name of the resource (here it's clientx) is used to make it very clear which partition is using it, but you could use any suitable name:

$ mkvdev -vdev lv00 -vadapter vhost0 -dev clientx
clientx Available

You can now start the VIO Client X and find its virtual SCSI resources. In this example, you've defined just one logical volume for this VIO client, but many logical volumes could be used. I recommend you keep them to a minimum to make the configuration simpler.

9c. Virtual SCSI using a whole disk for Client Y
In the above section, you found that hdisk1 was unused. This disk will be used to support the VIO Client Y. The disk must not be in a volume group. In the lspv command above, there is no volume group name next to the disk, so it's not in a volume group. To connect this disk to the virtual SCSI disk for VIO Client Y, try:

$ mkvdev -vdev hdisk1 -vadapter vhost1 -dev clienty
clienty Available

9d. Checking the configuration
You can now see the configuration:

$ lsdev -virtual
name      status      description
ent2      Available   Virtual I/O Ethernet Adapter (l-lan)
vhost0    Available   Virtual SCSI Server Adapter
vhost1    Available   Virtual SCSI Server Adapter
...
vsa0      Available   LPAR Virtual Serial Adapter
clientx   Available   Virtual Target Device - Logical Volume
clienty   Available   Virtual Target Device - Logical Volume
ent3      Available   Shared Ethernet Adapter
$

$ lsdev -dev clientx -attr
attribute         value                description            user_settable
LogicalUnitAddr   0x8100000000000000   Logical Unit Address   False
aix_tdev          lv00                 Target Device Name     False
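Similarly, lsmap can confirm which backing device each virtual SCSI server adapter presents. A minimal sketch (the location code and client partition ID shown are illustrative assumptions):

$ lsmap -vadapter vhost0
SVSA     Physloc                         Client Partition ID
-------- ------------------------------- -------------------
vhost0   U9124.720.10033EA-V1-C3         0x00000002

VTD                clientx
LUN                0x8100000000000000
Backing device     lv00
Physloc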

$ lsdev -dev ent3 -attr
attribute       value   description
pvid            3       PVID to use for the SEA device
pvid_adapter    ent2    Default virtual adapter to use for non-VLAN-tagged packets
real_adapter    ent0    Physical adapter associated with the SEA
thread          0       Thread mode enabled (1) or disabled (0)
virt_adapters   ent2    List of virtual adapters associated with the SEA

10. VIO client LPAR installations
Now you can start up your VIO clients and install them. This can be AIX (but only if the hardware is pSeries p5 and not OpenPower), SUSE SLES 9, Red Hat EL 3 update 3 onwards, or Debian. They should find both the:
Virtual Ethernet, which is named as a Virtual Ethernet and behaves like a real physical adapter.
Virtual SCSI disk, which is presented just like a SCSI disk, but it will only be the size of the underlying disk partition or disk.
These should install just like a regular real Ethernet and SCSI disk.

VIO client running AIX
For AIX, this installation should be just like a normal AIX partition.

VIO client running Linux
For Linux, here are some additional notes. Once running, the Virtual Ethernet looks and behaves like a very fast one Gbit real adapter:

clienta:~ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr AE:38:00:00:D0:02
          inet addr:9.137.62.178  Bcast:9.137.62.255  Mask:255.255.255.0
          inet6 addr: fe80::ac38:ff:fe00:d002/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1075 errors:0 dropped:0 overruns:0 frame:0
          TX packets:350 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:113566 (110.9 Kb)  TX bytes:40940 (39.9 Kb)
          Interrupt:184

Once running, you can see the virtual SCSI disk is treated just like a regular disk:

clienta:~ # fdisk -l
Disk /dev/sda: 4194 MB, 4194304000 bytes
130 heads, 62 sectors/track, 1016 cylinders
Units = cylinders of 8060 * 512 = 4126720 bytes

   Device Boot   Start    End    Blocks   Id   System
/dev/sda1   *        1      1      3999   41   PPC PReP Boot
/dev/sda2            6    132    511810   82   Linux swap
/dev/sda3          133   1016   3562520   83   Linux

And you'll find the IBM virtual SCSI client kernel modules installed (see ibmveth and ibmvscsic below):

clienta:~ # lsmod
Module                  Size  Used by
evdev                  31416  0
joydev                 31520  0
st                     72688  0
ipv6                  478560  93
sg                     74176  0
usbcore               183644  1
ibmveth                44536  0
subfs                  30168  2
dm_mod                108224  0
ibmvscsic              43072  2
sr_mod                 44380  0
sd_mod                 43792  3
scsi_mod              192024  5 st,sg,ibmvscsic,sr_mod,sd_mod

Network installation might be trickier, as you might have to activate the device drivers for the installation tools to find the virtual adapters. Network installations are not covered in this article.

Depending on which release you use, the installer might give you, with a series of menus, options to install the IBM virtual SCSI client and Virtual Ethernet drivers before you start the installation properly. Later releases fully understand and install the device drivers for these virtual resources without manual intervention.

11. Backing up a VIO Server and VIO client
Once you've created your client LPAR and set it up the way you like, you should consider backing up the operating system images. Backup is a large subject about which many books have been written. There are many backup solutions, from both commercial applications and freely available tools, in the AIX and Linux worlds. For AIX, IBM has the Tivoli Storage Manager product for remote backups. For Linux, one of the popular freely available tools is Amanda (Advanced Maryland Automatic Network Disk Archiver), which provides remote backup with disk caching and tape library management. There is also a Linux "Backup and Recovery How To" on the Internet for more information.

This article just covers the special considerations for VIO Servers and VIO clients. Backups are important for at least these three reasons, and they apply to VIO systems, too:
Recovery of files that are accidentally removed.
Disk failure, assuming your disks are not already protected with a mirror or RAID 5, or you are very unlucky and lose more than one disk.
Recreating the entire system after a disaster (total machine loss) from backups held off site.

HMC
The HMC data includes definitions of the LPAR physical resources, such as CPU, memory, and PCI slots, and definitions of the LPAR virtual resources (such as the connections between the VIO Server and VIO clients). See Effective System Management Using the IBM Hardware Management Console for pSeries, SG24-7038-00.
If the HMC fails, the data is still held in the Service Processor (FSP) and can be read by a replacement or recovered HMC. It's vital that the configuration details are available in case of a disaster. HMC backups are documented in manuals, InfoCenter help files, and IBM Redbooks. It's also recommended that details of the LPARs are documented on paper -- for example, something similar to the planning table used to create the LPARs in this article.

VIO Server
The VIO Server itself needs to be backed up. There is the backupios command for saving the rootvg volume group to either a tape, filesystem, CD, or DVD. The structure of the other volume groups (not the contents) can be saved and restored with the savevgstruct and restorevgstruct commands. To save the contents of other volume groups (not rootvg), you'll have to make other arrangements, such as using the oem_setup_env command, dd command, savevg command, tar command, or other backup solutions.

If the VIO Server is purely being used for virtual I/O, then you need to back up:
The details of the client logical volumes or the details of the hdisks. Details include the number of logical volumes, their sizes, the disk layout, which clients use them, and their purpose.
The contents of the client logical volumes or hdisks.
To recover the VIO Server, you can simply reinstall the original install image, which is a mksysb image, in much the same way as you installed the VIO Server in the first place.
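As a sketch of the backupios approach (the NFS server name, mount point, and file name here are assumptions, and the exact flags supported vary by VIO Server level -- check the backupios documentation on your system):

$ mount nimserver:/export/backups /mnt     <- an NFS filesystem with enough space
$ backupios -file /mnt/op34_vios_backup    <- save the rootvg image to a file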

There are different approaches to backing up the VIO client images from the VIO Server. First, note that you have the option of doing hot or cold backups:
Hot backup: A hot backup is taken while the VIO client is running. This is dangerous and is not recommended.
Cold backup: A cold backup is the only sensible way to back up the client from the VIO Server. This is simply a matter of shutting down the VIO client first. For Linux, as root on the client, try: shutdown -fh now; for AIX: shutdown -Fh.

For the logical volume method, you have to use the dd command to copy the logical volume image to a file, tape, NFS, and so on. For the whole hdisk method, the large size probably means a tape is the best option.

The cp command is not a good idea, since it copies a file using small blocks and is very inefficient and slow. A better command is dd with a large block size -- for example, 64 KB blocks with the bs=64k option. To copy a logical volume, try:

dd bs=64k if=/dev/lv01 of=/backup/B

Alternatively, you can back up straight to a tape drive using a command like tar or backup. Some machines support a writable DVD device that can also be used as a backup medium.

Because AIX and the VIO Server can perform DLPAR changes of PCI slots, a single tape drive and its associated SCSI adapter can be moved to the VIO Server for the backup period and then removed, so it can be used in other LPARs.

Recovery of a VIO client involves getting the disk image back in the right place and starting the VIO client LPAR again.

VIO client
The VIO client can back up its own data just like a regular LPAR running AIX or Linux. With an AIX VIO client, the best backup method is the mksysb command.

It's unlikely that VIO client LPARs will have their own tape drive, since the purpose of a VIO client is to share physical resources to reduce hardware requirements. As with the VIO Server, DLPAR changes of PCI slots can be used to temporarily introduce a tape drive to the client so that it can back up its own data. Automating this process can be hard to coordinate between lots of client LPARs, but it can be done using scripts on a central machine. Some machines support a writable DVD device that can also be used as a backup medium.

A second option is for the client to use another LPAR (possibly the VIO Server) or another machine to save the data using either a:
Remote tape drive: You can find lots of information on how to do this and make it secure on the Internet. You might need to check the speed of this mechanism and use a Linux tool called "buffer" to ensure the tape drive streams data onto the tape efficiently.
NFS server: To temporarily store the backup data before it's backed up to tape.
Remote backup application: Discussed at the top of this section; uses a local client application to transfer data to the server machine to provide a backup service.

In all three cases, the high speed of the Virtual Ethernet can boost backup performance. Recovery using these methods can be harder work. With AIX, you can simply recover the mksysb image. With Linux, you might have to reinstall from the original CD-ROMs and then overwrite the running Linux with the backup.

12. Cloning a client
Another option is to create a copy of the VIO client SCSI disk and use it as the virtual SCSI disk for a new VIO client LPAR.

For the logical volume method, create another logical volume of identical size and use the dd command to create a copy of the original logical volume (see the sketch below).
Be careful with the header structure of the logical volume; AIX can keep some information in the first block.
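A minimal sketch of cloning Client X's logical volume for a new client, following this article's naming (the names lv01 and clientz are assumptions for illustration; depending on your level, the dd command may only be available through the oem_setup_env escape shell):

$ mklv -lv lv01 rootvg 4G                  <- same size as the original lv00
lv01
$ oem_setup_env
# dd if=/dev/lv00 of=/dev/lv01 bs=64k      <- copy the disk image
# exit
$ mkvdev -vdev lv01 -vadapter vhost2 -dev clientz
clientz Available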

For the whole disk method, you would need a disk of identical size and then copy from the original to the new disk using the dd command.

You need to check that the cloned LPAR is not using the same Ethernet IP addresses as the original. Alternatively, you can clone the original LPAR before putting it on the network.

13. Linux DLPARs and RAS
This is not really part of the VIO Server, but it's important for what IBM calls reliability, availability, and serviceability (RAS). For AIX 5.3, VIO clients have the DLPAR and RAS features already installed. For the Linux VIO clients, these need to be added as described below.

After installing Linux, it's strongly recommended that you install the IBM packages for both DLPAR (this works for physical and virtual resources) and the daemons and tools that increase RAS. This ensures that you get the expected reliability from your pSeries p5 or OpenPower machine. These RPMs put the LPAR in touch with the HMC for dynamic changes and problem reporting, and provide tools to use on the LPAR, too.

The tools can be downloaded from Service and productivity tools for Linux on POWER. At this Web site, select the right version of Linux tab and then the HMC-Managed option to find the list. Below is the list of RPMs at the time of writing this article, but the version numbers might have been updated since:
1. src-1.2.2.1-0.ppc.rpm
2. rsct.core.utils-2.3.4.2-0.ppc.rpm
3. rsct.core-2.3.4.2-0.ppc.rpm
4. csm.core-1.4.0.3-79.ppc.rpm
5. csm.client-1.4.0.3-79.ppc.rpm
6. devices.chrp.base.ServiceRM-2.2.0.0-1.ppc.rpm
7. DynamicRM-1.1-2.ppc.rpm
8. rpa-dlpar-1.0-11.ppc.rpm
9. rpa-pci-hotplug-1.0-8.ppc.rpm
10. librtas-1.1-12.ppc64.rpm
A prerequisite is the rdist command. For SUSE Linux, this is on the SUSE SLES 9 CD 3, file name rdist-6.1.5-792.1.ppc.rpm.

Download these RPMs and install or update the existing packages (a sketch follows at the end of this section). You should then find you can do DLPAR changes. In addition, Linux can report problems to the HMC, which can forward problems to IBM (if set up) and be used by hardware maintenance staff for diagnosis and correction.

A similar procedure is available for Red Hat and other Linux versions. See the above Web site for details about the packages you need to install.
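For example, assuming all of the RPMs above have been downloaded into the current directory (a sketch only; the exact file names depend on the versions you download, and the numbered order above matters because of package dependencies):

# rpm -Uvh rdist-6.1.5-792.1.ppc.rpm                <- the prerequisite, from SLES 9 CD 3
# rpm -Uvh src-*.ppc.rpm rsct.core.utils-*.ppc.rpm rsct.core-*.ppc.rpm
# rpm -Uvh csm.core-*.ppc.rpm csm.client-*.ppc.rpm
# rpm -Uvh devices.chrp.base.ServiceRM-*.ppc.rpm DynamicRM-*.ppc.rpm
# rpm -Uvh rpa-dlpar-*.ppc.rpm rpa-pci-hotplug-*.ppc.rpm librtas-*.ppc64.rpm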

Conclusion
This article described the virtualization capabilities of IBM POWER5 servers and, through examples that apply equally to both pSeries p5 and eServer OpenPower systems, showed how to set up and use the VIO Server.

Virtualization is a hot topic in the computing industry. The POWER5-based machines provide opportunities for a significant reduction in operating costs for complex environments. Unlike software solutions available from other vendors, the POWER5 implementation uses advanced processor features (the Hypervisor) and hardware features to create efficient and flexible virtualization capabilities.

Acknowledgements
The author wishes to thank Dave Williams and Stephen Atkins, both of IBM UK, for reviewing and improving this article.

Resources
Get more information on:
o Setting up your IBM eServer OpenPower server
o Setting up your Linux on pSeries server
o Linux on POWER Facts and Features
o pSeries p5, OpenPower, and JS20 Facts and Features
Get a full list of Linux on Power applications and more on the way.
Find several Linux on Power developerWorks articles.
Download IBM Redbooks:
o Introduction to Advanced POWER Virtualization on IBM p5 Servers: Introduction and Basic Configuration, SG24-7940
o Advanced POWER Virtualization on IBM p5 Servers: Architecture and Performance Considerations, SG24-5768
Read the following whitepapers:
o IBM p5 570 Server Consolidation Using POWER5 Virtualization
o IBM p5 570 Workload Balancing Using POWER5 Virtualization
o Virtual I/O Server -- Performance/Sizing/QOS Considerations
Go to the InfoCenter for Using the Virtual I/O Server.
Visit the Virtual Innovation Center for Hardware for AIX development support. This is the primary source for all pSeries AIX development.
