
Document version: 3.1, 27 October 2011

Install Your Own OpenStack Cloud


Diablo Edition
By Eric Dodmont

OpenStack is an open source IaaS cloud computing platform (www.openstack.org) written in the Python programming language. In this document, I will describe in detail the installation, configuration and use of my own OpenStack platform. You can use it to install your own private or public cloud.

We will install OpenStack on two physical servers.

node1 will be:
- The cloud controller node (running nova-api, nova-scheduler, nova-objectstore, nova-network, MySQL, RabbitMQ, and Glance).
- A compute node (running nova-compute and KVM).
- A volume node (running nova-volume and iSCSI).

node2 will be:
- A compute node (running nova-compute and KVM).
- A volume node (running nova-volume and iSCSI).

This means that an instance or a volume can be created either on node1 or on node2 (the nova-scheduler decides where to create them). It also means that if you deactivate node2, you can still provision new instances and new volumes: node1 can run in stand-alone mode.

Hardware

node1
CPU: 1 Intel i7
RAM: 8 GB
HDD: 2 x 1 TB (sda & sdb)
NIC: 2 (eth0 & eth1)


node2
CPU: 1 Intel Core2Quad
RAM: 4 GB
HDD: 2 x 500 GB (sda & sdb)
NIC: 2 (eth0 & eth1)

Networks

Network type: VLAN (VlanManager)
Network1 (eth0): public / external network, 192.168.1.0/24
Network2 (eth1): private / internal network (managed by OpenStack), 10.0.0.0/8

node1 (nova-network) is the network gateway: the floating IPs and the private networks' default gateway IPs (10.0.X.1) are configured on it. node1 acts as a router so that the instances running on node1 or node2 can be reached.

Public IPs of the two nodes:
node1: 192.168.1.201
node2: 192.168.1.202

The two nodes and the two networks


Software versions

- Operating System (OS): Linux Ubuntu Server version 11.04 (Natty), 64 bits.
- Cloud Computing (IaaS):



  a. OpenStack Compute (Nova) version 2011.3 (Diablo).
  b. OpenStack Image Service (Glance) version 2011.3 (Diablo).

These are the different versions of OpenStack so far (October 2011):

Code name   Version   Remark
Austin      2010.1
Bexar       2011.1
Cactus      2011.2
Diablo      2011.3    Present released version
Essex       2012.1    Present development version

Naming conventions

The Amazon EC2 and OpenStack Nova denominations are sometimes different. Examples:

Amazon EC2                    OpenStack Nova
Elastic IP (EIP)              Floating IP (FIP)
Elastic Block Storage (EBS)   Volume (VOL)

I will try to use the OpenStack denomination as often as possible. In any case, the following terms are considered synonyms in this document:

- Node & Host & Server
- Instance & Virtual Machine (VM) & Guest
- External network & Public network
- Internal network & Private network
- Floating IP & Elastic IP
- Volume & Elastic Block Storage
- Nova components & Nova services

1) Install node1
Install some required packages:

# aptitude install python-greenlet python-mysqldb unzip

Configure the PPA (Ubuntu Personal Package Archives):

# aptitude install python-software-properties
# add-apt-repository ppa:openstack-release/2011.3
# aptitude update

Remark: To install the development version (Essex for the moment), replace the PPA by ppa:nova-core/trunk (for Nova) and ppa:glance-core/trunk (for Glance).

To install OpenStack Diablo on Ubuntu Oneiric, there is no need to add the PPA because the packages are available directly from the Ubuntu repositories (component: main / section: net).
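For example, on Ubuntu 11.10 (Oneiric) the PPA step would simply be skipped; a minimal sketch for a controller node, using the same package names as in the rest of this guide:

# aptitude update
# aptitude install nova-api nova-objectstore nova-scheduler nova-network
# aptitude install nova-compute nova-volume glance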


Install RabbitMQ (the messaging/queuing server):

# aptitude install rabbitmq-server

Install MySQL (the DB server):

In this document, passwords will be 123456; please choose your own passwords.

# aptitude install mysql-server
# sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
# restart mysql

Install Nova (the compute service)

Create the nova DB and nova username:

# mysql -uroot -p123456 -e 'CREATE DATABASE nova;'
# mysql -uroot -p123456 -e "GRANT ALL PRIVILEGES ON *.* TO 'nova'@'%' WITH GRANT OPTION;"
# mysql -uroot -p123456 -e "SET PASSWORD FOR 'nova'@'%' = PASSWORD('123456');"

Install all the Nova components (six):

# aptitude install nova-api nova-objectstore nova-scheduler nova-network
# aptitude install nova-compute nova-volume
# aptitude install nova-doc

Remark: KVM/Qemu and libvirt are automatically installed when installing nova-compute (see the package dependencies below).

Here is the list of dependencies of all the Nova components as found in the packages metadata. These packages are installed automatically.

1. nova-api: nova-common, python, upstart-job.
2. nova-scheduler: nova-common, python, upstart-job.
3. nova-objectstore: nova-common, python, upstart-job.
4. nova-network: socat, vlan, bridge-utils, dnsmasq-base, nova-common, python, upstart-job.
5. nova-compute: adduser, libvirt-bin, qemu-kvm, nova-common, kpartx, curl, parted, vlan, ebtables, gawk, iptables, open-iscsi, nova-compute-kvm or nova-compute-hypervisor, python, upstart-job.
6. nova-volume: nova-common, lvm2, iscsitarget, python, upstart-job.

And some other Nova package dependencies:

1. python-nova: python, python-support, python-boto, python-m2crypto, python-pycurl, python-daemon, python-carrot, python-kombu, python-lockfile, python-gflags, openssl, python-libxml2, python-ldap, python-sqlalchemy-ext or python-sqlalchemy, python-eventlet, python-routes, python-webob, python-cheetah, python-netaddr, python-paste, python-pastedeploy, python-tempita, python-migrate, python-glance, python-novaclient, python-simplejson, python-lxml, python-feedparser, sudo.
2. nova-common: python-amqplib, python-nova, python, adduser.
3. nova-compute-kvm: nova-compute, python-libvirt, libvirt-bin, kvm.

There is no need to manually configure Ethernet bridges for KVM: these bridges are automatically configured on the nodes when needed (there is one bridge per project).

If you do not want to use KVM but LXC, for example, then replace nova-compute by nova-compute-lxc (instead of installing nova-compute + nova-compute-kvm, it will install nova-compute + nova-compute-lxc), as sketched below.
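A hypothetical variant, assuming you want containers instead of full virtualization (the package name comes from the dependency list above; the matching nova.conf flag is described in the configuration section):

# aptitude install nova-compute-lxc

And in /etc/nova/nova.conf:

--libvirt_type=lxc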


Install Glance (the image service)

Create the glance DB and glance username:

# mysql -uroot -p123456 -e 'CREATE DATABASE glance;'
# mysql -uroot -p123456 -e "GRANT ALL PRIVILEGES ON *.* TO 'glance'@'%' WITH GRANT OPTION;"
# mysql -uroot -p123456 -e "SET PASSWORD FOR 'glance'@'%' = PASSWORD('123456');"

Install the Glance components (api & registry):

# aptitude install glance
# aptitude install python-glance-doc

2) Install node2
Install different utilities:

# aptitude install python-greenlet python-mysqldb unzip

Configure the PPA (Ubuntu Personal Package Archives):

# aptitude install python-software-properties
# add-apt-repository ppa:openstack-release/2011.3
# aptitude update

Install the Nova components (two):

# aptitude install nova-compute nova-volume
# aptitude install nova-doc

3) Configure node1 and node2

On both nodes:


Adapt the hosts file:

# vi /etc/hosts

And add these lines:

192.168.1.201 node1
192.168.1.202 node2

Adapt the interfaces file:

# vi /etc/network/interfaces

And add this line at the end of the eth0 definition block:

up ifconfig eth1 0.0.0.0

Adapt the nova.conf file (configuration of OpenStack Nova):

# vi /etc/nova/nova.conf

And put the following lines in it:

##### RabbitMQ #####
--rabbit_host=192.168.1.201
##### MySQL #####
--sql_connection=mysql://nova:123456@192.168.1.201/nova
##### nova-api #####
--auth_driver=nova.auth.dbdriver.DbDriver
--cc_host=192.168.1.201
--ec2_url=http://192.168.1.201:8773/services/Cloud
--s3_host=192.168.1.201
--s3_dmz=192.168.1.201
--use_deprecated_auth
##### nova-network #####
--network_manager=nova.network.manager.VlanManager
--public_interface=eth0
--vlan_interface=eth1
--network_host=192.168.1.201
--routing_source_ip=192.168.1.201
--fixed_range=10.0.0.0/8
--network_size=1024
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--force_dhcp_release=true
--fixed_ip_disassociate_timeout=30
##### nova-compute #####
--libvirt_type=kvm
--libvirt_use_virtio_for_bridges=true
--start_guests_on_host_boot=true

--resume_guests_state_on_host_boot=true
##### nova-volume #####
--iscsi_ip_prefix=192.168.1.20
--num_targets=100
##### glance #####
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=192.168.1.201:9292
##### Misc #####
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose
##### VNC Console #####
--vnc_enabled=true
--vncproxy_url=http://lcc.louvrex.net:6080
--vnc_console_proxy_url=http://lcc.louvrex.net:6080


Some parameters in the nova.conf file are the defaults and therefore do not need to be put in the configuration file. But for clarity, I prefer them to be present. Examples of some default parameters:

Parameter: network_manager (type of networking used on the internal / private network used by the instances)

- nova.network.manager.FlatManager: One flat network for all projects (no DHCP server).
- nova.network.manager.FlatDHCPManager: One flat network for all projects (with a DHCP server).
- nova.network.manager.VlanManager (default): The most sophisticated OpenStack network mode (one VLAN, one Ethernet bridge, one IP range subnet, and one DHCP server per project).
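For illustration only (this guide uses VlanManager), switching to the flat DHCP mode would mean replacing the network_manager line of nova.conf with something like the following; additional flat_* flags would most likely be needed as well:

--network_manager=nova.network.manager.FlatDHCPManager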

Parameter: libvirt_type (type of virtualization used on the compute nodes)

- kvm (default): Linux Kernel-based Virtual Machine (a hardware virtualization technology like Intel VT-x is required).
- qemu: You can use it if you install OpenStack in a VM or on hardware without a hardware virtualization technology like Intel VT-x.
- lxc: LinuX Containers (based on the Linux kernel; virtualization of the OS, not of the hardware; similar to OpenVZ and Parallels Virtuozzo; in Solaris Unix, this virtualization technology is called Solaris Zones).
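As a small sketch (assuming, for example, that you are testing this guide inside a virtual machine without VT-x), you would only change this line in the nova-compute section of nova.conf shown earlier and keep the rest as is:

--libvirt_type=qemu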

Virtualization technologies supported by OpenStack:

- KVM (via libvirt)
- Qemu (via libvirt)
- UML (via libvirt)
- Xen (via libvirt)
- XenServer (from Citrix)
- ESX (from VMware)
- Hyper-V (from Microsoft)
- OpenVZ (soon)
- VirtualBox (Oracle) (soon)


On node1 only:

To allow node1 to act as a router:

# vi /etc/sysctl.conf

And uncomment this line:

net.ipv4.ip_forward=1

Adapt the glance-xxx.conf files (configuration of OpenStack Glance):

# vi /etc/glance/glance-registry.conf
# vi /etc/glance/glance-scrubber.conf

And replace one line like this:

Before: sql_connection = sqlite:////var/lib/glance/glance.sqlite
After:  sql_connection = mysql://glance:123456@192.168.1.201/glance

Create the tables in the nova DB:

# nova-manage db sync

Create the tables in the glance DB:

# glance-manage db_sync

Reboot both servers to take all the configurations into account.
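Remark: if you want IP forwarding to be active before the reboot, you can reload the sysctl settings immediately (standard sysctl usage):

# sysctl -p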

4) Configure the networking (nova-network)


In the VLAN network mode, each project is given a specific VLAN/subnet. We will configure three VLANs/subnets (feel free to create many more).

VLAN   Bridge   Subnet        DGW (1)    VPN (2)    Instance IPs (3)
1      br1      10.0.1.0/24   10.0.1.1   10.0.1.2   10.0.1.4 - 10.0.1.254
2      br2      10.0.2.0/24   10.0.2.1   10.0.2.2   10.0.2.4 - 10.0.2.254
3      br3      10.0.3.0/24   10.0.3.1   10.0.3.2   10.0.3.4 - 10.0.3.254

(1) Default gateway: automatically configured on node1/nova-network.
(2) Cloudpipe VPN instance: used to access the network via VPN.
(3) Instance IPs: automatically distributed to the instances via DHCP.



Launch these commands to create the three networks:

# nova-manage network create vlan1 10.0.1.0/24 1 256 --vlan 1
# nova-manage network create vlan2 10.0.2.0/24 1 256 --vlan 2
# nova-manage network create vlan3 10.0.3.0/24 1 256 --vlan 3

In fact, for each project, not only a specific VLAN and subnet are attributed, but also:

- A specific Ethernet bridge is configured on the nodes hosting the project's instances;
- A specific DHCP server (dnsmasq) is launched on node1/nova-network to serve IPs to the project's instances.
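To check that the three networks have been registered, you can list them (a standard nova-manage sub-command; the output columns vary between versions):

# nova-manage network list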

The first time you launch an instance in the cloud, let's say an instance for project1, VLAN1 will be chosen and attributed exclusively to project1. From that moment on, VLAN1 will always be used for project1's instances. If you launch an instance for another project, the first VLAN not yet attributed to a project will be chosen and attributed to that project.

The ip addr (show IP addresses) and brctl show (show bridge interfaces) commands on node1 will give a result like this (I cleaned up the output a lot):

# ip addr
1: lo
   inet 127.0.0.1/8
   inet 169.254.169.254/32 (1)

(1) Amazon EC2 metadata service

2: eth0 (2)
   ether 00:24:1d:d3:a1:e6
   inet 192.168.1.201/24 (3)

(2) First physical Ethernet interface (connected to the public network)
(3) node1 public IP

3: eth1 (4)
   ether 00:10:18:34:c0:e5

(4) Second physical Ethernet interface (connected to the private network)

4: virbr0 (5)
   ether ae:4e:3d:1f:97:3b
   inet 192.168.122.1/24

(5) Bridge configured by the libvirt API

# brctl show
virbr0 (1)



(1) Bridge configured by libvirt.

If you launch some instances in different projects, and if you associate some floating IPs to them, you could get results like this:

# ip addr
1: lo
   inet 127.0.0.1/8
   inet 169.254.169.254/32 (1)

(1) Amazon EC2 metadata service

2: eth0 (2)
   ether 00:24:1d:d3:a1:e6
   inet 192.168.1.201/24 (3)
   inet 192.168.1.240/32 (4)
   inet 192.168.1.241/32 (5)

(2) First physical Ethernet interface (connected to the public network)
(3) node1 public IP
(4) Floating IP no. 1 (associated to an instance)
(5) Floating IP no. 2 (associated to an instance)


3: eth1 (6)
   ether 00:10:18:34:c0:e5

(6) Second physical Ethernet interface (connected to the private network)

4: virbr0 (7)
   ether ae:4e:3d:1f:97:3b
   inet 192.168.122.1/24

(7) Bridge configured by the libvirt API

5: vlan1@eth1 (8)
   ether 00:10:18:34:c0:e5

(8) eth1 interface tagged for VLAN1

6: br1 (9)
   ether 00:10:18:34:c0:e5
   inet 10.0.1.1/24 (10)

(9) Bridge connected on the vlan1@eth1 interface
(10) Default gateway of the first VLAN network (e.g. for the 1st project)

7: vlan2@eth1 (11)
   ether 00:10:18:34:c0:e5

(11) eth1 interface tagged for VLAN2

8: br2 (12)
   ether 00:10:18:34:c0:e5
   inet 10.0.2.1/24 (13)


(12) Bridge connected on the vlan2@eth1 interface
(13) Default gateway of the second VLAN network (e.g. for the 2nd project)

9: vlan3@eth1 (14)
   ether 00:10:18:34:c0:e5

(14) eth1 interface tagged for VLAN3

10: br3 (15)
    ether 00:10:18:34:c0:e5
    inet 10.0.3.1/24 (16)

(15) Bridge connected on the vlan3@eth1 interface
(16) Default gateway of the third VLAN network (e.g. for the 3rd project)

11: vnet0 (17)
    ether fe:16:3e:2a:a3:f1

(17) Virtual interface for the first instance running on node1

12: vnet1 (18)
    ether fe:16:3e:46:07:6b

(18) Virtual interface for the second instance running on node1

13: vnet2 (19)
    ether fe:16:3e:34:53:06

(19) Virtual interface for the third instance running on node1

# brctl show
br1      vlan1
         vnet0 (1)
         vnet1 (2)
         vnet2 (3)
br2      vlan2
br3      vlan3
virbr0

(1) Virtual interface for the 1st instance running on node1 (VLAN1)
(2) Virtual interface for the 2nd instance running on node1 (VLAN1)
(3) Virtual interface for the 3rd instance running on node1 (VLAN1)



OpenStack's VLAN networking in one picture


Configure the public floating IPs (8 IPs: 192.168.1.240 to 192.168.1.247):

# nova-manage floating create 192.168.1.240/29

iptables is used to configure the floating IPs on nova-network (node1 in our case). iptables is also used to configure the firewall rules (security groups) on nova-compute (node1 and node2 in our case).

For the floating IPs, the nat table is used. You can see these NAT rules on node1 with this command:

# iptables -nL -t nat

For the firewall rules (security groups), the filter table is used. You can see them on node1 or node2 with this command:

# iptables -nL -t filter
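Purely as an illustration (the chain names and exact rules that nova-network generates may differ), associating a floating IP to an instance results in DNAT/SNAT rules of this kind in the nat table:

# iptables -t nat -A nova-network-PREROUTING -d 192.168.1.240/32 -j DNAT --to-destination 10.0.1.3
# iptables -t nat -A nova-network-float-snat -s 10.0.1.3/32 -j SNAT --to-source 192.168.1.240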

5) Configure the storage (nova-volume)


On both nodes:

Create one LVM primary partition (sdb1) on the second HDD:

# cfdisk /dev/sdb

Create one LVM physical volume:

# pvcreate /dev/sdb1

Create one LVM volume group called nova-volumes:

# vgcreate nova-volumes /dev/sdb1

Start the iSCSI target service automatically:

# sed -i 's/false/true/g' /etc/default/iscsitarget

Start the iscsitarget and nova-volume services:

# /etc/init.d/iscsitarget start
# start nova-volume
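You can verify that the volume group is in place before nova-volume starts using it (standard LVM reporting commands):

# pvs
# vgs nova-volumes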



Please note that in our configuration, the iSCSI traffic will pass over the external network. This traffic flows between the nova-volume components and the nova-compute components; the nova-compute components then expose the volumes to the attached instances. In a bigger configuration with more than two nodes, a dedicated storage network should be foreseen:

OpenStack multinode architecture (c) Stackops (www.stackops.org)


In our configuration:

Public network = Management network = Storage network = 192.168.1.0/24 (eth0)
Service network = 10.0.0.0/8 (eth1)

Remark: In the latest version of Nova, you have the choice of the iSCSI target software you want to use:

- iet (iSCSI Enterprise Target / iscsitarget): default until Ubuntu Natty.
- tgt (Linux SCSI target framework): default as from Ubuntu Oneiric.


The flag in nova.conf is: --iscsi_helper=ietadm|tgtadm


6) Install the CLI clients


There are two APIs in OpenStack Nova:

- Amazon EC2-API: you will use the euca-XXX commands with that API (package: euca2ools). These commands are compatible with the Amazon public cloud (AWS: Amazon Web Services; EC2: Elastic Compute Cloud).
- OpenStack OS-API: you will use the nova XXX commands with that API (package: python-novaclient). It is not yet possible to manage everything with it, but later on it should become the reference API for OpenStack.
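For instance, listing your running instances can be done with either client once your credentials are loaded (both commands are standard; only the output format differs):

# euca-describe-instances
# nova list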

In this document, I will sometimes use the EC2-API (euca), sometimes the OS-API (nova or glance), and sometimes both. The nova-manage commands are reserved for the administrators of the cloud, while the euca, nova, and glance commands are for the end users of the cloud.

On any computer other than node1 or node2:

Install python-novaclient (the OpenStack Nova CLI client):

# aptitude install python-software-properties
# add-apt-repository ppa:openstack-release/2011.3
# add-apt-repository ppa:dodeeric/openstack-dodeeric
# aptitude update

# aptitude install python-novaclient

Remarks:

- The python-novaclient package is installed automatically with the nova-common package, which is needed by all Nova components. If you want to launch the nova commands from node1 or node2, there is no need to install that package.
- If you want a more recent version of python-novaclient (2.6.6 in place of 2.6.4), then also add the following PPA: dodeeric/openstack-dodeeric. With version 2.6.6, you will be able to manage key pairs, security groups, and volumes, to boot instances with a key pair and a security group, etc.

On node1 or any other computer:

Install Euca2ools (the Amazon EC2 CLI client):

# aptitude install euca2ools



Remarks: You need the latest version of the Euca2ools to be able to use all the features of the Diablo release (e.g. boot from an EBS-volume). Euca2ools version 2.0.0 is available by default in Ubuntu 11.10 (Oneiric).


How to install the latest version of the Euca2ools on Ubuntu Natty (version 2.0.0 for the moment):

Install the latest version of Boto (Amazon AWS / EC2 API library):

# wget http://boto.googlecode.com/files/boto-2.0.tar.gz
# tar -xzvf boto-2.0.tar.gz
# cd boto-2.0
# python setup.py install

Install the latest version of M2Crypto:

# wget http://pypi.python.org/packages/source/M/M2Crypto/M2Crypto-0.21.1.tar.gz
# tar -xzvf M2Crypto-0.21.1.tar.gz
# cd M2Crypto-0.21.1
# aptitude install swig libssl-dev python-dev
# python setup.py build
# python setup.py install

And finally install the latest version of Euca2ools:

# aptitude install bzr
# bzr branch lp:euca2ools
# cd euca2ools
# aptitude install make
# make

7) Internet access
Internet address and DNS

My personal home network has only one internet IP address, and it is a dynamic IP that changes every 4 days. My cloud is reachable from the internet with the DNS name lcc.louvrex.net (LCC = Louvrex Cloud Computing). The lcc.louvrex.net DNS name is linked to my dynamic internet IP address. I use the DynDNS service, with ddclient running on node1, to update the DNS name automatically (see the sketch below).
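A minimal /etc/ddclient.conf sketch, assuming a DynDNS-style account (the protocol, server, and credentials shown here are placeholders, not my real settings):

protocol=dyndns2
use=web, web=checkip.dyndns.org
server=members.dyndns.org
login=mylogin
password='mypassword'
lcc.louvrex.net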

PAT/NAT

Different NAT/PAT rules are configured in the router to access the ec2-api, the os-api, the nodes (ssh), and the instances (ssh, http, etc.).

Here is a sample of these rules:

Rule name       Internet port   LAN port   LAN IP
node1-ssh       2201            22         192.168.1.201
node2-ssh       2202            22         192.168.1.202
node1-ec2-api   8773            8773       192.168.1.201
node1-os-api    8774            8774       192.168.1.202
eip1-http       80              80         192.168.1.240
eip2-http       8041            80         192.168.1.241
eip3-http       8042            80         192.168.1.242
eip4-http       8043            80         192.168.1.243
eip1-ssh        22              22         192.168.1.240
eip2-ssh        2201            22         192.168.1.241
eip3-ssh        2202            22         192.168.1.242
eip4-ssh        2203            22         192.168.1.243


8) Use the cloud


From node1:

Create one user (in this case, the user will have full cloud admin rights):

# nova-manage user admin dodeeric

Create one project (the user dodeeric will be the project manager of the project):

# nova-manage project create project-one dodeeric "Test project"

Retrieve the credentials:

# nova-manage project zipfile project-one dodeeric
# unzip nova.zip
# . novarc

Allow ssh (tcp/22) and icmp (ping) by default for project-one (adapt the default security group):

# euca-authorize -P icmp -t -1:-1 default
# euca-authorize -P tcp -p 22 default

Create a key pair to be used to access your instances:

# euca-add-keypair key-dodeeric > key-dodeeric.priv
# chmod 600 key-dodeeric.priv
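With the credentials sourced, a quick sanity check that the Nova services are up and registered (a standard euca2ools command against Nova):

# euca-describe-availability-zones verbose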



Add at least one image:

Let's use the glance CLI for that. You have two possibilities: with or without a ramdisk image.

Without a ramdisk, you need two images to run an instance:
1) the kernel image (AKI)
2) the root-fs image (AMI)

With a ramdisk, you need three images to run an instance:
1) the kernel image (AKI)
2) the ramdisk image (ARI)
3) the root-fs image (AMI)

By default, the Ubuntu cloudimg is delivered without a ramdisk image, and the ramdisk is indeed not needed. But if you are going to modify (customize) the root-fs image, the ramdisk image will be needed, or else the instance might not boot correctly to the end. Below is how to upload the images to Glance with and without a ramdisk image.

Download the latest cloud image of Linux Ubuntu Server 11.10 (Oneiric) 64 bits:

# wget http://cloud-images.ubuntu.com/oneiric/current/oneiric-server-cloudimg-amd64.tar.gz

Remark: "cloudimg" (Cloud Image) was previously known as "UEC" (Ubuntu Enterprise Cloud).

Untar the file:

# tar -xzvf oneiric-server-cloudimg-amd64.tar.gz

(Optional) Delete all unneeded files; only the two following ones are needed:

# ls -lh
oneiric-server-cloudimg-amd64.img
oneiric-server-cloudimg-amd64-vmlinuz-virtual

The first one is the root-fs image and the second one is the kernel image:

# file oneiric-server-cloudimg-amd64.img
oneiric-server-cloudimg-amd64.img: Linux rev 1.0 ext4 filesystem data, UUID=cf31b4f7-a5ad-4c24-9a99-6ca117d43eb8, volume name "cloudimg-rootfs" (extents) (large files) (huge files)

# file oneiric-server-cloudimg-amd64-vmlinuz-virtual


oneiric-server-cloudimg-amd64-vmlinuz-virtual: Linux kernel x86 boot executable bzImage, version 3.0.0-12-virtual (buildd@creste, RO-rootFS, root_dev 0x801, swap_dev 0x4, Normal VGA)

The "-virtual" means that the kernel has been optimized for virtual machines.

Let's upload the images to Glance:

A) Without a ramdisk image

Upload the kernel image:

# glance add name="oneiric-server-cloudimg-amd64-vmlinuz-virtual" is_public=true disk_format=aki container_format=aki architecture=x86_64 < oneiric-server-cloudimg-amd64-vmlinuz-virtual
Added new image with ID: 1

Upload the root-fs image and specify the kernel image id to be used (in this case kernel_id=1):

# glance add name="oneiric-server-cloudimg-amd64" is_public=true disk_format=ami container_format=ami kernel_id=1 architecture=x86_64 < oneiric-server-cloudimg-amd64.img
Added new image with ID: 2

Now you have both images uploaded in Glance, and you can use them to start instances:

Glance API:

# glance index
ID   Name                                            Disk Format   Container Format   Size
2    oneiric-server-cloudimg-amd64                   ami           ami                1476395008
1    oneiric-server-cloudimg-amd64-vmlinuz-virtual   aki           aki                4731440



Nova OS-API:

# nova image-list
+----+-----------------------------------------------+--------+
| ID | Name                                          | Status |
+----+-----------------------------------------------+--------+
| 1  | oneiric-server-cloudimg-amd64-vmlinuz-virtual | ACTIVE |
| 2  | oneiric-server-cloudimg-amd64                 | ACTIVE |
+----+-----------------------------------------------+--------+

Nova EC2-API:

# euca-describe-images

IMAGE   ami-00000002   None (oneiric-server-...)   available   public   x86_64   machine   aki-00000001   instance-store
IMAGE   aki-00000001   None (oneiric-server-...)   available   public   x86_64   kernel    instance-store


* To start an instance:

# nova boot --key_name dodeeric --security_groups default --flavor 3 --image 2 superman

superman being the name and hostname of the instance:

# nova list
+----+----------+--------+----------------+
| ID | Name     | Status | Networks       |
+----+----------+--------+----------------+
| 7  | superman | ACTIVE | vlan1=10.0.1.3 |
+----+----------+--------+----------------+

# ssh -i key-dodeeric.priv ubuntu@10.0.1.3
ubuntu@superman:~$

B) With a ramdisk image

As the ramdisk image is not present in the tarball file, you will have to extract it from the root-fs image.

Mount the root-fs image on a loopback device:

# mkdir mnt
# mount -o loop oneiric-server-cloudimg-amd64.img mnt/

Go inside the root-fs image, and copy the kernel and ramdisk images which are in the boot directory:

# cd mnt/boot/
root@node1:~/images/oneiric/mnt/boot# ls -l
total 12528
-rw-r--r-- 1 root root  730681 2011-10-07 23:52 abi-3.0.0-12-virtual
-rw-r--r-- 1 root root  134874 2011-10-07 23:52 config-3.0.0-12-virtual
drwxr-xr-x 3 root root   12288 2011-10-19 07:52 grub
-rw-r--r-- 1 root root 4124184 2011-10-19 07:52 initrd.img-3.0.0-12-virtual   <=== ramdisk image
-rw-r--r-- 1 root root  176764 2011-05-03 01:07 memtest86+.bin
-rw-r--r-- 1 root root  178944 2011-05-03 01:07 memtest86+_multiboot.bin
-rw------- 1 root root 2723214 2011-10-07 23:52 System.map-3.0.0-12-virtual
-rw------- 1 root root    1367 2011-10-07 23:58 vmcoreinfo-3.0.0-12-virtual
-rw------- 1 root root 4731440 2011-10-07 23:52 vmlinuz-3.0.0-12-virtual      <=== kernel image


root@node1:~/images/oneiric/mnt/boot# cp initrd.img-3.0.0-12-virtual ~/images/oneiric/
root@node1:~/images/oneiric/mnt/boot# cp vmlinuz-3.0.0-12-virtual ~/images/oneiric/

As you can see, the two kernels are identical (4731440 bytes):

root@node1:~/images/oneiric# ls -l



-rw-r--r--  1 root     root        4124184 2011-10-26 13:35 initrd.img-3.0.0-12-virtual
drwxr-xr-x 23 root     root           4096 2011-10-19 07:53 mnt
-rw-r--r--  1 dodeeric dodeeric 1476395008 2011-10-26 13:36 oneiric-server-cloudimg-amd64.img
-rw-r--r--  1 dodeeric dodeeric    4731440 2011-10-19 07:59 oneiric-server-cloudimg-amd64-vmlinuz-virtual
-rw-------  1 root     root        4731440 2011-10-26 13:35 vmlinuz-3.0.0-12-virtual


I prefer to rename the kernel and ramdisk images as follows:

# mv vmlinuz-3.0.0-12-virtual kernel-amd64-3.0.0-12-virtual
# mv initrd.img-3.0.0-12-virtual ramdisk-amd64-3.0.0-12-virtual

Unmount the root-fs image:

# umount mnt/

* Now you can upload the three images:

The kernel:

# glance add name="kernel-amd64-3.0.0-12-virtual" is_public=true disk_format=aki container_format=aki architecture=x86_64 < kernel-amd64-3.0.0-12-virtual
Added new image with ID: 3

The ramdisk:

# glance add name="ramdisk-amd64-3.0.0-12-virtual" is_public=true disk_format=ari container_format=ari architecture=x86_64 < ramdisk-amd64-3.0.0-12-virtual
Added new image with ID: 4

The root-fs image with kernel_id=3 and ramdisk_id=4:

# glance add name="oneiric-server-cloudimg-amd64" is_public=true disk_format=ami container_format=ami architecture=x86_64 kernel_id=3 ramdisk_id=4 < oneiric-server-cloudimg-amd64.img
Added new image with ID: 5

# glance index
ID   Name                                            Disk Format   Container Format   Size
5    oneiric-server-cloudimg-amd64                   ami           ami                1476395008
4    ramdisk-amd64-3.0.0-12-virtual                  ari           ari                4124184
3    kernel-amd64-3.0.0-12-virtual                   aki           aki                4731440
2    oneiric-server-cloudimg-amd64                   ami           ami                1476395008
1    oneiric-server-cloudimg-amd64-vmlinuz-virtual   aki           aki                4731440

If you want, you can delete images 1 and 2:

# glance delete 1



Delete image 1? [y/N] y
Deleted image 1

# glance delete 2
Delete image 2? [y/N] y
Deleted image 2

# glance index
ID   Name                             Disk Format   Container Format   Size
5    oneiric-server-cloudimg-amd64    ami           ami                1476395008
4    ramdisk-amd64-3.0.0-12-virtual   ari           ari                4124184
3    kernel-amd64-3.0.0-12-virtual    aki           aki                4731440


Please note that we are only speaking here of AMIs (Amazon Machine Image). Such images use a kernel (AKI) and a ramdisk (ARI) from outside the root-fs (AMI). Glance and Nova support other formats as well.

Be aware that the EC2-API gives the references (for images, instances, volumes, etc.) in hexadecimal; the OS-API (Glance included) gives the references in decimal. E.g.:

Image     OS (Dec.)   EC2 (Hex.)
Kernel    24          18 (aki-00000018)
Ramdisk   25          19 (ari-00000019)
Image     26          1A (ami-0000001A)

Let's launch our first instance:

# euca-run-instances -k key-dodeeric -t m1.medium ami-0000001a
Result: i-00000001

The -t parameter is the type of the instance. It is defined like this (but can be changed):

# nova-manage instance_type list
m1.tiny:    Memory: 512MB,    VCPUS: 1,  Storage: 0GB
m1.small:   Memory: 2048MB,   VCPUS: 1,  Storage: 20GB
m1.medium:  Memory: 4096MB,   VCPUS: 2,  Storage: 40GB
m1.large:   Memory: 8192MB,   VCPUS: 4,  Storage: 80GB
m1.xlarge:  Memory: 16384MB,  VCPUS: 8,  Storage: 160GB

The storage value is the non-persistent local storage (/dev/vdb).

Let's associate one floating IP to the instance:

# euca-allocate-address
Result: 192.168.1.240


# euca-associate-address -i i-00000001 192.168.1.240

Let's connect to the instance:

# ssh -i key-dodeeric.priv ubuntu@192.168.1.240

Let's create a volume of 5 GB and attach it to the instance:

# euca-create-volume -s 5 -z nova
Result: vol-00000001
# euca-attach-volume -i i-00000001 -d /dev/vdc vol-00000001

Let's use that volume inside the instance:

Check if the volume is seen by the instance:

# fdisk -l

You should see /dev/vdc.

Create one partition (vdc1):

# cfdisk /dev/vdc

Format the partition with the ext4 filesystem:

# mkfs.ext4 /dev/vdc1

Create a directory to mount the volume:

# mkdir /ebs

Mount the volume on the directory:

# mount /dev/vdc1 /ebs

Edit the fstab file to mount the volume automatically at boot time:

# vi /etc/fstab

And add this line:

/dev/vdc1 /ebs ext4 nobootwait


There are also local disks:



- vda: root filesystem disk (boot)
- vdb: additional storage disk

Remark: the volume (created with euca-create-volume and attached to the instance with the iSCSI protocol) is permanent/persistent storage: when the instance is terminated, that volume will survive. This is not the case for the local disks.
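When you no longer need the volume, you can detach it and then delete it (a sketch using the same euca2ools client; unmount it inside the instance first):

# euca-detach-volume vol-00000001
# euca-delete-volume vol-00000001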


9) Customize your own image


Let's customize the latest Linux Ubuntu Server image: Natty (11.04) 64 bits. We will add the following packages:

- Apache/PHP (web server + PHP programming language)
- MySQL (DB server)
- PhpMyAdmin (web interface for MySQL)
- Postfix (SMTP email server)

We will also make some specific configurations.

- Let's download the UEC (Ubuntu Enterprise Cloud / Eucalyptus) image, which is compatible with OpenStack:

# mkdir custom-image
# cd custom-image
# wget http://uec-images.ubuntu.com/natty/current/natty-server-uec-amd64.tar.gz

- Untar the file, and delete the unneeded files/images (we only need natty-server-uec-amd64.img):

# tar -xzvf natty-server-uec-amd64.tar.gz
# rm natty-server-uec-amd64-vmlinuz-virtual natty-server-uec-amd64-loader natty-server-uec-amd64-floppy README.files natty-server-uec

- Rename the image:

# mv natty-server-uec-amd64.img dodeeric-lamp-v1-natty-server-uec-amd64.img

- We will mount the image and chroot into it to install/configure it:

# mkdir mnt
# mount -o loop dodeeric-lamp-v1-natty-server-uec-amd64.img mnt
# mount -o bind /proc mnt/proc
# mount -o bind /sys mnt/sys
# mount -o bind /dev mnt/dev
# chroot mnt

- Configure a working DNS server (first remove all the lines in the resolv.conf file):

# vi /etc/resolv.conf

And add:


nameserver 192.168.1.201

- Install this mandatory package:

# aptitude install language-pack-en-base

- Configure your time zone:

# dpkg-reconfigure tzdata

- Install the Apache / MySQL / PHP packages (select LAMP):

# tasksel

- Install and configure Postfix. You can choose the satellite configuration and enter a working SMTP relay server (my relay is relay.skynet.be):

# aptitude install postfix

- Allow connecting with the root username instead of the ubuntu username:

# vi /etc/cloud/cloud.cfg

Before: disable_root: 1
After:  disable_root: 0

- Exit the chrooted environment:

# exit

- Let's retrieve the kernel and the ramdisk (rd) images:

# cd mnt/boot
# cp vmlinuz-2.6.38-8-virtual ../../
# cp initrd.img-2.6.38-8-virtual ../../

- Now we can unmount the image:

# umount -l mnt

- Let's rename the kernel and the ramdisk images into something clearer:

# mv vmlinuz-2.6.38-8-virtual natty-server-uec-amd64-kernel-2.6.38-8-virtual
# mv initrd.img-2.6.38-8-virtual natty-server-uec-amd64-ramdisk-2.6.38-8-virtual




After that, import the three images into Glance as described previously.
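Concretely, this comes down to the same glance add commands used earlier, applied to the renamed files (a sketch; replace <AKI_ID> and <ARI_ID> with the IDs returned by the first two uploads):

# glance add name="natty-server-uec-amd64-kernel-2.6.38-8-virtual" is_public=true disk_format=aki container_format=aki architecture=x86_64 < natty-server-uec-amd64-kernel-2.6.38-8-virtual
# glance add name="natty-server-uec-amd64-ramdisk-2.6.38-8-virtual" is_public=true disk_format=ari container_format=ari architecture=x86_64 < natty-server-uec-amd64-ramdisk-2.6.38-8-virtual
# glance add name="dodeeric-lamp-v1-natty-server-uec-amd64" is_public=true disk_format=ami container_format=ami architecture=x86_64 kernel_id=<AKI_ID> ramdisk_id=<ARI_ID> < dodeeric-lamp-v1-natty-server-uec-amd64.img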

