There are a few Infrastructure-as-a-Service offerings that are available to download and use. Eucalyptus and OpenNebula are two such offerings, and I ended up installing and experimenting with both. In this blog post, I’ll detail my experience of installing and setting up Eucalyptus 1.6.2 on CentOS.
For the sake of keeping things simple but still practical enough, we will have:
- 1 front-end machine. This will house the Cloud Controller (CLC) and Walrus. Since we intend to keep things fairly simple we will limit ourselves to a single cluster and setup the Cluster Controller (CC) and Storage Controller (SC) on this same machine. In my case, this machine has one network interface (NIC) with an IP address of 192.168.0.114.
- 2 machines (Nodes) that will serve as hosts running Xen hypervisor for the virtual machines i.e. each machine will have a Node Controller (NC) installed. In my case, each machine has a single NIC and the IP addresses are 192.168.0.19 and 192.168.5.7 respectively.
Before we install Eucalyptus we need to first prep these machines.
Note: For the rest of this document, run the commands as root user.
This document is organized into sections below. Feel free to skip any section if you have already implemented the steps in it.
On the front-end machine, we first install Java and Ant. You can download Sun JDK from here and Ant from here. I’m using JDK version 1.6u20 (jdk-6u20-linux-i586-rpm.bin) and Ant version 1.8.0 (apache-ant-1.8.0-bin.tar.gz).
Once you have downloaded Sun JDK to a directory, install it as follows:
chmod +x jdk-6u20-linux-i586-rpm.bin
./jdk-6u20-linux-i586-rpm.bin
You can confirm that java is on the PATH by running the following command:
java -version
You should see output similar to:
Java(TM) SE Runtime Environment (build 1.6.0_19-b04)
Java HotSpot(TM) Client VM (build 16.2-b04, mixed mode, sharing)
Next, install Ant under /opt directory as follows:
cd /opt
mkdir ant
cd ant
tar zxvf ~/apache-ant-1.8.0-bin.tar.gz
ln -s apache-ant-1.8.0 latest
Next, we need to add an environment variable ANT_HOME that points to /opt/ant/latest and append the $ANT_HOME/bin to the PATH environment variable. Add this to the /etc/profile file as follows:
cd /etc
cp profile profile.ORIG
echo "export ANT_HOME=/opt/ant/latest" >> profile
echo "export PATH=\$PATH:\$ANT_HOME/bin" >> profile
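To confirm the new variables without logging out and back in, you can re-read /etc/profile in the current shell. This is just a sanity check, assuming Ant was untarred under /opt/ant as above:

```shell
# Re-read the profile so the current shell picks up ANT_HOME and PATH
source /etc/profile

# Should print /opt/ant/latest
echo $ANT_HOME

# Should report Apache Ant version 1.8.0
ant -version
```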
Next we need to install a few dependencies (dhcp, bridge-utils, httpd, xen-libs, ntp) and synchronize the system clock on the front-end machine. You can do this as follows:
yum update
yum install dhcp xen-libs httpd bridge-utils ntp
ntpdate pool.ntp.org
I have the following versions installed:
yum list dhcp xen-libs httpd bridge-utils
Loading mirror speeds from cached hostfile
* addons: mirror.fdcservers.net
* base: mirrors.ecvps.com
* extras: mirror.ubiquityservers.com
* updates: mirror.ubiquityservers.com
Installed Packages
bridge-utils.i386 1.1-2 installed
dhcp.i386 12:3.0.5-21.el5_4.1 installed
httpd.i386 2.2.3-31.el5.centos.4 installed
xen-libs.i386 3.0.3-94.el5_4.3 installed
Available Packages
dhcp.i386 12:3.0.5-23.el5 base
httpd.i386 2.2.3-43.el5.centos base
xen-libs.i386 3.0.3-105.el5 base
We also allow the front-end machine to forward IP packets as follows:
cd /etc
cp sysctl.conf sysctl.conf.ORIG
sed -i "s/net.ipv4.ip_forward = 0/net.ipv4.ip_forward = 1/" sysctl.conf
To change this value immediately without rebooting, run the following command:
sysctl -p /etc/sysctl.conf
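You can then read the running value back to make sure the change took effect:

```shell
# Should print: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward
```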
Next, we need to configure firewall rules to permit the various Eucalyptus components to communicate with each other. Since we are planning on using security groups in Eucalyptus, let’s start by disabling SELinux on the front-end machine as follows:
cd /etc/selinux
cp config config.ORIG
sed -i "s/SELINUX=permissive/SELINUX=disabled/" config
Let’s reboot the front-end machine at this point.
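After the reboot, you can confirm that SELinux is really off (getenforce ships with the SELinux tools on CentOS):

```shell
# Should print: Disabled
getenforce
```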
Next, we need to prep the two Nodes. We start by installing the Xen hypervisor and synchronizing the system clock on each Node as follows:
yum update
yum install xen ntp
ntpdate pool.ntp.org
I have the following versions of xen installed:
yum list xen
Loading mirror speeds from cached hostfile
* addons: mirror.ash.fastserv.com
* base: mirror.ubiquityservers.com
* extras: mirror.steadfast.net
* updates: hpc.arc.georgetown.edu
Installed Packages
xen.i386 3.0.3-94.el5_4.3 installed
Available Packages
xen.i386 3.0.3-105.el5 base
Once we have Xen installed, we need to configure it so that the hypervisor can be controlled via HTTP from localhost. We do this by editing the /etc/xen/xend-config.sxp file and then restarting the xend daemon as follows:
cd /etc/xen
cp xend-config.sxp xend-config.sxp.ORIG
sed -i "s/#(xend-http-server no)/(xend-http-server yes)/" xend-config.sxp
sed -i "s/#(xend-address localhost)/(xend-address localhost)/" xend-config.sxp
/etc/init.d/xend restart
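To confirm that xend is now answering HTTP requests, you can hit its web interface from the Node itself. Note that port 8000 is xend’s default HTTP port; adjust it if you have changed the port in xend-config.sxp:

```shell
# A non-empty response indicates the xend HTTP server is up on localhost
curl -s http://localhost:8000/xend/
```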
Next, we need to make sure the correct Xen-enabled kernel is started at boot. We do this by editing the GRUB configuration file (grub.conf) under /boot/grub. If grub.conf is not available, edit menu.lst instead (on some systems menu.lst is a symlink to grub.conf, on others it is a file in its own right).
In my case, /boot/grub/grub.conf is:
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/sda3
# initrd /initrd-version.img
#boot=/dev/sda
default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-164.15.1.el5xen)
root (hd0,0)
kernel /xen.gz-2.6.18-164.15.1.el5
module /vmlinuz-2.6.18-164.15.1.el5xen ro root=LABEL=/
module /initrd-2.6.18-164.15.1.el5xen.img
title CentOS (2.6.18-164.15.1.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-164.15.1.el5 ro root=LABEL=/
initrd /initrd-2.6.18-164.15.1.el5.img
title CentOS (2.6.18-164.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-164.el5 ro root=LABEL=/
initrd /initrd-2.6.18-164.el5.img
The default line is the one we want to change. Title entries are numbered starting at 0, so since we want title CentOS (2.6.18-164.15.1.el5xen) to be the default kernel, we set default to 0.
We can do this as follows:
cd /boot/grub
cp grub.conf grub.conf.ORIG
sed -i "s/default=1/default=0/" grub.conf
Next, we disable SELinux on the Node machines as follows:
cd /etc/selinux
cp config config.ORIG
sed -i "s/SELINUX=permissive/SELINUX=disabled/" config
Let’s reboot both Node machines at this point. We are now ready to proceed with the installation of Eucalyptus.
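Once each Node is back up, a quick way to confirm that the Xen-enabled kernel was selected at boot is to check the running kernel version:

```shell
# Should print a kernel name ending in "xen", e.g. 2.6.18-164.15.1.el5xen
uname -r
```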
You could also choose to install Eucalyptus via yum, which is easier, but in this post we will install from the downloaded RPM packages.
You can download Eucalyptus from here. I picked the 32-bit CentOS 5 rpms that come bundled in a gzip compressed tar file.
Note: Different Eucalyptus components need to be installed on the front-end and each of the Node machines. The aforementioned tar.gz file contains all Eucalyptus components though. Therefore download it once on the front-end and then copy this file over to each of the Node machines.
Install Eucalyptus on the front-end
Once you have downloaded Eucalyptus (in my case, eucalyptus-1.6.2-centos-i386.tar.gz) on the front-end, untar it to root’s home folder /root.
tar zxvf eucalyptus-1.6.2-centos-i386.tar.gz
cd eucalyptus-1.6.2-centos-i386
We are ready to install. Let’s start by installing the 3rd-party dependency RPMs included in the eucalyptus-1.6.2-rpm-deps-i386 directory. Install all the rpms in this directory as follows:
cd eucalyptus-1.6.2-rpm-deps-i386
rpm -Uvh aoetools-21-1.el4.i386.rpm euca-axis2c-1.6.0-1.i386.rpm euca-rampartc-1.3.0-1.i386.rpm vblade-14-1mdv2008.1.i586.rpm groovy-1.6.5-1.noarch.rpm vtun-3.0.2-1.el5.rf.i386.rpm lzo2-2.02-3.el5.rf.i386.rpm
cd ..
Note: the above "rpm -Uvh ..." command might fail with an error about "Failed dependencies ... java-sdk > 1.6.0 is needed ...". To get past this error, re-run the above rpm -Uvh command with --nodeps. The error occurs because rpm looks for OpenJDK during installation, but we have installed Sun Java instead. Adding --nodeps will get us past this error message. Don’t worry, the Eucalyptus components will start up fine when the time comes to run them.
Next, let’s install the Cloud Controller, Walrus, Cluster Controller, Storage Controller, and a few other dependencies on the front-end machine as follows:
rpm -Uvh eucalyptus-1.6.2-1.i386.rpm eucalyptus-common-java-1.6.2-1.i386.rpm eucalyptus-cloud-1.6.2-1.i386.rpm eucalyptus-walrus-1.6.2-1.i386.rpm eucalyptus-sc-1.6.2-1.i386.rpm eucalyptus-cc-1.6.2-1.i386.rpm eucalyptus-gl-1.6.2-1.i386.rpm
Now let’s move on to installing the Eucalyptus components on the Nodes.
Install Eucalyptus on the Nodes
First, copy (or download) the Eucalyptus tar.gz file on each Node. Untar it to root’s home folder /root.
Note: The steps in this section need to be performed on each Node (in my case on each of the two Nodes).
Let’s begin by installing a few 3rd-party dependency RPMs.
tar zxvf eucalyptus-1.6.2-centos-i386.tar.gz
cd eucalyptus-1.6.2-centos-i386
cd eucalyptus-1.6.2-rpm-deps-i386
rpm -Uvh aoetools-21-1.el4.i386.rpm euca-axis2c-1.6.0-1.i386.rpm euca-rampartc-1.3.0-1.i386.rpm
cd ..
Next, we install the Node Controller (and a couple of dependencies) on each Node as follows:
rpm -Uvh eucalyptus-1.6.2-1.i386.rpm eucalyptus-gl-1.6.2-1.i386.rpm eucalyptus-nc-1.6.2-1.i386.rpm
Next, confirm that the user eucalyptus can connect with the hypervisor through libvirt.
su eucalyptus -c "virsh list"
The output of the above command should look something like:
 Id Name                 State
----------------------------------
  0 Domain-0             running
Note: If you don’t have libvirt installed/running on the Nodes, you could install it: yum install libvirt
That’s it with the installation!
You are now ready to start Eucalyptus up.
SSH to the front-end machine and start the Cluster Controller and Cloud Controller as follows:
/etc/init.d/eucalyptus-cc start
/etc/init.d/eucalyptus-cloud start
Run ps command to confirm Eucalyptus is running on the front-end:
ps auxww | grep euca
500 30500 0.3 4.7 1103496 48552 ? S May13 33:14 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 30501 0.3 4.7 1103496 48704 ? S May13 33:26 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 30502 0.3 7.3 1136720 75808 ? R May13 33:22 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 30503 0.3 3.9 1103484 40232 ? S May13 32:56 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 30504 0.3 5.0 1103568 51416 ? S May13 32:55 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
root 30586 0.0 0.0 1852 224 ? Ss May13 0:00 eucalyptus-cloud --remote-dns --disable-iscsi -h / -u eucalyptus --pidfile //var/run/eucalyptus/eucalyptus-cloud.pid -f -L console-log
500 30587 6.5 39.3 937236 403344 ? Sl May13 569:24 eucalyptus-cloud --remote-dns --disable-iscsi -h / -u eucalyptus --pidfile //var/run/eucalyptus/eucalyptus-cloud.pid -f -L console-log
500 30840 0.1 0.4 1137232 4824 ? S May13 12:53 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 31612 0.3 3.9 1103636 40556 ? S May13 32:50 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 31676 0.3 3.9 1103568 40400 ? S May13 33:09 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 31678 0.3 4.7 1103636 48628 ? S May13 32:54 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
Next, SSH to each Node and start Node Controller as follows:
/etc/init.d/eucalyptus-nc start
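If the Node Controller does not seem to come up, its log file is the first place to look. The path below is the default NC log location for a Eucalyptus 1.6.x install; adjust it if you changed the log settings in eucalyptus.conf:

```shell
# Tail the Node Controller log for startup errors
tail -n 20 /var/log/eucalyptus/nc.log
```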
Confirm eucalyptus is running on the Nodes:
ps auxww | grep euca
500 20639 0.9 10.2 80688 50904 ? Sl May13 78:15 /usr/sbin/httpd -f //etc/eucalyptus/httpd-nc.conf
Registering Eucalyptus components
Now that you have started all components, you will need to register them so that they can talk to each other.
SSH to the front-end machine (in my case, 192.168.0.114) and run the following commands:
euca_conf --register-walrus 192.168.0.114
euca_conf --register-cluster rosh-cluster1 192.168.0.114
euca_conf --register-sc rosh-cluster1 192.168.0.114
where,
192.168.0.114 – is the IP address of my front-end machine which has CLC, Walrus, CC and SC installed/running. Replace this with the IP address of your front-end machine in all the above commands.
rosh-cluster1 – is the cluster name that I used. Replace it with your own cluster name.
Next, we need to register the 2 Nodes. On the front-end machine, run the following command:
euca_conf --register-nodes "192.168.0.19 192.168.5.7"
where,
192.168.0.19, 192.168.5.7 – are the 2 Nodes in my case. Replace the above IP addresses with the IP addresses of your Nodes. Add additional Nodes separated with a space.
You can verify that the Nodes are registered by checking that the value of the NODES variable in the eucalyptus.conf file on the front-end reflects the Node IP addresses added via the euca_conf --register-nodes command above. In my case:
grep NODES /etc/eucalyptus/eucalyptus.conf
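With the two Nodes from this walkthrough registered, the grep above should return a line like the following. This is a sketch against a hypothetical sample file so it can be run anywhere; on the front-end you would grep the real /etc/eucalyptus/eucalyptus.conf, and your IP addresses will differ:

```shell
# Hypothetical sample of the relevant line from /etc/eucalyptus/eucalyptus.conf
cat > /tmp/eucalyptus.conf.sample <<'EOF'
NODES="192.168.0.19 192.168.5.7"
EOF

# Same check as above, run against the sample file
grep NODES /tmp/eucalyptus.conf.sample
```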
We are done with registering the Eucalyptus components.
We are now ready to perform some quick configuration.
Using a browser, browse to https://<front-end-ip-address>:8443. In my case, https://192.168.0.114:8443. You will get a warning page stating that the “site’s security certificate is not trusted”. The browser shows this warning because Eucalyptus uses a self-signed certificate that has not been verified by a third party the browser trusts. Accept the certificate and you will be prompted for a user_id/password. Enter admin for both.
Once you have logged in for the first time, you will be asked to change the password, set the admin email address, etc. Enter the relevant details and hit “Submit”.
On the “Configuration” web page you will see Cloud Configuration, Walrus Configuration, Clusters, etc. These should all be pre-populated. You could make changes to the configurations if you wish. I left these unchanged for now.
Next, browse to the “Credentials” web page and click “Download Credentials”. Save the “euca2-admin-x509.zip” file to a directory. You will need these credentials when you use client tools such as euca2ools to manage virtual machines, images, etc.
Create a .euca folder and unzip the contents of this file in this folder. Run the following command from under .euca folder:
unzip euca2-admin-x509.zip
Once you have unzipped the contents, you will find a .eucarc file that exports some variables. The EC2_URL in this case will point to your front-end machine. In my case, 192.168.0.114.
cat .eucarc
export S3_URL=http://192.168.0.114:8773/services/Walrus
export EC2_URL=http://192.168.0.114:8773/services/Eucalyptus
…
…
Before you run any client tools, you will need to source this file.
Testing our Eucalyptus install
To keep things simple and quickly test our Eucalyptus installation, download the Amazon EC2 API Tools. Unzip the downloaded ec2-api-tools.zip under the .euca folder that you created in the “First-time Configuration” section.
unzip ec2-api-tools.zip
Next, source the .eucarc file and run the ec2-describe-availability-zones command provided by the ec2-api-tools. From the .euca folder, run the following commands:
cd .euca
source .eucarc
cd ec2-api-tools-1.3-46266/bin
ec2-describe-availability-zones verbose
You should see output similar to the following:
AVAILABILITYZONE rosh-cluster1 192.168.0.114
AVAILABILITYZONE |- vm types free / max cpu ram disk
AVAILABILITYZONE |- m1.small 0004 / 0004 1 128 2
AVAILABILITYZONE |- c1.medium 0004 / 0004 1 256 5
AVAILABILITYZONE |- m1.large 0002 / 0002 2 512 10
AVAILABILITYZONE |- m1.xlarge 0000 / 0000 2 1024 20
AVAILABILITYZONE |- c1.xlarge 0000 / 0000 4 2048 20
where,
rosh-cluster1 – is the cluster I registered using euca_conf and in my case, it corresponds to the Cluster Controller running on my front-end machine (192.168.0.114)
If you see something like the above, give yourself a pat on the back!
You are now ready to bundle images and create instances from those images on your own private infrastructure cloud!
Related Articles:
Configuring your private cloud