
Eucalyptus: Configuring your private cloud to resemble Amazon EC2


You can reduce your hardware infrastructure expenditure by using Eucalyptus to efficiently run and manage your virtual machines on existing hardware. This in turn leads to greater energy savings: fewer physical machines means less power needed to run the hardware.

In my previous post on setting up a private cloud, we looked at getting the Eucalyptus "Infrastructure-as-a-Service" platform installed and running. If you recall, we aimed to keep things simple but still practical enough that you could use the series as a reference guide to building and running a private cloud. In this post we will tweak the default configuration, primarily changing the networking mode that is enabled out of the box to something closer to Amazon EC2.



Networking in Eucalyptus

Eucalyptus comes with four networking modes:

  • SYSTEM
  • STATIC
  • MANAGED
  • MANAGED-NOVLAN
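
All four modes are selected via the VNET_MODE option in eucalyptus.conf. As we will see in the configuration summaries later in this post, an out-of-the-box install ships with:

VNET_MODE="SYSTEM"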



SYSTEM networking mode

SYSTEM networking mode is the default "no-frills" networking offered as part of an out-of-the-box Eucalyptus install.
In this networking mode, Eucalyptus relies on the existence of a DHCP server (not controlled by Eucalyptus) located on the LAN. Virtual machines obtain their IP addresses from this DHCP server the same way other machines on the LAN (such as your desktop, laptop, etc.) do.
In this mode, you cannot create VLANs or isolate network traffic. Each VM is assigned only one IP address, namely the one obtained from the DHCP server set up by your administrator.
This mode is useful when you are just getting started with Eucalyptus. If you have only a single machine (server/laptop/desktop) to try out Eucalyptus, then SYSTEM (or STATIC) networking mode is the way to go.

As described in my previous post, I had set up Eucalyptus with 1 Cluster Controller (CC), 2 Node Controllers (NC), and SYSTEM networking mode. We will now tweak this to use MANAGED mode.



MANAGED networking mode

This mode closely resembles the networking setup of the Amazon EC2 cloud in that you can define:

  • a large private VLAN from which VMs obtain IP addresses
  • a pool of public IP addresses that can be assigned to VMs, similar to Amazon's elastic IP addresses
  • security groups, in which users define ingress rules that apply to the VMs running within that group (see the example after this list)
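
For instance, once the cloud is up, ingress rules are managed with the same Amazon EC2 API tools we will use later in this post. A hypothetical example (the group and rule below are illustrative, not something we configure here) that opens SSH access to the default security group:

ec2-authorize default -P tcp -p 22 -s 0.0.0.0/0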

In this mode, Eucalyptus maintains a DHCP server, fully manages the local VM instance network and provides all the networking features Eucalyptus currently supports.

Before we jump into configuring Eucalyptus for MANAGED networking, there are some requirements that need to be met. Namely:

  • a range of IP addresses that is unused on the network must be available; these IP addresses will be used to create private VLANs
  • any switch ports that the Eucalyptus components are connected to must allow and forward VLAN-tagged packets
  • either no firewall is running on the Cluster Controller, or the firewall is compatible with the dynamic changes Eucalyptus will make to the front-end netfilter rules



Checklist #1: Available range of IP addresses for private VLAN

Before you start configuring Eucalyptus for MANAGED networking mode, you need to find a range of IP addresses that is unused on the network. We will configure Eucalyptus to use this range to create a private VLAN.

When Eucalyptus boots a virtual machine instance it will assign the instance a private IP address from this range. This is similar to the private IP address assigned to an instance running on Amazon EC2.

In my case, I have 10.10.0.0 – 10.10.255.255 unused.
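
There is no foolproof way to confirm that a range is unused, but a quick spot check is to ping a few addresses from the candidate range and confirm that nothing answers. A rough sketch (it assumes hosts on your LAN respond to ICMP; the sampled addresses are illustrative):

for ip in 10.10.1.1 10.10.1.2 10.10.100.1; do
    ping -c 1 -W 1 $ip > /dev/null 2>&1 && echo "$ip is IN USE" || echo "$ip appears free"
done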



Checklist #2: Allow/forward VLAN tagged packets

Next we need to verify that the local network will allow/forward VLAN-tagged packets between the machines running Eucalyptus components.
To test this, we will create virtual ethernet devices on the front-end and the Nodes, assign them IP addresses from the above range, and then attempt to ping them.

Let's start with the front-end, 192.168.0.114 in my case (see the post on Setting up a private cloud using Eucalyptus for the list of machines involved).

vconfig add eth0 10

where eth0 is the value of VNET_PRIVINTERFACE in eucalyptus.conf on my front-end. The command reports:

Added VLAN with VID == 10 to IF -:eth0:-

We can verify we created a VLAN device on eth0 by checking /proc/net/vlan/config

cat /proc/net/vlan/config
VLAN Dev name | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
eth0.10 | 10 | eth0

We could also run the following command to verify that the VLAN device was created

ip a sh eth0.10
4: eth0.10@eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
link/ether 00:01:80:65:d2:5b brd ff:ff:ff:ff:ff:ff

Next, let's pick an IP from the "Available range of IP addresses for private VLAN" (10.10.0.0 – 10.10.255.255 in my case), say 10.10.1.2:

ifconfig eth0.10 10.10.1.2 up

Let’s verify that the virtual ethernet device eth0.10 is up

ip a sh eth0.10
4: eth0.10@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 00:01:80:65:d2:5b brd ff:ff:ff:ff:ff:ff
inet 10.10.1.2/8 brd 10.255.255.255 scope global eth0.10
inet6 fe80::201:80ff:fe65:d25b/64 scope link
valid_lft forever preferred_lft forever

Excellent. Now let's do the same on the Nodes, with the exception that we will pick different IP addresses, say 10.10.1.3 and 10.10.1.5, for my two Nodes, 192.168.0.19 and 192.168.5.7.

For the sake of brevity, I’ve detailed the steps on one of my Nodes.

vconfig add eth0 10

where eth0 is the value of VNET_PRIVINTERFACE (and VNET_PUBINTERFACE) in eucalyptus.conf on my Nodes. The command reports:

Added VLAN with VID == 10 to IF -:eth0:-

Verify that the virtual ethernet device has been created on the Node:

cat /proc/net/vlan/config
VLAN Dev name | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
eth0.10 | 10 | eth0

Next, assign the virtual ethernet device eth0.10 the IP address 10.10.1.3 and bring it up:

ifconfig eth0.10 10.10.1.3 up
ip a sh eth0.10
14: eth0.10@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 00:01:80:66:18:78 brd ff:ff:ff:ff:ff:ff
inet 10.10.1.3/8 brd 10.255.255.255 scope global eth0.10
inet6 fe80::201:80ff:fe66:1878/64 scope link
valid_lft forever preferred_lft forever

Now comes the test. From the above Node, let’s ping the front-end’s virtual ethernet device 10.10.1.2:

ping 10.10.1.2
PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
64 bytes from 10.10.1.2: icmp_seq=1 ttl=64 time=2.80 ms
64 bytes from 10.10.1.2: icmp_seq=2 ttl=64 time=0.691 ms
64 bytes from 10.10.1.2: icmp_seq=3 ttl=64 time=1.13 ms
64 bytes from 10.10.1.2: icmp_seq=4 ttl=64 time=0.576 ms
64 bytes from 10.10.1.2: icmp_seq=5 ttl=64 time=1.01 ms
64 bytes from 10.10.1.2: icmp_seq=6 ttl=64 time=0.326 ms
64 bytes from 10.10.1.2: icmp_seq=7 ttl=64 time=0.901 ms
64 bytes from 10.10.1.2: icmp_seq=8 ttl=64 time=0.344 ms
64 bytes from 10.10.1.2: icmp_seq=9 ttl=64 time=0.788 ms
64 bytes from 10.10.1.2: icmp_seq=10 ttl=64 time=0.228 ms
64 bytes from 10.10.1.2: icmp_seq=11 ttl=64 time=0.672 ms
64 bytes from 10.10.1.2: icmp_seq=12 ttl=64 time=1.11 ms
64 bytes from 10.10.1.2: icmp_seq=13 ttl=64 time=0.565 ms

--- 10.10.1.2 ping statistics ---
13 packets transmitted, 13 received, 0% packet loss, time 12002ms
rtt min/avg/max/mdev = 0.228/0.858/2.807/0.629 ms

Next, from the front-end machine, 192.168.0.114, let’s ping the Node’s virtual ethernet device, 10.10.1.3

ping 10.10.1.3
PING 10.10.1.3 (10.10.1.3) 56(84) bytes of data.
64 bytes from 10.10.1.3: icmp_seq=1 ttl=64 time=2.24 ms
64 bytes from 10.10.1.3: icmp_seq=2 ttl=64 time=1.33 ms
64 bytes from 10.10.1.3: icmp_seq=3 ttl=64 time=0.748 ms
64 bytes from 10.10.1.3: icmp_seq=4 ttl=64 time=1.15 ms

--- 10.10.1.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.748/1.369/2.242/0.547 ms

Super cool! This proves that my switch allows/forwards VLAN-tagged packets. If your results do not resemble mine, your switch probably needs to be configured to allow/forward VLAN-tagged packets; a little googling on your switch model may help.

For cleanliness' sake, you can now remove the test VLAN devices. Do this as follows on both the front-end and the Nodes:

ip link set eth0.10 down
vconfig rem eth0.10
Removed VLAN -:eth0.10:-
cat /proc/net/vlan/config
VLAN Dev name | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
ip a sh eth0.10
Device "eth0.10" does not exist.



Checklist #3: Firewall configuration on front-end

Finally in the list of things to check, we need to make sure the firewall on the front-end does not interfere with Eucalyptus, which will dynamically update the nat and filter rules. In my case, iptables on the front-end reveals the following:

iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
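
If your front-end instead shows restrictive rules, one option, assuming you accept the security trade-off of an open front-end firewall, is to flush the rules and persist the empty ruleset. On my CentOS system that would look like the following (adapt to your distribution):

iptables -F
iptables -t nat -F
service iptables save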

Ok. Now we are ready to proceed with the MANAGED networking configuration.



Front-end configuration – MANAGED networking mode

Presently to configure Eucalyptus networking, you need to edit eucalyptus.conf. In my installation, this file is located under /etc/eucalyptus.

All of the options that we plan on configuring are located under the "Networking options" section in eucalyptus.conf, namely the options starting with "VNET_".

We first configure VNET_PRIVINTERFACE and VNET_PUBINTERFACE. VNET_PRIVINTERFACE should be set to the ethernet device that is attached to the same physical ethernet as the Nodes. In my case, eth0:

VNET_PRIVINTERFACE="eth0"

Next, if your front-end has a second ethernet device, say eth1, that is used to access the public network, you could set VNET_PUBINTERFACE to that. In my case, I have only one ethernet device, eth0, so I set VNET_PUBINTERFACE to eth0:

VNET_PUBINTERFACE="eth0"

Ignore the VNET_BRIDGE since it is only valid for the Nodes.

In the MANAGED configuration, Eucalyptus maintains a DHCP server that it uses to hand out IP addresses to virtual machine instances, so it needs the location of the DHCP daemon. In my case this is /usr/sbin/dhcpd, and on my CentOS system it is configured to run as the root user. I therefore leave VNET_DHCPUSER commented out, since by default Eucalyptus sets up the DHCPD configuration files/directories to be owned by root:

VNET_DHCPDAEMON="/usr/sbin/dhcpd"
#VNET_DHCPUSER="root"

If your DHCP daemon is set to run as a non-root user (for example, on Ubuntu it runs as the dhcpd user), then un-comment VNET_DHCPUSER and update the value accordingly.
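
For example, on Ubuntu that would look like:

VNET_DHCPUSER="dhcpd"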

Next, since our objective is to use MANAGED networking, comment out VNET_MODE="SYSTEM" (the out-of-the-box networking mode) and un-comment the line VNET_MODE="MANAGED".

Next, let's set the values for VNET_SUBNET and VNET_NETMASK. In my case, per "Checklist #1: Available range of IP addresses for private VLAN", I define these values as follows:

VNET_SUBNET="10.10.0.0"
VNET_NETMASK="255.255.0.0"

Update your values accordingly. This makes 65536 (256 * 256) IP addresses available to Eucalyptus to assign as private IP addresses to virtual machine instances.
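
For comparison, a hypothetical smaller setup using a /24 subnet would give Eucalyptus only 256 private addresses (these values are illustrative, not part of my configuration):

VNET_SUBNET="10.20.30.0"
VNET_NETMASK="255.255.255.0"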

Next, I set the number of IP addresses allowed per network (security group per user) to 32, i.e. the option VNET_ADDRSPERNET:

VNET_ADDRSPERNET="32"

The setting above allows for 2048 (65536 / 32) networks to be active simultaneously. Depending on your VNET_SUBNET, VNET_NETMASK, and VNET_ADDRSPERNET values, your number of concurrently active networks will differ from mine.

Also, in my case, if I end up having, say, 100 users, then each user can have a maximum of 20 networks (2048 / 100 ≈ 20) in operation at any given point in time.

Next, we set VNET_DNS to the same DNS server used by my front-end machine. You can find your DNS server by looking in /etc/resolv.conf:

cat /etc/resolv.conf
; generated by /sbin/dhclient-script
search wave.local
nameserver 192.168.0.2

So I set VNET_DNS with:

VNET_DNS="192.168.0.2"

Next, you will want to assign public IP addresses to your instances, very much like Amazon EC2 instances. This gives users the ability to log into their instances from outside the cluster/front-end. But first you must find a set of public IP addresses that are not in use.

Note: Talk to your LAN administrator at this point to see if they can give you a set of IP addresses (a contiguous range is ideal) that Eucalyptus can assign to instances at boot or dynamically at instance run time.

The public IP addresses you pick must be capable of being assigned to the front-end. In my case, I have 192.168.3.1 – 192.168.3.255 available on my LAN. I confirm that an IP address, 192.168.3.1 for example, can be assigned to the front-end NIC eth0 as follows:

ip a add 192.168.3.1/32 dev eth0
ping 192.168.3.1
PING 192.168.3.1 (192.168.3.1) 56(84) bytes of data.
64 bytes from 192.168.3.1: icmp_seq=1 ttl=64 time=0.099 ms
64 bytes from 192.168.3.1: icmp_seq=2 ttl=64 time=0.054 ms
64 bytes from 192.168.3.1: icmp_seq=3 ttl=64 time=0.062 ms

--- 192.168.3.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2010ms
rtt min/avg/max/mdev = 0.054/0.071/0.099/0.021 ms
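
Once verified, remove the test address so that it remains free for Eucalyptus to use (a small cleanup step, analogous to the VLAN cleanup earlier):

ip a del 192.168.3.1/32 dev eth0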

Perfect. I now configure VNET_PUBLICIPS as follows:

VNET_PUBLICIPS="192.168.3.1-192.168.3.31"

Note: I've arbitrarily configured the range to allow for only 31 public IP addresses.

If you have individual IP addresses instead of a range, separate each IP address with a space, as shown below.
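
A hypothetical example with individual addresses (the addresses are illustrative):

VNET_PUBLICIPS="192.168.3.1 192.168.3.5 192.168.3.9"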

Next come VNET_CLOUDIP and VNET_LOCALIP. Since my Cloud Controller and Cluster Controller are running on the same machine, 192.168.0.114, I leave VNET_CLOUDIP commented out.
Also, since I'm running a single Cluster Controller, I leave VNET_LOCALIP commented out as well.

#VNET_LOCALIP="your-public-interface's-ip"
#VNET_CLOUDIP="your-cloud-controller's-ip"

If your installation has multiple Cluster Controllers, set VNET_LOCALIP on each Cluster Controller to the IP address at which all the other Cluster Controllers can reach it.
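
For example, in a hypothetical multi-cluster setup where this Cluster Controller is reachable by its peers at 192.168.0.114, the setting would be:

VNET_LOCALIP="192.168.0.114"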

That's all for configuring Eucalyptus to use MANAGED networking mode on the front-end. To summarize, my settings on the front-end are as follows:

VNET_PUBINTERFACE="eth0"
VNET_PRIVINTERFACE="eth0"
...
...
VNET_DHCPDAEMON="/usr/sbin/dhcpd"
#VNET_DHCPUSER="root"
...
...
VNET_MODE="MANAGED"
VNET_SUBNET="10.10.0.0"
VNET_NETMASK="255.255.0.0"
VNET_DNS="192.168.0.2"
VNET_ADDRSPERNET="32"
VNET_PUBLICIPS="192.168.3.1-192.168.3.31"
#VNET_LOCALIP="your-public-interface's-ip"
#VNET_CLOUDIP="your-cloud-controller's-ip"
...
...
#VNET_MODE="SYSTEM"



Node configuration – MANAGED networking mode

Next, we need to configure the Nodes for MANAGED networking mode. In my case, my Nodes are 192.168.0.19 and 192.168.5.7.

Again, all the options that we plan on configuring are located under the "Networking options" section in eucalyptus.conf. We are mainly concerned with the options VNET_PRIVINTERFACE, VNET_PUBINTERFACE, and VNET_MODE.

Let's start with VNET_PRIVINTERFACE and VNET_PUBINTERFACE. Both of these options should be set to the ethernet device that is attached to the same physical ethernet as the Cluster Controller. In my case, eth0:

VNET_PUBINTERFACE="eth0"
VNET_PRIVINTERFACE="eth0"

We also un-comment the line VNET_MODE="MANAGED" and comment out VNET_MODE="SYSTEM".

That's all for configuring Eucalyptus to use MANAGED networking mode on the Nodes. To summarize, my settings on the Nodes are as follows:

VNET_PUBINTERFACE="eth0"
VNET_PRIVINTERFACE="eth0"
...
...
VNET_MODE="MANAGED"
#VNET_MODE="SYSTEM"

Now we are ready to restart Eucalyptus on both front-end and Nodes.



Restart Eucalyptus with MANAGED networking mode

On the front-end, first do a “clean” start of the Cluster Controller as follows:

/etc/init.d/eucalyptus-cc cleanstart
Starting Eucalyptus cluster controller: done.

Note: A clean start of the Cluster Controller is necessary when you update any Eucalyptus settings for the changes to take effect.

Next, start the Cloud Controller:

/etc/init.d/eucalyptus-cloud start
Starting Eucalyptus services: walrus sc cloud done.

Verify that Eucalyptus (Cloud Controller, Cluster Controller) is started on the front-end:

ps auxww | grep euca | grep -v grep
root 3583 0.0 0.1 9840 1484 ? Ss 10:49 0:00 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 3584 0.0 0.3 13480 3160 ? S 10:49 0:00 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 3585 0.0 0.3 13480 3160 ? S 10:49 0:00 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 3586 0.0 0.3 13480 3160 ? S 10:49 0:00 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 3587 0.0 0.3 13480 3160 ? S 10:49 0:00 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
500 3588 0.0 0.3 13480 3160 ? S 10:49 0:00 /usr/sbin/httpd -f //etc/eucalyptus/httpd-cc.conf
root 3667 0.0 0.0 1852 220 ? Ss 10:49 0:00 eucalyptus-cloud --remote-dns --disable-iscsi -h / -u eucalyptus --pidfile //var/run/eucalyptus/eucalyptus-cloud.pid -f -L console-log
root 3668 124 16.0 812212 164964 ? Rl 10:49 0:29 eucalyptus-cloud --remote-dns --disable-iscsi -h / -u eucalyptus --pidfile //var/run/eucalyptus/eucalyptus-cloud.pid -f -L console-log

Moving on to the Nodes, start the Node Controllers as follows:

/etc/init.d/eucalyptus-nc start
You should have at least 32 loop devices
Starting Eucalyptus services:
Enabling bridge netfiltering for eucalyptus.
done.

Verify that Eucalyptus (Node Controller) is started on the Nodes:

ps auxww | grep euca | grep -v grep
root 4375 0.0 0.1 9856 1484 ? Ss 10:55 0:00 /usr/sbin/httpd -f //etc/eucalyptus/httpd-nc.conf
500 4376 0.0 0.3 15376 3448 ? S 10:55 0:00 /usr/sbin/httpd -f //etc/eucalyptus/httpd-nc.conf



Testing our Eucalyptus MANAGED networking

Recall that in our post on Setting up a private cloud using Eucalyptus, we installed the Amazon EC2 API Tools. Let's run a few commands to test Eucalyptus MANAGED networking, especially some of the settings, such as the public IP addresses, that we configured on the front-end:

ec2-describe-availability-zones verbose
[Deprecated] Xalan: org.apache.xml.res.XMLErrorResources_en_US
AVAILABILITYZONE rosh-cluster1 192.168.0.114
AVAILABILITYZONE |- vm types free / max cpu ram disk
AVAILABILITYZONE |- m1.small 0002 / 0004 1 128 2
AVAILABILITYZONE |- c1.medium 0002 / 0004 1 256 5
AVAILABILITYZONE |- m1.large 0001 / 0002 2 512 10
AVAILABILITYZONE |- m1.xlarge 0000 / 0000 2 1024 20
AVAILABILITYZONE |- c1.xlarge 0000 / 0000 4 2048 20

You will notice in the above output that it looks like I have some instances running (free < max). In fact, I did start an instance (using the ec2-run-instances command) so we could see MANAGED networking in action.

ec2-describe-instances
[Deprecated] Xalan: org.apache.xml.res.XMLErrorResources_en_US
RESERVATION r-352406B2 admin default
INSTANCE i-4BAA0834 emi-839A0EC7 192.168.3.1 10.10.1.2 running test2_key 0 m1.large 2010-05-26T18:20:47+0000 rosh-cluster1 eki-9065137F eri-E86014C4 monitoring-false

As you can see, my instance i-4BAA0834 has a public IP address of 192.168.3.1 and a private IP address of 10.10.1.2.

When Eucalyptus booted up my instance it assigned the instance 192.168.3.1 from the list of public IP addresses (see VNET_PUBLICIPS in section Front-end configuration – MANAGED networking mode). Eucalyptus also assigned the instance a private IP address 10.10.1.2 from the range of unused IP addresses (see VNET_SUBNET in section Front-end configuration – MANAGED networking mode).

Also, if you run ec2-describe-addresses you will see the public IP addresses (VNET_PUBLICIPS) that are in use and available.

ec2-describe-addresses
[Deprecated] Xalan: org.apache.xml.res.XMLErrorResources_en_US
ADDRESS 192.168.3.1 i-4BAA0834 (eucalyptus)
ADDRESS 192.168.3.10 nobody
ADDRESS 192.168.3.11 nobody
ADDRESS 192.168.3.12 nobody
ADDRESS 192.168.3.13 nobody
ADDRESS 192.168.3.14 nobody
ADDRESS 192.168.3.15 nobody
ADDRESS 192.168.3.16 nobody
ADDRESS 192.168.3.17 nobody
ADDRESS 192.168.3.18 nobody
ADDRESS 192.168.3.19 nobody
ADDRESS 192.168.3.2 nobody
ADDRESS 192.168.3.20 nobody
ADDRESS 192.168.3.21 nobody
ADDRESS 192.168.3.22 nobody
ADDRESS 192.168.3.23 nobody
ADDRESS 192.168.3.24 nobody
ADDRESS 192.168.3.25 nobody
ADDRESS 192.168.3.26 nobody
ADDRESS 192.168.3.27 nobody
ADDRESS 192.168.3.28 nobody
ADDRESS 192.168.3.29 nobody
ADDRESS 192.168.3.3 nobody
ADDRESS 192.168.3.30 nobody
ADDRESS 192.168.3.31 nobody
ADDRESS 192.168.3.4 nobody
ADDRESS 192.168.3.5 nobody
ADDRESS 192.168.3.6 nobody
ADDRESS 192.168.3.7 nobody
ADDRESS 192.168.3.8 nobody
ADDRESS 192.168.3.9 nobody



So now when Eucalyptus boots up instances, the instances will be assigned a private address and a public address, similar to Amazon EC2 instances.
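
You can also move public addresses between instances on the fly, much like EC2 elastic IPs. A hypothetical example (not run in this walkthrough), using the instance above and one of the unassigned addresses from the list:

ec2-allocate-address
# assuming the allocation returned 192.168.3.2:
ec2-associate-address -i i-4BAA0834 192.168.3.2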

So you are now running an Amazon EC2-like cloud in your datacenter using Eucalyptus!

In upcoming posts in this "Building a private cloud" series, we will look at how to bundle/register images, run instances, and, more importantly, how Platform-as-a-Service complements Infrastructure-as-a-Service.

