Openstack Nova in Maverick

Ubuntu Maverick was released yesterday. Big congrats to the Ubuntu team for another release well out the door.

As you may know, both Openstack storage (Swift) and compute (Nova) are available in the Ubuntu repositories. We haven’t made a proper release of Nova yet, so what’s in the archive is a development snapshot, but it’s in reasonably good shape. Swift, on the other hand, should be in very good shape and production-ready. I’ve worked mostly on Nova, so that’s what I’ll focus on.

So, to play with Nova in Maverick on a single machine, here are the instructions:

sudo apt-get install rabbitmq-server redis-server
sudo apt-get install nova-api nova-objectstore nova-compute \
                nova-scheduler nova-network euca2ools unzip

rabbitmq-server and redis-server are not declared as dependencies of the Nova packages, because they don’t need to live on the same host. In fact, as soon as you add the next compute node (or API node or whatever), you’ll want to use a remote RabbitMQ server and a remote database, too. But for our small experiment here, we need a RabbitMQ server and a Redis server on the local machine (it’s very likely that the final release of Nova will not require Redis, but for now, we need it).
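If you want to make sure both of them are actually up before moving on, a quick check like this should do it (rabbitmqctl needs root, and redis-cli should answer with PONG):

sudo rabbitmqctl status
redis-cli ping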

A quick explanation of the different components:

RabbitMQ
is a messaging system that implements AMQP. Basically, it’s a server that passes messages around between the other components that make up Nova.
nova-api
is the API server (I was shocked to learn this, too!). It implements a subset of the Amazon EC2 API. We’re working on adding the rest, but it takes time. It also implements a subset of the Rackspace API.
nova-objectstore
stores objects. It implements the S3 API. It’s quite crude. If you’re serious about storing objects, Swift is what you want. Really.
nova-compute
the component that runs virtual machines.
nova-network
the network worker. Depending on configuration, it may just assign IPs, or it could work as the gateway for a bunch of NATed VMs.
nova-scheduler
the scheduler (another shocker). When a user wants to run a virtual machine, they send a request to the API server. The API server asks the network worker for an IP and then passes off handling to the scheduler. The scheduler decides which host gets to run the VM. A quick way to check that all of these daemons are actually running follows below.
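This assumes the Maverick packages start each daemon as it is installed (they should); a simple process listing is enough to check:

pgrep -lf nova-

You should see one line for each of nova-api, nova-objectstore, nova-compute, nova-scheduler and nova-network.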

With everything installed (which should have been a breeze), you can create an admin user (I name mine “soren” for obvious reasons):

sudo nova-manage user admin soren

and create a project (also named soren) with the above user as the project admin:

sudo nova-manage project create soren soren

Now, you’ll want to get a hold of your credentials:

sudo nova-manage project zipfile soren soren

This yields a nova.zip in the current working directory. Unzip it:

unzip nova.zip

and source the rc file:

. novarc
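If you’re curious what that just did: novarc mostly exports credentials and endpoint URLs (EC2-style access and secret keys plus the API and object store URLs) for euca2ools to pick up. The exact variable names may differ between snapshots, but something along these lines will show what was set:

env | grep -E 'EC2|S3|NOVA'   # variable names may vary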

And now you’re ready to go!

Let’s just repeat all that in one go, shall we?

sudo apt-get install rabbitmq-server redis-server
sudo apt-get install nova-api nova-objectstore nova-compute \
                nova-scheduler nova-network euca2ools unzip
sudo nova-manage user admin soren
sudo nova-manage project create soren soren
sudo nova-manage project zipfile soren soren
unzip nova.zip
. novarc

That’s pretty much it. Your cloud is now up and running, you’ve created an admin user, and you’ve retrieved the corresponding credentials and put them in your environment.
This is not much fun without any VMs to run, so you need to add some images. We have some small images we use for testing that you can download here:

wget http://c2477062.cdn.cloudfiles.rackspacecloud.com/images.tgz

Extract that file:

tar xvzf images.tgz

This gives you a directory tree like this:

images
|-- aki-lucid
|   |-- image
|   `-- info.json
|-- ami-tiny
|   |-- image
|   `-- info.json
`-- ari-lucid
    |-- image
    `-- info.json

As a shortcut, you could just extract this directly into /var/lib/nova and fix the permissions appropriately. For the record, that would look roughly like the following (the target directory and the nova:nova ownership are assumptions based on the packaging, so double-check before relying on them):
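# the target directory and the nova:nova ownership are guesses; check how the package sets up /var/lib/nova
sudo tar xvzf images.tgz -C /var/lib/nova
sudo chown -R nova:nova /var/lib/nova/images

But to get the full experience, we’ll use euca-* to get these images uploaded properly. Each image gets bundled, uploaded to a bucket, and registered; the little awk dance below just captures the ID that euca-register prints: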

euca-bundle-image -i images/aki-lucid/image -p kernel --kernel true
euca-bundle-image -i images/ari-lucid/image -p ramdisk --ramdisk true
euca-upload-bundle -m /tmp/kernel.manifest.xml -b mybucket
euca-upload-bundle -m /tmp/ramdisk.manifest.xml -b mybucket
out=$(euca-register mybucket/kernel.manifest.xml)
[ $? -eq 0 ] && kernel=$(echo $out | awk -- '{ print $2 }') || echo $out

out=$(euca-register mybucket/ramdisk.manifest.xml)
[ $? -eq 0 ] && ramdisk=$(echo $out | awk -- '{ print $2 }') || echo $out

euca-bundle-image -i images/ami-tiny/image -p machine  --kernel $kernel --ramdisk $ramdisk
euca-upload-bundle -m /tmp/machine.manifest.xml -b mybucket
out=$(euca-register mybucket/machine.manifest.xml)
[ $? -eq 0 ] && machine=$(echo $out | awk -- '{ print $2 }') || echo $out
echo kernel: $kernel, ramdisk: $ramdisk, machine: $machine
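At this point you can check that all three registered correctly; euca-describe-images should list the kernel, the ramdisk, and the machine image along with their new IDs:

euca-describe-images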

Alright, so we have images!

Now, we just need a keypair:

euca-add-keypair mykey > mykey.priv
chmod 600 mykey.priv
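If you want to confirm that the keypair was stored, euca-describe-keypairs will list it:

euca-describe-keypairs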

Let’s run a VM!

euca-run-instances $machine --kernel $kernel --ramdisk $ramdisk -k mykey

This should respond with some info about the VM, among other things, the IP.
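The instance won’t be reachable the instant the command returns; if you want to watch it come up (or look the IP up again later), euca-describe-instances shows the current state of your instances:

euca-describe-instances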

In my case, it was 10.0.0.5:

ssh -i mykey.priv root@10.0.0.5

YAY!
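When you’re done playing, you can tear the instance down again. The instance ID is in the output of euca-run-instances and euca-describe-instances (i-00000001 below is just a placeholder):

euca-terminate-instances i-00000001   # substitute your own instance ID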

I’ll leave it to someone else to provide similar instructions for Swift.