Category Archives: Ubuntu

What Ubuntu Server *could* be

I’m glad Thierry started this discussion. About six months ago, when we were first beginning to talk about what to do in Jaunty, I sat down and wrote a bunch of notes that I meant to turn into a blog post. It never made it farther than an e-mail to a few people, but now that we’re sharing visions, I thought I’d post it.

Disclaimer: These are simply notes I wrote for myself. They’re not the outcome of a discussion, and they’re not a blessed strategy. They’re just my notes.

What is our profile? What sets us apart from the others?

If I’m brutally honest, I must admit that when I explain Ubuntu server
to people, it very often ends up something like: “Debian with a sane,
predictable release schedule. We take a snapshot of Debian at some
point, and apply some polish and tender loving, and we ship it.” (Note:
I wrote these notes 6 months ago, and this part is not quite true anymore,
but let’s just forget that for a little bit.)

Sure, we also add a few gadgets, gizmos, and widgets, but the type of
user who gets won over by that sort of thing alone is probably not the
kind of user we’re really interested in (in part because they’re
transient… If another distro comes up with another gizmo they suddenly
can’t live without, they’ll be out of here in no time).

We need some kind of profile. We need to do something differently from
others. Offer a different concept. Right now, we’re trying to beat the others
at their game. I’m not saying it can’t be done, but it’s a veritable
David vs. Goliath.

Debian provides us with a technically strong, dependable base, but
Debian is a solution to a problem we’re not trying to solve.

Ubuntu on the desktop took off with a bang with the Warty Warthog
release. It was an almost instant success. Why? Because it solved the
problems everyone was facing:

  • Easy to install
    • The install process was boiled down to as few questions as we
      could possibly get away with, in part by leaving out a lot of
      advanced options.
  • Lots of common hardware supported
    • Even restricted drivers. The idea was that a software stack
      consisting of all free software with a single binary blob to
      enable a wifi card or a graphics adapter is better than a software
      stack of all non-free software. For most users, these were (and
      are still) the only two viable choices.
  • A wide selection of software was pre-installed and ready to go.
    • All you needed to do was look around in the menus and you found
      the software you needed to get most of your work done. No need to
      look on the internet for “what software do you use instead of
      Word/Internet Explorer/MSN Messenger/Outlook on Linux?”

Essentially, it was all about “making the best of free software available”.

Now, is “making it available” still a problem on servers? Yes! Sure,
there’s lots of stuff we can’t do with an Ubuntu Server, but what if we
focus on what you *can* do, and make that very, very available in a way
that’s true to our UNIX heritage?

What would that require?

  • Easy to install
    • What are the common stumbling points for the installation process?
      • Example: Partitioning is difficult. You usually only get the one
        chance to get it right, and if it’s your first linux system, you
        won’t have a clue.
  • How can we fix them?
    • Example: Do their partitioning for them?
      • In ways that don’t limit our choices later on?
        • Example: Always make the disk a raid member where the raid set only
          has that one member. That way, it’s easier to add another
          member later.
        • Example: Always do LVM. Provide tools to easily move parts of the
          filesystem to a newly created logical volume (creating the lv
          and mkfs it in the process).
  • Lots of common hardware supported.
    • What server class hardware out there is unsupported?
    • Do we need to create a restricted driver set for servers?
  • A wide selection of software pre-installed and ready to go.
    • Perhaps not actually pre-installing them, but making sure that
      people are using “the right selection” of software some other way,
      perhaps by means of:

      • Better documentation
        • I’ve never read a book about Linux system administration and not
          thought that they were doing it all wrong. This is symptomatic:
          IMO, we’re quite good at pointing out when people are doing things
          wrong, but we fail to go out and define the One True Way[tm] to do
          things. Personally, I’m afraid I’ll make a mistake and people will
          wind up at a dead end.
  • Much better integration
    • Again, this stems from our failure to go out and define the One
      True Way[tm] of setting up our services and integrate them. This
      is something we inherited from Debian. I believe it needs to stop
      right now. Take dovecot, for instance. I don’t expect any
      half-serious deployment of dovecot to use the userdb and passdb
      backends that it’s configured to use by default, yet we leave the
      defaults that way. Why? Because Debian does it. Why do they do it?
      Because they’re trying to solve a different problem than we are.
      They want to provide an unbiased platform that does everything,
      for the relatively few people who know how to drive it. This is noble enough, but
      to dovecot, it means that it’s only as enterprise ready as the
      sysadmin can manage to set it up. We need to define what an
      Ubuntu based enterprise environment looks like and offer that in a
      packaged form for easy deployment. The benefits are numerous:

      • Knowing that a company uses an Ubuntu based network
        infrastructure currently tells you nothing. Defining these best
        practices will provide a baseline that’s recognisable by Ubuntu
        admins everywhere.
      • Hiring is easier (for companies looking for Ubuntu sysadmins).
        If an admin has Ubuntu experience there’s now a chance that
        he’ll actually be able to apply his knowledge directly.
      • Support is much easier when you can actually make assumptions
        about what people are using as their directory server, and how
        everything speaks together, because *we* defined it.
      • It paves the way for an Ubuntu System Administration
    • Etc.
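
As a rough illustration of the partitioning idea above, here’s a dry-run sketch in Python. The device, volume group, and size names are assumptions for illustration, not what the installer actually does:

```python
# Hypothetical sketch of the "single-member RAID + LVM by default" idea.
# Device names, volume names, and sizes are made-up examples.

def default_storage_plan(disk="/dev/sda1", dry_run=True):
    """Return the commands an installer might run to set up a one-member
    RAID1 set with LVM on top, so a second disk can be added and
    filesystems grown later without reinstalling."""
    commands = [
        # RAID1 with one active member and one "missing" slot makes
        # growing the set a matter of "mdadm --add" later on.
        ["mdadm", "--create", "/dev/md0", "--level=1",
         "--raid-devices=2", disk, "missing"],
        ["pvcreate", "/dev/md0"],
        ["vgcreate", "ubuntu-vg", "/dev/md0"],
        ["lvcreate", "-n", "root", "-L", "10G", "ubuntu-vg"],
        ["mkfs.ext3", "/dev/ubuntu-vg/root"],
    ]
    if not dry_run:
        import subprocess
        for cmd in commands:
            subprocess.check_call(cmd)
    return commands
```

Because the plan is returned as a list before anything runs, the same commands could be shown to the user for confirmation first.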

What we should offer is:

  • Enterprise readiness out of the box.
    • Well defined interfaces (contracts, if you will) between components.
      • Example: If we were to decide that Ubuntu Server uses an ldap
        backend for storing mail aliases, we’d clearly document the exact
        query that would be run to fetch that info. If a user for
        whatever reason needed to extend the ldap schema, he’s allowed to do
        so and can expect everything to keep working as long as that query
        gives the same result. Likewise, the LDAP DIT will also be
        clearly documented, so that the user knows where it’s safe to add custom entries.
      • These contracts follow our freeze process. I think beta freeze
        would be an appropriate time to lock these down.
    • Simple tools (akin to the ones we already have) to manage these
      things. Home users or small businesses shouldn’t suffer because we
      decided to change the way things work.

      • E.g. if we decide to install an LDAP server and use that from nss
        and pam instead of passwd/shadow on each and every Ubuntu Server
        installation, adduser and such should keep working as they always have.
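
As a sketch of what such a contract could look like in practice, here’s a hypothetical documented alias lookup. The objectClass, attribute name, and base DN are all invented for illustration:

```python
# Hypothetical contract: mail aliases live under ou=aliases,dc=example,dc=com
# and are resolved with exactly this filter. As long as a query for an alias
# keeps returning the same result, admins are free to extend the schema.

ALIAS_BASE = "ou=aliases,dc=example,dc=com"

def alias_filter(alias):
    """Build the (hypothetical) documented LDAP filter for a mail alias."""
    # Escape the characters RFC 4515 gives special meaning in filters;
    # backslash must be escaped first so we don't re-escape our own output.
    for char, esc in (("\\", r"\5c"), ("*", r"\2a"), ("(", r"\28"), (")", r"\29")):
        alias = alias.replace(char, esc)
    return "(&(objectClass=mailAlias)(mailLocalAddress=%s))" % alias
```

The point is not the code but the promise: as long as a search with this filter under that base returns the same result, the rest of the DIT can be reshaped freely.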

Sorry if it’s a bit of a mess, but as you know, perfect is the enemy of good enough.

gtk-vnc and virt-viewer mozilla plugins

Another cool thing that’s new in Jaunty that I’ve never gotten around to blogging about is the fact that the virt-viewer and gtk-vnc packages in Ubuntu now provide mozilla-virt-viewer and mozilla-gtk-vnc, respectively.

This means you can now put something like

  <embed type="application/x-gtk-vnc"
    host="" port="5900">

or this:

  <embed type="application/x-virt-viewer"
    uri="qemu:///system" name="something">

in a web page and have access to virtual machines or other VNC servers directly in your browser.

I have a feeling this will spark some rather interesting web based management tools once it becomes more ubiquitous.

Announcing Eucalyptus

I’m very pleased to announce the availability of Eucalyptus in Ubuntu Jaunty Jackalope!

From the package description:

EUCALYPTUS is an open source service overlay that implements elastic
computing using existing resources. The goal of EUCALYPTUS is to allow
sites with existing clusters and server infrastructure to co-host an
elastic computing service that is interface-compatible with Amazon’s EC2.

Simply put: Eucalyptus gives you your very own EC2 in your own data center.

Being interface-compatible with EC2 means that anything you might already be doing
with EC2 you can now do with your local Eucalyptus instance.

There are three notable packages:

  • eucalyptus-cloud: the cloud controller. You will generally only have one of these. It provides Walrus (Eucalyptus’ S3 implementation) and is the part of Eucalyptus that users will talk to using the EC2 API.
  • eucalyptus-cc: the cluster controller. If you’re familiar with EC2, you can think of this as the master server for an availability zone. Most people will only have one of these.
  • eucalyptus-nc: the node controller. This is the component that instantiates your virtual machines (instances, in EC2 speak). You will install this on each of your servers that will be running virtual machines for Eucalyptus.

The quick start guide:

  • Install all three packages on a machine with plenty of available resources in terms of CPU, RAM, and disk space: sudo apt-get install eucalyptus-cloud eucalyptus-cc eucalyptus-nc
  • After a while (perhaps up to a minute or two, even on beefy servers), you should be able to access the admin interface on https://ip_or_hostname_of_your_server:8443/. (You must use https, not http).
  • Set up the admin user (should be self-explanatory)
  • Add the cluster controller in the configuration tab
  • From the command line, add the node to the cluster: “sudo euca_conf -addnode name_of_this_server” and follow the instructions it gives you.
  • At this point you should be ready to upload kernels, ramdisks, and filesystem images to your cloud. You can find a bit of information about how in the upstream documentation. It is not completely up-to-date with the version we have in Ubuntu, but it’s very helpful nonetheless.

A few notable differences between our packages and what you’ll see mentioned on the Eucalyptus website are that our version uses KVM as the default hypervisor and it also supports EBS. I expect the upstream documentation will be updated soon to reflect these cool new features.
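
Since Eucalyptus speaks the EC2 query API, a client needs nothing exotic to talk to it. Here’s a stdlib-only sketch that builds a signed (signature version 2) request URL; the endpoint path, port, and credentials are placeholders you’d replace with the ones from your own cloud:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def signed_query(action, host="localhost:8773", path="/services/Eucalyptus",
                 access_key="ACCESS", secret_key="SECRET", **extra):
    """Build a signed EC2 query-API GET URL (signature version 2 sketch)."""
    params = {
        "Action": action,
        "AWSAccessKeyId": access_key,
        "SignatureVersion": "2",
        "SignatureMethod": "HmacSHA256",
        "Version": "2008-12-01",
        "Timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    params.update(extra)
    # Canonical query string: keys sorted, values percent-encoded.
    qs = "&".join("%s=%s" % (quote(k, safe="-_.~"), quote(str(v), safe="-_.~"))
                  for k, v in sorted(params.items()))
    to_sign = "GET\n%s\n%s\n%s" % (host.lower(), path, qs)
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    return "http://%s%s?%s&Signature=%s" % (host, path, qs, quote(sig, safe="-_.~"))
```

Anything that can issue such a request against Amazon can, in principle, be pointed at your local cloud instead.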

A big “Thank you!” goes out to everyone who played a big part in this:

  • All the guys in the Eucalyptus group at UCSB: Chris, Dan, Neil, Graziano, Dmitrii, and Rich for creating Eucalyptus and making it awesome!
  • Thierry from our very own server team, for sorting out all the Java dependencies along with Chris from UCSB.
  • Chris Jones for lots of very helpful feedback from alpha testing this whole thing.


OpenNebula in Ubuntu

The Ubuntu server team intends to offer a set of new software packages related to cloud computing in Ubuntu 9.04 (Jaunty Jackalope), due to be released in April 2009. Most notably:

  • OpenNebula: provides a very convenient abstraction of computational resources, both local (in your own data center) and remote (on Amazon’s EC2, for instance).
  • Eucalyptus: provides an EC2-like cloud, so that you can set up your very own local EC2.

I’m pleased to announce that as of a couple of days ago, the first of these, OpenNebula, is now available.

There are five packages in total, of which only two are going to be of general interest:

  • opennebula

    This is the core of opennebula. It contains oned, mm_sched and
    everything else that you’d find in a regular OpenNebula installation.
    Additionally, it creates an ssh key pair for the oneadmin user, to
    ease setting up connections to other nodes.

  • opennebula-node

    As you may know if you’re a current OpenNebula user, OpenNebula doesn’t
    actually need any parts of OpenNebula installed on its nodes. All it
    needs is ssh access from the OpenNebula server and access to a hypervisor.
    Hence, this package installs the hypervisor packages (kvm and
    libvirt-bin) and prepares the oneadmin user.
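
Once both packages are in place, starting a VM is a matter of writing a template and handing it to the onevm command. The following is only a sketch; the image path and network name are made up, and the exact template syntax may differ between OpenNebula versions:

```
# Hypothetical VM template for the KVM hypervisor the packages set up.
NAME   = ubuntu-test
CPU    = 1
MEMORY = 512

DISK = [
  source   = "/var/lib/one/images/ubuntu.img",
  target   = "hda",
  readonly = "no" ]

NIC = [ network = "Small network" ]
```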

If you have any feedback, please don’t hesitate to shout, and also feel
free to report any bugs here:

A word of caution: Ubuntu 9.04 is still very much work in progress. Do
not use this in a production environment. If it breaks, you get to keep
both pieces. :)

Better uri defaults for virt-viewer

This has been bugging me for a looong time, but for some reason I only just now pulled myself together and did something about it:

Back in Hardy, I changed Ubuntu’s version of virsh (the command line utility for libvirt) to connect to qemu:///system if you had access to that, and fall back to qemu:///session if you didn’t. This saves you the trouble of adding -c qemu:///system to your command line (or setting VIRSH_DEFAULT_URI appropriately) every time you wanted to do something useful with virsh. The upstream default was (and still is, IIRC) to connect to xen:///, but that’s not really appropriate for us since we prefer kvm.

virt-viewer, however, never got the same attention. It still defaults to xen:///. Or rather: it did until an hour or so ago. Enjoy. Err… Make that Go me. :(

One-time passwords

Every once in a while (not too often) I leave the house without my trusty laptop or maybe I’m just in a place with no wifi available to me. In such cases I sometimes need access to one of my servers or perhaps just my e-mail, but I’ve always felt uncomfortable entering my password on strange computers, especially ones running Windows. You never know what kind of key loggers or other kinds of spyware they might be infected with. Every time I’ve done it, I’ve always changed my password on said servers the next time I’m using my own laptop which I quite trust. This procedure gets quite annoying. Enter OPIE.


OPIE is a free implementation of the S/KEY (one time password) specifications (RFC 1760 and RFC 2289). The idea is that each password is only usable once so it doesn’t matter if anyone grabs it as it’ll be useless when they try to use it.
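
The scheme is simple enough to sketch: hash the (lowercased) seed concatenated with your secret pass phrase, fold the 128-bit digest to 64 bits, and repeat the hash-and-fold step once per sequence number. This little Python sketch omits the six-word encoding that the real opiekey applies at the end:

```python
import hashlib

def fold(digest):
    """Fold a 16-byte MD5 digest to 8 bytes by XOR-ing the halves (RFC 2289)."""
    return bytes(a ^ b for a, b in zip(digest[:8], digest[8:]))

def skey_md5(passphrase, seed, sequence):
    """Compute the 64-bit one-time password for a given sequence number.
    opiekey additionally encodes this as six short English words."""
    key = fold(hashlib.md5((seed.lower() + passphrase).encode()).digest())
    for _ in range(sequence):
        key = fold(hashlib.md5(key).digest())
    return key
```

Because each password is one more fold-and-hash step down the chain, knowing password N tells an attacker nothing about password N-1, which is the next one the server will ask for.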

Setting it up was quite simple:

  • Install the opie-server package.
  • Add pam_opie to your pam configuration. If you haven’t tweaked your pam configuration at all, you can just copy /usr/share/doc/libpam-opie/examples/pam.d/common-auth to /etc/pam.d/common-auth. You’ll still be able to log in as you’ve always done it, but also with your shiny new one-time-password setup.
  • Now, as your regular user, run opiepasswd. You’ll see a prompt like this:
    Adding sh:
    You need the response from an OTP generator.
    New secret pass phrase:
            otp-md5 499 bi0617

    Now, in another terminal, run the command shown (in this case otp-md5 499 bi0617). This will look something like this:

    $ otp-md5 499 bi0617
    Using the MD5 algorithm to compute response.
    Reminder: Don't use opiekey from telnet or dial-in sessions.
    Enter secret pass phrase: 

    Your passphrase will have to be 10 characters or more. Now, enter that (“DARE LAID BUM TAB PI BURY”) into the opiepasswd session from before.

    Adding sh:
    You need the response from an OTP generator.
    New secret pass phrase:
            otp-md5 499 bi0617
            Response: DARE LAID BUM TAB PI BURY
    ID test OTP key is 499 bi0617

    And that’s it. From now on, you can log in using these one time passwords. Try su - yourusername and just press enter on the well-known “password:” prompt, and you’ll be prompted for a one-time-password.

All that’s left is a way to get your hands on these one-time passwords when you need them. You can either use the otp-md5 tool to generate a bunch of passwords, print them, and carry them with you (see the man page for otp-md5 for details on the -n option), or you can install a generator on your mobile phone (there are several options available) or your Nokia 770 if you are lucky enough to own one of those.


As I mentioned in the beginning, I needed this for webmail. As you might know, with PHP based webmail systems every page load means a new login to the IMAP server, and when your passwords are only good for one login, you’re not going to have a very enjoyable experience with that setup. But if we add imapproxy to the mixture, we’re back in business.

imapproxy was created to offload imap servers from these excessive logins, but at the same time, it solves our problem. When connecting for the first time, imapproxy passes your authentication credentials on to the imap server and, if you are successfully authenticated, it remembers your password for a configurable amount of time while keeping the connection to the imap server open… so the webmail app will keep using your (almost-but-not-quite-)one-time password to authenticate against imapproxy. Make sure you configure a relatively short cache_expiration_time so that the window of opportunity for an attacker is as narrow as possible. I’ve set it to 70 seconds since my webmail system refreshes every minute; 70 seconds leaves a bit of room for various delays.
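
imapproxy’s credential caching boils down to a small time-limited cache. This toy Python model (not imapproxy’s actual code) shows why a short cache_expiration_time matters:

```python
import time

class CredentialCache:
    """Toy model of imapproxy's login caching: a password is accepted for
    re-use only within `ttl` seconds of the first successful login."""

    def __init__(self, ttl=70):
        self.ttl = ttl
        self._entries = {}  # user -> (password, expiry timestamp)

    def remember(self, user, password, now=None):
        """Record a successful login; the password stays valid for ttl seconds."""
        now = time.time() if now is None else now
        self._entries[user] = (password, now + self.ttl)

    def check(self, user, password, now=None):
        """Accept the cached password only before its expiry time."""
        now = time.time() if now is None else now
        entry = self._entries.get(user)
        if entry is None:
            return False
        cached, expiry = entry
        return now < expiry and cached == password
```

With ttl=70 and a webmail refresh every 60 seconds, each refresh lands inside the window and re-arms it, while a stolen password goes stale at most 70 seconds after the last use.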

A word of warning: If you’re writing an e-mail in the webmail system, it might not be refreshing every minute anymore. So if you’re writing a long e-mail, do it in a different window; that way, when you click send, you won’t be rejected because the connection from your imapproxy to your imap server has been dropped and your cached password has expired.

The configuration file for imapproxy is heavily commented, but here’s mine (stripped of comments and blank lines) if you’re interested:

server_hostname localhost
cache_size 3072
listen_port 144
server_port 143
cache_expiration_time 70
proc_username nobody
proc_groupname nogroup
stat_filename /var/run/pimpstats
protocol_log_filename /var/log/imapproxy_protocol.log
syslog_facility LOG_MAIL
send_tcp_keepalives no
enable_select_cache no
foreground_mode no
force_tls no
enable_admin_commands no

Old packages on Launchpad

Launchpad can be a hassle to navigate at times, so I figured I’d share a little gem with you. Every once in a while, there’s a bug in an Ubuntu package. Yes, hard to believe, but it’s true. To figure out exactly when it broke, it sometimes helps to install previous versions of a package, but what if the package has been removed from the archives? Here’s the trick:

  1. Figure out the name of the source package. The package search site can help you there: the bottom of each binary package page lists — among other things — the name of the source package.
  2. Go to Ubuntu’s Launchpad page (remember, Launchpad is not only for Ubuntu).
  3. In the search field, enter the name of the source package. E.g. “firefox”.
  4. This gives you a list of possible matches. Select the right one. :-)
  5. Find the version you’re interested in in the table and click its version number (the far right column).
  6. You’ll see a list of binary packages generated by this source package. On the left, there’s a box that says “Builds of sourcepackagename – version” with each of the relevant architectures’ build results. Click the architecture you’re running.
  7. You’ll see some info relating to the build of the package (which server built it, how long it took, when it was done, etc.). On the right, there’s a box containing a list of the resulting binary packages. Select the one you’re looking for.
  8. There! At the top, there’s a download link for the deb. Mind you, you’re on your own with regard to managing dependencies.

Oh, and per request I’ve added myself to the planet. Yay!

otp-clicking in gnome-terminal

I noticed that otp challenges get highlighted in gnome-terminal (screenshot: gnome-terminal underlining an otp challenge).
After a bit of digging in the source, it turns out that if you hold down CTRL while clicking the challenge, a window comes up allowing you to enter your password. The resulting passphrase will be sent to the terminal. Nice…