Tribal Chicken

Security. Malware Research. Digital Forensics.

ZFS on XenServer 6.2

I need to install ZFS on Linux (ZoL: http://zfsonlinux.org) on XenServer 6.2 to access some stuff on a ZFS-formatted HDD.

Disclaimer: This is a bad idea. For starters, I'm fairly sure this is not the recommended way of installing drivers on XenServer (I believe the correct way is through a supplemental pack, but I'm not certain).

You will likely run into issues caused by memory limitations. So far I’ve experienced spl_kmem_cache pinning a core at 100% and blocking indefinitely due to vmalloc issues. At the time of writing XenServer runs a 32-bit kernel in dom0, so normal ZoL 32-bit caveats apply.
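If you push ahead anyway, one knob that may soften the memory pressure is capping the size of the ZFS ARC via a module option. A minimal sketch; the 256 MB value below is purely illustrative, not something I've tested:

# echo "options zfs zfs_arc_max=268435456" > /etc/modprobe.d/zfs.conf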

I also hope I don’t need to tell you not to do this on a production system or to run file sharing off your hypervisor. Don’t hold me responsible if you break something important.

The way to build drivers for XenServer is by using the aptly-named Driver Development Kit (DDK). This is available for download from Citrix. I'm running XenServer 6.2 with the SP1 and SP1001 updates, so XenServer-6.2.0-SP1-ddk.iso is the appropriate image (you need to ensure the DDK is running the same kernel as your dom0).
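A quick sanity check: uname in dom0 and in the DDK VM should report the same kernel string (mine, judging by the kmod package names later in this post, is 2.6.32.43-0.4.1.xs1.8.0.847.170785xen):

# uname -r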

Once you have downloaded the DDK ISO and uploaded it to your XenServer, we can import the DDK VM and get started.

Mount the ISO

# mount /media/XenServer-6.2.0-SP1-ddk.iso -o loop /mnt

Import the DDK VM

# xe vm-import filename=/mnt/ddk/ova.xml

This will return the UUID of the new VM. Make a note of it (or copy it to your clipboard).
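Alternatively, you could capture the UUID in a shell variable at import time and reuse it in the later xe commands (VM_UUID is just my choice of name):

# VM_UUID=$(xe vm-import filename=/mnt/ddk/ova.xml)
# xe vif-create network-uuid=<UUID-of-network> vm-uuid=$VM_UUID device=0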

Now we need to add a virtual interface which will provide the VM with an internet connection. In my case I attached the VM to my LAN, which is connected to eth1 on the XenServer machine. Yours may be different.

This is not strictly required, of course.

This will return the list of networks and their associated UUIDs:

# xe network-list
 
uuid ( RO)                : 55f30b6c-7f07-7d00-4e57-86c8f8696574
          name-label ( RW): Pool-wide network associated with eth1
    name-description ( RW):
              bridge ( RO): xenbr1

uuid ( RO)                : 6dcf2ca0-ab95-17c0-db16-e9491781e714
          name-label ( RW): Pool-wide network associated with eth0
    name-description ( RW):
              bridge ( RO): xenbr0

uuid ( RO)                : 35c1da4f-fc72-e82d-1468-491c8ec15125
          name-label ( RW): Host internal management network
    name-description ( RW): Network on which guests will be assigned a private link-local IP address which can be used to talk XenAPI
              bridge ( RO): xenapi

Grab the UUID of the network you are after, along with the UUID of the VM you noted earlier, then create the virtual interface:

# xe vif-create network-uuid=<UUID-of-network> vm-uuid=<UUID-of-VM> device=0
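As an aside, if you already know the network's name-label, xe can filter the list and return just the UUID (using the eth1 network from my output above):

# xe network-list name-label="Pool-wide network associated with eth1" --minimal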

We will be using xenconsole for access, so disable VNC, then start the VM:

# xe vm-param-set uuid=<UUID-of-vm> other-config:disable_pv_vnc=1

# xe vm-start uuid=<UUID-of-VM>

Now we’re ready to get into the VM using xenconsole. Use this command to get the domain ID of your newly started VM:

# xe vm-list params=dom-id uuid=<UUID-of-vm>

This will give you the domain ID which xenconsole is expecting:

# /usr/lib/xen/bin/xenconsole 1
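If you'd rather not copy the domain ID across by hand, the two steps combine into one line (--minimal strips the xe output down to just the value):

# /usr/lib/xen/bin/xenconsole $(xe vm-list params=dom-id uuid=<UUID-of-vm> --minimal)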

You are now in the console environment of the DDK VM! Now we can build ZFS on Linux.

Download the latest SPL and ZFS packages and extract them:

# wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-0.6.2.tar.gz
# wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-0.6.2.tar.gz

# tar xf spl-0.6.2.tar.gz
# tar xf zfs-0.6.2.tar.gz

Unfortunately I came across a slight issue here when attempting to build the kmod RPMs: the build fails. I believe it's macro-related, but I'm not well-versed enough in RPM spec files and build systems to explain it off the top of my head.

However, it can be worked around by doing the following for both the SPL and ZFS packages:

You need to edit the file {SPL/ZFS folder}/rpm/generic/{spl/zfs}-kmod.spec and locate the %build section (for me it was around line 116).

Then in the %build section change:

%configure

to

%_configure

Then save and close. Make sure you do the same for both SPL and ZFS.
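If you'd rather script the edit, a quick sed over both spec files does the same thing. This assumes %configure only appears in the %build section of these files, so check the result if in doubt:

# sed -i 's/%configure/%_configure/' spl-0.6.2/rpm/generic/spl-kmod.spec
# sed -i 's/%configure/%_configure/' zfs-0.6.2/rpm/generic/zfs-kmod.spec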

Now we want to build the SPL kmod package:

# cd spl-0.6.2/ && ./configure
# make rpm-utils rpm-kmod

That should succeed. In theory you can then proceed to build ZFS with the --with-spl argument, but that didn't quite work right and, to be perfectly honest, I couldn't be bothered messing around. I just installed SPL instead:

# rpm -ivh spl-0.6.2-1.i386.rpm kmod-spl-2.6.32.43-0.4.1.xs1.8.0.847.170785xen-0.6.2-1.i386.rpm

Now we can build ZFS using a similar procedure:

# cd zfs-0.6.2/ && ./configure && make
# make rpm-utils rpm-kmod
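The finished RPMs end up at the top of each source tree, which is why the scp below can simply glob them:

# ls *.rpm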

Once that completed, I decided the easiest method of installing in dom0 was to just copy the RPMs over:

# scp -r *.rpm root@<dom0-address>:/root/

Now open up another shell window in dom0 (and make sure you're not still in the DDK VM).

Install the RPMs:

# rpm -ivh spl-0.6.2-1.i386.rpm kmod-spl-2.6.32.43-0.4.1.xs1.8.0.847.170785xen-0.6.2-1.i386.rpm
# rpm -ivh zfs-0.6.2-1.i386.rpm kmod-zfs-2.6.32.43-0.4.1.xs1.8.0.847.170785xen-0.6.2-1.i386.rpm
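If the module isn't loaded automatically after installation, it's safe to load it by hand:

# depmod -a
# modprobe zfs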

Verify ZFS is loaded with:

# lsmod | grep zfs
zfs                  1125692  1
zcommon                39750  1 zfs
znvpair                69409  2 zfs,zcommon
zavl                    5087  1 zfs
zunicode              321040  1 zfs
spl                   145309  5 zfs,zcommon,znvpair,zavl,zunicode

Now you should be able to import your ZFS pool, if you already have one, and hope that nothing explodes!
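For reference, zpool import with no arguments scans attached disks for importable pools, and a second invocation imports one by name (tank is just a placeholder):

# zpool import
# zpool import tank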