What and Why

There are many ways to host data for containers. Earlier I covered containers using LVM, where each container sat on a separate thinly provisioned Logical Volume managed by LVM. In this instance I'm going to try to do the same thing using ZFS.
The first thing we need to do is install ZFS which includes the kernel modules, tools, and associated libraries.
$ apt install zfs-dkms

This should install ZFS and its associated dependencies, of which there will be quite a few. It needs to compile and generate kernel modules, so it may take a few minutes to complete.

In this instance I have a couple of spare partitions to play with; vda3 and vda4 will be used for the ZFS pools;

$ fdisk -l /dev/vda
Device         Start       End   Sectors  Size  Type
/dev/vda1       2048      4095      2048    1M  BIOS boot
/dev/vda2       4096  52432895  52428800   25G  Linux filesystem
/dev/vda3   52432896 104861695  52428800   25G  Linux filesystem
/dev/vda4  104861696 419428351 314566656  150G  Linux filesystem
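Before creating any pools it's worth a quick sanity check that the module actually built and loaded; something along these lines (this assumes OpenZFS 0.8 or later for the version subcommand):

$ modprobe zfs
$ zfs version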
First we'll create a plain pool called default on vda4;

$ zpool create -m legacy default /dev/vda4
$ zpool list
NAME     SIZE  ALLOC  FREE  FRAG  CAP  DEDUP  HEALTH
default  149G   100K  149G    0%   0%  1.00x  ONLINE
$ zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
default  118K   144G    24K  legacy

It gets a little more interesting as we add a pool with compression and encryption;

$ zpool create -O encryption=on -O keyformat=passphrase \
    -O keylocation=prompt -o compatibility=off \
    -o feature@encryption=enabled -m legacy \
    private /dev/vda3
#
# It should then prompt for your passphrase, this should be
# secure and at least 14 characters.
#
Enter new passphrase: ....
Re-enter new passphrase: ....
$ zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
default  118K   144G    24K  legacy
private  198K  23.7G    98K  legacy
#
# Now turn on compression for the pool called "private"
#
$ zfs set compression=gzip private

And we should be done. The unlocked state of private is persistent, so once the pool has been unlocked it will remain so until it is either explicitly locked or the machine is shut down.
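The key status is exposed as a dataset property, so you can check whether the pool is currently unlocked, or explicitly lock it again; a quick sketch:

#
# check whether the key is currently loaded
#
$ zfs get keystatus private
#
# to lock it again, unmount the datasets and unload the key
#
$ zfs unmount -a
$ zfs unload-key private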
To get a feel for what compression buys us, let's create a test archive on the normal ext4 root filesystem and then move it onto the private pool;

$ tar cf archive.tar /usr
$ ls -lh
total 2.6G
-rw-r--r-- 1 root root 2.6G Oct 16 12:16 archive.tar
#
# If we create a temporary volume in "private"
#
$ zfs create private/tmp -o mountpoint=/mnt/tmp
#
# Then move our test archive onto it ..
#
$ zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
default      118K   144G    24K  legacy
private      975M  22.8G    98K  legacy
private/tmp  975M  22.8G   975M  /mnt/tmp

So the archive, when stored on the normal ext4 root filesystem, is 2.6G in size; however, when moved to the encrypted / compressed filesystem on the private zfs pool, it's only actually consuming 975M, which seems pretty reasonable. Just to see how much of this is compression and how much is de-duplication, I'll move it over to the uncompressed default pool;

$ zfs create default/tmp -o mountpoint=/mnt/tmp2
$ mv /mnt/tmp/archive.tar /mnt/tmp2
$ zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
default      2.49G   142G    24K  legacy
default/tmp  2.49G   142G  2.49G  /mnt/tmp2
private       442K  23.7G    98K  legacy
private/tmp    98K  23.7G    98K  /mnt/tmp

So although we're getting a little bit of de-duplication saving (2.49G vs 2.6G), the majority is coming from the compression we applied to the private volume. We could apply compression to default, however this would slow our containers down somewhat, and in this instance I'm not too worried about storage space.
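As an aside, you don't have to compare file sizes by hand; ZFS tracks the achieved compression ratio per dataset. A quick check (dataset names as above):

$ zfs get compressratio private/tmp
$ zfs get compression,compressratio private

A ratio of roughly 2.7x would be consistent with the 2.6G vs 975M figures above.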
Note that compression is typically applied either to an entire pool or to an individual volume, whereas encryption is typically enabled for an entire pool.
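For example, compression can be set (or changed) per dataset rather than pool-wide, and only data written after the change is affected; a minimal sketch using the datasets created earlier:

#
# lz4 is a cheaper alternative to gzip if CPU overhead is a concern
#
$ zfs set compression=lz4 private/tmp
$ zfs get -r compression private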
After a reboot the private pool will be locked again, so the key needs to be re-loaded and the volumes re-mounted;

$ zfs load-key -a
Enter passphrase for 'private': ...
1 / 1 key(s) successfully loaded
$ zfs mount -a

Now if you take a look at df you should see it has unlocked the private pool and automatically mounted the /mnt/tmp volume.
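To double-check from the ZFS side rather than hunting through df, the mount state is also exposed directly as a property; for instance:

$ zfs list -o name,mounted,mountpoint
$ df -h /mnt/tmp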
Next we can install LXD and point it at the existing default pool;

$ snap install lxd
$ lxd init
Would you like to use LXD clustering? (yes/no) no
Do you want to configure a new storage pool? yes
Name of the new storage pool: default
Name of the storage backend to use: zfs
Create a new ZFS pool? (yes/no): no
Name of the existing ZFS pool or dataset: default
Would you like to connect to a MAAS server? no
Would you like to create a new local network bridge? yes
What should the new bridge be called? lxdbr0
What IPv4 address should be used? auto
What IPv6 address should be used? auto
Would you like the LXD server available over the network? yes
Address to bind LXD to (not including port): all
Port to bind LXD to: 8443
Would you like cached images to be updated automatically? yes
Would you like a YAML "lxd init" preseed to be printed? no

Now enable the LXD user interface;

$ snap set lxd ui.enable=true
$ systemctl reload snap.lxd.daemon

If you point your browser at the machine's port 8443 (or, from the machine itself, https://localhost:8443) and follow the instructions, you should be able to install the appropriate client certificates to get the GUI working.
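If the UI doesn't come up at all, a couple of quick checks to confirm the snap option took effect and where the daemon is listening:

$ snap get lxd ui.enable
$ lxc config get core.https_address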
Note: I have found the process of installing client certificates in the browser for LXD to be problematic on occasion. If you end up with a strange text (JSON) response instead of a web page, you might like to try the following fix, which has worked for me;
$ mkdir lxd-api-access-cert-key-files
$ cd lxd-api-access-cert-key-files
$ openssl genrsa -out lxd-webui.key 4096
$ openssl req -new -key lxd-webui.key -out lxd-webui.csr
$ openssl x509 -req -days 3650 -in lxd-webui.csr -signkey lxd-webui.key -out lxd-webui.crt
$ openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export \
    -in lxd-webui.crt -inkey lxd-webui.key -out lxd-webui.pfx -name "LXD WebUI"
$ lxc config trust add lxd-webui.crt
# Now download the lxd-webui.pfx file locally.
# Import the file into the browser.

If the issue still persists (note, this will destroy any containers you've created), try;

$ snap remove --purge lxd
$ snap install lxd
# At this point you will need to remove all the ZFS volumes
# from "default" because "init" will try to recreate them
$ lxd init

If we now revisit https://localhost:8443, we should be seeing something like this;
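The listings that follow assume a container has already been launched onto the default pool; if you're following along, something like this will do (the image alias here is just an example):

$ lxc launch ubuntu:22.04 zfs1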
$ zfs list -r default/images
NAME                          USED  AVAIL  REFER  MOUNTPOINT
default/images                617M   144G    24K  legacy
default/images/5c0f660608...  617M   144G   617M  legacy
$ zfs list -r default/containers
NAME                      USED  AVAIL  REFER  MOUNTPOINT
default/containers       21.3M   144G    24K  legacy
default/containers/zfs1  10.6M   144G   622M  legacy

So the base image is consuming 617M, but the container itself is only using 10.6M. The useful thing to note is that if we create a second container using the same version of Linux, it can use the same base image. So whereas the first container consumes 617M + 10.6M of space, the second (and subsequent) containers will only consume 10.6M each, which makes them incredibly space efficient, even before you start to look at de-duplication or compression. Just to prove the point, if I create a second instance;

$ zfs list -r default/images
NAME                          USED  AVAIL  REFER  MOUNTPOINT
default/images                617M   144G    24K  legacy
default/images/5c0f660608...  617M   144G   617M  legacy
$ zfs list -r default/containers
NAME                      USED  AVAIL  REFER  MOUNTPOINT
default/containers       21.3M   144G    24K  legacy
default/containers/zfs1  10.6M   144G   622M  legacy
default/containers/zfs2  10.6M   144G   622M  legacy
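Under the hood each container dataset is a clone of a snapshot of the image dataset, which is where the saving comes from; you can see the relationship via the origin property (dataset names as in the listings above, the snapshot name will vary):

$ zfs get origin default/containers/zfs1 default/containers/zfs2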
So, to summarise, hosting our LXD containers on ZFS gives us;

- One or more host managed storage pools
- Access to ZFS options such as compression, encryption, RAID etc
- Out of the box de-duplication
- Lazy space allocation / re-allocation
- Easy access to snapshots for backing up individual containers (see the sketch after this list)
- Fully integrated into LXD’s infrastructure and UI
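Picking up the snapshot point from the list above, here's a quick sketch of what a per-container backup might look like, either through LXD or directly at the ZFS layer (container and snapshot names are just examples):

#
# snapshot via LXD (roll back later with "lxc restore zfs1 pre-upgrade")
#
$ lxc snapshot zfs1 pre-upgrade
$ lxc info zfs1
#
# or directly at the ZFS layer
#
$ zfs snapshot default/containers/zfs1@manual-backup
$ zfs list -t snapshot -r default/containers/zfs1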