Ubuntu/Debian: Remove prompts from commands

This should facilitate easier copy-and-paste.
Richard Laager 2019-11-04 21:20:16 -06:00
parent a0b47d0b57
commit 0ea96f54fb
3 changed files with 512 additions and 474 deletions

@ -2,11 +2,11 @@ This experimental guide has been made official at [[Debian Buster Root on ZFS]].
If you have an existing system installed from the experimental guide, adjust your sources:
vi /etc/apt/sources.list.d/buster-backports.list
deb http://deb.debian.org/debian buster-backports main contrib
deb-src http://deb.debian.org/debian buster-backports main contrib
vi /etc/apt/preferences.d/90_zfs
Package: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-dkms zfs-initramfs zfs-test zfsutils-linux zfs-zed
Pin: release n=buster-backports
Pin-Priority: 990
@ -15,19 +15,19 @@ This will allow you to upgrade from the locally-built packages to the official b
You should set a root password before upgrading:
passwd
Apply updates:
apt update
apt dist-upgrade
Reboot:
reboot
If the bpool fails to import, then enter the rescue shell (which requires a root password) and run:
zpool import -f bpool
zpool export bpool
reboot

@ -41,27 +41,27 @@ ZFS native encryption encrypts the data and most metadata in the root pool. It d
If you have a second system, using SSH to access the target system can be convenient.
sudo apt update
sudo apt install --yes openssh-server
sudo systemctl restart ssh
**Hint:** You can find your IP address with `ip addr show scope global | grep inet`. Then, from your main machine, connect with `ssh user@IP`.
1.3 Become root:
sudo -i
1.4 Set up and update the repositories:
echo deb http://deb.debian.org/debian buster contrib >> /etc/apt/sources.list
echo deb http://deb.debian.org/debian buster-backports main contrib >> /etc/apt/sources.list
apt update
1.5 Install ZFS in the Live CD environment:
apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-$(uname -r)
apt install --yes -t buster-backports zfs-dkms
modprobe zfs
* The dkms dependency is installed manually just so it comes from buster and not buster-backports. This is not critical.
@ -69,33 +69,38 @@ If you have a second system, using SSH to access the target system can be conven
2.1 If you are re-using a disk, clear it as necessary:
If the disk was previously used in an MD array, zero the superblock:
apt install --yes mdadm
mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1
Clear the partition table:
sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1
2.2 Partition your disk(s):
Run this if you need legacy (BIOS) booting:
sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/disk/by-id/scsi-SATA_disk1
Run this for UEFI booting (for use now or in the future):
sgdisk -n2:1M:+512M -t2:EF00 /dev/disk/by-id/scsi-SATA_disk1
Run this for the boot pool:
sgdisk -n3:0:+1G -t3:BF01 /dev/disk/by-id/scsi-SATA_disk1
Choose one of the following options:
2.2a Unencrypted or ZFS native encryption:
sgdisk -n4:0:0 -t4:BF01 /dev/disk/by-id/scsi-SATA_disk1
2.2b LUKS:
sgdisk -n4:0:0 -t4:8300 /dev/disk/by-id/scsi-SATA_disk1
Always use the long `/dev/disk/by-id/*` aliases with ZFS. Using the `/dev/sd*` device nodes directly can cause sporadic import failures, especially on systems that have more than one storage pool.
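The by-id names are just stable symlinks to the kernel's `/dev/sd*` nodes. A small sketch of that relationship, using made-up names in a temporary directory rather than real devices:

```shell
# Illustration only: a by-id alias is a symlink that resolves to a device node.
tmp=$(mktemp -d)
touch "$tmp/sda"                         # stands in for /dev/sda
ln -s "$tmp/sda" "$tmp/scsi-SATA_disk1"  # stands in for the by-id alias
readlink -f "$tmp/scsi-SATA_disk1"       # resolves to the sda stand-in
rm -r "$tmp"
```

On a real system, `ls -l /dev/disk/by-id/` shows which alias points at which `/dev/sd*` node; the aliases survive device reordering across reboots, which is why ZFS should be given those.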
@ -106,28 +111,28 @@ Always use the long `/dev/disk/by-id/*` aliases with ZFS. Using the `/dev/sd*`
2.3 Create the boot pool:
zpool create -o ashift=12 -d \
-o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \
-o feature@empty_bpobj=enabled \
-o feature@enabled_txg=enabled \
-o feature@extensible_dataset=enabled \
-o feature@filesystem_limits=enabled \
-o feature@hole_birth=enabled \
-o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \
-o feature@userobj_accounting=enabled \
-o feature@zpool_checkpoint=enabled \
-o feature@spacemap_v2=enabled \
-o feature@project_quota=enabled \
-o feature@resilver_defer=enabled \
-o feature@allocation_classes=enabled \
-O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
-O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt \
bpool /dev/disk/by-id/scsi-SATA_disk1-part3
You should not need to customize any of the options for the boot pool.
@ -143,32 +148,32 @@ Choose one of the following options:
2.4a Unencrypted:
zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt \
rpool /dev/disk/by-id/scsi-SATA_disk1-part4
2.4b LUKS:
apt install --yes cryptsetup
cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 \
/dev/disk/by-id/scsi-SATA_disk1-part4
cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt \
rpool /dev/mapper/luks1
2.4c ZFS native encryption:
zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
-O mountpoint=/ -R /mnt \
rpool /dev/disk/by-id/scsi-SATA_disk1-part4
* The use of `ashift=12` is recommended here because many drives today have 4KiB (or larger) physical sectors, even though they present 512B logical sectors. Also, a future replacement drive may have 4KiB physical sectors (in which case `ashift=12` is desirable) or 4KiB logical sectors (in which case `ashift=12` is required).
* Setting `-O acltype=posixacl` enables POSIX ACLs globally. If you do not want this, remove that option, but later add `-o acltype=posixacl` (note: lowercase "o") to the `zfs create` for `/var/log`, as [journald requires ACLs](https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported)
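A quick sanity check of the `ashift` arithmetic: `ashift` is the base-2 logarithm of the sector size ZFS will assume for the vdev.

```shell
# ashift is log2(sector size):
echo $((1 << 9))    # ashift=9  -> 512-byte sectors
echo $((1 << 12))   # ashift=12 -> 4096-byte (4 KiB) sectors
# On real hardware you can compare what the drive reports, e.g.:
#   lsblk -o NAME,LOG-SEC,PHY-SEC
```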
@ -188,72 +193,84 @@ Choose one of the following options:
3.1 Create filesystem datasets to act as containers:
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=off -o mountpoint=none bpool/BOOT
On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through `pkg image-update` or `beadm`. Similar functionality for APT is possible but currently unimplemented. Even without such a tool, it can still be used for manually created clones.
3.2 Create filesystem datasets for the root and boot filesystems:
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
zfs mount rpool/ROOT/debian
zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/debian
zfs mount bpool/BOOT/debian
With ZFS, it is not normally necessary to use a mount command (either `mount` or `zfs mount`). This situation is an exception because of `canmount=noauto`.
3.3 Create datasets:
zfs create rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off rpool/var
zfs create -o canmount=off rpool/var/lib
zfs create rpool/var/log
zfs create rpool/var/spool
The datasets below are optional, depending on your preferences and/or software choices.
If you wish to exclude these from snapshots:
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
chmod 1777 /mnt/var/tmp
If you use /opt on this system:
zfs create rpool/opt
If you use /srv on this system:
zfs create rpool/srv
If you use /usr/local on this system:
zfs create -o canmount=off rpool/usr
zfs create rpool/usr/local
If this system will have games installed:
zfs create rpool/var/games
If this system will store local email in /var/mail:
zfs create rpool/var/mail
If this system will use Snap packages:
zfs create rpool/var/snap
If you use /var/www on this system:
zfs create rpool/var/www
If this system will use GNOME:
zfs create rpool/var/lib/AccountsService
If this system will use Docker (which manages its own datasets & snapshots):
zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
If this system will use NFS (locking):
zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
A tmpfs is recommended later, but if you want a separate dataset for /tmp:
zfs create -o com.sun:auto-snapshot=false rpool/tmp
chmod 1777 /mnt/tmp
The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling back user data such as logs (in `/var/log`). This will be especially important if/when a `beadm` or similar utility is integrated. The `com.sun:auto-snapshot` setting is used by some ZFS snapshot utilities to exclude transient data.
@ -261,8 +278,8 @@ If you do nothing extra, `/tmp` will be stored as part of the root filesystem. A
3.4 Install the minimal system:
debootstrap buster /mnt
zfs set devices=off rpool
The `debootstrap` command leaves the new system in an unconfigured state. An alternative to using `debootstrap` is to copy the entirety of a working system into the new ZFS root.
@ -270,9 +287,9 @@ The `debootstrap` command leaves the new system in an unconfigured state. An al
4.1 Configure the hostname (change `HOSTNAME` to the desired hostname).
echo HOSTNAME > /mnt/etc/hostname
vi /mnt/etc/hosts
Add a line:
127.0.1.1       HOSTNAME
or if the system has a real name in DNS:
@ -282,10 +299,13 @@ The `debootstrap` command leaves the new system in an unconfigured state. An al
4.2 Configure the network interface:
Find the interface name:
ip addr show
Adjust NAME below to match your interface name:
vi /mnt/etc/network/interfaces.d/NAME
auto NAME
iface NAME inet dhcp
@ -293,52 +313,52 @@ Customize this file if the system is not a DHCP client.
4.3 Configure the package sources:
vi /mnt/etc/apt/sources.list
deb http://deb.debian.org/debian buster main contrib
deb-src http://deb.debian.org/debian buster main contrib
vi /mnt/etc/apt/sources.list.d/buster-backports.list
deb http://deb.debian.org/debian buster-backports main contrib
deb-src http://deb.debian.org/debian buster-backports main contrib
vi /mnt/etc/apt/preferences.d/90_zfs
Package: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-dkms zfs-initramfs zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed
Pin: release n=buster-backports
Pin-Priority: 990
4.4 Bind the virtual filesystems from the LiveCD environment to the new system and `chroot` into it:
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
chroot /mnt /bin/bash --login
**Note:** This is using `--rbind`, not `--bind`.
4.5 Configure a basic system environment:
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes locales
dpkg-reconfigure locales
Even if you prefer a non-English system language, always ensure that `en_US.UTF-8` is available.
dpkg-reconfigure tzdata
4.6 Install ZFS in the chroot environment for the new system:
apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
apt install --yes zfs-initramfs
4.7 For LUKS installs only, setup crypttab:
apt install --yes cryptsetup
echo luks1 UUID=$(blkid -s UUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part4) none \
luks,discard,initramfs > /etc/crypttab
* The use of `initramfs` is a work-around for [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).
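The `echo` above writes a single crypttab line of the form `<name> <device> <keyfile> <options>`. A sketch of the resulting line, with a made-up UUID standing in for the `blkid` output:

```shell
# Illustration only: the UUID below is fabricated; blkid supplies the real one.
uuid="01234567-89ab-cdef-0123-456789abcdef"
line="luks1 UUID=$uuid none luks,discard,initramfs"
echo "$line"
```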
@ -350,20 +370,20 @@ Choose one of the following options:
4.8a Install GRUB for legacy (BIOS) booting
apt install --yes grub-pc
Install GRUB to the disk(s), not the partition(s).
4.8b Install GRUB for UEFI booting
apt install dosfstools
mkdosfs -F 32 -s 1 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part2
mkdir /boot/efi
echo PARTUUID=$(blkid -s PARTUUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part2) \
/boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
mount /boot/efi
apt install --yes grub-efi-amd64 shim-signed
* The `-s 1` for `mkdosfs` is only necessary for drives which present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size (given the partition size of 512 MiB) for FAT32. It also works fine on drives which present 512 B sectors.
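The arithmetic behind that bullet: FAT32 requires at least 65525 clusters, and `-s 1` keeps the cluster at one sector so the 512 MiB partition clears that minimum even on 4Kn drives.

```shell
part=$((512 * 1024 * 1024))   # 512 MiB partition, in bytes
echo $((part / 4096))         # -s 1 on a 4Kn drive: 4 KiB clusters -> 131072 (enough)
echo $((part / 32768))        # a 32 KiB cluster would give only 16384 (too few for FAT32)
```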
@ -371,13 +391,13 @@ Install GRUB to the disk(s), not the partition(s).
4.9 Set a root password
passwd
4.10 Enable importing bpool
This ensures that `bpool` is always imported, regardless of whether `/etc/zfs/zpool.cache` exists, whether it is in the cachefile or not, or whether `zfs-import-scan.service` is enabled.
```
vi /etc/systemd/system/zfs-import-bpool.service
[Unit]
DefaultDependencies=no
Before=zfs-import-scan.service
@ -391,21 +411,21 @@ This ensures that `bpool` is always imported, regardless of whether `/etc/zfs/zp
[Install]
WantedBy=zfs-import.target
systemctl enable zfs-import-bpool.service
```
4.11 Optional (but recommended): Mount a tmpfs to /tmp
If you chose to create a `/tmp` dataset above, skip this step, as they are mutually exclusive choices. Otherwise, you can put `/tmp` on a tmpfs (RAM filesystem) by enabling the `tmp.mount` unit.
cp /usr/share/systemd/tmp.mount /etc/systemd/system/
systemctl enable tmp.mount
4.12 Optional (but kindly requested): Install popcon
The `popularity-contest` package reports the list of packages installed on your system. Showing that ZFS is popular may be helpful in terms of long-term attention from the distro.
apt install --yes popularity-contest
Choose Yes at the prompt.
@ -413,24 +433,22 @@ Choose Yes at the prompt.
5.1 Verify that the ZFS boot filesystem is recognized:
grub-probe /boot
5.2 Refresh the initrd files:
update-initramfs -u -k all
**Note:** When using LUKS, this will print "WARNING could not determine root device from /etc/fstab". This is because [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).
5.3 Workaround GRUB's missing zpool-features support:
vi /etc/default/grub
Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
5.4 Optional (but highly recommended): Make debugging GRUB easier:
vi /etc/default/grub
Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
Uncomment: GRUB_TERMINAL=console
Save and quit.
@ -439,11 +457,7 @@ Later, once the system has rebooted twice and you are sure everything is working
5.5 Update the boot configuration:
update-grub
**Note:** Ignore errors from `osprober`, if present.
@ -451,22 +465,20 @@ Later, once the system has rebooted twice and you are sure everything is working
5.6a For legacy (BIOS) booting, install GRUB to the MBR:
grub-install /dev/disk/by-id/scsi-SATA_disk1
Note that you are installing GRUB to the whole disk, not a partition.
If you are creating a mirror or raidz topology, repeat the `grub-install` command for each disk in the pool.
5.6b For UEFI booting, install GRUB:
grub-install --target=x86_64-efi --efi-directory=/boot/efi \
--bootloader-id=debian --recheck --no-floppy
5.7 Verify that the ZFS module is installed:
ls /boot/grub/*/zfs.mod
5.8 Fix filesystem mount ordering 5.8 Fix filesystem mount ordering
@ -474,67 +486,72 @@ Until there is support for mounting `/boot` in the initramfs, we also need to mo
We need to activate `zfs-mount-generator`. This makes systemd aware of the separate mountpoints, which is important for things like `/var/log` and `/var/tmp`. In turn, `rsyslog.service` depends on `var-log.mount` by way of `local-fs.target` and services using the `PrivateTmp` feature of systemd automatically use `After=var-tmp.mount`.
For UEFI booting, unmount /boot/efi first:
umount /boot/efi
Everything else applies to both BIOS and UEFI booting:
zfs set mountpoint=legacy bpool/BOOT/debian
echo bpool/BOOT/debian /boot zfs \
nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
mkdir /etc/zfs/zfs-list.cache
touch /etc/zfs/zfs-list.cache/rpool
ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
zed -F &
Verify that zed updated the cache by making sure this is not empty:
cat /etc/zfs/zfs-list.cache/rpool
If it is empty, force a cache update and check again:
zfs set canmount=noauto rpool/ROOT/debian
Stop zed:
fg
Press Ctrl-C.
Fix the paths to eliminate /mnt:
sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/rpool
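What that `sed` does can be illustrated on sample input. The lines below are made-up stand-ins for cache entries (the real file is tab-separated, with the mountpoint in the second field followed by further property columns):

```shell
# The pattern replaces the first "/mnt" (with optional trailing slash) on each line:
printf 'rpool/ROOT/debian\t/mnt\ton\n' | sed -E "s|/mnt/?|/|"       # mountpoint /mnt becomes /
printf 'rpool/var/log\t/mnt/var/log\ton\n' | sed -E "s|/mnt/?|/|"   # /mnt/var/log becomes /var/log
```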
## Step 6: First Boot
6.1 Snapshot the initial installation:
zfs snapshot bpool/BOOT/debian@install
zfs snapshot rpool/ROOT/debian@install
In the future, you will likely want to take snapshots before each upgrade, and remove old snapshots (including this one) at some point to save space.
6.2 Exit from the `chroot` environment back to the LiveCD environment:
exit
6.3 Run these commands in the LiveCD environment to unmount all filesystems:
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a
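The unmount pipeline in 6.3 can be demonstrated on sample `mount`-style output (the device and mountpoint names below are made up): `grep -v zfs` drops the ZFS lines, `tac` reverses so deeper mounts come first, and `awk` prints field 3 (the mountpoint) for `/mnt` entries.

```shell
printf '%s\n' \
  'rpool/ROOT/debian on /mnt type zfs (rw)' \
  'udev on /mnt/dev type devtmpfs (rw)' \
  'proc on /mnt/proc type proc (rw)' \
  | grep -v zfs | tac | awk '/\/mnt/ {print $3}'
# -> /mnt/proc
#    /mnt/dev
```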
6.4 Reboot:
reboot
6.5 Wait for the newly installed system to boot normally. Login as root.
6.6 Create a user account:
zfs create rpool/home/YOURUSERNAME
adduser YOURUSERNAME
cp -a /etc/skel/. /home/YOURUSERNAME
chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME
6.7 Add your user account to the default set of groups for an administrator:
usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME
6.8 Mirror GRUB
@ -542,21 +559,22 @@ If you installed to multiple disks, install GRUB on the additional disks:
6.8a For legacy (BIOS) booting: 6.8a For legacy (BIOS) booting:
# dpkg-reconfigure grub-pc dpkg-reconfigure grub-pc
Hit enter until you get to the device selection screen. Hit enter until you get to the device selection screen.
Select (using the space bar) all of the disks (not partitions) in your pool. Select (using the space bar) all of the disks (not partitions) in your pool.
6.8b UEFI 6.8b UEFI
# umount /boot/efi umount /boot/efi
For the second and subsequent disks (increment debian-2 to -3, etc.): For the second and subsequent disks (increment debian-2 to -3, etc.):
# dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
of=/dev/disk/by-id/scsi-SATA_disk2-part2
# efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
-p 3 -L "debian-2" -l '\EFI\debian\grubx64.efi'
# mount /boot/efi dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
of=/dev/disk/by-id/scsi-SATA_disk2-part2
efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
-p 3 -L "debian-2" -l '\EFI\debian\grubx64.efi'
mount /boot/efi
## Step 7: (Optional) Configure Swap ## Step 7: (Optional) Configure Swap
@ -564,10 +582,10 @@ If you installed to multiple disks, install GRUB on the additional disks:
7.1 Create a volume dataset (zvol) for use as a swap device: 7.1 Create a volume dataset (zvol) for use as a swap device:
# zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \ -o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \ -o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap -o com.sun:auto-snapshot=false rpool/swap
You can adjust the size (the `4G` part) to your needs. You can adjust the size (the `4G` part) to your needs.
@ -577,31 +595,31 @@ The compression algorithm is set to `zle` because it is the cheapest available a
**Caution**: Always use long `/dev/zvol` aliases in configuration files. Never use a short `/dev/zdX` device name. **Caution**: Always use long `/dev/zvol` aliases in configuration files. Never use a short `/dev/zdX` device name.
# mkswap -f /dev/zvol/rpool/swap mkswap -f /dev/zvol/rpool/swap
# echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
# echo RESUME=none > /etc/initramfs-tools/conf.d/resume echo RESUME=none > /etc/initramfs-tools/conf.d/resume
The `RESUME=none` is necessary to disable resuming from hibernation. This does not work, as the zvol is not present (because the pool has not yet been imported) at the time the resume script runs. If it is not disabled, the boot process hangs for 30 seconds waiting for the swap zvol to appear. The `RESUME=none` is necessary to disable resuming from hibernation. This does not work, as the zvol is not present (because the pool has not yet been imported) at the time the resume script runs. If it is not disabled, the boot process hangs for 30 seconds waiting for the swap zvol to appear.
7.3 Enable the swap device: 7.3 Enable the swap device:
# swapon -av swapon -av
## Step 8: Full Software Installation ## Step 8: Full Software Installation
8.1 Upgrade the minimal system: 8.1 Upgrade the minimal system:
# apt dist-upgrade --yes apt dist-upgrade --yes
8.2 Install a regular set of software: 8.2 Install a regular set of software:
# tasksel tasksel
8.3 Optional: Disable log compression: 8.3 Optional: Disable log compression:
As `/var/log` is already compressed by ZFS, logrotates compression is going to burn CPU and disk I/O for (in most cases) very little gain. Also, if you are making snapshots of `/var/log`, logrotates compression will actually waste space, as the uncompressed data will live on in the snapshot. You can edit the files in `/etc/logrotate.d` by hand to comment out `compress`, or use this loop (copy-and-paste highly recommended): As `/var/log` is already compressed by ZFS, logrotates compression is going to burn CPU and disk I/O for (in most cases) very little gain. Also, if you are making snapshots of `/var/log`, logrotates compression will actually waste space, as the uncompressed data will live on in the snapshot. You can edit the files in `/etc/logrotate.d` by hand to comment out `compress`, or use this loop (copy-and-paste highly recommended):
# for file in /etc/logrotate.d/* ; do for file in /etc/logrotate.d/* ; do
if grep -Eq "(^|[^#y])compress" "$file" ; then if grep -Eq "(^|[^#y])compress" "$file" ; then
sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
fi fi
@ -609,7 +627,7 @@ As `/var/log` is already compressed by ZFS, logrotates compression is going t
8.4 Reboot: 8.4 Reboot:
# reboot reboot
### Step 9: Final Cleanup ### Step 9: Final Cleanup
@ -617,29 +635,29 @@ As `/var/log` is already compressed by ZFS, logrotates compression is going t
9.2 Optional: Delete the snapshots of the initial installation: 9.2 Optional: Delete the snapshots of the initial installation:
$ sudo zfs destroy bpool/BOOT/debian@install sudo zfs destroy bpool/BOOT/debian@install
$ sudo zfs destroy rpool/ROOT/debian@install sudo zfs destroy rpool/ROOT/debian@install
9.3 Optional: Disable the root password 9.3 Optional: Disable the root password
$ sudo usermod -p '*' root sudo usermod -p '*' root
9.4 Optional: Re-enable the graphical boot process: 9.4 Optional: Re-enable the graphical boot process:
If you prefer the graphical boot process, you can re-enable it now. If you are using LUKS, it makes the prompt look nicer. If you prefer the graphical boot process, you can re-enable it now. If you are using LUKS, it makes the prompt look nicer.
$ sudo vi /etc/default/grub sudo vi /etc/default/grub
Add quiet to GRUB_CMDLINE_LINUX_DEFAULT Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
Comment out GRUB_TERMINAL=console Comment out GRUB_TERMINAL=console
Save and quit. Save and quit.
$ sudo update-grub sudo update-grub
**Note:** Ignore errors from `osprober`, if present. **Note:** Ignore errors from `osprober`, if present.
9.5 Optional: For LUKS installs only, backup the LUKS header: 9.5 Optional: For LUKS installs only, backup the LUKS header:
$ sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
--header-backup-file luks1-header.dat --header-backup-file luks1-header.dat
Store that backup somewhere safe (e.g. cloud storage). It is protected by your LUKS passphrase, but you may wish to use additional encryption. Store that backup somewhere safe (e.g. cloud storage). It is protected by your LUKS passphrase, but you may wish to use additional encryption.
@ -654,36 +672,36 @@ Go through [Step 1: Prepare The Install Environment](#step-1-prepare-the-install
For LUKS, first unlock the disk(s): For LUKS, first unlock the disk(s):
# apt install --yes cryptsetup apt install --yes cryptsetup
# cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
Repeat for additional disks, if this is a mirror or raidz topology. Repeat for additional disks, if this is a mirror or raidz topology.
Mount everything correctly: Mount everything correctly:
# zpool export -a zpool export -a
# zpool import -N -R /mnt rpool zpool import -N -R /mnt rpool
# zpool import -N -R /mnt bpool zpool import -N -R /mnt bpool
# zfs load-key -a zfs load-key -a
# zfs mount rpool/ROOT/debian zfs mount rpool/ROOT/debian
# zfs mount -a zfs mount -a
If needed, you can chroot into your installed environment: If needed, you can chroot into your installed environment:
# mount --rbind /dev /mnt/dev mount --rbind /dev /mnt/dev
# mount --rbind /proc /mnt/proc mount --rbind /proc /mnt/proc
# mount --rbind /sys /mnt/sys mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash --login chroot /mnt /bin/bash --login
# mount /boot mount /boot
# mount -a mount -a
Do whatever you need to do to fix your system. Do whatever you need to do to fix your system.
When done, cleanup: When done, cleanup:
# exit exit
# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
# zpool export -a zpool export -a
# reboot reboot
### MPT2SAS ### MPT2SAS
@ -709,11 +727,13 @@ Set a unique serial number on each virtual disk using libvirt or qemu (e.g. `-dr
To be able to use UEFI in guests (instead of only BIOS booting), run this on the host: To be able to use UEFI in guests (instead of only BIOS booting), run this on the host:
$ sudo apt install ovmf sudo apt install ovmf
$ sudo vi /etc/libvirt/qemu.conf
sudo vi /etc/libvirt/qemu.conf
Uncomment these lines: Uncomment these lines:
nvram = [ nvram = [
"/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
"/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd"
] ]
$ sudo service libvirt-bin restart
sudo service libvirt-bin restart

@ -36,58 +36,63 @@ LUKS encrypts almost everything: the OS, swap, home directories, and anything el
1.2 Setup and update the repositories: 1.2 Setup and update the repositories:
$ sudo apt-add-repository universe sudo apt-add-repository universe
$ sudo apt update sudo apt update
1.3 Optional: Install and start the OpenSSH server in the Live CD environment: 1.3 Optional: Install and start the OpenSSH server in the Live CD environment:
If you have a second system, using SSH to access the target system can be convenient. If you have a second system, using SSH to access the target system can be convenient.
$ passwd passwd
There is no current password; hit enter at that prompt. There is no current password; hit enter at that prompt.
$ sudo apt install --yes openssh-server sudo apt install --yes openssh-server
**Hint:** You can find your IP address with `ip addr show scope global | grep inet`. Then, from your main machine, connect with `ssh ubuntu@IP`. **Hint:** You can find your IP address with `ip addr show scope global | grep inet`. Then, from your main machine, connect with `ssh ubuntu@IP`.
1.4 Become root: 1.4 Become root:
$ sudo -i sudo -i
1.5 Install ZFS in the Live CD environment: 1.5 Install ZFS in the Live CD environment:
# apt install --yes debootstrap gdisk zfs-initramfs apt install --yes debootstrap gdisk zfs-initramfs
## Step 2: Disk Formatting ## Step 2: Disk Formatting
2.1 If you are re-using a disk, clear it as necessary: 2.1 If you are re-using a disk, clear it as necessary:
If the disk was previously used in an MD array, zero the superblock: If the disk was previously used in an MD array, zero the superblock:
# apt install --yes mdadm
# mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1
Clear the partition table: apt install --yes mdadm
# sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1 mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1
Clear the partition table:
sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1
2.2 Partition your disk(s): 2.2 Partition your disk(s):
Run this if you need legacy (BIOS) booting: Run this if you need legacy (BIOS) booting:
# sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/disk/by-id/scsi-SATA_disk1
Run this for UEFI booting (for use now or in the future): sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/disk/by-id/scsi-SATA_disk1
# sgdisk -n2:1M:+512M -t2:EF00 /dev/disk/by-id/scsi-SATA_disk1
Run this for the boot pool: Run this for UEFI booting (for use now or in the future):
# sgdisk -n3:0:+1G -t3:BF01 /dev/disk/by-id/scsi-SATA_disk1
sgdisk -n2:1M:+512M -t2:EF00 /dev/disk/by-id/scsi-SATA_disk1
Run this for the boot pool:
sgdisk -n3:0:+1G -t3:BF01 /dev/disk/by-id/scsi-SATA_disk1
Choose one of the following options: Choose one of the following options:
2.2a Unencrypted: 2.2a Unencrypted:
# sgdisk -n4:0:0 -t4:BF01 /dev/disk/by-id/scsi-SATA_disk1 sgdisk -n4:0:0 -t4:BF01 /dev/disk/by-id/scsi-SATA_disk1
2.2b LUKS: 2.2b LUKS:
# sgdisk -n4:0:0 -t4:8300 /dev/disk/by-id/scsi-SATA_disk1 sgdisk -n4:0:0 -t4:8300 /dev/disk/by-id/scsi-SATA_disk1
Always use the long `/dev/disk/by-id/*` aliases with ZFS. Using the `/dev/sd*` device nodes directly can cause sporadic import failures, especially on systems that have more than one storage pool. Always use the long `/dev/disk/by-id/*` aliases with ZFS. Using the `/dev/sd*` device nodes directly can cause sporadic import failures, especially on systems that have more than one storage pool.
@ -98,23 +103,23 @@ Always use the long `/dev/disk/by-id/*` aliases with ZFS. Using the `/dev/sd*`
2.3 Create the boot pool: 2.3 Create the boot pool:
# zpool create -o ashift=12 -d \ zpool create -o ashift=12 -d \
-o feature@async_destroy=enabled \ -o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \ -o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \ -o feature@embedded_data=enabled \
-o feature@empty_bpobj=enabled \ -o feature@empty_bpobj=enabled \
-o feature@enabled_txg=enabled \ -o feature@enabled_txg=enabled \
-o feature@extensible_dataset=enabled \ -o feature@extensible_dataset=enabled \
-o feature@filesystem_limits=enabled \ -o feature@filesystem_limits=enabled \
-o feature@hole_birth=enabled \ -o feature@hole_birth=enabled \
-o feature@large_blocks=enabled \ -o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \ -o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \ -o feature@spacemap_histogram=enabled \
-o feature@userobj_accounting=enabled \ -o feature@userobj_accounting=enabled \
-O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
-O normalization=formD -O relatime=on -O xattr=sa \ -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt \ -O mountpoint=/ -R /mnt \
bpool /dev/disk/by-id/scsi-SATA_disk1-part3 bpool /dev/disk/by-id/scsi-SATA_disk1-part3
You should not need to customize any of the options for the boot pool. You should not need to customize any of the options for the boot pool.
@ -130,22 +135,22 @@ Choose one of the following options:
2.4a Unencrypted: 2.4a Unencrypted:
# zpool create -o ashift=12 \ zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \ -O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt \ -O mountpoint=/ -R /mnt \
rpool /dev/disk/by-id/scsi-SATA_disk1-part4 rpool /dev/disk/by-id/scsi-SATA_disk1-part4
2.4b LUKS: 2.4b LUKS:
# cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 \ cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 \
/dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk1-part4
# cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
# zpool create -o ashift=12 \ zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \ -O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt \ -O mountpoint=/ -R /mnt \
rpool /dev/mapper/luks1 rpool /dev/mapper/luks1
* The use of `ashift=12` is recommended here because many drives today have 4KiB (or larger) physical sectors, even though they present 512B logical sectors. Also, a future replacement drive may have 4KiB physical sectors (in which case `ashift=12` is desirable) or 4KiB logical sectors (in which case `ashift=12` is required). * The use of `ashift=12` is recommended here because many drives today have 4KiB (or larger) physical sectors, even though they present 512B logical sectors. Also, a future replacement drive may have 4KiB physical sectors (in which case `ashift=12` is desirable) or 4KiB logical sectors (in which case `ashift=12` is required).
* Setting `-O acltype=posixacl` enables POSIX ACLs globally. If you do not want this, remove that option, but later add `-o acltype=posixacl` (note: lowercase "o") to the `zfs create` for `/var/log`, as [journald requires ACLs](https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported) * Setting `-O acltype=posixacl` enables POSIX ACLs globally. If you do not want this, remove that option, but later add `-o acltype=posixacl` (note: lowercase "o") to the `zfs create` for `/var/log`, as [journald requires ACLs](https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported)
@ -164,72 +169,84 @@ Choose one of the following options:
3.1 Create filesystem datasets to act as containers: 3.1 Create filesystem datasets to act as containers:
# zfs create -o canmount=off -o mountpoint=none rpool/ROOT zfs create -o canmount=off -o mountpoint=none rpool/ROOT
# zfs create -o canmount=off -o mountpoint=none bpool/BOOT zfs create -o canmount=off -o mountpoint=none bpool/BOOT
On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through `pkg image-update` or `beadm`. Similar functionality for APT is possible but currently unimplemented. Even without such a tool, it can still be used for manually created clones. On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through `pkg image-update` or `beadm`. Similar functionality for APT is possible but currently unimplemented. Even without such a tool, it can still be used for manually created clones.
3.2 Create filesystem datasets for the root and boot filesystems: 3.2 Create filesystem datasets for the root and boot filesystems:
# zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
# zfs mount rpool/ROOT/ubuntu zfs mount rpool/ROOT/ubuntu
# zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu
# zfs mount bpool/BOOT/ubuntu zfs mount bpool/BOOT/ubuntu
With ZFS, it is not normally necessary to use a mount command (either `mount` or `zfs mount`). This situation is an exception because of `canmount=noauto`. With ZFS, it is not normally necessary to use a mount command (either `mount` or `zfs mount`). This situation is an exception because of `canmount=noauto`.
3.3 Create datasets: 3.3 Create datasets:
# zfs create rpool/home zfs create rpool/home
# zfs create -o mountpoint=/root rpool/home/root zfs create -o mountpoint=/root rpool/home/root
# zfs create -o canmount=off rpool/var zfs create -o canmount=off rpool/var
# zfs create -o canmount=off rpool/var/lib zfs create -o canmount=off rpool/var/lib
# zfs create rpool/var/log zfs create rpool/var/log
# zfs create rpool/var/spool zfs create rpool/var/spool
The datasets below are optional, depending on your preferences and/or The datasets below are optional, depending on your preferences and/or software
software choices: choices.
If you wish to exclude these from snapshots: If you wish to exclude these from snapshots:
# zfs create -o com.sun:auto-snapshot=false rpool/var/cache
# zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
# chmod 1777 /mnt/var/tmp
If you use /opt on this system: zfs create -o com.sun:auto-snapshot=false rpool/var/cache
# zfs create rpool/opt zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
chmod 1777 /mnt/var/tmp
If you use /srv on this system: If you use /opt on this system:
# zfs create rpool/srv
If you use /usr/local on this system: zfs create rpool/opt
# zfs create -o canmount=off rpool/usr
# zfs create rpool/usr/local
If this system will have games installed: If you use /srv on this system:
# zfs create rpool/var/games
If this system will store local email in /var/mail: zfs create rpool/srv
# zfs create rpool/var/mail
If this system will use Snap packages: If you use /usr/local on this system:
# zfs create rpool/var/snap
If you use /var/www on this system: zfs create -o canmount=off rpool/usr
# zfs create rpool/var/www zfs create rpool/usr/local
If this system will use GNOME: If this system will have games installed:
# zfs create rpool/var/lib/AccountsService
If this system will use Docker (which manages its own datasets & snapshots): zfs create rpool/var/games
# zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
If this system will use NFS (locking): If this system will store local email in /var/mail:
# zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
A tmpfs is recommended later, but if you want a separate dataset for /tmp: zfs create rpool/var/mail
# zfs create -o com.sun:auto-snapshot=false rpool/tmp
# chmod 1777 /mnt/tmp If this system will use Snap packages:
zfs create rpool/var/snap
If you use /var/www on this system:
zfs create rpool/var/www
If this system will use GNOME:
zfs create rpool/var/lib/AccountsService
If this system will use Docker (which manages its own datasets & snapshots):
zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
If this system will use NFS (locking):
zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
A tmpfs is recommended later, but if you want a separate dataset for /tmp:
zfs create -o com.sun:auto-snapshot=false rpool/tmp
chmod 1777 /mnt/tmp
The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling back user data such as logs (in `/var/log`). This will be especially important if/when a `beadm` or similar utility is integrated. The `com.sun.auto-snapshot` setting is used by some ZFS snapshot utilities to exclude transient data. The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling back user data such as logs (in `/var/log`). This will be especially important if/when a `beadm` or similar utility is integrated. The `com.sun.auto-snapshot` setting is used by some ZFS snapshot utilities to exclude transient data.
@ -237,8 +254,8 @@ If you do nothing extra, `/tmp` will be stored as part of the root filesystem. A
3.4 Install the minimal system: 3.4 Install the minimal system:
# debootstrap bionic /mnt debootstrap bionic /mnt
# zfs set devices=off rpool zfs set devices=off rpool
The `debootstrap` command leaves the new system in an unconfigured state. An alternative to using `debootstrap` is to copy the entirety of a working system into the new ZFS root. The `debootstrap` command leaves the new system in an unconfigured state. An alternative to using `debootstrap` is to copy the entirety of a working system into the new ZFS root.
@ -246,9 +263,9 @@ The `debootstrap` command leaves the new system in an unconfigured state. An al
4.1 Configure the hostname (change `HOSTNAME` to the desired hostname). 4.1 Configure the hostname (change `HOSTNAME` to the desired hostname).
# echo HOSTNAME > /mnt/etc/hostname echo HOSTNAME > /mnt/etc/hostname
# vi /mnt/etc/hosts vi /mnt/etc/hosts
Add a line: Add a line:
127.0.1.1 HOSTNAME 127.0.1.1 HOSTNAME
or if the system has a real name in DNS: or if the system has a real name in DNS:
@ -258,11 +275,13 @@ The `debootstrap` command leaves the new system in an unconfigured state. An al
4.2 Configure the network interface: 4.2 Configure the network interface:
Find the interface name: Find the interface name:
# ip addr show
Adjust NAME below to match your interface name: ip addr show
# vi /mnt/etc/netplan/01-netcfg.yaml
Adjust NAME below to match your interface name:
vi /mnt/etc/netplan/01-netcfg.yaml
network: network:
version: 2 version: 2
ethernets: ethernets:
@ -273,7 +292,7 @@ Customize this file if the system is not a DHCP client.
4.3 Configure the package sources: 4.3 Configure the package sources:
# vi /mnt/etc/apt/sources.list vi /mnt/etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu bionic main universe deb http://archive.ubuntu.com/ubuntu bionic main universe
deb-src http://archive.ubuntu.com/ubuntu bionic main universe deb-src http://archive.ubuntu.com/ubuntu bionic main universe
@ -285,41 +304,42 @@ Customize this file if the system is not a DHCP client.
4.4 Bind the virtual filesystems from the LiveCD environment to the new system and `chroot` into it: 4.4 Bind the virtual filesystems from the LiveCD environment to the new system and `chroot` into it:
# mount --rbind /dev /mnt/dev mount --rbind /dev /mnt/dev
# mount --rbind /proc /mnt/proc mount --rbind /proc /mnt/proc
# mount --rbind /sys /mnt/sys mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash --login chroot /mnt /bin/bash --login
**Note:** This is using `--rbind`, not `--bind`. **Note:** This is using `--rbind`, not `--bind`.
4.5 Configure a basic system environment: 4.5 Configure a basic system environment:
# ln -s /proc/self/mounts /etc/mtab ln -s /proc/self/mounts /etc/mtab
# apt update apt update
# dpkg-reconfigure locales dpkg-reconfigure locales
Even if you prefer a non-English system language, always ensure that `en_US.UTF-8` is available. Even if you prefer a non-English system language, always ensure that `en_US.UTF-8` is available.
# dpkg-reconfigure tzdata dpkg-reconfigure tzdata
If you prefer nano over vi, install it: If you prefer nano over vi, install it:
# apt install --yes nano
apt install --yes nano
4.6 Install ZFS in the chroot environment for the new system: 4.6 Install ZFS in the chroot environment for the new system:
# apt install --yes --no-install-recommends linux-image-generic apt install --yes --no-install-recommends linux-image-generic
# apt install --yes zfs-initramfs apt install --yes zfs-initramfs
**Hint:** For the HWE kernel, install `linux-image-generic-hwe-18.04` instead of `linux-image-generic`. **Hint:** For the HWE kernel, install `linux-image-generic-hwe-18.04` instead of `linux-image-generic`.
4.7 For LUKS installs only, setup crypttab: 4.7 For LUKS installs only, setup crypttab:
# apt install --yes cryptsetup apt install --yes cryptsetup
# echo luks1 UUID=$(blkid -s UUID -o value \ echo luks1 UUID=$(blkid -s UUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part4) none \ /dev/disk/by-id/scsi-SATA_disk1-part4) none \
luks,discard,initramfs > /etc/crypttab luks,discard,initramfs > /etc/crypttab
* The use of `initramfs` is a work-around for [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906). * The use of `initramfs` is a work-around for [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).
@ -331,20 +351,20 @@ Choose one of the following options:
4.8a Install GRUB for legacy (BIOS) booting 4.8a Install GRUB for legacy (BIOS) booting
# apt install --yes grub-pc apt install --yes grub-pc
Install GRUB to the disk(s), not the partition(s). Install GRUB to the disk(s), not the partition(s).
4.8b Install GRUB for UEFI booting 4.8b Install GRUB for UEFI booting
# apt install dosfstools apt install dosfstools
# mkdosfs -F 32 -s 1 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part2 mkdosfs -F 32 -s 1 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part2
# mkdir /boot/efi mkdir /boot/efi
# echo PARTUUID=$(blkid -s PARTUUID -o value \ echo PARTUUID=$(blkid -s PARTUUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part2) \ /dev/disk/by-id/scsi-SATA_disk1-part2) \
/boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
# mount /boot/efi mount /boot/efi
# apt install --yes grub-efi-amd64-signed shim-signed apt install --yes grub-efi-amd64-signed shim-signed
* The `-s 1` for `mkdosfs` is only necessary for drives which present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size (given the partition size of 512 MiB) for FAT32. It also works fine on drives which present 512 B sectors. * The `-s 1` for `mkdosfs` is only necessary for drives which present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size (given the partition size of 512 MiB) for FAT32. It also works fine on drives which present 512 B sectors.
@ -352,13 +372,13 @@ Install GRUB to the disk(s), not the partition(s).
4.9 Set a root password 4.9 Set a root password
# passwd passwd
4.10 Enable importing bpool 4.10 Enable importing bpool
This ensures that `bpool` is always imported, regardless of whether `/etc/zfs/zpool.cache` exists, whether it is in the cachefile or not, or whether `zfs-import-scan.service` is enabled. This ensures that `bpool` is always imported, regardless of whether `/etc/zfs/zpool.cache` exists, whether it is in the cachefile or not, or whether `zfs-import-scan.service` is enabled.
``` ```
# vi /etc/systemd/system/zfs-import-bpool.service vi /etc/systemd/system/zfs-import-bpool.service
[Unit] [Unit]
DefaultDependencies=no DefaultDependencies=no
Before=zfs-import-scan.service Before=zfs-import-scan.service
@ -372,43 +392,41 @@ This ensures that `bpool` is always imported, regardless of whether `/etc/zfs/zp
[Install]
WantedBy=zfs-import.target

systemctl enable zfs-import-bpool.service
```

4.11 Optional (but recommended): Mount a tmpfs to /tmp

If you chose to create a `/tmp` dataset above, skip this step, as they are mutually exclusive choices. Otherwise, you can put `/tmp` on a tmpfs (RAM filesystem) by enabling the `tmp.mount` unit.

cp /usr/share/systemd/tmp.mount /etc/systemd/system/
systemctl enable tmp.mount

4.12 Set up system groups:

addgroup --system lpadmin
addgroup --system sambashare

## Step 5: GRUB Installation

5.1 Verify that the ZFS boot filesystem is recognized:

grub-probe /boot

5.2 Refresh the initrd files:

update-initramfs -u -k all

**Note:** When using LUKS, this will print "WARNING could not determine root device from /etc/fstab". This is because [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).

5.3 Work around GRUB's missing zpool-features support:

vi /etc/default/grub

Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu"

5.4 Optional (but highly recommended): Make debugging GRUB easier:

vi /etc/default/grub

Comment out: GRUB_TIMEOUT_STYLE=hidden
Set: GRUB_TIMEOUT=5
Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5
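After steps 5.3 and 5.4, the relevant lines of `/etc/default/grub` might look like the sketch below; this is only an excerpt, and your file will contain other settings as well:

```shell
# Hypothetical excerpt of /etc/default/grub after the edits above
#GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=5
GRUB_RECORDFAIL_TIMEOUT=5
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu"
```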
@ -420,11 +438,7 @@ Later, once the system has rebooted twice and you are sure everything is working
5.5 Update the boot configuration:

update-grub

**Note:** Ignore errors from `osprober`, if present.
@ -432,22 +446,20 @@ Later, once the system has rebooted twice and you are sure everything is working
5.6a For legacy (BIOS) booting, install GRUB to the MBR:

grub-install /dev/disk/by-id/scsi-SATA_disk1

Note that you are installing GRUB to the whole disk, not a partition.

If you are creating a mirror or raidz topology, repeat the `grub-install` command for each disk in the pool.

5.6b For UEFI booting, install GRUB:

grub-install --target=x86_64-efi --efi-directory=/boot/efi \
    --bootloader-id=ubuntu --recheck --no-floppy

5.7 Verify that the ZFS module is installed:

ls /boot/grub/*/zfs.mod

5.8 Fix filesystem mount ordering
@ -458,63 +470,66 @@ Until there is support for mounting `/boot` in the initramfs, we also need to mo
`rpool` is guaranteed to be imported by the initramfs, so there is no point in adding `x-systemd.requires=zfs-import.target` to those filesystems.

For UEFI booting, unmount /boot/efi first:

umount /boot/efi

Everything else applies to both BIOS and UEFI booting:

zfs set mountpoint=legacy bpool/BOOT/ubuntu
echo bpool/BOOT/ubuntu /boot zfs \
    nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab

zfs set mountpoint=legacy rpool/var/log
echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab

zfs set mountpoint=legacy rpool/var/spool
echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab

If you created a /var/tmp dataset:

zfs set mountpoint=legacy rpool/var/tmp
echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab

If you created a /tmp dataset:

zfs set mountpoint=legacy rpool/tmp
echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab
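For reference, here is roughly what the appended `/etc/fstab` entries look like once all of the above commands have run (this sketch assumes you created both optional datasets; compare it against your actual file):

```
bpool/BOOT/ubuntu /boot zfs nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0
rpool/var/log /var/log zfs nodev,relatime 0 0
rpool/var/spool /var/spool zfs nodev,relatime 0 0
rpool/var/tmp /var/tmp zfs nodev,relatime 0 0
rpool/tmp /tmp zfs nodev,relatime 0 0
```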
## Step 6: First Boot

6.1 Snapshot the initial installation:

zfs snapshot bpool/BOOT/ubuntu@install
zfs snapshot rpool/ROOT/ubuntu@install

In the future, you will likely want to take snapshots before each upgrade, and remove old snapshots (including this one) at some point to save space.

6.2 Exit from the `chroot` environment back to the LiveCD environment:

exit

6.3 Run these commands in the LiveCD environment to unmount all filesystems:

mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a
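The long one-liner in step 6.3 can be hard to parse. Here are its filter stages run on made-up sample `mount` output, so you can see why nested mount points are unmounted first:

```shell
# Made-up 'mount' output, in mount order (oldest first)
sample='/dev/sda1 on /mnt type ext4 (rw)
rpool/ROOT/ubuntu on /mnt type zfs (rw)
/dev/sda2 on /mnt/boot type ext4 (rw)
udev on /mnt/dev type devtmpfs (rw)'

# Same stages as the one-liner: drop ZFS entries (zpool export
# handles those), reverse so newer (nested) mounts come first,
# and keep only the mount-point column for paths under /mnt
printf '%s\n' "$sample" | grep -v zfs | tac | awk '/\/mnt/ {print $3}'
# prints /mnt/dev, /mnt/boot, then /mnt last
```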
6.4 Reboot:

reboot

6.5 Wait for the newly installed system to boot normally. Log in as root.

6.6 Create a user account:

zfs create rpool/home/YOURUSERNAME
adduser YOURUSERNAME
cp -a /etc/skel/. /home/YOURUSERNAME
chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME

6.7 Add your user account to the default set of groups for an administrator:

usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo YOURUSERNAME

6.8 Mirror GRUB
@ -522,21 +537,22 @@ If you installed to multiple disks, install GRUB on the additional disks:
6.8a For legacy (BIOS) booting:

dpkg-reconfigure grub-pc

Hit enter until you get to the device selection screen.
Select (using the space bar) all of the disks (not partitions) in your pool.

6.8b UEFI

umount /boot/efi

For the second and subsequent disks (increment ubuntu-2 to -3, etc.):

dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
    of=/dev/disk/by-id/scsi-SATA_disk2-part2
efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
    -p 3 -L "ubuntu-2" -l '\EFI\ubuntu\grubx64.efi'
mount /boot/efi

## Step 7: (Optional) Configure Swap
@ -544,10 +560,10 @@ If you installed to multiple disks, install GRUB on the additional disks:
7.1 Create a volume dataset (zvol) for use as a swap device:

zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
    -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o secondarycache=none \
    -o com.sun:auto-snapshot=false rpool/swap

You can adjust the size (the `4G` part) to your needs.
@ -557,21 +573,21 @@ The compression algorithm is set to `zle` because it is the cheapest available a
**Caution**: Always use long `/dev/zvol` aliases in configuration files. Never use a short `/dev/zdX` device name.

mkswap -f /dev/zvol/rpool/swap
echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
echo RESUME=none > /etc/initramfs-tools/conf.d/resume

Setting `RESUME=none` disables resuming from hibernation, which does not work here: the zvol is not present (because the pool has not yet been imported) at the time the resume script runs. If resume is not disabled, the boot process hangs for 30 seconds waiting for the swap zvol to appear.

7.3 Enable the swap device:

swapon -av

## Step 8: Full Software Installation

8.1 Upgrade the minimal system:

apt dist-upgrade --yes

8.2 Install a regular set of software:
@ -579,15 +595,15 @@ Choose one of the following options:
8.2a Install a command-line environment only:

apt install --yes ubuntu-standard

8.2b Install a full GUI environment:

apt install --yes ubuntu-desktop

**Hint**: If you are installing a full GUI environment, you will likely want to manage your network with NetworkManager:

vi /etc/netplan/01-netcfg.yaml

network:
  version: 2
  renderer: NetworkManager
@ -596,7 +612,7 @@ Choose one of the following options:
As `/var/log` is already compressed by ZFS, logrotate's compression is going to burn CPU and disk I/O for (in most cases) very little gain. Also, if you are making snapshots of `/var/log`, logrotate's compression will actually waste space, as the uncompressed data will live on in the snapshot. You can edit the files in `/etc/logrotate.d` by hand to comment out `compress`, or use this loop (copy-and-paste highly recommended):

for file in /etc/logrotate.d/* ; do
    if grep -Eq "(^|[^#y])compress" "$file" ; then
        sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
    fi
@ -604,7 +620,7 @@ As `/var/log` is already compressed by ZFS, logrotate's compression is going t
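To see exactly what the substitution in the loop above does, you can run it on sample input. The `[^#y]` class is what leaves `delaycompress` alone (its `compress` is preceded by a `y`) while commenting out a bare `compress`:

```shell
# Sample logrotate directives: "compress" should be commented out,
# "delaycompress" should be left unchanged
printf '  compress\n  delaycompress\n' |
    sed -r 's/(^|[^#y])(compress)/\1#\2/'
# prints "  #compress" and "  delaycompress"
```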
8.4 Reboot:

reboot

## Step 9: Final Cleanup
@ -612,30 +628,30 @@ As `/var/log` is already compressed by ZFS, logrotate's compression is going t
9.2 Optional: Delete the snapshots of the initial installation:

sudo zfs destroy bpool/BOOT/ubuntu@install
sudo zfs destroy rpool/ROOT/ubuntu@install

9.3 Optional: Disable the root password

sudo usermod -p '*' root

9.4 Optional: Re-enable the graphical boot process:

If you prefer the graphical boot process, you can re-enable it now. If you are using LUKS, it makes the prompt look nicer.

sudo vi /etc/default/grub

Uncomment: GRUB_TIMEOUT_STYLE=hidden
Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT
Comment out: GRUB_TERMINAL=console
Save and quit.

sudo update-grub

**Note:** Ignore errors from `osprober`, if present.

9.5 Optional: For LUKS installs only, back up the LUKS header:

sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
    --header-backup-file luks1-header.dat

Store that backup somewhere safe (e.g. cloud storage). It is protected by your LUKS passphrase, but you may wish to use additional encryption.
@ -650,34 +666,34 @@ Go through [Step 1: Prepare The Install Environment](#step-1-prepare-the-install
For LUKS, first unlock the disk(s):

cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1

Repeat for additional disks, if this is a mirror or raidz topology.

Mount everything correctly:

zpool export -a
zpool import -N -R /mnt rpool
zpool import -N -R /mnt bpool
zfs mount rpool/ROOT/ubuntu
zfs mount -a

If needed, you can chroot into your installed environment:

mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
chroot /mnt /bin/bash --login
mount /boot
mount -a

Do whatever you need to do to fix your system.

When done, clean up:

exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a
reboot

### MPT2SAS
@ -703,11 +719,13 @@ Set a unique serial number on each virtual disk using libvirt or qemu (e.g. `-dr
To be able to use UEFI in guests (instead of only BIOS booting), run this on the host:

sudo apt install ovmf

sudo vi /etc/libvirt/qemu.conf

Uncomment these lines:

nvram = [
   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
   "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd"
]

sudo service libvirt-bin restart