Redirect all pages to new documentation resource

Signed-off-by: George Melikov <mail@gmelikov.ru>
George Melikov 2020-05-21 21:11:38 +03:00
parent c8c5d42f0a
commit 075ad350f8
57 changed files with 101 additions and 11268 deletions

@@ -1,7 +1,4 @@
* [Aaron Toponce's ZFS on Linux User Guide][zol-guide]
* [OpenZFS System Administration][openzfs-docs]
* [Oracle Solaris ZFS Administration Guide][solaris-docs]
[zol-guide]: https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
[openzfs-docs]: http://open-zfs.org/wiki/System_Administration
[solaris-docs]: http://docs.oracle.com/cd/E19253-01/819-5461/
This page was moved to: https://openzfs.github.io/openzfs-docs/Project%20and%20Community/Admin%20Documentation.html
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@@ -1,33 +1,3 @@
### Async Writes
This page was moved to: https://openzfs.github.io/openzfs-docs/Performance%20and%20tuning/Async%20Write.html
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points.
```
        |              o---------| <-- zfs_vdev_async_write_max_active
   ^    |             /^         |
   |    |            / |         |
 active |           /  |         |
  I/O   |          /   |         |
 count  |         /    |         |
        |        /     |         |
        |-------o      |         | <-- zfs_vdev_async_write_min_active
       0|_______^______|_________|
        0%      |      |           100% of zfs_dirty_data_max
                |      |
                |      `-- zfs_vdev_async_write_active_max_dirty_percent
                `--------- zfs_vdev_async_write_active_min_dirty_percent
```
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum. As that threshold is crossed, the
number of concurrent operations issued increases linearly to the maximum at
the specified maximum percentage of the dirty data allowed in the pool.
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between zfs_vdev_async_write_active_min_dirty_percent
and zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the
maximum percentage, this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle. In this case, we
must further throttle incoming writes, as described in the next section.
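On Linux, these breakpoints are exposed as `zfs` module parameters. As a quick sketch (assuming the zfs module is loaded; the paths are the standard sysfs locations), they can be inspected like this:
```
# Print the tunables that define the piece-wise linear function above
cd /sys/module/zfs/parameters
grep . zfs_vdev_async_write_min_active \
       zfs_vdev_async_write_max_active \
       zfs_vdev_async_write_active_min_dirty_percent \
       zfs_vdev_async_write_active_max_dirty_percent \
       zfs_dirty_data_max
```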
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@@ -1,178 +1,3 @@
There are a number of ways to control the ZFS Buildbot at a commit level. This page
provides a summary of the various options that the ZFS Buildbot supports and how they impact
testing. More detailed information regarding its implementation can be found at the
[ZFS Buildbot Github page](https://github.com/zfsonlinux/zfs-buildbot).
This page was moved to: https://openzfs.github.io/openzfs-docs/Developer%20Resources/Buildbot%20Options.html
## Choosing Builders
By default, all commits in your ZFS pull request are compiled by the BUILD
builders. Additionally, the top commit of your ZFS pull request is tested by
TEST builders. However, you can override which types of builders are used on a
per-commit basis by adding
`Requires-builders: <none|all|style|build|arch|distro|test|perf|coverage|unstable>` to your
commit message. A comma separated list of options can be
provided. Supported options are:
* `all`: This commit should be built by all available builders
* `none`: This commit should not be built by any builders
* `style`: This commit should be built by STYLE builders
* `build`: This commit should be built by all BUILD builders
* `arch`: This commit should be built by BUILD builders tagged as 'Architectures'
* `distro`: This commit should be built by BUILD builders tagged as 'Distributions'
* `test`: This commit should be built and tested by the TEST builders (excluding the Coverage TEST builders)
* `perf`: This commit should be built and tested by the PERF builders
* `coverage` : This commit should be built and tested by the Coverage TEST builders
* `unstable` : This commit should be built and tested by the Unstable TEST builders (currently only the Fedora Rawhide TEST builder)
A couple of examples of how to use `Requires-builders:` in commit messages can be found below.
### Preventing a commit from being built and tested.
```
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Requires-builders: none
```
### Submitting a commit to STYLE and TEST builders only.
```
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Requires-builders: style test
```
## Requiring SPL Versions
Currently, the ZFS Buildbot attempts to choose the correct SPL branch to build
based on a pull request's base branch. In the cases where a specific SPL version
needs to be built, the ZFS buildbot supports specifying an SPL version for pull
request testing. By opening a pull request against ZFS and adding `Requires-spl:`
in a commit message, you can instruct the buildbot to use a specific SPL version.
Below are examples of commit messages that specify the SPL version.
### Build SPL from a specific pull request
```
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Requires-spl: refs/pull/123/head
```
### Build SPL branch `spl-branch-name` from `zfsonlinux/spl` repository
```
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Requires-spl: spl-branch-name
```
## Requiring Kernel Version
Currently, Kernel.org builders will clone and build the master branch of Linux.
In cases where a specific version of the Linux kernel needs to be built, the ZFS
buildbot supports specifying the Linux kernel to be built via commit message.
By opening a pull request against ZFS and adding `Requires-kernel:` in a commit
message, you can instruct the buildbot to use a specific Linux kernel.
Below is an example commit message that specifies a specific Linux kernel tag.
### Build Linux Kernel Version 4.14
```
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Requires-kernel: v4.14
```
## Build Steps Overrides
Each builder will execute or skip build steps based on its default
preferences. In some scenarios, it might be possible to skip various build
steps. The ZFS buildbot supports overriding the defaults of all builders
in a commit message. The available overrides are:
* `Build-linux: <Yes|No>`: All builders should build Linux for this commit
* `Build-lustre: <Yes|No>`: All builders should build Lustre for this commit
* `Build-spl: <Yes|No>`: All builders should build the SPL for this commit
* `Build-zfs: <Yes|No>`: All builders should build ZFS for this commit
* `Built-in: <Yes|No>`: All Linux builds should build in SPL and ZFS
* `Check-lint: <Yes|No>`: All builders should perform lint checks for this commit
* `Configure-lustre: <options>`: Provide `<options>` as configure flags when building Lustre
* `Configure-spl: <options>`: Provide `<options>` as configure flags when building the SPL
* `Configure-zfs: <options>`: Provide `<options>` as configure flags when building ZFS
A couple of examples of how to use overrides in commit messages can be found below.
### Skip building the SPL and build Lustre without ldiskfs
```
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Build-lustre: Yes
Configure-lustre: --disable-ldiskfs
Build-spl: No
```
### Build ZFS Only
```
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Build-lustre: No
Build-spl: No
```
## Configuring Tests with the TEST File
At the top level of the ZFS source tree, there is the [`TEST`
file](https://github.com/zfsonlinux/zfs/blob/master/TEST) which contains variables
that control if and how a specific test should run. Below is a list of each variable
and a brief description of what it controls. A hypothetical example fragment follows the list.
* `TEST_PREPARE_WATCHDOG` - Enables the Linux kernel watchdog
* `TEST_PREPARE_SHARES` - Start NFS and Samba servers
* `TEST_SPLAT_SKIP` - Determines if `splat` testing is skipped
* `TEST_SPLAT_OPTIONS` - Command line options to provide to `splat`
* `TEST_ZTEST_SKIP` - Determines if `ztest` testing is skipped
* `TEST_ZTEST_TIMEOUT` - The length of time `ztest` should run
* `TEST_ZTEST_DIR` - Directory where `ztest` will create vdevs
* `TEST_ZTEST_OPTIONS` - Options to pass to `ztest`
* `TEST_ZTEST_CORE_DIR` - Directory for `ztest` to store core dumps
* `TEST_ZIMPORT_SKIP` - Determines if `zimport` testing is skipped
* `TEST_ZIMPORT_DIR` - Directory used during `zimport`
* `TEST_ZIMPORT_VERSIONS` - Source versions to test
* `TEST_ZIMPORT_POOLS` - Names of the pools for `zimport` to use for testing
* `TEST_ZIMPORT_OPTIONS` - Command line options to provide to `zimport`
* `TEST_XFSTESTS_SKIP` - Determines if `xfstest` testing is skipped
* `TEST_XFSTESTS_URL` - URL to download `xfstest` from
* `TEST_XFSTESTS_VER` - Name of the tarball to download from `TEST_XFSTESTS_URL`
* `TEST_XFSTESTS_POOL` - Name of the pool to create and use with `xfstest`
* `TEST_XFSTESTS_FS` - Name of dataset for use by `xfstest`
* `TEST_XFSTESTS_VDEV` - Name of the vdev used by `xfstest`
* `TEST_XFSTESTS_OPTIONS` - Command line options to provide to `xfstest`
* `TEST_ZFSTESTS_SKIP` - Determines if `zfs-tests` testing is skipped
* `TEST_ZFSTESTS_DIR` - Directory to store files and loopback devices
* `TEST_ZFSTESTS_DISKS` - Space delimited list of disks that `zfs-tests` is allowed to use
* `TEST_ZFSTESTS_DISKSIZE` - File size of file based vdevs used by `zfs-tests`
* `TEST_ZFSTESTS_ITERS` - Number of times `test-runner` should execute its set of tests
* `TEST_ZFSTESTS_OPTIONS` - Options to provide `zfs-tests`
* `TEST_ZFSTESTS_RUNFILE` - The runfile to use when running `zfs-tests`
* `TEST_ZFSTESTS_TAGS` - List of tags to provide to `test-runner`
* `TEST_ZFSSTRESS_SKIP` - Determines if `zfsstress` testing is skipped
* `TEST_ZFSSTRESS_URL` - URL to download `zfsstress` from
* `TEST_ZFSSTRESS_VER` - Name of the tarball to download from `TEST_ZFSSTRESS_URL`
* `TEST_ZFSSTRESS_RUNTIME` - Duration to run `runstress.sh`
* `TEST_ZFSSTRESS_POOL` - Name of pool to create and use for `zfsstress` testing
* `TEST_ZFSSTRESS_FS` - Name of dataset for use during `zfsstress` tests
* `TEST_ZFSSTRESS_FSOPT` - File system options to provide to `zfsstress`
* `TEST_ZFSSTRESS_VDEV` - Directory to store vdevs for use during `zfsstress` tests
* `TEST_ZFSSTRESS_OPTIONS` - Command line options to provide to `runstress.sh`
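For illustration only, a hypothetical `TEST` file fragment using a few of the variables above might look like this (the values are made-up examples, not recommendations):
```
TEST_PREPARE_SHARES="yes"          # start NFS and Samba servers
TEST_ZTEST_SKIP="yes"              # skip ztest for this run
TEST_ZFSTESTS_DISKS="vdb vdc vdd"  # disks zfs-tests may use
TEST_ZFSTESTS_DISKSIZE="8G"        # size of file-based vdevs
TEST_ZFSTESTS_RUNFILE="linux.run"  # runfile for zfs-tests
```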
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@@ -1,150 +1,3 @@
### GitHub Repositories
This page was moved to: https://openzfs.github.io/openzfs-docs/Developer%20Resources/Building%20ZFS.html
The official source for ZFS on Linux is maintained at GitHub by the [zfsonlinux][zol-org] organization. The project consists of two primary git repositories named [spl][spl-repo] and [zfs][zfs-repo], both of which are required to build ZFS on Linux.
**NOTE:** The SPL was merged into the [zfs][zfs-repo] repository; the last major release with a separate SPL is `0.7`.
* **SPL**: The SPL is a thin shim layer responsible for implementing the fundamental interfaces required by OpenZFS. It is this layer which allows OpenZFS to be used across multiple platforms.
* **ZFS**: The ZFS repository contains a copy of the upstream OpenZFS code which has been adapted and extended for Linux. The vast majority of the core OpenZFS code is self-contained and can be used without modification.
### Installing Dependencies
The first thing you'll need to do is prepare your environment by installing a full development tool chain. In addition, development headers for both the kernel and the following libraries must be available. It is important to note that if the development kernel headers for the currently running kernel aren't installed, the modules won't compile properly.
The following dependencies should be installed to build the latest ZFS 0.8 release.
* **RHEL/CentOS 7**:
```sh
sudo yum install epel-release gcc make autoconf automake libtool rpm-build dkms libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel
```
* **RHEL/CentOS 8, Fedora**:
```sh
sudo dnf install gcc make autoconf automake libtool rpm-build dkms libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python3 python3-devel python3-setuptools python3-cffi libffi-devel
```
* **Debian, Ubuntu**:
```sh
sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-$(uname -r) python3 python3-dev python3-setuptools python3-cffi libffi-dev
```
### Build Options
There are two options for building ZFS on Linux, the correct one largely depends on your requirements.
* **Packages**: Often it can be useful to build custom packages from git which can be installed on a system. This is the best way to perform integration testing with systemd, dracut, and udev. The downside to using packages is that it greatly increases the time required to build, install, and test a change.
* **In-tree**: Development can be done entirely in the SPL and ZFS source trees. This speeds up development by allowing developers to rapidly iterate on a patch. When working in-tree developers can leverage incremental builds, load/unload kernel modules, execute utilities, and verify all their changes with the ZFS Test Suite.
The remainder of this page focuses on the **in-tree** option which is the recommended method of development for the majority of changes. See the [[custom-packages]] page for additional information on building custom packages.
### Developing In-Tree
#### Clone from GitHub
Start by cloning the SPL and ZFS repositories from GitHub. The repositories have a **master** branch for development and a series of **\*-release** branches for tagged releases. After checking out the repository your clone will default to the master branch. Tagged releases may be built by checking out spl/zfs-x.y.z tags with matching version numbers or matching release branches. Avoid using mismatched versions; this can result in build failures due to interface changes.
**NOTE:** The SPL was merged into the [zfs][zfs-repo] repository; the last release with a separate SPL is `0.7`.
```
git clone https://github.com/zfsonlinux/zfs
```
If you need the 0.7 release or older:
```
git clone https://github.com/zfsonlinux/spl
```
#### Configure and Build
For developers working on a change, always create a new topic branch based off of master. This will make it easy to open a pull request with your change later. The master branch is kept stable with extensive [regression testing][buildbot] of every pull request before and after it's merged. Every effort is made to catch defects as early as possible and to keep them out of the tree. Developers should be comfortable frequently rebasing their work against the latest master branch.
If you want to build the 0.7 release or older, you should compile the SPL first:
```
cd ./spl
git checkout master
sh autogen.sh
./configure
make -s -j$(nproc)
```
In this example we'll use the master branch and walk through a stock **in-tree** build, so we don't need to build the SPL separately. Start by checking out the desired branch, then build the ZFS source in the traditional autotools fashion.
```
cd ./zfs
git checkout master
sh autogen.sh
./configure
make -s -j$(nproc)
```
**tip:** `--with-linux=PATH` and `--with-linux-obj=PATH` can be passed to configure to build against a kernel installed in a non-default location.
**tip:** `--enable-debug` can be passed to configure to enable all ASSERTs and additional correctness tests.
**tip:** for versions `<=0.7`, `--with-spl=PATH` and `--with-spl-obj=PATH` (where `PATH` is a full path) can be passed to configure if it is unable to locate the SPL.
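Putting the tips together, a sketch of a debug in-tree configure run against a kernel in a non-default location (the kernel path below is an example only; adjust it for your system):
```
# Example only -- adjust the kernel path for your system
./configure --enable-debug \
            --with-linux=/usr/src/linux-5.4.0 \
            --with-linux-obj=/usr/src/linux-5.4.0
```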
**Optional:** Build packages
```
make deb #example for Debian/Ubuntu
```
#### Install
You can run `zfs-tests.sh` without installing ZFS; see below. If you have reason to install ZFS after building it, pay attention to how your distribution handles kernel modules.
On Ubuntu, for example, the modules from this repository install in the `extra` kernel module path, which is not in the standard `depmod` search path. Therefore, for the duration of your testing, edit `/etc/depmod.d/ubuntu.conf` and add `extra` to the beginning of the search path.
You may then install using `sudo make install; sudo ldconfig; sudo depmod`. You'd uninstall with `sudo make uninstall; sudo ldconfig; sudo depmod`.
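For convenience, the install and uninstall sequences described above, collected into one sketch:
```
# Install the in-tree build, then refresh linker and module caches
sudo make install
sudo ldconfig
sudo depmod

# Uninstall again later
sudo make uninstall
sudo ldconfig
sudo depmod
```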
#### Running zloop.sh and zfs-tests.sh
If you wish to run the ZFS Test Suite (ZTS), then `ksh` and a few additional utilities must be installed.
* **RHEL/CentOS 7:**
```sh
sudo yum install ksh bc fio acl sysstat mdadm lsscsi parted attr dbench nfs-utils samba rng-tools pax perf
```
* **RHEL/CentOS 8, Fedora:**
```sh
sudo dnf install ksh bc fio acl sysstat mdadm lsscsi parted attr dbench nfs-utils samba rng-tools pax perf
```
* **Debian, Ubuntu:**
```sh
sudo apt install ksh bc fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-tools-common selinux-utils quota
```
There are a few helper scripts provided in the top-level scripts directory designed to aid developers working with in-tree builds.
* **zfs-helpers.sh:** Certain functionality (e.g. /dev/zvol/) depends on the ZFS provided udev helper scripts being installed on the system. This script can be used to create symlinks on the system from the installation location to the in-tree helper. These links must be in place to successfully run the ZFS Test Suite. The **-i** and **-r** options can be used to install and remove the symlinks.
```
sudo ./scripts/zfs-helpers.sh -i
```
* **zfs.sh:** The freshly built kernel modules can be loaded using `zfs.sh`. This script can later be used to unload the kernel modules with the **-u** option.
```
sudo ./scripts/zfs.sh
```
* **zloop.sh:** A wrapper to run ztest repeatedly with randomized arguments. The ztest command is a user space stress test designed to detect correctness issues by concurrently running a random set of test cases. If a crash is encountered, the ztest logs, any associated vdev files, and core file (if one exists) are collected and moved to the output directory for analysis.
```
sudo ./scripts/zloop.sh
```
* **zfs-tests.sh:** A wrapper which can be used to launch the ZFS Test Suite. Three loopback devices are created on top of sparse files located in `/var/tmp/` and used for the regression test. Detailed directions for the ZFS Test Suite can be found in the [README][zts-readme] located in the top-level tests directory.
```
./scripts/zfs-tests.sh -vx
```
**tip:** The **delegate** tests will be skipped unless group read permission is set on the zfs directory and its parents.
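For example, if the tree is cloned under your home directory, something along these lines satisfies that requirement (the paths are placeholders):
```
# Example only -- allow group read/traverse on the zfs directory and its parents
chmod g+rx /home/youruser /home/youruser/zfs
```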
[zol-org]: https://github.com/zfsonlinux/
[spl-repo]: https://github.com/zfsonlinux/spl
[zfs-repo]: https://github.com/zfsonlinux/zfs
[buildbot]: http://build.zfsonlinux.org/
[zts-readme]: https://github.com/zfsonlinux/zfs/tree/master/tests
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@@ -1,58 +1,3 @@
### Checksums and Their Use in ZFS
This page was moved to: https://openzfs.github.io/openzfs-docs/Basics%20concepts/Checksums.html
End-to-end checksums are a key feature of ZFS and an important differentiator
for ZFS over other RAID implementations and filesystems.
Advantages of end-to-end checksums include:
+ detects data corruption upon reading from media
+ blocks that are detected as corrupt are automatically repaired if possible, by
using the RAID protection in suitably configured pools, or redundant copies (see
the zfs `copies` property)
+ periodic scrubs can check data to detect and repair latent media degradation
(bit rot) and corruption from other sources
+ checksums on ZFS replication streams, `zfs send` and `zfs receive`, ensure the
data received is not corrupted by intervening storage or transport mechanisms
#### Checksum Algorithms
The checksum algorithms in ZFS can be changed for datasets (filesystems or
volumes). The checksum algorithm used for each block is stored in the block
pointer (metadata). The block checksum is calculated when the block is written,
so changing the algorithm only affects writes occurring after the change.
The checksum algorithm for a dataset can be changed by setting the `checksum`
property:
```bash
zfs set checksum=sha256 pool_name/dataset_name
```
| Checksum | Ok for dedup and nopwrite? | Compatible with other ZFS implementations? | Notes
|---|---|---|---
| on | see notes | yes | `on` is shorthand for `fletcher4` for non-deduped datasets and `sha256` for deduped datasets
| off | no | yes | Do not use `off`
| fletcher2 | no | yes | Deprecated implementation of Fletcher checksum, use `fletcher4` instead
| fletcher4 | no | yes | Fletcher algorithm, also used for `zfs send` streams
| sha256 | yes | yes | Default for deduped datasets
| noparity | no | yes | Do not use `noparity`
| sha512 | yes | requires pool feature `org.illumos:sha512` | salted `sha512` is currently not supported for any filesystem on the boot pools
| skein | yes | requires pool feature `org.illumos:skein` | salted `skein` is currently not supported for any filesystem on the boot pools
| edonr | yes | requires pool feature `org.illumos:edonr` | salted `edonr` is currently not supported for any filesystem on the boot pools
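For the feature-gated algorithms in the table above, you can first verify (and if necessary enable) the corresponding pool feature. A sketch using placeholder pool and dataset names:
```bash
# Placeholder names -- substitute your own pool and dataset
zpool get feature@sha512 pool_name
zpool set feature@sha512=enabled pool_name   # only needed if it reports "disabled"
zfs set checksum=sha512 pool_name/dataset_name
```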
#### Checksum Accelerators
ZFS has the ability to offload checksum operations to the Intel QuickAssist
Technology (QAT) adapters.
#### Checksum Microbenchmarks
Some ZFS features use microbenchmarks when the `zfs.ko` kernel module is loaded
to determine the optimal algorithm for checksums. The results of the microbenchmarks
are observable in the `/proc/spl/kstat/zfs` directory. The winning algorithm is
reported as the "fastest" and becomes the default. The default can be overridden
by setting zfs module parameters.
| Checksum | Results Filename | `zfs` module parameter
|---|---|---
| Fletcher4 | /proc/spl/kstat/zfs/fletcher_4_bench | zfs_fletcher_4_impl
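For example, the Fletcher4 results and the active implementation can be inspected, and the default overridden, as follows (run as root; `avx2` is just an example value and assumes CPU support):
```bash
# Benchmark results collected when the module was loaded
cat /proc/spl/kstat/zfs/fletcher_4_bench

# Available implementations; the selected one is typically shown in brackets
cat /sys/module/zfs/parameters/zfs_fletcher_4_impl

# Override the default (example value; requires a CPU with AVX2)
echo avx2 > /sys/module/zfs/parameters/zfs_fletcher_4_impl
```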
#### Disabling Checksums
While it may be tempting to disable checksums to improve CPU performance, it is
widely considered by the ZFS community to be an extraordinarily bad idea. Don't
disable checksums.
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@@ -1,128 +1,3 @@
The following instructions assume you are building from an official [release tarball][release] (version 0.8.0 or newer) or directly from the [git repository][git]. Most users should not need to do this and should preferentially use the distribution packages. As a general rule the distribution packages will be more tightly integrated, widely tested, and better supported. However, if your distribution of choice doesn't provide packages, or you're a developer and want to roll your own, here's how to do it.
This page was moved to: https://openzfs.github.io/openzfs-docs/Developer%20Resources/Custom%20Packages.html
The first thing to be aware of is that the build system is capable of generating several different types of packages. Which type of package you choose depends on what's supported on your platform and exactly what your needs are.
* **DKMS** packages contain only the source code and scripts for rebuilding the kernel modules. When the DKMS package is installed kernel modules will be built for all available kernels. Additionally, when the kernel is upgraded new kernel modules will be automatically built for that kernel. This is particularly convenient for desktop systems which receive frequent kernel updates. The downside is that because the DKMS packages build the kernel modules from source a full development environment is required which may not be appropriate for large deployments.
* **kmods** packages are binary kernel modules which are compiled against a specific version of the kernel. This means that if you update the kernel you must compile and install a new kmod package. If you don't frequently update your kernel, or if you're managing a large number of systems, then kmod packages are a good choice.
* **kABI-tracking kmod** packages are similar to standard binary kmods and may be used with Enterprise Linux distributions like Red Hat and CentOS. These distributions provide a stable kABI (Kernel Application Binary Interface) which allows the same binary modules to be used with new versions of the distribution provided kernel.
By default the build system will generate user packages and both DKMS and kmod style kernel packages if possible. The user packages can be used with either set of kernel packages and do not need to be rebuilt when the kernel is updated. You can also streamline the build process by building only the DKMS or kmod packages as shown below.
Be aware that when building directly from a git repository you must first run the *autogen.sh* script to create the *configure* script. This will require installing the GNU autotools packages for your distribution. To perform any of the builds, you must install all the necessary development tools and headers for your distribution.
It is important to note that if the development kernel headers for the currently running kernel aren't installed, the modules won't compile properly.
* [RHEL, CentOS and Fedora](#rhel-centos-and-fedora)
* [Debian and Ubuntu](#debian-and-ubuntu)
## RHEL, CentOS and Fedora
Make sure that the required packages are installed to build the latest ZFS 0.8 release:
* **RHEL/CentOS 7**:
```sh
sudo yum install epel-release gcc make autoconf automake libtool rpm-build dkms libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel
```
* **RHEL/CentOS 8, Fedora**:
```sh
sudo dnf install gcc make autoconf automake libtool rpm-build kernel-rpm-macros dkms libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python3 python3-devel python3-setuptools python3-cffi libffi-devel
```
[Get the source code](#get-the-source-code).
### DKMS
Building rpm-based DKMS and user packages can be done as follows:
```sh
$ cd zfs
$ ./configure
$ make -j1 rpm-utils rpm-dkms
$ sudo yum localinstall *.$(uname -p).rpm *.noarch.rpm
```
### kmod
The key thing to know when building a kmod package is that a specific Linux kernel must be specified. At configure time the build system will make an educated guess as to which kernel you want to build against. However, if configure is unable to locate your kernel development headers, or you want to build against a different kernel, you must specify the exact path with the *--with-linux* and *--with-linux-obj* options.
```sh
$ cd zfs
$ ./configure
$ make -j1 rpm-utils rpm-kmod
$ sudo yum localinstall *.$(uname -p).rpm
```
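If configure cannot locate your kernel headers, or you want to target a different kernel, the paths can be given explicitly; a sketch (the kernel path shown is an example only):
```sh
$ cd zfs
$ ./configure --with-linux=/usr/src/kernels/4.18.0-custom \
              --with-linux-obj=/usr/src/kernels/4.18.0-custom
$ make -j1 rpm-utils rpm-kmod
```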
### kABI-tracking kmod
The process for building kABI-tracking kmods is almost identical to building normal kmods. However, it will only produce binaries which can be used by multiple kernels if the distribution supports a stable kABI. To request a kABI-tracking package, the *--with-spec=redhat* option must be passed to configure.
**NOTE:** This type of package is not available for Fedora.
```sh
$ cd zfs
$ ./configure --with-spec=redhat
$ make -j1 rpm-utils rpm-kmod
$ sudo yum localinstall *.$(uname -p).rpm
```
## Debian and Ubuntu
Make sure that the required packages are installed:
```sh
sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-$(uname -r) python3 python3-dev python3-setuptools python3-cffi libffi-dev
```
[Get the source code](#get-the-source-code).
### kmod
The key thing to know when building a kmod package is that a specific Linux kernel must be specified. At configure time the build system will make an educated guess as to which kernel you want to build against. However, if configure is unable to locate your kernel development headers, or you want to build against a different kernel, you must specify the exact path with the *--with-linux* and *--with-linux-obj* options.
```sh
$ cd zfs
$ ./configure --enable-systemd
$ make -j1 deb-utils deb-kmod
$ for file in *.deb; do sudo gdebi -q --non-interactive $file; done
```
### DKMS
Building deb-based DKMS and user packages can be done as follows:
```sh
$ sudo apt-get install dkms
$ cd zfs
$ ./configure --enable-systemd
$ make -j1 deb-utils deb-dkms
$ for file in *.deb; do sudo gdebi -q --non-interactive $file; done
```
## Get the Source Code
### Released Tarball
The released tarball contains the latest fully tested and released version of ZFS. This is the preferred source code location for use in production systems. If you want to use the official released tarballs, then use the following commands to fetch and prepare the source.
```sh
$ wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-x.y.z.tar.gz
$ tar -xzf zfs-x.y.z.tar.gz
```
### Git Master Branch
The Git *master* branch contains the latest version of the software, and will probably contain fixes that, for some reason, weren't included in the released tarball. This is the preferred source code location for developers who intend to modify ZFS. If you would like to use the git version, you can clone it from GitHub and prepare the source like this.
```sh
$ git clone https://github.com/zfsonlinux/zfs.git
$ cd zfs
$ ./autogen.sh
```
Once the source has been prepared you'll need to decide what kind of packages you're building and jump to the appropriate section above. Note that not all package types are supported for all platforms.
[release]: https://github.com/zfsonlinux/zfs/releases/latest
[git]: https://github.com/zfsonlinux/zfs
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@@ -1,33 +1 @@
This experimental guide has been made official at [[Debian Buster Root on ZFS]].
If you have an existing system installed from the experimental guide, adjust your sources:
vi /etc/apt/sources.list.d/buster-backports.list
deb http://deb.debian.org/debian buster-backports main contrib
deb-src http://deb.debian.org/debian buster-backports main contrib
vi /etc/apt/preferences.d/90_zfs
Package: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-dkms zfs-initramfs zfs-test zfsutils-linux zfs-zed
Pin: release n=buster-backports
Pin-Priority: 990
This will allow you to upgrade from the locally-built packages to the official buster-backports packages.
You should set a root password before upgrading:
passwd
Apply updates:
apt update
apt dist-upgrade
Reboot:
reboot
If the bpool fails to import, then enter the rescue shell (which requires a root password) and run:
zpool import -f bpool
zpool export bpool
reboot
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@@ -1,743 +1,3 @@
### Caution
* This HOWTO uses a whole physical disk.
* Do not use these instructions for dual-booting.
* Backup your data. Any existing data will be lost.
This page was moved to: https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
### System Requirements
* [64-bit Debian GNU/Linux Buster Live CD w/ GUI (e.g. gnome iso)](https://cdimage.debian.org/mirror/cdimage/release/current-live/amd64/iso-hybrid/)
* [A 64-bit kernel is *strongly* encouraged.](https://github.com/zfsonlinux/zfs/wiki/FAQ#32-bit-vs-64-bit-systems)
* Installing on a drive which presents 4KiB logical sectors (a “4Kn” drive) only works with UEFI booting. This is not unique to ZFS. [GRUB does not and will not work on 4Kn with legacy (BIOS) booting.](http://savannah.gnu.org/bugs/?46700)
Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory is recommended for normal performance in basic workloads. If you wish to use deduplication, you will need [massive amounts of RAM](http://wiki.freebsd.org/ZFSTuningGuide#Deduplication). Enabling deduplication is a permanent change that cannot be easily reverted.
## Support
If you need help, reach out to the community using the [zfs-discuss mailing list](https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists) or IRC at #zfsonlinux on [freenode](https://freenode.net/). If you have a bug report or feature request related to this HOWTO, please [file a new issue](https://github.com/zfsonlinux/zfs/issues/new) and mention @rlaager.
## Contributing
Edit permission on this wiki is restricted. Also, GitHub wikis do not support pull requests. However, you can clone the wiki using git.
1) `git clone https://github.com/zfsonlinux/zfs.wiki.git`
2) Make your changes.
3) Use `git diff > my-changes.patch` to create a patch. (Advanced git users may wish to `git commit` to a branch and `git format-patch`.)
4) [File a new issue](https://github.com/zfsonlinux/zfs/issues/new), mention @rlaager, and attach the patch.
## Encryption
This guide supports three different encryption options: unencrypted, LUKS (full-disk encryption), and ZFS native encryption. With any option, all ZFS features are fully available.
Unencrypted does not encrypt anything, of course. With no encryption happening, this option naturally has the best performance.
LUKS encrypts almost everything: the OS, swap, home directories, and anything else. The only unencrypted data is the bootloader, kernel, and initrd. The system cannot boot without the passphrase being entered at the console. Performance is good, but LUKS sits underneath ZFS, so if multiple disks (mirror or raidz topologies) are used, the data has to be encrypted once per disk.
ZFS native encryption encrypts the data and most metadata in the root pool. It does not encrypt dataset or snapshot names or properties. The boot pool is not encrypted at all, but it only contains the bootloader, kernel, and initrd. (Unless you put a password in `/etc/fstab`, the initrd is unlikely to contain sensitive data.) The system cannot boot without the passphrase being entered at the console. Performance is good. As the encryption happens in ZFS, even if multiple disks (mirror or raidz topologies) are used, the data only has to be encrypted once.
## Step 1: Prepare The Install Environment
1.1 Boot the Debian GNU/Linux Live CD. If prompted, login with the username `user` and password `live`. Connect your system to the Internet as appropriate (e.g. join your WiFi network).
1.2 Optional: Install and start the OpenSSH server in the Live CD environment:
If you have a second system, using SSH to access the target system can be convenient.
sudo apt update
sudo apt install --yes openssh-server
sudo systemctl restart ssh
**Hint:** You can find your IP address with `ip addr show scope global | grep inet`. Then, from your main machine, connect with `ssh user@IP`.
1.3 Become root:
sudo -i
1.4 Setup and update the repositories:
echo deb http://deb.debian.org/debian buster contrib >> /etc/apt/sources.list
echo deb http://deb.debian.org/debian buster-backports main contrib >> /etc/apt/sources.list
apt update
1.5 Install ZFS in the Live CD environment:
apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-$(uname -r)
apt install --yes -t buster-backports --no-install-recommends zfs-dkms
modprobe zfs
apt install --yes -t buster-backports zfsutils-linux
* The dkms dependency is installed manually just so it comes from buster and not buster-backports. This is not critical.
* We need to get the module built and loaded before installing zfsutils-linux or [zfs-mount.service will fail to start](https://github.com/zfsonlinux/zfs/issues/9599).
## Step 2: Disk Formatting
2.1 Set a variable with the disk name:
DISK=/dev/disk/by-id/scsi-SATA_disk1
Always use the long `/dev/disk/by-id/*` aliases with ZFS. Using the `/dev/sd*` device nodes directly can cause sporadic import failures, especially on systems that have more than one storage pool.
**Hints:**
* `ls -la /dev/disk/by-id` will list the aliases.
* Are you doing this in a virtual machine? If your virtual disk is missing from `/dev/disk/by-id`, use `/dev/vda` if you are using KVM with virtio; otherwise, read the [troubleshooting](#troubleshooting) section.
2.2 If you are re-using a disk, clear it as necessary:
If the disk was previously used in an MD array, zero the superblock:
apt install --yes mdadm
mdadm --zero-superblock --force $DISK
Clear the partition table:
sgdisk --zap-all $DISK
2.3 Partition your disk(s):
Run this if you need legacy (BIOS) booting:
sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
Run this for UEFI booting (for use now or in the future):
sgdisk -n2:1M:+512M -t2:EF00 $DISK
Run this for the boot pool:
sgdisk -n3:0:+1G -t3:BF01 $DISK
Choose one of the following options:
2.3a Unencrypted or ZFS native encryption:
sgdisk -n4:0:0 -t4:BF01 $DISK
2.3b LUKS:
sgdisk -n4:0:0 -t4:8300 $DISK
If you are creating a mirror or raidz topology, repeat the partitioning commands for all the disks which will be part of the pool.
2.4 Create the boot pool:
zpool create -o ashift=12 -d \
-o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \
-o feature@empty_bpobj=enabled \
-o feature@enabled_txg=enabled \
-o feature@extensible_dataset=enabled \
-o feature@filesystem_limits=enabled \
-o feature@hole_birth=enabled \
-o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \
-o feature@userobj_accounting=enabled \
-o feature@zpool_checkpoint=enabled \
-o feature@spacemap_v2=enabled \
-o feature@project_quota=enabled \
-o feature@resilver_defer=enabled \
-o feature@allocation_classes=enabled \
-O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
-O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt bpool ${DISK}-part3
You should not need to customize any of the options for the boot pool.
GRUB does not support all of the zpool features. See `spa_feature_names` in [grub-core/fs/zfs/zfs.c](http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276). This step creates a separate boot pool for `/boot` with the features limited to only those that GRUB supports, allowing the root pool to use any/all features. Note that GRUB opens the pool read-only, so all read-only compatible features are "supported" by GRUB.
**Hints:**
* If you are creating a mirror or raidz topology, create the pool using `zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3` (or replace `mirror` with `raidz`, `raidz2`, or `raidz3` and list the partitions from additional disks).
* The pool name is arbitrary. If changed, the new name must be used consistently. The `bpool` convention originated in this HOWTO.
2.5 Create the root pool:
Choose one of the following options:
2.5a Unencrypted:
zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt rpool ${DISK}-part4
2.5b LUKS:
apt install --yes cryptsetup
cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
cryptsetup luksOpen ${DISK}-part4 luks1
zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt rpool /dev/mapper/luks1
2.5c ZFS native encryption:
zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
-O mountpoint=/ -R /mnt rpool ${DISK}-part4
* The use of `ashift=12` is recommended here because many drives today have 4KiB (or larger) physical sectors, even though they present 512B logical sectors. Also, a future replacement drive may have 4KiB physical sectors (in which case `ashift=12` is desirable) or 4KiB logical sectors (in which case `ashift=12` is required).
* Setting `-O acltype=posixacl` enables POSIX ACLs globally. If you do not want this, remove that option, but later add `-o acltype=posixacl` (note: lowercase "o") to the `zfs create` for `/var/log`, as [journald requires ACLs](https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported)
* Setting `normalization=formD` eliminates some corner cases relating to UTF-8 filename normalization. It also implies `utf8only=on`, which means that only UTF-8 filenames are allowed. If you care to support non-UTF-8 filenames, do not use this option. For a discussion of why requiring UTF-8 filenames may be a bad idea, see [The problems with enforced UTF-8 only filenames](http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames).
* Setting `relatime=on` is a middle ground between classic POSIX `atime` behavior (with its significant performance impact) and `atime=off` (which provides the best performance by completely disabling atime updates). Since Linux 2.6.30, `relatime` has been the default for other filesystems. See [RedHat's documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime) for further information.
* Setting `xattr=sa` [vastly improves the performance of extended attributes](https://github.com/zfsonlinux/zfs/commit/82a37189aac955c81a59a5ecc3400475adb56355). Inside ZFS, extended attributes are used to implement POSIX ACLs. Extended attributes can also be used by user-space applications. [They are used by some desktop GUI applications.](https://en.wikipedia.org/wiki/Extended_file_attributes#Linux) [They can be used by Samba to store Windows ACLs and DOS attributes; they are required for a Samba Active Directory domain controller.](https://wiki.samba.org/index.php/Setting_up_a_Share_Using_Windows_ACLs) Note that [`xattr=sa` is Linux-specific.](http://open-zfs.org/wiki/Platform_code_differences) If you move your `xattr=sa` pool to another OpenZFS implementation besides ZFS-on-Linux, extended attributes will not be readable (though your data will be). If portability of extended attributes is important to you, omit the `-O xattr=sa` above. Even if you do not want `xattr=sa` for the whole pool, it is probably fine to use it for `/var/log`.
* Make sure to include the `-part4` portion of the drive path. If you forget that, you are specifying the whole disk, which ZFS will then re-partition, and you will lose the bootloader partition(s).
* For LUKS, the key size chosen is 512 bits. However, XTS mode requires two keys, so the LUKS key is split in half. Thus, `-s 512` means AES-256.
* ZFS native encryption uses `aes-256-ccm` by default. [AES-GCM seems to be generally preferred over AES-CCM](https://crypto.stackexchange.com/questions/6842/how-to-choose-between-aes-ccm-and-aes-gcm-for-storage-volume-encryption), [is faster now](https://github.com/zfsonlinux/zfs/pull/9749#issuecomment-569132997), and [will be even faster in the future](https://github.com/zfsonlinux/zfs/pull/9749).
* Your passphrase will likely be the weakest link. Choose wisely. See [section 5 of the cryptsetup FAQ](https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions#5-security-aspects) for guidance.
**Hints:**
* If you are creating a mirror or raidz topology, create the pool using `zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4` (or replace `mirror` with `raidz`, `raidz2`, or `raidz3` and list the partitions from additional disks). For LUKS, use `/dev/mapper/luks1`, `/dev/mapper/luks2`, etc., which you will have to create using `cryptsetup`.
* The pool name is arbitrary. If changed, the new name must be used consistently. On systems that can automatically install to ZFS, the root pool is named `rpool` by default.
## Step 3: System Installation
3.1 Create filesystem datasets to act as containers:
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=off -o mountpoint=none bpool/BOOT
On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through `pkg image-update` or `beadm`. Similar functionality for APT is possible but currently unimplemented. Even without such a tool, this dataset layout can still be used for manually created clones.
3.2 Create filesystem datasets for the root and boot filesystems:
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
zfs mount rpool/ROOT/debian
zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/debian
zfs mount bpool/BOOT/debian
With ZFS, it is not normally necessary to use a mount command (either `mount` or `zfs mount`). This situation is an exception because of `canmount=noauto`.
3.3 Create datasets:
zfs create rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off rpool/var
zfs create -o canmount=off rpool/var/lib
zfs create rpool/var/log
zfs create rpool/var/spool
The datasets below are optional, depending on your preferences and/or software
choices.
If you wish to exclude these from snapshots:
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
chmod 1777 /mnt/var/tmp
If you use /opt on this system:
zfs create rpool/opt
If you use /srv on this system:
zfs create rpool/srv
If you use /usr/local on this system:
zfs create -o canmount=off rpool/usr
zfs create rpool/usr/local
If this system will have games installed:
zfs create rpool/var/games
If this system will store local email in /var/mail:
zfs create rpool/var/mail
If this system will use Snap packages:
zfs create rpool/var/snap
If you use /var/www on this system:
zfs create rpool/var/www
If this system will use GNOME:
zfs create rpool/var/lib/AccountsService
If this system will use Docker (which manages its own datasets & snapshots):
zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
If this system will use NFS (locking):
zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
A tmpfs is recommended later, but if you want a separate dataset for /tmp:
zfs create -o com.sun:auto-snapshot=false rpool/tmp
chmod 1777 /mnt/tmp
The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling back user data such as logs (in `/var/log`). This will be especially important if/when a `beadm` or similar utility is integrated. The `com.sun:auto-snapshot` setting is used by some ZFS snapshot utilities to exclude transient data.
If you do nothing extra, `/tmp` will be stored as part of the root filesystem. Alternatively, you can create a separate dataset for `/tmp`, as shown above. This keeps the `/tmp` data out of snapshots of your root filesystem. It also allows you to set a quota on `rpool/tmp`, if you want to limit the maximum space used. Otherwise, you can use a tmpfs (RAM filesystem) later.
3.4 Install the minimal system:
debootstrap buster /mnt
zfs set devices=off rpool
The `debootstrap` command leaves the new system in an unconfigured state. An alternative to using `debootstrap` is to copy the entirety of a working system into the new ZFS root.
## Step 4: System Configuration
4.1 Configure the hostname (change `HOSTNAME` to the desired hostname).
echo HOSTNAME > /mnt/etc/hostname
vi /mnt/etc/hosts
Add a line:
127.0.1.1 HOSTNAME
or if the system has a real name in DNS:
127.0.1.1 FQDN HOSTNAME
**Hint:** Use `nano` if you find `vi` confusing.
4.2 Configure the network interface:
Find the interface name:
ip addr show
Adjust NAME below to match your interface name:
vi /mnt/etc/network/interfaces.d/NAME
auto NAME
iface NAME inet dhcp
Customize this file if the system is not a DHCP client.
4.3 Configure the package sources:
vi /mnt/etc/apt/sources.list
deb http://deb.debian.org/debian buster main contrib
deb-src http://deb.debian.org/debian buster main contrib
vi /mnt/etc/apt/sources.list.d/buster-backports.list
deb http://deb.debian.org/debian buster-backports main contrib
deb-src http://deb.debian.org/debian buster-backports main contrib
vi /mnt/etc/apt/preferences.d/90_zfs
Package: libnvpair1linux libuutil1linux libzfs2linux libzfslinux-dev libzpool2linux python3-pyzfs pyzfs-doc spl spl-dkms zfs-dkms zfs-dracut zfs-initramfs zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed
Pin: release n=buster-backports
Pin-Priority: 990
4.4 Bind the virtual filesystems from the LiveCD environment to the new system and `chroot` into it:
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
chroot /mnt /usr/bin/env DISK=$DISK bash --login
**Note:** This is using `--rbind`, not `--bind`.
4.5 Configure a basic system environment:
ln -s /proc/self/mounts /etc/mtab
apt update
apt install --yes locales
dpkg-reconfigure locales
Even if you prefer a non-English system language, always ensure that `en_US.UTF-8` is available.
dpkg-reconfigure tzdata
4.6 Install ZFS in the chroot environment for the new system:
apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
apt install --yes zfs-initramfs
4.7 For LUKS installs only, set up crypttab:
apt install --yes cryptsetup
echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
luks,discard,initramfs > /etc/crypttab
* The use of `initramfs` is a work-around because [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).
**Hint:** If you are creating a mirror or raidz topology, repeat the `/etc/crypttab` entries for `luks2`, etc. adjusting for each disk.
4.8 Install GRUB
Choose one of the following options:
4.8a Install GRUB for legacy (BIOS) booting
apt install --yes grub-pc
Install GRUB to the disk(s), not the partition(s).
4.8b Install GRUB for UEFI booting
apt install dosfstools
mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
mkdir /boot/efi
echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \
/boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
mount /boot/efi
apt install --yes grub-efi-amd64 shim-signed
* The `-s 1` for `mkdosfs` is only necessary for drives which present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size (given the partition size of 512 MiB) for FAT32. It also works fine on drives which present 512 B sectors.
**Note:** If you are creating a mirror or raidz topology, this step only installs GRUB on the first disk. The other disk(s) will be handled later.
4.9 Set a root password
passwd
4.10 Enable importing bpool
This ensures that `bpool` is always imported, regardless of whether `/etc/zfs/zpool.cache` exists, whether it is in the cachefile or not, or whether `zfs-import-scan.service` is enabled.
```
vi /etc/systemd/system/zfs-import-bpool.service
[Unit]
DefaultDependencies=no
Before=zfs-import-scan.service
Before=zfs-import-cache.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -N -o cachefile=none bpool
[Install]
WantedBy=zfs-import.target
```
systemctl enable zfs-import-bpool.service
4.11 Optional (but recommended): Mount a tmpfs to /tmp
If you chose to create a `/tmp` dataset above, skip this step, as they are mutually exclusive choices. Otherwise, you can put `/tmp` on a tmpfs (RAM filesystem) by enabling the `tmp.mount` unit.
cp /usr/share/systemd/tmp.mount /etc/systemd/system/
systemctl enable tmp.mount
4.12 Optional (but kindly requested): Install popcon
The `popularity-contest` package reports the list of packages installed on your system. Showing that ZFS is popular may be helpful in terms of long-term attention from the distro.
apt install --yes popularity-contest
Choose Yes at the prompt.
## Step 5: GRUB Installation
5.1 Verify that the ZFS boot filesystem is recognized:
grub-probe /boot
5.2 Refresh the initrd files:
update-initramfs -u -k all
**Note:** When using LUKS, this will print "WARNING could not determine root device from /etc/fstab". This is because [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).
5.3 Work around GRUB's missing zpool-features support:
vi /etc/default/grub
Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
5.4 Optional (but highly recommended): Make debugging GRUB easier:
vi /etc/default/grub
Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
Uncomment: GRUB_TERMINAL=console
Save and quit.
Later, once the system has rebooted twice and you are sure everything is working, you can undo these changes, if desired.
5.5 Update the boot configuration:
update-grub
**Note:** Ignore errors from `osprober`, if present.
5.6 Install the boot loader
5.6a For legacy (BIOS) booting, install GRUB to the MBR:
grub-install $DISK
Note that you are installing GRUB to the whole disk, not a partition.
If you are creating a mirror or raidz topology, repeat the `grub-install` command for each disk in the pool.
5.6b For UEFI booting, install GRUB:
grub-install --target=x86_64-efi --efi-directory=/boot/efi \
--bootloader-id=debian --recheck --no-floppy
It is not necessary to specify the disk here. If you are creating a mirror or raidz topology, the additional disks will be handled later.
5.7 Verify that the ZFS module is installed:
ls /boot/grub/*/zfs.mod
5.8 Fix filesystem mount ordering
Until there is support for mounting `/boot` in the initramfs, we also need to mount that, because it was marked `canmount=noauto`. Also, with UEFI, we need to ensure it is mounted before its child filesystem `/boot/efi`.
We need to activate `zfs-mount-generator`. This makes systemd aware of the separate mountpoints, which is important for things like `/var/log` and `/var/tmp`. In turn, `rsyslog.service` depends on `var-log.mount` by way of `local-fs.target` and services using the `PrivateTmp` feature of systemd automatically use `After=var-tmp.mount`.
For UEFI booting, unmount /boot/efi first:
umount /boot/efi
Everything else applies to both BIOS and UEFI booting:
zfs set mountpoint=legacy bpool/BOOT/debian
echo bpool/BOOT/debian /boot zfs \
nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
mkdir /etc/zfs/zfs-list.cache
touch /etc/zfs/zfs-list.cache/rpool
ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
zed -F &
Verify that zed updated the cache by making sure this is not empty:
cat /etc/zfs/zfs-list.cache/rpool
If it is empty, force a cache update and check again:
zfs set canmount=noauto rpool/ROOT/debian
Stop zed:
fg
Press Ctrl-C.
Fix the paths to eliminate /mnt:
sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/rpool
## Step 6: First Boot
6.1 Snapshot the initial installation:
zfs snapshot bpool/BOOT/debian@install
zfs snapshot rpool/ROOT/debian@install
In the future, you will likely want to take snapshots before each upgrade, and remove old snapshots (including this one) at some point to save space.
6.2 Exit from the `chroot` environment back to the LiveCD environment:
exit
6.3 Run these commands in the LiveCD environment to unmount all filesystems:
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a
6.4 Reboot:
reboot
6.5 Wait for the newly installed system to boot normally. Login as root.
6.6 Create a user account:
zfs create rpool/home/YOURUSERNAME
adduser YOURUSERNAME
cp -a /etc/skel/. /home/YOURUSERNAME
chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME
6.7 Add your user account to the default set of groups for an administrator:
usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME
6.8 Mirror GRUB
If you installed to multiple disks, install GRUB on the additional disks:
6.8a For legacy (BIOS) booting:
dpkg-reconfigure grub-pc
Hit enter until you get to the device selection screen.
Select (using the space bar) all of the disks (not partitions) in your pool.
6.8b UEFI
umount /boot/efi
For the second and subsequent disks (increment debian-2 to -3, etc.):
dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
of=/dev/disk/by-id/scsi-SATA_disk2-part2
efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
-p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
mount /boot/efi
## Step 7: (Optional) Configure Swap
**Caution**: On systems with extremely high memory pressure, using a zvol for swap can result in lockup, regardless of how much swap is still available. This issue is currently being investigated in: https://github.com/zfsonlinux/zfs/issues/7734
7.1 Create a volume dataset (zvol) for use as a swap device:
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
You can adjust the size (the `4G` part) to your needs.
The compression algorithm is set to `zle` because it is the cheapest available algorithm. As this guide recommends `ashift=12` (4 kiB blocks on disk), the common case of a 4 kiB page size means that no compression algorithm can reduce I/O. The exception is all-zero pages, which are dropped by ZFS; but some form of compression has to be enabled to get this behavior.
7.2 Configure the swap device:
**Caution**: Always use long `/dev/zvol` aliases in configuration files. Never use a short `/dev/zdX` device name.
mkswap -f /dev/zvol/rpool/swap
echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
echo RESUME=none > /etc/initramfs-tools/conf.d/resume
The `RESUME=none` is necessary to disable resuming from hibernation. This does not work, as the zvol is not present (because the pool has not yet been imported) at the time the resume script runs. If it is not disabled, the boot process hangs for 30 seconds waiting for the swap zvol to appear.
7.3 Enable the swap device:
swapon -av
## Step 8: Full Software Installation
8.1 Upgrade the minimal system:
apt dist-upgrade --yes
8.2 Install a regular set of software:
tasksel
8.3 Optional: Disable log compression:
As `/var/log` is already compressed by ZFS, logrotate's compression is going to burn CPU and disk I/O for (in most cases) very little gain. Also, if you are making snapshots of `/var/log`, logrotate's compression will actually waste space, as the uncompressed data will live on in the snapshot. You can edit the files in `/etc/logrotate.d` by hand to comment out `compress`, or use this loop (copy-and-paste highly recommended):
for file in /etc/logrotate.d/* ; do
if grep -Eq "(^|[^#y])compress" "$file" ; then
sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
fi
done
8.4 Reboot:
reboot
### Step 9: Final Cleanup
9.1 Wait for the system to boot normally. Login using the account you created. Ensure the system (including networking) works normally.
9.2 Optional: Delete the snapshots of the initial installation:
sudo zfs destroy bpool/BOOT/debian@install
sudo zfs destroy rpool/ROOT/debian@install
9.3 Optional: Disable the root password
sudo usermod -p '*' root
9.4 Optional: Re-enable the graphical boot process:
If you prefer the graphical boot process, you can re-enable it now. If you are using LUKS, it makes the prompt look nicer.
sudo vi /etc/default/grub
Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
Comment out GRUB_TERMINAL=console
Save and quit.
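For reference, a minimal sketch of how those two lines might look after editing (the rest of the file is left untouched):

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
#GRUB_TERMINAL=console
```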
sudo update-grub
**Note:** Ignore errors from `osprober`, if present.
9.5 Optional: For LUKS installs only, backup the LUKS header:
sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
--header-backup-file luks1-header.dat
Store that backup somewhere safe (e.g. cloud storage). It is protected by your LUKS passphrase, but you may wish to use additional encryption.
**Hint:** If you created a mirror or raidz topology, repeat this for each LUKS volume (`luks2`, etc.).
## Troubleshooting
### Rescuing using a Live CD
Go through [Step 1: Prepare The Install Environment](#step-1-prepare-the-install-environment).
For LUKS, first unlock the disk(s):
apt install --yes cryptsetup
cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
Repeat for additional disks, if this is a mirror or raidz topology.
Mount everything correctly:
zpool export -a
zpool import -N -R /mnt rpool
zpool import -N -R /mnt bpool
zfs load-key -a
zfs mount rpool/ROOT/debian
zfs mount -a
If needed, you can chroot into your installed environment:
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
chroot /mnt /bin/bash --login
mount /boot
mount -a
Do whatever you need to do to fix your system.
When done, clean up:
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a
reboot
### MPT2SAS
Most problem reports for this tutorial involve `mpt2sas` hardware that does slow asynchronous drive initialization, like some IBM M1015 or OEM-branded cards that have been flashed to the reference LSI firmware.
The basic problem is that disks on these controllers are not visible to the Linux kernel until after the regular system is started, and ZoL does not hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.
Most LSI cards are perfectly compatible with ZoL. If your card has this glitch, try setting `ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X` in `/etc/default/zfs`. The system will wait X seconds for all drives to appear before importing the pool.
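For example, a sketch of that setting in `/etc/default/zfs` (the 15-second value is only illustrative; regenerating the initramfs afterwards with `update-initramfs -u -k all` is assumed to be needed for it to take effect):

```
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='15'
```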
### Areca
Systems that require the `arcsas` blob driver should add it to the `/etc/initramfs-tools/modules` file and run `update-initramfs -u -k all`.
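As a sketch, those two steps (assuming the module really is named `arcsas`) boil down to:

```
echo arcsas >> /etc/initramfs-tools/modules
update-initramfs -u -k all
```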
Upgrade or downgrade the Areca driver if something like `RIP: 0010:[<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20` appears anywhere in the kernel log. ZoL is unstable on systems that emit this error message.
### VMware
* Set `disk.EnableUUID = "TRUE"` in the vmx file or vsphere configuration. Doing this ensures that `/dev/disk` aliases are created in the guest.
### QEMU/KVM/XEN
Set a unique serial number on each virtual disk using libvirt or qemu (e.g. `-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890`).
To be able to use UEFI in guests (instead of only BIOS booting), run this on the host:
sudo apt install ovmf
sudo vi /etc/libvirt/qemu.conf
Uncomment these lines:
nvram = [
"/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
"/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
"/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
"/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd"
]
sudo systemctl restart libvirtd.service
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,70 +1,3 @@
# Supported boot parameters
* rollback=\<on|yes|1\> Do a rollback of specified snapshot.
* zfs_debug=\<on|yes|1\> Debug the initrd script
* zfs_force=\<on|yes|1\> Force importing the pool. Should not be necessary.
* zfs=\<off|no|0\> Don't try to import ANY pool, mount ANY filesystem or even load the module.
* rpool=\<pool\> Use this pool for root pool.
* bootfs=\<pool\>/\<dataset\> Use this dataset for root filesystem.
* root=\<pool\>/\<dataset\> Use this dataset for root filesystem.
* root=ZFS=\<pool\>/\<dataset\> Use this dataset for root filesystem.
* root=zfs:\<pool\>/\<dataset\> Use this dataset for root filesystem.
* root=zfs:AUTO Try to detect both pool and rootfs
This page was moved to: https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20GNU%20Linux%20initrd%20documentation.html
In all these cases, \<dataset\> could also be \<dataset\>@\<snapshot\>.
The reason there are so many supported boot options to get the root filesystem is that there are a lot of different ways to boot ZFS out there, and I wanted to make sure I supported them all.
# Pool imports
## Import using /dev/disk/by-*
If the variable <code>USE_DISK_BY_ID</code> is set in the file <code>/etc/default/zfs</code>, the initrd will try to import using the /dev/disk/by-* links. It will try to import in this order:
1. /dev/disk/by-vdev
2. /dev/disk/by-\*
3. /dev
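For example, a sketch of `/etc/default/zfs` enabling this behavior (the exact value shown is an assumption; the script only cares that the variable is set/enabled):

```
USE_DISK_BY_ID='yes'
```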
## Import using cache file
If all of these imports fail (or if <code>USE_DISK_BY_ID</code> is unset), it will then try to import using the cache file.
## Last ditch attempt at importing
If that ALSO fails, it will try one more time, without any <code>-d</code> or <code>-c</code> options.
# Booting
## Booting from snapshot:
Enter the snapshot for the <code>root=</code> parameter like in this example:
```
linux /ROOT/debian-1@/boot/vmlinuz-3.2.0-4-amd64 root=ZFS=rpool/ROOT/debian-1@some_snapshot ro boot=zfs $bootfs quiet
```
This will clone the snapshot <code>rpool/ROOT/debian-1@some_snapshot</code> into the filesystem <code>rpool/ROOT/debian-1_some_snapshot</code> and use that as the root filesystem. The original filesystem and snapshot are left alone in this case.
**BEWARE** that it will first blindly destroy the <code>rpool/ROOT/debian-1_some_snapshot</code> filesystem before trying to clone the snapshot into it again. So if you've booted from the same snapshot previously and made some changes in that root filesystem, they will be undone by the destruction of the filesystem.
## Snapshot rollback
From version <code>0.6.4-1-3</code> it is now also possible to specify <code>rollback=1</code> to do a rollback of the snapshot instead of cloning it. **BEWARE** that this will destroy _all_ snapshots done after the specified snapshot!
## Select snapshot dynamically
From version <code>0.6.4-1-3</code> it is now also possible to specify a NULL snapshot name (such as <code>root=rpool/ROOT/debian-1@</code>) and if so, the initrd script will discover all snapshots below that filesystem (sans the at sign), and output a list of snapshots for the user to choose from.
## Booting from native encrypted filesystem
Although there is currently no support for native encryption in ZFS On Linux, there is a patch floating around 'out there', and the initrd supports loading the key and unlocking such an encrypted filesystem.
## Separated filesystems
### Descended filesystems
If there are separate filesystems (for example a separate dataset for <code>/usr</code>), the snapshot boot code will try to find the snapshot under each filesystem and clone (or roll back) them.
Example:
```
rpool/ROOT/debian-1@some_snapshot
rpool/ROOT/debian-1/usr@some_snapshot
```
These will create the following filesystems respectively (if not doing a rollback):
```
rpool/ROOT/debian-1_some_snapshot
rpool/ROOT/debian-1/usr_some_snapshot
```
The initrd code will use the <code>mountpoint</code> option (if any) in the original (without the snapshot part) dataset to find _where_ it should mount the dataset. Otherwise, it will use the name of the dataset below the root filesystem (<code>rpool/ROOT/debian-1</code> in this example) for the mount point.
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,702 +1,3 @@
### Newer release available
* See [[Debian Buster Root on ZFS]] for new installs.
This page was moved to: https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Stretch%20Root%20on%20ZFS.html
### Caution
* This HOWTO uses a whole physical disk.
* Do not use these instructions for dual-booting.
* Backup your data. Any existing data will be lost.
### System Requirements
* [64-bit Debian GNU/Linux Stretch Live CD](http://cdimage.debian.org/debian-cd/current-live/amd64/iso-hybrid/)
* [A 64-bit kernel is *strongly* encouraged.](https://github.com/zfsonlinux/zfs/wiki/FAQ#32-bit-vs-64-bit-systems)
* Installing on a drive which presents 4KiB logical sectors (a “4Kn” drive) only works with UEFI booting. This is not unique to ZFS. [GRUB does not and will not work on 4Kn with legacy (BIOS) booting.](http://savannah.gnu.org/bugs/?46700)
Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory is recommended for normal performance in basic workloads. If you wish to use deduplication, you will need [massive amounts of RAM](http://wiki.freebsd.org/ZFSTuningGuide#Deduplication). Enabling deduplication is a permanent change that cannot be easily reverted.
## Support
If you need help, reach out to the community using the [zfs-discuss mailing list](https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists) or IRC at #zfsonlinux on [freenode](https://freenode.net/). If you have a bug report or feature request related to this HOWTO, please [file a new issue](https://github.com/zfsonlinux/zfs/issues/new) and mention @rlaager.
## Contributing
Edit permission on this wiki is restricted. Also, GitHub wikis do not support pull requests. However, you can clone the wiki using git.
1) `git clone https://github.com/zfsonlinux/zfs.wiki.git`
2) Make your changes.
3) Use `git diff > my-changes.patch` to create a patch. (Advanced git users may wish to `git commit` to a branch and `git format-patch`.)
4) [File a new issue](https://github.com/zfsonlinux/zfs/issues/new), mention @rlaager, and attach the patch.
## Encryption
This guide supports two different encryption options: unencrypted and LUKS (full-disk encryption). ZFS native encryption has not yet been released. With either option, all ZFS features are fully available.
Unencrypted does not encrypt anything, of course. With no encryption happening, this option naturally has the best performance.
LUKS encrypts almost everything: the OS, swap, home directories, and anything else. The only unencrypted data is the bootloader, kernel, and initrd. The system cannot boot without the passphrase being entered at the console. Performance is good, but LUKS sits underneath ZFS, so if multiple disks (mirror or raidz topologies) are used, the data has to be encrypted once per disk.
## Step 1: Prepare The Install Environment
1.1 Boot the Debian GNU/Linux Live CD. If prompted, login with the username `user` and password `live`. Connect your system to the Internet as appropriate (e.g. join your WiFi network).
1.2 Optional: Install and start the OpenSSH server in the Live CD environment:
If you have a second system, using SSH to access the target system can be convenient.
$ sudo apt update
$ sudo apt install --yes openssh-server
$ sudo systemctl restart ssh
**Hint:** You can find your IP address with `ip addr show scope global | grep inet`. Then, from your main machine, connect with `ssh user@IP`.
1.3 Become root:
$ sudo -i
1.4 Setup and update the repositories:
# echo deb http://deb.debian.org/debian stretch contrib >> /etc/apt/sources.list
# echo deb http://deb.debian.org/debian stretch-backports main contrib >> /etc/apt/sources.list
# apt update
1.5 Install ZFS in the Live CD environment:
# apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-$(uname -r)
# apt install --yes -t stretch-backports zfs-dkms
# modprobe zfs
* The dkms dependency is installed manually just so it comes from stretch and not stretch-backports. This is not critical.
## Step 2: Disk Formatting
2.1 If you are re-using a disk, clear it as necessary:
If the disk was previously used in an MD array, zero the superblock:
# apt install --yes mdadm
# mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1
Clear the partition table:
# sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1
2.2 Partition your disk(s):
Run this if you need legacy (BIOS) booting:
# sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/disk/by-id/scsi-SATA_disk1
Run this for UEFI booting (for use now or in the future):
# sgdisk -n2:1M:+512M -t2:EF00 /dev/disk/by-id/scsi-SATA_disk1
Run this for the boot pool:
# sgdisk -n3:0:+1G -t3:BF01 /dev/disk/by-id/scsi-SATA_disk1
Choose one of the following options:
2.2a Unencrypted:
# sgdisk -n4:0:0 -t4:BF01 /dev/disk/by-id/scsi-SATA_disk1
2.2b LUKS:
# sgdisk -n4:0:0 -t4:8300 /dev/disk/by-id/scsi-SATA_disk1
Always use the long `/dev/disk/by-id/*` aliases with ZFS. Using the `/dev/sd*` device nodes directly can cause sporadic import failures, especially on systems that have more than one storage pool.
**Hints:**
* `ls -la /dev/disk/by-id` will list the aliases.
* Are you doing this in a virtual machine? If your virtual disk is missing from `/dev/disk/by-id`, use `/dev/vda` if you are using KVM with virtio; otherwise, read the [troubleshooting](#troubleshooting) section.
* If you are creating a mirror or raidz topology, repeat the partitioning commands for all the disks which will be part of the pool.
2.3 Create the boot pool:
# zpool create -o ashift=12 -d \
-o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \
-o feature@empty_bpobj=enabled \
-o feature@enabled_txg=enabled \
-o feature@extensible_dataset=enabled \
-o feature@filesystem_limits=enabled \
-o feature@hole_birth=enabled \
-o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \
-o feature@userobj_accounting=enabled \
-O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
-O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt \
bpool /dev/disk/by-id/scsi-SATA_disk1-part3
You should not need to customize any of the options for the boot pool.
GRUB does not support all of the zpool features. See `spa_feature_names` in [grub-core/fs/zfs/zfs.c](http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276). This step creates a separate boot pool for `/boot` with the features limited to only those that GRUB supports, allowing the root pool to use any/all features. Note that GRUB opens the pool read-only, so all read-only compatible features are "supported" by GRUB.
**Hints:**
* If you are creating a mirror or raidz topology, create the pool using `zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3` (or replace `mirror` with `raidz`, `raidz2`, or `raidz3` and list the partitions from additional disks).
* The pool name is arbitrary. If changed, the new name must be used consistently. The `bpool` convention originated in this HOWTO.
2.4 Create the root pool:
Choose one of the following options:
2.4a Unencrypted:
# zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt \
rpool /dev/disk/by-id/scsi-SATA_disk1-part4
2.4b LUKS:
# apt install --yes cryptsetup
# cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 \
/dev/disk/by-id/scsi-SATA_disk1-part4
# cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
# zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt \
rpool /dev/mapper/luks1
* The use of `ashift=12` is recommended here because many drives today have 4KiB (or larger) physical sectors, even though they present 512B logical sectors. Also, a future replacement drive may have 4KiB physical sectors (in which case `ashift=12` is desirable) or 4KiB logical sectors (in which case `ashift=12` is required).
* Setting `-O acltype=posixacl` enables POSIX ACLs globally. If you do not want this, remove that option, but later add `-o acltype=posixacl` (note: lowercase "o") to the `zfs create` for `/var/log`, as [journald requires ACLs](https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported)
* Setting `normalization=formD` eliminates some corner cases relating to UTF-8 filename normalization. It also implies `utf8only=on`, which means that only UTF-8 filenames are allowed. If you care to support non-UTF-8 filenames, do not use this option. For a discussion of why requiring UTF-8 filenames may be a bad idea, see [The problems with enforced UTF-8 only filenames](http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames).
* Setting `relatime=on` is a middle ground between classic POSIX `atime` behavior (with its significant performance impact) and `atime=off` (which provides the best performance by completely disabling atime updates). Since Linux 2.6.30, `relatime` has been the default for other filesystems. See [RedHat's documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime) for further information.
* Setting `xattr=sa` [vastly improves the performance of extended attributes](https://github.com/zfsonlinux/zfs/commit/82a37189aac955c81a59a5ecc3400475adb56355). Inside ZFS, extended attributes are used to implement POSIX ACLs. Extended attributes can also be used by user-space applications. [They are used by some desktop GUI applications.](https://en.wikipedia.org/wiki/Extended_file_attributes#Linux) [They can be used by Samba to store Windows ACLs and DOS attributes; they are required for a Samba Active Directory domain controller.](https://wiki.samba.org/index.php/Setting_up_a_Share_Using_Windows_ACLs) Note that [`xattr=sa` is Linux-specific.](http://open-zfs.org/wiki/Platform_code_differences) If you move your `xattr=sa` pool to another OpenZFS implementation besides ZFS-on-Linux, extended attributes will not be readable (though your data will be). If portability of extended attributes is important to you, omit the `-O xattr=sa` above. Even if you do not want `xattr=sa` for the whole pool, it is probably fine to use it for `/var/log`.
* Make sure to include the `-part4` portion of the drive path. If you forget that, you are specifying the whole disk, which ZFS will then re-partition, and you will lose the bootloader partition(s).
* For LUKS, the key size chosen is 512 bits. However, XTS mode requires two keys, so the LUKS key is split in half. Thus, `-s 512` means AES-256.
* Your passphrase will likely be the weakest link. Choose wisely. See [section 5 of the cryptsetup FAQ](https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions#5-security-aspects) for guidance.
**Hints:**
* If you are creating a mirror or raidz topology, create the pool using `zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4` (or replace `mirror` with `raidz`, `raidz2`, or `raidz3` and list the partitions from additional disks). For LUKS, use `/dev/mapper/luks1`, `/dev/mapper/luks2`, etc., which you will have to create using `cryptsetup`.
* The pool name is arbitrary. If changed, the new name must be used consistently. On systems that can automatically install to ZFS, the root pool is named `rpool` by default.
## Step 3: System Installation
3.1 Create filesystem datasets to act as containers:
# zfs create -o canmount=off -o mountpoint=none rpool/ROOT
# zfs create -o canmount=off -o mountpoint=none bpool/BOOT
On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through `pkg image-update` or `beadm`. Similar functionality for APT is possible but currently unimplemented. Even without such a tool, this dataset layout can still be used for manually created clones.
3.2 Create filesystem datasets for the root and boot filesystems:
# zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
# zfs mount rpool/ROOT/debian
# zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/debian
# zfs mount bpool/BOOT/debian
With ZFS, it is not normally necessary to use a mount command (either `mount` or `zfs mount`). This situation is an exception because of `canmount=noauto`.
3.3 Create datasets:
# zfs create rpool/home
# zfs create -o mountpoint=/root rpool/home/root
# zfs create -o canmount=off rpool/var
# zfs create -o canmount=off rpool/var/lib
# zfs create rpool/var/log
# zfs create rpool/var/spool
The datasets below are optional, depending on your preferences and/or software choices:
If you wish to exclude these from snapshots:
# zfs create -o com.sun:auto-snapshot=false rpool/var/cache
# zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
# chmod 1777 /mnt/var/tmp
If you use /opt on this system:
# zfs create rpool/opt
If you use /srv on this system:
# zfs create rpool/srv
If you use /usr/local on this system:
# zfs create -o canmount=off rpool/usr
# zfs create rpool/usr/local
If this system will have games installed:
# zfs create rpool/var/games
If this system will store local email in /var/mail:
# zfs create rpool/var/mail
If this system will use Snap packages:
# zfs create rpool/var/snap
If you use /var/www on this system:
# zfs create rpool/var/www
If this system will use GNOME:
# zfs create rpool/var/lib/AccountsService
If this system will use Docker (which manages its own datasets & snapshots):
# zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
If this system will use NFS (locking):
# zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
A tmpfs is recommended later, but if you want a separate dataset for /tmp:
# zfs create -o com.sun:auto-snapshot=false rpool/tmp
# chmod 1777 /mnt/tmp
The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling back user data such as logs (in `/var/log`). This will be especially important if/when a `beadm` or similar utility is integrated. The `com.sun:auto-snapshot` setting is used by some ZFS snapshot utilities to exclude transient data.
If you do nothing extra, `/tmp` will be stored as part of the root filesystem. Alternatively, you can create a separate dataset for `/tmp`, as shown above. This keeps the `/tmp` data out of snapshots of your root filesystem. It also allows you to set a quota on `rpool/tmp`, if you want to limit the maximum space used. Otherwise, you can use a tmpfs (RAM filesystem) later.
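For example, a quota on the optional `/tmp` dataset could be set like this (the 5G figure is purely illustrative):

```
# zfs set quota=5G rpool/tmp
```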
3.4 Install the minimal system:
# debootstrap stretch /mnt
# zfs set devices=off rpool
The `debootstrap` command leaves the new system in an unconfigured state. An alternative to using `debootstrap` is to copy the entirety of a working system into the new ZFS root.
## Step 4: System Configuration
4.1 Configure the hostname (change `HOSTNAME` to the desired hostname).
# echo HOSTNAME > /mnt/etc/hostname
# vi /mnt/etc/hosts
Add a line:
127.0.1.1 HOSTNAME
or if the system has a real name in DNS:
127.0.1.1 FQDN HOSTNAME
**Hint:** Use `nano` if you find `vi` confusing.
4.2 Configure the network interface:
Find the interface name:
# ip addr show
# vi /mnt/etc/network/interfaces.d/NAME
auto NAME
iface NAME inet dhcp
Customize this file if the system is not a DHCP client.
4.3 Configure the package sources:
# vi /mnt/etc/apt/sources.list
deb http://deb.debian.org/debian stretch main contrib
deb-src http://deb.debian.org/debian stretch main contrib
# vi /mnt/etc/apt/sources.list.d/stretch-backports.list
deb http://deb.debian.org/debian stretch-backports main contrib
deb-src http://deb.debian.org/debian stretch-backports main contrib
# vi /mnt/etc/apt/preferences.d/90_zfs
Package: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms zfs-dkms zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed
Pin: release n=stretch-backports
Pin-Priority: 990
4.4 Bind the virtual filesystems from the LiveCD environment to the new system and `chroot` into it:
# mount --rbind /dev /mnt/dev
# mount --rbind /proc /mnt/proc
# mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash --login
**Note:** This is using `--rbind`, not `--bind`.
4.5 Configure a basic system environment:
# ln -s /proc/self/mounts /etc/mtab
# apt update
# apt install --yes locales
# dpkg-reconfigure locales
Even if you prefer a non-English system language, always ensure that `en_US.UTF-8` is available.
# dpkg-reconfigure tzdata
4.6 Install ZFS in the chroot environment for the new system:
# apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
# apt install --yes zfs-initramfs
4.7 For LUKS installs only, setup crypttab:
# apt install --yes cryptsetup
# echo luks1 UUID=$(blkid -s UUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part4) none \
luks,discard,initramfs > /etc/crypttab
* The use of `initramfs` is a work-around for [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).
**Hint:** If you are creating a mirror or raidz topology, repeat the `/etc/crypttab` entries for `luks2`, etc. adjusting for each disk.
4.8 Install GRUB
Choose one of the following options:
4.8a Install GRUB for legacy (BIOS) booting
# apt install --yes grub-pc
Install GRUB to the disk(s), not the partition(s).
4.8b Install GRUB for UEFI booting
# apt install dosfstools
# mkdosfs -F 32 -s 1 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part2
# mkdir /boot/efi
# echo PARTUUID=$(blkid -s PARTUUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part2) \
/boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
# mount /boot/efi
# apt install --yes grub-efi-amd64 shim
* The `-s 1` for `mkdosfs` is only necessary for drives which present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size (given the partition size of 512 MiB) for FAT32. It also works fine on drives which present 512 B sectors.
**Note:** If you are creating a mirror or raidz topology, this step only installs GRUB on the first disk. The other disk(s) will be handled later.
4.9 Set a root password
# passwd
4.10 Enable importing bpool
This ensures that `bpool` is always imported, regardless of whether `/etc/zfs/zpool.cache` exists, whether it is in the cachefile or not, or whether `zfs-import-scan.service` is enabled.
```
# vi /etc/systemd/system/zfs-import-bpool.service
[Unit]
DefaultDependencies=no
Before=zfs-import-scan.service
Before=zfs-import-cache.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -N -o cachefile=none bpool
[Install]
WantedBy=zfs-import.target
# systemctl enable zfs-import-bpool.service
```
4.11 Optional (but recommended): Mount a tmpfs to /tmp
If you chose to create a `/tmp` dataset above, skip this step, as they are mutually exclusive choices. Otherwise, you can put `/tmp` on a tmpfs (RAM filesystem) by enabling the `tmp.mount` unit.
# cp /usr/share/systemd/tmp.mount /etc/systemd/system/
# systemctl enable tmp.mount
4.12 Optional (but kindly requested): Install popcon
The `popularity-contest` package reports the list of packages installed on your system. Showing that ZFS is popular may be helpful in terms of long-term attention from the distro.
# apt install --yes popularity-contest
Choose Yes at the prompt.
## Step 5: GRUB Installation
5.1 Verify that the ZFS boot filesystem is recognized:
# grub-probe /boot
zfs
5.2 Refresh the initrd files:
# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-4.9.0-8-amd64
**Note:** When using LUKS, this will print "WARNING could not determine root device from /etc/fstab". This is because [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).
5.3 Workaround GRUB's missing zpool-features support:
# vi /etc/default/grub
Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
5.4 Optional (but highly recommended): Make debugging GRUB easier:
# vi /etc/default/grub
Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
Uncomment: GRUB_TERMINAL=console
Save and quit.
Later, once the system has rebooted twice and you are sure everything is working, you can undo these changes, if desired.
5.5 Update the boot configuration:
# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.9.0-8-amd64
Found initrd image: /boot/initrd.img-4.9.0-8-amd64
done
**Note:** Ignore errors from `osprober`, if present.
5.6 Install the boot loader
5.6a For legacy (BIOS) booting, install GRUB to the MBR:
# grub-install /dev/disk/by-id/scsi-SATA_disk1
Installing for i386-pc platform.
Installation finished. No error reported.
Do not reboot the computer until you get exactly that result message. Note that you are installing GRUB to the whole disk, not a partition.
If you are creating a mirror or raidz topology, repeat the `grub-install` command for each disk in the pool.
5.6b For UEFI booting, install GRUB:
# grub-install --target=x86_64-efi --efi-directory=/boot/efi \
--bootloader-id=debian --recheck --no-floppy
5.7 Verify that the ZFS module is installed:
# ls /boot/grub/*/zfs.mod
5.8 Fix filesystem mount ordering
[Until ZFS gains a systemd mount generator](https://github.com/zfsonlinux/zfs/issues/4898), there are races between mounting filesystems and starting certain daemons. In practice, the issues (e.g. [#5754](https://github.com/zfsonlinux/zfs/issues/5754)) seem to be with certain filesystems in `/var`, specifically `/var/log` and `/var/tmp`. Setting these to use `legacy` mounting, and listing them in `/etc/fstab` makes systemd aware that these are separate mountpoints. In turn, `rsyslog.service` depends on `var-log.mount` by way of `local-fs.target` and services using the `PrivateTmp` feature of systemd automatically use `After=var-tmp.mount`.
Until there is support for mounting `/boot` in the initramfs, we also need to mount that, because it was marked `canmount=noauto`. Also, with UEFI, we need to ensure it is mounted before its child filesystem `/boot/efi`.
`rpool` is guaranteed to be imported by the initramfs, so there is no point in adding `x-systemd.requires=zfs-import.target` to those filesystems.
For UEFI booting, unmount /boot/efi first:
# umount /boot/efi
Everything else applies to both BIOS and UEFI booting:
# zfs set mountpoint=legacy bpool/BOOT/debian
# echo bpool/BOOT/debian /boot zfs \
nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
# zfs set mountpoint=legacy rpool/var/log
# echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab
# zfs set mountpoint=legacy rpool/var/spool
# echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab
If you created a /var/tmp dataset:
# zfs set mountpoint=legacy rpool/var/tmp
# echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab
If you created a /tmp dataset:
# zfs set mountpoint=legacy rpool/tmp
# echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab
## Step 6: First Boot
6.1 Snapshot the initial installation:
# zfs snapshot bpool/BOOT/debian@install
# zfs snapshot rpool/ROOT/debian@install
In the future, you will likely want to take snapshots before each upgrade, and remove old snapshots (including this one) at some point to save space.
6.2 Exit from the `chroot` environment back to the LiveCD environment:
# exit
6.3 Run these commands in the LiveCD environment to unmount all filesystems:
# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
# zpool export -a
6.4 Reboot:
# reboot
6.5 Wait for the newly installed system to boot normally. Login as root.
6.6 Create a user account:
# zfs create rpool/home/YOURUSERNAME
# adduser YOURUSERNAME
# cp -a /etc/skel/.[!.]* /home/YOURUSERNAME
# chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME
6.7 Add your user account to the default set of groups for an administrator:
# usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME
6.8 Mirror GRUB
If you installed to multiple disks, install GRUB on the additional disks:
6.8a For legacy (BIOS) booting:
# dpkg-reconfigure grub-pc
Hit enter until you get to the device selection screen.
Select (using the space bar) all of the disks (not partitions) in your pool.
6.8b UEFI
# umount /boot/efi
For the second and subsequent disks (increment debian-2 to -3, etc.):
# dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
of=/dev/disk/by-id/scsi-SATA_disk2-part2
# efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
-p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
# mount /boot/efi
## Step 7: (Optional) Configure Swap
**Caution**: On systems with extremely high memory pressure, using a zvol for swap can result in lockup, regardless of how much swap is still available. This issue is currently being investigated in: https://github.com/zfsonlinux/zfs/issues/7734
7.1 Create a volume dataset (zvol) for use as a swap device:
# zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
You can adjust the size (the `4G` part) to your needs.
The compression algorithm is set to `zle` because it is the cheapest available algorithm. As this guide recommends `ashift=12` (4 kiB blocks on disk), the common case of a 4 kiB page size means that no compression algorithm can reduce I/O. The exception is all-zero pages, which are dropped by ZFS; but some form of compression has to be enabled to get this behavior.
7.2 Configure the swap device:
**Caution**: Always use long `/dev/zvol` aliases in configuration files. Never use a short `/dev/zdX` device name.
# mkswap -f /dev/zvol/rpool/swap
# echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
# echo RESUME=none > /etc/initramfs-tools/conf.d/resume
The `RESUME=none` is necessary to disable resuming from hibernation. This does not work, as the zvol is not present (because the pool has not yet been imported) at the time the resume script runs. If it is not disabled, the boot process hangs for 30 seconds waiting for the swap zvol to appear.
7.3 Enable the swap device:
# swapon -av
## Step 8: Full Software Installation
8.1 Upgrade the minimal system:
# apt dist-upgrade --yes
8.2 Install a regular set of software:
# tasksel
8.3 Optional: Disable log compression:
As `/var/log` is already compressed by ZFS, logrotate's compression is going to burn CPU and disk I/O for (in most cases) very little gain. Also, if you are making snapshots of `/var/log`, logrotate's compression will actually waste space, as the uncompressed data will live on in the snapshot. You can edit the files in `/etc/logrotate.d` by hand to comment out `compress`, or use this loop (copy-and-paste highly recommended):
# for file in /etc/logrotate.d/* ; do
if grep -Eq "(^|[^#y])compress" "$file" ; then
sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
fi
done
8.4 Reboot:
# reboot
### Step 9: Final Cleanup
9.1 Wait for the system to boot normally. Login using the account you created. Ensure the system (including networking) works normally.
9.2 Optional: Delete the snapshots of the initial installation:
$ sudo zfs destroy bpool/BOOT/debian@install
$ sudo zfs destroy rpool/ROOT/debian@install
9.3 Optional: Disable the root password
$ sudo usermod -p '*' root
9.4 Optional: Re-enable the graphical boot process:
If you prefer the graphical boot process, you can re-enable it now. If you are using LUKS, it makes the prompt look nicer.
$ sudo vi /etc/default/grub
Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
Comment out GRUB_TERMINAL=console
Save and quit.
$ sudo update-grub
**Note:** Ignore errors from `osprober`, if present.
9.5 Optional: For LUKS installs only, backup the LUKS header:
$ sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
--header-backup-file luks1-header.dat
Store that backup somewhere safe (e.g. cloud storage). It is protected by your LUKS passphrase, but you may wish to use additional encryption.
**Hint:** If you created a mirror or raidz topology, repeat this for each LUKS volume (`luks2`, etc.).
## Troubleshooting
### Rescuing using a Live CD
Go through [Step 1: Prepare The Install Environment](#step-1-prepare-the-install-environment).
This will automatically import your pool. Export it and re-import it to get the mounts right:
For LUKS, first unlock the disk(s):
# apt install --yes cryptsetup
# cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
Repeat for additional disks, if this is a mirror or raidz topology.
# zpool export -a
# zpool import -N -R /mnt rpool
# zpool import -N -R /mnt bpool
# zfs mount rpool/ROOT/debian
# zfs mount -a
If needed, you can chroot into your installed environment:
# mount --rbind /dev /mnt/dev
# mount --rbind /proc /mnt/proc
# mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash --login
# mount /boot
# mount -a
Do whatever you need to do to fix your system.
When done, clean up:
# exit
# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
# zpool export -a
# reboot
### MPT2SAS
Most problem reports for this tutorial involve `mpt2sas` hardware that does slow asynchronous drive initialization, like some IBM M1015 or OEM-branded cards that have been flashed to the reference LSI firmware.
The basic problem is that disks on these controllers are not visible to the Linux kernel until after the regular system is started, and ZoL does not hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.
Most LSI cards are perfectly compatible with ZoL. If your card has this glitch, try setting `ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X` in `/etc/default/zfs`. The system will wait X seconds for all drives to appear before importing the pool.
### Areca
Systems that require the `arcsas` blob driver should add it to the `/etc/initramfs-tools/modules` file and run `update-initramfs -u -k all`.
Upgrade or downgrade the Areca driver if something like `RIP: 0010:[<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20` appears anywhere in the kernel log. ZoL is unstable on systems that emit this error message.
### VMware
* Set `disk.EnableUUID = "TRUE"` in the vmx file or vsphere configuration. Doing this ensures that `/dev/disk` aliases are created in the guest.
### QEMU/KVM/XEN
Set a unique serial number on each virtual disk using libvirt or qemu (e.g. `-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890`).
To be able to use UEFI in guests (instead of only BIOS booting), run this on the host:
$ sudo apt install ovmf
$ sudo vi /etc/libvirt/qemu.conf
Uncomment these lines:
nvram = [
"/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
"/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd"
]
$ sudo service libvirt-bin restart
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,42 +1,4 @@
Official ZFS on Linux [DKMS](https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support) style packages are available from the [Debian GNU/Linux repository](https://tracker.debian.org/pkg/zfs-linux) for the following configurations. The packages previously hosted at archive.zfsonlinux.org will not be updated and are not recommended for new installations.
**Debian Releases:** Stretch (oldstable), Buster (stable), and newer (testing, sid)
**Architectures:** amd64
This page was moved to: https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/index.html
# Table of contents
- [Installation](#installation)
- [Related Links](#related-links)
## Installation
For Debian Buster, ZFS packages are included in the [contrib repository](https://packages.debian.org/source/buster/zfs-linux).
If you want to boot from ZFS, see [[Debian Buster Root on ZFS]] instead. For troubleshooting existing installations on Stretch, see [[Debian Stretch Root on ZFS]].
The [backports repository](https://backports.debian.org/Instructions/) often provides newer releases of ZFS. You can use it as follows:
Add the backports repository:
# vi /etc/apt/sources.list.d/buster-backports.list
deb http://deb.debian.org/debian buster-backports main contrib
deb-src http://deb.debian.org/debian buster-backports main contrib
# vi /etc/apt/preferences.d/90_zfs
Package: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms zfs-dkms zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed
Pin: release n=buster-backports
Pin-Priority: 990
Update the list of packages:
# apt update
Install the kernel headers and other dependencies:
# apt install --yes dpkg-dev linux-headers-$(uname -r) linux-image-amd64
Install the zfs packages:
# apt-get install zfs-dkms zfsutils-linux
## Related Links
- [[Debian GNU Linux initrd documentation]]
- [[Debian Buster Root on ZFS]]
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +0,0 @@
The future home for documenting ZFS on Linux development and debugging techniques.

@ -1,16 +1,3 @@
# Developer Resources
This page was moved to: https://openzfs.github.io/openzfs-docs/Developer%20Resources/index.html
[[Custom Packages]]
[[Building ZFS]]
[Buildbot Status][buildbot-status]
[Buildbot Options][control-buildbot]
[OpenZFS Tracking][openzfs-tracking]
[[OpenZFS Patches]]
[[OpenZFS Exceptions]]
[OpenZFS Documentation][openzfs-devel]
[[Git and GitHub for beginners]]
[openzfs-devel]: http://open-zfs.org/wiki/Developer_resources
[openzfs-tracking]: http://build.zfsonlinux.org/openzfs-tracking.html
[buildbot-status]: http://build.zfsonlinux.org/tgrid?length=100&branch=master&category=Tests&rev_order=desc
[control-buildbot]: https://github.com/zfsonlinux/zfs/wiki/Buildbot-Options
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

FAQ.md

@ -1,418 +1,4 @@
## Table Of Contents
- [What is ZFS on Linux](#what-is-zfs-on-linux)
- [Hardware Requirements](#hardware-requirements)
- [Do I have to use ECC memory for ZFS?](#do-i-have-to-use-ecc-memory-for-zfs)
- [Installation](#installation)
- [Supported Architectures](#supported-architectures)
- [Supported Kernels](#supported-kernels)
- [32-bit vs 64-bit Systems](#32-bit-vs-64-bit-systems)
- [Booting from ZFS](#booting-from-zfs)
- [Selecting /dev/ names when creating a pool](#selecting-dev-names-when-creating-a-pool)
- [Setting up the /etc/zfs/vdev_id.conf file](#setting-up-the-etczfsvdev_idconf-file)
- [Changing /dev/ names on an existing pool](#changing-dev-names-on-an-existing-pool)
- [The /etc/zfs/zpool.cache file](#the-etczfszpoolcache-file)
- [Generating a new /etc/zfs/zpool.cache file](#generating-a-new-etczfszpoolcache-file)
- [Sending and Receiving Streams](#sending-and-receiving-streams)
* [hole_birth Bugs](#hole_birth-bugs)
* [Sending Large Blocks](#sending-large-blocks)
- [CEPH/ZFS](#cephzfs)
* [ZFS Configuration](#zfs-configuration)
* [CEPH Configuration (ceph.conf)](#ceph-configuration-cephconf)
* [Other General Guidelines](#other-general-guidelines)
- [Performance Considerations](#performance-considerations)
- [Advanced Format Disks](#advanced-format-disks)
- [ZVOL used space larger than expected](#ZVOL-used-space-larger-than-expected)
- [Using a zvol for a swap device](#using-a-zvol-for-a-swap-device)
- [Using ZFS on Xen Hypervisor or Xen Dom0](#using-zfs-on-xen-hypervisor-or-xen-dom0)
- [udisks2 creates /dev/mapper/ entries for zvol](#udisks2-creating-devmapper-entries-for-zvol)
- [Licensing](#licensing)
- [Reporting a problem](#reporting-a-problem)
- [Does ZFS on Linux have a Code of Conduct?](#does-zfs-on-linux-have-a-code-of-conduct)
## What is ZFS on Linux
This page was moved to: https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html
The ZFS on Linux project is an implementation of [OpenZFS][OpenZFS] designed to work in a Linux environment. OpenZFS is an outstanding storage platform that encompasses the functionality of traditional filesystems, volume managers, and more, with consistent reliability, functionality and performance across all distributions. Additional information about OpenZFS can be found in the [OpenZFS wikipedia article][wikipedia].
## Hardware Requirements
Because ZFS was originally designed for Sun Solaris it was long considered a filesystem for large servers and for companies that could afford the best and most powerful hardware available. But since the porting of ZFS to numerous OpenSource platforms (The BSDs, Illumos and Linux - under the umbrella organization "OpenZFS"), these requirements have been lowered.
The suggested hardware requirements are:
* ECC memory. This isn't really a requirement, but it's highly recommended.
* 8GB+ of memory for the best performance. It's perfectly possible to run with 2GB or less (and people do), but you'll need more if using deduplication.
## Do I have to use ECC memory for ZFS?
Using ECC memory for OpenZFS is strongly recommended for enterprise environments where the strongest data integrity guarantees are required. Without ECC memory, rare random bit flips caused by cosmic rays or by faulty memory can go undetected. If this were to occur, OpenZFS (or any other filesystem) will write the damaged data to disk and will be unable to automatically detect the corruption.
Unfortunately, ECC memory is not always supported by consumer-grade hardware. And even when it is, ECC memory will be more expensive. For home users, the additional safety brought by ECC memory might not justify the cost. It's up to you to determine what level of protection your data requires.
## Installation
ZFS on Linux is available for all major Linux distributions. Refer to the [[getting started]] section of the wiki for links to installation instructions for many popular distributions. If your distribution isn't listed, you can always build ZFS on Linux from the latest official [tarball][releases].
## Supported Architectures
ZFS on Linux is regularly compiled for the following architectures: x86_64, x86, aarch64, arm, ppc64, ppc.
## Supported Kernels
The [notes][releases] for a given ZFS on Linux release will include a range of supported kernels. Point releases will be tagged as needed in order to support the *stable* kernel available from [kernel.org][kernel]. The oldest supported kernel is 2.6.32 due to its prominence in Enterprise Linux distributions.
## 32-bit vs 64-bit Systems
You are **strongly** encouraged to use a 64-bit kernel. ZFS on Linux will build for 32-bit kernels but you may encounter stability problems.
ZFS was originally developed for the Solaris kernel which differs from the Linux kernel in several significant ways. Perhaps most importantly for ZFS it is common practice in the Solaris kernel to make heavy use of the virtual address space. However, use of the virtual address space is strongly discouraged in the Linux kernel. This is particularly true on 32-bit architectures where the virtual address space is limited to 100M by default. Using the virtual address space on 64-bit Linux kernels is also discouraged but the address space is so much larger than physical memory it is less of an issue.
If you are bumping up against the virtual memory limit on a 32-bit system you will see the following message in your system logs. You can increase the virtual address size with the boot option `vmalloc=512M`.
```
vmap allocation for size 4198400 failed: use vmalloc=<size> to increase size.
```
However, even after making this change your system will likely not be entirely stable. Proper support for 32-bit systems is contingent upon the OpenZFS code being weaned off its dependence on virtual memory. This will take some time to do correctly but it is planned for OpenZFS. This change is also expected to improve how efficiently OpenZFS manages the ARC cache and allow for tighter integration with the standard Linux page cache.
## Booting from ZFS
Booting from ZFS on Linux is possible and many people do it. There are excellent walk throughs available for [[Debian]], [[Ubuntu]] and [Gentoo][gentoo-root].
## Selecting /dev/ names when creating a pool
There are different /dev/ names that can be used when creating a ZFS pool. Each option has advantages and drawbacks; the right choice for your ZFS pool really depends on your requirements. For development and testing, using /dev/sdX naming is quick and easy. A typical home server might prefer /dev/disk/by-id/ naming for simplicity and readability, while very large configurations with multiple controllers, enclosures, and switches will likely prefer /dev/disk/by-vdev naming for maximum control. But in the end, how you choose to identify your disks is up to you.
* **/dev/sdX, /dev/hdX:** Best for development/test pools
* Summary: The top level /dev/ names are the default for consistency with other ZFS implementations. They are available under all Linux distributions and are commonly used. However, because they are not persistent they should only be used with ZFS for development/test pools.
* Benefits: This method is easy for a quick test, the names are short, and they will be available on all Linux distributions.
* Drawbacks: The names are not persistent and will change depending on the order in which the disks are detected. Adding or removing hardware for your system can easily cause the names to change. You would then need to remove the zpool.cache file and re-import the pool using the new names.
* Example: `zpool create tank sda sdb`
* **/dev/disk/by-id/:** Best for small pools (less than 10 disks)
* Summary: This directory contains disk identifiers with more human readable names. The disk identifier usually consists of the interface type, vendor name, model number, device serial number, and partition number. This approach is more user friendly because it simplifies identifying a specific disk.
* Benefits: Nice for small systems with a single disk controller. Because the names are persistent and guaranteed not to change, it doesn't matter how the disks are attached to the system. You can take them all out, randomly mix them up on the desk, put them back anywhere in the system, and your pool will still be automatically imported correctly.
* Drawbacks: Configuring redundancy groups based on physical location becomes difficult and error prone.
* Example: `zpool create tank scsi-SATA_Hitachi_HTS7220071201DP1D10DGG6HMRP`
* **/dev/disk/by-path/:** Good for large pools (greater than 10 disks)
* Summary: This approach is to use device names which include the physical cable layout in the system, which means that a particular disk is tied to a specific location. The name describes the PCI bus number, as well as enclosure names and port numbers. This allows the most control when configuring a large pool.
* Benefits: Encoding the storage topology in the name is not only helpful for locating a disk in large installations, but it also allows you to explicitly lay out your redundancy groups over multiple adapters or enclosures.
* Drawbacks: These names are long, cumbersome, and difficult for a human to manage.
* Example: `zpool create tank pci-0000:00:1f.2-scsi-0:0:0:0 pci-0000:00:1f.2-scsi-1:0:0:0`
* **/dev/disk/by-vdev/:** Best for large pools (greater than 10 disks)
* Summary: This approach provides administrative control over device naming using the configuration file /etc/zfs/vdev_id.conf. Names for disks in JBODs can be generated automatically to reflect their physical location by enclosure IDs and slot numbers. The names can also be manually assigned based on existing udev device links, including those in /dev/disk/by-path or /dev/disk/by-id. This allows you to pick your own unique meaningful names for the disks. These names will be displayed by all the zfs utilities, so they can be used to clarify the administration of a large, complex pool. See the vdev_id and vdev_id.conf man pages for further details.
* Benefits: The main benefit of this approach is that it allows you to choose meaningful human-readable names. Beyond that, the benefits depend on the naming method employed. If the names are derived from the physical path the benefits of /dev/disk/by-path are realized. On the other hand, aliasing the names based on drive identifiers or WWNs has the same benefits as using /dev/disk/by-id.
* Drawbacks: This method relies on having a /etc/zfs/vdev_id.conf file properly configured for your system. To configure this file please refer to section [Setting up the /etc/zfs/vdev_id.conf file](#setting-up-the-etczfsvdev_idconf-file). As with benefits, the drawbacks of /dev/disk/by-id or /dev/disk/by-path may apply depending on the naming method employed.
* Example: `zpool create tank mirror A1 B1 mirror A2 B2`
## Setting up the /etc/zfs/vdev_id.conf file
In order to use /dev/disk/by-vdev/ naming the `/etc/zfs/vdev_id.conf` must be configured. The format of this file is described in the vdev_id.conf man page. Several examples follow.
A non-multipath configuration with direct-attached SAS enclosures and an arbitrary slot re-mapping.
```
multipath no
topology sas_direct
phys_per_port 4
# PCI_SLOT HBA PORT CHANNEL NAME
channel 85:00.0 1 A
channel 85:00.0 0 B
# Linux Mapped
# Slot Slot
slot 0 2
slot 1 6
slot 2 0
slot 3 3
slot 4 5
slot 5 7
slot 6 4
slot 7 1
```
A SAS-switch topology. Note that the channel keyword takes only two arguments in this example.
```
topology sas_switch
# SWITCH PORT CHANNEL NAME
channel 1 A
channel 2 B
channel 3 C
channel 4 D
```
A multipath configuration. Note that channel names have multiple definitions - one per physical path.
```
multipath yes
# PCI_SLOT HBA PORT CHANNEL NAME
channel 85:00.0 1 A
channel 85:00.0 0 B
channel 86:00.0 1 A
channel 86:00.0 0 B
```
A configuration using device link aliases.
```
# by-vdev
# name fully qualified or base name of device link
alias d1 /dev/disk/by-id/wwn-0x5000c5002de3b9ca
alias d2 wwn-0x5000c5002def789e
```
After defining the new disk names run `udevadm trigger` to prompt udev to parse the configuration file. This will result in a new /dev/disk/by-vdev directory which is populated with symlinks to /dev/sdX names. Following the first example above, you could then create the new pool of mirrors with the following command:
```
$ zpool create tank \
mirror A0 B0 mirror A1 B1 mirror A2 B2 mirror A3 B3 \
mirror A4 B4 mirror A5 B5 mirror A6 B6 mirror A7 B7
$ zpool status
pool: tank
state: ONLINE
scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            A0      ONLINE       0     0     0
            B0      ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            A1      ONLINE       0     0     0
            B1      ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            A2      ONLINE       0     0     0
            B2      ONLINE       0     0     0
          mirror-3  ONLINE       0     0     0
            A3      ONLINE       0     0     0
            B3      ONLINE       0     0     0
          mirror-4  ONLINE       0     0     0
            A4      ONLINE       0     0     0
            B4      ONLINE       0     0     0
          mirror-5  ONLINE       0     0     0
            A5      ONLINE       0     0     0
            B5      ONLINE       0     0     0
          mirror-6  ONLINE       0     0     0
            A6      ONLINE       0     0     0
            B6      ONLINE       0     0     0
          mirror-7  ONLINE       0     0     0
            A7      ONLINE       0     0     0
            B7      ONLINE       0     0     0

errors: No known data errors
```
## Changing /dev/ names on an existing pool
Changing the /dev/ names on an existing pool can be done by simply exporting the pool and re-importing it with the -d option to specify which new names should be used. For example, to use the custom names in /dev/disk/by-vdev:
```
$ zpool export tank
$ zpool import -d /dev/disk/by-vdev tank
```
## The /etc/zfs/zpool.cache file
Whenever a pool is imported on the system it will be added to the `/etc/zfs/zpool.cache` file. This file stores pool configuration information, such as the device names and pool state. If this file exists when running the `zpool import` command then it will be used to determine the list of pools available for import. When a pool is not listed in the cache file it will need to be detected and imported using the `zpool import -d /dev/disk/by-id` command.
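For example, a pool that is missing from the cache file can be located and imported by pointing `zpool import` at a device directory (a minimal sketch; the pool name is an assumption):
```
$ zpool import -d /dev/disk/by-id        # scan the directory and list importable pools
$ zpool import -d /dev/disk/by-id tank   # import the pool named tank
```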
## Generating a new /etc/zfs/zpool.cache file
The `/etc/zfs/zpool.cache` file will be automatically updated when your pool configuration is changed. However, if for some reason it becomes stale you can force the generation of a new `/etc/zfs/zpool.cache` file by setting the cachefile property on the pool.
```
$ zpool set cachefile=/etc/zfs/zpool.cache tank
```
Conversely the cache file can be disabled by setting `cachefile=none`. This is useful for failover configurations where the pool should always be explicitly imported by the failover software.
```
$ zpool set cachefile=none tank
```
## Sending and Receiving Streams
### hole_birth Bugs
The hole_birth feature has/had bugs, the result of which is that, if you do a `zfs send -i` (or `-R`, since it uses `-i`) from an affected dataset, the receiver *will not see any checksum or other errors, but will not match the source*.
ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the faulty metadata which causes this issue *on the sender side*.
For more details, see the [[hole_birth FAQ]].
### Sending Large Blocks
When sending incremental streams which contain large blocks (>128K) the `--large-block` flag must be specified. Inconsistent use of the flag between incremental sends can result in files being incorrectly zeroed when they are received. Raw encrypted send/recvs automatically imply the `--large-block` flag and are therefore unaffected.
For more details, see [issue 6224](https://github.com/zfsonlinux/zfs/issues/6224).
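As a sketch (pool, dataset, and snapshot names are hypothetical), the flag should be passed on the initial full send and on every incremental send that follows:
```
$ zfs send --large-block tank/fs@snap1 | zfs receive backup/fs
$ zfs send --large-block -i tank/fs@snap1 tank/fs@snap2 | zfs receive backup/fs
```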
## CEPH/ZFS
There is a lot of tuning that can be done depending on the workload being put on CEPH/ZFS, as well as some general guidelines. Some are as follows:
### ZFS Configuration
The CEPH filestore back-end relies heavily on xattrs. For optimal performance, all CEPH workloads will benefit from the following ZFS dataset parameters:
* `xattr=sa`
* `dnodesize=auto`
Beyond that, rbd/cephfs-focused workloads typically benefit from a small recordsize (16K-128K), while objectstore/s3/rados-focused workloads benefit from a large recordsize (128K-1M).
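A minimal sketch of an OSD dataset for an rbd/cephfs-oriented workload; the pool/dataset names and the 64K recordsize are illustrative only:
```
$ zfs create -p -o xattr=sa -o dnodesize=auto -o recordsize=64K tank/ceph/osd0
```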
### CEPH Configuration (ceph.conf)
Additionally, CEPH sets various values internally for handling xattrs based on the underlying filesystem. As CEPH only officially supports/detects XFS and BTRFS, it falls back to rather [limited "safe" values](https://github.com/ceph/ceph/blob/4fe7e2a458a1521839bc390c2e3233dd809ec3ac/src/common/config_opts.h#L1125-L1148) for all other filesystems. On newer releases, the need for larger xattrs will prevent OSDs from even starting.
The officially recommended workaround ([see here](http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/#not-recommended)) has some severe downsides, and more specifically is geared toward filesystems with "limited" xattr support such as ext4.
ZFS does not internally limit xattr length, so we can treat it similarly to how CEPH treats XFS. We can set overrides for three internal values to match those used with XFS ([see here](https://github.com/ceph/ceph/blob/9b317f7322848802b3aab9fec3def81dddd4a49b/src/os/filestore/FileStore.cc#L5714-L5737) and [here](https://github.com/ceph/ceph/blob/4fe7e2a458a1521839bc390c2e3233dd809ec3ac/src/common/config_opts.h#L1125-L1148)) and allow it to be used without the severe limitations of the "official" workaround.
```
[osd]
filestore_max_inline_xattrs = 10
filestore_max_inline_xattr_size = 65536
filestore_max_xattr_value_size = 65536
```
### Other General Guidelines
* Use a separate journal device. Do not colocate the CEPH journal on a ZFS dataset if at all possible; this will quickly lead to terrible fragmentation, not to mention terrible performance upfront even before fragmentation (the CEPH journal does a dsync for every write).
* Use a SLOG device, even with a separate CEPH journal device. For some workloads, skipping SLOG and setting `logbias=throughput` may be acceptable.
* Use a high-quality SLOG/CEPH journal device. A consumer-grade SSD, or even consumer NVMe (Samsung 830, 840, 850, etc.), WILL NOT DO for a variety of reasons: CEPH will kill them quickly, on top of their performance being quite low for this use. Generally recommended are Intel DC S3610/S3700/S3710/P3600/P3700, Samsung SM853/SM863, or better.
* If using a high-quality SSD or NVMe device (as mentioned above), you CAN share the SLOG and CEPH journal on a single device with good results. A ratio of 4 HDDs to 1 SSD (Intel DC S3710 200GB), with each SSD partitioned (remember to align!) into 4x10GB (for ZIL/SLOG) + 4x20GB (for CEPH journal), has been reported to work well; see the sketch after this list.
Again: CEPH + ZFS will KILL a consumer-grade SSD VERY quickly. Even ignoring the lack of power-loss protection and lower endurance ratings, you will be very disappointed with the performance of a consumer-grade SSD under such a workload.
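A sketch of the shared SLOG/journal layout described above, assuming a hypothetical pre-partitioned SSD with aligned 10GB partitions:
```
# Attach one 10GB SSD partition as the SLOG for the pool; the remaining
# partitions would be handed to CEPH as journal devices.
$ zpool add tank log /dev/disk/by-id/ata-INTEL_SSDSC2BA200G4_EXAMPLE-part1
```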
## Performance Considerations
To achieve good performance with your pool there are some easy best practices you should follow. Additionally, it should be made clear that the ZFS on Linux implementation has not yet been optimized for performance. As the project matures we can expect performance to improve.
* **Evenly balance your disk across controllers:** Often the limiting factor for performance is not the disk but the controller. By balancing your disks evenly across controllers you can often improve throughput.
* **Create your pool using whole disks:** When running zpool create use whole disk names. This will allow ZFS to automatically partition the disk to ensure correct alignment. It will also improve interoperability with other OpenZFS implementations which honor the wholedisk property.
* **Have enough memory:** A minimum of 2GB of memory is recommended for ZFS. Additional memory is strongly recommended when the compression and deduplication features are enabled.
* **Improve performance by setting ashift=12:** You may be able to improve performance for some workloads by setting `ashift=12`. This tuning can only be set when block devices are first added to a pool, such as when the pool is first created or when a new vdev is added to the pool. This tuning parameter can result in a decrease of capacity for RAIDZ configurations.
## Advanced Format Disks
Advanced Format (AF) is a new disk format which natively uses a 4,096 byte, instead of 512 byte, sector size. To maintain compatibility with legacy systems many AF disks emulate a sector size of 512 bytes. By default, ZFS will automatically detect the sector size of the drive, so when an AF disk reports emulated 512-byte sectors this combination can result in poorly aligned disk accesses which will greatly degrade pool performance.
Therefore, the ability to set the ashift property has been added to the zpool command. This allows users to explicitly assign the sector size when devices are first added to a pool (typically at pool creation time or adding a vdev to the pool). The ashift values range from 9 to 16 with the default value 0 meaning that zfs should auto-detect the sector size. This value is actually a bit shift value, so an ashift value for 512 bytes is 9 (2^9 = 512) while the ashift value for 4,096 bytes is 12 (2^12 = 4,096).
To force the pool to use 4,096 byte sectors at pool creation time, you may run:
```
$ zpool create -o ashift=12 tank mirror sda sdb
```
To force the pool to use 4,096 byte sectors when adding a vdev to a pool, you may run:
```
$ zpool add -o ashift=12 tank mirror sdc sdd
```
## ZVOL used space larger than expected
Depending on the filesystem used on the zvol (e.g. ext4) and the usage (e.g. deletion and creation of many files) the `used` and `referenced` properties reported by the zvol may be larger than the "actual" space that is being used as reported by the consumer.
This can happen due to the way some filesystems work, in which they prefer to allocate files in new untouched blocks rather than the fragmented used blocks marked as free. This forces zfs to reference all blocks that the underlying filesystem has ever touched.
This is in itself not much of a problem, as when the `used` property reaches the configured `volsize` the underlying filesystem will start reusing blocks. But the problem arises if it is desired to snapshot the zvol, as the space referenced by the snapshots will contain the unused blocks.
This issue can be prevented by using the `fstrim` command, which allows the kernel to tell ZFS which blocks are unused.
Executing a `fstrim` command before a snapshot is taken will ensure a minimum snapshot size.
Adding the `discard` option for the mounted ZVOL in `/etc/fstab` effectively enables the Linux kernel to issue trim commands continuously, without the need to execute `fstrim` on demand.
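A sketch, assuming an ext4 filesystem on the zvol `tank/vol1` mounted at `/mnt/vol1`:
```
$ sudo fstrim /mnt/vol1                    # release unused blocks back to ZFS
$ sudo zfs snapshot tank/vol1@after-trim   # the snapshot now references fewer blocks
```
The corresponding `/etc/fstab` entry with continuous trimming might look like `/dev/zvol/tank/vol1 /mnt/vol1 ext4 defaults,discard 0 2`.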
## Using a zvol for a swap device
You may use a zvol as a swap device but you'll need to configure it appropriately.
**CAUTION:** for now, swap on a zvol may lead to deadlock; in this case please send your logs [here](https://github.com/zfsonlinux/zfs/issues/7734).
* Set the volume block size to match your system's page size. This tuning prevents ZFS from having to perform read-modify-write operations on a larger block while the system is already low on memory.
* Set the `logbias=throughput` and `sync=always` properties. Data written to the volume will be flushed immediately to disk freeing up memory as quickly as possible.
* Set `primarycache=metadata` to avoid keeping swap data in RAM via the ARC.
* Disable automatic snapshots of the swap device.
```
$ zfs create -V 4G -b $(getconf PAGESIZE) \
-o logbias=throughput -o sync=always \
-o primarycache=metadata \
-o com.sun:auto-snapshot=false rpool/swap
```
## Using ZFS on Xen Hypervisor or Xen Dom0
It is usually recommended to keep virtual machine storage and hypervisor pools quite separate, although a few people have managed to successfully deploy and run ZFS on Linux using the same machine configured as Dom0. There are a few caveats:
* Set a fair amount of memory in grub.conf, dedicated to Dom0.
* dom0_mem=16384M,max:16384M
* Allocate no more than 30-40% of Dom0's memory to ZFS in `/etc/modprobe.d/zfs.conf`.
* options zfs zfs_arc_max=6442450944
* Disable Xen's auto-ballooning in `/etc/xen/xl.conf` (see the sketch after this list)
* Watch out for any Xen bugs, such as [this one][xen-bug] related to ballooning
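A combined sketch of the settings above; all values are illustrative and should be sized for your Dom0, and the `autoballoon` keyword should be checked against your Xen version's xl.conf(5):
```
# Xen kernel command line (grub): pin Dom0 memory
dom0_mem=16384M,max:16384M

# /etc/modprobe.d/zfs.conf: cap the ARC at roughly 30-40% of Dom0 memory
options zfs zfs_arc_max=6442450944

# /etc/xen/xl.conf: disable auto-ballooning
autoballoon="off"
```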
## udisks2 creating /dev/mapper/ entries for zvol
To prevent udisks2 from creating /dev/mapper entries that must be manually removed or maintained during zvol remove / rename, create a udev rule such as `/etc/udev/rules.d/80-udisks2-ignore-zfs.rules` with the following contents:
```
ENV{ID_PART_ENTRY_SCHEME}=="gpt", ENV{ID_FS_TYPE}=="zfs_member", ENV{ID_PART_ENTRY_TYPE}=="6a898cc3-1dd2-11b2-99a6-080020736631", ENV{UDISKS_IGNORE}="1"
```
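After creating the rule, udev typically needs to reload its rules and re-trigger events for the change to take effect, for example:
```
$ sudo udevadm control --reload
$ sudo udevadm trigger
```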
## Licensing
ZFS is licensed under the Common Development and Distribution License ([CDDL][cddl]), and the Linux kernel is licensed under the GNU General Public License Version 2 ([GPLv2][gpl]). While both are free, open source licenses, they are restrictive licenses. The combination of them causes problems because it prevents using pieces of code exclusively available under one license with pieces of code exclusively available under the other in the same binary. In the case of the kernel, this prevents us from distributing ZFS on Linux as part of the kernel binary. However, there is nothing in either license that prevents distributing it in the form of a binary module or in the form of source code.
Additional reading and opinions:
* [Software Freedom Law Center][lawcenter]
* [Software Freedom Conservancy][conservancy]
* [Free Software Foundation][fsf]
* [Encouraging closed source modules][networkworld]
## Reporting a problem
You can open a new issue and search existing issues using the public [issue tracker][issues]. The issue tracker is used to organize outstanding bug reports, feature requests, and other development tasks. Anyone may post comments after signing up for a github account.
Please make sure that what you're actually seeing is a bug and not a support issue. If in doubt, please ask on the mailing list first, and if you're then asked to file an issue, do so.
When opening a new issue include this information at the top of the issue:
* What distribution you're using and the version.
* What spl/zfs packages you're using and the version.
* Describe the problem you're observing.
* Describe how to reproduce the problem.
* Include any warnings/errors/backtraces from the system logs.
When a new issue is opened it's not uncommon for a developer to request additional information about the problem. In general, the more detail you share about a problem the quicker a developer can resolve it. For example, providing a simple test case is always exceptionally helpful. Be prepared to work with the developer looking into your bug in order to get it resolved. They may ask for information like the following; a sketch of commands for gathering some of it is shown after the list:
* Your pool configuration as reported by `zdb` or `zpool status`.
* Your hardware configuration, such as
* Number of CPUs.
* Amount of memory.
* Whether your system has ECC memory.
* Whether it is running under a VMM/Hypervisor.
* Kernel version.
* Values of the spl/zfs module parameters.
* Stack traces which may be logged to `dmesg`.
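A sketch of commands that collect some of this information (assumes the spl and zfs modules are loaded):
```
$ zpool status -v                        # pool configuration and error counters
$ uname -r                               # kernel version
$ grep . /sys/module/zfs/parameters/*    # zfs module parameter values
$ grep . /sys/module/spl/parameters/*    # spl module parameter values
$ dmesg | tail -n 200                    # recent kernel messages and stack traces
```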
## Does ZFS on Linux have a Code of Conduct?
Yes, the ZFS on Linux community has a code of conduct. See the [Code of Conduct][CoC] for details.
[OpenZFS]: http://open-zfs.org/wiki/Main_Page
[wikipedia]: https://en.wikipedia.org/wiki/OpenZFS
[releases]: https://github.com/zfsonlinux/zfs/releases
[kernel]: https://www.kernel.org/
[gentoo-root]: https://github.com/pendor/gentoo-zfs-install/tree/master/install
[xen-bug]: https://github.com/zfsonlinux/zfs/issues/1067
[cddl]: http://hub.opensolaris.org/bin/view/Main/opensolaris_license
[gpl]: http://www.gnu.org/licenses/gpl2.html
[lawcenter]: https://www.softwarefreedom.org/resources/2016/linux-kernel-cddl.html
[conservancy]: https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/
[fsf]: https://www.fsf.org/licensing/zfs-and-linux
[networkworld]: http://www.networkworld.com/article/2301697/smb/encouraging-closed-source-modules-part-1--copyright-and-software.html
[issues]: https://github.com/zfsonlinux/zfs/issues
[CoC]: http://open-zfs.org/wiki/Code_of_Conduct
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,44 +1,3 @@
Only [DKMS][dkms] style packages can be provided for Fedora from the official zfsonlinux.org repository. This is because Fedora is a fast moving distribution which does not provide a stable kABI. These packages track the official ZFS on Linux tags and are updated as new versions are released. Packages are available for the following configurations:
This page was moved to: https://openzfs.github.io/openzfs-docs/Getting%20Started/Fedora.html
**Fedora Releases:** 30, 31, 32
**Architectures:** x86_64
To simplify installation a zfs-release package is provided which includes a zfs.repo configuration file and the ZFS on Linux public signing key. All official ZFS on Linux packages are signed using this key, and by default both yum and dnf will verify a package's signature before allowing it to be installed. Users are strongly encouraged to verify the authenticity of the ZFS on Linux public key using the fingerprint listed here.
**Location:** /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
**Fedora 30 Package:** http://download.zfsonlinux.org/fedora/zfs-release.fc30.noarch.rpm
**Fedora 31 Package:** http://download.zfsonlinux.org/fedora/zfs-release.fc31.noarch.rpm
**Fedora 32 Package:** http://download.zfsonlinux.org/fedora/zfs-release.fc32.noarch.rpm
**Download from:** [pgp.mit.edu][pubkey]
**Fingerprint:** C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
```sh
$ sudo dnf install http://download.zfsonlinux.org/fedora/zfs-release$(rpm -E %dist).noarch.rpm
$ gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
pub 2048R/F14AB620 2013-03-21 ZFS on Linux <zfs@zfsonlinux.org>
Key fingerprint = C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
sub 2048R/99685629 2013-03-21
```
The ZFS on Linux packages should be installed with `dnf` on Fedora. Note that it is important to make sure that the matching *kernel-devel* package is installed for the running kernel since DKMS requires it to build ZFS.
```sh
$ sudo dnf install kernel-devel zfs
```
If the Fedora-provided *zfs-fuse* package is already installed on the system, then the `dnf swap` command should be used to replace the existing fuse packages with the ZFS on Linux packages.
```sh
$ sudo dnf swap zfs-fuse zfs
```
## Testing Repositories
In addition to the primary *zfs* repository a *zfs-testing* repository is available. This repository, which is disabled by default, contains the latest version of ZFS on Linux which is under active development. These packages are made available in order to get feedback from users regarding the functionality and stability of upcoming releases. These packages **should not** be used on production systems. Packages from the testing repository can be installed as follows.
```
$ sudo dnf --enablerepo=zfs-testing install kernel-devel zfs
```
[dkms]: https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support
[pubkey]: http://pgp.mit.edu/pks/lookup?search=0xF14AB620&op=index&fingerprint=on
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,16 +1,3 @@
To get started with OpenZFS refer to the provided documentation for your distribution. It will cover the recommended installation method and any distribution specific information. First time OpenZFS users are encouraged to check out Aaron Toponce's [excellent documentation][docs].
This page was moved to: https://openzfs.github.io/openzfs-docs/Getting%20Started/index.html
[ArchLinux][arch]
[[Debian]]
[[Fedora]]
[FreeBSD][freebsd]
[Gentoo][gentoo]
[openSUSE][opensuse]
[[RHEL and CentOS]]
[[Ubuntu]]
[arch]: https://wiki.archlinux.org/index.php/ZFS
[freebsd]: https://zfsonfreebsd.github.io/ZoF/
[gentoo]: https://wiki.gentoo.org/wiki/ZFS
[opensuse]: https://software.opensuse.org/package/zfs
[docs]: https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,146 +1,3 @@
# Git and GitHub for beginners (ZoL edition)
This page was moved to: https://openzfs.github.io/openzfs-docs/Developer%20Resources/Git%20and%20GitHub%20for%20beginners.html
This is a very basic rundown of how to use Git and GitHub to make changes.
Recommended reading: [ZFS on Linux CONTRIBUTING.md](https://github.com/zfsonlinux/zfs/blob/master/.github/CONTRIBUTING.md)
# First time setup
If you've never used Git before, you'll need a little setup to start things off.
```
git config --global user.name "My Name"
git config --global user.email myemail@noreply.non
```
# Cloning the initial repository
The easiest way to get started is to click the fork icon at the top of the main repository page. From there you need to download a copy of the forked repository to your computer:
```
git clone https://github.com/<your-account-name>/zfs.git
```
This sets the "origin" repository to your fork. This will come in handy
when creating pull requests. To make pulling from the "upstream" repository
as changes are made, it is very useful to establish the upstream repository
as another remote (man git-remote):
```
cd zfs
git remote add upstream https://github.com/zfsonlinux/zfs.git
```
# Preparing and making changes
In order to make changes it is recommended to create a branch; this lets you work on several unrelated changes at once. It is also not recommended to make changes directly to the master branch unless you own the repository.
```
git checkout -b my-new-branch
```
From here you can make your changes and move on to the next step.
Recommended reading: [C Style and Coding Standards for SunOS](https://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf), [ZFS on Linux Developer Resources](https://github.com/zfsonlinux/zfs/wiki/Developer-Resources), [OpenZFS Developer Resources](http://open-zfs.org/wiki/Developer_resources)
# Testing your patches before pushing
Before committing and pushing, you may want to test your patches. There are several tests you can run against your branch, such as style checking and functional tests. All pull requests go through these tests before being merged into the main repository; however, testing locally takes the load off the build/test servers. This step is optional but highly recommended. Note that the test suite should be run on a virtual machine or a host that does not currently use ZFS. You may need to install `shellcheck` and `flake8` for the `checkstyle` target to run correctly.
```
sh autogen.sh
./configure
make checkstyle
```
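Beyond the style checks, the functional tests can also be exercised locally using the helper scripts shipped in the repository; a sketch, assuming a disposable test VM and a completed build:
```
sudo ./scripts/zfs.sh           # load the freshly built kernel modules
sudo ./scripts/zfs-tests.sh -v  # run the ZFS Test Suite (verbose)
```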
Recommended reading: [Building ZFS](https://github.com/zfsonlinux/zfs/wiki/Building-ZFS), [ZFS Test Suite README](https://github.com/zfsonlinux/zfs/blob/master/tests/README.md)
# Committing your changes to be pushed
When you are done making changes to your branch there are a few more steps before you can make a pull request.
```
git commit --all --signoff
```
This command commits all modified tracked files from your branch, adds a Signed-off-by line, and opens an editor for the commit message. Here you need to describe your change and add a few things:
```
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch my-new-branch
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: hello.c
#
```
The first thing we need to add is the commit message. This is what is displayed on the git log, and should be a short description of the change. By style guidelines, this has to be less than 72 characters in length.
Underneath the commit message you can add a more descriptive text to your commit. The lines in this section have to be less than 72 characters.
When you are done, the commit should look like this:
```
Add hello command
This is a test commit with a descriptive commit message.
This message can be more than one line as shown here.
Signed-off-by: My Name <myemail@noreply.non>
Closes #9998
Issue #9999
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch my-new-branch
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: hello.c
#
```
You can also reference issues and pull requests if you are filing a pull request for an existing issue as shown above. Save and exit the editor when you are done.
# Pushing and creating the pull request
Home stretch. You've made your change and made the commit. Now it's time to push it.
```
git push --set-upstream origin my-new-branch
```
This should ask you for your github credentials and upload your changes to your repository.
The last step is to either go to your repository or the upstream repository on GitHub and you should see a button for making a new pull request for your recently committed branch.
# Correcting issues with your pull request
Sometimes things don't always go as planned and you may need to update your pull request with a correction to either your commit message, or your changes. This can be accomplished by re-pushing your branch. If you need to make code changes or `git add` a file, you can do those now, along with the following:
```
git commit --amend
git push --force
```
This will return you to the commit editor screen and push your changes over the top of the old ones. Do note that this will restart any build/test jobs currently running, and excessive pushing can cause delays in the processing of all pull requests.
# Maintaining your repository
When you wish to make changes in the future you will want to have an up-to-date copy of the upstream repository to make your changes on. Here is how you keep updated:
```
git checkout master
git pull upstream master
git push origin master
```
This will make sure you are on the master branch of the repository, grab the changes from upstream, then push them back to your repository.
# Final words
This is a very basic introduction to Git and GitHub, but should get you on your way to contributing to many open source projects. Not all projects have style requirements and some may have different processes for getting changes committed, so please refer to their documentation to see if you need to do anything differently. One topic we have not touched on is the `git rebase` command, which is a bit too advanced for this wiki article.
Additional resources: [Github Help](https://help.github.com/), [Atlassian Git Tutorials](https://www.atlassian.com/git/tutorials)
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +1,3 @@
This page has moved to [[Debian Jessie Root on ZFS]].
This page was moved to: https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/index.html
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,8 +1,3 @@
<p align="center">[[/img/480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png|alt=openzfs]]</p>
Welcome to the OpenZFS GitHub wiki. This wiki provides documentation for users and developers working
with (or contributing to) the OpenZFS project. New users or system administrators should refer to the documentation for their favorite platform to get started.
| [[Getting Started]] | [[Project and Community]] | [[Developer Resources]] |
|------------------------------|-------------------------------|------------------------ |
| How to get started with OpenZFS on your favorite platform | About the project and how to contribute | Technical documentation discussing the OpenZFS implementation |
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,5 +1,3 @@
[![Creative Commons License](https://i.creativecommons.org/l/by-sa/3.0/88x31.png)][license]
This page was moved to: https://openzfs.github.io/openzfs-docs/License.html
Wiki content is licensed under a [Creative Commons Attribution-ShareAlike license][license] unless otherwise noted.
[license]: http://creativecommons.org/licenses/by-sa/3.0/
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,15 +1,4 @@
| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;List&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | List&nbsp;Archive |
|--------------------------|-------------|:------------:|
| [zfs-announce@list.zfsonlinux.org][zfs-ann] | A low-traffic list for announcements such as new releases | [archive][zfs-ann-archive] |
| [zfs-discuss@list.zfsonlinux.org][zfs-discuss] | A user discussion list for issues related to functionality and usability | [archive][zfs-discuss-archive] |
| [zfs-devel@list.zfsonlinux.org][zfs-devel] | A development list for developers to discuss technical issues | [archive][zfs-devel-archive] |
| [developer@open-zfs.org][open-zfs] | A platform-independent mailing list for ZFS developers to review ZFS code and architecture changes from all platforms | [archive][open-zfs-archive] |
[zfs-ann]: https://zfsonlinux.topicbox.com/groups/zfs-announce
[zfs-ann-archive]: https://zfsonlinux.topicbox.com/groups/zfs-announce
[zfs-discuss]: https://zfsonlinux.topicbox.com/groups/zfs-discuss
[zfs-discuss-archive]: https://zfsonlinux.topicbox.com/groups/zfs-discuss
[zfs-devel]: https://zfsonlinux.topicbox.com/groups/zfs-devel
[zfs-devel-archive]: https://zfsonlinux.topicbox.com/groups/zfs-devel
[open-zfs]: http://open-zfs.org/wiki/Mailing_list
[open-zfs-archive]: https://openzfs.topicbox.com/groups/developer
This page was moved to: https://openzfs.github.io/openzfs-docs/Project%20and%20Community/Mailing%20Lists.html
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,199 +1,3 @@
The ZFS on Linux project is an adaptation of the upstream [OpenZFS repository][openzfs-repo] designed to work in a Linux environment. This upstream repository acts as a location where new features, bug fixes, and performance improvements from all the OpenZFS platforms can be integrated. Each platform is responsible for tracking the OpenZFS repository and merging the relevant improvements back in to their release.
This page was moved to: https://openzfs.github.io/openzfs-docs/Developer%20Resources/OpenZFS%20Patches.html
For the ZFS on Linux project this tracking is managed through an [OpenZFS tracking](http://build.zfsonlinux.org/openzfs-tracking.html) page. The page is updated regularly and shows a list of OpenZFS commits and their status in regard to the ZFS on Linux master branch.
This page describes the process of applying outstanding OpenZFS commits to ZFS on Linux and submitting those changes for inclusion. As a developer this is a great way to familiarize yourself with ZFS on Linux and to begin quickly making a valuable contribution to the project. The following guide assumes you have a [github account][github-account], are familiar with git, and are used to developing in a Linux environment.
## Porting OpenZFS changes to ZFS on Linux
### Setup the Environment
**Clone the source.** Start by making a local clone of the [spl][spl-repo] and [zfs][zfs-repo] repositories.
```
$ git clone -o zfsonlinux https://github.com/zfsonlinux/spl.git
$ git clone -o zfsonlinux https://github.com/zfsonlinux/zfs.git
```
**Add remote repositories.** Using the GitHub web interface [fork][github-fork] the [zfs][zfs-repo] repository in to your personal GitHub account. Add your new zfs fork and the [openzfs][openzfs-repo] repository as remotes and then fetch both repositories. The OpenZFS repository is large and the initial fetch may take some time over a slow connection.
```
$ cd zfs
$ git remote add <your-github-account> git@github.com:<your-github-account>/zfs.git
$ git remote add openzfs https://github.com/openzfs/openzfs.git
$ git fetch --all
```
**Build the source.** Compile the spl and zfs master branches. These branches are always kept stable and this is a useful verification that you have a full build environment installed and all the required dependencies are available. This may also speed up the compile time later for small patches where incremental builds are an option.
```
$ cd ../spl
$ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc)
$
$ cd ../zfs
$ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc)
```
### Pick a patch
Consult the [OpenZFS tracking](http://build.zfsonlinux.org/openzfs-tracking.html) page and select a patch which has not yet been applied. For your first patch you will want to select a small patch to familiarize yourself with the process.
### Porting a Patch
There are 2 methods:
- [cherry-pick (easier)](#cherry-pick)
- [manual merge](#manual-merge)
Please read about [manual merge](#manual-merge) first to learn the whole process.
#### Cherry-pick
You can [cherry-pick](https://git-scm.com/docs/git-cherry-pick) on your own, but we have made a special [script](https://github.com/zfsonlinux/zfs-buildbot/blob/master/scripts/openzfs-merge.sh) which tries to cherry-pick the patch automatically and generates the description.
0) Prepare environment:
Mandatory git settings (add to `~/.gitconfig`):
```
[merge]
renameLimit = 999999
[user]
email = mail@yourmail.com
name = Your Name
```
Download the script:
```
wget https://raw.githubusercontent.com/zfsonlinux/zfs-buildbot/master/scripts/openzfs-merge.sh
```
1) Run:
```
./openzfs-merge.sh -d path_to_zfs_folder -c openzfs_commit_hash
```
This command will fetch all repositories, create a new branch `autoport-ozXXXX` (XXXX - OpenZFS issue number), try to cherry-pick, compile and check cstyle on success.
If it succeeds without any merge conflicts, switch to the `autoport-ozXXXX` branch; it will contain a commit that is ready to push. Congratulations, you can go to step 7!
Otherwise you should go to step 2.
2) Resolve all merge conflicts manually. Easy method - install [Meld](http://meldmerge.org/) or any other diff tool and run `git mergetool`.
3) Check all compile and cstyle errors (See [Testing a patch](#testing-a-patch)).
4) Commit your changes with any description.
5) Update commit description (last commit will be changed):
```
./openzfs-merge.sh -d path_to_zfs_folder -g openzfs_commit_hash
```
6) Add any porting notes (if you have modified something): `git commit --amend`
7) Push your commit to github: `git push <your-github-account> autoport-ozXXXX`
8) Create a pull request to ZoL master branch.
9) Go to [Testing a patch](#testing-a-patch) section.
#### Manual merge
**Create a new branch.** It is important to create a new branch for every commit you port to ZFS on Linux. This will allow you to easily submit your work as a GitHub pull request and it makes it possible to work on multiple OpenZFS changes concurrently. All development branches need to be based off of the ZFS master branch and it's helpful to name the branches after the issue number you're working on.
```
$ git checkout -b openzfs-<issue-nr> master
```
**Generate a patch.** One of the first things you'll notice about the ZFS on Linux repository is that it is laid out differently than the OpenZFS repository. Organizationally it is much flatter; this is possible because it only contains the code for OpenZFS, not an entire OS. That means that in order to apply a patch from OpenZFS the path names in the patch must be changed. A script called zfs2zol-patch.sed has been provided to perform this translation. Use the `git format-patch` command and this script to generate a patch.
```
$ git format-patch --stdout <commit-hash>^..<commit-hash> | \
./scripts/zfs2zol-patch.sed >openzfs-<issue-nr>.diff
```
**Apply the patch.** In many cases the generated patch will apply cleanly to the repository. However, it's important to keep in mind the zfs2zol-patch.sed script only translates the paths. There are often additional reasons why a patch might not apply. In some cases hunks of the patch may not be applicable to Linux and should be dropped. In other cases a patch may depend on other changes which must be applied first. The changes may also conflict with Linux specific modifications. In all of these cases the patch will need to be manually modified to apply cleanly while preserving its original intent.
```
$ git am ./openzfs-<commit-nr>.diff
```
**Update the commit message.** By using `git format-patch` to generate the patch and then `git am` to apply it the original comment and authorship will be preserved. However, due to the formatting of the OpenZFS commit you will likely find that the entire commit comment has been squashed in to the subject line. Use `git commit --amend` to cleanup the comment and be careful to follow [these standard guidelines][guidelines].
The summary line of an OpenZFS commit is often very long and you should truncate it to 50 characters. This is useful because it preserves the correct formatting of the `git log --pretty=oneline` command. Make sure to leave a blank line between the summary and body of the commit. Then include the full OpenZFS commit message, wrapping any lines which exceed 72 characters. Finally, add a `Ported-by` tag with your contact information and both an `OpenZFS-issue` and an `OpenZFS-commit` tag with appropriate links. You'll want to verify your commit contains all of the following information:
* The subject line from the original OpenZFS patch in the form: "OpenZFS \<issue-nr\> - short description".
* The original patch authorship should be preserved.
* The OpenZFS commit message.
* The following tags:
* **Authored by:** Original patch author
* **Reviewed by:** All OpenZFS reviewers from the original patch.
* **Approved by:** All OpenZFS reviewers from the original patch.
* **Ported-by:** Your name and email address.
* **OpenZFS-issue:** https ://www.illumos.org/issues/issue
* **OpenZFS-commit:** https ://github.com/openzfs/openzfs/commit/hash
* **Porting Notes:** An optional section describing any changes required when porting.
For example, OpenZFS issue 6873 was [applied to Linux][zol-6873] from this upstream [OpenZFS commit][openzfs-6873].
```
OpenZFS 6873 - zfs_destroy_snaps_nvl leaks errlist
Authored by: Chris Williamson <chris.williamson@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Ported-by: Denys Rtveliashvili <denys@rtveliashvili.name>
lzc_destroy_snaps() returns an nvlist in errlist.
zfs_destroy_snaps_nvl() should nvlist_free() it before returning.
OpenZFS-issue: https://www.illumos.org/issues/6873
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/ee06391
```
### Testing a Patch
**Build the source.** Verify the patched source compiles without errors and all warnings are resolved.
```
$ make -s -j$(nproc)
```
**Run the style checker.** Verify the patched source passes the style checker, the command should return without printing any output.
```
$ make cstyle
```
**Open a Pull Request.** When your patch builds cleanly and passes the style checks [open a new pull request][github-pr]. The pull request will be queued for [automated testing][buildbot]. As part of the testing the change is built for a wide range of Linux distributions and a battery of functional and stress tests are run to detect regressions.
```
$ git push <your-github-account> openzfs-<issue-nr>
```
**Fix any issues.** Testing takes approximately 2 hours to fully complete and the results are posted in the GitHub [pull request][openzfs-pr]. All the tests are expected to pass and you should investigate and resolve any test failures. The [test scripts][buildbot-scripts] are all available and designed to run locally in order to reproduce an issue. Once you've resolved the issue, force-update the pull request to trigger a new round of testing. Iterate until all the tests are passing.
```
# Fix issue, amend commit, force update branch.
$ git commit --amend
$ git push --force <your-github-account> openzfs-<issue-nr>
```
### Merging the Patch
**Review.** Lastly one of the ZFS on Linux maintainers will make a final review of the patch and may request additional changes. Once the maintainer is happy with the final version of the patch they will add their signed-off-by, merge it to the master branch, mark it complete on the tracking page, and thank you for your contribution to the project!
## Porting ZFS on Linux changes to OpenZFS
Often an issue will be first fixed in ZFS on Linux or a new feature developed. Changes which are not Linux specific should be submitted upstream to the OpenZFS GitHub repository for review. The process for this is described in the [OpenZFS README][openzfs-repo].
[github-account]: https://help.github.com/articles/signing-up-for-a-new-github-account/
[github-pr]: https://help.github.com/articles/creating-a-pull-request/
[github-fork]: https://help.github.com/articles/fork-a-repo/
[buildbot]: https://github.com/zfsonlinux/zfs-buildbot/
[buildbot-scripts]: https://github.com/zfsonlinux/zfs-buildbot/tree/master/scripts
[spl-repo]: https://github.com/zfsonlinux/spl
[zfs-repo]: https://github.com/zfsonlinux/zfs
[openzfs-repo]: https://github.com/openzfs/openzfs/
[openzfs-6873]: https://github.com/openzfs/openzfs/commit/ee06391
[zol-6873]: https://github.com/zfsonlinux/zfs/commit/b3744ae
[openzfs-pr]: https://github.com/zfsonlinux/zfs/pull/4594
[guidelines]: http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +1,3 @@
This page is obsolete, use http://build.zfsonlinux.org/openzfs-tracking.html
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,3 +1,7 @@
**This page will be moved to: https://openzfs.github.io/openzfs-docs/Developer%20Resources/OpenZFS%20Exceptions.html**
**DON'T EDIT THIS PAGE!**
Commit exceptions used to explicitly reference a given Linux commit.
These exceptions are useful for a variety of reasons.

@ -1,16 +1,3 @@
OpenZFS is storage software which combines the functionality of a traditional filesystem, a volume manager, and more. OpenZFS includes protection against data corruption, support for high storage capacities, efficient data compression, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, remote replication with ZFS send and receive, and RAID-Z.
This page was moved to: https://openzfs.github.io/openzfs-docs/Project%20and%20Community/index.html
OpenZFS brings together developers from the illumos, Linux, FreeBSD and OS X platforms, and a wide range of companies -- both online and at the annual OpenZFS Developer Summit. High-level goals of the project include raising awareness of the quality, utility and availability of open-source implementations of ZFS, encouraging open communication about ongoing efforts toward improving open-source variants of ZFS, and ensuring consistent reliability, functionality and performance of all distributions of ZFS.
[Admin Documentation][admin-docs]
[[FAQ]]
[[Mailing Lists]]
[Releases][releases]
[Issue Tracker][issues]
[Roadmap][roadmap]
[[Signing Keys]]
[admin-docs]: https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
[issues]: https://github.com/zfsonlinux/zfs/issues
[roadmap]: https://github.com/zfsonlinux/zfs/milestones
[releases]: https://github.com/zfsonlinux/zfs/releases
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,108 +1,3 @@
[kABI-tracking kmod][kmod] or [DKMS][dkms] style packages are provided for RHEL / CentOS based distributions from the official zfsonlinux.org repository. These packages track the official ZFS on Linux tags and are updated as new versions are released. Packages are available for the following configurations:
This page was moved to: https://openzfs.github.io/openzfs-docs/Getting%20Started/RHEL%20and%20CentOS.html
**EL Releases:** 6.x, 7.x, 8.x
**Architectures:** x86_64
To simplify installation a zfs-release package is provided which includes a zfs.repo configuration file and the ZFS on Linux public signing key. All official ZFS on Linux packages are signed using this key, and by default yum will verify a package's signature before allowing it to be installed. Users are strongly encouraged to verify the authenticity of the ZFS on Linux public key using the fingerprint listed here.
**Location:** /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
**EL6 Package:** http://download.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
**EL7.5 Package:** http://download.zfsonlinux.org/epel/zfs-release.el7_5.noarch.rpm
**EL7.6 Package:** http://download.zfsonlinux.org/epel/zfs-release.el7_6.noarch.rpm
**EL7.7 Package:** http://download.zfsonlinux.org/epel/zfs-release.el7_7.noarch.rpm
**EL7.8 Package:** http://download.zfsonlinux.org/epel/zfs-release.el7_8.noarch.rpm
**EL8.0 Package:** http://download.zfsonlinux.org/epel/zfs-release.el8_0.noarch.rpm
**EL8.1 Package:** http://download.zfsonlinux.org/epel/zfs-release.el8_1.noarch.rpm
**Note:** Starting with EL7.7 **zfs-0.8** will become the default, EL7.6 and older will continue to track the **zfs-0.7** point releases.
**Download from:** [pgp.mit.edu][pubkey]
**Fingerprint:** C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
```
$ sudo yum install http://download.zfsonlinux.org/epel/zfs-release.<dist>.noarch.rpm
$ gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
pub 2048R/F14AB620 2013-03-21 ZFS on Linux <zfs@zfsonlinux.org>
Key fingerprint = C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
sub 2048R/99685629 2013-03-21
```
After installing the zfs-release package and verifying the public key users can opt to install either the kABI-tracking kmod or DKMS style packages. For most users the kABI-tracking kmod packages are recommended in order to avoid needing to rebuild ZFS for every kernel update. DKMS packages are recommended for users running a non-distribution kernel or for users who wish to apply local customizations to ZFS on Linux.
## kABI-tracking kmod
By default the zfs-release package is configured to install DKMS style packages so they will work with a wide range of kernels. In order to install the kABI-tracking kmods the default repository in the */etc/yum.repos.d/zfs.repo* file must be switched from *zfs* to *zfs-kmod*. Keep in mind that the kABI-tracking kmods are only verified to work with the distribution provided kernel.
```diff
# /etc/yum.repos.d/zfs.repo
[zfs]
name=ZFS on Linux for EL 7 - dkms
baseurl=http://download.zfsonlinux.org/epel/7/$basearch/
-enabled=1
+enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
@@ -9,7 +9,7 @@
[zfs-kmod]
name=ZFS on Linux for EL 7 - kmod
baseurl=http://download.zfsonlinux.org/epel/7/kmod/$basearch/
-enabled=0
+enabled=1
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
```
The ZFS on Linux packages can now be installed using yum.
```
$ sudo yum install zfs
```
## DKMS
To install DKMS style packages issue the following yum commands. First add the [EPEL repository](https://fedoraproject.org/wiki/EPEL) which provides DKMS by installing the *epel-release* package, then the *kernel-devel* and *zfs* packages. Note that it is important to make sure that the matching *kernel-devel* package is installed for the running kernel since DKMS requires it to build ZFS.
```
$ sudo yum install epel-release
$ sudo yum install "kernel-devel-uname-r == $(uname -r)" zfs
```
## Important Notices
### RHEL/CentOS 7.x kmod package upgrade
When updating to a new RHEL/CentOS 7.x release the existing kmod packages will not work due to upstream kABI changes in the kernel. After upgrading to 7.x users must uninstall ZFS and then reinstall it as described in the [kABI-tracking kmod](https://github.com/zfsonlinux/zfs/wiki/RHEL-%26-CentOS/#kabi-tracking-kmod) section. Compatible kmod packages will be installed from the matching CentOS 7.x repository.
```
$ sudo yum remove zfs zfs-kmod spl spl-kmod libzfs2 libnvpair1 libuutil1 libzpool2 zfs-release
$ sudo yum install http://download.zfsonlinux.org/epel/zfs-release.el7_6.noarch.rpm
$ sudo yum autoremove
$ sudo yum clean metadata
$ sudo yum install zfs
```
### Switching from DKMS to kABI-tracking kmod
When switching from DKMS to kABI-tracking kmods first uninstall the existing DKMS packages. This should remove the kernel modules for all installed kernels but in practice it's not always perfectly reliable. Therefore, it's recommended that you manually remove any remaining ZFS kernel modules as shown. At this point the kABI-tracking kmods can be installed as described in the section above.
```
$ sudo yum remove zfs zfs-kmod spl spl-kmod libzfs2 libnvpair1 libuutil1 libzpool2 zfs-release
$ sudo find /lib/modules/ \( -name "splat.ko" -or -name "zcommon.ko" \
-or -name "zpios.ko" -or -name "spl.ko" -or -name "zavl.ko" -or \
-name "zfs.ko" -or -name "znvpair.ko" -or -name "zunicode.ko" \) \
-exec /bin/rm {} \;
```
## Testing Repositories
In addition to the primary *zfs* repository a *zfs-testing* repository is available. This repository, which is disabled by default, contains the latest version of ZFS on Linux which is under active development. These packages are made available in order to get feedback from users regarding the functionality and stability of upcoming releases. These packages **should not** be used on production systems. Packages from the testing repository can be installed as follows.
```
$ sudo yum --enablerepo=zfs-testing install kernel-devel zfs
```
[kmod]: http://elrepoproject.blogspot.com/2016/02/kabi-tracking-kmod-packages.html
[dkms]: https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support
[pubkey]: http://pgp.mit.edu/pks/lookup?search=0xF14AB620&op=index&fingerprint=on
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,57 +1,4 @@
All tagged ZFS on Linux [releases][releases] are signed by the official maintainer for that branch. These signatures are automatically verified by GitHub and can be checked locally by downloading the maintainers public key.
## Maintainers
This page was moved to: https://openzfs.github.io/openzfs-docs/Project%20and%20Community/Signing%20Keys.html
### Release branch (spl/zfs-*-release)
**Maintainer:** [Ned Bass][nedbass]
**Download:** [pgp.mit.edu][nedbass-pubkey]
**Key ID:** C77B9667
**Fingerprint:** 29D5 610E AE29 41E3 55A2 FE8A B974 67AA C77B 9667
**Maintainer:** [Tony Hutter][tonyhutter]
**Download:** [pgp.mit.edu][tonyhutter-pubkey]
**Key ID:** D4598027
**Fingerprint:** 4F3B A9AB 6D1F 8D68 3DC2 DFB5 6AD8 60EE D459 8027
### Master branch (master)
**Maintainer:** [Brian Behlendorf][behlendorf]
**Download:** [pgp.mit.edu][behlendorf-pubkey]
**Key ID:** C6AF658B
**Fingerprint:** C33D F142 657E D1F7 C328 A296 0AB9 E991 C6AF 658B
## Checking the Signature of a Git Tag
First import the public key listed above into your keyring.
```
$ gpg --keyserver pgp.mit.edu --recv C6AF658B
gpg: requesting key C6AF658B from hkp server pgp.mit.edu
gpg: key C6AF658B: "Brian Behlendorf <behlendorf1@llnl.gov>" not changed
gpg: Total number processed: 1
gpg: unchanged: 1
```
After the public key is imported the signature of a git tag can be verified as shown.
```
$ git tag --verify zfs-0.6.5
object 7a27ad00ae142b38d4aef8cc0af7a72b4c0e44fe
type commit
tag zfs-0.6.5
tagger Brian Behlendorf <behlendorf1@llnl.gov> 1441996302 -0700
ZFS Version 0.6.5
gpg: Signature made Fri 11 Sep 2015 11:31:42 AM PDT using DSA key ID C6AF658B
gpg: Good signature from "Brian Behlendorf <behlendorf1@llnl.gov>"
gpg: aka "Brian Behlendorf (LLNL) <behlendorf1@llnl.gov>"
```
[nedbass]: https://github.com/nedbass
[nedbass-pubkey]: http://pgp.mit.edu/pks/lookup?op=vindex&search=0xB97467AAC77B9667&fingerprint=on
[tonyhutter]: https://github.com/tonyhutter
[tonyhutter-pubkey]: http://pgp.mit.edu/pks/lookup?op=vindex&search=0x6ad860eed4598027&fingerprint=on
[behlendorf]: https://github.com/behlendorf
[behlendorf-pubkey]: http://pgp.mit.edu/pks/lookup?op=vindex&search=0x0AB9E991C6AF658B&fingerprint=on
[releases]: https://github.com/zfsonlinux/zfs/releases
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,66 +1,3 @@
# DRAFT
This page contains tips for troubleshooting ZFS on Linux and what info developers might want for bug triage.
This page was moved to: https://openzfs.github.io/openzfs-docs/Basics%20concepts/Troubleshooting.html
- [About Log Files](#about-log-files)
- [Generic Kernel Log](#generic-kernel-log)
- [ZFS Kernel Module Debug Messages](#zfs-kernel-module-debug-messages)
- [Unkillable Process](#unkillable-process)
- [ZFS Events](#zfs-events)
***
## About Log Files
Log files can be very useful for troubleshooting. In some cases, interesting information is stored in multiple log files that are correlated to system events.
Pro tip: logging infrastructure tools like _elasticsearch_, _fluentd_, _influxdb_, or _splunk_ can simplify log analysis and event correlation.
### Generic Kernel Log
Typically, Linux kernel log messages are available from `dmesg -T`, `/var/log/syslog`, or wherever kernel log messages are sent (e.g., by `rsyslogd`).
### ZFS Kernel Module Debug Messages
The ZFS kernel modules use an internal log buffer for detailed logging information.
This log information is available in the pseudo file `/proc/spl/kstat/zfs/dbgmsg` for ZFS builds where the ZFS module parameter [zfs_dbgmsg_enable](https://github.com/zfsonlinux/zfs/wiki/ZFS-on-Linux-Module-Parameters#zfs_dbgmsg_enable) is set to 1.
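For example, a quick way to enable and inspect the debug log (a sketch; requires root):
```
echo 1 | sudo tee /sys/module/zfs/parameters/zfs_dbgmsg_enable
sudo tail /proc/spl/kstat/zfs/dbgmsg
```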
***
## Unkillable Process
Symptom: a `zfs` or `zpool` command appears hung, does not return, and is not killable
Likely cause: kernel thread hung or panic
Log files of interest: [Generic Kernel Log](#generic-kernel-log), [ZFS Kernel Module Debug Messages](#zfs-kernel-module-debug-messages)
Important information: if a kernel thread is stuck, then a backtrace of the stuck thread may appear in the logs.
In some cases, the stuck thread is not logged until the deadman timer expires. See also [debug tunables](https://github.com/zfsonlinux/zfs/wiki/ZFS-on-Linux-Module-Parameters#debug)
***
## ZFS Events
ZFS uses an event-based messaging interface for communication of important events to
other consumers running on the system. The ZFS Event Daemon (zed) is a userland daemon that
listens for these events and processes them. zed is extensible so you can write shell scripts
or other programs that subscribe to events and take action. For example, the script usually
installed at `/etc/zfs/zed.d/all-syslog.sh` writes a formatted event message to `syslog.`
See the man page for `zed(8)` for more information.
A history of events is also available via the `zpool events` command. This history begins at
ZFS kernel module load and includes events from any pool. These events are stored in RAM and
limited in count to a value determined by the kernel tunable [zfs_zevent_len_max](https://github.com/zfsonlinux/zfs/wiki/ZFS-on-Linux-Module-Parameters#zfs_zevent_len_max).
`zed` has an internal throttling mechanism to prevent overconsumption of system resources
processing ZFS events.
More detailed information about events is observable using `zpool events -v`.
The contents of the verbose events are subject to change, based on the event and information
available at the time of the event.
Each event has a class identifier used for filtering event types. Commonly seen events are
those related to pool management with class `sysevent.fs.zfs.*` including import, export,
configuration updates, and `zpool history` updates.
Events related to errors are reported with class `ereport.*`. These can be invaluable for
troubleshooting. Some faults can cause multiple ereports as various layers of the software
deal with the fault. For example, on a simple pool without parity protection, a faulty
disk could cause an `ereport.io` during a read from the disk that results in an
`ereport.fs.zfs.checksum` at the pool level. These events are also reflected by the error
counters observed in `zpool status`.
If you see checksum or read/write errors in `zpool status` then there should be one or more
corresponding ereports in the `zpool events` output.
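For example, a sketch of correlating the two views (the pool name is an assumption):
```
zpool status -v tank      # per-vdev read/write/checksum error counters
zpool events -v | less    # detailed ereports recorded since module load
```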
# DRAFT
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,604 +1,3 @@
### Newer release available
* See [[Ubuntu 18.04 Root on ZFS]] for new installs.
This page was moved to: https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2016.04%20Root%20on%20ZFS.html
### Caution
* This HOWTO uses a whole physical disk.
* Do not use these instructions for dual-booting.
* Backup your data. Any existing data will be lost.
### System Requirements
* [64-bit Ubuntu 16.04.5 ("Xenial") Desktop CD](http://releases.ubuntu.com/16.04/ubuntu-16.04.5-desktop-amd64.iso) (*not* the server image)
* [A 64-bit kernel is *strongly* encouraged.](https://github.com/zfsonlinux/zfs/wiki/FAQ#32-bit-vs-64-bit-systems)
* A drive which presents 512B logical sectors. Installing on a drive which presents 4KiB logical sectors (a “4Kn” drive) should work with UEFI partitioning, but this has not been tested.
Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory is recommended for normal performance in basic workloads. If you wish to use deduplication, you will need [massive amounts of RAM](http://wiki.freebsd.org/ZFSTuningGuide#Deduplication). Enabling deduplication is a permanent change that cannot be easily reverted.
## Support
If you need help, reach out to the community using the [zfs-discuss mailing list](https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists) or IRC at #zfsonlinux on [freenode](https://freenode.net/). If you have a bug report or feature request related to this HOWTO, please [file a new issue](https://github.com/zfsonlinux/zfs/issues/new) and mention @rlaager.
## Encryption
This guide supports the three different Ubuntu encryption options: unencrypted, LUKS (full-disk encryption), and eCryptfs (home directory encryption).
Unencrypted does not encrypt anything, of course. All ZFS features are fully available. With no encryption happening, this option naturally has the best performance.
LUKS encrypts almost everything: the OS, swap, home directories, and anything else. The only unencrypted data is the bootloader, kernel, and initrd. The system cannot boot without the passphrase being entered at the console. All ZFS features are fully available. Performance is good, but LUKS sits underneath ZFS, so if multiple disks (mirror or raidz configurations) are used, the data has to be encrypted once per disk.
eCryptfs protects the contents of the specified home directories. This guide also recommends encrypted swap when using eCryptfs. Other operating system directories, which may contain sensitive data, logs, and/or configuration information, are not encrypted. ZFS compression is useless on the encrypted home directories. ZFS snapshots are not automatically and transparently mounted when using eCryptfs, and manually mounting them requires serious knowledge of eCryptfs administrative commands. eCryptfs sits above ZFS, so the encryption only happens once, regardless of the number of disks in the pool. The performance of eCryptfs may be lower than LUKS in single-disk scenarios.
If you want encryption, LUKS is recommended.
## Step 1: Prepare The Install Environment
1.1 Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to the Internet as appropriate (e.g. join your WiFi network). Open a terminal (press Ctrl-Alt-T).
1.2 Setup and update the repositories:
$ sudo apt-add-repository universe
$ sudo apt update
1.3 Optional: Start the OpenSSH server in the Live CD environment:
If you have a second system, using SSH to access the target system can be convenient.
$ passwd
There is no current password; hit enter at that prompt.
$ sudo apt --yes install openssh-server
**Hint:** You can find your IP address with `ip addr show scope global | grep inet`. Then, from your main machine, connect with `ssh ubuntu@IP`.
1.4 Become root:
$ sudo -i
1.5 Install ZFS in the Live CD environment:
# apt install --yes debootstrap gdisk zfs-initramfs
**Note:** You can ignore the two error lines about "AppStream". They are harmless.
## Step 2: Disk Formatting
2.1 If you are re-using a disk, clear it as necessary:
If the disk was previously used in an MD array, zero the superblock:
# apt install --yes mdadm
# mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1
Clear the partition table:
# sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1
2.2 Partition your disk:
Run this if you need legacy (BIOS) booting:
# sgdisk -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-SATA_disk1
Run this for UEFI booting (for use now or in the future):
# sgdisk -n3:1M:+512M -t3:EF00 /dev/disk/by-id/scsi-SATA_disk1
Choose one of the following options:
2.2a Unencrypted or eCryptfs:
# sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-SATA_disk1
2.2b LUKS:
# sgdisk -n4:0:+512M -t4:8300 /dev/disk/by-id/scsi-SATA_disk1
# sgdisk -n1:0:0 -t1:8300 /dev/disk/by-id/scsi-SATA_disk1
Always use the long `/dev/disk/by-id/*` aliases with ZFS. Using the `/dev/sd*` device nodes directly can cause sporadic import failures, especially on systems that have more than one storage pool.
**Hints:**
* `ls -la /dev/disk/by-id` will list the aliases.
* Are you doing this in a virtual machine? If your virtual disk is missing from `/dev/disk/by-id`, use `/dev/vda` if you are using KVM with virtio; otherwise, read the [troubleshooting](https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS#troubleshooting) section.
2.3 Create the root pool:
Choose one of the following options:
2.3a Unencrypted or eCryptfs:
# zpool create -o ashift=12 \
-O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \
-O mountpoint=/ -R /mnt \
rpool /dev/disk/by-id/scsi-SATA_disk1-part1
2.3b LUKS:
# cryptsetup luksFormat -c aes-xts-plain64 -s 256 -h sha256 \
/dev/disk/by-id/scsi-SATA_disk1-part1
# cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part1 luks1
# zpool create -o ashift=12 \
-O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \
-O mountpoint=/ -R /mnt \
rpool /dev/mapper/luks1
**Notes:**
* The use of `ashift=12` is recommended here because many drives today have 4KiB (or larger) physical sectors, even though they present 512B logical sectors. Also, a future replacement drive may have 4KiB physical sectors (in which case `ashift=12` is desirable) or 4KiB logical sectors (in which case `ashift=12` is required).
* Setting `normalization=formD` eliminates some corner cases relating to UTF-8 filename normalization. It also implies `utf8only=on`, which means that only UTF-8 filenames are allowed. If you care to support non-UTF-8 filenames, do not use this option. For a discussion of why requiring UTF-8 filenames may be a bad idea, see [The problems with enforced UTF-8 only filenames](http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames).
* Make sure to include the `-part1` portion of the drive path. If you forget that, you are specifying the whole disk, which ZFS will then re-partition, and you will lose the bootloader partition(s).
* For LUKS, the key size chosen is 256 bits. However, XTS mode requires two keys, so the LUKS key is split in half. Thus, `-s 256` means AES-128, which is the LUKS and Ubuntu default.
* Your passphrase will likely be the weakest link. Choose wisely. See [section 5 of the cryptsetup FAQ](https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions#5-security-aspects) for guidance.
**Hints:**
* The root pool does not have to be a single disk; it can have a mirror or raidz topology. In that case, repeat the partitioning commands for all the disks which will be part of the pool. Then, create the pool using `zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part1 /dev/disk/by-id/scsi-SATA_disk2-part1` (or replace `mirror` with `raidz`, `raidz2`, or `raidz3` and list the partitions from additional disks).
* The pool name is arbitrary. On systems that can automatically install to ZFS, the root pool is named `rpool` by default. If you work with multiple systems, it might be wise to use `hostname`, `hostname0`, or `hostname-1` instead.
## Step 3: System Installation
3.1 Create a filesystem dataset to act as a container:
# zfs create -o canmount=off -o mountpoint=none rpool/ROOT
On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through `pkg image-update` or `beadm`. Similar functionality for APT is possible but currently unimplemented. Even without such a tool, the `ROOT` container can still be used for manually created clones.
3.2 Create a filesystem dataset for the root filesystem of the Ubuntu system:
# zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
# zfs mount rpool/ROOT/ubuntu
With ZFS, it is not normally necessary to use a mount command (either `mount` or `zfs mount`). This situation is an exception because of `canmount=noauto`.
3.3 Create datasets:
# zfs create -o setuid=off rpool/home
# zfs create -o mountpoint=/root rpool/home/root
# zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
# zfs create -o com.sun:auto-snapshot=false rpool/var/cache
# zfs create rpool/var/log
# zfs create rpool/var/spool
# zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
If you use /srv on this system:
# zfs create rpool/srv
If this system will have games installed:
# zfs create rpool/var/games
If this system will store local email in /var/mail:
# zfs create rpool/var/mail
If this system will use NFS (locking):
# zfs create -o com.sun:auto-snapshot=false \
-o mountpoint=/var/lib/nfs rpool/var/nfs
The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling back user data such as logs (in `/var/log`). This will be especially important if/when a `beadm` or similar utility is integrated. Since we are creating multiple datasets anyway, it is trivial to add some restrictions (for extra security) at the same time. The `com.sun:auto-snapshot` setting is used by some ZFS snapshot utilities to exclude transient data.
3.4 For LUKS installs only:
# mke2fs -t ext2 /dev/disk/by-id/scsi-SATA_disk1-part4
# mkdir /mnt/boot
# mount /dev/disk/by-id/scsi-SATA_disk1-part4 /mnt/boot
3.5 Install the minimal system:
# chmod 1777 /mnt/var/tmp
# debootstrap xenial /mnt
# zfs set devices=off rpool
The `debootstrap` command leaves the new system in an unconfigured state. An alternative to using `debootstrap` is to copy the entirety of a working system into the new ZFS root.
## Step 4: System Configuration
4.1 Configure the hostname (change `HOSTNAME` to the desired hostname).
# echo HOSTNAME > /mnt/etc/hostname
# vi /mnt/etc/hosts
Add a line:
127.0.1.1 HOSTNAME
or if the system has a real name in DNS:
127.0.1.1 FQDN HOSTNAME
**Hint:** Use `nano` if you find `vi` confusing.
4.2 Configure the network interface:
Find the interface name:
# ip addr show
# vi /mnt/etc/network/interfaces.d/NAME
auto NAME
iface NAME inet dhcp
Customize this file if the system is not a DHCP client.
4.3 Configure the package sources:
# vi /mnt/etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu xenial main universe
deb-src http://archive.ubuntu.com/ubuntu xenial main universe
deb http://security.ubuntu.com/ubuntu xenial-security main universe
deb-src http://security.ubuntu.com/ubuntu xenial-security main universe
deb http://archive.ubuntu.com/ubuntu xenial-updates main universe
deb-src http://archive.ubuntu.com/ubuntu xenial-updates main universe
4.4 Bind the virtual filesystems from the LiveCD environment to the new system and `chroot` into it:
# mount --rbind /dev /mnt/dev
# mount --rbind /proc /mnt/proc
# mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash --login
**Note:** This is using `--rbind`, not `--bind`.
4.5 Configure a basic system environment:
# locale-gen en_US.UTF-8
Even if you prefer a non-English system language, always ensure that `en_US.UTF-8` is available.
# echo LANG=en_US.UTF-8 > /etc/default/locale
# dpkg-reconfigure tzdata
# ln -s /proc/self/mounts /etc/mtab
# apt update
# apt install --yes ubuntu-minimal
If you prefer nano over vi, install it:
# apt install --yes nano
4.6 Install ZFS in the chroot environment for the new system:
# apt install --yes --no-install-recommends linux-image-generic
# apt install --yes zfs-initramfs
4.7 For LUKS installs only:
# echo UUID=$(blkid -s UUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part4) \
/boot ext2 defaults 0 2 >> /etc/fstab
# apt install --yes cryptsetup
# echo luks1 UUID=$(blkid -s UUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part1) none \
luks,discard,initramfs > /etc/crypttab
# vi /etc/udev/rules.d/99-local-crypt.rules
ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}"
ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}"
# ln -s /dev/mapper/luks1 /dev/luks1
**Notes:**
* The use of `initramfs` is a work-around for the fact that [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).
* The 99-local-crypt.rules file and symlink in /dev are a work-around for [grub-probe assuming all devices are in /dev](https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1527727).
4.8 Install GRUB
Choose one of the following options:
4.8a Install GRUB for legacy (MBR) booting
# apt install --yes grub-pc
Install GRUB to the disk(s), not the partition(s).
4.8b Install GRUB for UEFI booting
# apt install dosfstools
# mkdosfs -F 32 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part3
# mkdir /boot/efi
# echo PARTUUID=$(blkid -s PARTUUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part3) \
/boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
# mount /boot/efi
# apt install --yes grub-efi-amd64
4.9 Setup system groups:
# addgroup --system lpadmin
# addgroup --system sambashare
4.10 Set a root password
# passwd
4.11 Fix filesystem mount ordering
[Until ZFS gains a systemd mount generator](https://github.com/zfsonlinux/zfs/issues/4898), there are races between mounting filesystems and starting certain daemons. In practice, the issues (e.g. [#5754](https://github.com/zfsonlinux/zfs/issues/5754)) seem to be with certain filesystems in `/var`, specifically `/var/log` and `/var/tmp`. Setting these to use `legacy` mounting, and listing them in `/etc/fstab` makes systemd aware that these are separate mountpoints. In turn, `rsyslog.service` depends on `var-log.mount` by way of `local-fs.target` and services using the `PrivateTmp` feature of systemd automatically use `After=var-tmp.mount`.
# zfs set mountpoint=legacy rpool/var/log
# zfs set mountpoint=legacy rpool/var/tmp
# cat >> /etc/fstab << EOF
rpool/var/log /var/log zfs defaults 0 0
rpool/var/tmp /var/tmp zfs defaults 0 0
EOF
## Step 5: GRUB Installation
5.1 Verify that the ZFS root filesystem is recognized:
# grub-probe /
zfs
**Note:** GRUB uses `zpool status` in order to determine the location of devices. [grub-probe assumes all devices are in /dev](https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1527727). The `zfs-initramfs` package [ships udev rules that create symlinks](https://packages.ubuntu.com/xenial-updates/all/zfs-initramfs/filelist) to [work around the problem](https://bugs.launchpad.net/ubuntu/+source/zfs-initramfs/+bug/1530953), but [there have still been reports of problems](https://github.com/zfsonlinux/grub/issues/5#issuecomment-249427634). If this happens, you will get an error saying `grub-probe: error: failed to get canonical path` and should run the following:
# export ZPOOL_VDEV_NAME_PATH=YES
5.2 Refresh the initrd files:
# update-initramfs -c -k all
update-initramfs: Generating /boot/initrd.img-4.4.0-21-generic
**Note:** When using LUKS, this will print "WARNING could not determine root device from /etc/fstab". This is because [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).
5.3 Optional (but highly recommended): Make debugging GRUB easier:
# vi /etc/default/grub
Comment out: GRUB_HIDDEN_TIMEOUT=0
Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
Uncomment: GRUB_TERMINAL=console
Save and quit.
Later, once the system has rebooted twice and you are sure everything is working, you can undo these changes, if desired.
5.4 Update the boot configuration:
# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.4.0-21-generic
Found initrd image: /boot/initrd.img-4.4.0-21-generic
done
5.5 Install the boot loader
5.5a For legacy (MBR) booting, install GRUB to the MBR:
# grub-install /dev/disk/by-id/scsi-SATA_disk1
Installing for i386-pc platform.
Installation finished. No error reported.
Do not reboot the computer until you get exactly that result message. Note that you are installing GRUB to the whole disk, not a partition.
If you are creating a mirror, repeat the grub-install command for each disk in the pool.
5.5b For UEFI booting, install GRUB:
# grub-install --target=x86_64-efi --efi-directory=/boot/efi \
--bootloader-id=ubuntu --recheck --no-floppy
5.6 Verify that the ZFS module is installed:
# ls /boot/grub/*/zfs.mod
## Step 6: First Boot
6.1 Snapshot the initial installation:
# zfs snapshot rpool/ROOT/ubuntu@install
In the future, you will likely want to take snapshots before each upgrade, and remove old snapshots (including this one) at some point to save space.
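As a sketch of that later routine (the snapshot name and date are illustrative):
$ sudo zfs snapshot rpool/ROOT/ubuntu@pre-upgrade-2017-01-01
$ sudo apt dist-upgrade --yes
$ sudo zfs destroy rpool/ROOT/ubuntu@install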
6.2 Exit from the `chroot` environment back to the LiveCD environment:
# exit
6.3 Run these commands in the LiveCD environment to unmount all filesystems:
# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
# zpool export rpool
6.4 Reboot:
# reboot
6.5 Wait for the newly installed system to boot normally. Login as root.
6.6 Create a user account:
Choose one of the following options:
6.6a Unencrypted or LUKS:
# zfs create rpool/home/YOURUSERNAME
# adduser YOURUSERNAME
# cp -a /etc/skel/.[!.]* /home/YOURUSERNAME
# chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME
6.6b eCryptfs:
# apt install ecryptfs-utils
# zfs create -o compression=off -o mountpoint=/home/.ecryptfs/YOURUSERNAME \
rpool/home/temp-YOURUSERNAME
# adduser --encrypt-home YOURUSERNAME
# zfs rename rpool/home/temp-YOURUSERNAME rpool/home/YOURUSERNAME
The temporary name for the dataset is required to work around [a bug in ecryptfs-setup-private](https://bugs.launchpad.net/ubuntu/+source/ecryptfs-utils/+bug/1574174). Otherwise, it will fail with an error saying the home directory is already mounted; that check is not specific enough in the pattern it uses.
**Note:** Automatically mounted snapshots (i.e. the `.zfs/snapshots` directory) will not work through eCryptfs. You can do another eCryptfs mount manually if you need to access files in a snapshot. A script to automate the mounting should be possible, but has not yet been implemented.
6.7 Add your user account to the default set of groups for an administrator:
# usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo YOURUSERNAME
6.8 Mirror GRUB
If you installed to multiple disks, install GRUB on the additional disks:
6.8a For legacy (MBR) booting:
# dpkg-reconfigure grub-pc
Hit enter until you get to the device selection screen.
Select (using the space bar) all of the disks (not partitions) in your pool.
6.8b UEFI
# umount /boot/efi
For the second and subsequent disks (increment ubuntu-2 to -3, etc.):
# dd if=/dev/disk/by-id/scsi-SATA_disk1-part3 \
of=/dev/disk/by-id/scsi-SATA_disk2-part3
# efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
-p 3 -L "ubuntu-2" -l '\EFI\Ubuntu\grubx64.efi'
# mount /boot/efi
## Step 7: Configure Swap
7.1 Create a volume dataset (zvol) for use as a swap device:
# zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
You can adjust the size (the `4G` part) to your needs.
The compression algorithm is set to `zle` because it is the cheapest available algorithm. As this guide recommends `ashift=12` (4 kiB blocks on disk), the common case of a 4 kiB page size means that no compression algorithm can reduce I/O. The exception is all-zero pages, which are dropped by ZFS; but some form of compression has to be enabled to get this behavior.
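If you want to double-check the resulting zvol properties afterwards (purely informational; a sketch):
# zfs get compression,volblocksize,refreservation rpool/swap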
7.2 Configure the swap device:
Choose one of the following options:
7.2a Unencrypted or LUKS:
**Caution**: Always use long `/dev/zvol` aliases in configuration files. Never use a short `/dev/zdX` device name.
# mkswap -f /dev/zvol/rpool/swap
# echo /dev/zvol/rpool/swap none swap defaults 0 0 >> /etc/fstab
7.2b eCryptfs:
# apt install cryptsetup
# echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom \
swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
# systemctl daemon-reload
# systemctl start systemd-cryptsetup@cryptswap1.service
# echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
7.3 Enable the swap device:
# swapon -av
## Step 8: Full Software Installation
8.1 Upgrade the minimal system:
# apt dist-upgrade --yes
8.2 Install a regular set of software:
Choose one of the following options:
8.2a Install a command-line environment only:
# apt install --yes ubuntu-standard
8.2b Install a full GUI environment:
# apt install --yes ubuntu-desktop
**Hint**: If you are installing a full GUI environment, you will likely want to manage your network with NetworkManager. In that case, `rm /etc/network/interfaces.d/eth0`.
8.3 Optional: Disable log compression:
As `/var/log` is already compressed by ZFS, logrotate's compression is going to burn CPU and disk I/O for (in most cases) very little gain. Also, if you are making snapshots of `/var/log`, logrotate's compression will actually waste space, as the uncompressed data will live on in the snapshot. You can edit the files in `/etc/logrotate.d` by hand to comment out `compress`, or use this loop (copy-and-paste highly recommended):
# for file in /etc/logrotate.d/* ; do
if grep -Eq "(^|[^#y])compress" "$file" ; then
sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
fi
done
8.4 Reboot:
# reboot
### Step 9: Final Cleanup
9.1 Wait for the system to boot normally. Login using the account you created. Ensure the system (including networking) works normally.
9.2 Optional: Delete the snapshot of the initial installation:
$ sudo zfs destroy rpool/ROOT/ubuntu@install
9.3 Optional: Disable the root password
$ sudo usermod -p '*' root
9.4 Optional:
If you prefer the graphical boot process, you can re-enable it now. If you are using LUKS, it makes the prompt look nicer.
$ sudo vi /etc/default/grub
Uncomment GRUB_HIDDEN_TIMEOUT=0
Add quiet and splash to GRUB_CMDLINE_LINUX_DEFAULT
Comment out GRUB_TERMINAL=console
Save and quit.
$ sudo update-grub
## Troubleshooting
### Rescuing using a Live CD
Boot the Live CD and open a terminal.
Become root and install the ZFS utilities:
$ sudo -i
# apt update
# apt install --yes zfsutils-linux
This will automatically import your pool. Export it and re-import it to get the mounts right:
# zpool export -a
# zpool import -N -R /mnt rpool
# zfs mount rpool/ROOT/ubuntu
# zfs mount -a
If needed, you can chroot into your installed environment:
# mount --rbind /dev /mnt/dev
# mount --rbind /proc /mnt/proc
# mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash --login
Do whatever you need to do to fix your system.
When done, cleanup:
# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
# zpool export rpool
# reboot
### MPT2SAS
Most problem reports for this tutorial involve `mpt2sas` hardware that does slow asynchronous drive initialization, like some IBM M1015 or OEM-branded cards that have been flashed to the reference LSI firmware.
The basic problem is that disks on these controllers are not visible to the Linux kernel until after the regular system is started, and ZoL does not hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.
Most LSI cards are perfectly compatible with ZoL. If your card has this glitch, try setting rootdelay=X in GRUB_CMDLINE_LINUX. The system will wait up to X seconds for all drives to appear before importing the pool.
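As a sketch (the 15-second value is only an example; tune it to your hardware):
# vi /etc/default/grub
Add rootdelay=15 to: GRUB_CMDLINE_LINUX
Save and quit.
# update-grub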
### Areca
Systems that require the `arcsas` blob driver should add it to the `/etc/initramfs-tools/modules` file and run `update-initramfs -c -k all`.
Upgrade or downgrade the Areca driver if something like `RIP: 0010:[<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20` appears anywhere in kernel log. ZoL is unstable on systems that emit this error message.
### VMware
* Set `disk.EnableUUID = "TRUE"` in the vmx file or vsphere configuration. Doing this ensures that `/dev/disk` aliases are created in the guest.
### QEMU/KVM/XEN
Set a unique serial number on each virtual disk using libvirt or qemu (e.g. `-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890`).
To be able to use UEFI in guests (instead of only BIOS booting), run this on the host:
$ sudo apt install ovmf
$ sudo vi /etc/libvirt/qemu.conf
Uncomment these lines:
nvram = [
"/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
"/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd"
]
$ sudo service libvirt-bin restart
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,734 +1,3 @@
### Caution
* This HOWTO uses a whole physical disk.
* Do not use these instructions for dual-booting.
* Backup your data. Any existing data will be lost.
This page was moved to: https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2018.04%20Root%20on%20ZFS.html
### System Requirements
* [Ubuntu 18.04.3 ("Bionic") Desktop CD](http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso) (*not* any server images)
* Installing on a drive which presents 4KiB logical sectors (a “4Kn” drive) only works with UEFI booting. This is not unique to ZFS. [GRUB does not and will not work on 4Kn with legacy (BIOS) booting.](http://savannah.gnu.org/bugs/?46700)
Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory is recommended for normal performance in basic workloads. If you wish to use deduplication, you will need [massive amounts of RAM](http://wiki.freebsd.org/ZFSTuningGuide#Deduplication). Enabling deduplication is a permanent change that cannot be easily reverted.
## Support
If you need help, reach out to the community using the [zfs-discuss mailing list](https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists) or IRC at #zfsonlinux on [freenode](https://freenode.net/). If you have a bug report or feature request related to this HOWTO, please [file a new issue](https://github.com/zfsonlinux/zfs/issues/new) and mention @rlaager.
## Contributing
Edit permission on this wiki is restricted. Also, GitHub wikis do not support pull requests. However, you can clone the wiki using git.
1) `git clone https://github.com/zfsonlinux/zfs.wiki.git`
2) Make your changes.
3) Use `git diff > my-changes.patch` to create a patch. (Advanced git users may wish to `git commit` to a branch and `git format-patch`.)
4) [File a new issue](https://github.com/zfsonlinux/zfs/issues/new), mention @rlaager, and attach the patch.
## Encryption
This guide supports two different encryption options: unencrypted and LUKS (full-disk encryption). ZFS native encryption has not yet been released. With either option, all ZFS features are fully available.
Unencrypted does not encrypt anything, of course. With no encryption happening, this option naturally has the best performance.
LUKS encrypts almost everything: the OS, swap, home directories, and anything else. The only unencrypted data is the bootloader, kernel, and initrd. The system cannot boot without the passphrase being entered at the console. Performance is good, but LUKS sits underneath ZFS, so if multiple disks (mirror or raidz topologies) are used, the data has to be encrypted once per disk.
## Step 1: Prepare The Install Environment
1.1 Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to the Internet as appropriate (e.g. join your WiFi network). Open a terminal (press Ctrl-Alt-T).
1.2 Setup and update the repositories:
sudo apt-add-repository universe
sudo apt update
1.3 Optional: Install and start the OpenSSH server in the Live CD environment:
If you have a second system, using SSH to access the target system can be convenient.
passwd
There is no current password; hit enter at that prompt.
sudo apt install --yes openssh-server
**Hint:** You can find your IP address with `ip addr show scope global | grep inet`. Then, from your main machine, connect with `ssh ubuntu@IP`.
1.4 Become root:
sudo -i
1.5 Install ZFS in the Live CD environment:
apt install --yes debootstrap gdisk zfs-initramfs
## Step 2: Disk Formatting
2.1 Set a variable with the disk name:
DISK=/dev/disk/by-id/scsi-SATA_disk1
Always use the long `/dev/disk/by-id/*` aliases with ZFS. Using the `/dev/sd*` device nodes directly can cause sporadic import failures, especially on systems that have more than one storage pool.
**Hints:**
* `ls -la /dev/disk/by-id` will list the aliases.
* Are you doing this in a virtual machine? If your virtual disk is missing from `/dev/disk/by-id`, use `/dev/vda` if you are using KVM with virtio; otherwise, read the [troubleshooting](#troubleshooting) section.
2.2 If you are re-using a disk, clear it as necessary:
If the disk was previously used in an MD array, zero the superblock:
apt install --yes mdadm
mdadm --zero-superblock --force $DISK
Clear the partition table:
sgdisk --zap-all $DISK
2.3 Partition your disk(s):
Run this if you need legacy (BIOS) booting:
sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
Run this for UEFI booting (for use now or in the future):
sgdisk -n2:1M:+512M -t2:EF00 $DISK
Run this for the boot pool:
sgdisk -n3:0:+1G -t3:BF01 $DISK
Choose one of the following options:
2.3a Unencrypted:
sgdisk -n4:0:0 -t4:BF01 $DISK
2.3b LUKS:
sgdisk -n4:0:0 -t4:8300 $DISK
If you are creating a mirror or raidz topology, repeat the partitioning commands for all the disks which will be part of the pool.
2.4 Create the boot pool:
zpool create -o ashift=12 -d \
-o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \
-o feature@empty_bpobj=enabled \
-o feature@enabled_txg=enabled \
-o feature@extensible_dataset=enabled \
-o feature@filesystem_limits=enabled \
-o feature@hole_birth=enabled \
-o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \
-o feature@userobj_accounting=enabled \
-O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
-O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt bpool ${DISK}-part3
You should not need to customize any of the options for the boot pool.
GRUB does not support all of the zpool features. See `spa_feature_names` in [grub-core/fs/zfs/zfs.c](http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276). This step creates a separate boot pool for `/boot` with the features limited to only those that GRUB supports, allowing the root pool to use any/all features. Note that GRUB opens the pool read-only, so all read-only compatible features are "supported" by GRUB.
**Hints:**
* If you are creating a mirror or raidz topology, create the pool using `zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3` (or replace `mirror` with `raidz`, `raidz2`, or `raidz3` and list the partitions from additional disks).
* The pool name is arbitrary. If changed, the new name must be used consistently. The `bpool` convention originated in this HOWTO.
2.5 Create the root pool:
Choose one of the following options:
2.5a Unencrypted:
zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt rpool ${DISK}-part4
2.5b LUKS:
cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
cryptsetup luksOpen ${DISK}-part4 luks1
zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt rpool /dev/mapper/luks1
* The use of `ashift=12` is recommended here because many drives today have 4KiB (or larger) physical sectors, even though they present 512B logical sectors. Also, a future replacement drive may have 4KiB physical sectors (in which case `ashift=12` is desirable) or 4KiB logical sectors (in which case `ashift=12` is required).
* Setting `-O acltype=posixacl` enables POSIX ACLs globally. If you do not want this, remove that option, but later add `-o acltype=posixacl` (note: lowercase "o") to the `zfs create` for `/var/log`, as [journald requires ACLs](https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported).
* Setting `normalization=formD` eliminates some corner cases relating to UTF-8 filename normalization. It also implies `utf8only=on`, which means that only UTF-8 filenames are allowed. If you care to support non-UTF-8 filenames, do not use this option. For a discussion of why requiring UTF-8 filenames may be a bad idea, see [The problems with enforced UTF-8 only filenames](http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames).
* Setting `relatime=on` is a middle ground between classic POSIX `atime` behavior (with its significant performance impact) and `atime=off` (which provides the best performance by completely disabling atime updates). Since Linux 2.6.30, `relatime` has been the default for other filesystems. See [RedHat's documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime) for further information.
* Setting `xattr=sa` [vastly improves the performance of extended attributes](https://github.com/zfsonlinux/zfs/commit/82a37189aac955c81a59a5ecc3400475adb56355). Inside ZFS, extended attributes are used to implement POSIX ACLs. Extended attributes can also be used by user-space applications. [They are used by some desktop GUI applications.](https://en.wikipedia.org/wiki/Extended_file_attributes#Linux) [They can be used by Samba to store Windows ACLs and DOS attributes; they are required for a Samba Active Directory domain controller.](https://wiki.samba.org/index.php/Setting_up_a_Share_Using_Windows_ACLs) Note that [`xattr=sa` is Linux-specific.](http://open-zfs.org/wiki/Platform_code_differences) If you move your `xattr=sa` pool to another OpenZFS implementation besides ZFS-on-Linux, extended attributes will not be readable (though your data will be). If portability of extended attributes is important to you, omit the `-O xattr=sa` above. Even if you do not want `xattr=sa` for the whole pool, it is probably fine to use it for `/var/log`.
* Make sure to include the `-part4` portion of the drive path. If you forget that, you are specifying the whole disk, which ZFS will then re-partition, and you will lose the bootloader partition(s).
* For LUKS, the key size chosen is 512 bits. However, XTS mode requires two keys, so the LUKS key is split in half. Thus, `-s 512` means AES-256.
* Your passphrase will likely be the weakest link. Choose wisely. See [section 5 of the cryptsetup FAQ](https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions#5-security-aspects) for guidance.
**Hints:**
* If you are creating a mirror or raidz topology, create the pool using `zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4` (or replace `mirror` with `raidz`, `raidz2`, or `raidz3` and list the partitions from additional disks). For LUKS, use `/dev/mapper/luks1`, `/dev/mapper/luks2`, etc., which you will have to create using `cryptsetup`.
* The pool name is arbitrary. If changed, the new name must be used consistently. On systems that can automatically install to ZFS, the root pool is named `rpool` by default.
## Step 3: System Installation
3.1 Create filesystem datasets to act as containers:
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=off -o mountpoint=none bpool/BOOT
On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through `pkg image-update` or `beadm`. Similar functionality for APT is possible but currently unimplemented. Even without such a tool, the `ROOT` container can still be used for manually created clones.
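As a minimal sketch of such a manual clone (the names are illustrative; making a clone bootable requires additional GRUB and fstab work not covered here):
zfs snapshot rpool/ROOT/ubuntu@before-change
zfs clone -o canmount=noauto -o mountpoint=/ \
    rpool/ROOT/ubuntu@before-change rpool/ROOT/ubuntu-testing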
3.2 Create filesystem datasets for the root and boot filesystems:
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
zfs mount rpool/ROOT/ubuntu
zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu
zfs mount bpool/BOOT/ubuntu
With ZFS, it is not normally necessary to use a mount command (either `mount` or `zfs mount`). This situation is an exception because of `canmount=noauto`.
3.3 Create datasets:
zfs create rpool/home
zfs create -o mountpoint=/root rpool/home/root
zfs create -o canmount=off rpool/var
zfs create -o canmount=off rpool/var/lib
zfs create rpool/var/log
zfs create rpool/var/spool
The datasets below are optional, depending on your preferences and/or software
choices.
If you wish to exclude these from snapshots:
zfs create -o com.sun:auto-snapshot=false rpool/var/cache
zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
chmod 1777 /mnt/var/tmp
If you use /opt on this system:
zfs create rpool/opt
If you use /srv on this system:
zfs create rpool/srv
If you use /usr/local on this system:
zfs create -o canmount=off rpool/usr
zfs create rpool/usr/local
If this system will have games installed:
zfs create rpool/var/games
If this system will store local email in /var/mail:
zfs create rpool/var/mail
If this system will use Snap packages:
zfs create rpool/var/snap
If you use /var/www on this system:
zfs create rpool/var/www
If this system will use GNOME:
zfs create rpool/var/lib/AccountsService
If this system will use Docker (which manages its own datasets & snapshots):
zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
If this system will use NFS (locking):
zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
A tmpfs is recommended later, but if you want a separate dataset for /tmp:
zfs create -o com.sun:auto-snapshot=false rpool/tmp
chmod 1777 /mnt/tmp
The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling back user data such as logs (in `/var/log`). This will be especially important if/when a `beadm` or similar utility is integrated. The `com.sun:auto-snapshot` setting is used by some ZFS snapshot utilities to exclude transient data.
If you do nothing extra, `/tmp` will be stored as part of the root filesystem. Alternatively, you can create a separate dataset for `/tmp`, as shown above. This keeps the `/tmp` data out of snapshots of your root filesystem. It also allows you to set a quota on `rpool/tmp`, if you want to limit the maximum space used. Otherwise, you can use a tmpfs (RAM filesystem) later.
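For example (a sketch; the 2G limit is arbitrary), a quota on that optional dataset can be set with:
zfs set quota=2G rpool/tmp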
3.4 Install the minimal system:
debootstrap bionic /mnt
zfs set devices=off rpool
The `debootstrap` command leaves the new system in an unconfigured state. An alternative to using `debootstrap` is to copy the entirety of a working system into the new ZFS root.
## Step 4: System Configuration
4.1 Configure the hostname (change `HOSTNAME` to the desired hostname).
echo HOSTNAME > /mnt/etc/hostname
vi /mnt/etc/hosts
Add a line:
127.0.1.1 HOSTNAME
or if the system has a real name in DNS:
127.0.1.1 FQDN HOSTNAME
**Hint:** Use `nano` if you find `vi` confusing.
4.2 Configure the network interface:
Find the interface name:
ip addr show
Adjust NAME below to match your interface name:
vi /mnt/etc/netplan/01-netcfg.yaml
network:
version: 2
ethernets:
NAME:
dhcp4: true
Customize this file if the system is not a DHCP client.
4.3 Configure the package sources:
vi /mnt/etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu bionic main universe
deb-src http://archive.ubuntu.com/ubuntu bionic main universe
deb http://security.ubuntu.com/ubuntu bionic-security main universe
deb-src http://security.ubuntu.com/ubuntu bionic-security main universe
deb http://archive.ubuntu.com/ubuntu bionic-updates main universe
deb-src http://archive.ubuntu.com/ubuntu bionic-updates main universe
4.4 Bind the virtual filesystems from the LiveCD environment to the new system and `chroot` into it:
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
chroot /mnt /usr/bin/env DISK=$DISK bash --login
**Note:** This is using `--rbind`, not `--bind`.
4.5 Configure a basic system environment:
ln -s /proc/self/mounts /etc/mtab
apt update
dpkg-reconfigure locales
Even if you prefer a non-English system language, always ensure that `en_US.UTF-8` is available.
dpkg-reconfigure tzdata
If you prefer nano over vi, install it:
apt install --yes nano
4.6 Install ZFS in the chroot environment for the new system:
apt install --yes --no-install-recommends linux-image-generic
apt install --yes zfs-initramfs
**Hint:** For the HWE kernel, install `linux-image-generic-hwe-18.04` instead of `linux-image-generic`.
4.7 For LUKS installs only, setup crypttab:
apt install --yes cryptsetup
echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
luks,discard,initramfs > /etc/crypttab
* The use of `initramfs` is a work-around for the fact that [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).
**Hint:** If you are creating a mirror or raidz topology, repeat the `/etc/crypttab` entries for `luks2`, etc. adjusting for each disk.
4.8 Install GRUB
Choose one of the following options:
4.8a Install GRUB for legacy (BIOS) booting
apt install --yes grub-pc
Install GRUB to the disk(s), not the partition(s).
4.8b Install GRUB for UEFI booting
apt install dosfstools
mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
mkdir /boot/efi
echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \
/boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
mount /boot/efi
apt install --yes grub-efi-amd64-signed shim-signed
* The `-s 1` for `mkdosfs` is only necessary for drives which present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size (given the partition size of 512 MiB) for FAT32. It also works fine on drives which present 512 B sectors.
**Note:** If you are creating a mirror or raidz topology, this step only installs GRUB on the first disk. The other disk(s) will be handled later.
4.9 Set a root password
passwd
4.10 Enable importing bpool
This ensures that `bpool` is always imported, regardless of whether `/etc/zfs/zpool.cache` exists, whether it is in the cachefile or not, or whether `zfs-import-scan.service` is enabled.
```
vi /etc/systemd/system/zfs-import-bpool.service
[Unit]
DefaultDependencies=no
Before=zfs-import-scan.service
Before=zfs-import-cache.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -N -o cachefile=none bpool
[Install]
WantedBy=zfs-import.target
```
systemctl enable zfs-import-bpool.service
4.11 Optional (but recommended): Mount a tmpfs to /tmp
If you chose to create a `/tmp` dataset above, skip this step, as they are mutually exclusive choices. Otherwise, you can put `/tmp` on a tmpfs (RAM filesystem) by enabling the `tmp.mount` unit.
cp /usr/share/systemd/tmp.mount /etc/systemd/system/
systemctl enable tmp.mount
4.12 Setup system groups:
addgroup --system lpadmin
addgroup --system sambashare
## Step 5: GRUB Installation
5.1 Verify that the ZFS boot filesystem is recognized:
grub-probe /boot
5.2 Refresh the initrd files:
update-initramfs -u -k all
**Note:** When using LUKS, this will print "WARNING could not determine root device from /etc/fstab". This is because [cryptsetup does not support ZFS](https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906).
5.3 Workaround GRUB's missing zpool-features support:
vi /etc/default/grub
Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu"
5.4 Optional (but highly recommended): Make debugging GRUB easier:
vi /etc/default/grub
Comment out: GRUB_TIMEOUT_STYLE=hidden
Set: GRUB_TIMEOUT=5
Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5
Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
Uncomment: GRUB_TERMINAL=console
Save and quit.
Later, once the system has rebooted twice and you are sure everything is working, you can undo these changes, if desired.
5.5 Update the boot configuration:
update-grub
**Note:** Ignore errors from `osprober`, if present.
5.6 Install the boot loader
5.6a For legacy (BIOS) booting, install GRUB to the MBR:
grub-install $DISK
Note that you are installing GRUB to the whole disk, not a partition.
If you are creating a mirror or raidz topology, repeat the `grub-install` command for each disk in the pool.
5.6b For UEFI booting, install GRUB:
grub-install --target=x86_64-efi --efi-directory=/boot/efi \
--bootloader-id=ubuntu --recheck --no-floppy
It is not necessary to specify the disk here. If you are creating a mirror or raidz topology, the additional disks will be handled later.
5.7 Verify that the ZFS module is installed:
ls /boot/grub/*/zfs.mod
5.8 Fix filesystem mount ordering
[Until ZFS gains a systemd mount generator](https://github.com/zfsonlinux/zfs/issues/4898), there are races between mounting filesystems and starting certain daemons. In practice, the issues (e.g. [#5754](https://github.com/zfsonlinux/zfs/issues/5754)) seem to be with certain filesystems in `/var`, specifically `/var/log` and `/var/tmp`. Setting these to use `legacy` mounting, and listing them in `/etc/fstab` makes systemd aware that these are separate mountpoints. In turn, `rsyslog.service` depends on `var-log.mount` by way of `local-fs.target` and services using the `PrivateTmp` feature of systemd automatically use `After=var-tmp.mount`.
Until there is support for mounting `/boot` in the initramfs, we also need to mount that, because it was marked `canmount=noauto`. Also, with UEFI, we need to ensure it is mounted before its child filesystem `/boot/efi`.
`rpool` is guaranteed to be imported by the initramfs, so there is no point in adding `x-systemd.requires=zfs-import.target` to those filesystems.
For UEFI booting, unmount /boot/efi first:
umount /boot/efi
Everything else applies to both BIOS and UEFI booting:
zfs set mountpoint=legacy bpool/BOOT/ubuntu
echo bpool/BOOT/ubuntu /boot zfs \
nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab
zfs set mountpoint=legacy rpool/var/log
echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab
zfs set mountpoint=legacy rpool/var/spool
echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab
If you created a /var/tmp dataset:
zfs set mountpoint=legacy rpool/var/tmp
echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab
If you created a /tmp dataset:
zfs set mountpoint=legacy rpool/tmp
echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab
## Step 6: First Boot
6.1 Snapshot the initial installation:
zfs snapshot bpool/BOOT/ubuntu@install
zfs snapshot rpool/ROOT/ubuntu@install
In the future, you will likely want to take snapshots before each upgrade, and remove old snapshots (including this one) at some point to save space.
6.2 Exit from the `chroot` environment back to the LiveCD environment:
exit
6.3 Run these commands in the LiveCD environment to unmount all filesystems:
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a
6.4 Reboot:
reboot
6.5 Wait for the newly installed system to boot normally. Login as root.
6.6 Create a user account:
zfs create rpool/home/YOURUSERNAME
adduser YOURUSERNAME
cp -a /etc/skel/. /home/YOURUSERNAME
chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME
6.7 Add your user account to the default set of groups for an administrator:
usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo YOURUSERNAME
6.8 Mirror GRUB
If you installed to multiple disks, install GRUB on the additional disks:
6.8a For legacy (BIOS) booting:
dpkg-reconfigure grub-pc
Hit enter until you get to the device selection screen.
Select (using the space bar) all of the disks (not partitions) in your pool.
6.8b UEFI
umount /boot/efi
For the second and subsequent disks (increment ubuntu-2 to -3, etc.):
dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
of=/dev/disk/by-id/scsi-SATA_disk2-part2
efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
-p 2 -L "ubuntu-2" -l '\EFI\ubuntu\shimx64.efi'
mount /boot/efi
## Step 7: (Optional) Configure Swap
**Caution**: On systems with extremely high memory pressure, using a zvol for swap can result in lockup, regardless of how much swap is still available. This issue is currently being investigated in: https://github.com/zfsonlinux/zfs/issues/7734
7.1 Create a volume dataset (zvol) for use as a swap device:
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
You can adjust the size (the `4G` part) to your needs.
The compression algorithm is set to `zle` because it is the cheapest available algorithm. As this guide recommends `ashift=12` (4 kiB blocks on disk), the common case of a 4 kiB page size means that no compression algorithm can reduce I/O. The exception is all-zero pages, which are dropped by ZFS; but some form of compression has to be enabled to get this behavior.
7.2 Configure the swap device:
**Caution**: Always use long `/dev/zvol` aliases in configuration files. Never use a short `/dev/zdX` device name.
mkswap -f /dev/zvol/rpool/swap
echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
echo RESUME=none > /etc/initramfs-tools/conf.d/resume
The `RESUME=none` is necessary to disable resuming from hibernation. This does not work, as the zvol is not present (because the pool has not yet been imported) at the time the resume script runs. If it is not disabled, the boot process hangs for 30 seconds waiting for the swap zvol to appear.
7.3 Enable the swap device:
swapon -av
## Step 8: Full Software Installation
8.1 Upgrade the minimal system:
apt dist-upgrade --yes
8.2 Install a regular set of software:
Choose one of the following options:
8.2a Install a command-line environment only:
apt install --yes ubuntu-standard
8.2b Install a full GUI environment:
apt install --yes ubuntu-desktop
vi /etc/gdm3/custom.conf
In the [daemon] section, add: InitialSetupEnable=false
**Hint**: If you are installing a full GUI environment, you will likely want to manage your network with NetworkManager:
vi /etc/netplan/01-netcfg.yaml
network:
version: 2
renderer: NetworkManager
8.3 Optional: Disable log compression:
As `/var/log` is already compressed by ZFS, logrotate's compression is going to burn CPU and disk I/O for (in most cases) very little gain. Also, if you are making snapshots of `/var/log`, logrotate's compression will actually waste space, as the uncompressed data will live on in the snapshot. You can edit the files in `/etc/logrotate.d` by hand to comment out `compress`, or use this loop (copy-and-paste highly recommended):
for file in /etc/logrotate.d/* ; do
if grep -Eq "(^|[^#y])compress" "$file" ; then
sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
fi
done
8.4 Reboot:
reboot
### Step 9: Final Cleanup
9.1 Wait for the system to boot normally. Login using the account you created. Ensure the system (including networking) works normally.
9.2 Optional: Delete the snapshots of the initial installation:
sudo zfs destroy bpool/BOOT/ubuntu@install
sudo zfs destroy rpool/ROOT/ubuntu@install
9.3 Optional: Disable the root password
sudo usermod -p '*' root
9.4 Optional: Re-enable the graphical boot process:
If you prefer the graphical boot process, you can re-enable it now. If you are using LUKS, it makes the prompt look nicer.
sudo vi /etc/default/grub
Uncomment: GRUB_TIMEOUT_STYLE=hidden
Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT
Comment out: GRUB_TERMINAL=console
Save and quit.
sudo update-grub
**Note:** Ignore errors from `osprober`, if present.
9.5 Optional: For LUKS installs only, backup the LUKS header:
sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
--header-backup-file luks1-header.dat
Store that backup somewhere safe (e.g. cloud storage). It is protected by your LUKS passphrase, but you may wish to use additional encryption.
**Hint:** If you created a mirror or raidz topology, repeat this for each LUKS volume (`luks2`, etc.).
## Troubleshooting
### Rescuing using a Live CD
Go through [Step 1: Prepare The Install Environment](#step-1-prepare-the-install-environment).
For LUKS, first unlock the disk(s):
cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1
Repeat for additional disks, if this is a mirror or raidz topology.
Mount everything correctly:
zpool export -a
zpool import -N -R /mnt rpool
zpool import -N -R /mnt bpool
zfs mount rpool/ROOT/ubuntu
zfs mount -a
If needed, you can chroot into your installed environment:
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
chroot /mnt /bin/bash --login
mount /boot
mount -a
Do whatever you need to do to fix your system.
When done, cleanup:
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a
reboot
### MPT2SAS
Most problem reports for this tutorial involve `mpt2sas` hardware that does slow asynchronous drive initialization, like some IBM M1015 or OEM-branded cards that have been flashed to the reference LSI firmware.
The basic problem is that disks on these controllers are not visible to the Linux kernel until after the regular system is started, and ZoL does not hotplug pool members. See https://github.com/zfsonlinux/zfs/issues/330.
Most LSI cards are perfectly compatible with ZoL. If your card has this glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in /etc/default/zfs. The system will wait X seconds for all drives to appear before importing the pool.
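As a sketch (the 15-second value is only an example; tune it to your hardware):
vi /etc/default/zfs
Set: ZFS_INITRD_PRE_MOUNTROOT_SLEEP='15'
Then refresh the initrd so the new setting is picked up:
update-initramfs -u -k all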
### Areca
Systems that require the `arcsas` blob driver should add it to the `/etc/initramfs-tools/modules` file and run `update-initramfs -u -k all`.
Upgrade or downgrade the Areca driver if something like `RIP: 0010:[<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20` appears anywhere in kernel log. ZoL is unstable on systems that emit this error message.
### VMware
* Set `disk.EnableUUID = "TRUE"` in the vmx file or vsphere configuration. Doing this ensures that `/dev/disk` aliases are created in the guest.
### QEMU/KVM/XEN
Set a unique serial number on each virtual disk using libvirt or qemu (e.g. `-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890`).
To be able to use UEFI in guests (instead of only BIOS booting), run this on the host:
sudo apt install ovmf
sudo vi /etc/libvirt/qemu.conf
Uncomment these lines:
nvram = [
"/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
"/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd"
]
sudo service libvirt-bin restart
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,9 +1,3 @@
ZFS packages are [provided by the distribution][ubuntu-wiki].
This page was moved to: https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/index.html
If you want to use ZFS as your root filesystem, see these instructions:
* [[Ubuntu 18.04 Root on ZFS]]
For troubleshooting existing installations, see:
* 16.04: [[Ubuntu 16.04 Root on ZFS]] <!-- 2021-04 -->
[ubuntu-wiki]: https://wiki.ubuntu.com/Kernel/Reference/ZFS
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,7 +1 @@
# Accept a PR
After a PR is generated, it is available to be commented upon by project members. They may request additional changes; please work with them.
In addition, project members may accept PRs; this is not an automatic process. By convention, PRs aren't accepted for at least a day, to allow all members a chance to comment.
After a PR has been accepted, it is available to be merged.
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +1 @@
# Close a PR
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,13 +1 @@
# Commit Often
When writing complex code, it is strongly suggested that developers save their changes, and commit those changes to their local repository, on a frequent basis. In general, this means every hour or two, or when a specific milestone is hit in the development. This allows you to easily *checkpoint* your work.
Details of this process can be found in the [Commit the changes][W-commit] page.
In addition, it is suggested that the changes be pushed to your forked Github repository with the `git push` command at least every day, as a backup. Changes should also be pushed prior to running a test, in case your system crashes. This project works with kernel software. A crash while testing development software could easily cause loss of data.
For developers who want to keep their development branches clean, it might be useful to [*squash*][W-squash] commits from time to time, even before you're ready to [create a PR][W-create-PR].
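As a sketch (the commit message and branch name are illustrative), a typical checkpoint-and-backup cycle looks like:
```
$ git commit -as -m "WIP: checkpoint the new feature"
$ git push origin (topic-branch-name)
```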
[W-commit]: https://github.com/zfsonlinux/zfs/wiki/Workflow-Commit
[W-squash]: https://github.com/zfsonlinux/zfs/wiki/Workflow-Squash
[W-create-PR]: https://github.com/zfsonlinux/zfs/wiki/Workflow-Create-PR
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,32 +1 @@
# Commit the Changes
In order for your changes to be merged into the ZFS on Linux project, you must first send the changes made in your *topic* branch to your *local* repository. This can be done with the `git commit -sa` command. If there are any new files, they will be reported as *untracked* and will not be included in the commit. To add newly created files to the *local* repository, use the `git add (file-name) ...` command.
The `-s` option adds a *signed off by* line to the commit. This *signed off by* line is required for the ZFS on Linux project. It performs the following functions:
* It is an acceptance of the [License Terms][license] of the project.
* It is the developer's certification that they have the right to submit the patch for inclusion into the code base.
* It indicates agreement to the [Developer's Certificate of Origin][COA].
The `-a` option causes all modified files in the current branch to be *staged* prior to performing the commit. A list of the modified files in the *local* branch can be created by the use of the `git status` command. If there are files that have been modified that shouldn't be part of the commit, they can either be rolled back in the current branch, or the files can be manually staged with the `git add (file-name) ...` command, and the `git commit -s` command can be run without the `-a` option.
When you run the `git commit` command, an editor will appear to allow you to enter the commit messages. The following requirements apply to a commit message:
* The first line is a title for the commit, and must be no longer than 50 characters.
* The second line should be blank, separating the title of the commit message from the body of the commit message.
* There may be one or more lines in the commit message describing the reason for the changes (the body of the commit message). These lines must be no longer than 72 characters, and may contain blank lines.
* If the commit closes an Issue, there should be a line in the body with the string `Closes`, followed by the issue number. If multiple issues are closed, multiple lines should be used.
* After the body of the commit message, there should be a blank line. This separates the body from the *signed off by* line.
* The *signed off by* line should have been created by the `git commit -s` command. If not, the line has the following format:
* The string "Signed-off-by:"
* The name of the developer. Please do not use pseudonyms or make anonymous contributions.
* The email address of the developer, enclosed by angle brackets ("<>").
* An example of this is `Signed-off-by: Random Developer <random@developer.example.org>`
* If the commit changes only documentation, the line `Requires-builders: style` may be included in the body. This will cause only the *style* testing to be run. This can save a significant amount of time when Github runs the automated testing. For information on other testing options, please see the [Buildbot options][buildbot-options] page.
For more information about writing commit messages, please visit [How to Write a Git Commit Message][writing-commit-message].
After the changes have been committed to your *local* repository, they should be pushed to your *forked* repository. This is done with the `git push` command.
[license]: https://github.com/zfsonlinux/zfs/blob/master/COPYRIGHT
[COA]: https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin
[buildbot-options]: https://github.com/zfsonlinux/zfs/wiki/Buildbot-Options
[writing-commit-message]: https://chris.beams.io/posts/git-commit/
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +1 @@
# Fix Conflicts
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,22 +1 @@
# Create a Branch
With small projects, it's possible to develop code as commits directly on the *master* branch. In a project the size of ZFS on Linux, that sort of development would create havoc and make it difficult to open a PR or rebase the code. For this reason, development in the ZFS on Linux project is done on *topic* branches.
The following commands will perform the required functions:
```
$ cd zfs
$ git fetch upstream master
$ git checkout master
$ git merge upstream/master
$ git branch (topic-branch-name)
$ git checkout (topic-branch-name)
```
1. Navigate to your *local* repository.
1. Fetch the updates from the *upstream* repository.
1. Set the current branch to *master*.
1. Merge the fetched updates into the *local* repository.
1. Create a new *topic* branch on the updated *master* branch. The name of the branch should be either the name of the feature (preferred for development of features) or an indication of the issue being worked on (preferred for bug fixes).
1. Set the current branch to the newly created *topic* branch.
**Pro Tip**: The `git checkout -b (topic-branch-name)` command can be used to create and checkout a new branch with one command.
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,14 +1 @@
# Create a Github Account
This page goes over how to create a Github account. There are no special settings needed to use your Github account on the [ZFS on Linux Project][zol].
Github did an excellent job of documenting how to create an account. The following link provides everything you need to know to get your Github account up and running.
https://help.github.com/articles/signing-up-for-a-new-github-account/
In addition, the following articles might be useful:
* https://help.github.com/articles/keeping-your-account-and-data-secure/
* https://help.github.com/articles/securing-your-account-with-two-factor-authentication-2fa/
* https://help.github.com/articles/adding-a-fallback-authentication-method-with-recover-accounts-elsewhere/
[zol]: https://github.com/zfsonlinux
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +1 @@
# Create a New Test
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,5 +1 @@
# Delete a Branch
When a commit has been accepted and merged into the main ZFS repository, the developer's topic branch should be deleted. Deleting the branch is also appropriate if the developer abandons the change, and may be appropriate if the direction of the work changes.
To delete a topic branch, navigate to the base directory of your local Git repository and use the `git branch -d (branch-name)` command, where the branch name is the name of the topic branch you created earlier.
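For example, with a hypothetical topic branch named `fix-typo-1234`, the local branch and (if it was pushed) the copy in your *forked* repository can be removed with:
```
$ git branch -d fix-typo-1234
$ git push origin --delete fix-typo-1234
```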
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +1 @@
# Generate a PR
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,31 +1 @@
<!--- When this page is updated, please also check the 'Get-the-Source-Code' page -->
# Get the Source Code
This document goes over how a developer can get the ZFS source code for the purpose of making changes to it. For other purposes, please see the [Get the Source Code][get-source] page.
The Git *master* branch contains the latest version of the software, including changes that weren't included in the released tarball. This is the preferred source code location and procedure for ZFS development. If you would like to do development work for the [ZFS on Linux Project][zol], you can fork the Github repositories and prepare the source by using the following process.
1. Go to the [ZFS on Linux Project][zol] and fork both the ZFS and SPL repositories. This will create two new repositories (your *forked* repositories) under your account. Detailed instructions can be found at https://help.github.com/articles/fork-a-repo/.
1. Clone both of these repositories onto your development system. This will create your *local* repositories. As an example, if your Github account is *newzfsdeveloper*, the commands to clone the repositories would be:
```
$ mkdir zfs-on-linux
$ cd zfs-on-linux
$ git clone https://github.com/newzfsdeveloper/spl.git
$ git clone https://github.com/newzfsdeveloper/zfs.git
```
3. Enter the following commands to link your clones to the *upstream* repositories and prepare the source to be compiled:
```
$ cd spl
$ git remote add upstream https://github.com/zfsonlinux/spl.git
$ ./autogen.sh
$ cd ../zfs
$ git remote add upstream https://github.com/zfsonlinux/zfs.git
$ ./autogen.sh
$ cd ..
```
The `./autogen.sh` script generates the build files. If the build system is changed by any developer, this script needs to be run again in the affected repository.
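As a rough sketch of what typically comes next (the exact configure flags depend on your setup and may change; the Building ZFS wiki page is the authoritative reference, and the `--with-spl` path below is an assumption for an in-tree SPL build):
```
$ cd spl
$ ./configure --enable-debug
$ make -j$(nproc)
$ cd ../zfs
$ ./configure --enable-debug --with-spl=$(pwd)/../spl
$ make -j$(nproc)
```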
[zol]: https://github.com/zfsonlinux
[get-source]: https://github.com/zfsonlinux/zfs/wiki/Get-the-Source-Code
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,37 +1 @@
# Install Git
To work with the ZFS software on Github, it's necessary to install the Git software on your computer and set it up. This page covers that process for some common Linux operating systems. Other Linux operating systems should be similar.
## Install the Software Package
The first step is to actually install the Git software package. This package can be found in the repositories used by most Linux distributions. If your distribution isn't listed here, or you'd like to install from source, please have a look in the [official Git documentation][git-install-linux].
### Red Hat and CentOS
```
# yum install git
```
### Fedora
```
$ sudo dnf install git
```
### Debian and Ubuntu
```
$ sudo apt install git
```
## Configuring Git
Your user name and email address must be set within Git before you can make commits to the ZFS project. You should also set your preferred text editor.
```
$ git config --global user.name "John Doe"
$ git config --global user.email johndoe@example.com
$ git config --global core.editor emacs
```
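To confirm the settings took effect, the global configuration can be listed:
```
$ git config --global --list
```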
[git-install-linux]: https://git-scm.com/download/linux
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +1 @@
# Adding Large Features
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,5 +1 @@
# Merge a PR
Once all the feedback has been addressed, the PR will be merged into the *master* branch by a member with write permission (most members don't have this permission).
After the PR has been merged, it is eligible to be added to the *release* branch.
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,18 +1 @@
# Rebase the Update
Updates to the ZFS on Linux project should always be based on the current *master* branch. This makes them easier to merge into the repository.
There are two steps in the rebase process. The first step is to update the *local master* branch from the *upstream master* repository. This can be done by entering the following commands:
```
$ git fetch upstream master
$ git checkout master
$ git merge upstream/master
```
The second step is to perform the actual rebase of the updates. This is done by entering the command `git rebase upstream/master`. If there are any conflicts between the updates in your *local* branch and the updates in the *upstream master* branch, you will be informed of them, and allowed to correct them (see the [Conflicts][W-conflicts] page).
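A minimal sketch of this step (the topic branch name is hypothetical):
```
$ git checkout my-topic-branch
$ git rebase upstream/master
$ git add (file-name) ...
$ git rebase --continue
```
The last two commands are only needed if a conflict is reported: edit the affected files, stage them with `git add`, and continue the rebase.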
This would also be a good time to [*squash*][W-squash] your commits.
[W-conflicts]: https://github.com/zfsonlinux/zfs/wiki/Workflow-Conflicts
[W-squash]: https://github.com/zfsonlinux/zfs/wiki/Workflow-Squash
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +1 @@
# Squash the Commits
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,59 +1 @@
# Testing Changes to ZFS
The code in the ZFS on Linux project is quite complex. A minor error in a change could easily introduce new bugs into the software, causing unforeseeable problems. In an attempt to avoid this, the ZTS (ZFS Test Suite) was developed. This test suite is run against multiple architectures and distributions by the Github system when a PR (Pull Request) is submitted.
A subset of the full test suite can be run by the developer to perform a preliminary verification of the changes in their *local* repository.
## Style Testing
The first part of the testing is to verify that the software meets the project's style guidelines. To verify that the code meets those guidelines, run `make checkstyle` from the *local* repository.
## Basic Functionality Testing
The second part of the testing is to verify basic functionality. This is to ensure that the changes made don't break previous functionality.
There are a few helper scripts provided in the top-level scripts directory designed to aid developers working with in-tree builds.
* **zfs-helpers.sh:** Certain functionality (e.g. `/dev/zvol/`) depends on the ZFS provided udev helper scripts being installed on the system. This script can be used to create symlinks on the system from the installation location to the in-tree helpers. These links must be in place to successfully run the ZFS Test Suite. The `-i` and `-r` options can be used to install and remove the symlinks.
```
$ sudo ./scripts/zfs-helpers.sh -i
```
* **zfs.sh:** The freshly built kernel modules from the *local* repository can be loaded using `zfs.sh`. This script will load those modules, **even if there are ZFS modules loaded** from another location, which could cause long-term problems if any of the non-testing file-systems on the system use ZFS.
This script can later be used to unload the kernel modules with the `-u` option.
```
$ sudo ./scripts/zfs.sh
```
* **zfs-tests.sh:** A wrapper which can be used to launch the ZFS Test Suite. Three loopback devices are created on top of sparse files located in `/var/tmp/` and used for the regression test. Detailed directions for running the ZTS can be found in the [ZTS Readme][zts-readme] file.
**WARNING**: This script should **only** be run on a development system. It makes configuration changes to the system in order to run the tests, and it *tries* to remove those changes after completion, but the removal could fail, and dynamic changes of this nature are usually undesirable on a production system. For more information on the changes made, please see the [ZTS Readme][zts-readme] file.
```
$ sudo ./scripts/zfs-tests.sh -vx
```
**Tip:** The **delegate** tests will be skipped unless group read permission is set on the zfs directory and its parents.
* **zloop.sh:** A wrapper to run ztest repeatedly with randomized arguments. The ztest command is a user space stress test designed to detect correctness issues by concurrently running a random set of test cases. If a crash is encountered, the ztest logs, any associated vdev files, and core file (if one exists) are collected and moved to the output directory for analysis.
If there are any failures in this test, please see the [zloop debugging][W-zloop] page.
```
$ sudo ./scripts/zloop.sh
```
## Change Testing
Finally, it's necessary to verify that the changes made actually do what they were intended to do. The extent of the testing would depend on the complexity of the changes.
After the changes are tested, if the testing can be automated for addition to ZTS, a [new test][W-create-test] should be created. This test should be part of the PR that resolves the issue or adds the feature. If the feature is split into multiple PRs, some testing should be included in the first, with additions to the test as required.
It should be noted that if the change adds too many lines of code that don't get tested by ZTS, the change will not pass testing.
[zts-readme]: https://github.com/zfsonlinux/zfs/tree/master/tests
[W-zloop]: https://github.com/zfsonlinux/zfs/wiki/Workflow-Zloop-Debugging
[W-create-test]: https://github.com/zfsonlinux/zfs/wiki/Workflow-Create-Test
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +1 @@
# Update a PR
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +1 @@
# Debugging *Zloop* Failures
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,98 +1,3 @@
### ZFS Transaction Delay
This page was moved to: https://openzfs.github.io/openzfs-docs/Performance%20and%20tuning/ZFS%20Transaction%20Delay.html
ZFS write operations are delayed when the
backend storage isn't able to accommodate the rate of incoming writes.
This delay process is known as the ZFS write throttle.
If there is already a write transaction waiting, the delay is relative to
when that transaction will finish waiting. Thus the calculated delay time
is independent of the number of threads concurrently executing
transactions.
If there is only one waiter, the delay is relative to when the transaction
started, rather than the current time. This credits the transaction for
"time already served." For example, if a write transaction requires reading
indirect blocks first, then the delay is counted at the start of the
transaction, just prior to the indirect block reads.
The minimum time for a transaction to take is calculated as:
```
min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
min_time is then capped at 100 milliseconds
```
The delay has two degrees of freedom that can be adjusted via tunables:
1. The percentage of dirty data at which we start to delay is defined by
zfs_delay_min_dirty_percent. This is typically at or above
zfs_vdev_async_write_active_max_dirty_percent so delays occur
after writing at full speed has failed to keep up with the incoming write
rate.
2. The scale of the curve is defined by zfs_delay_scale. Roughly speaking,
this variable determines the amount of delay at the midpoint of the curve.
```
delay
10ms +-------------------------------------------------------------*+
| *|
9ms + *+
| *|
8ms + *+
| * |
7ms + * +
| * |
6ms + * +
| * |
5ms + * +
| * |
4ms + * +
| * |
3ms + * +
| * |
2ms + (midpoint) * +
| | ** |
1ms + v *** +
| zfs_delay_scale ----------> ******** |
0 +-------------------------------------*********----------------+
0% <- zfs_dirty_data_max -> 100%
```
Note that since the delay is added to the outstanding time remaining on the
most recent transaction, the delay is effectively the inverse of IOPS.
Here the midpoint of 500 microseconds translates to 2000 IOPS.
The shape of the curve was chosen such that small changes in the amount of
accumulated dirty data in the first 3/4 of the curve yield relatively small
differences in the amount of delay.
The effects can be easier to understand when the amount of delay is
represented on a log scale:
```
delay
100ms +-------------------------------------------------------------++
+ +
| |
+ *+
10ms + *+
+ ** +
| (midpoint) ** |
+ | ** +
1ms + v **** +
+ zfs_delay_scale ----------> ***** +
| **** |
+ **** +
100us + ** +
+ * +
| * |
+ * +
10us + * +
+ +
| |
+ +
+--------------------------------------------------------------+
0% <- zfs_dirty_data_max -> 100%
```
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly. The goal of a properly tuned system
should be to keep the amount of dirty data out of that range by first
ensuring that the appropriate limits are set for the I/O scheduler to reach
optimal throughput on the backend storage, and then by changing the value
of zfs_delay_scale to increase the steepness of the curve.
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

File diff suppressed because it is too large

@ -1,74 +1,3 @@
# ZFS I/O (ZIO) Scheduler
ZFS issues I/O operations to leaf vdevs (usually devices) to satisfy and
complete I/Os. The ZIO scheduler determines when and in what order those
operations are issued. Operations are grouped into five I/O classes,
prioritized in the following order:
This page was moved to: https://openzfs.github.io/openzfs-docs/Performance%20and%20tuning/ZIO%20Scheduler.html
| Priority | I/O Class | Description
|---|---|---
| highest | sync read | most reads
| | sync write | as defined by application or via 'zfs' 'sync' property
| | async read | prefetch reads
| | async write | most writes
| lowest | scrub read | scan read: includes both scrub and resilver
Each queue defines the minimum and maximum number of concurrent operations
issued to the device. In addition, the device has an aggregate maximum,
zfs_vdev_max_active. Note that the sum of the per-queue minimums
must not exceed the aggregate maximum. If the sum of the per-queue
maximums exceeds the aggregate maximum, then the number of active I/Os
may reach zfs_vdev_max_active, in which case no further I/Os are issued
regardless of whether all per-queue minimums have been met.
| I/O Class | Min Active Parameter | Max Active Parameter
|---|---|---
| sync read | zfs_vdev_sync_read_min_active | zfs_vdev_sync_read_max_active
| sync write | zfs_vdev_sync_write_min_active | zfs_vdev_sync_write_max_active
| async read | zfs_vdev_async_read_min_active | zfs_vdev_async_read_max_active
| async write | zfs_vdev_async_write_min_active | zfs_vdev_async_write_max_active
| scrub read | zfs_vdev_scrub_min_active | zfs_vdev_scrub_max_active
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers. Further, physical
devices typically have a limit at which more concurrent operations have no
effect on throughput or can actually cause performance to decrease.
The ZIO scheduler selects the next operation to issue by first looking for an
I/O class whose minimum has not been satisfied. Once all are satisfied and
the aggregate maximum has not been hit, the scheduler looks for classes
whose maximum has not been satisfied. Iteration through the I/O classes is
done in the order specified above. No further operations are issued if the
aggregate maximum number of concurrent operations has been hit or if there
are no operations queued for an I/O class that has not hit its maximum.
Every time an I/O is queued or an operation completes, the I/O scheduler
looks for new operations to issue.
In general, smaller max_active values will lead to lower latency of synchronous
operations. Larger max_active values may lead to higher overall throughput,
depending on underlying storage and the I/O mix.
The ratio of the queues' max_actives determines the balance of performance
between reads, writes, and scrubs. For example, when there is contention,
increasing zfs_vdev_scrub_max_active will cause the scrub or resilver to
complete more quickly, but will cause reads and writes to have higher latency and
lower throughput.
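These tunables are exposed as ZFS module parameters and can be inspected or adjusted at runtime through sysfs; for example (the value 3 is purely illustrative):
```
$ cat /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
$ echo 3 | sudo tee /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
```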
All I/O classes have a fixed maximum number of outstanding operations
except for the async write class. Asynchronous writes represent the data
that is committed to stable storage during the syncing stage for
transaction groups (txgs). Transaction groups enter the syncing state
periodically so the number of queued async writes quickly bursts up
and then reduces to zero. The zfs_txg_timeout tunable (default=5 seconds)
sets the target interval for txg sync. Thus a burst of async writes every
5 seconds is a normal ZFS I/O pattern.
Rather than servicing I/Os as quickly as possible, the ZIO scheduler changes
the maximum number of active async write I/Os according to the amount of
dirty data in the pool. Since both throughput and latency typically increase
with the number of concurrent operations issued to physical devices, reducing
the burstiness in the number of concurrent operations also stabilizes the
response time of operations from other queues. This is particularly important
for the sync read and write queues, where the periodic async write bursts of
the txg sync can lead to device-level contention. In broad strokes, the ZIO
scheduler issues more concurrent operations from the async write queue as
there's more dirty data in the pool.
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1 +1 @@
[[Home]] / [[Project and Community]] / [[Developer Resources]] / [[License]] [![Creative Commons License](https://i.creativecommons.org/l/by-sa/3.0/80x15.png)](http://creativecommons.org/licenses/by-sa/3.0/)
[OpenZFS documentation](https://openzfs.github.io/openzfs-docs/)

@ -1,50 +0,0 @@
* [[Home]]
* [[Getting Started]]
* [ArchLinux][arch]
* [[Debian]]
* [[Fedora]]
* [FreeBSD][freebsd]
* [Gentoo][gentoo]
* [openSUSE][opensuse]
* [[RHEL and CentOS]]
* [[Ubuntu]]
* [[Project and Community]]
* [[Admin Documentation]]
* [[FAQ]]
* [[Mailing Lists]]
* [Releases][releases]
* [[Signing Keys]]
* [Issue Tracker][issues]
* [Roadmap][roadmap]
* [[Developer Resources]]
* [[Custom Packages]]
* [[Building ZFS]]
* [Buildbot Status][buildbot-status]
* [Buildbot Issue Tracking][known-zts-failures]
* [Buildbot Options][control-buildbot]
* [OpenZFS Tracking][openzfs-tracking]
* [[OpenZFS Patches]]
* [[OpenZFS Exceptions]]
* [OpenZFS Documentation][openzfs-devel]
* [[Git and GitHub for beginners]]
* Performance and Tuning
* [[ZFS on Linux Module Parameters]]
* [ZFS Transaction Delay and Write Throttle][ZFS-Transaction-Delay]
* [[ZIO Scheduler]]
* [[Checksums]]
* [Asynchronous Writes][Async-Write]
[arch]: https://wiki.archlinux.org/index.php/ZFS
[gentoo]: https://wiki.gentoo.org/wiki/ZFS
[freebsd]: https://zfsonfreebsd.github.io/ZoF/
[opensuse]: https://software.opensuse.org/package/zfs
[releases]: https://github.com/zfsonlinux/zfs/releases
[issues]: https://github.com/zfsonlinux/zfs/issues
[roadmap]: https://github.com/zfsonlinux/zfs/milestones
[openzfs-devel]: http://open-zfs.org/wiki/Developer_resources
[openzfs-tracking]: http://build.zfsonlinux.org/openzfs-tracking.html
[buildbot-status]: http://build.zfsonlinux.org/tgrid?length=100&branch=master&category=Platforms&rev_order=desc
[control-buildbot]: https://github.com/zfsonlinux/zfs/wiki/Buildbot-Options
[known-zts-failures]: http://build.zfsonlinux.org/known-issues.html
[ZFS-Transaction-Delay]: https://github.com/zfsonlinux/zfs/wiki/ZFS-Transaction-Delay
[Async-Write]: https://github.com/zfsonlinux/zfs/wiki/Async-Write

@ -1,289 +1,3 @@
# Introduction
This page was moved to: https://openzfs.github.io/openzfs-docs/Basics%20concepts/dRAID%20Howto.html
## raidz vs draid
ZFS users are most likely very familiar with raidz already, so a comparison with draid would help. The illustrations below are simplified, but sufficient for the purpose of a comparison. For example, 31 drives can be configured as a zpool of 6 raidz1 vdevs and a hot spare:
![raidz1](https://cloud.githubusercontent.com/assets/6722662/23642396/9790e432-02b7-11e7-8198-ae9f17c61d85.png)
As shown above, if drive 0 fails and is replaced by the hot spare, only 5 out of the 30 surviving drives will work to resilver: drives 1-4 read, and drive 30 writes.
The same 31 drives can be configured as 1 draid1 vdev of the same level of redundancy (i.e. single parity, 1/4 parity ratio) and single spare capacity:
![draid1](https://cloud.githubusercontent.com/assets/6722662/23642395/9783ef8e-02b7-11e7-8d7e-31d1053ee4ff.png)
The drives are shuffled in a way that, after drive 0 fails, all 30 surviving drives will work together to restore the lost data/parity:
* All 30 drives read, because unlike the raidz1 configuration shown above, in the draid1 configuration the neighbor drives of the failed drive 0 (i.e. drives in a same data+parity group) are not fixed.
* All 30 drives write, because now there is no dedicated spare drive. Instead, spare blocks come from all drives.
To summarize:
* Normal application IO: draid and raidz are very similar. There's a slight advantage in draid, since there's no dedicated spare drive which is idle when not in use.
* Restore lost data/parity: for raidz, not all surviving drives will work to rebuild, and in addition it's bounded by the write throughput of a single replacement drive. For draid, the rebuild speed will scale with the total number of drives because all surviving drives will work to rebuild.
The dRAID vdev must shuffle its child drives in a way that regardless of which drive has failed, the rebuild IO (both read and write) will distribute evenly among all surviving drives, so the rebuild speed will scale. The exact mechanism used by the dRAID vdev driver is beyond the scope of this simple introduction here. If interested, please refer to the recommended readings in the next section.
## Recommended Reading
Parity declustering (the fancy term for shuffling drives) has been an active research topic, and many papers have been published in this area. The [Permutation Development Data Layout](http://www.cse.scu.edu/~tschwarz/TechReports/hpca.pdf) is a good paper to begin. The dRAID vdev driver uses a shuffling algorithm loosely based on the mechanism described in this paper.
# Using dRAID
First get the code [here](https://github.com/openzfs/zfs/pull/10102), build zfs with _configure --enable-debug_, and install. Then load the zfs kernel module with the following options which help dRAID rebuild performance.
* zfs_vdev_scrub_max_active=10
* zfs_vdev_async_write_min_active=4
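One way to set these (a sketch, assuming the modules are not already loaded) is to pass them when loading the module:
```
# modprobe zfs zfs_vdev_scrub_max_active=10 zfs_vdev_async_write_min_active=4
```
If the module is already loaded, the same values can instead be written to the corresponding files under /sys/module/zfs/parameters/.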
## Create a dRAID vdev
Similar to raidz vdev a dRAID vdev can be created using the `zpool create` command:
```
# zpool create <pool> draid[1,2,3] <vdevs...>
```
Unlike raidz, additional options may be provided as part of the `draid` vdev type to specify an exact dRAID layout. When unspecified, reasonable defaults will be chosen.
```
# zpool create <pool> draid[1,2,3][:<groups>g][:<spares>s][:<data>d][:<iterations>] <vdevs...>
```
* groups - Number of redundancy groups (default: 1 group per 12 vdevs)
* spares - Number of distributed hot spares (default: 1)
* data - Number of data devices per group (default: determined by number of groups)
* iterations - Number of iterations to perform generating a valid dRAID mapping (default 3).
_Notes_:
* The default values are not set in stone and may change.
* For the majority of common configurations we intend to provide pre-computed balanced dRAID mappings.
* When _data_ is specified, then (draid_children - spares) % (parity + data) must equal 0; otherwise the pool creation will fail.
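As an illustrative sketch of this syntax (all device names hypothetical), an 11-drive single-parity layout with two redundancy groups, one distributed spare, and four data drives per group could be created with:
```
# zpool create tank draid1:2g:1s:4d sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk
```
The larger pool shown below uses the same command form with a draid2:4g:2s vdev across 53 drives.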
Now the dRAID vdev is online and ready for IO:
```
pool: tank
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
draid2:4g:2s-0 ONLINE 0 0 0
L0 ONLINE 0 0 0
L1 ONLINE 0 0 0
L2 ONLINE 0 0 0
L3 ONLINE 0 0 0
...
L50 ONLINE 0 0 0
L51 ONLINE 0 0 0
L52 ONLINE 0 0 0
spares
s0-draid2:4g:2s-0 AVAIL
s1-draid2:4g:2s-0 AVAIL
errors: No known data errors
```
There are two logical hot spare vdevs shown above at the bottom:
* The names begin with a `s<id>-` followed by the name of the parent dRAID vdev.
* These hot spares are logical, made from reserved blocks on all the 53 child drives of the dRAID vdev.
* Unlike traditional hot spares, the distributed spare can only replace a drive in its parent dRAID vdev.
The dRAID vdev behaves just like a raidz vdev of the same parity level. You can do IO to/from it, scrub it, fail a child drive and it'd operate in degraded mode.
## Rebuild to distributed spare
When there's a failed/offline child drive, the dRAID vdev supports a completely new mechanism to reconstruct lost data/parity, in addition to the resilver. First of all, resilver is still supported - if a failed drive is replaced by another physical drive, the resilver process is used to reconstruct lost data/parity to the new replacement drive, which is the same as a resilver in a raidz vdev.
But if a child drive is replaced with a distributed spare, a new process called rebuild is used instead of resilver:
```
# zpool offline tank sdo
# zpool replace tank sdo '%draid1-0-s0'
# zpool status
pool: tank
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scan: rebuilt 2.00G in 0h0m5s with 0 errors on Fri Feb 24 20:37:06 2017
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
draid1-0 DEGRADED 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
sdf ONLINE 0 0 0
sdg ONLINE 0 0 0
sdh ONLINE 0 0 0
sdu ONLINE 0 0 0
sdj ONLINE 0 0 0
sdv ONLINE 0 0 0
sdl ONLINE 0 0 0
sdm ONLINE 0 0 0
sdn ONLINE 0 0 0
spare-11 DEGRADED 0 0 0
sdo OFFLINE 0 0 0
%draid1-0-s0 ONLINE 0 0 0
sdp ONLINE 0 0 0
sdq ONLINE 0 0 0
sdr ONLINE 0 0 0
sds ONLINE 0 0 0
sdt ONLINE 0 0 0
spares
%draid1-0-s0 INUSE currently in use
%draid1-0-s1 AVAIL
```
The scan status line of the _zpool status_ output now says _"rebuilt"_ instead of _"resilvered"_, because the lost data/parity was rebuilt to the distributed spare by a brand new process called _"rebuild"_. The main differences from _resilver_ are:
* The rebuild process does not scan the whole block pointer tree. Instead, it only scans the spacemap objects.
* The IO from rebuild is sequential, because it rebuilds metaslabs one by one in sequential order.
* The rebuild process is not limited to block boundaries. For example, if 10 64K blocks are allocated contiguously, then rebuild will fix 640K at one time. So the rebuild process will generate larger IOs than resilver.
* For all the benefits above, there is one price to pay. The rebuild process cannot verify block checksums, since it doesn't have block pointers.
* Moreover, the rebuild process requires support from on-disk format, and **only** works on draid and mirror vdevs. Resilver, on the other hand, works with any vdev (including draid).
Although the rebuild process creates larger IOs, the drives will not necessarily see large IO requests. The block device queue parameter _/sys/block/*/queue/max_sectors_kb_ must be tuned accordingly. However, since the rebuild IO is already sequential, the benefits of enabling larger IO requests might be marginal.
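A sketch of such tuning (the device name and value are hypothetical; check the device's hardware limit first):
```
$ cat /sys/block/sdd/queue/max_hw_sectors_kb
$ echo 1024 | sudo tee /sys/block/sdd/queue/max_sectors_kb
```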
At this point, redundancy has been fully restored without adding any new drive to the pool. If another drive is offlined, the pool is still able to do IO:
```
# zpool offline tank sdj
# zpool status
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scan: rebuilt 2.00G in 0h0m5s with 0 errors on Fri Feb 24 20:37:06 2017
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
draid1-0 DEGRADED 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
sdf ONLINE 0 0 0
sdg ONLINE 0 0 0
sdh ONLINE 0 0 0
sdu ONLINE 0 0 0
sdj OFFLINE 0 0 0
sdv ONLINE 0 0 0
sdl ONLINE 0 0 0
sdm ONLINE 0 0 0
sdn ONLINE 0 0 0
spare-11 DEGRADED 0 0 0
sdo OFFLINE 0 0 0
%draid1-0-s0 ONLINE 0 0 0
sdp ONLINE 0 0 0
sdq ONLINE 0 0 0
sdr ONLINE 0 0 0
sds ONLINE 0 0 0
sdt ONLINE 0 0 0
spares
%draid1-0-s0 INUSE currently in use
%draid1-0-s1 AVAIL
```
As shown above, the _draid1-0_ vdev is still in _DEGRADED_ mode although two child drives have failed and it's only single-parity. Since the _%draid1-0-s1_ is still _AVAIL_, full redundancy can be restored by replacing _sdj_ with it, without adding a new drive to the pool:
```
# zpool replace tank sdj '%draid1-0-s1'
# zpool status
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scan: rebuilt 2.13G in 0h0m5s with 0 errors on Fri Feb 24 23:20:59 2017
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
draid1-0 DEGRADED 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
sdf ONLINE 0 0 0
sdg ONLINE 0 0 0
sdh ONLINE 0 0 0
sdu ONLINE 0 0 0
spare-6 DEGRADED 0 0 0
sdj OFFLINE 0 0 0
%draid1-0-s1 ONLINE 0 0 0
sdv ONLINE 0 0 0
sdl ONLINE 0 0 0
sdm ONLINE 0 0 0
sdn ONLINE 0 0 0
spare-11 DEGRADED 0 0 0
sdo OFFLINE 0 0 0
%draid1-0-s0 ONLINE 0 0 0
sdp ONLINE 0 0 0
sdq ONLINE 0 0 0
sdr ONLINE 0 0 0
sds ONLINE 0 0 0
sdt ONLINE 0 0 0
spares
%draid1-0-s0 INUSE currently in use
%draid1-0-s1 INUSE currently in use
```
Again, full redundancy has been restored without adding any new drive. If another drive fails, the pool will still be able to handle IO, but there would be no more distributed spare to rebuild to (both are in the _INUSE_ state now). At this point, there's no urgency to add a new replacement drive because the pool can survive yet another drive failure.
### Rebuild for mirror vdev
The sequential rebuild process also works for the mirror vdev, when a drive is attached to a mirror or a mirror child vdev is replaced.
By default, rebuild for mirror vdev is turned off. It can be turned on using the zfs module option _spa_rebuild_mirror=1_.
### Rebuild throttling
The rebuild process may delay _zio_ by _spa_vdev_scan_delay_ if the draid vdev has seen any important IO in the recent _spa_vdev_scan_idle_ period. But when a dRAID vdev has lost all redundancy, e.g. a draid2 with 2 faulted child drives, the rebuild process will go full speed by ignoring _spa_vdev_scan_delay_ and _spa_vdev_scan_idle_ altogether because the vdev is now in critical state.
After delaying, the rebuild zio is issued using priority _ZIO_PRIORITY_SCRUB_ for reads and _ZIO_PRIORITY_ASYNC_WRITE_ for writes. Therefore the options that control the queuing of these two IO priorities will affect rebuild _zio_ as well, for example _zfs_vdev_scrub_min_active_, _zfs_vdev_scrub_max_active_, _zfs_vdev_async_write_min_active_, and _zfs_vdev_async_write_max_active_.
## Rebalance
Distributed spare space can be made available again by simply replacing any failed drive with a new drive. This process is called _rebalance_ which is essentially a _resilver_:
```
# zpool replace -f tank sdo sdw
# zpool status
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scan: resilvered 2.21G in 0h0m58s with 0 errors on Fri Feb 24 23:31:45 2017
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
draid1-0 DEGRADED 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
sdf ONLINE 0 0 0
sdg ONLINE 0 0 0
sdh ONLINE 0 0 0
sdu ONLINE 0 0 0
spare-6 DEGRADED 0 0 0
sdj OFFLINE 0 0 0
%draid1-0-s1 ONLINE 0 0 0
sdv ONLINE 0 0 0
sdl ONLINE 0 0 0
sdm ONLINE 0 0 0
sdn ONLINE 0 0 0
sdw ONLINE 0 0 0
sdp ONLINE 0 0 0
sdq ONLINE 0 0 0
sdr ONLINE 0 0 0
sds ONLINE 0 0 0
sdt ONLINE 0 0 0
spares
%draid1-0-s0 AVAIL
%draid1-0-s1 INUSE currently in use
```
Note that the scan status now says _"resilvered"_. Also, the state of _%draid1-0-s0_ has become _AVAIL_ again. Since the resilver process checks block checksums, it makes up for the lack of checksum verification during previous rebuild.
The dRAID1 vdev in this example shuffles three (4 data + 1 parity) redundancy groups onto the 17 drives. For any single drive failure, only about 1/3 of the blocks are affected (and should be resilvered/rebuilt). The rebuild process is able to avoid unnecessary work, but the resilver process by default will not. The rebalance (which is essentially a resilver) can be sped up significantly by setting the module option _zfs_no_resilver_skip_ to 0. This feature is turned off by default because of issue https://github.com/zfsonlinux/zfs/issues/5806.
# Troubleshooting
Please report bugs to [the dRAID PR](https://github.com/zfsonlinux/zfs/pull/10102), as long as the code is not merged upstream.
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)

@ -1,25 +1,4 @@
### Short explanation
The hole_birth feature has/had bugs, the result of which is that, if you do a `zfs send -i` (or `-R`, since it uses `-i`) from an affected dataset, the receiver will not see any checksum or other errors, but the resulting destination snapshot will not match the source.
ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the faulty metadata which causes this issue *on the sender side*.
This page was moved to: https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ%20hole%20birth.html
### FAQ
#### I have a pool with hole_birth enabled, how do I know if I am affected?
It is technically possible to calculate whether you have any affected files, but it requires scraping zdb output for each file in each snapshot in each dataset, which is a combinatoric nightmare. (If you really want it, there is a proof of concept [here](https://github.com/rincebrain/hole_birth_test).)
#### Is there any less painful way to fix this if we have already received an affected snapshot?
No, the data you need was simply not present in the send stream, unfortunately, and cannot feasibly be rewritten in place.
### Long explanation
hole_birth is a feature to speed up ZFS send -i - in particular, ZFS used to not store metadata on when "holes" (sparse regions) in files were created, so every zfs send -i needed to include every hole.
hole_birth, as the name implies, added tracking for the txg (transaction group) when a hole was created, so that zfs send -i could only send holes that had a birth_time between (starting snapshot txg) and (ending snapshot txg), and life was wonderful.
Unfortunately, hole_birth had a number of edge cases where it could "forget" to set the birth_time of holes in some cases, causing it to record the birth_time as 0 (the value used prior to hole_birth, and essentially equivalent to "since file creation").
This meant that, when you did a zfs send -i, since zfs send does not have any knowledge of the surrounding snapshots when sending a given snapshot, it would see the creation txg as 0, conclude "oh, it is 0, I must have already sent this before", and not include it.
This means that, on the receiving side, it does not know those holes should exist, and does not create them. This leads to differences between the source and the destination.
ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring this metadata and always sending holes with birth_time 0, configurable using the tunable known as `ignore_hole_birth` or `send_holes_without_birth_time`. The latter is what OpenZFS standardized on. ZoL version 0.6.5.8 only has the former, but for any ZoL version with `send_holes_without_birth_time`, they point to the same value, so changing either will work.
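For reference, the tunable is an ordinary ZFS module parameter and can be inspected or set at runtime through sysfs (a sketch; on 0.7.x and later the parameter is named `send_holes_without_birth_time` and already defaults to 1):
```
$ cat /sys/module/zfs/parameters/send_holes_without_birth_time
$ echo 1 | sudo tee /sys/module/zfs/parameters/send_holes_without_birth_time
```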
[Go to OpenZFS documentation.](https://openzfs.github.io/openzfs-docs/)