Modernise/fix/rewrite unlinted manpages

zpool-destroy.8: flatten, fix description
zfs-wait.8: flatten, fix description, use list for events
zpool-reguid.8: flatten, fix description
zpool-history.8: flatten, fix description
zpool-export.8: flatten, fix description, remove -f "unmount" reference
  AFAICT no such command exists even in Illumos (as of today, anyway),
  and we definitely don't call it
zpool-labelclear.8: flatten, fix description
zpool-features.5: modernise
spl-module-parameters.5: modernise
zfs-mount-generator.8: rewrite
zfs-module-parameters.5: modernise

Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12169
наб 2021-06-07 21:41:54 +02:00 committed by Brian Behlendorf
parent 2f23f0f940
commit d7e6f293da
11 changed files with 3144 additions and 5530 deletions

spl-module-parameters.5

@@ -1,285 +1,196 @@
.\"
.\" The contents of this file are subject to the terms of the Common Development
.\" and Distribution License (the "License"). You may not use this file except
.\" in compliance with the License. You can obtain a copy of the license at
.\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
.\"
.\" See the License for the specific language governing permissions and
.\" limitations under the License. When distributing Covered Code, include this
.\" CDDL HEADER in each file and include the License file at
.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
.\" own identifying information:
.\" Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" Copyright 2013 Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
.\"
.Dd August 24, 2020
.Dt SPL-MODULE-PARAMETERS 5
.Os
.
.Sh NAME
.Nm spl-module-parameters
.Nd parameters of the SPL kernel module
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Sy spl_kmem_cache_kmem_threads Ns = Ns Sy 4 Pq uint
The number of threads created for the spl_kmem_cache task queue.
This task queue is responsible for allocating new slabs
for use by the kmem caches.
For the majority of systems and workloads only a small number of threads are
required.
.
.It Sy spl_kmem_cache_reclaim Ns = Ns Sy 0 Pq uint
When this is set, it prevents Linux from being able to rapidly reclaim all the
memory held by the kmem caches.
This may be useful in circumstances where it's preferable that Linux
reclaim memory from some other subsystem first.
Setting this will increase the likelihood of out-of-memory events on a
memory-constrained system.
.
.It Sy spl_kmem_cache_obj_per_slab Ns = Ns Sy 8 Pq uint
The preferred number of objects per slab in the cache.
In general, a larger value will increase the cache's memory footprint
while decreasing the time required to perform an allocation.
Conversely, a smaller value will minimize the footprint
and improve cache reclaim time, but individual allocations may take longer.
.
.It Sy spl_kmem_cache_max_size Ns = Ns Sy 32 Po 64-bit Pc or Sy 4 Po 32-bit Pc Pq uint
The maximum size of a kmem cache slab in MiB.
This effectively limits the maximum cache object size to
.Sy spl_kmem_cache_max_size Ns / Ns Sy spl_kmem_cache_obj_per_slab .
.Pp
Caches may not be created with
objects sized larger than this limit.
.
.It Sy spl_kmem_cache_slab_limit Ns = Ns Sy 16384 Pq uint
For small objects the Linux slab allocator should be used to make the most
efficient use of the memory.
However, large objects are not supported by
the Linux slab, and therefore the SPL implementation is preferred.
This value is used to determine the cutoff between a small and large object.
.Pp
Objects of size
.Sy spl_kmem_cache_slab_limit
or smaller will be allocated using the Linux slab allocator,
large objects use the SPL allocator.
A cutoff of 16K was determined to be optimal for architectures using 4K pages.
.
.It Sy spl_kmem_alloc_warn Ns = Ns Sy 32768 Pq uint
As a general rule,
.Fn kmem_alloc
allocations should be small,
preferably just a few pages, since they must be physically contiguous.
Therefore, a rate-limited warning will be printed to the console for any
.Fn kmem_alloc
which exceeds a reasonable threshold.
.Pp
The default warning threshold is set to eight pages but capped at 32K to
accommodate systems using large pages.
This value was selected to be small enough to ensure
the largest allocations are quickly noticed and fixed,
but large enough to avoid logging any warnings when an allocation size is
larger than optimal but not a serious concern.
Since this value is tunable, developers are encouraged to set it lower
when testing so any new largish allocations are quickly caught.
These warnings may be disabled by setting the threshold to zero.
.
.It Sy spl_kmem_alloc_max Ns = Ns Sy KMALLOC_MAX_SIZE Ns / Ns Sy 4 Pq uint
Large
.Fn kmem_alloc
allocations will fail if they exceed
.Sy KMALLOC_MAX_SIZE .
Allocations which are marginally smaller than this limit may succeed but
should still be avoided due to the expense of locating a contiguous range
of free pages.
Therefore, a maximum kmem size with a reasonable safety margin of 4x is set.
.Fn kmem_alloc
allocations larger than this maximum will quickly fail.
.Fn vmem_alloc
allocations less than or equal to this value will use
.Fn kmalloc ,
but shift to
.Fn vmalloc
when exceeding this value.
.
.It Sy spl_kmem_cache_magazine_size Ns = Ns Sy 0 Pq uint
Cache magazines are an optimization designed to minimize the cost of
allocating memory.
They do this by keeping a per-CPU cache of recently
freed objects, which can then be reallocated without taking a lock.
This can improve performance on highly contended caches.
However, because objects in magazines will prevent otherwise empty slabs
from being immediately released, this may not be ideal for low-memory machines.
.Pp
For this reason,
.Sy spl_kmem_cache_magazine_size
can be used to set a maximum magazine size.
When this value is set to 0 the magazine size will
be automatically determined based on the object size.
Otherwise magazines will be limited to 2-256 objects per magazine (i.e.\& per CPU).
Magazines may never be entirely disabled in this implementation.
.
.It Sy spl_hostid Ns = Ns Sy 0 Pq ulong
The system hostid; when set, this can be used to uniquely identify a system.
By default this value is set to zero, which indicates the hostid is disabled.
It can be explicitly enabled by placing a unique non-zero value in
.Pa /etc/hostid .
.
.It Sy spl_hostid_path Ns = Ns Pa /etc/hostid Pq charp
The expected path to locate the system hostid when specified.
This value may be overridden for non-standard configurations.
.
.It Sy spl_panic_halt Ns = Ns Sy 0 Pq uint
Cause a kernel panic on assertion failures.
When not enabled, the thread is halted to facilitate further debugging.
.Pp
Set to a non-zero value to enable.
.
.It Sy spl_taskq_kick Ns = Ns Sy 0 Pq uint
Kick stuck taskqs to spawn threads.
When writing a non-zero value to it, it will scan all the taskqs.
If any of them have a pending task more than 5 seconds old,
it will kick it to spawn more threads.
This can be used if you find a rare
deadlock occurs because one or more taskqs didn't spawn a thread when they should.
.
.It Sy spl_taskq_thread_bind Ns = Ns Sy 0 Pq int
Bind taskq threads to specific CPUs.
When enabled, all taskq threads will be distributed evenly
across the available CPUs.
By default, this behavior is disabled to allow the Linux scheduler
the maximum flexibility to determine where a thread should run.
.
.It Sy spl_taskq_thread_dynamic Ns = Ns Sy 1 Pq int
Allow dynamic taskqs.
When enabled, taskqs which set the
.Sy TASKQ_DYNAMIC
flag will by default create only a single thread.
New threads will be created on demand up to a maximum allowed number
to facilitate the completion of outstanding tasks.
Threads which are no longer needed will be promptly destroyed.
By default this behavior is enabled, but it can be disabled to
aid performance analysis or troubleshooting.
.
.It Sy spl_taskq_thread_priority Ns = Ns Sy 1 Pq int
Allow newly created taskq threads to set a non-default scheduler priority.
When enabled, the priority specified when a taskq is created will be applied
to all threads created by that taskq.
When disabled, all threads will use the default Linux kernel thread priority.
By default, this behavior is enabled.
.
.It Sy spl_taskq_thread_sequential Ns = Ns Sy 4 Pq int
The number of items a taskq worker thread must handle without interruption
before requesting a new worker thread be spawned.
This is used to control
how quickly taskqs ramp up the number of threads processing the queue.
Because Linux thread creation and destruction are relatively inexpensive, a
small default value has been selected.
This means that normally threads will be created aggressively, which is desirable.
Increasing this value will
result in a slower thread creation rate, which may be preferable for some
configurations.
.
.It Sy spl_max_show_tasks Ns = Ns Sy 512 Pq uint
The maximum number of tasks per pending list in each taskq shown in
.Pa /proc/spl/taskq{,-all} .
Write
.Sy 0
to turn off the limit.
The proc file will walk the lists with the lock held,
so reading it could cause a lock-up if the list grows too large
without limiting the output.
"(truncated)" will be shown if the list is larger than the limit.
.
.El
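As a sketch of how such parameters are typically inspected and adjusted on a running system (this uses the standard Linux module-parameter interfaces; the chosen values and the modprobe.d file name are illustrative, not prescribed by this page):

    # Read the current value of a parameter via sysfs:
    cat /sys/module/spl/parameters/spl_hostid
    # Write a runtime-tunable parameter, e.g. kick stuck taskqs as described above:
    echo 1 > /sys/module/spl/parameters/spl_taskq_kick
    # Persist a setting across module reloads (file name illustrative):
    echo 'options spl spl_taskq_thread_dynamic=0' > /etc/modprobe.d/spl.conf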

File diff suppressed because it is too large.

File diff suppressed because it is too large.

zfs-mount-generator.8

@@ -21,232 +21,172 @@
.\" LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
.\" OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
.\" WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
.\"
.Dd May 31, 2021
.Dt ZFS-MOUNT-GENERATOR 8
.Os
.
.Sh NAME
.Nm zfs-mount-generator
.Nd generate systemd mount units for ZFS filesystems
.Sh SYNOPSIS
.Pa @systemdgeneratordir@/zfs-mount-generator
.
.Sh DESCRIPTION
.Nm
is a
.Xr systemd.generator 7
that generates native
.Xr systemd.mount 5
units for configured ZFS datasets.
.
.Ss Properties
.Bl -tag -compact -width "org.openzfs.systemd:required-by=unit[ unit]…"
.It Sy mountpoint Ns =
.No Skipped if Sy legacy No or Sy none .
.
.It Sy canmount Ns =
.No Skipped if Sy off .
.No Skipped if only Sy noauto
datasets exist for a given mountpoint and there's more than one.
.No Datasets with Sy yes No take precedence over ones with Sy noauto No for the same mountpoint.
.No Sets logical Em noauto No flag if Sy noauto .
Encryption roots always generate
.Sy zfs-load-key@ Ns Ar root Ns Sy .service ,
even if
.Sy off .
.
.It Sy atime Ns = , Sy relatime Ns = , Sy devices Ns = , Sy exec Ns = , Sy readonly Ns = , Sy setuid Ns = , Sy nbmand Ns =
Used to generate mount options equivalent to
.Nm zfs Cm mount .
.
.It Sy encroot Ns = , Sy keylocation Ns =
If the dataset is an encryption root, its mount unit will bind to
.Sy zfs-load-key@ Ns Ar root Ns Sy .service ,
with additional dependencies as follows:
.Bl -tag -compact -offset Ds -width "keylocation=https://URL (et al.)"
.It Sy keylocation Ns = Ns Sy prompt
None, uses
.Xr systemd-ask-password 1
.It Sy keylocation Ns = Ns Sy https:// Ns Ar URL Pq et al.\&
.Sy Wants Ns = , Sy After Ns = : Pa network-online.target
.It Sy keylocation Ns = Ns Sy file:// Ns < Ns Ar path Ns >
.Sy RequiresMountsFor Ns = Ns Ar path
.El
.
The service also uses the same
.Sy Wants Ns = ,
.Sy After Ns = ,
.Sy Requires Ns = , No and
.Sy RequiresMountsFor Ns = ,
as the mount unit.
.
.It Sy org.openzfs.systemd:requires Ns = Ns Pa path Ns Oo " " Ns Pa path Oc Ns …
.No Sets Sy Requires Ns = for the mount- and key-loading unit.
.
.It Sy org.openzfs.systemd:requires-mounts-for Ns = Ns Pa path Ns Oo " " Ns Pa path Oc Ns …
.No Sets Sy RequiresMountsFor Ns = for the mount- and key-loading unit.
.
.It Sy org.openzfs.systemd:before Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns …
.No Sets Sy Before Ns = for the mount unit.
.
.It Sy org.openzfs.systemd:after Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns …
.No Sets Sy After Ns = for the mount unit.
.
.It Sy org.openzfs.systemd:wanted-by Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns …
.No Sets logical Em noauto No flag (see below).
.No If not Sy none , No sets Sy WantedBy Ns = for the mount unit.
.It Sy org.openzfs.systemd:required-by Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns …
.No Sets logical Em noauto No flag (see below).
.No If not Sy none , No sets Sy RequiredBy Ns = for the mount unit.
.
.It Sy org.openzfs.systemd:nofail Ns = Ns (unset) Ns | Ns Sy on Ns | Ns Sy off
Waxes or wanes strength of default reverse dependencies of the mount unit; see below.
.
.It Sy org.openzfs.systemd:ignore Ns = Ns Sy on Ns | Ns Sy off
.No Skip if Sy on .
.No Defaults to Sy off .
.El
.
.Ss Unit Ordering And Dependencies
Unless the pool the dataset resides on
is imported at generation time, both units gain
.Sy Wants Ns = Ns Pa zfs-import.target
and
.Sy After Ns = Ns Pa zfs-import.target .
.Pp
Additionally, unless the logical
.Em noauto
flag is set, the mount unit gains a reverse-dependency for
.Pa local-fs.target
of strength
.Bl -tag -compact -offset Ds -width "(unset)"
.It (unset)
.Sy WantedBy Ns = No + Sy Before Ns =
.It Sy on
.Sy WantedBy Ns =
.It Sy off
.Sy RequiredBy Ns = No + Sy Before Ns =
.El
.
.Ss Cache File
Because ZFS pools may not be available very early in the boot process,
information on ZFS mountpoints must be stored separately.
The output of
.Dl Nm zfs Cm list Fl Ho Ar name , Ns Aq every property above in order
for datasets that should be mounted by systemd should be kept at
.Pa @sysconfdir@/zfs/zfs-list.cache/ Ns Ar poolname ,
and, if writeable, will be kept synchronized for the entire pool by the
.Pa history_event-zfs-list-cacher.sh
ZEDLET, if enabled
.Pq see Xr zed 8 .
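As a sketch of how the properties above might be applied (the dataset and unit names are hypothetical):

    # Order the mount after the network and hook it to a custom target:
    zfs set org.openzfs.systemd:after=network-online.target pool/srv
    zfs set org.openzfs.systemd:wanted-by=my-app.target pool/srv
    # Exclude a scratch dataset from unit generation entirely:
    zfs set org.openzfs.systemd:ignore=on pool/scratch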
.Sh ENVIRONMENT
The
.Sy ZFS_DEBUG
environment variable can either be
.Sy 0
(default),
.Sy 1
(print summary accounting information at the end), or at least
.Sy 2
(print accounting information for each subprocess as it finishes).
.
If not present,
.Pa /proc/cmdline
is additionally checked for
.Qq debug ,
in which case the debug level is set to
.Sy 2 .
.
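For instance, a debug run of the generator might look like this (the output directory is illustrative; see the examples below):

    mkdir /tmp/zfs-mount-generator
    ZFS_DEBUG=2 @systemdgeneratordir@/zfs-mount-generator /tmp/zfs-mount-generator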
.Sh EXAMPLES
To begin, enable tracking for the pool:
.Dl # Nm touch Pa @sysconfdir@/zfs/zfs-list.cache/ Ns Ar poolname
Then enable the tracking ZEDLET:
.Dl # Nm ln Fl s Pa @zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh @sysconfdir@/zfs/zed.d
.Dl # Nm systemctl Cm enable Pa zfs-zed.service
.Dl # Nm systemctl Cm restart Pa zfs-zed.service
.Pp
If no history event is in the queue,
inject one to ensure the ZEDLET runs to refresh the cache file
by setting a monitored property somewhere on the pool:
.Dl # Nm zfs Cm set Sy relatime Ns = Ns Sy off Ar poolname/dset
.Dl # Nm zfs Cm inherit Sy relatime Ar poolname/dset
.Pp
To test the generator output:
.Dl $ Nm mkdir Pa /tmp/zfs-mount-generator
.Dl $ Nm @systemdgeneratordir@/zfs-mount-generator Pa /tmp/zfs-mount-generator
.Pp
If the generated units are satisfactory, instruct
.Nm systemd
to re-run all generators:
.Dl # Nm systemctl daemon-reload
.
.Sh SEE ALSO
.Xr systemd.mount 5 ,
.Xr systemd.target 5 ,
.Xr zfs 5 ,
.Xr zfs-events 5 ,
.Xr systemd.generator 7 ,
.Xr systemd.special 7 ,
.Xr zed 8

zfs-wait.8

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -27,25 +26,20 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd May 31, 2021
.Dt ZFS-WAIT 8
.Os
.
.Sh NAME
.Nm zfs-wait
.Nd wait for activity in ZFS filesystem to stop
.Sh SYNOPSIS
.Nm zfs
.Cm wait
.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns …
.Ar filesystem
.
.Sh DESCRIPTION
Waits until all background activity of the given types has ceased in the given
filesystem.
The activity could cease because it has completed or because the filesystem has
@@ -58,13 +52,14 @@ immediately.
These are the possible values for
.Ar activity ,
along with what each one waits for:
.Bl -tag -compact -offset Ds -width "deleteq"
.It Sy deleteq
The filesystem's internal delete queue to empty
.El
.Pp
Note that the internal delete queue does not finish draining until
all large files have had time to be fully destroyed and all open file
handles to unlinked files are closed.
.
.Sh SEE ALSO
.Xr lsof 8
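For example, to block until a filesystem's delete queue has drained (the dataset name is illustrative):

    zfs wait -t deleteq pool/home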

zpool-destroy.8

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -27,29 +26,23 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd May 31, 2021
.Dt ZPOOL-DESTROY 8
.Os
.
.Sh NAME
.Nm zpool-destroy
.Nd destroy ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm destroy
.Op Fl f
.Ar pool
.
.Sh DESCRIPTION
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forcefully unmount all active datasets.
.El
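For example (the pool name is illustrative):

    # Destroy a pool, forcibly unmounting its datasets first:
    zpool destroy -f tank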

zpool-export.8

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -30,24 +29,17 @@
.Dd February 16, 2020
.Dt ZPOOL-EXPORT 8
.Os
.
.Sh NAME
.Nm zpool-export
.Nd export ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm export
.Op Fl f
.Fl a Ns | Ns Ar pool Ns …
.
.Sh DESCRIPTION
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
@@ -69,15 +61,12 @@ the disks.
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, and allow export of pools with active shared spares.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.
.Sh SEE ALSO
.Xr zpool-import 8
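For example (the pool name is illustrative):

    # Export a single pool, forcibly unmounting its datasets:
    zpool export -f tank
    # Export every pool imported on the system:
    zpool export -a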

zpool-history.8

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -30,22 +29,17 @@
.Dd August 9, 2019
.Dt ZPOOL-HISTORY 8
.Os
.
.Sh NAME
.Nm zpool-history
.Nd inspect command history of ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns …
.
.Sh DESCRIPTION
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
@@ -56,7 +50,7 @@ Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.
.Sh SEE ALSO
.Xr zpool-checkpoint 8 ,
.Xr zpool-events 8 ,
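For example (the pool name is illustrative):

    # Show long-format history, including internally logged events:
    zpool history -il tank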

zpool-labelclear.8

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -27,25 +26,20 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd May 31, 2021
.Dt ZPOOL-LABELCLEAR 8
.Os
.
.Sh NAME
.Nm zpool-labelclear
.Nd remove ZFS label information from device
.Sh SYNOPSIS
.Nm zpool
.Cm labelclear
.Op Fl f
.Ar device
.
.Sh DESCRIPTION
Removes ZFS label information from the specified
.Ar device .
If the
@@ -58,7 +52,7 @@ must not be part of an active pool configuration.
.It Fl f
Treat exported or foreign devices as inactive.
.El
.
.Sh SEE ALSO
.Xr zpool-destroy 8 ,
.Xr zpool-detach 8 ,
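For example (the device path is illustrative):

    # Clear stale labels from a disk that belonged to an exported pool:
    zpool labelclear -f /dev/sdb1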

zpool-reguid.8

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -27,27 +26,23 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd May 31, 2021
.Dt ZPOOL-REGUID 8
.Os
.
.Sh NAME
.Nm zpool-reguid
.Nd generate new unique identifier for ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm reguid
.Ar pool
.
.Sh DESCRIPTION
Generates a new unique identifier for the pool.
You must ensure that all devices in this pool are online and healthy before
performing this action.
.
.Sh SEE ALSO
.Xr zpool-export 8 ,
.Xr zpool-import 8
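For example (the pool name is illustrative):

    zpool reguid tank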


@@ -26,7 +26,7 @@ fi
IFS="
"
files="$(find "$@" -type f -name '*[1-9]*')" || exit 1
add_excl="$(awk '
/^.\\" lint-ok:/ {
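With the blanket exclusions removed, the previously skipped pages go through the same lint as every other; a sketch of the kind of check this script drives (file path assumed; the script's exact invocation may differ):

    # mandoc's lint mode reports markup and style problems:
    mandoc -T lint man/man5/spl-module-parameters.5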