Lint most manpages

Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12129
This commit is contained in:
наб 2021-05-27 02:46:40 +02:00 committed by Brian Behlendorf
parent 4a98300feb
commit f84fe3fc87
58 changed files with 2716 additions and 2698 deletions

View File

@@ -1,21 +1,32 @@
.Dd July 5, 2019
.Dt ZVOL_WAIT 1 SMM
.\"
.\" This file and its contents are supplied under the terms of the
.\" Common Development and Distribution License ("CDDL"), version 1.0.
.\" You may only use this file in accordance with the terms of version
.\" 1.0 of the CDDL.
.\"
.\" A full copy of the text of the CDDL should have accompanied this
.\" source. A copy of the CDDL is also available via the Internet at
.\" http://www.illumos.org/license/CDDL.
.\"
.Dd May 27, 2021
.Dt ZVOL_WAIT 1
.Os
.
.Sh NAME
.Nm zvol_wait
.Nd Wait for ZFS volume links in
.Em /dev
to be created.
.Nd wait for ZFS volume links to appear in /dev
.Sh SYNOPSIS
.Nm
.
.Sh DESCRIPTION
When a ZFS pool is imported, ZFS will register each ZFS volume
(zvol) as a disk device with the system. As the disks are registered,
.Xr \fBudev 7\fR
will asynchronously create symlinks under
.Em /dev/zvol
using the zvol's name.
When a ZFS pool is imported, the volumes within it will appear as block devices.
As they're registered,
.Xr udev 7
asynchronously creates symlinks under
.Pa /dev/zvol
using the volumes' names.
.Nm
will wait for all those symlinks to be created before returning.
will wait for all those symlinks to be created before exiting.
.
.Sh SEE ALSO
.Xr \fBudev 7\fR
.Xr udev 7
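For illustration, a typical use from a boot-time script that must not open volume devices before their links exist might look like this (the pool name is hypothetical):

```sh
# Import the pool, then block until udev has created all /dev/zvol links.
zpool import tank
zvol_wait
# Only now are /dev/zvol/tank/<volume> paths safe to open.
```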

View File

@@ -26,7 +26,7 @@
.
.Sh NAME
.Nm mount.zfs
.Nd mount a ZFS filesystem
.Nd mount ZFS filesystem
.Sh SYNOPSIS
.Nm
.Op Fl sfnvh
@@ -44,7 +44,7 @@ to mount filesystem snapshots and
ZFS filesystems, as well as by
.Xr zfs 8
when the
.Ev Em $ZFS_MOUNT_HELPER
.Sy ZFS_MOUNT_HELPER
environment variable is not set.
Users should invoke either
.Xr mount 8

View File

@@ -8,7 +8,6 @@
.\" source. A copy of the CDDL is also available via the Internet at
.\" http://www.illumos.org/license/CDDL.
.\"
.\"
.\" Copyright 2012, Richard Lowe.
.\" Copyright (c) 2012, 2019 by Delphix. All rights reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
@@ -16,27 +15,29 @@
.\" Copyright (c) 2017 Intel Corporation.
.\"
.Dd October 7, 2020
.Dt ZDB 8 SMM
.Dt ZDB 8
.Os
.
.Sh NAME
.Nm zdb
.Nd display zpool debugging and consistency information
.Nd display ZFS storage pool debugging and consistency information
.Sh SYNOPSIS
.Nm
.Op Fl AbcdDFGhikLMPsvXYy
.Op Fl e Oo Fl V Oc Op Fl p Ar path ...
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns …
.Op Fl I Ar inflight I/Os
.Oo Fl o Ar var Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar var Ns = Ns Ar value Oc Ns …
.Op Fl t Ar txg
.Op Fl U Ar cache
.Op Fl x Ar dumpdir
.Op Ar poolname[/dataset | objset ID]
.Op Ar object | range ...
.Op Ar poolname Ns Op / Ns Ar dataset | objset ID
.Op Ar object Ns | Ns Ar range Ns …
.Nm
.Op Fl AdiPv
.Op Fl e Oo Fl V Oc Op Fl p Ar path ...
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns …
.Op Fl U Ar cache
.Ar poolname[/dataset | objset ID] Op Ar object | range ...
.Ar poolname Ns Op Ar / Ns Ar dataset | objset ID
.Op Ar object Ns | Ns Ar range Ns …
.Nm
.Fl C
.Op Fl A
@@ -44,7 +45,7 @@
.Nm
.Fl E
.Op Fl A
.Ar word0 Ns \&: Ns Ar word1 Ns :...: Ns Ar word15
.Ar word0 : Ns Ar word1 Ns :…: Ns Ar word15
.Nm
.Fl l
.Op Fl Aqu
@@ -52,10 +53,10 @@
.Nm
.Fl m
.Op Fl AFLPXY
.Op Fl e Oo Fl V Oc Op Fl p Ar path ...
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns …
.Op Fl t Ar txg
.Op Fl U Ar cache
.Ar poolname Op Ar vdev Op Ar metaslab ...
.Ar poolname Op Ar vdev Oo Ar metaslab Oc Ns …
.Nm
.Fl O
.Ar dataset path
@@ -65,15 +66,16 @@
.Nm
.Fl R
.Op Fl A
.Op Fl e Oo Fl V Oc Op Fl p Ar path ...
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns …
.Op Fl U Ar cache
.Ar poolname vdev Ns \&: Ns Ar offset Ns \&: Ns Ar [<lsize>/]<psize> Ns Op : Ns Ar flags
.Ar poolname vdev : Ns Ar offset : Ns Oo Ar lsize Ns / Oc Ns Ar psize Ns Op : Ns Ar flags
.Nm
.Fl S
.Op Fl AP
.Op Fl e Oo Fl V Oc Op Fl p Ar path ...
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns …
.Op Fl U Ar cache
.Ar poolname
.
.Sh DESCRIPTION
The
.Nm
@@ -99,11 +101,11 @@ or
.Qq Sy @
characters, it is interpreted as a pool name.
The root dataset can be specified as
.Ar pool Ns /
.Pq pool name followed by a slash .
.Qq Ar pool Ns / .
.Pp
When operating on an imported and active pool it is possible, though unlikely,
that zdb may interpret inconsistent pool data and behave erratically.
.
.Sh OPTIONS
Display options:
.Bl -tag -width Ds
@@ -143,27 +145,30 @@ those specific objects or ranges only.
.Pp
An object ID range is specified in terms of a colon-separated tuple of
the form
.Ao start Ac Ns : Ns Ao end Ac Ns Op Ns : Ns Ao flags Ac Ns .
.Ao start Ac : Ns Ao end Ac Ns Op : Ns Ao flags Ac .
The fields
.Ar start
and
.Ar end
are integer object identifiers that denote the upper and lower bounds
of the range. An
of the range.
An
.Ar end
value of -1 specifies a range with no upper bound. The
value of -1 specifies a range with no upper bound.
The
.Ar flags
field optionally specifies a set of flags, described below, that control
which object types are dumped. By default, all object types are dumped. A minus
sign
which object types are dumped.
By default, all object types are dumped.
A minus sign
.Pq -
negates the effect of the flag that follows it and has no effect unless
preceded by the
.Ar A
flag. For example, the range 0:-1:A-d will dump all object types except
for directories.
flag.
For example, the range 0:-1:A-d will dump all object types except for directories.
.Pp
.Bl -tag -compact
.Bl -tag -compact -width Ds
.It Sy A
Dump all objects (this is the default)
.It Sy d
@@ -198,7 +203,7 @@ Display the statistics independently for each deduplication table.
Dump the contents of the deduplication tables describing duplicate blocks.
.It Fl DDDDD
Also dump the contents of the deduplication tables describing unique blocks.
.It Fl E Ar word0 Ns \&: Ns Ar word1 Ns :...: Ns Ar word15
.It Fl E Ar word0 : Ns Ar word1 Ns :…: Ns Ar word15
Decode and display block from an embedded block pointer specified by the
.Ar word
arguments.
@@ -218,18 +223,21 @@ Note, the on disk format of the pool is not reverted to the checkpointed state.
Read the vdev labels and L2ARC header from the specified device.
.Nm Fl l
will return 0 if valid label was found, 1 if error occurred, and 2 if no valid
labels were found. The presence of L2ARC header is indicated by a specific
sequence (L2ARC_DEV_HDR_MAGIC). If there is an accounting error in the size
or the number of L2ARC log blocks
labels were found.
The presence of L2ARC header is indicated by a specific
sequence (L2ARC_DEV_HDR_MAGIC).
If there is an accounting error in the size or the number of L2ARC log blocks
.Nm Fl l
will return 1. Each unique configuration is displayed only
once.
will return 1.
Each unique configuration is displayed only once.
.It Fl ll Ar device
In addition display label space usage stats. If a valid L2ARC header was found
In addition display label space usage stats.
If a valid L2ARC header was found
also display the properties of log blocks used for restoring L2ARC contents
(persistent L2ARC).
.It Fl lll Ar device
Display every configuration, unique or not. If a valid L2ARC header was found
Display every configuration, unique or not.
If a valid L2ARC header was found
also display the properties of log entries in log blocks used for restoring
L2ARC contents (persistent L2ARC).
.Pp
@@ -239,8 +247,8 @@ option is also specified, don't print the labels or the L2ARC header.
.Pp
If the
.Fl u
option is also specified, also display the uberblocks on this device. Specify
multiple times to increase verbosity.
option is also specified, also display the uberblocks on this device.
Specify multiple times to increase verbosity.
.It Fl L
Disable leak detection and the loading of space maps.
By default,
@@ -291,7 +299,7 @@ This option can be combined with
.Fl v
for increasing verbosity.
.It Xo
.Fl R Ar poolname vdev Ns \&: Ns Ar offset Ns \&: Ns Ar [<lsize>/]<psize> Ns Op : Ns Ar flags
.Fl R Ar poolname vdev : Ns Ar offset : Ns Oo Ar lsize Ns / Oc Ns Ar psize Ns Op : Ns Ar flags
.Xc
Read and display a block from the specified device.
By default the block is displayed as a hex dump, but see the description of the
@@ -315,7 +323,8 @@ Print block pointer at hex offset
.It Sy c
Calculate and display checksums
.It Sy d
Decompress the block. Set environment variable
Decompress the block.
Set environment variable
.Nm ZDB_NO_ZLE
to skip zle when guessing.
.It Sy e
@@ -352,7 +361,7 @@ Enable panic recovery, certain errors which would otherwise be fatal are
demoted to warnings.
.It Fl AAA
Do not abort if asserts fail and also enable panic recovery.
.It Fl e Op Fl p Ar path ...
.It Fl e Oo Fl p Ar path Oc Ns …
Operate on an exported pool, not present in
.Pa /etc/zfs/zpool.cache .
The
@@ -382,14 +391,16 @@ The default value is 200.
This option affects the performance of the
.Fl c
option.
.It Fl o Ar var Ns = Ns Ar value ...
.It Fl o Ar var Ns = Ns Ar value …
Set the given global libzpool variable to the provided value.
The value must be an unsigned 32-bit integer.
Currently only little-endian systems are supported to avoid accidentally setting
the high 32 bits of 64-bit variables.
.It Fl P
Print numbers in an unscaled form more amenable to parsing, eg. 1000000 rather
than 1M.
Print numbers in an unscaled form more amenable to parsing, e.g.\&
.Sy 1000000
rather than
.Sy 1M .
.It Fl t Ar transaction
Specify the highest transaction to use when searching for uberblocks.
See also the
@@ -432,51 +443,51 @@ option, with more occurrences enabling more verbosity.
.Pp
If no options are specified, all information about the named pool will be
displayed at default verbosity.
.
.Sh EXAMPLES
.Bl -tag -width Ds
.It Xo
.Sy Example 1
.Sy Example 1 :
Display the configuration of imported pool
.Pa rpool
.Ar rpool
.Xc
.Bd -literal
# zdb -C rpool
.No # Nm zdb Fl C Ar rpool
MOS Configuration:
version: 28
name: 'rpool'
...
.Ed
.It Xo
.Sy Example 2
.Sy Example 2 :
Display basic dataset information about
.Pa rpool
.Ar rpool
.Xc
.Bd -literal
# zdb -d rpool
.No # Nm zdb Fl d Ar rpool
Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
...
.Ed
.It Xo
.Sy Example 3
.Sy Example 3 :
Display basic information about object 0 in
.Pa rpool/export/home
.Ar rpool/export/home
.Xc
.Bd -literal
# zdb -d rpool/export/home 0
.No # Nm zdb Fl d Ar rpool/export/home 0
Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
Object lvl iblk dblk dsize lsize %full type
0 7 16K 16K 15.0K 16K 25.00 DMU dnode
.Ed
.It Xo
.Sy Example 4
.Sy Example 4 :
Display the predicted effect of enabling deduplication on
.Pa rpool
.Ar rpool
.Xc
.Bd -literal
# zdb -S rpool
.No # Nm zdb Fl S Ar rpool
Simulated DDT histogram:
bucket allocated referenced
@@ -485,10 +496,11 @@ refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE
------ ------ ----- ----- ----- ------ ----- ----- -----
1 694K 27.1G 15.0G 15.0G 694K 27.1G 15.0G 15.0G
2 35.0K 1.33G 699M 699M 74.7K 2.79G 1.45G 1.45G
...
dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
.Ed
.El
.
.Sh SEE ALSO
.Xr zfs 8 ,
.Xr zpool 8
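The object-range syntax described under the display options can be exercised the same way as the examples above; a hedged sketch (pool name hypothetical):

```sh
# Dump all object types except directories, per the
# <start>:<end>[:<flags>] range syntax: 0:-1 covers every object,
# and A-d negates the directory type.
zdb -dd rpool 0:-1:A-d
```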

View File

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@@ -30,67 +29,69 @@
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd June 30, 2019
.Dd May 27, 2021
.Dt ZFS-ALLOW 8
.Os
.
.Sh NAME
.Nm zfs-allow
.Nd Delegates ZFS administration permission for the file systems to non-privileged users.
.Nd delegate ZFS administration permissions to unprivileged users
.Sh SYNOPSIS
.Nm zfs
.Cm allow
.Op Fl dglu
.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns ...
.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ...
.Ar setname Oc Ns …
.Ar filesystem Ns | Ns Ar volume
.Nm zfs
.Cm allow
.Op Fl dl
.Fl e Ns | Ns Sy everyone
.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ...
.Ar setname Oc Ns …
.Ar filesystem Ns | Ns Ar volume
.Nm zfs
.Cm allow
.Fl c
.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ...
.Ar setname Oc Ns …
.Ar filesystem Ns | Ns Ar volume
.Nm zfs
.Cm allow
.Fl s No @ Ns Ar setname
.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ...
.Ar setname Oc Ns …
.Ar filesystem Ns | Ns Ar volume
.Nm zfs
.Cm unallow
.Op Fl dglru
.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns ...
.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ... Oc
.Ar setname Oc Ns … Oc
.Ar filesystem Ns | Ns Ar volume
.Nm zfs
.Cm unallow
.Op Fl dlr
.Fl e Ns | Ns Sy everyone
.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ... Oc
.Ar setname Oc Ns … Oc
.Ar filesystem Ns | Ns Ar volume
.Nm zfs
.Cm unallow
.Op Fl r
.Fl c
.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ... Oc
.Ar setname Oc Ns … Oc
.Ar filesystem Ns | Ns Ar volume
.Nm zfs
.Cm unallow
.Op Fl r
.Fl s No @ Ns Ar setname
.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ... Oc
.Ar setname Oc Ns … Oc
.Ar filesystem Ns | Ns Ar volume
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
@@ -119,9 +120,9 @@ command restricts modifications of the global namespace to the root user.
.Nm zfs
.Cm allow
.Op Fl dglu
.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns ...
.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ...
.Ar setname Oc Ns …
.Ar filesystem Ns | Ns Ar volume
.Xc
.It Xo
@@ -130,7 +131,7 @@ command restricts modifications of the global namespace to the root user.
.Op Fl dl
.Fl e Ns | Ns Sy everyone
.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ...
.Ar setname Oc Ns …
.Ar filesystem Ns | Ns Ar volume
.Xc
Delegates ZFS administration permission for the file systems to non-privileged
@@ -140,15 +141,15 @@ users.
Allow only for the descendent file systems.
.It Fl e Ns | Ns Sy everyone
Specifies that the permissions be delegated to everyone.
.It Fl g Ar group Ns Oo , Ns Ar group Oc Ns ...
.It Fl g Ar group Ns Oo , Ns Ar group Oc Ns …
Explicitly specify that permissions are delegated to the group.
.It Fl l
Allow
.Qq locally
only for the specified file system.
.It Fl u Ar user Ns Oo , Ns Ar user Oc Ns ...
.It Fl u Ar user Ns Oo , Ns Ar user Oc Ns …
Explicitly specify that permissions are delegated to the user.
.It Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns ...
.It Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
Specifies to whom the permissions are delegated.
Multiple entities can be specified as a comma-separated list.
If neither of the
@@ -169,7 +170,7 @@ To specify a group with the same name as a user, use the
options.
.It Xo
.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ...
.Ar setname Oc Ns …
.Xc
The permissions to delegate.
Multiple permissions may be specified as a comma-separated list.
@@ -191,53 +192,38 @@ file system or volume, and all of its descendents.
Permissions are generally the ability to use a ZFS subcommand or change a ZFS
property.
The following permissions are available:
.Bd -literal
.TS
l l l .
NAME TYPE NOTES
allow subcommand Must also have the permission that is
being allowed
_ _ _
allow subcommand Must also have the permission that is being allowed
bookmark subcommand
clone subcommand Must also have the 'create' ability and
'mount' ability in the origin file system
create subcommand Must also have the 'mount' ability.
Must also have the 'refreservation' ability to
create a non-sparse volume.
destroy subcommand Must also have the 'mount' ability
diff subcommand Allows lookup of paths within a dataset
given an object number, and the ability
to create snapshots necessary to
'zfs diff'.
clone subcommand Must also have the \fBcreate\fR ability and \fBmount\fR ability in the origin file system
create subcommand Must also have the \fBmount\fR ability. Must also have the \fBrefreservation\fR ability to create a non-sparse volume.
destroy subcommand Must also have the \fBmount\fR ability
diff subcommand Allows lookup of paths within a dataset given an object number, and the ability to create snapshots necessary to \fBzfs diff\fR.
hold subcommand Allows adding a user hold to a snapshot
load-key subcommand Allows loading and unloading of encryption key
(see 'zfs load-key' and 'zfs unload-key').
change-key subcommand Allows changing an encryption key via
'zfs change-key'.
mount subcommand Allows mount/umount of ZFS datasets
promote subcommand Must also have the 'mount' and 'promote'
ability in the origin file system
receive subcommand Must also have the 'mount' and 'create'
ability
release subcommand Allows releasing a user hold which might
destroy the snapshot
rename subcommand Must also have the 'mount' and 'create'
ability in the new parent
rollback subcommand Must also have the 'mount' ability
load subcommand Allows loading and unloading of encryption key (see \fBzfs load-key\fR and \fBzfs unload-key\fR).
change subcommand Allows changing an encryption key via \fBzfs change-key\fR.
mount subcommand Allows mounting/umounting ZFS datasets
promote subcommand Must also have the \fBmount\fR and \fBpromote\fR ability in the origin file system
receive subcommand Must also have the \fBmount\fR and \fBcreate\fR ability
release subcommand Allows releasing a user hold which might destroy the snapshot
rename subcommand Must also have the \fBmount\fR and \fBcreate\fR ability in the new parent
rollback subcommand Must also have the \fBmount\fR ability
send subcommand
share subcommand Allows sharing file systems over NFS
or SMB protocols
snapshot subcommand Must also have the 'mount' ability
share subcommand Allows sharing file systems over NFS or SMB protocols
snapshot subcommand Must also have the \fBmount\fR ability
groupquota other Allows accessing any groupquota@...
property
groupused other Allows reading any groupused@... property
groupquota other Allows accessing any \fBgroupquota@\fI...\fR property
groupused other Allows reading any \fBgroupused@\fI...\fR property
userprop other Allows changing any user property
userquota other Allows accessing any userquota@...
property
userused other Allows reading any userused@... property
projectobjquota other Allows accessing any projectobjquota@...
property
projectquota other Allows accessing any projectquota@... property
projectobjused other Allows reading any projectobjused@... property
projectused other Allows reading any projectused@... property
userquota other Allows accessing any \fBuserquota@\fI...\fR property
userused other Allows reading any \fBuserused@\fI...\fR property
projectobjquota other Allows accessing any \fBprojectobjquota@\fI...\fR property
projectquota other Allows accessing any \fBprojectquota@\fI...\fR property
projectobjused other Allows reading any \fBprojectobjused@\fI...\fR property
projectused other Allows reading any \fBprojectused@\fI...\fR property
aclinherit property
acltype property
@@ -273,13 +259,13 @@ volsize property
vscan property
xattr property
zoned property
.Ed
.TE
.It Xo
.Nm zfs
.Cm allow
.Fl c
.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ...
.Ar setname Oc Ns …
.Ar filesystem Ns | Ns Ar volume
.Xc
Sets
@@ -293,7 +279,7 @@ to the creator of any newly-created descendent file system.
.Cm allow
.Fl s No @ Ns Ar setname
.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ...
.Ar setname Oc Ns …
.Ar filesystem Ns | Ns Ar volume
.Xc
Defines or adds permissions to a permission set.
@@ -309,9 +295,9 @@ and can be no more than 64 characters long.
.Nm zfs
.Cm unallow
.Op Fl dglru
.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns ...
.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ... Oc
.Ar setname Oc Ns … Oc
.Ar filesystem Ns | Ns Ar volume
.Xc
.It Xo
@@ -320,7 +306,7 @@ and can be no more than 64 characters long.
.Op Fl dlr
.Fl e Ns | Ns Sy everyone
.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ... Oc
.Ar setname Oc Ns … Oc
.Ar filesystem Ns | Ns Ar volume
.Xc
.It Xo
@@ -329,7 +315,7 @@ and can be no more than 64 characters long.
.Op Fl r
.Fl c
.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ... Oc
.Ar setname Oc Ns … Oc
.Ar filesystem Ns | Ns Ar volume
.Xc
Removes permissions that were granted with the
@@ -367,7 +353,7 @@ Recursively remove the permissions from this file system and all descendents.
.Op Fl r
.Fl s No @ Ns Ar setname
.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
.Ar setname Oc Ns ... Oc
.Ar setname Oc Ns … Oc
.Ar filesystem Ns | Ns Ar volume
.Xc
Removes permissions from a permission set.
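The delegation workflow described above might look like the following in practice (user and dataset names are hypothetical):

```sh
# Delegate snapshot-related permissions to user "alice" on tank/home;
# note from the table that some abilities require "mount" as well.
zfs allow -u alice snapshot,hold,release,mount tank/home
# Review the delegations currently in effect.
zfs allow tank/home
# Revoke one permission again.
zfs unallow -u alice snapshot tank/home
```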

View File

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@@ -31,29 +30,28 @@
.\" Copyright 2019 Joyent, Inc.
.\" Copyright (c) 2019, 2020 by Christian Schwarz. All Rights Reserved.
.\"
.Dd June 30, 2019
.Dt ZFS-BOOKMARK 8 SMM
.Dd May 27, 2021
.Dt ZFS-BOOKMARK 8
.Os
.
.Sh NAME
.Nm zfs-bookmark
.Nd Creates a bookmark of the given snapshot.
.Nd create bookmark of ZFS snapshot
.Sh SYNOPSIS
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm bookmark
.Ar snapshot Ns | Ns Ar bookmark newbookmark
.Xc
.Ar snapshot Ns | Ns Ar bookmark
.Ar newbookmark
.
.Sh DESCRIPTION
Creates a new bookmark of the given snapshot or bookmark.
Bookmarks mark the point in time when the snapshot was created, and can be used
as the incremental source for a
.Xr zfs-send 8
command.
.Nm zfs Cm send .
.Pp
When creating a bookmark from an existing redaction bookmark, the resulting
bookmark is
.Sy not
.Em not
a redaction bookmark.
.Pp
This feature must be enabled to be used.
@@ -62,7 +60,7 @@ See
for details on ZFS feature flags and the
.Sy bookmarks
feature.
.El
.
.Sh SEE ALSO
.Xr zfs-destroy 8 ,
.Xr zfs-send 8 ,
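A sketch of the incremental-source use mentioned above (dataset and snapshot names hypothetical):

```sh
# Create a bookmark from a snapshot, then use it as the incremental
# send source even after the snapshot itself has been destroyed.
zfs snapshot tank/data@monday
zfs bookmark tank/data@monday tank/data#monday
zfs destroy tank/data@monday
zfs snapshot tank/data@tuesday
zfs send -i tank/data#monday tank/data@tuesday > incr.zstream
```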

View File

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@@ -30,35 +29,29 @@
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd June 30, 2019
.Dd May 27, 2021
.Dt ZFS-CLONE 8
.Os
.
.Sh NAME
.Nm zfs-clone
.Nd Creates a clone of the given snapshot.
.Nd clone snapshot of ZFS dataset
.Sh SYNOPSIS
.Nm zfs
.Cm clone
.Op Fl p
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Ar snapshot Ar filesystem Ns | Ns Ar volume
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm clone
.Op Fl p
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Ar snapshot Ar filesystem Ns | Ns Ar volume
.Xc
See the
.Em Clones
.Sx Clones
section of
.Xr zfsconcepts 8
for details.
The target dataset can be located anywhere in the ZFS hierarchy, and is created
as the same type as the original.
.Bl -tag -width "-o"
The target dataset can be located anywhere in the ZFS hierarchy,
and is created as the same type as the original.
.Bl -tag -width Ds
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property; see
.Nm zfs Cm create
@@ -71,7 +64,7 @@ property inherited from their parent.
If the target filesystem or volume already exists, the operation completes
successfully.
.El
.El
.
.Sh SEE ALSO
.Xr zfs-promote 8 ,
.Xr zfs-snapshot 8
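For illustration, a clone created with parents and a property set at creation time (names hypothetical):

```sh
# Clone a snapshot to a new location elsewhere in the hierarchy;
# -p creates missing parent datasets, -o sets a property on the clone.
zfs snapshot tank/ws@base
zfs clone -p -o readonly=on tank/ws@base tank/builds/try1
```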

View File

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@@ -33,28 +32,30 @@
.Dd December 1, 2020
.Dt ZFS-CREATE 8
.Os
.
.Sh NAME
.Nm zfs-create
.Nd Creates a new ZFS file system.
.Nd create ZFS dataset
.Sh SYNOPSIS
.Nm zfs
.Cm create
.Op Fl Pnpuv
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Ar filesystem
.Nm zfs
.Cm create
.Op Fl ps
.Op Fl b Ar blocksize
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Fl V Ar size Ar volume
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm create
.Op Fl Pnpuv
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Ar filesystem
.Xc
Creates a new ZFS file system.
@@ -134,7 +135,7 @@ Print verbose information about the created dataset.
.Cm create
.Op Fl ps
.Op Fl b Ar blocksize
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Fl V Ar size Ar volume
.Xc
Creates a volume of the given size.
@@ -234,14 +235,14 @@ Print verbose information about the created dataset.
.El
.El
.Ss ZFS Volumes as Swap
ZFS volumes may be used as swap devices. After creating the volume with the
ZFS volumes may be used as swap devices.
After creating the volume with the
.Nm zfs Cm create Fl V
command set up and enable the swap area using the
.Xr mkswap 8
and
enable the swap area using the
.Xr swapon 8
commands. Do not swap to a file on a ZFS file system. A ZFS swap file
configuration is not supported.
command.
Swapping to files on ZFS filesystems is not supported.
.
.Sh SEE ALSO
.Xr zfs-destroy 8 ,
.Xr zfs-list 8 ,
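The swap setup described above can be sketched as follows (pool name and volume size are hypothetical; requires root):

```sh
# Create the volume, then initialize and enable it as swap.
# Do not swap to a file on a ZFS filesystem; that is unsupported.
zfs create -V 4G rpool/swap
mkswap /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
```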

View File

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@@ -33,9 +32,10 @@
.Dd June 30, 2019
.Dt ZFS-DESTROY 8
.Os
.
.Sh NAME
.Nm zfs-destroy
.Nd Destroys the given dataset(s), snapshot(s), or bookmark.
.Nd destroy ZFS dataset, snapshots, or bookmark
.Sh SYNOPSIS
.Nm zfs
.Cm destroy
@@ -45,10 +45,11 @@
.Cm destroy
.Op Fl Rdnprv
.Ar filesystem Ns | Ns Ar volume Ns @ Ns Ar snap Ns
.Oo % Ns Ar snap Ns Oo , Ns Ar snap Ns Oo % Ns Ar snap Oc Oc Oc Ns ...
.Oo % Ns Ar snap Ns Oo , Ns Ar snap Ns Oo % Ns Ar snap Oc Oc Oc Ns …
.Nm zfs
.Cm destroy
.Ar filesystem Ns | Ns Ar volume Ns # Ns Ar bookmark
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
@@ -67,9 +68,7 @@ dataset that has active dependents
Recursively destroy all dependents, including cloned file systems outside the
target hierarchy.
.It Fl f
Force an unmount of any file systems using the
.Nm unmount Fl f
command.
Forcibly unmount file systems.
This option has no effect on non-file systems or unmounted file systems.
.It Fl n
Do a dry-run
@@ -100,10 +99,10 @@ behavior for mounted file systems in use.
.Cm destroy
.Op Fl Rdnprv
.Ar filesystem Ns | Ns Ar volume Ns @ Ns Ar snap Ns
.Oo % Ns Ar snap Ns Oo , Ns Ar snap Ns Oo % Ns Ar snap Oc Oc Oc Ns ...
.Oo % Ns Ar snap Ns Oo , Ns Ar snap Ns Oo % Ns Ar snap Oc Oc Oc Ns …
.Xc
The given snapshots are destroyed immediately if and only if the
.Ql zfs destroy
.Nm zfs Cm destroy
command without the
.Fl d
option would have destroyed it.
@@ -138,8 +137,8 @@ If this flag is specified, the
.Fl d
flag will have no effect.
.It Fl d
Destroy immediately. If a snapshot cannot be destroyed now, mark it for
deferred destruction.
Destroy immediately.
If a snapshot cannot be destroyed now, mark it for deferred destruction.
.It Fl n
Do a dry-run
.Pq Qq No-op
@@ -173,6 +172,7 @@ behavior for mounted file systems in use.
.Xc
The given bookmark is destroyed.
.El
.
.Sh SEE ALSO
.Xr zfs-create 8 ,
.Xr zfs-hold 8
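The percent-range snapshot syntax in the synopsis above might be used like this (names hypothetical):

```sh
# Dry-run (-n) a verbose (-v) destroy of a range of snapshots,
# from @snap1 through @snap3 inclusive.
zfs destroy -nv tank/home@snap1%snap3
# Destroy a bookmark.
zfs destroy tank/home#monday
```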

View File

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@@ -30,25 +29,20 @@
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd June 30, 2019
.Dd May 29, 2021
.Dt ZFS-DIFF 8
.Os
.
.Sh NAME
.Nm zfs-diff
.Nd Display the difference between two snapshots of a given filesystem.
.Nd show difference between ZFS snapshots
.Sh SYNOPSIS
.Nm zfs
.Cm diff
.Op Fl FHt
.Ar snapshot Ar snapshot Ns | Ns Ar filesystem
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm diff
.Op Fl FHt
.Ar snapshot Ar snapshot Ns | Ns Ar filesystem
.Xc
Display the difference between a snapshot of a given filesystem and another
snapshot of that filesystem from a later time or the current contents of the
filesystem.
@@ -57,35 +51,48 @@ indicate pathname, new pathname
.Pq in case of rename ,
change in link count, and optionally file type and/or change time.
The types of change are:
.Bd -literal
- The path has been removed
+ The path has been created
M The path has been modified
R The path has been renamed
.Ed
.Bl -tag -compact -offset Ds -width "M"
.It Sy -
The path has been removed
.It Sy +
The path has been created
.It Sy M
The path has been modified
.It Sy R
The path has been renamed
.El
.Bl -tag -width "-F"
.It Fl F
Display an indication of the type of file, in a manner similar to the
.Fl F
option of
.Xr ls 1 .
.Bd -literal
B Block device
C Character device
/ Directory
> Door
| Named pipe
@ Symbolic link
P Event port
= Socket
F Regular file
.Ed
.Bl -tag -compact -offset 2n -width "B"
.It Sy B
Block device
.It Sy C
Character device
.It Sy /
Directory
.It Sy >
Door
.It Sy |\&
Named pipe
.It Sy @
Symbolic link
.It Sy P
Event port
.It Sy =
Socket
.It Sy F
Regular file
.El
.It Fl H
Give more parsable tab-separated output, without header lines and without
arrows.
.It Fl t
Display the path's inode change time as the first column of output.
.El
.El
.
.Sh SEE ALSO
.Xr zfs-snapshot 8
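The tab-separated output of the -H option described above is easy to post-process; a small sketch (the captured lines are hypothetical sample output, not taken from a real pool):

```shell
# Hypothetical lines as `zfs diff -H tank/home@old tank/home` would
# print them: change type in column 1, path(s) in later columns.
sample='M	/tank/home/profile
+	/tank/home/new.txt
-	/tank/home/stale.txt
R	/tank/home/a	/tank/home/b'

# Tally how many paths were modified (M), created (+), removed (-),
# and renamed (R), keyed on the first tab-separated column.
printf '%s\n' "$sample" |
	awk -F'\t' '{count[$1]++} END {for (t in count) printf "%s\t%d\n", t, count[t]}' |
	sort
```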

View File

@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@@ -33,40 +32,42 @@
.Dd June 30, 2019
.Dt ZFS-HOLD 8
.Os
.
.Sh NAME
.Nm zfs-hold
.Nd Hold a snapshot to prevent it being removed with the zfs destroy command.
.Nd hold ZFS snapshots to prevent their removal
.Sh SYNOPSIS
.Nm zfs
.Cm hold
.Op Fl r
.Ar tag Ar snapshot Ns ...
.Ar tag Ar snapshot Ns …
.Nm zfs
.Cm holds
.Op Fl rH
.Ar snapshot Ns ...
.Ar snapshot Ns …
.Nm zfs
.Cm release
.Op Fl r
.Ar tag Ar snapshot Ns ...
.Ar tag Ar snapshot Ns …
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm hold
.Op Fl r
.Ar tag Ar snapshot Ns ...
.Ar tag Ar snapshot Ns …
.Xc
Adds a single reference, named with the
.Ar tag
argument, to the specified snapshot or snapshots.
argument, to the specified snapshots.
Each snapshot has its own tag namespace, and tags must be unique within that
space.
.Pp
If a hold exists on a snapshot, attempts to destroy that snapshot by using the
.Nm zfs Cm destroy
command return
.Er EBUSY .
.Sy EBUSY .
.Bl -tag -width "-r"
.It Fl r
Specifies that a hold with the given tag is applied recursively to the snapshots
@@ -76,7 +77,7 @@ of all descendent file systems.
.Nm zfs
.Cm holds
.Op Fl rH
.Ar snapshot Ns ...
.Ar snapshot Ns …
.Xc
Lists all existing user references for the given snapshot or snapshots.
.Bl -tag -width "-r"
@@ -90,7 +91,7 @@ Do not print headers, use tab-delimited output.
.Nm zfs
.Cm release
.Op Fl r
.Ar tag Ar snapshot Ns ...
.Ar tag Ar snapshot Ns …
.Xc
Removes a single reference, named with the
.Ar tag
@@ -99,12 +100,13 @@ The tag must already exist for each snapshot.
If a hold exists on a snapshot, attempts to destroy that snapshot by using the
.Nm zfs Cm destroy
command return
.Er EBUSY .
.Sy EBUSY .
.Bl -tag -width "-r"
.It Fl r
Recursively releases a hold with the given tag on the snapshots of all
descendent file systems.
.El
.El
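The headerless, tab-delimited output of the `-H` flag lends itself to scripting. As a sketch (the sample line below is invented; a real invocation would be something like `zfs holds -H rpool/fs@snap`, whose columns are snapshot name, tag, and creation time):

```shell
# Hypothetical sample of `zfs holds -H` output, tab-separated:
# name, tag, timestamp.  Extract just the hold tags.
printf 'rpool/fs@snap\tkeep\tThu May 27 02:46 2021\n' |
  awk -F'\t' '{ print $2 }'
```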
.
.Sh SEE ALSO
.Xr zfs-destroy 8

View File

@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -37,82 +36,87 @@
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd December 9, 2019
.Dd May 27, 2021
.Dt ZFS-JAIL 8
.Os FreeBSD
.Os
.
.Sh NAME
.Nm zfs-jail
.Nd Attaches and detaches ZFS filesystems from FreeBSD jails.
.No A Tn ZFS
dataset can be attached to a jail by using the
.Qq Nm zfs jail
subcommand. You cannot attach a dataset to one jail and the children of the
same dataset to another jail. You can also not attach the root file system
of the jail or any dataset which needs to be mounted before the zfs rc script
is run inside the jail, as it would be attached unmounted until it is
mounted from the rc script inside the jail. To allow management of the
dataset from within a jail, the
.Sy jailed
property has to be set and the jail needs access to the
.Pa /dev/zfs
device. The
.Sy quota
property cannot be changed from within a jail. See
.Xr jail 8
for information on how to allow mounting
.Tn ZFS
datasets from within a jail.
.Pp
.No A Tn ZFS
dataset can be detached from a jail using the
.Qq Nm zfs unjail
subcommand.
.Pp
After a dataset is attached to a jail and the jailed property is set, a jailed
file system cannot be mounted outside the jail, since the jail administrator
might have set the mount point to an unacceptable value.
.Nd attach or detach ZFS filesystem from FreeBSD jail
.Sh SYNOPSIS
.Nm zfs
.Cm jail
.Ar jailid Ns | Ns Ar jailname filesystem
.Nm zfs
.Cm unjail
.Ar jailid Ns | Ns Ar jailname filesystem
.Nm zfs Cm jail
.Ar jailid Ns | Ns Ar jailname
.Ar filesystem
.Nm zfs Cm unjail
.Ar jailid Ns | Ns Ar jailname
.Ar filesystem
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm jail
.Ar jailid filesystem
.Ar jailid Ns | Ns Ar jailname
.Ar filesystem
.Xc
.Pp
Attaches the specified
Attach the specified
.Ar filesystem
to the jail identified by JID
.Ar jailid .
.Ar jailid
or name
.Ar jailname .
From now on this file system tree can be managed from within a jail if the
.Sy jailed
property has been set. To use this functuinality, the jail needs the
.Va allow.mount
property has been set.
To use this functionality, the jail needs the
.Sy allow.mount
and
.Va allow.mount.zfs
parameters set to 1 and the
.Va enforce_statfs
parameter set to a value lower than 2.
.Sy allow.mount.zfs
parameters set to
.Sy 1
and the
.Sy enforce_statfs
parameter set to a value lower than
.Sy 2 .
.Pp
You cannot attach a jailed dataset's children to another jail.
You can also not attach the root file system
of the jail or any dataset which needs to be mounted before the zfs rc script
is run inside the jail, as it would be attached unmounted until it is
mounted from the rc script inside the jail.
.Pp
To allow management of the dataset from within a jail, the
.Sy jailed
property has to be set and the jail needs access to the
.Pa /dev/zfs
device.
The
.Sy quota
property cannot be changed from within a jail.
.Pp
After a dataset is attached to a jail and the
.Sy jailed
property is set, a jailed file system cannot be mounted outside the jail,
since the jail administrator might have set the mount point to an unacceptable value.
.Pp
See
.Xr jail 8
for more information on managing jails and configuring the parameters above.
for more information on managing jails.
Jails are a
.Fx
feature and are not relevant on other platforms.
.It Xo
.Nm zfs
.Cm unjail
.Ar jailid filesystem
.Ar jailid Ns | Ns Ar jailname
.Ar filesystem
.Xc
.Pp
Detaches the specified
.Ar filesystem
from the jail identified by JID
.Ar jailid .
.Ar jailid
or name
.Ar jailname .
.El
.Sh SEE ALSO
.Xr jail 8 ,

View File

@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -30,35 +29,25 @@
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd June 30, 2019
.Dd May 27, 2021
.Dt ZFS-LIST 8
.Os
.
.Sh NAME
.Nm zfs-list
.Nd Lists the property information for the given datasets in tabular form.
.Nd list properties of ZFS datasets
.Sh SYNOPSIS
.Nm zfs
.Cm list
.Op Fl r Ns | Ns Fl d Ar depth
.Op Fl Hp
.Oo Fl o Ar property Ns Oo , Ns Ar property Oc Ns ... Oc
.Oo Fl s Ar property Oc Ns ...
.Oo Fl S Ar property Oc Ns ...
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns ... Oc
.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Oc Ns ...
.Oo Fl o Ar property Ns Oo , Ns Ar property Oc Ns … Oc
.Oo Fl s Ar property Oc Ns …
.Oo Fl S Ar property Oc Ns …
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Oc Ns …
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm list
.Op Fl r Ns | Ns Fl d Ar depth
.Op Fl Hp
.Oo Fl o Ar property Ns Oo , Ns Ar property Oc Ns ... Oc
.Oo Fl s Ar property Oc Ns ...
.Oo Fl S Ar property Oc Ns ...
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns ... Oc
.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Oc Ns ...
.Xc
If specified, you can list property information by the absolute pathname or the
relative pathname.
By default, all file systems and volumes are displayed.
@ -75,7 +64,7 @@ or
.Fl t Sy all
options are specified.
The following fields are displayed:
.Sy name Ns \&, Sy used Ns \&, Sy available Ns \&, Sy referenced Ns \&, Sy mountpoint Ns .
.Sy name , Sy used , Sy available , Sy referenced , Sy mountpoint .
.Bl -tag -width "-H"
.It Fl H
Used for scripting mode.
@ -96,10 +85,10 @@ will display only the dataset and its direct children.
.It Fl o Ar property
A comma-separated list of properties to display.
The property must be:
.Bl -bullet
.Bl -bullet -compact
.It
One of the properties described in the
.Em Native Properties
.Sx Native Properties
section of
.Xr zfsprops 8
.It
@ -113,10 +102,9 @@ The value
.Sy space
to display space usage properties on file systems and volumes.
This is a shortcut for specifying
.Fl o Sy name Ns \&, Ns Sy avail Ns \&, Ns Sy used Ns \&, Ns Sy usedsnap Ns \&, Ns
.Sy usedds Ns \&, Ns Sy usedrefreserv Ns \&, Ns Sy usedchild Fl t
.Sy filesystem Ns \&, Ns Sy volume
syntax.
.Fl o Ns \ \& Ns Sy name , Ns Sy avail , Ns Sy used , Ns Sy usedsnap , Ns
.Sy usedds , Ns Sy usedrefreserv , Ns Sy usedchild
.Fl t Sy filesystem , Ns Sy volume .
.El
.It Fl p
Display numbers in parsable
@ -128,7 +116,7 @@ Recursively display any children of the dataset on the command line.
A property for sorting the output by column in ascending order based on the
value of the property.
The property must be one of the properties described in the
.Em Properties
.Sx Properties
section of
.Xr zfsprops 8
or the value
@ -141,7 +129,7 @@ Multiple
.Fl s
options are evaluated from left to right in decreasing order of importance.
The following is a list of sorting criteria:
.Bl -bullet
.Bl -bullet -compact
.It
Numeric types sort in numeric order.
.It
@ -168,7 +156,7 @@ For example, specifying
.Fl t Sy snapshot
displays only snapshots.
.El
.El
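Combining `-H` (no headers, tab-delimited) with `-p` (exact byte counts) makes the output directly consumable by other tools. A hypothetical sketch (the sample lines are invented, standing in for output of something like `zfs list -Hp -o name,used`):

```shell
# Hypothetical sample of `zfs list -Hp -o name,used` output;
# -p prints exact byte values, so they can be summed directly.
printf 'rpool\t1024\nrpool/fs\t2048\n' |
  awk -F'\t' '{ total += $2 } END { print total }'
```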
.
.Sh SEE ALSO
.Xr zfs-get 8 ,
.Xr zfsprops 8

View File

@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -33,19 +32,20 @@
.Dd January 13, 2020
.Dt ZFS-LOAD-KEY 8
.Os
.
.Sh NAME
.Nm zfs-load-key
.Nd Load, unload, or change the encryption key used to access a dataset.
.Nd load, unload, or change encryption key of ZFS dataset
.Sh SYNOPSIS
.Nm zfs
.Cm load-key
.Op Fl nr
.Op Fl L Ar keylocation
.Fl a | Ar filesystem
.Fl a Ns | Ns Ar filesystem
.Nm zfs
.Cm unload-key
.Op Fl r
.Fl a | Ar filesystem
.Fl a Ns | Ns Ar filesystem
.Nm zfs
.Cm change-key
.Op Fl l
@ -58,6 +58,7 @@
.Fl i
.Op Fl l
.Ar filesystem
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
@ -65,22 +66,25 @@
.Cm load-key
.Op Fl nr
.Op Fl L Ar keylocation
.Fl a | Ar filesystem
.Fl a Ns | Ns Ar filesystem
.Xc
Load the key for
.Ar filesystem ,
allowing it and all children that inherit the
.Sy keylocation
property to be accessed. The key will be expected in the format specified by the
property to be accessed.
The key will be expected in the format specified by the
.Sy keyformat
and location specified by the
.Sy keylocation
property. Note that if the
property.
Note that if the
.Sy keylocation
is set to
.Sy prompt
the terminal will interactively wait for the key to be entered. Loading a key
will not automatically mount the dataset. If that functionality is desired,
the terminal will interactively wait for the key to be entered.
Loading a key will not automatically mount the dataset.
If that functionality is desired,
.Nm zfs Cm mount Fl l
will ask for the key and mount the dataset
.Po
@ -100,16 +104,19 @@ Loads the keys for all encryption roots in all imported pools.
.It Fl n
Do a dry-run
.Pq Qq No-op
load-key. This will cause zfs to simply check that the
provided key is correct. This command may be run even if the key is already
loaded.
.Cm load-key .
This will cause
.Nm zfs
to simply check that the provided key is correct.
This command may be run even if the key is already loaded.
.It Fl L Ar keylocation
Use
.Ar keylocation
instead of the
.Sy keylocation
property. This will not change the value of the property on the dataset. Note
that if used with either
property.
This will not change the value of the property on the dataset.
Note that if used with either
.Fl r
or
.Fl a ,
@ -121,13 +128,14 @@ may only be given as
.Nm zfs
.Cm unload-key
.Op Fl r
.Fl a | Ar filesystem
.Fl a Ns | Ns Ar filesystem
.Xc
Unloads a key from ZFS, removing the ability to access the dataset and all of
its children that inherit the
.Sy keylocation
property. This requires that the dataset is not currently open or mounted. Once
the key is unloaded the
property.
This requires that the dataset is not currently open or mounted.
Once the key is unloaded the
.Sy keystatus
property will become
.Sy unavailable .
@ -154,15 +162,16 @@ Unloads the keys for all encryption roots in all imported pools.
.Op Fl l
.Ar filesystem
.Xc
Changes the user's key (e.g. a passphrase) used to access a dataset. This
command requires that the existing key for the dataset is already loaded into
ZFS. This command may also be used to change the
Changes the user's key (e.g. a passphrase) used to access a dataset.
This command requires that the existing key for the dataset is already loaded.
This command may also be used to change the
.Sy keylocation ,
.Sy keyformat ,
and
.Sy pbkdf2iters
properties as needed. If the dataset was not previously an encryption root it
will become one. Alternatively, the
properties as needed.
If the dataset was not previously an encryption root it will become one.
Alternatively, the
.Fl i
flag may be provided to cause an encryption root to inherit the parent's key
instead.
@ -171,36 +180,33 @@ If the user's key is compromised,
.Nm zfs Cm change-key
does not necessarily protect existing or newly-written data from attack.
Newly-written data will continue to be encrypted with the same master key as
the existing data. The master key is compromised if an attacker obtains a
user key and the corresponding wrapped master key. Currently,
the existing data.
The master key is compromised if an attacker obtains a
user key and the corresponding wrapped master key.
Currently,
.Nm zfs Cm change-key
does not overwrite the previous wrapped master key on disk, so it is
accessible via forensic analysis for an indeterminate length of time.
.Pp
In the event of a master key compromise, ideally the drives should be securely
erased to remove all the old data (which is readable using the compromised
master key), a new pool created, and the data copied back. This can be
approximated in place by creating new datasets, copying the data
(e.g. using
.Nm zfs Cm send
|
.Nm zfs Cm recv Ns
), and then clearing the free space with
.Nm zpool Cm trim --secure
master key), a new pool created, and the data copied back.
This can be approximated in place by creating new datasets, copying the data
.Pq e.g. using Nm zfs Cm send | Nm zfs Cm recv ,
and then clearing the free space with
.Nm zpool Cm trim Fl -secure
if supported by your hardware, otherwise
.Nm zpool Cm initialize Ns .
.Nm zpool Cm initialize .
.Bl -tag -width "-r"
.It Fl l
Ensures the key is loaded before attempting to change the key. This is
effectively equivalent to
.Qq Nm zfs Cm load-key Ar filesystem ; Nm zfs Cm change-key Ar filesystem
Ensures the key is loaded before attempting to change the key.
This is effectively equivalent to running
.Nm zfs Cm load-key Ar filesystem ; Nm zfs Cm change-key Ar filesystem
.It Fl o Ar property Ns = Ns Ar value
Allows the user to set encryption key properties (
.Sy keyformat ,
.Sy keylocation ,
and
.Sy pbkdf2iters
) while changing the key. This is the only way to alter
Allows the user to set encryption key properties
.Pq Sy keyformat , keylocation , No and Sy pbkdf2iters
while changing the key.
This is the only way to alter
.Sy keyformat
and
.Sy pbkdf2iters
@ -208,44 +214,43 @@ after the dataset has been created.
.It Fl i
Indicates that zfs should make
.Ar filesystem
inherit the key of its parent. Note that this command can only be run on an
encryption root that has an encrypted parent.
inherit the key of its parent.
Note that this command can only be run on an encryption root
that has an encrypted parent.
.El
.El
.Ss Encryption
Enabling the
.Sy encryption
feature allows for the creation of encrypted filesystems and volumes. ZFS
will encrypt file and zvol data, file attributes, ACLs, permission bits,
feature allows for the creation of encrypted filesystems and volumes.
ZFS will encrypt file and volume data, file attributes, ACLs, permission bits,
directory listings, FUID mappings, and
.Sy userused
/
.Sy groupused
data. ZFS will not encrypt metadata related to the pool structure, including
.Sy userused Ns / Ns Sy groupused
data.
ZFS will not encrypt metadata related to the pool structure, including
dataset and snapshot names, dataset hierarchy, properties, file size, file
holes, and deduplication tables (though the deduplicated data itself is
encrypted).
.Pp
Key rotation is managed by ZFS. Changing the user's key (e.g. a passphrase)
does not require re-encrypting the entire dataset. Datasets can be scrubbed,
Key rotation is managed by ZFS.
Changing the user's key (e.g. a passphrase)
does not require re-encrypting the entire dataset.
Datasets can be scrubbed,
resilvered, renamed, and deleted without the encryption keys being loaded (see the
.Nm zfs Cm load-key
.Cm load-key
subcommand for more info on key loading).
.Pp
Creating an encrypted dataset requires specifying the
.Sy encryption
and
.Sy keyformat
.Sy encryption No and Sy keyformat
properties at creation time, along with an optional
.Sy keylocation
and
.Sy pbkdf2iters .
.Sy keylocation No and Sy pbkdf2iters .
After entering an encryption key, the
created dataset will become an encryption root. Any descendant datasets will
created dataset will become an encryption root.
Any descendant datasets will
inherit their encryption key from the encryption root by default, meaning that
loading, unloading, or changing the key for the encryption root will implicitly
do the same for all inheriting datasets. If this inheritance is not desired,
simply supply a
do the same for all inheriting datasets.
If this inheritance is not desired, simply supply a
.Sy keyformat
when creating the child dataset or use
.Nm zfs Cm change-key
@ -256,39 +261,40 @@ may match that of the parent while still creating a new encryption root, and
that changing the
.Sy encryption
property alone does not create a new encryption root; this would simply use a
different cipher suite with the same key as its encryption root. The one
exception is that clones will always use their origin's encryption key.
As a result of this exception, some encryption-related properties (namely
.Sy keystatus ,
.Sy keyformat ,
.Sy keylocation ,
and
.Sy pbkdf2iters )
different cipher suite with the same key as its encryption root.
The one exception is that clones will always use their origin's encryption key.
As a result of this exception, some encryption-related properties
.Pq namely Sy keystatus , keyformat , keylocation , No and Sy pbkdf2iters
do not inherit like other ZFS properties and instead use the value determined
by their encryption root. Encryption root inheritance can be tracked via the
read-only
by their encryption root.
Encryption root inheritance can be tracked via the read-only
.Sy encryptionroot
property.
.Pp
Encryption changes the behavior of a few ZFS
operations. Encryption is applied after compression so compression ratios are
preserved. Normally checksums in ZFS are 256 bits long, but for encrypted data
operations.
Encryption is applied after compression so compression ratios are preserved.
Normally checksums in ZFS are 256 bits long, but for encrypted data
the checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from
the encryption suite, which provides additional protection against maliciously
altered data. Deduplication is still possible with encryption enabled but for
security, datasets will only dedup against themselves, their snapshots, and
their clones.
altered data.
Deduplication is still possible with encryption enabled but for security,
datasets will only deduplicate against themselves, their snapshots,
and their clones.
.Pp
There are a few limitations on encrypted datasets. Encrypted data cannot be
embedded via the
There are a few limitations on encrypted datasets.
Encrypted data cannot be embedded via the
.Sy embedded_data
feature. Encrypted datasets may not have
feature.
Encrypted datasets may not have
.Sy copies Ns = Ns Em 3
since the implementation stores some encryption metadata where the third copy
would normally be. Since compression is applied before encryption datasets may
be vulnerable to a CRIME-like attack if applications accessing the data allow
for it. Deduplication with encryption will leak information about which blocks
are equivalent in a dataset and will incur an extra CPU cost per block written.
would normally be.
Since compression is applied before encryption, datasets may
be vulnerable to a CRIME-like attack if applications accessing the data allow for it.
Deduplication with encryption will leak information about which blocks
are equivalent in a dataset and will incur an extra CPU cost for each block written.
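For a concrete starting point, a key file for `keyformat=raw` must be exactly 32 bytes. The sketch below generates one; the key path and dataset name are examples only, and the `zfs create` line is shown commented since it requires a live pool:

```shell
# Generate a 32-byte random key file, the size keyformat=raw expects.
dd if=/dev/urandom of=/tmp/zfs.key bs=32 count=1 2>/dev/null
wc -c < /tmp/zfs.key   # must report exactly 32
# It could then be used as, for example:
#   zfs create -o encryption=on -o keyformat=raw \
#       -o keylocation=file:///tmp/zfs.key pool/secret
```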
.
.Sh SEE ALSO
.Xr zfs-create 8 ,
.Xr zfs-set 8 ,

View File

@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -33,9 +32,10 @@
.Dd February 16, 2019
.Dt ZFS-MOUNT 8
.Os
.
.Sh NAME
.Nm zfs-mount
.Nd Manage mount state of ZFS file systems.
.Nd manage mount state of ZFS filesystems
.Sh SYNOPSIS
.Nm zfs
.Cm mount
@ -43,11 +43,12 @@
.Cm mount
.Op Fl Oflv
.Op Fl o Ar options
.Fl a | Ar filesystem
.Fl a Ns | Ns Ar filesystem
.Nm zfs
.Cm unmount
.Op Fl fu
.Fl a | Ar filesystem Ns | Ns Ar mountpoint
.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
@ -60,11 +61,12 @@ Displays all ZFS file systems currently mounted.
.Cm mount
.Op Fl Oflv
.Op Fl o Ar options
.Fl a | Ar filesystem
.Fl a Ns | Ns Ar filesystem
.Xc
Mount ZFS filesystem on a path described by its
.Sy mountpoint
property, if the path exists and is empty. If
property, if the path exists and is empty.
If
.Sy mountpoint
is set to
.Em legacy ,
@ -72,7 +74,8 @@ the filesystem should be instead mounted using
.Xr mount 8 .
.Bl -tag -width "-O"
.It Fl O
Perform an overlay mount. Allows mounting in non-empty
Perform an overlay mount.
Allows mounting in non-empty
.Sy mountpoint .
See
.Xr mount 8
@ -91,13 +94,12 @@ section of
.Xr zfsprops 8
for details.
.It Fl l
Load keys for encrypted filesystems as they are being mounted. This is
equivalent to executing
Load keys for encrypted filesystems as they are being mounted.
This is equivalent to executing
.Nm zfs Cm load-key
on each encryption root before mounting it. Note that if a filesystem has a
.Sy keylocation
of
.Sy prompt
on each encryption root before mounting it.
Note that if a filesystem has
.Sy keylocation Ns = Ns Sy prompt ,
this will cause the terminal to interactively block after asking for the key.
.It Fl v
Report mount progress.
@ -108,7 +110,7 @@ Attempt to force mounting of all filesystems, even those that couldn't normally
.Nm zfs
.Cm unmount
.Op Fl fu
.Fl a | Ar filesystem Ns | Ns Ar mountpoint
.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint
.Xc
Unmounts currently mounted ZFS file systems.
.Bl -tag -width "-a"

View File

@ -1,3 +1,4 @@
.\"
.\" This file and its contents are supplied under the terms of the
.\" Common Development and Distribution License ("CDDL"), version 1.0.
.\" You may only use this file in accordance with the terms of version
@ -7,17 +8,17 @@
.\" source. A copy of the CDDL is also available via the Internet at
.\" http://www.illumos.org/license/CDDL.
.\"
.\"
.\" Copyright (c) 2016, 2019 by Delphix. All Rights Reserved.
.\" Copyright (c) 2019, 2020 by Christian Schwarz. All Rights Reserved.
.\" Copyright 2020 Joyent, Inc.
.\"
.Dd January 26, 2021
.Dd May 27, 2021
.Dt ZFS-PROGRAM 8
.Os
.
.Sh NAME
.Nm zfs-program
.Nd executes ZFS channel programs
.Nd execute ZFS channel programs
.Sh SYNOPSIS
.Nm zfs
.Cm program
@ -26,7 +27,8 @@
.Op Fl m Ar memory-limit
.Ar pool
.Ar script
.\".Op Ar optional arguments to channel program
.Op Ar script arguments
.
.Sh DESCRIPTION
The ZFS channel program interface allows ZFS administrative operations to be
run programmatically as a Lua script.
@ -37,22 +39,22 @@ Channel programs may only be run with root privileges.
.Pp
A modified version of the Lua 5.2 interpreter is used to run channel program
scripts.
The Lua 5.2 manual can be found at:
.Bd -centered -offset indent
The Lua 5.2 manual can be found at
.Lk http://www.lua.org/manual/5.2/
.Ed
.Pp
The channel program given by
.Ar script
will be run on
.Ar pool ,
and any attempts to access or modify other pools will cause an error.
.
.Sh OPTIONS
.Bl -tag -width "-t"
.It Fl j
Display channel program output in JSON format. When this flag is specified and
standard output is empty - channel program encountered an error. The details of
such an error will be printed to standard error in plain text.
Display channel program output in JSON format.
When this flag is specified and standard output is empty,
the channel program encountered an error.
The details of such an error will be printed to standard error in plain text.
.It Fl n
Executes a read-only channel program, which runs faster.
The program cannot change on-disk state by calling functions from the
@ -78,15 +80,17 @@ All remaining argument strings will be passed directly to the Lua script as
described in the
.Sx LUA INTERFACE
section below.
.
.Sh LUA INTERFACE
A channel program can be invoked either from the command line, or via a library
call to
.Fn lzc_channel_program .
.
.Ss Arguments
Arguments passed to the channel program are converted to a Lua table.
If invoked from the command line, extra arguments to the Lua script will be
accessible as an array stored in the argument table with the key 'argv':
.Bd -literal -offset indent
.Bd -literal -compact -offset indent
args = ...
argv = args["argv"]
-- argv == {1="arg1", 2="arg2", ...}
@ -95,7 +99,7 @@ argv = args["argv"]
If invoked from the libZFS interface, an arbitrary argument list can be
passed to the channel program, which is accessible via the same
"..." syntax in Lua:
.Bd -literal -offset indent
.Bd -literal -compact -offset indent
args = ...
-- args == {"foo"="bar", "baz"={...}, ...}
.Ed
@ -108,37 +112,35 @@ in
in a C array passed to a channel program will be stored in
.Va arr[1]
when accessed from Lua.
.
.Ss Return Values
Lua return statements take the form:
.Bd -literal -offset indent
return ret0, ret1, ret2, ...
.Ed
.Dl return ret0, ret1, ret2, ...
.Pp
Return statements returning multiple values are permitted internally in a
channel program script, but attempting to return more than one value from the
top level of the channel program is not permitted and will throw an error.
However, tables containing multiple values can still be returned.
If invoked from the command line, a return statement:
.Bd -literal -offset indent
.Bd -literal -compact -offset indent
a = {foo="bar", baz=2}
return a
.Ed
.Pp
Will be output formatted as:
.Bd -literal -offset indent
.Bd -literal -compact -offset indent
Channel program fully executed with return value:
return:
baz: 2
foo: 'bar'
.Ed
.
.Ss Fatal Errors
If the channel program encounters a fatal error while running, a non-zero exit
status will be returned.
If more information about the error is available, a singleton list will be
returned detailing the error:
.Bd -literal -offset indent
error: "error string, including Lua stack trace"
.Ed
.Dl error: \&"error string, including Lua stack trace"
.Pp
If a fatal error is returned, the channel program may have not executed at all,
may have partially executed, or may have fully executed but failed to pass a
@ -162,6 +164,7 @@ return an error code and the channel program continues executing.
See the
.Sx ZFS API
section below for function-specific details on error return codes.
.
.Ss Lua to C Value Conversion
When invoking a channel program via the libZFS interface, it is necessary to
translate arguments and return values from Lua values to their C equivalents,
@ -171,37 +174,37 @@ There is a correspondence between nvlist values in C and Lua tables.
A Lua table which is returned from the channel program will be recursively
converted to an nvlist, with table values converted to their natural
equivalents:
.Bd -literal -offset indent
string -> string
number -> int64
boolean -> boolean_value
nil -> boolean (no value)
table -> nvlist
.Ed
.TS
cw3 l c l .
string -> string
number -> int64
boolean -> boolean_value
nil -> boolean (no value)
table -> nvlist
.TE
.Pp
Likewise, table keys are replaced by string equivalents as follows:
.Bd -literal -offset indent
string -> no change
number -> signed decimal string ("%lld")
boolean -> "true" | "false"
.Ed
.TS
cw3 l c l .
string -> no change
number -> signed decimal string ("%lld")
boolean -> "true" | "false"
.TE
.Pp
Any collision of table key strings (for example, the string "true" and a
true boolean value) will cause a fatal error.
.Pp
Lua numbers are represented internally as signed 64-bit integers.
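The wrap-around this implies for uint64 property values can be observed outside ZFS: bash arithmetic also uses signed 64-bit two's-complement integers, so it is a convenient stand-in for the Lua int64 behavior described above (this is an analogy, not part of the channel program interface):

```shell
# bash $(( )) arithmetic is signed 64-bit, like Lua numbers in
# channel programs: int64 max plus one wraps around to a negative value.
echo $((9223372036854775807 + 1))
```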
.
.Sh LUA STANDARD LIBRARY
The following Lua built-in base library functions are available:
.Bd -literal -offset indent
assert rawlen
collectgarbage rawget
error rawset
getmetatable select
ipairs setmetatable
next tonumber
pairs tostring
rawequal type
.Ed
.TS
cw3 l l l l .
assert rawlen collectgarbage rawget
error rawset getmetatable select
ipairs setmetatable next tonumber
pairs tostring rawequal type
.TE
.Pp
All functions in the
.Em coroutine ,
@ -214,15 +217,13 @@ manual.
.Pp
The following base library functions have been disabled and are
not available for use in channel programs:
.Bd -literal -offset indent
dofile
loadfile
load
pcall
print
xpcall
.Ed
.TS
cw3 l l l l l l .
dofile loadfile load pcall print xpcall
.TE
.
.Sh ZFS API
.
.Ss Function Arguments
Each API function takes a fixed set of required positional arguments and
optional keyword arguments.
@ -231,22 +232,17 @@ For example, the destroy function takes a single positional string argument
argument.
When using parentheses to specify the arguments to a Lua function, only
positional arguments can be used:
.Bd -literal -offset indent
zfs.sync.destroy("rpool@snap")
.Ed
.Dl Sy zfs.sync.destroy Ns Pq \&"rpool@snap"
.Pp
To use keyword arguments, functions must be called with a single argument that
is a Lua table containing entries mapping integers to positional arguments and
strings to keyword arguments:
.Bd -literal -offset indent
zfs.sync.destroy({1="rpool@snap", defer=true})
.Ed
.Dl Sy zfs.sync.destroy Ns Pq {1="rpool@snap", defer=true}
.Pp
The Lua language allows curly braces to be used in place of parenthesis as
syntactic sugar for this calling convention:
.Bd -literal -offset indent
zfs.sync.snapshot{"rpool@snap", defer=true}
.Ed
.Dl Sy zfs.sync.snapshot Ns {"rpool@snap", defer=true}
.
.Ss Function Return Values
If an API function succeeds, it returns 0.
If it fails, it returns an error code and the channel program continues
@ -261,13 +257,11 @@ Lua table, or Nil if no error details were returned.
Different keys will exist in the error details table depending on the function
and error case.
Any such function may be called expecting a single return value:
.Bd -literal -offset indent
errno = zfs.sync.promote(dataset)
.Ed
.Dl errno = Sy zfs.sync.promote Ns Pq dataset
.Pp
Or, the error details can be retrieved:
.Bd -literal -offset indent
errno, details = zfs.sync.promote(dataset)
.Bd -literal -compact -offset indent
.No errno, details = Sy zfs.sync.promote Ns Pq dataset
if (errno == EEXIST) then
assert(details ~= Nil)
list_of_conflicting_snapshots = details
@ -276,48 +270,46 @@ end
.Pp
The following global aliases for API function error return codes are defined
for use in channel programs:
.Bd -literal -offset indent
EPERM ECHILD ENODEV ENOSPC
ENOENT EAGAIN ENOTDIR ESPIPE
ESRCH ENOMEM EISDIR EROFS
EINTR EACCES EINVAL EMLINK
EIO EFAULT ENFILE EPIPE
ENXIO ENOTBLK EMFILE EDOM
E2BIG EBUSY ENOTTY ERANGE
ENOEXEC EEXIST ETXTBSY EDQUOT
EBADF EXDEV EFBIG
.Ed
.TS
cw3 l l l l l l l .
EPERM ECHILD ENODEV ENOSPC ENOENT EAGAIN ENOTDIR
ESPIPE ESRCH ENOMEM EISDIR EROFS EINTR EACCES
EINVAL EMLINK EIO EFAULT ENFILE EPIPE ENXIO
ENOTBLK EMFILE EDOM E2BIG EBUSY ENOTTY ERANGE
ENOEXEC EEXIST ETXTBSY EDQUOT EBADF EXDEV EFBIG
.TE
.
.Ss API Functions
For detailed descriptions of the exact behavior of any zfs administrative
For detailed descriptions of the exact behavior of any ZFS administrative
operations, see the main
.Xr zfs 8
manual page.
.Bl -tag -width "xx"
.It Em zfs.debug(msg)
.It Fn zfs.debug msg
Record a debug message in the zfs_dbgmsg log.
A log of these messages can be printed via mdb's "::zfs_dbgmsg" command, or
can be monitored live by running:
.Bd -literal -offset indent
dtrace -n 'zfs-dbgmsg{trace(stringof(arg0))}'
.Ed
can be monitored live by running
.Dl dtrace -n 'zfs-dbgmsg{trace(stringof(arg0))}'
.Pp
msg (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "property (string)"
.It Ar msg Pq string
Debug message to be printed.
.Ed
.It Em zfs.exists(dataset)
.El
.It Fn zfs.exists dataset
Returns true if the given dataset exists, or false if it doesn't.
A fatal error will be thrown if the dataset is not in the target pool.
That is, in a channel program running on rpool,
zfs.exists("rpool/nonexistent_fs") returns false, but
zfs.exists("somepool/fs_that_may_exist") will error.
.Sy zfs.exists Ns Pq \&"rpool/nonexistent_fs"
returns false, but
.Sy zfs.exists Ns Pq \&"somepool/fs_that_may_exist"
will error.
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "property (string)"
.It Ar dataset Pq string
Dataset to check for existence.
Must be in the target pool.
.Ed
.It Em zfs.get_prop(dataset, property)
.El
.It Fn zfs.get_prop dataset property
Returns two values.
First, a string, number or table containing the property value for the given
dataset.
@ -326,22 +318,25 @@ dataset in which it was set or nil if it is readonly).
Throws a Lua error if the dataset is invalid or the property doesn't exist.
Note that Lua only supports int64 number types whereas ZFS number properties
are uint64.
This means very large values (like guid) may wrap around and appear negative.
This means very large values (like GUIDs) may wrap around and appear negative.
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "property (string)"
.It Ar dataset Pq string
Filesystem or snapshot path to retrieve properties from.
.Ed
.Pp
property (string)
.Bd -ragged -compact -offset "xxxx"
.It Ar property Pq string
Name of property to retrieve.
All filesystem, snapshot and volume properties are supported except
for 'mounted' and 'iscsioptions.'
Also supports the 'written@snap' and 'written#bookmark' properties and
the '<user|group><quota|used>@id' properties, though the id must be in numeric
form.
.Ed
All filesystem, snapshot and volume properties are supported except for
.Sy mounted
and
.Sy iscsioptions .
Also supports the
.Sy written@ Ns Ar snap
and
.Sy written# Ns Ar bookmark
properties and the
.Ao Sy user Ns | Ns Sy group Ac Ns Ao Sy quota Ns | Ns Sy used Ac Ns Sy @ Ns Ar id
properties, though the id must be in numeric form.
.El
.El
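.Pp
As a brief illustration (pool and dataset names here are hypothetical), the informational functions above might be combined as follows in a channel program:
.Bd -literal -compact -offset indent
if zfs.exists("rpool/fs") then
    -- zfs.get_prop returns (value, source); the source is ignored here
    local used = zfs.get_prop("rpool/fs", "used")
    zfs.debug("rpool/fs used: " .. tostring(used))
end
.Ed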
.Bl -tag -width "xx"
.It Sy zfs.sync submodule
@ -350,86 +345,73 @@ They are executed in "syncing context".
.Pp
The available sync submodule functions are as follows:
.Bl -tag -width "xx"
.It Em zfs.sync.destroy(dataset, [defer=true|false])
.It Sy zfs.sync.destroy Ns Pq Ar dataset , Op Ar defer Ns = Ns Sy true Ns | Ns Sy false
Destroy the given dataset.
Returns 0 on successful destroy, or a nonzero error code if the dataset could
not be destroyed (for example, if the dataset has any active children or
clones).
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "newbookmark (string)"
.It Ar dataset Pq string
Filesystem or snapshot to be destroyed.
.Ed
.Pp
[optional] defer (boolean)
.Bd -ragged -compact -offset "xxxx"
.It Op Ar defer Pq boolean
Valid only for destroying snapshots.
If set to true, and the snapshot has holds or clones, allows the snapshot to be
marked for deferred deletion rather than failing.
.Ed
.It Em zfs.sync.inherit(dataset, property)
.El
.It Fn zfs.sync.inherit dataset property
Clears the specified property in the given dataset, causing it to be inherited
from an ancestor, or restored to the default if no ancestor property is set.
The
.Ql zfs inherit -S
.Nm zfs Cm inherit Fl S
option has not been implemented.
Returns 0 on success, or a nonzero error code if the property could not be
cleared.
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "newbookmark (string)"
.It Ar dataset Pq string
Filesystem or snapshot containing the property to clear.
.Ed
.Pp
property (string)
.Bd -ragged -compact -offset "xxxx"
.It Ar property Pq string
The property to clear.
Allowed properties are the same as those for the
.Nm zfs Cm inherit
command.
.Ed
.It Em zfs.sync.promote(dataset)
.El
.It Fn zfs.sync.promote dataset
Promote the given clone to a filesystem.
Returns 0 on successful promotion, or a nonzero error code otherwise.
If EEXIST is returned, the second return value will be an array of the clone's
snapshots whose names collide with snapshots of the parent filesystem.
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "newbookmark (string)"
.It Ar dataset Pq string
Clone to be promoted.
.Ed
.It Em zfs.sync.rollback(filesystem)
.El
.It Fn zfs.sync.rollback filesystem
Rollback to the previous snapshot for a dataset.
Returns 0 on successful rollback, or a nonzero error code otherwise.
Rollbacks can be performed on filesystems or zvols, but not on snapshots
or mounted datasets.
EBUSY is returned in the case where the filesystem is mounted.
.Pp
filesystem (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "newbookmark (string)"
.It Ar filesystem Pq string
Filesystem to roll back.
.Ed
.It Em zfs.sync.set_prop(dataset, property, value)
.El
.It Fn zfs.sync.set_prop dataset property value
Sets the given property on a dataset.
Currently only user properties are supported.
Returns 0 if the property was set, or a nonzero error code otherwise.
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "newbookmark (string)"
.It Ar dataset Pq string
The dataset where the property will be set.
.Ed
.Pp
property (string)
.Bd -ragged -compact -offset "xxxx"
.It Ar property Pq string
The property to set.
Only user properties are supported.
.Ed
.Pp
value (string)
.Bd -ragged -compact -offset "xxxx"
.It Ar value Pq string
The value of the property to be set.
.Ed
.It Em zfs.sync.snapshot(dataset)
.El
.It Fn zfs.sync.snapshot dataset
Create a snapshot of a filesystem.
Returns 0 if the snapshot was successfully created,
and a nonzero error code otherwise.
@ -437,132 +419,142 @@ and a nonzero error code otherwise.
Note: Taking a snapshot will fail on any pool older than legacy version 27.
To enable taking snapshots from ZCP scripts, the pool must be upgraded.
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "newbookmark (string)"
.It Ar dataset Pq string
Name of snapshot to create.
.Ed
.It Em zfs.sync.bookmark(source, newbookmark)
.El
.It Fn zfs.sync.bookmark source newbookmark
Create a bookmark of an existing source snapshot or bookmark.
Returns 0 if the new bookmark was successfully created,
and a nonzero error code otherwise.
.Pp
Note: Bookmarking requires the corresponding pool feature to be enabled.
.Pp
source (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "newbookmark (string)"
.It Ar source Pq string
Full name of the existing snapshot or bookmark.
.Ed
.Pp
newbookmark (string)
.Bd -ragged -compact -offset "xxxx"
.It Ar newbookmark Pq string
Full name of the new bookmark.
.El
.Ed
.El
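.Pp
A minimal sketch (dataset name hypothetical) of invoking a sync function and surfacing its error code to the caller:
.Bd -literal -compact -offset indent
-- returns 0 on success, an errno-style code otherwise
local err = zfs.sync.snapshot("rpool/fs@backup")
if err ~= 0 then
    return "snapshot failed with error " .. err
end
.Ed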
.It Sy zfs.check submodule
For each function in the zfs.sync submodule, there is a corresponding zfs.check
For each function in the
.Sy zfs.sync
submodule, there is a corresponding
.Sy zfs.check
function which performs a "dry run" of the same operation.
Each takes the same arguments as its zfs.sync counterpart and returns 0 if the
operation would succeed, or a non-zero error code if it would fail, along with
any other error details.
Each takes the same arguments as its
.Sy zfs.sync
counterpart and returns 0 if the operation would succeed,
or a non-zero error code if it would fail, along with any other error details.
That is, each has the same behavior as the corresponding sync function except
for actually executing the requested change.
For example,
.Em zfs.check.destroy("fs")
.Fn zfs.check.destroy \&"fs"
returns 0 if
.Em zfs.sync.destroy("fs")
.Fn zfs.sync.destroy \&"fs"
would successfully destroy the dataset.
.Pp
The available zfs.check functions are:
.Bl -tag -width "xx"
.It Em zfs.check.destroy(dataset, [defer=true|false])
.It Em zfs.check.promote(dataset)
.It Em zfs.check.rollback(filesystem)
.It Em zfs.check.set_property(dataset, property, value)
.It Em zfs.check.snapshot(dataset)
The available
.Sy zfs.check
functions are:
.Bl -tag -compact -width "xx"
.It Sy zfs.check.destroy Ns Pq Ar dataset , Op Ar defer Ns = Ns Sy true Ns | Ns Sy false
.It Fn zfs.check.promote dataset
.It Fn zfs.check.rollback filesystem
.It Fn zfs.check.set_property dataset property value
.It Fn zfs.check.snapshot dataset
.El
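.Pp
Because each check function takes the same arguments as its sync counterpart, a common pattern (snapshot names hypothetical) is to dry-run every operation before committing any of them:
.Bd -literal -compact -offset indent
local to_destroy = {"rpool/fs@a", "rpool/fs@b"}
-- first pass: verify every destroy would succeed
for _, snap in ipairs(to_destroy) do
    assert(zfs.check.destroy(snap) == 0, "cannot destroy " .. snap)
end
-- second pass: actually destroy
for _, snap in ipairs(to_destroy) do
    zfs.sync.destroy(snap)
end
.Ed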
.It Sy zfs.list submodule
The zfs.list submodule provides functions for iterating over datasets and
properties.
Rather than returning tables, these functions act as Lua iterators, and are
generally used as follows:
.Bd -literal -offset indent
for child in zfs.list.children("rpool") do
.Bd -literal -compact -offset indent
.No for child in Fn zfs.list.children \&"rpool" No do
...
end
.Ed
.Pp
The available zfs.list functions are:
The available
.Sy zfs.list
functions are:
.Bl -tag -width "xx"
.It Em zfs.list.clones(snapshot)
.It Fn zfs.list.clones snapshot
Iterate through all clones of the given snapshot.
.Pp
snapshot (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "snapshot (string)"
.It Ar snapshot Pq string
Must be a valid snapshot path in the current pool.
.Ed
.It Em zfs.list.snapshots(dataset)
.El
.It Fn zfs.list.snapshots dataset
Iterate through all snapshots of the given dataset.
Each snapshot is returned as a string containing the full dataset name, e.g.
"pool/fs@snap".
Each snapshot is returned as a string containing the full dataset name,
e.g. "pool/fs@snap".
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "snapshot (string)"
.It Ar dataset Pq string
Must be a valid filesystem or volume.
.Ed
.It Em zfs.list.children(dataset)
.El
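.Pp
For instance (filesystem name hypothetical), the iterator can be used to count a dataset's snapshots:
.Bd -literal -compact -offset indent
local count = 0
for snap in zfs.list.snapshots("rpool/fs") do
    count = count + 1
end
return count
.Ed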
.It Fn zfs.list.children dataset
Iterate through all direct children of the given dataset.
Each child is returned as a string containing the full dataset name, e.g.
"pool/fs/child".
Each child is returned as a string containing the full dataset name,
e.g. "pool/fs/child".
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "snapshot (string)"
.It Ar dataset Pq string
Must be a valid filesystem or volume.
.Ed
.It Em zfs.list.bookmarks(dataset)
Iterate through all bookmarks of the given dataset. Each bookmark is returned
as a string containing the full dataset name, e.g. "pool/fs#bookmark".
.El
.It Fn zfs.list.bookmarks dataset
Iterate through all bookmarks of the given dataset.
Each bookmark is returned as a string containing the full dataset name,
e.g. "pool/fs#bookmark".
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "snapshot (string)"
.It Ar dataset Pq string
Must be a valid filesystem or volume.
.Ed
.It Em zfs.list.holds(snapshot)
Iterate through all user holds on the given snapshot. Each hold is returned
.El
.It Fn zfs.list.holds snapshot
Iterate through all user holds on the given snapshot.
Each hold is returned
as a pair of the hold's tag and the timestamp (in seconds since the epoch) at
which it was created.
.Pp
snapshot (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "snapshot (string)"
.It Ar snapshot Pq string
Must be a valid snapshot.
.Ed
.It Em zfs.list.properties(dataset)
.El
.It Fn zfs.list.properties dataset
An alias for zfs.list.user_properties (see relevant entry).
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "snapshot (string)"
.It Ar dataset Pq string
Must be a valid filesystem, snapshot, or volume.
.Ed
.It Em zfs.list.user_properties(dataset)
Iterate through all user properties for the given dataset. For each
step of the iteration, output the property name, its value, and its source.
.El
.It Fn zfs.list.user_properties dataset
Iterate through all user properties for the given dataset.
For each step of the iteration, output the property name, its value,
and its source.
Throws a Lua error if the dataset is invalid.
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "snapshot (string)"
.It Ar dataset Pq string
Must be a valid filesystem, snapshot, or volume.
.Ed
.It Em zfs.list.system_properties(dataset)
.El
.It Fn zfs.list.system_properties dataset
Returns an array of strings, the names of the valid system (non-user defined)
properties for the given dataset.
Throws a Lua error if the dataset is invalid.
.Pp
dataset (string)
.Bd -ragged -compact -offset "xxxx"
.Bl -tag -compact -width "snapshot (string)"
.It Ar dataset Pq string
Must be a valid filesystem, snapshot or volume.
.Ed
.El
.El
.El
.
.Sh EXAMPLES
.
.Ss Example 1
The following channel program recursively destroys a filesystem and all its
snapshots and children in a naive manner.
@ -579,6 +571,7 @@ function destroy_recursive(root)
end
destroy_recursive("pool/somefs")
.Ed
.
.Ss Example 2
A more verbose and robust version of the same channel program, which
properly detects and reports errors, and also takes the dataset to destroy
@ -617,6 +610,7 @@ results["succeeded"] = succeeded
results["failed"] = failed
return results
.Ed
.
.Ss Example 3
The following function performs a forced promote operation by attempting to
promote the given clone and destroying any conflicting snapshots.


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -30,68 +29,65 @@
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd June 30, 2019
.Dd May 27, 2021
.Dt ZFS-PROJECT 8
.Os
.
.Sh NAME
.Nm zfs-project
.Nd List, set, or clear project ID and/or inherit flag on the file(s) or directories.
.Nd manage projects in ZFS filesystem
.Sh SYNOPSIS
.Nm zfs
.Cm project
.Oo Fl d Ns | Ns Fl r Ns Oc
.Ar file Ns | Ns Ar directory Ns ...
.Ar file Ns | Ns Ar directory Ns …
.Nm zfs
.Cm project
.Fl C
.Oo Fl kr Ns Oc
.Ar file Ns | Ns Ar directory Ns ...
.Ar file Ns | Ns Ar directory Ns …
.Nm zfs
.Cm project
.Fl c
.Oo Fl 0 Ns Oc
.Oo Fl d Ns | Ns Fl r Ns Oc
.Op Fl p Ar id
.Ar file Ns | Ns Ar directory Ns ...
.Ar file Ns | Ns Ar directory Ns …
.Nm zfs
.Cm project
.Op Fl p Ar id
.Oo Fl rs Ns Oc
.Ar file Ns | Ns Ar directory Ns ...
.Ar file Ns | Ns Ar directory Ns …
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm project
.Oo Fl d Ns | Ns Fl r Ns Oc
.Ar file Ns | Ns Ar directory Ns ...
.Ar file Ns | Ns Ar directory Ns …
.Xc
List project identifier (ID) and inherit flag of file(s) or directories.
List project identifier (ID) and inherit flag of files and directories.
.Bl -tag -width "-d"
.It Fl d
Show the directory project ID and inherit flag, not its children. It will
overwrite the former specified
.Fl r
option.
Show the directory project ID and inherit flag, not its children.
.It Fl r
Show on subdirectories recursively. It will overwrite the former specified
.Fl d
option.
List subdirectories recursively.
.El
.It Xo
.Nm zfs
.Cm project
.Fl C
.Oo Fl kr Ns Oc
.Ar file Ns | Ns Ar directory Ns ...
.Ar file Ns | Ns Ar directory Ns …
.Xc
Clear project inherit flag and/or ID on the file(s) or directories.
Clear project inherit flag and/or ID on the files and directories.
.Bl -tag -width "-k"
.It Fl k
Keep the project ID unchanged. If not specified, the project ID will be reset
as zero.
Keep the project ID unchanged.
If not specified, the project ID will be reset to zero.
.It Fl r
Clear on subdirectories recursively.
Clear subdirectories' flags recursively.
.El
.It Xo
.Nm zfs
@ -100,54 +96,46 @@ Clear on subdirectories recursively.
.Oo Fl 0 Ns Oc
.Oo Fl d Ns | Ns Fl r Ns Oc
.Op Fl p Ar id
.Ar file Ns | Ns Ar directory Ns ...
.Ar file Ns | Ns Ar directory Ns …
.Xc
Check project ID and inherit flag on the file(s) or directories, report the
entries without project inherit flag or with different project IDs from the
specified (via
.Fl p
option) value or the target directory's project ID.
.Bl -tag -width "-0"
Check project ID and inherit flag on the files and directories:
report entries without the project inherit flag, or with project IDs different from the
target directory's project ID or the one specified with
.Fl p .
.Bl -tag -width "-p id"
.It Fl 0
Print file name with a trailing NUL instead of newline (by default), like
"find -print0".
Delimit filenames with a NUL byte instead of newline.
.It Fl d
Check the directory project ID and inherit flag, not its children. It will
overwrite the former specified
.Fl r
option.
.It Fl p
Specify the referenced ID for comparing with the target file(s) or directories'
project IDs. If not specified, the target (top) directory's project ID will be
used as the referenced one.
Check the directory project ID and inherit flag, not its children.
.It Fl p Ar id
Compare to
.Ar id
instead of the target files and directories' project IDs.
.It Fl r
Check on subdirectories recursively. It will overwrite the former specified
.Fl d
option.
Check subdirectories recursively.
.El
.It Xo
.Nm zfs
.Cm project
.Op Fl p Ar id
.Fl p Ar id
.Oo Fl rs Ns Oc
.Ar file Ns | Ns Ar directory Ns ...
.Ar file Ns | Ns Ar directory Ns …
.Xc
Set project ID and/or inherit flag on the file(s) or directories.
.Bl -tag -width "-p"
.It Fl p
Set the file(s)' or directories' project ID with the given value.
Set project ID and/or inherit flag on the files and directories.
.Bl -tag -width "-p id"
.It Fl p Ar id
Set the project ID to the given value.
.It Fl r
Set on subdirectories recursively.
.It Fl s
Set project inherit flag on the given file(s) or directories. It is usually used
for setup tree quota on the directory target with
.Fl r
option specified together. When setup tree quota, by default the directory's
project ID will be set to all its descendants unless you specify the project
ID via
.Fl p
option explicitly.
Set project inherit flag on the given files and directories.
This is usually used for setting up tree quotas with
.Fl r .
In that case, the directory's project ID
will be set for all its descendants, unless specified explicitly with
.Fl p .
.El
.El
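.Pp
For example (path and project ID hypothetical), to mark an entire directory tree as belonging to project 42, so that a project quota can then be applied to it:
.Dl # Nm zfs Cm project Fl p Ar 42 Fl rs Pa /tank/dir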
.
.Sh SEE ALSO
.Xr zfs-projectspace 8


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -33,37 +32,33 @@
.Dd June 30, 2019
.Dt ZFS-PROMOTE 8
.Os
.
.Sh NAME
.Nm zfs-promote
.Nd Promotes a clone file system to no longer be dependent on its origin snapshot.
.Nd promote clone dataset to no longer depend on origin snapshot
.Sh SYNOPSIS
.Nm zfs
.Cm promote
.Ar clone-filesystem
.Ar clone
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm promote
.Ar clone-filesystem
.Xc
The
.Cm promote
command makes it possible to destroy the file system that the clone was created
from.
.Nm zfs Cm promote
command makes it possible to destroy the dataset that the clone was created from.
The clone parent-child dependency relationship is reversed, so that the origin
file system becomes a clone of the specified file system.
dataset becomes a clone of the specified dataset.
.Pp
The snapshot that was cloned, and any snapshots previous to this snapshot, are
now owned by the promoted clone.
The space they use moves from the origin file system to the promoted clone, so
The space they use moves from the origin dataset to the promoted clone, so
enough space must be available to accommodate these snapshots.
No new space is consumed by this operation, but the space accounting is
adjusted.
The promoted clone must not have any conflicting snapshot names of its own.
The
.Xr zfs-rename 8
.Nm zfs Cm rename
subcommand can be used to rename any conflicting snapshots.
.El
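.Pp
A typical sequence (dataset names hypothetical): after cloning a snapshot and deciding to keep the clone, promote it so the original can be destroyed:
.Bd -literal -compact -offset indent
# zfs clone tank/fs@snap tank/clone
# zfs promote tank/clone
.Ed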
.
.Sh SEE ALSO
.Xr zfs-clone 8
.Xr zfs-clone 8 ,
.Xr zfs-rename 8


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -33,9 +32,10 @@
.Dd February 16, 2020
.Dt ZFS-RECEIVE 8
.Os
.
.Sh NAME
.Nm zfs-receive
.Nd Creates a snapshot whose contents are as specified in the stream provided on standard input.
.Nd create snapshot from backup stream
.Sh SYNOPSIS
.Nm zfs
.Cm receive
@ -56,6 +56,7 @@
.Cm receive
.Fl A
.Ar filesystem Ns | Ns Ar volume
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
@ -85,7 +86,7 @@ Streams are created using the
subcommand, which by default creates a full stream.
.Nm zfs Cm recv
can be used as an alias for
.Nm zfs Cm receive.
.Nm zfs Cm receive .
.Pp
If an incremental stream is received, then the destination file system must
already exist, and its most recent snapshot must match the incremental stream's
@ -116,15 +117,17 @@ If
or
.Fl x Em property
is specified, it applies to the effective value of the property throughout
the entire subtree of replicated datasets. Effective property values will be
set (
.Fl o
) or inherited (
.Fl x
) on the topmost in the replicated subtree. In descendant datasets, if the
the entire subtree of replicated datasets.
Effective property values will be set
.Pq Fl o
or inherited
.Pq Fl x
on the topmost in the replicated subtree.
In descendant datasets, if the
property is set by the send stream, it will be overridden by forcing the
property to be inherited from the topmost file system. Received properties
are retained in spite of being overridden and may be restored with
property to be inherited from the topmost file system.
Received properties are retained in spite of being overridden
and may be restored with
.Nm zfs Cm inherit Fl S .
Specifying
.Fl o Sy origin Ns = Ns Em snapshot
@ -134,41 +137,51 @@ is a read-only property and cannot be set, it's allowed to receive the send
stream as a clone of the given snapshot.
.Pp
Raw encrypted send streams (created with
.Nm zfs Cm send Fl w
) may only be received as is, and cannot be re-encrypted, decrypted, or
recompressed by the receive process. Unencrypted streams can be received as
.Nm zfs Cm send Fl w )
may only be received as is, and cannot be re-encrypted, decrypted, or
recompressed by the receive process.
Unencrypted streams can be received as
encrypted datasets, either through inheritance or by specifying encryption
parameters with the
.Fl o
options. Note that the
options.
Note that the
.Sy keylocation
property cannot be overridden to
.Sy prompt
during a receive. This is because the receive process itself is already using
stdin for the send stream. Instead, the property can be overridden after the
receive completes.
during a receive.
This is because the receive process itself is already using
the standard input for the send stream.
Instead, the property can be overridden after the receive completes.
.Pp
The added security provided by raw sends adds some restrictions to the send
and receive process. ZFS will not allow a mix of raw receives and non-raw
receives. Specifically, any raw incremental receives that are attempted after
a non-raw receive will fail. Non-raw receives do not have this restriction and,
therefore, are always possible. Because of this, it is best practice to always
and receive process.
ZFS will not allow a mix of raw receives and non-raw receives.
Specifically, any raw incremental receives that are attempted after
a non-raw receive will fail.
Non-raw receives do not have this restriction and,
therefore, are always possible.
Because of this, it is best practice to always
use either raw sends for their security benefits or non-raw sends for their
flexibility when working with encrypted datasets, but not a combination.
.Pp
The reason for this restriction stems from the inherent restrictions of the
AEAD ciphers that ZFS uses to encrypt data. When using ZFS native encryption,
AEAD ciphers that ZFS uses to encrypt data.
When using ZFS native encryption,
each block of data is encrypted against a randomly generated number known as
the "initialization vector" (IV), which is stored in the filesystem metadata.
This number is required by the encryption algorithms whenever the data is to
be decrypted. Together, all of the IVs provided for all of the blocks in a
given snapshot are collectively called an "IV set". When ZFS performs a raw
send, the IV set is transferred from the source to the destination in the send
stream. When ZFS performs a non-raw send, the data is decrypted by the source
be decrypted.
Together, all of the IVs provided for all of the blocks in a
given snapshot are collectively called an "IV set".
When ZFS performs a raw send, the IV set is transferred from the source
to the destination in the send stream.
When ZFS performs a non-raw send, the data is decrypted by the source
system and re-encrypted by the destination system, creating a snapshot with
effectively the same data, but a different IV set. In order for decryption to
work after a raw send, ZFS must ensure that the IV set used on both the source
and destination side match. When an incremental raw receive is performed on
effectively the same data, but a different IV set.
In order for decryption to work after a raw send, ZFS must ensure that
the IV set used on both the source and destination side match.
When an incremental raw receive is performed on
top of an existing snapshot, ZFS will check to confirm that the "from"
snapshot on both the source and destination were using the same IV set,
ensuring the new IV set is consistent.
@ -234,7 +247,8 @@ Discard all but the last element of the sent snapshot's file system name, using
that element to determine the name of the target file system for the new
snapshot as described in the paragraph above.
.It Fl h
Skip the receive of holds. There is no effect if holds are not sent.
Skip the receive of holds.
There is no effect if holds are not sent.
.It Fl M
Force an unmount of the file system while receiving a snapshot.
This option is not supported on Linux.
@ -254,7 +268,8 @@ performed.
.It Fl o Em property Ns = Ns Ar value
Sets the specified property as if the command
.Nm zfs Cm set Em property Ns = Ns Ar value
was invoked immediately before the receive. When receiving a stream from
was invoked immediately before the receive.
When receiving a stream from
.Nm zfs Cm send Fl R ,
causes the property to be inherited by all descendant datasets, as though
.Nm zfs Cm inherit Em property
@ -267,11 +282,13 @@ then overriding the
.Sy compression
property will have no effect on received data but the
.Sy compression
property will be set. To have the data recompressed on receive remove the
property will be set.
To have the data recompressed on receive, remove the
.Fl c
flag from the send stream.
.Pp
Any editable property can be set at receive time. Set-once properties bound
Any editable property can be set at receive time.
Set-once properties bound
to the received data, such as
.Sy normalization
and
@ -286,8 +303,8 @@ cannot be set at receive time.
.Pp
The
.Fl o
option may be specified multiple times, for different properties. An error
results if the same property is specified in multiple
option may be specified multiple times, for different properties.
An error results if the same property is specified in multiple
.Fl o
or
.Fl x
@ -295,30 +312,27 @@ options.
.Pp
The
.Fl o
option may also be used to override encryption properties upon initial
receive. This allows unencrypted streams to be received as encrypted datasets.
option may also be used to override encryption properties upon initial receive.
This allows unencrypted streams to be received as encrypted datasets.
To cause the received dataset (or root dataset of a recursive stream) to be
received as an encryption root, specify encryption properties in the same
manner as is required for
.Nm zfs
.Cm create .
.Nm zfs Cm create .
For instance:
.Bd -literal
# zfs send tank/test@snap1 | zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///path/to/keyfile
.Ed
.Dl # Nm zfs Cm send Pa tank/test@snap1 | Nm zfs Cm recv Fl o Sy encryption Ns = Ns Sy on Fl o Sy keyformat Ns = Ns Sy passphrase Fl o Sy keylocation Ns = Ns Pa file:///path/to/keyfile
.Pp
Note that
.Op Fl o Ar keylocation Ns = Ns Ar prompt
may not be specified here, since stdin is already being utilized for the send
stream. Once the receive has completed, you can use
.Nm zfs
.Cm set
to change this setting after the fact. Similarly, you can receive a dataset as
an encrypted child by specifying
.Fl o Sy keylocation Ns = Ns Sy prompt
may not be specified here, since the standard input
is already being utilized for the send stream.
Once the receive has completed, you can use
.Nm zfs Cm set
to change this setting after the fact.
Similarly, you can receive a dataset as an encrypted child by specifying
.Op Fl x Ar encryption
to force the property to be inherited. Overriding encryption properties (except
for
.Sy keylocation Ns )
to force the property to be inherited.
Overriding encryption properties (except for
.Sy keylocation )
is not possible with raw send streams.
.It Fl s
If the receive is interrupted, save the partially received state, rather
@ -380,6 +394,7 @@ Abort an interrupted
.Nm zfs Cm receive Fl s ,
deleting its saved partially received state.
.El
.
.Sh SEE ALSO
.Xr zfs-send 8
.Xr zfs-send 8 ,
.Xr zstream 8


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -33,9 +32,10 @@
.Dd September 1, 2020
.Dt ZFS-RENAME 8
.Os
.
.Sh NAME
.Nm zfs-rename
.Nd Renames the given dataset (filesystem or snapshot).
.Nd rename ZFS dataset
.Sh SYNOPSIS
.Nm zfs
.Cm rename
@ -57,6 +57,7 @@
.Cm rename
.Fl r
.Ar snapshot Ar snapshot
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -30,25 +29,20 @@
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd June 30, 2019
.Dd May 27, 2021
.Dt ZFS-ROLLBACK 8
.Os
.
.Sh NAME
.Nm zfs-rollback
.Nd Roll back the given dataset to a previous snapshot.
.Nd roll ZFS dataset back to snapshot
.Sh SYNOPSIS
.Nm zfs
.Cm rollback
.Op Fl Rfr
.Ar snapshot
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm rollback
.Op Fl Rfr
.Ar snapshot
.Xc
When a dataset is rolled back, all data that has changed since the snapshot is
discarded, and the dataset reverts to the state at the time of the snapshot.
By default, the command refuses to roll back to a snapshot other than the most
@ -63,7 +57,7 @@ The
options do not recursively destroy the child snapshots of a recursive snapshot.
Only direct snapshots of the specified filesystem are destroyed by either of
these options.
To completely roll back a recursive snapshot, you must rollback the individual
To completely roll back a recursive snapshot, you must roll back the individual
child snapshots.
.Bl -tag -width "-R"
.It Fl R
@ -76,6 +70,6 @@ option to force an unmount of any clone file systems that are to be destroyed.
.It Fl r
Destroy any snapshots and bookmarks more recent than the one specified.
.El
.El
.
.Sh SEE ALSO
.Xr zfs-snapshot 8


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -33,9 +32,10 @@
.Dd April 15, 2021
.Dt ZFS-SEND 8
.Os
.
.Sh NAME
.Nm zfs-send
.Nd Generate a send stream, which may be of a filesystem, and may be incremental from a bookmark.
.Nd generate backup stream of ZFS dataset
.Sh SYNOPSIS
.Nm zfs
.Cm send
@ -51,7 +51,6 @@
.Cm send
.Fl -redact Ar redaction_bookmark
.Op Fl DLPcenpv
.br
.Op Fl i Ar snapshot Ns | Ns Ar bookmark
.Ar snapshot
.Nm zfs
@ -66,7 +65,8 @@
.Nm zfs
.Cm redact
.Ar snapshot redaction_bookmark
.Ar redaction_snapshot Ns ...
.Ar redaction_snapshot Ns …
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
@ -85,7 +85,7 @@ The output can be redirected to a file or to a different system
.Pc .
By default, a full stream is generated.
.Bl -tag -width "-D"
.It Fl D, -dedup
.It Fl D , -dedup
Deduplicated send is no longer supported.
This flag is accepted for backwards compatibility, but a regular,
non-deduplicated stream will be generated.
@ -99,7 +99,7 @@ is similar to
The incremental source may be specified as with the
.Fl i
option.
.It Fl L, -large-block
.It Fl L , -large-block
Generate a stream which may contain blocks larger than 128KB.
This flag has no effect if the
.Sy large_blocks
@ -114,9 +114,9 @@ See
for details on ZFS feature flags and the
.Sy large_blocks
feature.
.It Fl P, -parsable
.It Fl P , -parsable
Print machine-parsable verbose information about the stream package generated.
.It Fl R, -replicate
.It Fl R , -replicate
Generate a replication stream package, which will replicate the specified
file system, and all descendent file systems, up to the named snapshot.
When received, all properties, snapshots, descendent file systems, and clones
@ -134,12 +134,13 @@ set when the stream is received.
If the
.Fl F
flag is specified when this stream is received, snapshots and file systems that
do not exist on the sending side are destroyed. If the
do not exist on the sending side are destroyed.
If the
.Fl R
flag is used to send encrypted datasets, then
.Fl w
must also be specified.
.It Fl e, -embed
.It Fl e , -embed
Generate a more compact stream by using
.Sy WRITE_EMBEDDED
records for blocks which are stored more compactly on disk by the
@ -154,7 +155,8 @@ feature enabled.
If the
.Sy lz4_compress
feature is active on the sending system, then the receiving system must have
that feature enabled as well. Datasets that are sent with this flag may not be
that feature enabled as well.
Datasets that are sent with this flag may not be
received as an encrypted dataset, since encrypted datasets cannot use the
.Sy embedded_data
feature.
@ -163,15 +165,15 @@ See
for details on ZFS feature flags and the
.Sy embedded_data
feature.
.It Fl b, -backup
.It Fl b , -backup
Sends only received property values whether or not they are overridden by local
settings, but only if the dataset has ever been received. Use this option when
you want
settings, but only if the dataset has ever been received.
Use this option when you want
.Nm zfs Cm receive
to restore received properties backed up on the sent dataset and to avoid
sending local settings that may have nothing to do with the source dataset,
but only with how the data is backed up.
.It Fl c, -compressed
.It Fl c , -compressed
Generate a more compact stream by using compressed WRITE records for blocks
which are compressed on disk and in memory
.Po see the
@ -189,34 +191,36 @@ feature is enabled on the sending system but the
option is not supplied in conjunction with
.Fl c ,
then the data will be decompressed before sending so it can be split into
smaller block sizes. Streams sent with
smaller block sizes.
Streams sent with
.Fl c
will not have their data recompressed on the receiver side using
.Fl o compress=value.
The data will stay compressed as it was from the sender. The new compression
property will be set for future data.
.It Fl w, -raw
For encrypted datasets, send data exactly as it exists on disk. This allows
backups to be taken even if encryption keys are not currently loaded. The
backup may then be received on an untrusted machine since that machine will
.Fl o Sy compress Ns = Ns Ar value .
The data will stay compressed as it was from the sender.
The new compression property will be set for future data.
.It Fl w , -raw
For encrypted datasets, send data exactly as it exists on disk.
This allows backups to be taken even if encryption keys are not currently loaded.
The backup may then be received on an untrusted machine since that machine will
not have the encryption keys to read the protected data or alter it without
being detected. Upon being received, the dataset will have the same encryption
being detected.
Upon being received, the dataset will have the same encryption
keys as it did on the send side, although the
.Sy keylocation
property will be defaulted to
.Sy prompt
if not otherwise provided. For unencrypted datasets, this flag will be
equivalent to
if not otherwise provided.
For unencrypted datasets, this flag will be equivalent to
.Fl Lec .
Note that if you do not use this flag for sending encrypted datasets, data will
be sent unencrypted and may be re-encrypted with a different encryption key on
the receiving system, which will disable the ability to do a raw send to that
system for incrementals.
.It Fl h, -holds
.It Fl h , -holds
Generate a stream package that includes any snapshot holds (created with the
.Sy zfs hold
.Nm zfs Cm hold
command), and indicating to
.Sy zfs receive
.Nm zfs Cm receive
that the holds be applied to the dataset on the receiving system.
.It Fl i Ar snapshot
Generate an incremental stream from the first
@ -240,7 +244,7 @@ be fully specified
not just
.Em @origin
.Pc .
.It Fl n, -dryrun
.It Fl n , -dryrun
Do a dry-run
.Pq Qq No-op
send.
@ -254,22 +258,24 @@ In this case, the verbose output will be written to standard output
.Po contrast with a non-dry-run, where the stream is written to standard output
and the verbose output goes to standard error
.Pc .
.It Fl p, -props
.It Fl p , -props
Include the dataset's properties in the stream.
This flag is implicit when
.Fl R
is specified.
The receiving system must also support this feature. Sends of encrypted datasets
must use
The receiving system must also support this feature.
Sends of encrypted datasets must use
.Fl w
when using this flag.
.It Fl s, -skip-missing
.It Fl s , -skip-missing
Allows sending a replication stream even when there are snapshots missing in the
hierarchy. When a snapshot is missing, instead of throwing an error and aborting
the send, a warning is printed to STDERR and the dataset to which it belongs
and its descendents are skipped. This flag can only be used in conjunction with
hierarchy.
When a snapshot is missing, instead of throwing an error and aborting the send,
a warning is printed to the standard error stream and the dataset to which it belongs
and its descendents are skipped.
This flag can only be used in conjunction with
.Fl R .
.It Fl v, -verbose
.It Fl v , -verbose
Print verbose information about the stream package generated.
This information includes a per-second report of how much data has been sent.
.Pp
@ -291,7 +297,7 @@ When the stream generated from a filesystem or volume is received, the default
snapshot name will be
.Qq --head-- .
.Bl -tag -width "-L"
.It Fl L, -large-block
.It Fl L , -large-block
Generate a stream which may contain blocks larger than 128KB.
This flag has no effect if the
.Sy large_blocks
@ -306,9 +312,9 @@ See
for details on ZFS feature flags and the
.Sy large_blocks
feature.
.It Fl P, -parsable
.It Fl P , -parsable
Print machine-parsable verbose information about the stream package generated.
.It Fl c, -compressed
.It Fl c , -compressed
Generate a more compact stream by using compressed WRITE records for blocks
which are compressed on disk and in memory
.Po see the
@ -327,24 +333,25 @@ option is not supplied in conjunction with
.Fl c ,
then the data will be decompressed before sending so it can be split into
smaller block sizes.
.It Fl w, -raw
For encrypted datasets, send data exactly as it exists on disk. This allows
backups to be taken even if encryption keys are not currently loaded. The
backup may then be received on an untrusted machine since that machine will
.It Fl w , -raw
For encrypted datasets, send data exactly as it exists on disk.
This allows backups to be taken even if encryption keys are not currently loaded.
The backup may then be received on an untrusted machine since that machine will
not have the encryption keys to read the protected data or alter it without
being detected. Upon being received, the dataset will have the same encryption
being detected.
Upon being received, the dataset will have the same encryption
keys as it did on the send side, although the
.Sy keylocation
property will be defaulted to
.Sy prompt
if not otherwise provided. For unencrypted datasets, this flag will be
equivalent to
if not otherwise provided.
For unencrypted datasets, this flag will be equivalent to
.Fl Lec .
Note that if you do not use this flag for sending encrypted datasets, data will
be sent unencrypted and may be re-encrypted with a different encryption key on
the receiving system, which will disable the ability to do a raw send to that
system for incrementals.
.It Fl e, -embed
.It Fl e , -embed
Generate a more compact stream by using
.Sy WRITE_EMBEDDED
records for blocks which are stored more compactly on disk by the
@ -359,8 +366,9 @@ feature enabled.
If the
.Sy lz4_compress
feature is active on the sending system, then the receiving system must have
that feature enabled as well. Datasets that are sent with this flag may not be
received as an encrypted dataset, since encrypted datasets cannot use the
that feature enabled as well.
Datasets that are sent with this flag may not be received as an encrypted dataset,
since encrypted datasets cannot use the
.Sy embedded_data
feature.
See
@ -383,7 +391,7 @@ character and following
If the incremental target is a clone, the incremental source can be the origin
snapshot, or an earlier snapshot in the origin's filesystem, or the origin's
origin, etc.
.It Fl n, -dryrun
.It Fl n , -dryrun
Do a dry-run
.Pq Qq No-op
send.
@ -397,7 +405,7 @@ In this case, the verbose output will be written to standard output
.Po contrast with a non-dry-run, where the stream is written to standard output
and the verbose output goes to standard error
.Pc .
.It Fl v, -verbose
.It Fl v , -verbose
Print verbose information about the stream package generated.
This information includes a per-second report of how much data has been sent.
.El
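The flags above compose into ordinary pipelines.
A minimal sketch of a full send followed by an incremental one — pool,
dataset, and host names here are hypothetical placeholders, not part of the
manual:

```shell
# Full send of the first snapshot to a remote pool.
zfs snapshot tank/data@monday
zfs send -v tank/data@monday | ssh backuphost zfs receive pool/backup/data

# Later: -i sends only the blocks changed between the two snapshots;
# -c keeps blocks that are compressed on disk compressed in the stream.
zfs snapshot tank/data@tuesday
zfs send -c -i tank/data@monday tank/data@tuesday |
    ssh backuphost zfs receive pool/backup/data
```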
@ -406,7 +414,6 @@ This information includes a per-second report of how much data has been sent.
.Cm send
.Fl -redact Ar redaction_bookmark
.Op Fl DLPcenpv
.br
.Op Fl i Ar snapshot Ns | Ns Ar bookmark
.Ar snapshot
.Xc
@ -415,15 +422,15 @@ This send stream contains all blocks from the snapshot being sent that aren't
included in the redaction list contained in the bookmark specified by the
.Fl -redact
(or
.Fl -d
) flag.
.Fl d )
flag.
The resulting send stream is said to be redacted with respect to the snapshots
the bookmark specified by the
.Fl -redact No flag was created with.
The bookmark must have been created by running
.Sy zfs redact
.Nm zfs Cm redact
on the snapshot being sent.
.sp
.Pp
This feature can be used to allow clones of a filesystem to be made available on
a remote system, in the case where their parent need not (or needs to not) be
usable.
@ -439,21 +446,23 @@ parent, that block will not be sent; but if one or more snapshots have not
modified a block in the parent, they will still reference the parent's block, so
that block will be sent.
Note that only user data will be redacted.
.sp
.Pp
When the redacted send stream is received, we will generate a redacted
snapshot.
Due to the nature of redaction, a redacted dataset can only be used in the
following ways:
.sp
1. To receive, as a clone, an incremental send from the original snapshot to one
.Bl -enum -width "a."
.It
To receive, as a clone, an incremental send from the original snapshot to one
of the snapshots it was redacted with respect to.
In this case, the stream will produce a valid dataset when received because all
blocks that were redacted in the parent are guaranteed to be present in the
child's send stream.
This use case will produce a normal snapshot, which can be used just like other
snapshots.
.sp
2. To receive an incremental send from the original snapshot to something
.
.It
To receive an incremental send from the original snapshot to something
redacted with respect to a subset of the set of snapshots the initial snapshot
was redacted with respect to.
In this case, each block that was redacted in the original is still redacted
@ -461,8 +470,8 @@ In this case, each block that was redacted in the original is still redacted
(because the snapshots define what is permitted, and everything else is
redacted)).
This use case will produce a new redacted snapshot.
.sp
3. To receive an incremental send from a redaction bookmark of the original
.It
To receive an incremental send from a redaction bookmark of the original
snapshot that was created when redacting with respect to a subset of the set of
snapshots the initial snapshot was created with respect to,
to anything else.
@ -471,27 +480,30 @@ necessary to fill in any redacted data, should it be needed, because the sending
system is aware of what blocks were originally redacted.
This will either produce a normal snapshot or a redacted one, depending on
whether the new send stream is redacted.
.sp
4. To receive an incremental send from a redacted version of the initial
.It
To receive an incremental send from a redacted version of the initial
snapshot that is redacted with respect to a subset of the set of snapshots the
initial snapshot was created with respect to.
A send stream from a compatible redacted dataset will contain all of the blocks
necessary to fill in any redacted data.
This will either produce a normal snapshot or a redacted one, depending on
whether the new send stream is redacted.
.sp
5. To receive a full send as a clone of the redacted snapshot.
.It
To receive a full send as a clone of the redacted snapshot.
Since the stream is a full send, it definitionally contains all the data needed
to create a new dataset.
This use case will either produce a normal snapshot or a redacted one, depending
on whether the full send stream was redacted.
.sp
These restrictions are detected and enforced by \fBzfs receive\fR; a
redacted send stream will contain the list of snapshots that the stream is
.El
.Pp
These restrictions are detected and enforced by
.Nm zfs Cm receive ;
a redacted send stream will contain the list of snapshots that the stream is
redacted with respect to.
These are stored with the redacted snapshot, and are used to detect and
correctly handle the cases above. Note that for technical reasons, raw sends
and redacted sends cannot be combined at this time.
correctly handle the cases above.
Note that for technical reasons,
raw sends and redacted sends cannot be combined at this time.
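A redacted send is just an ordinary send with the bookmark supplied; a sketch
with hypothetical dataset, bookmark, and host names:

```shell
# "book1" must previously have been created with zfs redact on tank/ds@snap.
# The stream omits the blocks listed in the bookmark, replacing them with
# REDACT records; the receive produces a redacted (unmountable) dataset.
zfs send --redact book1 tank/ds@snap |
    ssh untrusted-host zfs receive pool/redacted-ds
```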
.It Xo
.Nm zfs
.Cm send
@ -505,7 +517,7 @@ The
is the value of this property on the filesystem or volume that was being
received into.
See the documentation for
.Sy zfs receive -s
.Nm zfs Cm receive Fl s
for more details.
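A sketch of resuming an interrupted send, assuming hypothetical dataset and
host names:

```shell
# The partially received side exposes the token as a dataset property:
TOKEN=$(ssh backuphost zfs get -H -o value receive_resume_token pool/backup/data)

# Feeding it to zfs send -t on the sending side regenerates the stream
# from where the previous transfer stopped; -s on receive keeps the
# partial state resumable if this attempt is interrupted too.
zfs send -t "$TOKEN" | ssh backuphost zfs receive -s pool/backup/data
```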
.It Xo
.Nm zfs
@ -517,18 +529,19 @@ for more details.
.Xc
Generate a send stream from a dataset that has been partially received.
.Bl -tag -width "-L"
.It Fl S, -saved
.It Fl S , -saved
This flag requires that the specified filesystem previously received a resumable
send that did not finish and was interrupted. In such scenarios this flag
enables the user to send this partially received state. Using this flag will
always use the last fully received snapshot as the incremental source if it
exists.
send that did not finish and was interrupted.
In such scenarios this flag
enables the user to send this partially received state.
Using this flag will always use the last fully received snapshot
as the incremental source if it exists.
.El
.It Xo
.Nm zfs
.Cm redact
.Ar snapshot redaction_bookmark
.Ar redaction_snapshot Ns ...
.Ar redaction_snapshot Ns …
.Xc
Generate a new redaction bookmark.
In addition to the typical bookmark information, a redaction bookmark contains
@ -538,81 +551,96 @@ of the redaction snapshots.
These blocks are found by iterating over the metadata in each redaction snapshot
to determine what has been changed since the target snapshot.
Redaction is designed to support redacted zfs sends; see the entry for
.Sy zfs send
.Nm zfs Cm send
for more information on the purpose of this operation.
If a redact operation fails partway through (due to an error or a system
failure), the redaction can be resumed by rerunning the same command.
.El
.Ss Redaction
ZFS has support for a limited version of data subsetting, in the form of
redaction. Using the
.Sy zfs redact
redaction.
Using the
.Nm zfs Cm redact
command, a
.Sy redaction bookmark
can be created that stores a list of blocks containing sensitive information. When
provided to
.Sy zfs
.Sy send ,
can be created that stores a list of blocks containing sensitive information.
When provided to
.Nm zfs Cm send ,
this causes a
.Sy redacted send
to occur. Redacted sends omit the blocks containing sensitive information,
replacing them with REDACT records. When these send streams are received, a
to occur.
Redacted sends omit the blocks containing sensitive information,
replacing them with REDACT records.
When these send streams are received, a
.Sy redacted dataset
is created. A redacted dataset cannot be mounted by default, since it is
incomplete. It can be used to receive other send streams. In this way datasets
can be used for data backup and replication, with all the benefits that zfs send
and receive have to offer, while protecting sensitive information from being
is created.
A redacted dataset cannot be mounted by default, since it is incomplete.
It can be used to receive other send streams.
In this way datasets can be used for data backup and replication,
with all the benefits that zfs send and receive have to offer,
while protecting sensitive information from being
stored on less-trusted machines or services.
.Pp
For the purposes of redaction, there are two steps to the process. A redact
step, and a send/receive step. First, a redaction bookmark is created. This is
done by providing the
.Sy zfs redact
For the purposes of redaction, there are two steps to the process.
A redact step, and a send/receive step.
First, a redaction bookmark is created.
This is done by providing the
.Nm zfs Cm redact
command with a parent snapshot, a bookmark to be created, and a number of
redaction snapshots. These redaction snapshots must be descendants of the
parent snapshot, and they should modify data that is considered sensitive in
some way. Any blocks of data modified by all of the redaction snapshots will
redaction snapshots.
These redaction snapshots must be descendants of the parent snapshot,
and they should modify data that is considered sensitive in some way.
Any blocks of data modified by all of the redaction snapshots will
be listed in the redaction bookmark, because it represents the truly sensitive
information. When it comes to the send step, the send process will not send
information.
When it comes to the send step, the send process will not send
the blocks listed in the redaction bookmark, instead replacing them with
REDACT records. When received on the target system, this will create a
REDACT records.
When received on the target system, this will create a
redacted dataset, missing the data that corresponds to the blocks in the
redaction bookmark on the sending system. The incremental send streams from
redaction bookmark on the sending system.
The incremental send streams from
the original parent to the redaction snapshots can then also be received on
the target system, and this will produce a complete snapshot that can be used
normally. Incrementals from one snapshot on the parent filesystem and another
normally.
Incrementals from one snapshot on the parent filesystem and another
can also be done by sending from the redaction bookmark, rather than the
snapshots themselves.
.Pp
In order to make the purpose of the feature more clear, an example is
provided. Consider a zfs filesystem containing four files. These files
represent information for an online shopping service. One file contains a list
of usernames and passwords, another contains purchase histories, a third
contains click tracking data, and a fourth contains user preferences. The
owner of this data wants to make it available for their development teams to
test against, and their market research teams to do analysis on. The
development teams need information about user preferences and the click
In order to make the purpose of the feature more clear, an example is provided.
Consider a zfs filesystem containing four files.
These files represent information for an online shopping service.
One file contains a list of usernames and passwords, another contains purchase histories,
a third contains click tracking data, and a fourth contains user preferences.
The owner of this data wants to make it available for their development teams to
test against, and their market research teams to do analysis on.
The development teams need information about user preferences and the click
tracking data, while the market research teams need information about purchase
histories and user preferences. Neither needs access to the usernames and
passwords. However, because all of this data is stored in one ZFS filesystem,
it must all be sent and received together. In addition, the owner of the data
histories and user preferences.
Neither needs access to the usernames and passwords.
However, because all of this data is stored in one ZFS filesystem,
it must all be sent and received together.
In addition, the owner of the data
wants to take advantage of features like compression, checksumming, and
snapshots, so they do want to continue to use ZFS to store and transmit their
data. Redaction can help them do so. First, they would make two clones of a
snapshot of the data on the source. In one clone, they create the setup they
want their market research team to see; they delete the usernames and
passwords file, and overwrite the click tracking data with dummy
information. In another, they create the setup they want the development teams
snapshots, so they do want to continue to use ZFS to store and transmit their data.
Redaction can help them do so.
First, they would make two clones of a snapshot of the data on the source.
In one clone, they create the setup they want their market research team to see;
they delete the usernames and passwords file,
and overwrite the click tracking data with dummy information.
In another, they create the setup they want the development teams
to see, by replacing the passwords with fake information and replacing the
purchase histories with randomly generated ones. They would then create a
redaction bookmark on the parent snapshot, using snapshots on the two clones
as redaction snapshots. The parent can then be sent, redacted, to the target
server where the research and development teams have access. Finally,
incremental sends from the parent snapshot to each of the clones can be send
purchase histories with randomly generated ones.
They would then create a redaction bookmark on the parent snapshot,
using snapshots on the two clones as redaction snapshots.
The parent can then be sent, redacted, to the target
server where the research and development teams have access.
Finally, incremental sends from the parent snapshot to each of the clones can be sent
to and received on the target server; these snapshots are identical to the
ones on the source, and are ready to be used, while the parent snapshot on the
target contains none of the username and password data present on the source,
because it was removed by the redacted send operation.
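The scenario above can be sketched as a command sequence; every dataset and
host name below is a placeholder for this example only, and the clone
sanitisation steps are left as comments:

```shell
zfs snapshot tank/shop@parent

# One sanitized clone per team.
zfs clone tank/shop@parent tank/shop-research
# ... delete the passwords file, overwrite click data in tank/shop-research ...
zfs snapshot tank/shop-research@clean

zfs clone tank/shop@parent tank/shop-dev
# ... replace passwords and purchase histories in tank/shop-dev ...
zfs snapshot tank/shop-dev@clean

# Blocks modified by *all* redaction snapshots (the truly sensitive data)
# are recorded in the redaction bookmark.
zfs redact tank/shop@parent book tank/shop-research@clean tank/shop-dev@clean

# Redacted parent first, then the team-specific increments.
zfs send --redact book tank/shop@parent | ssh target zfs receive pool/shop
zfs send -i tank/shop@parent tank/shop-research@clean |
    ssh target zfs receive pool/shop-research
zfs send -i tank/shop@parent tank/shop-dev@clean |
    ssh target zfs receive pool/shop-dev
```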
.
.Sh SEE ALSO
.Xr zfs-bookmark 8 ,
.Xr zfs-receive 8 ,
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -30,37 +29,39 @@
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd June 30, 2019
.Dd June 2, 2021
.Dt ZFS-SET 8
.Os
.
.Sh NAME
.Nm zfs-set
.Nd Sets the property or list of properties to the given value(s) for each dataset.
.Nd set properties on ZFS datasets
.Sh SYNOPSIS
.Nm zfs
.Cm set
.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns ...
.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns ...
.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns …
.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
.Nm zfs
.Cm get
.Op Fl r Ns | Ns Fl d Ar depth
.Op Fl Hp
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns ... Oc
.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns ... Oc
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns ... Oc
.Cm all | Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns ...
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns … Oc
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
.Cm all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns …
.Nm zfs
.Cm inherit
.Op Fl rS
.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns ...
.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm set
.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns ...
.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns ...
.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns …
.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
.Xc
Only some properties can be edited.
See
@ -83,39 +84,43 @@ section of
.Cm get
.Op Fl r Ns | Ns Fl d Ar depth
.Op Fl Hp
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns ... Oc
.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns ... Oc
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns ... Oc
.Cm all | Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns ...
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns … Oc
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
.Cm all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns …
.Xc
Displays properties for the given datasets.
If no datasets are specified, then the command displays properties for all
datasets on the system.
For each property, the following columns are displayed:
.Bd -literal
name Dataset name
property Property name
value Property value
source Property source \fBlocal\fP, \fBdefault\fP, \fBinherited\fP,
\fBtemporary\fP, \fBreceived\fP or none (\fB-\fP).
.Ed
.Bl -tag -compact -offset 4n -width "property"
.It Sy name
Dataset name
.It Sy property
Property name
.It Sy value
Property value
.It Sy source
Property source
.Sy local , default , inherited , temporary , received , No or Sy - Pq none .
.El
.Pp
All columns are displayed by default, though this can be controlled by using the
.Fl o
option.
This command takes a comma-separated list of properties as described in the
.Em Native Properties
.Sx Native Properties
and
.Em User Properties
.Sx User Properties
sections of
.Xr zfsprops 8 .
.Pp
The value
.Sy all
can be used to display all properties that apply to the given dataset's type
.Pq filesystem, volume, snapshot, or bookmark .
.Bl -tag -width "-H"
.Pq Sy filesystem , volume , snapshot , No or Sy bookmark .
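Two illustrative invocations, with a hypothetical dataset name:

```shell
# All properties that apply to one filesystem, human-readable:
zfs get all tank/home

# Script-friendly: no header, tab-separated, exact numbers,
# restricted to locally-set values of two properties:
zfs get -H -p -s local -o name,property,value compression,quota tank/home
```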
.Bl -tag -width "-s source"
.It Fl H
Display output in a form more easily parsed by scripts.
Any headers are omitted, and fields are explicitly separated by a single tab
@ -127,9 +132,8 @@ A depth of
.Sy 1
will display only the dataset and its direct children.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
A comma-separated list of columns to display, defaults to
.Sy name , Ns Sy property , Ns Sy value , Ns Sy source .
.It Fl p
Display numbers in parsable
.Pq exact
@ -140,30 +144,19 @@ Recursively display properties for any children.
A comma-separated list of sources to display.
Those properties coming from a source other than those in this list are ignored.
Each source must be one of the following:
.Sy local ,
.Sy default ,
.Sy inherited ,
.Sy temporary ,
.Sy received ,
and
.Sy none .
.Sy local , default , inherited , temporary , received , No or Sy none .
The default value is all sources.
.It Fl t Ar type
A comma-separated list of types to display, where
.Ar type
is one of
.Sy filesystem ,
.Sy snapshot ,
.Sy volume ,
.Sy bookmark ,
or
.Sy all .
.Sy filesystem , snapshot , volume , bookmark , No or Sy all .
.El
.It Xo
.Nm zfs
.Cm inherit
.Op Fl rS
.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns ...
.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
.Xc
Clears the specified property, causing it to be inherited from an ancestor,
restored to default if no ancestor has the property set, or with the
@ -183,6 +176,7 @@ if the
option was not specified.
.El
.El
.
.Sh SEE ALSO
.Xr zfs-list 8 ,
.Xr zfsprops 8
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -33,22 +32,24 @@
.Dd June 30, 2019
.Dt ZFS-SHARE 8
.Os
.
.Sh NAME
.Nm zfs-share
.Nd Shares and unshares available ZFS filesystems.
.Nd share and unshare ZFS filesystems
.Sh SYNOPSIS
.Nm zfs
.Cm share
.Fl a | Ar filesystem
.Fl a Ns | Ns Ar filesystem
.Nm zfs
.Cm unshare
.Fl a | Ar filesystem Ns | Ns Ar mountpoint
.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm share
.Fl a | Ar filesystem
.Fl a Ns | Ns Ar filesystem
.Xc
Shares available ZFS file systems.
.Bl -tag -width "-a"
@ -70,7 +71,7 @@ property is set.
.It Xo
.Nm zfs
.Cm unshare
.Fl a | Ar filesystem Ns | Ns Ar mountpoint
.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint
.Xc
Unshares currently shared ZFS file systems.
.Bl -tag -width "-a"
@ -82,6 +83,7 @@ Unshare the specified filesystem.
The command can also be given a path to a ZFS file system shared on the system.
.El
.El
.
.Sh SEE ALSO
.Xr exports 5 ,
.Xr smb.conf 5 ,
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -30,48 +29,42 @@
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd June 30, 2019
.Dd May 27, 2021
.Dt ZFS-SNAPSHOT 8
.Os
.
.Sh NAME
.Nm zfs-snapshot
.Nd Creates snapshots with the given names.
.Nd create snapshots of ZFS datasets
.Sh SYNOPSIS
.Nm zfs
.Cm snapshot
.Op Fl r
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Ar filesystem Ns @ Ns Ar snapname Ns | Ns Ar volume Ns @ Ns Ar snapname Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Ar dataset Ns @ Ns Ar snapname Ns …
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm snapshot
.Op Fl r
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Ar filesystem Ns @ Ns Ar snapname Ns | Ns Ar volume Ns @ Ns Ar snapname Ns ...
.Xc
All previous modifications by successful system calls to the file system are
part of the snapshots.
Snapshots are taken atomically, so that all snapshots correspond to the same
moment in time.
.Nm zfs Cm snap
can be used as an alias for
.Nm zfs Cm snapshot.
.Nm zfs Cm snapshot .
See the
.Em Snapshots
.Sx Snapshots
section of
.Xr zfsconcepts 8
for details.
.Bl -tag -width "-o"
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property; see
Set the specified property; see
.Nm zfs Cm create
for details.
.It Fl r
Recursively create snapshots of all descendent datasets
.El
.El
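For example, to atomically snapshot a dataset together with all of its
descendants (names here are hypothetical), tagging the snapshots with a user
property:

```shell
# -r snapshots every descendant dataset at the same moment in time;
# -o attaches a user property to each created snapshot.
zfs snapshot -r -o com.example:reason=pre-upgrade tank/home@2021-05-27
```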
.
.Sh SEE ALSO
.Xr zfs-bookmark 8 ,
.Xr zfs-clone 8 ,
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -33,9 +32,10 @@
.Dd June 30, 2019
.Dt ZFS-UPGRADE 8
.Os
.
.Sh NAME
.Nm zfs-upgrade
.Nd Manage upgrading the on-disk version of filesystems.
.Nd manage on-disk version of ZFS filesystems
.Sh SYNOPSIS
.Nm zfs
.Cm upgrade
@ -46,7 +46,8 @@
.Cm upgrade
.Op Fl r
.Op Fl V Ar version
.Fl a | Ar filesystem
.Fl a Ns | Ns Ar filesystem
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
@ -65,35 +66,31 @@ Displays a list of currently supported file system versions.
.Cm upgrade
.Op Fl r
.Op Fl V Ar version
.Fl a | Ar filesystem
.Fl a Ns | Ns Ar filesystem
.Xc
Upgrades file systems to a new on-disk version.
Once this is done, the file systems will no longer be accessible on systems
running older versions of the software.
running older versions of ZFS.
.Nm zfs Cm send
streams generated from new snapshots of these file systems cannot be accessed on
systems running older versions of the software.
systems running older versions of ZFS.
.Pp
In general, the file system version is independent of the pool version.
See
.Xr zpool 8
for information on the
.Nm zpool Cm upgrade
command.
.Xr zpool-features 5
for information on features of ZFS storage pools.
.Pp
In some cases, the file system version and the pool version are interrelated and
the pool version must be upgraded before the file system version can be
upgraded.
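A typical session, with a hypothetical pool name:

```shell
zfs upgrade            # list filesystems on older on-disk versions
zfs upgrade -v         # list the filesystem versions this software supports
zfs upgrade -r tank    # upgrade tank and all descendants to the latest version
```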
.Bl -tag -width "-V"
.Bl -tag -width "filesystem"
.It Fl V Ar version
Upgrade to the specified
Upgrade to
.Ar version .
If the
.Fl V
flag is not specified, this command upgrades to the most recent version.
If not specified, upgrade to the most recent version.
This
option can only be used to increase the version number, and only up to the most
recent version supported by this software.
recent version supported by this version of ZFS.
.It Fl a
Upgrade all file systems on all imported pools.
.It Ar filesystem


@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@@ -33,43 +32,45 @@
.Dd June 30, 2019
.Dt ZFS-USERSPACE 8
.Os
.
.Sh NAME
.Nm zfs-userspace
.Nd Displays space consumed by, and quotas on, each user or group in the specified filesystem or snapshot.
.Nd display space and quotas of ZFS dataset
.Sh SYNOPSIS
.Nm zfs
.Cm userspace
.Op Fl Hinp
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns ... Oc
.Oo Fl s Ar field Oc Ns ...
.Oo Fl S Ar field Oc Ns ...
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns ... Oc
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
.Oo Fl s Ar field Oc Ns …
.Oo Fl S Ar field Oc Ns …
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
.Nm zfs
.Cm groupspace
.Op Fl Hinp
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns ... Oc
.Oo Fl s Ar field Oc Ns ...
.Oo Fl S Ar field Oc Ns ...
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns ... Oc
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
.Oo Fl s Ar field Oc Ns …
.Oo Fl S Ar field Oc Ns …
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
.Nm zfs
.Cm projectspace
.Op Fl Hp
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns ... Oc
.Oo Fl s Ar field Oc Ns ...
.Oo Fl S Ar field Oc Ns ...
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
.Oo Fl s Ar field Oc Ns …
.Oo Fl S Ar field Oc Ns …
.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm userspace
.Op Fl Hinp
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns ... Oc
.Oo Fl s Ar field Oc Ns ...
.Oo Fl S Ar field Oc Ns ...
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns ... Oc
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
.Oo Fl s Ar field Oc Ns …
.Oo Fl S Ar field Oc Ns …
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
.Xc
Displays space consumed by, and quotas on, each user in the specified filesystem,
@@ -78,11 +79,11 @@ If a path is given, the filesystem that contains that path will be used.
This corresponds to the
.Sy userused@ Ns Em user ,
.Sy userobjused@ Ns Em user ,
.Sy userquota@ Ns Em user,
.Sy userquota@ Ns Em user ,
and
.Sy userobjquota@ Ns Em user
properties.
.Bl -tag -width "-H"
.Bl -tag -width "-S field"
.It Fl H
Do not print headers; use tab-delimited output.
.It Fl S Ar field
@@ -93,10 +94,7 @@ See
Translate SID to POSIX ID.
The POSIX ID may be ephemeral if no mapping exists.
Normal POSIX interfaces
.Po for example,
.Xr stat 2 ,
.Nm ls Fl l
.Pc
.Pq like Xr stat 2 , Nm ls Fl l
perform this translation, so the
.Fl i
option allows the output from
@@ -113,7 +111,7 @@ However, the
option will report that the POSIX entity has the total usage and quota for both.
.It Fl n
Print numeric ID instead of user/group name.
.It Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.It Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
Display only the specified fields from the following set:
.Sy type ,
.Sy name ,
@@ -134,7 +132,7 @@ flags may be specified multiple times to sort first by one field, then by
another.
The default is
.Fl s Sy type Fl s Sy name .
.It Fl t Ar type Ns Oo , Ns Ar type Oc Ns ...
.It Fl t Ar type Ns Oo , Ns Ar type Oc Ns …
Print only the specified types from the following set:
.Sy all ,
.Sy posixuser ,
@@ -142,17 +140,17 @@ Print only the specified types from the following set:
.Sy posixgroup ,
.Sy smbgroup .
The default is
.Fl t Sy posixuser Ns \&, Ns Sy smbuser .
.Fl t Sy posixuser , Ns Sy smbuser .
The default can be changed to include group types.
.El
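The combination of the
.Fl H
and
.Fl p
flags makes the output script-friendly; the following sketch parses hypothetical tab-delimited rows such as a run with
.Fl o Sy type , Ns Sy name , Ns Sy used , Ns Sy quota
might print (the sample rows are assumptions, not captured output):

```python
# Parse tab-delimited `zfs userspace -Hp -o type,name,used,quota` output.
# -H drops headers and uses tabs; -p prints exact numeric values.
# The sample below is hypothetical, not real captured output.
sample = (
    "POSIX User\talice\t1048576\t5368709120\n"
    "POSIX User\tbob\t2097152\t-\n"
)

def parse_userspace(text):
    rows = []
    for line in text.splitlines():
        typ, name, used, quota = line.split("\t")
        rows.append({
            "type": typ,
            "name": name,
            "used": int(used),
            "quota": None if quota == "-" else int(quota),  # "-": no quota set
        })
    return rows
```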
.It Xo
.Nm zfs
.Cm groupspace
.Op Fl Hinp
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns ... Oc
.Oo Fl s Ar field Oc Ns ...
.Oo Fl S Ar field Oc Ns ...
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns ... Oc
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
.Oo Fl s Ar field Oc Ns …
.Oo Fl S Ar field Oc Ns …
.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
.Ar filesystem Ns | Ns Ar snapshot
.Xc
Displays space consumed by, and quotas on, each group in the specified
@@ -160,28 +158,30 @@ filesystem or snapshot.
This subcommand is identical to
.Cm userspace ,
except that the default types to display are
.Fl t Sy posixgroup Ns \&, Ns Sy smbgroup .
.Fl t Sy posixgroup , Ns Sy smbgroup .
.It Xo
.Nm zfs
.Cm projectspace
.Op Fl Hp
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns ... Oc
.Oo Fl s Ar field Oc Ns ...
.Oo Fl S Ar field Oc Ns ...
.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
.Oo Fl s Ar field Oc Ns …
.Oo Fl S Ar field Oc Ns …
.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
.Xc
Displays space consumed by, and quotas on, each project in the specified
filesystem or snapshot. This subcommand is identical to
filesystem or snapshot.
This subcommand is identical to
.Cm userspace ,
except that the project identifier is numeral, not name. So need neither
the option
.Sy -i
except that the project identifier is a numeral, not a name.
So you need neither the option
.Fl i
for SID to POSIX ID nor
.Sy -n
.Fl n
for numeric ID, nor
.Sy -t
.Fl t
for types.
.El
.
.Sh SEE ALSO
.Xr zfs-set 8 ,
.Xr zfsprops 8


@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@@ -40,17 +39,19 @@
.Dd June 30, 2019
.Dt ZFS 8
.Os
.
.Sh NAME
.Nm zfs
.Nd configures ZFS file systems
.Nd configure ZFS datasets
.Sh SYNOPSIS
.Nm
.Fl ?V
.Nm
.Cm version
.Nm
.Cm <subcommand>
.Op Ar <args>
.Cm subcommand
.Op Ar arguments
.
.Sh DESCRIPTION
The
.Nm
@@ -58,23 +59,18 @@ command configures ZFS datasets within a ZFS storage pool, as described in
.Xr zpool 8 .
A dataset is identified by a unique path within the ZFS namespace.
For example:
.Bd -literal
pool/{filesystem,volume,snapshot}
.Ed
.Dl pool/{filesystem,volume,snapshot}
.Pp
where the maximum length of a dataset name is
.Dv MAXNAMELEN
.Pq 256 bytes
.Sy MAXNAMELEN Pq 256B
and the maximum amount of nesting allowed in a path is 50 levels deep.
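A minimal sketch of the two naming limits just described; this is illustrative only, since the real checks live in the ZFS module and enforce additional character and component rules:

```python
MAXNAMELEN = 256  # maximum dataset name length, per the text above
MAX_DEPTH = 50    # maximum nesting depth of a dataset path

def valid_dataset_name(name: str) -> bool:
    # Check only the two documented limits; real ZFS is stricter.
    if len(name) >= MAXNAMELEN:
        return False
    if name.count("/") + 1 > MAX_DEPTH:
        return False
    return True
```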
.Pp
A dataset can be one of the following:
.Bl -tag -width "file system"
.Bl -tag -offset Ds -width "file system"
.It Sy file system
A ZFS dataset of type
.Sy filesystem
can be mounted within the standard system namespace and behaves like other file
Can be mounted within the standard system namespace and behaves like other file
systems.
While ZFS file systems are designed to be POSIX compliant, known issues exist
While ZFS file systems are designed to be POSIX-compliant, known issues exist
that prevent compliance in some cases.
Applications that depend on standards conformance might fail due to non-standard
behavior when checking file system free space.
@@ -92,38 +88,39 @@ or
Much like a
.Sy snapshot ,
but without the hold on on-disk data.
It can be used as the source of a send (but not for a receive). It is specified as
It can be used as the source of a send (but not for a receive).
It is specified as
.Ar filesystem Ns # Ns Ar name
or
.Ar volume Ns # Ns Ar name .
.El
.Pp
For details see
.Xr zfsconcepts 8 .
See
.Xr zfsconcepts 8
for details.
.
.Ss Properties
Properties are divided into two types, native properties and user-defined
.Po or
.Qq user
.Pc
Properties are divided into two types: native properties and user-defined
.Pq or Qq user
properties.
Native properties either export internal statistics or control ZFS behavior.
In addition, native properties are either editable or read-only.
User properties have no effect on ZFS behavior, but you can use them to annotate
datasets in a way that is meaningful in your environment.
For more information about properties, see the
.Xr zfsprops 8 man page.
For more information about properties, see
.Xr zfsprops 8 .
.
.Ss Encryption
Enabling the
.Sy encryption
feature allows for the creation of encrypted filesystems and volumes.
ZFS will encrypt file and zvol data, file attributes, ACLs, permission bits,
directory listings, FUID mappings, and
.Sy userused
/
.Sy groupused
.Sy userused Ns / Ns Sy groupused Ns / Ns Sy projectused
data.
For an overview of encryption see the
.Xr zfs-load-key 8 command manual.
For an overview of encryption, see
.Xr zfs-load-key 8 .
.
.Sh SUBCOMMANDS
All subcommands that modify state are logged persistently to the pool in their
original form.
@@ -134,9 +131,6 @@ Displays a help message.
.Nm
.Fl V , -version
.Xc
An alias for the
.Nm zfs Cm version
subcommand.
.It Xo
.Nm
.Cm version
@@ -145,6 +139,7 @@ Displays the software version of the
.Nm
userland utility and the zfs kernel module.
.El
.
.Ss Dataset Management
.Bl -tag -width ""
.It Xr zfs-list 8
@@ -158,26 +153,25 @@ Renames the given dataset (filesystem or snapshot).
.It Xr zfs-upgrade 8
Manage upgrading the on-disk version of filesystems.
.El
.
.Ss Snapshots
.Bl -tag -width ""
.It Xr zfs-snapshot 8
Creates snapshots with the given names.
.It Xr zfs-rollback 8
Roll back the given dataset to a previous snapshot.
.It Xo
.Xr zfs-hold 8 /
.Xr zfs-release 8
.Xc
.It Xr zfs-hold 8 Ns / Ns Xr zfs-release 8
Add or remove a hold reference to the specified snapshot or snapshots.
If a hold exists on a snapshot, attempts to destroy that snapshot by using the
.Nm zfs Cm destroy
command return
.Er EBUSY .
.Sy EBUSY .
.It Xr zfs-diff 8
Display the difference between a snapshot of a given filesystem and another
snapshot of that filesystem from a later time or the current contents of the
filesystem.
.El
.
.Ss Clones
.Bl -tag -width ""
.It Xr zfs-clone 8
@@ -187,6 +181,7 @@ Promotes a clone file system to no longer be dependent on its
.Qq origin
snapshot.
.El
.
.Ss Send & Receive
.Bl -tag -width ""
.It Xr zfs-send 8
@@ -211,6 +206,7 @@ This feature can be used to allow clones of a filesystem to be made available on
a remote system, in the case where their parent need not (or needs to not) be
usable.
.El
.
.Ss Properties
.Bl -tag -width ""
.It Xr zfs-get 8
@@ -223,18 +219,16 @@ restored to default if no ancestor has the property set, or with the
.Fl S
option reverted to the received value if one exists.
.El
.
.Ss Quotas
.Bl -tag -width ""
.It Xo
.Xr zfs-userspace 8 /
.Xr zfs-groupspace 8 /
.Xr zfs-projectspace 8
.Xc
.It Xr zfs-userspace 8 Ns / Ns Xr zfs-groupspace 8 Ns / Ns Xr zfs-projectspace 8
Displays space consumed by, and quotas on, each user, group, or project
in the specified filesystem or snapshot.
.It Xr zfs-project 8
List, set, or clear project ID and/or inherit flag on the file(s) or directories.
.El
.
.Ss Mountpoints
.Bl -tag -width ""
.It Xr zfs-mount 8
@@ -245,6 +239,7 @@ property.
.It Xr zfs-unmount 8
Unmounts currently mounted ZFS file systems.
.El
.
.Ss Shares
.Bl -tag -width ""
.It Xr zfs-share 8
@@ -252,6 +247,7 @@ Shares available ZFS file systems.
.It Xr zfs-unshare 8
Unshares currently shared ZFS file systems.
.El
.
.Ss Delegated Administration
.Bl -tag -width ""
.It Xr zfs-allow 8
@@ -259,6 +255,7 @@ Delegate permissions on the specified filesystem or volume.
.It Xr zfs-unallow 8
Remove delegated permissions on the specified filesystem or volume.
.El
.
.Ss Encryption
.Bl -tag -width ""
.It Xr zfs-change-key 8
@@ -268,12 +265,14 @@ Load the key for the specified encrypted dataset, enabling access.
.It Xr zfs-unload-key 8
Unload a key for the specified dataset, removing the ability to access the dataset.
.El
.
.Ss Channel Programs
.Bl -tag -width ""
.It Xr zfs-program 8
Execute ZFS administrative operations
programmatically via a Lua script-language channel program.
.El
.
.Ss Jails
.Bl -tag -width ""
.It Xr zfs-jail 8
@@ -281,100 +280,101 @@ Attaches a filesystem to a jail.
.It Xr zfs-unjail 8
Detaches a filesystem from a jail.
.El
.
.Ss Waiting
.Bl -tag -width ""
.It Xr zfs-wait 8
Wait for background activity in a filesystem to complete.
.El
.
.Sh EXIT STATUS
The
.Nm
utility exits 0 on success, 1 if an error occurs, and 2 if invalid command line
options were specified.
utility exits
.Sy 0
on success,
.Sy 1
if an error occurs, and
.Sy 2
if invalid command line options were specified.
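When scripting around the utility, the three exit codes can be distinguished explicitly; a small sketch mirroring the mapping above (the helper name is an assumption, not part of ZFS):

```python
# Exit-code meanings, per the EXIT STATUS section above.
EXIT_MEANINGS = {0: "success", 1: "error", 2: "invalid command line options"}

def describe_exit(code: int) -> str:
    # e.g. feed subprocess.run(["zfs", ...]).returncode through this.
    return EXIT_MEANINGS.get(code, "unknown")
```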
.
.Sh EXAMPLES
.Bl -tag -width ""
.It Sy Example 1 No Creating a ZFS File System Hierarchy
.
.It Sy Example 1 : No Creating a ZFS File System Hierarchy
The following commands create a file system named
.Em pool/home
.Ar pool/home
and a file system named
.Em pool/home/bob .
.Ar pool/home/bob .
The mount point
.Pa /export/home
is set for the parent file system, and is automatically inherited by the child
file system.
.Bd -literal
# zfs create pool/home
# zfs set mountpoint=/export/home pool/home
# zfs create pool/home/bob
.Ed
.It Sy Example 2 No Creating a ZFS Snapshot
.Dl # Nm zfs Cm create Ar pool/home
.Dl # Nm zfs Cm set Sy mountpoint Ns = Ns Ar /export/home pool/home
.Dl # Nm zfs Cm create Ar pool/home/bob
.
.It Sy Example 2 : No Creating a ZFS Snapshot
The following command creates a snapshot named
.Sy yesterday .
.Ar yesterday .
This snapshot is mounted on demand in the
.Pa .zfs/snapshot
directory at the root of the
.Em pool/home/bob
.Ar pool/home/bob
file system.
.Bd -literal
# zfs snapshot pool/home/bob@yesterday
.Ed
.It Sy Example 3 No Creating and Destroying Multiple Snapshots
.Dl # Nm zfs Cm snapshot Ar pool/home/bob Ns @ Ns Ar yesterday
.
.It Sy Example 3 : No Creating and Destroying Multiple Snapshots
The following command creates snapshots named
.Sy yesterday
of
.Em pool/home
.Ar yesterday No of Ar pool/home
and all of its descendent file systems.
Each snapshot is mounted on demand in the
.Pa .zfs/snapshot
directory at the root of its file system.
The second command destroys the newly created snapshots.
.Bd -literal
# zfs snapshot -r pool/home@yesterday
# zfs destroy -r pool/home@yesterday
.Ed
.It Sy Example 4 No Disabling and Enabling File System Compression
.Dl # Nm zfs Cm snapshot Fl r Ar pool/home Ns @ Ns Ar yesterday
.Dl # Nm zfs Cm destroy Fl r Ar pool/home Ns @ Ns Ar yesterday
.
.It Sy Example 4 : No Disabling and Enabling File System Compression
The following command disables the
.Sy compression
property for all file systems under
.Em pool/home .
.Ar pool/home .
The next command explicitly enables
.Sy compression
for
.Em pool/home/anne .
.Bd -literal
# zfs set compression=off pool/home
# zfs set compression=on pool/home/anne
.Ed
.It Sy Example 5 No Listing ZFS Datasets
.Ar pool/home/anne .
.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy off Ar pool/home
.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy on Ar pool/home/anne
.
.It Sy Example 5 : No Listing ZFS Datasets
The following command lists all active file systems and volumes in the system.
Snapshots are displayed if the
.Sy listsnaps
property is
.Sy on .
Snapshots are displayed if
.Sy listsnaps Ns = Ns Sy on .
The default is
.Sy off .
See
.Xr zpool 8
.Xr zpoolprops 8
for more information on pool properties.
.Bd -literal
# zfs list
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm list
NAME            USED  AVAIL  REFER  MOUNTPOINT
pool            450K   457G    18K  /pool
pool/home       315K   457G    21K  /export/home
pool/home/anne   18K   457G    18K  /export/home/anne
pool/home/bob   276K   457G   276K  /export/home/bob
.Ed
.It Sy Example 6 No Setting a Quota on a ZFS File System
.
.It Sy Example 6 : No Setting a Quota on a ZFS File System
The following command sets a quota of 50 Gbytes for
.Em pool/home/bob .
.Bd -literal
# zfs set quota=50G pool/home/bob
.Ed
.It Sy Example 7 No Listing ZFS Properties
.Ar pool/home/bob :
.Dl # Nm zfs Cm set Sy quota Ns = Ns Ar 50G pool/home/bob
.
.It Sy Example 7 : No Listing ZFS Properties
The following command lists all properties for
.Em pool/home/bob .
.Bd -literal
# zfs get all pool/home/bob
.Ar pool/home/bob :
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm get Sy all Ar pool/home/bob
NAME           PROPERTY              VALUE                  SOURCE
pool/home/bob  type                  filesystem             -
pool/home/bob  creation              Tue Jul 21 15:53 2009  -
@@ -420,63 +420,61 @@ pool/home/bob usedbychildren 0 -
pool/home/bob  usedbyrefreservation  0                      -
.Ed
.Pp
The following command gets a single property value.
.Bd -literal
# zfs get -H -o value compression pool/home/bob
The following command gets a single property value:
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm get Fl H o Sy value compression Ar pool/home/bob
on
.Ed
.Pp
The following command lists all properties with local settings for
.Em pool/home/bob .
.Bd -literal
# zfs get -r -s local -o name,property,value all pool/home/bob
.Ar pool/home/bob :
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm get Fl r s Sy local Fl o Sy name , Ns Sy property , Ns Sy value all Ar pool/home/bob
NAME           PROPERTY     VALUE
pool/home/bob  quota        20G
pool/home/bob  compression  on
.Ed
.It Sy Example 8 No Rolling Back a ZFS File System
.
.It Sy Example 8 : No Rolling Back a ZFS File System
The following command reverts the contents of
.Em pool/home/anne
.Ar pool/home/anne
to the snapshot named
.Sy yesterday ,
deleting all intermediate snapshots.
.Bd -literal
# zfs rollback -r pool/home/anne@yesterday
.Ed
.It Sy Example 9 No Creating a ZFS Clone
.Ar yesterday ,
deleting all intermediate snapshots:
.Dl # Nm zfs Cm rollback Fl r Ar pool/home/anne Ns @ Ns Ar yesterday
.
.It Sy Example 9 : No Creating a ZFS Clone
The following command creates a writable file system whose initial contents are
the same as
.Em pool/home/bob@yesterday .
.Bd -literal
# zfs clone pool/home/bob@yesterday pool/clone
.Ed
.It Sy Example 10 No Promoting a ZFS Clone
.Ar pool/home/bob@yesterday .
.Dl # Nm zfs Cm clone Ar pool/home/bob@yesterday pool/clone
.
.It Sy Example 10 : No Promoting a ZFS Clone
The following commands illustrate how to test out changes to a file system, and
then replace the original file system with the changed one, using clones, clone
promotion, and renaming:
.Bd -literal
# zfs create pool/project/production
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm create Ar pool/project/production
populate /pool/project/production with data
# zfs snapshot pool/project/production@today
# zfs clone pool/project/production@today pool/project/beta
.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today
.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta
make changes to /pool/project/beta and test them
# zfs promote pool/project/beta
# zfs rename pool/project/production pool/project/legacy
# zfs rename pool/project/beta pool/project/production
.No # Nm zfs Cm promote Ar pool/project/beta
.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy
.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production
once the legacy version is no longer needed, it can be destroyed
# zfs destroy pool/project/legacy
.No # Nm zfs Cm destroy Ar pool/project/legacy
.Ed
.It Sy Example 11 No Inheriting ZFS Properties
.
.It Sy Example 11 : No Inheriting ZFS Properties
The following command causes
.Em pool/home/bob
and
.Em pool/home/anne
.Ar pool/home/bob No and Ar pool/home/anne
to inherit the
.Sy checksum
property from their parent.
.Bd -literal
# zfs inherit checksum pool/home/bob pool/home/anne
.Ed
.It Sy Example 12 No Remotely Replicating ZFS Data
.Dl # Nm zfs Cm inherit Sy checksum Ar pool/home/bob pool/home/anne
.
.It Sy Example 12 : No Remotely Replicating ZFS Data
The following commands send a full stream and then an incremental stream to a
remote machine, restoring them into
.Em poolB/received/fs@a
@@ -488,147 +486,145 @@ must contain the file system
.Em poolB/received ,
and must not initially contain
.Em poolB/received/fs .
.Bd -literal
# zfs send pool/fs@a | \e
ssh host zfs receive poolB/received/fs@a
# zfs send -i a pool/fs@b | \e
ssh host zfs receive poolB/received/fs
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm send Ar pool/fs@a |
.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs Ns @ Ns Ar a
.No # Nm zfs Cm send Fl i Ar a pool/fs@b |
.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs
.Ed
.It Sy Example 13 No Using the Nm zfs Cm receive Fl d No Option
.
.It Sy Example 13 : No Using the Nm zfs Cm receive Fl d No Option
The following command sends a full stream of
.Em poolA/fsA/fsB@snap
.Ar poolA/fsA/fsB@snap
to a remote machine, receiving it into
.Em poolB/received/fsA/fsB@snap .
.Ar poolB/received/fsA/fsB@snap .
The
.Em fsA/fsB@snap
.Ar fsA/fsB@snap
portion of the received snapshot's name is determined from the name of the sent
snapshot.
.Em poolB
.Ar poolB
must contain the file system
.Em poolB/received .
.Ar poolB/received .
If
.Em poolB/received/fsA
.Ar poolB/received/fsA
does not exist, it is created as an empty file system.
.Bd -literal
# zfs send poolA/fsA/fsB@snap | \e
ssh host zfs receive -d poolB/received
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm send Ar poolA/fsA/fsB@snap |
.No " " Nm ssh Ar host Nm zfs Cm receive Fl d Ar poolB/received
.Ed
.It Sy Example 14 No Setting User Properties
.
.It Sy Example 14 : No Setting User Properties
The following example sets the user-defined
.Sy com.example:department
property for a dataset.
.Bd -literal
# zfs set com.example:department=12345 tank/accounting
.Ed
.It Sy Example 15 No Performing a Rolling Snapshot
.Ar com.example : Ns Ar department
property for a dataset:
.Dl # Nm zfs Cm set Ar com.example : Ns Ar department Ns = Ns Ar 12345 tank/accounting
.
.It Sy Example 15 : No Performing a Rolling Snapshot
The following example shows how to maintain a history of snapshots with a
consistent naming scheme.
To keep a week's worth of snapshots, the user destroys the oldest snapshot,
renames the remaining snapshots, and then creates a new snapshot, as follows:
.Bd -literal
# zfs destroy -r pool/users@7daysago
# zfs rename -r pool/users@6daysago @7daysago
# zfs rename -r pool/users@5daysago @6daysago
# zfs rename -r pool/users@4daysago @5daysago
# zfs rename -r pool/users@3daysago @4daysago
# zfs rename -r pool/users@2daysago @3daysago
# zfs rename -r pool/users@yesterday @2daysago
# zfs rename -r pool/users@today @yesterday
# zfs snapshot -r pool/users@today
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm destroy Fl r Ar pool/users@7daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@6daysago No @ Ns Ar 7daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@5daysago No @ Ns Ar 6daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@4daysago No @ Ns Ar 5daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@3daysago No @ Ns Ar 4daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@2daysago No @ Ns Ar 3daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@yesterday No @ Ns Ar 2daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@today No @ Ns Ar yesterday
.No # Nm zfs Cm snapshot Fl r Ar pool/users Ns @ Ns Ar today
.Ed
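The rotation above can be generated mechanically; a sketch that emits the same nine commands for an N-day window (the dataset name and retention are the example's, not requirements):

```python
def rotation_commands(dataset="pool/users", days=7):
    # Snapshot names, newest first: today, yesterday, 2daysago ... Ndaysago.
    names = ["today", "yesterday"] + [f"{n}daysago" for n in range(2, days + 1)]
    # Destroy the oldest snapshot first.
    cmds = [f"zfs destroy -r {dataset}@{names[-1]}"]
    # Rename each remaining snapshot to the next-older name, oldest first.
    for newer, older in zip(names[-2::-1], names[::-1]):
        cmds.append(f"zfs rename -r {dataset}@{newer} @{older}")
    # Finally, take the fresh snapshot.
    cmds.append(f"zfs snapshot -r {dataset}@{names[0]}")
    return cmds
```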
.It Sy Example 16 No Setting sharenfs Property Options on a ZFS File System
.
.It Sy Example 16 : No Setting sharenfs Property Options on a ZFS File System
The following commands show how to set
.Sy sharenfs
property options to enable
.Sy rw
access for a set of
.Sy IP
addresses and to enable root access for system
.Sy neo
property options to enable read-write
access for a set of IP addresses and to enable root access for system
.Qq neo
on the
.Em tank/home
file system.
.Bd -literal
# zfs set sharenfs='rw=@123.123.0.0/16,root=neo' tank/home
.Ed
.Ar tank/home
file system:
.Dl # Nm zfs Cm set Sy sharenfs Ns = Ns ' Ns Ar rw Ns =@123.123.0.0/16,root= Ns Ar neo Ns ' tank/home
.Pp
If you are using
.Sy DNS
for host name resolution, specify the fully qualified hostname.
.It Sy Example 17 No Delegating ZFS Administration Permissions on a ZFS Dataset
If you are using DNS for host name resolution,
specify the fully-qualified hostname.
.
.It Sy Example 17 : No Delegating ZFS Administration Permissions on a ZFS Dataset
The following example shows how to set permissions so that user
.Sy cindys
.Ar cindys
can create, destroy, mount, and take snapshots on
.Em tank/cindys .
.Ar tank/cindys .
The permissions on
.Em tank/cindys
.Ar tank/cindys
are also displayed.
.Bd -literal
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
# zfs allow tank/cindys
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm allow Sy cindys create , Ns Sy destroy , Ns Sy mount , Ns Sy snapshot Ar tank/cindys
.No # Nm zfs Cm allow Ar tank/cindys
---- Permissions on tank/cindys --------------------------------------
Local+Descendent permissions:
        user cindys create,destroy,mount,snapshot
.Ed
.Pp
Because the
.Em tank/cindys
.Ar tank/cindys
mount point permission is set to 755 by default, user
.Sy cindys
.Ar cindys
will be unable to mount file systems under
.Em tank/cindys .
.Ar tank/cindys .
Add an ACE similar to the following syntax to provide mount point access:
.Bd -literal
# chmod A+user:cindys:add_subdirectory:allow /tank/cindys
.Ed
.It Sy Example 18 No Delegating Create Time Permissions on a ZFS Dataset
.Dl # Cm chmod No A+user: Ns Ar cindys Ns :add_subdirectory:allow Ar /tank/cindys
.
.It Sy Example 18 : No Delegating Create Time Permissions on a ZFS Dataset
The following example shows how to allow anyone in the group
.Sy staff
.Ar staff
to create file systems in
.Em tank/users .
.Ar tank/users .
This syntax also allows staff members to destroy their own file systems, but not
destroy anyone else's file system.
The permissions on
.Em tank/users
.Ar tank/users
are also displayed.
.Bd -literal
# zfs allow staff create,mount tank/users
# zfs allow -c destroy tank/users
# zfs allow tank/users
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm allow Ar staff Sy create , Ns Sy mount Ar tank/users
.No # Nm zfs Cm allow Fl c Sy destroy Ar tank/users
.No # Nm zfs Cm allow Ar tank/users
---- Permissions on tank/users ---------------------------------------
Permission sets:
        destroy
Local+Descendent permissions:
        group staff create,mount
.Ed
.It Sy Example 19 No Defining and Granting a Permission Set on a ZFS Dataset
.
.It Sy Example 19 : No Defining and Granting a Permission Set on a ZFS Dataset
The following example shows how to define and grant a permission set on the
.Em tank/users
.Ar tank/users
file system.
The permissions on
.Em tank/users
.Ar tank/users
are also displayed.
.Bd -literal
# zfs allow -s @pset create,destroy,snapshot,mount tank/users
# zfs allow staff @pset tank/users
# zfs allow tank/users
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm allow Fl s No @ Ns Ar pset Sy create , Ns Sy destroy , Ns Sy snapshot , Ns Sy mount Ar tank/users
.No # Nm zfs Cm allow staff No @ Ns Ar pset tank/users
.No # Nm zfs Cm allow Ar tank/users
---- Permissions on tank/users ---------------------------------------
Permission sets:
        @pset create,destroy,mount,snapshot
Local+Descendent permissions:
        group staff @pset
.Ed
.It Sy Example 20 No Delegating Property Permissions on a ZFS Dataset
.
.It Sy Example 20 : No Delegating Property Permissions on a ZFS Dataset
The following example shows how to grant the ability to set quotas and reservations
on the
.Em users/home
.Ar users/home
file system.
The permissions on
.Em users/home
.Ar users/home
are also displayed.
.Bd -literal
# zfs allow cindys quota,reservation users/home
# zfs allow users/home
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm allow Ar cindys Sy quota , Ns Sy reservation Ar users/home
.No # Nm zfs Cm allow Ar users/home
---- Permissions on users/home ---------------------------------------
Local+Descendent permissions:
        user cindys quota,reservation
@@ -637,32 +633,34 @@ cindys% zfs get quota users/home/marks
NAME              PROPERTY  VALUE  SOURCE
users/home/marks  quota     10G    local
.Ed
.It Sy Example 21 No Removing ZFS Delegated Permissions on a ZFS Dataset
.
.It Sy Example 21 : No Removing ZFS Delegated Permissions on a ZFS Dataset
The following example shows how to remove the snapshot permission from the
.Sy staff
.Ar staff
group on the
.Em tank/users
.Sy tank/users
file system.
The permissions on
.Em tank/users
.Sy tank/users
are also displayed.
.Bd -literal
# zfs unallow staff snapshot tank/users
# zfs allow tank/users
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm unallow Ar staff Sy snapshot Ar tank/users
.No # Nm zfs Cm allow Ar tank/users
---- Permissions on tank/users ---------------------------------------
Permission sets:
        @pset create,destroy,mount,snapshot
Local+Descendent permissions:
        group staff @pset
.Ed
.It Sy Example 22 No Showing the differences between a snapshot and a ZFS Dataset
.
.It Sy Example 22 : No Showing the differences between a snapshot and a ZFS Dataset
The following example shows how to see what has changed between a prior
snapshot of a ZFS dataset and its current state.
The
.Fl F
option is used to indicate type information for the files affected.
.Bd -literal
# zfs diff -F tank/test@before tank/test
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm diff Fl F Ar tank/test@before tank/test
M       /       /tank/test/
M       F       /tank/test/linked       (+1)
R       F       /tank/test/oldname -> /tank/test/newname
@@ -670,57 +668,55 @@ R F /tank/test/oldname -> /tank/test/newname
+       F       /tank/test/created
M       F       /tank/test/modified
.Ed
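The columnar output above is easy to consume from a script; a sketch parsing the (abridged) sample shown, where column one is the change type, column two the file type, and the remainder the path and annotations (the real output separates columns with tabs, which whitespace splitting also handles):

```python
# Abridged sample modeled on the `zfs diff -F` output above.
sample = """\
M / /tank/test/
M F /tank/test/linked (+1)
R F /tank/test/oldname -> /tank/test/newname
+ F /tank/test/created
M F /tank/test/modified
"""

def parse_zfs_diff(text):
    entries = []
    for line in text.splitlines():
        # Split into change type, file type, and the rest of the line.
        change, ftype, rest = line.split(None, 2)
        entries.append((change, ftype, rest))
    return entries
```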
.It Sy Example 23 No Creating a bookmark
.
.It Sy Example 23 : No Creating a bookmark
The following example creates a bookmark to a snapshot.
This bookmark can then be used instead of a snapshot in send streams.
.Bd -literal
# zfs bookmark rpool@snapshot rpool#bookmark
.Ed
.It Sy Example 24 No Setting sharesmb Property Options on a ZFS File System
.Dl # Nm zfs Cm bookmark Ar rpool Ns @ Ns Ar snapshot rpool Ns # Ns Ar bookmark
.
.It Sy Example 24 : No Setting Sy sharesmb No Property Options on a ZFS File System
The following example shows how to share an SMB filesystem through ZFS.
Note that a user and his/her password must be given.
.Bd -literal
# smbmount //127.0.0.1/share_tmp /mnt/tmp \\
-o user=workgroup/turbo,password=obrut,uid=1000
.Ed
Note that a user and their password must be given.
.Dl # Nm smbmount Ar //127.0.0.1/share_tmp /mnt/tmp Fl o No user=workgroup/turbo,password=obrut,uid=1000
.Pp
Minimal
.Em /etc/samba/smb.conf
configuration required:
.Pa /etc/samba/smb.conf
configuration is required, as follows.
.Pp
Samba will need to listen to 'localhost' (127.0.0.1) for the ZFS utilities to
Samba will need to bind to the loopback interface for the ZFS utilities to
communicate with Samba.
This is the default behavior for most Linux distributions.
.Pp
Samba must be able to authenticate a user.
This can be done in a number of ways, depending on if using the system password file, LDAP or the Samba
specific smbpasswd file.
How to do this is outside the scope of this manual.
Please refer to the
This can be done in a number of ways
.Pq Xr passwd 5 , LDAP , Xr smbpasswd 5 , &c.\& .
How to do this is outside the scope of this document; refer to
.Xr smb.conf 5
man page for more information.
for more information.
.Pp
See the
.Sy USERSHARE section
of the
.Xr smb.conf 5
man page for all configuration options in case you need to modify any options
to the share afterwards.
.Sx USERSHARES
section for all configuration options,
in case you need to modify any options of the share afterwards.
Do note that any changes done with the
.Xr net 8
command will be undone if the share is ever unshared (such as at a reboot etc).
command will be undone if the share is ever unshared (like via a reboot).
.El
.
.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZFS_MOUNT_HELPER"
.It Ev ZFS_MOUNT_HELPER
.It Sy ZFS_MOUNT_HELPER
Cause
.Nm zfs mount
.Nm zfs Cm mount
to use
.Em /bin/mount
to mount zfs datasets. This option is provided for backwards compatibility with older zfs versions.
.Xr mount 8
to mount ZFS datasets.
This option is provided for backwards compatibility with older ZFS versions.
.El
.
.Sh INTERFACE STABILITY
.Sy Committed .
.
.Sh SEE ALSO
.Xr attr 1 ,
.Xr gzip 1 ,


@@ -18,11 +18,12 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2020 by Delphix. All rights reserved.
.\"
.Dd April 17, 2020
.Dt ZFS_IDS_TO_PATH 8
.Os
.
.Sh NAME
.Nm zfs_ids_to_path
.Nd convert objset and object ids to names and paths
@@ -30,21 +31,21 @@
.Nm
.Op Fl v
.Ar pool
.Ar objset id
.Ar object id
.Nm
.Ar objset-id
.Ar object-id
.
.Sh DESCRIPTION
.Pp
The
.Sy zfs_ids_to_path
utility converts a provided objset and object id into a path to the file that
those ids refer to.
utility converts the provided objset and object ids
into a path to the file they refer to.
.Bl -tag -width "-D"
.It Fl v
Verbose.
Print the dataset name and the file path within the dataset separately. This
will work correctly even if the dataset is not mounted.
Print the dataset name and the file path within the dataset separately.
This will work correctly even if the dataset is not mounted.
.El
.
.Sh SEE ALSO
.Xr zfs 8 ,
.Xr zdb 8
.Xr zdb 8 ,
.Xr zfs 8


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
@ -33,9 +32,11 @@
.Dd June 30, 2019
.Dt ZFSCONCEPTS 8
.Os
.
.Sh NAME
.Nm zfsconcepts
.Nd An overview of ZFS concepts.
.Nd overview of ZFS concepts
.
.Sh DESCRIPTION
.Ss ZFS File System Hierarchy
A ZFS storage pool is a logical collection of devices that provide space for
@ -77,15 +78,15 @@ property.
.Ss Bookmarks
A bookmark is like a snapshot, a read-only copy of a file system or volume.
Bookmarks can be created extremely quickly, compared to snapshots, and they
consume no additional space within the pool. Bookmarks can also have arbitrary
names, much like snapshots.
consume no additional space within the pool.
Bookmarks can also have arbitrary names, much like snapshots.
.Pp
Unlike snapshots, bookmarks can not be accessed through the filesystem in any
way. From a storage standpoint a bookmark just provides a way to reference
when a snapshot was created as a distinct object. Bookmarks are initially
tied to a snapshot, not the filesystem or volume, and they will survive if the
snapshot itself is destroyed. Since they are very light weight there's little
incentive to destroy them.
Unlike snapshots, bookmarks can not be accessed through the filesystem in any way.
From a storage standpoint a bookmark just provides a way to reference
when a snapshot was created as a distinct object.
Bookmarks are initially tied to a snapshot, not the filesystem or volume,
and they will survive if the snapshot itself is destroyed.
Since they are very light weight there's little incentive to destroy them.
.Ss Clones
A clone is a writable volume or file system whose initial contents are the same
as another dataset.
@ -162,37 +163,44 @@ If needed, ZFS file systems can also be managed with traditional tools
If a file system's mount point is set to
.Sy legacy ,
ZFS makes no attempt to manage the file system, and the administrator is
responsible for mounting and unmounting the file system. Because pools must
responsible for mounting and unmounting the file system.
Because pools must
be imported before a legacy mount can succeed, administrators should ensure
that legacy mounts are only attempted after the zpool import process
finishes at boot time. For example, on machines using systemd, the mount
option
finishes at boot time.
For example, on machines using systemd, the mount option
.Pp
.Nm x-systemd.requires=zfs-import.target
.Pp
will ensure that the zfs-import completes before systemd attempts mounting
the filesystem. See systemd.mount(5) for details.
the filesystem.
See
.Xr systemd.mount 5
for details.
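For instance, a legacy-mounted ZFS filesystem in /etc/fstab might carry the option like this (the dataset and mount point are hypothetical):

```
# Hypothetical /etc/fstab entry: delay mounting until zfs-import.target completes
tank/home  /home  zfs  defaults,x-systemd.requires=zfs-import.target  0  0
```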
.Ss Deduplication
Deduplication is the process for removing redundant data at the block level,
reducing the total amount of data stored. If a file system has the
reducing the total amount of data stored.
If a file system has the
.Sy dedup
property enabled, duplicate data blocks are removed synchronously. The result
property enabled, duplicate data blocks are removed synchronously.
The result
is that only unique data is stored and common components are shared among files.
.Pp
Deduplicating data is a very resource-intensive operation. It is generally
recommended that you have at least 1.25 GiB of RAM per 1 TiB of storage when
you enable deduplication. Calculating the exact requirement depends heavily
Deduplicating data is a very resource-intensive operation.
It is generally recommended that you have at least 1.25 GiB of RAM
per 1 TiB of storage when you enable deduplication.
Calculating the exact requirement depends heavily
on the type of data stored in the pool.
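The rule of thumb above can be sketched as back-of-the-envelope arithmetic; the pool size here is hypothetical and real requirements vary with the data:

```shell
# Rough dedup RAM sizing from the ~1.25 GiB-per-TiB guideline above.
pool_tib=40
awk -v t="$pool_tib" 'BEGIN { printf "dedup RAM estimate: %.1f GiB\n", t * 1.25 }'
# → dedup RAM estimate: 50.0 GiB
```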
.Pp
Enabling deduplication on an improperly-designed system can result in
performance issues (slow IO and administrative operations). It can potentially
lead to problems importing a pool due to memory exhaustion. Deduplication
can consume significant processing power (CPU) and memory as well as generate
additional disk IO.
performance issues (slow IO and administrative operations).
It can potentially lead to problems importing a pool due to memory exhaustion.
Deduplication can consume significant processing power (CPU) and memory as well
as generate additional disk IO.
.Pp
Before creating a pool with deduplication enabled, ensure that you have planned
your hardware requirements appropriately and implemented appropriate recovery
practices, such as regular backups. As an alternative to deduplication
consider using
.Sy compression=on ,
as a less resource-intensive alternative.
practices, such as regular backups.
Consider using the
.Sy compression
property as a less resource-intensive alternative.

File diff suppressed because it is too large

@ -17,7 +17,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -26,34 +25,28 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd August 9, 2019
.Dd May 27, 2021
.Dt ZPOOL-ADD 8
.Os
.
.Sh NAME
.Nm zpool-add
.Nd Adds specified virtual devices to a ZFS storage pool
.Nd add vdevs to ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Ar pool vdev Ns …
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Em Virtual Devices
section of
.Xr zpoolconcepts 8.
.Xr zpoolconcepts 8 .
The behavior of the
.Fl f
option, and the device checks performed are described in the
@ -68,13 +61,17 @@ Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev ,
GUIDs instead of the normal device names. These GUIDs can be used in place of
GUIDs instead of the normal device names.
These GUIDs can be used in place of
device names for the zpool detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links. This can be used to look up the current block
device name regardless of the /dev/disk/ path used to open it.
resolving all symbolic links.
This can be used to look up the current block
device name regardless of the
.Pa /dev/disk
path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
@ -83,20 +80,22 @@ device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path. This can be used in
conjunction with the
instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
Sets the given pool properties.
See the
.Xr zpoolprops 8
manual page for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.El
manual page for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
.
.Sh SEE ALSO
.Xr zpool-remove 8 ,
.Xr zpool-attach 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-online 8
.Xr zpool-online 8 ,
.Xr zpool-remove 8


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -30,24 +29,18 @@
.Dd May 15, 2020
.Dt ZPOOL-ATTACH 8
.Os
.
.Sh NAME
.Nm zpool-attach
.Nd Attach a new device to an existing ZFS virtual device (vdev).
.Nd attach new device to existing ZFS vdev
.Sh SYNOPSIS
.Nm zpool
.Cm attach
.Op Fl fsw
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm attach
.Op Fl fsw
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
@ -76,10 +69,12 @@ Forces use of
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
Sets the given pool properties.
See the
.Xr zpoolprops 8
manual page for a list of valid properties that can be set. The only property
supported at the moment is ashift.
manual page for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.It Fl s
The
.Ar new_device
@ -92,10 +87,10 @@ Waits until
.Ar new_device
has finished resilvering before returning.
.El
.El
.
.Sh SEE ALSO
.Xr zpool-detach 8 ,
.Xr zpool-add 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-online 8 ,


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,56 +26,47 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd August 9, 2019
.Dd May 27, 2021
.Dt ZPOOL-CHECKPOINT 8
.Os
.
.Sh NAME
.Nm zpool-checkpoint
.Nd Checkpoints the current state of a ZFS storage pool
.Nd check-point current ZFS storage pool state
.Sh SYNOPSIS
.Nm zpool
.Cm checkpoint
.Op Fl d, -discard Oo Fl w, -wait Oc
.Op Fl d Op Fl w
.Ar pool
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm checkpoint
.Op Fl d, -discard Oo Fl w, -wait Oc
.Ar pool
.Xc
Checkpoints the current state of
.Ar pool
, which can be later restored by
.Nm zpool Cm import --rewind-to-checkpoint .
The existence of a checkpoint in a pool prohibits the following
.Nm zpool
commands:
.Cm remove ,
.Cm attach ,
.Cm detach ,
.Cm split ,
and
.Cm reguid .
subcommands:
.Cm remove , attach , detach , split , No and Cm reguid .
In addition, it may break reservation boundaries if the pool lacks free
space.
The
.Nm zpool Cm status
command indicates the existence of a checkpoint or the progress of discarding a
checkpoint from a pool.
The
.Nm zpool Cm list
command reports how much space the checkpoint takes from the pool.
can be used to check how much space the checkpoint takes from the pool.
.
.Sh OPTIONS
.Bl -tag -width Ds
.It Fl d, -discard
.It Fl d , -discard
Discards an existing checkpoint from
.Ar pool .
.It Fl w, -wait
.It Fl w , -wait
Waits until the checkpoint has finished being discarded before returning.
.El
.El
.
.Sh SEE ALSO
.Xr zfs-snapshot 8 ,
.Xr zpool-import 8 ,
.Xr zpool-status 8 ,
.Xr zfs-snapshot 8
.Xr zpool-status 8


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,33 +26,30 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd August 9, 2019
.Dd May 27, 2021
.Dt ZPOOL-CLEAR 8
.Os
.
.Sh NAME
.Nm zpool-clear
.Nd Clears device errors in a ZFS storage pool.
.Nd clear device errors in ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm clear
.Ar pool
.Op Ar device
.Oo Ar device Oc Ns …
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
If multihost is enabled, and the pool has been suspended, this will not
resume I/O. While the pool was suspended, it may have been imported on
If
.Sy multihost
is enabled and the pool has been suspended, this will not resume I/O.
While the pool was suspended, it may have been imported on
another host, and resuming I/O could result in pool damage.
.El
.
.Sh SEE ALSO
.Xr zdb 8 ,
.Xr zpool-reopen 8 ,


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -28,42 +27,32 @@
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
.\"
.Dd August 9, 2019
.Dd June 2, 2021
.Dt ZPOOL-CREATE 8
.Os
.
.Sh NAME
.Nm zpool-create
.Nd Creates a new ZFS storage pool
.Nd create ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
.Op Fl o Ar compatibility Ns = Ns Ar off | legacy | file Bq , Ns Ar file Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool vdev Ns ...
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Op Fl o Ar compatibility Ns = Ns Ar off | legacy | file Bq , Ns Ar file Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Oo Fl o Sy feature@ Ns Ar feature Ns = Ns Ar value Oc
.Op Fl o Ar compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns …
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Xc
.Ar pool
.Ar vdev Ns …
.
.Sh DESCRIPTION
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
alphanumeric characters as well as the underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy \&- ,
@ -84,46 +73,41 @@ are reserved, as are names beginning with
.Sy mirror ,
.Sy raidz ,
.Sy draid ,
.Sy spare ,
and the pattern
.Sy c[0-9] .
and
.Sy spare .
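The naming rules above can be sketched as a quick shell check; the helper is hypothetical and the reserved set shown is only the "begins with" patterns listed here:

```shell
# Hypothetical validity check for pool names, per the rules above:
# must start with a letter; names beginning with mirror/raidz/draid/spare are reserved.
check_name() {
    case $1 in
        mirror*|raidz*|draid*|spare*) echo reserved ;;
        [A-Za-z]*)                    echo ok ;;
        *)                            echo invalid ;;
    esac
}
check_name tank      # → ok
check_name mirror0   # → reserved
check_name 1pool     # → invalid
```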
The
.Ar vdev
specification is described in the
.Em Virtual Devices
.Sx Virtual Devices
section of
.Xr zpoolconcepts 8 .
.Pp
The command attempts to verify that each device specified is accessible and not
currently in use by another subsystem. However this check is not robust enough
currently in use by another subsystem.
However this check is not robust enough
to detect simultaneous attempts to use a new device in different pools, even if
.Sy multihost
is
.Sy enabled.
The
administrator must ensure that simultaneous invocations of any combination of
.Sy zpool replace ,
.Sy zpool create ,
.Sy zpool add ,
.Sy multihost Ns = Ns Sy enabled .
The administrator must ensure that simultaneous invocations of any combination of
.Nm zpool Cm replace ,
.Nm zpool Cm create ,
.Nm zpool Cm add ,
or
.Sy zpool labelclear ,
do not refer to the same device. Using the same device in two pools will
result in pool corruption.
.Nm zpool Cm labelclear ,
do not refer to the same device.
Using the same device in two pools will result in pool corruption.
.Pp
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevents a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Fl f .
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
An attempt to combine redundant and non-redundant storage in a single pool,
or to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
The use of differently-sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
@ -133,27 +117,27 @@ Unless the
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
will not be able to be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool. The
By default all supported features are enabled on the new pool.
The
.Fl d
option or the
option and the
.Fl o Ar compatibility
property (eg:
.Fl o Ar compatibility=2020
) can be used to restrict the features that are enabled, so that the
pool can be imported on other releases of the ZFS software.
.Bl -tag -width Ds
property
.Pq e.g.\& Fl o Sy compatibility Ns = Ns Ar 2020
can be used to restrict the features that are enabled, so that the
pool can be imported on other releases of ZFS.
.Bl -tag -width "-t tname"
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
with
.Fl o .
See
.Xr zpool-features 5
for details about feature properties.
@ -169,14 +153,14 @@ The default mount point is
or
.Pa altroot/pool
if
.Ar altroot
.Sy altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.Xr zfsprops 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
@ -184,37 +168,43 @@ The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
See
.Xr zpoolprops 8
manual page for a list of valid properties that can be set.
.It Fl o Ar compatibility Ns = Ns Ar off | legacy | file Bq , Ns Ar file Ns ...
Specifies compatibility feature sets. See
for a list of valid properties that can be set.
.It Fl o Ar compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
Specifies compatibility feature sets.
See
.Xr zpool-features 5
for more information about compatibility feature sets.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature. See the
.It Fl o Sy feature@ Ns Ar feature Ns = Ns Ar value
Sets the given pool feature.
See the
.Xr zpool-features 5
section for a list of valid features that can be set.
Value can be either disabled or enabled.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
See
.Xr zfsprops 8
manual page for a list of valid properties that can be set.
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tname
Sets the in-core pool name to
.Sy tname
while the on-disk name will be the name specified as the pool name
.Sy pool .
This will set the default cachefile property to none. This is intended
.Ar tname
while the on-disk name will be the name specified as
.Ar pool .
This will set the default of the
.Sy cachefile
property to
.Sy none .
This is intended
to handle name space collisions when creating pools for other systems,
such as virtual machines or physical machines whose pools live on network
block devices.
.El
.El
.
.Sh SEE ALSO
.Xr zpool-destroy 8 ,
.Xr zpool-export 8 ,


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -30,32 +29,30 @@
.Dd August 9, 2019
.Dt ZPOOL-DETACH 8
.Os
.
.Sh NAME
.Nm zpool-detach
.Nd Detaches a device from a ZFS mirror vdev (virtual device)
.Nd detach device from ZFS mirror
.Sh SYNOPSIS
.Nm zpool
.Cm detach
.Ar pool device
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If device may be re-added to the pool later on then consider the
.Sy zpool offline
If
.Ar device
may be re-added to the pool later on then consider the
.Nm zpool Cm offline
command instead.
.El
.
.Sh SEE ALSO
.Xr zpool-attach 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-labelclear 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-remove 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-split 8


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,45 +26,48 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd August 9, 2019
.Dd May 27, 2021
.Dt ZPOOL-EVENTS 8
.Os
.
.Sh NAME
.Nm zpool-events
.Nd Lists all recent events generated by the ZFS kernel modules
.Nd list recent events generated by kernel
.Sh SYNOPSIS
.Nm zpool
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Op Fl vHf
.Op Ar pool
.Nm zpool
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Xc
Lists all recent events generated by the ZFS kernel modules. These events
are consumed by the
.Fl c
.
.Sh DESCRIPTION
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare. For more information about the subclasses and event payloads
with a hot spare.
For more information about the subclasses and event payloads
that can be generated see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.Pp
.Bl -tag -compact -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
Scripted mode.
Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
.El
.
.Sh SEE ALSO
.Xr zed 8 ,
.Xr zpool-wait 8 ,
.Xr zfs-events 5 ,
.Xr zfs-module-parameters 5
.Xr zfs-module-parameters 5 ,
.Xr zed 8 ,
.Xr zpool-wait 8


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -30,29 +29,31 @@
.Dd August 9, 2019
.Dt ZPOOL-GET 8
.Os
.
.Sh NAME
.Nm zpool-get
.Nd Retrieves properties for the specified ZFS storage pool(s)
.Nd retrieve properties of ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar pool Oc Ns ...
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
.Oo Ar pool Oc Ns …
.Nm zpool
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar pool Oc Ns ...
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
.Oo Ar pool Oc Ns …
.Xc
Retrieves the given list of properties
.Po
@ -62,25 +63,29 @@ is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
name Name of storage pool
property Property name
value Property value
source Property source, either 'default' or 'local'.
.Ed
.Bl -tag -compact -offset Ds -width "property"
.It Sy name
Name of storage pool.
.It Sy property
Property name.
.It Sy value
Property value.
.It Sy source
Property source, either
.Sy default No or Sy local .
.El
.Pp
See the
.Xr zpoolprops 8
manual page for more information on the available pool properties.
.Bl -tag -width Ds
.Bl -tag -compact -offset Ds -width "-o field"
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
A comma-separated list of columns to display, defaults to
.Sy name , Ns Sy property , Ns Sy value , Ns Sy source .
.It Fl p
Display numbers in parsable (exact) values.
.El
@ -96,7 +101,8 @@ See the
manual page for more information on what properties can be set and acceptable
values.
.El
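The tab-separated scripted mode described above lends itself to post-processing. A hedged sketch, using made-up stand-in data rather than real `zpool get -H` output:

```shell
# Parse `zpool get -H` style output (tab-separated: name, property, value, source).
# The sample is hypothetical stand-in data.
sample=$(printf 'tank\tsize\t2.72T\t-\ntank\tcapacity\t45%%\t-')
printf '%s\n' "$sample" | awk -F'\t' '{ print $2 "=" $3 }'
# → size=2.72T
# → capacity=45%
```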
.
.Sh SEE ALSO
.Xr zpoolprops 8 ,
.Xr zpool-features 5 ,
.Xr zpool-list 8 ,
.Xr zpool-features 5
.Xr zpoolprops 8


@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -30,50 +29,53 @@
.Dd August 9, 2019
.Dt ZPOOL-IMPORT 8
.Os
.
.Sh NAME
.Nm zpool-import
.Nd Lists ZFS storage pools available to import or import the specified pools
.Nd import ZFS storage pools or list available pools
.Sh SYNOPSIS
.Nm zpool
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Oo Fl d Ar dir Ns | Ns Ar device Oc Ns …
.Nm zpool
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl F Op Fl nTX
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Op Fl R Ar root
.Nm zpool
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl Dflmt
.Op Fl F Op Fl nTX
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Op Ar newpool
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Oo Fl d Ar dir Ns | Ns Ar device Oc Ns …
.Xc
Lists pools available to import.
If the
.Fl d or
.Fl c
options are not specified, this command searches for devices using libblkid
on Linux and geom on FreeBSD.
on Linux and geom on
.Fx .
The
.Fl d
option can be specified multiple times, and all directories are searched.
@ -114,10 +116,10 @@ Lists destroyed pools only.
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl F Op Fl nTX
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Op Fl R Ar root
.Op Fl s
.Xc
@ -168,12 +170,13 @@ If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag
this command will block waiting for the keys to be entered.
Without this flag
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
@ -221,36 +224,42 @@ administrator can see how the pool would look like if they were
to fully rewind.
.It Fl s
Scan using the default search path, the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
consulted.
A custom search path may be specified by setting the
.Sy ZPOOL_IMPORT_PATH
environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
recovery option.
Determines whether extreme measures to find a valid txg should take place.
This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
Pools imported at an inconsistent txg may contain uncorrectable checksum errors.
For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
option, above.
WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
Specify the txg to use for rollback.
Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
option, above.
WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.El
.It Xo
.Nm zpool
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl Dflmt
.Op Fl F Op Fl nTX
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
@ -309,12 +318,13 @@ If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag
this command will block waiting for the keys to be entered.
Without this flag
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
@ -350,38 +360,49 @@ property to
.Ar root .
.It Fl s
Scan using the default search path, the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
consulted.
A custom search path may be specified by setting the
.Sy ZPOOL_IMPORT_PATH
environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
recovery option.
Determines whether extreme measures to find a valid txg should take place.
This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
checksum errors.
For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
option, above.
WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback.
Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above.
WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl t
Used with
.Sy newpool .
Specifies that
.Sy newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used
in all label updates and therefore is retained upon export.
Will also set
.Fl o Sy cachefile Ns = Ns Sy none
when not explicitly specified.
.El
.El
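As an illustration of the temporary-name behaviour described for
.Fl t
above, a possible invocation (pool names are placeholders; this is a sketch, not a transcript from a real system):

```sh
# Import the pool labeled "tank" under the temporary name "scratch".
# The on-disk label keeps "tank", and -o cachefile=none is implied
# unless a cachefile is given explicitly.
zpool import -t tank scratch

# ... use the pool as "scratch" ...

# The temporary name lasts until export; the pool is still "tank".
zpool export scratch
```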
.
.Sh SEE ALSO
.Xr zpool-export 8 ,
.Xr zpool-list 8 ,
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,40 +26,33 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd May 27, 2021
.Dt ZPOOL-INITIALIZE 8
.Os
.
.Sh NAME
.Nm zpool-initialize
.Nd write to unallocated regions of ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm initialize
.Op Fl c Ns | Ns Fl s
.Op Fl w
.Ar pool
.Oo Ar device Oc Ns …
.
.Sh DESCRIPTION
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
Only leaf data or log devices may be initialized.
.Bl -tag -width Ds
.It Fl c , -cancel
Cancel initializing on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
initialized, the command will fail and no cancellation will occur on any device.
.It Fl s , -suspend
Suspend initializing on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
@ -68,10 +60,10 @@ initialized, the command will fail and no suspension will occur on any device.
Initializing can then be resumed by running
.Nm zpool Cm initialize
with no flags on the relevant target devices.
.It Fl w , -wait
Wait until the devices have finished initializing before returning.
.El
.
.Sh SEE ALSO
.Xr zpool-add 8 ,
.Xr zpool-attach 8 ,
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,79 +26,85 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd May 27, 2021
.Dt ZPOOL-IOSTAT 8
.Os
.
.Sh NAME
.Nm zpool-iostat
.Nd display logical I/O statistics for ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
.Oo Ar pool Ns … Ns | Ns Oo Ar pool vdev Ns … Oc Ns | Ns Ar vdev Ns … Oc
.Op Ar interval Op Ar count
.
.Sh DESCRIPTION
Displays logical I/O statistics for the given pools/vdevs.
Physical I/O statistics may be observed via
.Xr iostat 1 .
If writes are located nearby, they may be merged into a single
larger operation.
Additional I/O may be generated depending on the level of vdev redundancy.
To filter output, you may pass in a list of pools, a pool and list of vdevs
in that pool, or a list of any vdevs from any pool.
If no items are specified, statistics for every pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until killed.
If
.Fl n
flag is specified the headers are displayed only once, otherwise they are
displayed periodically.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always the statistics since boot regardless of whether
.Ar interval
and
.Ar count
are passed.
However, this behavior can be suppressed with the
.Fl y
flag.
Also note that the units of
.Sy K ,
.Sy M ,
.Sy G Ns …
that are printed in the report are in base 1024.
To get the raw values, use the
.Fl p
flag.
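The base-1024 scaling can be checked with a line of arithmetic (a sketch unrelated to any real pool; awk stands in for the conversion the display performs):

```shell
# A raw byte count as -p would print it, scaled the way the default
# display does: dividing by 1024, so 1536 bytes shows as 1.5K,
# not the 1.536K that base-1000 units would give.
raw=1536
echo "$raw" | awk '{ printf "%.1fK\n", $1 / 1024 }'
```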
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns …
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output.
Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory.
Script names containing the slash
.Pq Sy /
character are not allowed.
The default search path can be overridden by setting the
.Sy ZPOOL_SCRIPTS_PATH
environment variable.
A privileged user can only run
.Fl c
if they have the
.Sy ZPOOL_SCRIPTS_AS_ROOT
environment variable set.
If a script requires the use of a privileged command, like
.Xr smartctl 8 ,
then it's recommended you allow the user access to it in
.Pa /etc/sudoers
@ -114,25 +119,23 @@ is passed without a script name, it prints a list of all scripts.
also sets verbose mode
.No \&( Ns Fl v Ns No \&).
.Pp
Script output should be in the form of "name=value".
The column name is set to "name" and the value is set to "value".
Multiple lines can be used to output multiple columns.
The first line of output not in the
"name=value" format is displayed without a column title,
and no more output after that is displayed.
This can be useful for printing error messages.
Blank or NULL values are printed as a '-' to make output AWKable.
.Pp
The following environment variables are set before running each script:
.Bl -tag -compact -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_PATH
Full path to the vdev
.It Sy VDEV_UPATH
Underlying path to the vdev
.Pq Pa /dev/sd* .
For use with device mapper, multipath, or partitioned vdevs.
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
@ -149,99 +152,106 @@ for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl n
Print headers only once when passed
.It Fl p
Display numbers in parsable (exact) values.
Time values are in nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf vdev's I/O.
This includes histograms of individual I/O (ind) and aggregate I/O (agg).
These stats can be useful for observing how well I/O aggregation is working.
Note that TRIM I/O may exceed 16M, but will be counted as 16M.
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the
pool, in addition to the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot:
suppress it.
.It Fl w
Display latency histograms:
.Bl -tag -compact -width "asyncq_read/write"
.It Sy total_wait
Total I/O time (queuing + disk I/O time).
.It Sy disk_wait
Disk I/O time (time reading/writing the disk).
.It Sy syncq_wait
Amount of time I/O spent in synchronous priority queues.
Does not include disk time.
.It Sy asyncq_wait
Amount of time I/O spent in asynchronous priority queues.
Does not include disk time.
.It Sy scrub
Amount of time I/O spent in scrub queue.
Does not include disk time.
.El
.It Fl l
Include average latency statistics:
.Bl -tag -compact -width "asyncq_read/write"
.It Sy total_wait
Average total I/O time (queuing + disk I/O time).
.It Sy disk_wait
Average disk I/O time (time reading/writing the disk).
.It Sy syncq_wait
Average amount of time I/O spent in synchronous priority queues.
Does not include disk time.
.It Sy asyncq_wait
Average amount of time I/O spent in asynchronous priority queues.
Does not include disk time.
.It Sy scrub
Average queuing time in scrub queue.
Does not include disk time.
.It Sy trim
Average queuing time in trim queue.
Does not include disk time.
.El
.It Fl q
Include active queue statistics.
Each priority queue has both pending
.Sy ( pend )
and active
.Sy ( activ )
I/O requests.
Pending requests are waiting to be issued to the disk,
and active requests have been issued to disk and are waiting for completion.
These stats are broken out by priority queue:
.Bl -tag -compact -width "asyncq_read/write"
.It Sy syncq_read/write
Current number of entries in synchronous priority
queues.
.It Sy asyncq_read/write
Current number of entries in asynchronous priority queues.
.It Sy scrubq_read
Current number of entries in scrub queue.
.It Sy trimq_write
Current number of entries in trim queue.
.El
.Pp
All queue statistics are instantaneous measurements of the number of
entries in the queues.
If you specify an interval,
the measurements will be sampled from the end of the interval.
.El
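Since these are instantaneous counters, post-processing is simple; a sketch of pulling a pend/activ pair out of a scripted row with awk (the row and field positions are invented for illustration, not the documented column order):

```shell
# A tab-separated sample row such as scripted (-H) output might give;
# the numbers and field order here are made up for the example.
printf 'tank\t12\t3\n' |
    awk -F'\t' '{ printf "%s: syncq pend=%s activ=%s\n", $1, $2, $3 }'
```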
.
.Sh SEE ALSO
.Xr iostat 1 ,
.Xr smartctl 8 ,
.Xr zpool-list 8 ,
.Xr zpool-status 8
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -30,28 +29,20 @@
.Dd August 9, 2019
.Dt ZPOOL-LIST 8
.Os
.
.Sh NAME
.Nm zpool-list
.Nd list information about ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns …
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns …
.Op Ar interval Op Ar count
.
.Sh DESCRIPTION
Lists the given pools along with a health status and space usage.
If no
.Ar pool Ns s
@ -60,7 +51,7 @@ When given an
.Ar interval ,
the information is printed every
.Ar interval
seconds until killed.
If
.Ar count
is specified, the command exits after
@ -68,8 +59,8 @@ is specified, the command exits after
reports are printed.
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
@ -81,19 +72,21 @@ See the
.Xr zpoolprops 8
manual page for a list of valid properties.
The default list is
.Sy name , size , allocated , free , checkpoint, expandsize , fragmentation ,
.Sy capacity , dedupratio , health , altroot .
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk
path used to open it.
.It Fl p
Display numbers in parsable
.Pq exact
values.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl T Sy u Ns | Ns Sy d
@ -113,7 +106,7 @@ Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.El
.
.Sh SEE ALSO
.Xr zpool-import 8 ,
.Xr zpool-status 8
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -30,27 +29,30 @@
.Dd August 9, 2019
.Dt ZPOOL-OFFLINE 8
.Os
.
.Sh NAME
.Nm zpool-offline
.Nd take physical devices offline in ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm offline
.Op Fl ft
.Ar pool
.Ar device Ns …
.Nm zpool
.Cm online
.Op Fl e
.Ar pool
.Ar device Ns …
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm offline
.Op Fl ft
.Ar pool
.Ar device Ns …
.Xc
Takes the specified physical device offline.
While the
@ -59,8 +61,9 @@ is offline, no attempt is made to read or write to the device.
This command is not applicable to spares.
.Bl -tag -width Ds
.It Fl f
Force fault.
Instead of offlining the disk, put it into a faulted state.
The fault will persist across imports unless the
.Fl t
flag was specified.
.It Fl t
@ -71,7 +74,8 @@ Upon reboot, the specified physical device reverts to its previous state.
.Nm zpool
.Cm online
.Op Fl e
.Ar pool
.Ar device Ns …
.Xc
Brings the specified physical device online.
This command is not applicable to spares.
@ -82,6 +86,7 @@ If the device is part of a mirror or raidz then all devices must be expanded
before the new space will become available to the pool.
.El
.El
.
.Sh SEE ALSO
.Xr zpool-detach 8 ,
.Xr zpool-remove 8 ,
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -32,12 +31,12 @@
.Os
.Sh NAME
.Nm zpool-remove
.Nd remove devices from ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm remove
.Op Fl npw
.Ar pool Ar device Ns …
.Nm zpool
.Cm remove
.Fl s
@ -48,7 +47,7 @@
.Nm zpool
.Cm remove
.Op Fl npw
.Ar pool Ar device Ns …
.Xc
Removes the specified device from the pool.
This command supports removing hot spare, cache, log, and both mirrored and
@ -57,7 +56,7 @@ When the primary pool storage includes a top-level raidz vdev only hot spare,
cache, and log devices can be removed.
Note that keys for all encrypted datasets must be loaded for top-level vdevs
to be removed.
.Pp
Removing a top-level vdev reduces the total amount of space in the storage pool.
The specified device will be evacuated by copying all allocated space from it to
the other devices in the pool.
@ -67,8 +66,8 @@ command initiates the removal and returns, while the evacuation continues in
the background.
The removal progress can be monitored with
.Nm zpool Cm status .
If an IO error is encountered during the removal process it will be cancelled.
The
.Sy device_removal
feature flag must be enabled to remove a top-level vdev, see
.Xr zpool-features 5 .
@ -81,7 +80,8 @@ the
command.
.Bl -tag -width Ds
.It Fl n
Do not actually perform the removal
.Pq Qq No-op .
Instead, print the estimated amount of memory that will be used by the
mapping table after the removal completes.
This is nonzero only for top-level vdevs.
@ -105,7 +105,7 @@ Stops and cancels an in-progress removal of a top-level vdev.
.Sh SEE ALSO
.Xr zpool-add 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-labelclear 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-split 8
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,29 +26,27 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd June 2, 2021
.Dt ZPOOL-REOPEN 8
.Os
.
.Sh NAME
.Nm zpool-reopen
.Nd reopen vdevs associated with ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm reopen
.Op Fl n
.Oo Ar pool Oc Ns …
.
.Sh DESCRIPTION
Reopen all vdevs associated with the specified pools,
or all pools if none specified.
.
.Sh OPTIONS
.Bl -tag -width "-n"
.It Fl n
Do not restart an in-progress scrub operation.
This is not recommended and can
result in partially resilvered devices unless a second scrub is performed.
.El
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,48 +26,42 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd May 29, 2021
.Dt ZPOOL-REPLACE 8
.Os
.
.Sh NAME
.Nm zpool-replace
.Nd replace one device with another in ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm replace
.Op Fl fsw
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new-device
.
.Sh DESCRIPTION
Replaces
.Ar device
with
.Ar new-device .
This is equivalent to attaching
.Ar new-device ,
waiting for it to resilver, and then detaching
.Ar device .
Any in progress scrub will be cancelled.
.Pp
The size of
.Ar new-device
must be greater than or equal to the minimum size of all the devices in a mirror
or raidz configuration.
.Pp
.Ar new-device
is required if the pool is not redundant.
If
.Ar new-device
is not specified, it defaults to
.Ar device .
This form of replacement is useful after an existing disk has failed and has
been physically replaced.
In this case, the new disk may have the same
@ -78,18 +71,19 @@ ZFS recognizes this.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new-device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Xr zpoolprops 8
manual page for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.It Fl s
The
.Ar new-device
is reconstructed sequentially to restore redundancy as quickly as possible.
Checksums are not verified during sequential reconstruction so a scrub is
started when the resilver completes.
@ -97,7 +91,7 @@ Sequential reconstruction is not supported for raidz configurations.
.It Fl w
Waits until the replacement has completed before returning.
.El
.
.Sh SEE ALSO
.Xr zpool-detach 8 ,
.Xr zpool-initialize 8 ,
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,29 +26,27 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd May 27, 2021
.Dt ZPOOL-RESILVER 8
.Os
.
.Sh NAME
.Nm zpool-resilver
.Nd resilver devices in ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm resilver
.Ar pool Ns …
.
.Sh DESCRIPTION
Starts a resilver of the specified pools.
If an existing resilver is already running it will be restarted from the beginning.
Any drives that were scheduled for a deferred
resilver will be added to the new one.
This requires the
.Sy resilver_defer
pool feature.
.
.Sh SEE ALSO
.Xr zpool-iostat 8 ,
.Xr zpool-online 8 ,
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,27 +26,21 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd May 27, 2021
.Dt ZPOOL-SCRUB 8
.Os
.
.Sh NAME
.Nm zpool-scrub
.Nd begin or resume scrub of ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm scrub
.Op Fl s Ns | Ns Fl p
.Op Fl w
.Ar pool Ns …
.
.Sh DESCRIPTION
Begins a scrub or resumes a paused scrub.
The scrub examines all data in the specified pools to verify that it checksums
correctly.
@ -78,13 +71,13 @@ If a resilver is in progress, ZFS does not allow a scrub to be started until the
resilver completes.
.Pp
Note that, due to changes in pool data on a live system, it is possible for
scrubs to progress slightly beyond 100% completion.
During this period, no completion time estimate will be provided.
.
.Sh OPTIONS
.Bl -tag -width "-s"
.It Fl s
Stop scrubbing.
.It Fl p
Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
@ -98,7 +91,7 @@ again.
.It Fl w
Wait until scrub has completed before returning.
.El
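A pause/resume cycle might look like this (placeholder pool name; an invocation sketch rather than a tested transcript, since it needs a real pool):

```sh
zpool scrub tank	# begin a scrub (or resume a paused one)
zpool scrub -p tank	# pause; progress is periodically synced to disk
zpool scrub tank	# resume from the saved position
zpool scrub -w tank	# or wait for a running scrub to finish
```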
.
.Sh SEE ALSO
.Xr zpool-iostat 8 ,
.Xr zpool-resilver 8 ,
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,31 +26,23 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd June 2, 2021
.Dt ZPOOL-SPLIT 8
.Os
.
.Sh NAME
.Nm zpool-split
.Nd split devices off ZFS storage pool, creating new pool
.Sh SYNOPSIS
.Nm zpool
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
.Op Fl R Ar root
.Ar pool newpool
.Oo Ar device Oc Ns …
.
.Sh DESCRIPTION
Splits devices off
.Ar pool
creating
@ -76,30 +67,31 @@ and, should any devices remain unspecified,
the last device in each mirror is used as would be by default.
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the new pool online.
Note that if any datasets have
.Sy keylocation Ns = Ns Sy prompt ,
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys are loaded.
.It Fl n
Do a dry-run
.Pq Qq No-op
split: do not actually perform it.
Print out the expected configuration of
.Ar newpool .
.It Fl P
Display full paths for vdevs instead of only the last component of
the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
@ -117,7 +109,7 @@ to
.Ar root
and automatically import it.
.El
.
.Sh SEE ALSO
.Xr zpool-import 8 ,
.Xr zpool-list 8 ,
@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,37 +26,29 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd June 2, 2021
.Dt ZPOOL-STATUS 8
.Os
.
.Sh NAME
.Nm zpool-status
.Nd show detailed health status for ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm status
.Op Fl DigLpPstvx
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns …
.Oo Ar pool Oc Ns …
.Op Ar interval Op Ar count
.
.Sh DESCRIPTION
Displays the detailed health status for the given pools.
If no
.Ar pool
is specified, then the status of each pool in the system is displayed.
For more information on pool and device health, see the
.Sx Device Failure and Recovery
section of
.Xr zpoolconcepts 8 .
.Pp
@ -66,11 +57,12 @@ and the estimated time to completion.
Both of these are only approximate, because the amount of data in the pool and
the other workloads on the system can change.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns …
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm status
output. See the
output.
See the
.Fl c
option of
.Nm zpool Cm iostat
@ -78,19 +70,20 @@ for complete details.
.It Fl i
Display vdev initialization status.
.It Fl g
Display vdev GUIDs instead of the normal device names. These GUIDs
can be used in place of device names for the zpool
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for vdevs resolving all symbolic links. This can
be used to look up the current block device name regardless of the
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable (exact) values.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path. This can be used in conjunction with the
the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl D
@ -100,11 +93,14 @@ and referenced
.Pq logically referenced in the pool
block counts and sizes by reference count.
.It Fl s
Display the number of leaf VDEV slow IOs. This is the number of IOs that
didn't complete in \fBzio_slow_io_ms\fR milliseconds (default 30 seconds).
Display the number of leaf VDEV slow IOs.
This is the number of IOs that
didn't complete in
.Sy zio_slow_io_ms
milliseconds (default 30 seconds).
This does not necessarily mean the IOs failed to complete, just took an
unreasonably long amount of time. This may indicate a problem with the
underlying storage.
unreasonably long amount of time.
This may indicate a problem with the underlying storage.
.It Fl t
Display vdev TRIM status.
.It Fl T Sy u Ns | Ns Sy d
@ -127,7 +123,7 @@ Only display status for pools that are exhibiting errors or are otherwise
unavailable.
Warnings about pools not using the latest on-disk format will not be included.
.El
.El
.
.Sh SEE ALSO
.Xr zpool-events 8 ,
.Xr zpool-history 8 ,

View File

@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -30,28 +29,25 @@
.Dd August 9, 2019
.Dt ZPOOL-SYNC 8
.Os
.
.Sh NAME
.Nm zpool-sync
.Nd Force data to be written to primary storage of a ZFS storage pool and update reporting data
.Nd flush data to primary storage of ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm sync
.Oo Ar pool Oc Ns ...
.Oo Ar pool Oc Ns …
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm sync
.Op Ar pool ...
.Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL. It will also update administrative
information including quota reporting. Without arguments,
.Sy zpool sync
will sync all pools on the system. Otherwise, it will sync only the
specified pool(s).
.El
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pools.
.
.Sh SEE ALSO
.Xr zpoolconcepts 8 ,
.Xr zpool-export 8 ,
.Xr zpool-iostat 8
.Xr zpool-iostat 8 ,
.Xr zpoolconcepts 8

View File

@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,56 +26,54 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd February 25, 2020
.Dd May 27, 2021
.Dt ZPOOL-TRIM 8
.Os
.
.Sh NAME
.Nm zpool-trim
.Nd Initiate immediate TRIM operations for all free space in a ZFS storage pool
.Nd initiate TRIM of free space in ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm trim
.Op Fl dw
.Op Fl r Ar rate
.Op Fl c | Fl s
.Op Fl c Ns | Ns Fl s
.Ar pool
.Op Ar device Ns ...
.Oo Ar device Ns Oc Ns …
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm trim
.Op Fl dw
.Op Fl c | Fl s
.Ar pool
.Op Ar device Ns ...
.Xc
Initiates an immediate on-demand TRIM operation for all of the free space in
a pool. This operation informs the underlying storage devices of all blocks
a pool.
This operation informs the underlying storage devices of all blocks
in the pool which are no longer allocated and allows thinly provisioned
devices to reclaim the space.
.Pp
A manual on-demand TRIM operation can be initiated irrespective of the
.Sy autotrim
pool property setting. See the documentation for the
pool property setting.
See the documentation for the
.Sy autotrim
property above for the types of vdev devices which can be trimmed.
.Bl -tag -width Ds
.It Fl d -secure
Causes a secure TRIM to be initiated. When performing a secure TRIM, the
.It Fl d , -secure
Causes a secure TRIM to be initiated.
When performing a secure TRIM, the
device guarantees that data stored on the trimmed blocks has been erased.
This requires support from the device and is not supported by all SSDs.
.It Fl r -rate Ar rate
Controls the rate at which the TRIM operation progresses. Without this
option TRIM is executed as quickly as possible. The rate, expressed in bytes
.It Fl r , -rate Ar rate
Controls the rate at which the TRIM operation progresses.
Without this
option TRIM is executed as quickly as possible.
The rate, expressed in bytes
per second, is applied on a per-vdev basis and may be set differently for
each leaf vdev.
.It Fl c, -cancel
.It Fl c , -cancel
Cancel trimming on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
trimmed, the command will fail and no cancellation will occur on any device.
.It Fl s -suspend
.It Fl s , -suspend
Suspend trimming on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
@ -84,10 +81,10 @@ trimmed, the command will fail and no suspension will occur on any device.
Trimming can then be resumed by running
.Nm zpool Cm trim
with no flags on the relevant target devices.
.It Fl w -wait
.It Fl w , -wait
Wait until the devices are done being trimmed before returning.
.El
.El
.
.Sh SEE ALSO
.Xr zpool-initialize 8 ,
.Xr zpool-wait 8 ,

View File

@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -31,9 +30,10 @@
.Dd August 9, 2019
.Dt ZPOOL-UPGRADE 8
.Os
.
.Sh NAME
.Nm zpool-upgrade
.Nd Manage version and feature flags of ZFS storage pools
.Nd manage version and feature flags of ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm upgrade
@ -43,7 +43,8 @@
.Nm zpool
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Fl a Ns | Ns Ar pool Ns …
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
@ -56,29 +57,30 @@ These pools can continue to be used, but some features may not be available.
Use
.Nm zpool Cm upgrade Fl a
to enable all features on all pools (subject to the
.Fl o Ar compatibility
.Fl o Sy compatibility
property).
.It Xo
.Nm zpool
.Cm upgrade
.Fl v
.Xc
Displays legacy ZFS versions supported by the current software.
Displays legacy ZFS versions supported by this version of ZFS.
See
.Xr zpool-features 5
for a description of feature flags features supported by the current software.
for a description of the feature flags supported by this version of ZFS.
.It Xo
.Nm zpool
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Fl a Ns | Ns Ar pool Ns …
.Xc
Enables all supported features on the given pool.
.Pp
If the pool has specified compatibility feature sets using the
.Fl o Ar compatibility
.Fl o Sy compatibility
property, only the features present in all requested compatibility sets will be
enabled. If this property is set to
enabled.
If this property is set to
.Ar legacy
then no upgrade will take place.
.Pp
@ -94,15 +96,14 @@ Enables all supported features (from specified compatibility sets, if any) on al
pools.
.It Fl V Ar version
Upgrade to the specified legacy version.
If the
.Fl V
flag is specified, no features will be enabled on the pool.
If specified, no features will be enabled on the pool.
This option can only be used to increase the version number up to the last
supported legacy version number.
.El
.El
.
.Sh SEE ALSO
.Xr zpool-features 5 ,
.Xr zpool-history 8 ,
.Xr zpoolconcepts 8 ,
.Xr zpoolprops 8 ,
.Xr zpool-history 8
.Xr zpoolprops 8

View File

@ -27,31 +27,23 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd February 25, 2020
.Dd May 27, 2021
.Dt ZPOOL-WAIT 8
.Os
.
.Sh NAME
.Nm zpool-wait
.Nd Wait for background activity to stop in a ZFS storage pool
.Nd wait for activity to stop in a ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm wait
.Op Fl Hp
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns ...
.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns …
.Ar pool
.Op Ar interval
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm wait
.Op Fl Hp
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns ...
.Ar pool
.Op Ar interval
.Xc
Waits until all background activity of the given types has ceased in the given
pool.
The activity could cease because it has completed, or because it has been
@ -65,16 +57,26 @@ immediately.
These are the possible values for
.Ar activity ,
along with what each one waits for:
.Bd -literal
discard Checkpoint to be discarded
free 'freeing' property to become 0
initialize All initializations to cease
replace All device replacements to cease
remove Device removal to cease
resilver Resilver to cease
scrub Scrub to cease
trim Manual trim to cease
.Ed
.Bl -tag -compact -offset Ds -width "initialize"
.It Sy discard
Checkpoint to be discarded
.It Sy free
.Sy freeing
property to become
.Sy 0
.It Sy initialize
All initializations to cease
.It Sy replace
All device replacements to cease
.It Sy remove
Device removal to cease
.It Sy resilver
Resilver to cease
.It Sy scrub
Scrub to cease
.It Sy trim
Manual trim to cease
.El
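The blocking behaviour described above can be approximated with a plain polling loop.
The sketch below is illustrative only: it uses a stub in place of a real activity check (no zpool command is invoked), and the helper name `wait_for_idle` is hypothetical, not part of ZFS.

```shell
# Hedged sketch of what `zpool wait` does conceptually: poll until an
# activity-check command stops reporting work in progress.
# `wait_for_idle` and `still_busy` are illustrative stand-ins, not zpool APIs.
wait_for_idle() {
    # "$@" is a command that exits 0 while the activity is still running
    while "$@"; do
        sleep 1
    done
}

# Stub "activity" that stays busy for two polls, then finishes:
count=0
still_busy() {
    count=$((count + 1))
    [ "$count" -le 2 ]
}

wait_for_idle still_busy
echo "done after $count polls"
```

With a real pool the check command would instead inspect pool state; `zpool wait` does this internally and avoids the coarse sleep-based polling shown here.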
.Pp
If an
.Ar interval
@ -102,13 +104,13 @@ for standard date format.
See
.Xr date 1 .
.El
.El
.
.Sh SEE ALSO
.Xr zpool-status 8 ,
.Xr zpool-checkpoint 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-remove 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8 ,
.Xr zpool-scrub 8 ,
.Xr zpool-status 8 ,
.Xr zpool-trim 8

View File

@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,9 +26,10 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd August 9, 2019
.Dd June 2, 2021
.Dt ZPOOL 8
.Os
.
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
@ -39,8 +39,9 @@
.Nm
.Cm version
.Nm
.Cm <subcommand>
.Op Ar <args>
.Cm subcommand
.Op Ar arguments
.
.Sh DESCRIPTION
The
.Nm
@ -55,6 +56,7 @@ for information on managing datasets.
For an overview of creating and managing ZFS storage pools see the
.Xr zpoolconcepts 8
manual page.
.
.Sh SUBCOMMANDS
All subcommands that modify state are logged persistently to the pool in their
original form.
@ -67,24 +69,22 @@ The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?
.Fl ?\&
.Xc
Displays a help message.
.It Xo
.Nm
.Fl V, -version
.Fl V , -version
.Xc
An alias for the
.Nm zpool Cm version
subcommand.
.It Xo
.Nm
.Cm version
.Xc
Displays the software version of the
.Nm
userland utility and the zfs kernel module.
userland utility and the ZFS kernel module.
.El
.
.Ss Creation
.Bl -tag -width Ds
.It Xr zpool-create 8
@ -95,6 +95,7 @@ Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
.El
.
.Ss Destruction
.Bl -tag -width Ds
.It Xr zpool-destroy 8
@ -103,18 +104,17 @@ Destroys the given pool, freeing up any devices for other use.
Removes ZFS label information from the specified
.Ar device .
.El
.
.Ss Virtual Devices
.Bl -tag -width Ds
.It Xo
.Xr zpool-attach 8 /
.Xr zpool-detach 8
.Xr zpool-attach 8 Ns / Ns Xr zpool-detach 8
.Xc
Increases or decreases redundancy by
.Cm attach Ns -ing or
.Cm detach Ns -ing a device on an existing vdev (virtual device).
.Cm attach Ns ing or
.Cm detach Ns ing a device on an existing vdev (virtual device).
.It Xo
.Xr zpool-add 8 /
.Xr zpool-remove 8
.Xr zpool-add 8 Ns / Ns Xr zpool-remove 8
.Xc
Adds the specified virtual devices to the given pool,
or removes the specified device from the pool.
@ -123,6 +123,7 @@ Replaces an existing device (which may be faulted) with a new one.
.It Xr zpool-split 8
Creates a new pool by splitting all mirrors in an existing pool (which decreases its redundancy).
.El
.
.Ss Properties
Available pool properties listed in the
.Xr zpoolprops 8
@ -131,8 +132,7 @@ manual page.
.It Xr zpool-list 8
Lists the given pools along with a health status and space usage.
.It Xo
.Xr zpool-get 8 /
.Xr zpool-set 8
.Xr zpool-get 8 Ns / Ns Xr zpool-set 8
.Xc
Retrieves the given list of properties
.Po
@ -142,6 +142,7 @@ is used
.Pc
for the specified storage pool(s).
.El
.
.Ss Monitoring
.Bl -tag -width Ds
.It Xr zpool-status 8
@ -151,11 +152,12 @@ Displays logical I/O statistics for the given pools/vdevs. Physical I/Os may
be observed via
.Xr iostat 1 .
.It Xr zpool-events 8
Lists all recent events generated by the ZFS kernel modules. These events
are consumed by the
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare. For more information about the subclasses and event payloads
with a hot spare.
For more information about the subclasses and event payloads
that can be generated see the
.Xr zfs-events 5
man page.
@ -163,48 +165,51 @@ man page.
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.El
.
.Ss Maintenance
.Bl -tag -width Ds
.It Xr zpool-scrub 8
Begins a scrub or resumes a paused scrub.
.It Xr zpool-checkpoint 8
Checkpoints the current state of
.Ar pool
, which can be later restored by
.Nm zpool Cm import --rewind-to-checkpoint .
.Ar pool ,
which can be later restored by
.Nm zpool Cm import Fl -rewind-to-checkpoint .
.It Xr zpool-trim 8
Initiates an immediate on-demand TRIM operation for all of the free space in
a pool. This operation informs the underlying storage devices of all blocks
Initiates an immediate on-demand TRIM operation for all of the free space in a pool.
This operation informs the underlying storage devices of all blocks
in the pool which are no longer allocated and allows thinly provisioned
devices to reclaim the space.
.It Xr zpool-sync 8
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL. It will also update administrative
information including quota reporting. Without arguments,
.Sy zpool sync
will sync all pools on the system. Otherwise, it will sync only the
specified pool(s).
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
.It Xr zpool-upgrade 8
Manage the on-disk format version of storage pools.
.It Xr zpool-wait 8
Waits until all background activity of the given types has ceased in the given
pool.
.El
.
.Ss Fault Resolution
.Bl -tag -width Ds
.It Xo
.Xr zpool-offline 8
.Xr zpool-online 8
.Xr zpool-offline 8 Ns / Ns Xr zpool-online 8
.Xc
Takes the specified physical device offline or brings it online.
.It Xr zpool-resilver 8
Starts a resilver. If an existing resilver is already running it will be
restarted from the beginning.
Starts a resilver.
If an existing resilver is already running it will be restarted from the beginning.
.It Xr zpool-reopen 8
Reopen all the vdevs associated with the pool.
.It Xr zpool-clear 8
Clears device errors in a pool.
.El
.
.Ss Import & Export
.Bl -tag -width Ds
.It Xr zpool-import 8
@ -214,9 +219,10 @@ Exports the given pools from the system.
.It Xr zpool-reguid 8
Generates a new unique identifier for the pool.
.El
.
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -width Ds
.Bl -tag -compact -offset 4n -width "a"
.It Sy 0
Successful completion.
.It Sy 1
@ -224,74 +230,69 @@ An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.
.Sh EXAMPLES
.Bl -tag -width Ds
.It Sy Example 1 No Creating a RAID-Z Storage Pool
.Bl -tag -width "Exam"
.It Sy Example 1 : No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks.
.Bd -literal
# zpool create tank raidz sda sdb sdc sdd sde sdf
.Ed
.It Sy Example 2 No Creating a Mirrored Storage Pool
consists of six disks:
.Dl # Nm zpool Cm create Ar tank Sy raidz Ar sda sdb sdc sdd sde sdf
.
.It Sy Example 2 : No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks.
.Bd -literal
# zpool create tank mirror sda sdb mirror sdc sdd
.Ed
.It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
The following command creates an unmirrored pool using two disk partitions.
.Bd -literal
# zpool create tank sda1 sdb2
.Ed
.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
contains two disks:
.Dl # Nm zpool Cm create Ar tank Sy mirror Ar sda sdb Sy mirror Ar sdc sdd
.
.It Sy Example 3 : No Creating a ZFS Storage Pool by Using Partitions
The following command creates an unmirrored pool using two disk partitions:
.Dl # Nm zpool Cm create Ar tank sda1 sdb2
.
.It Sy Example 4 : No Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Bd -literal
# zpool create tank /path/to/file/a /path/to/file/b
.Ed
.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
.Dl # Nm zpool Cm create Ar tank /path/to/file/a /path/to/file/b
.
.It Sy Example 5 : No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Em tank ,
.Ar tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Bd -literal
# zpool add tank mirror sda sdb
.Ed
.It Sy Example 6 No Listing Available ZFS Storage Pools
.Dl # Nm zpool Cm add Ar tank Sy mirror Ar sda sdb
.
.It Sy Example 6 : No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Em zion
.Ar zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal
# zpool list
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 19.9G 8.43G 11.4G - 33% 42% 1.00x ONLINE -
tank 61.5G 20.0G 41.5G - 48% 32% 1.00x ONLINE -
zion - - - - - - - FAULTED -
.Ed
.It Sy Example 7 No Destroying a ZFS Storage Pool
.
.It Sy Example 7 : No Destroying a ZFS Storage Pool
The following command destroys the pool
.Em tank
and any datasets contained within.
.Bd -literal
# zpool destroy -f tank
.Ed
.It Sy Example 8 No Exporting a ZFS Storage Pool
.Ar tank
and any datasets contained within:
.Dl # Nm zpool Cm destroy Fl f Ar tank
.
.It Sy Example 8 : No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Em tank
so that they can be relocated or later imported.
.Bd -literal
# zpool export tank
.Ed
.It Sy Example 9 No Importing a ZFS Storage Pool
.Ar tank
so that they can be relocated or later imported:
.Dl # Nm zpool Cm export Ar tank
.
.It Sy Example 9 : No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Em tank
.Ar tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal
# zpool import
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm import
pool: tank
id: 15451357997522795478
state: ONLINE
@ -303,66 +304,58 @@ config:
sda ONLINE
sdb ONLINE
# zpool import tank
.No # Nm zpool Cm import Ar tank
.Ed
.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
.
.It Sy Example 10 : No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS Storage pools to the current version of
the software.
.Bd -literal
# zpool upgrade -a
the software:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm upgrade Fl a
This system is currently running ZFS version 2.
.Ed
.It Sy Example 11 No Managing Hot Spares
.
.It Sy Example 11 : No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Bd -literal
# zpool create tank mirror sda sdb spare sdc
.Ed
.Dl # Nm zpool Cm create Ar tank Sy mirror Ar sda sdb Sy spare Ar sdc
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Bd -literal
# zpool replace tank sda sdd
.Ed
.Dl # Nm zpool Cm replace Ar tank sda sdd
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Bd -literal
# zpool remove tank sdc
.Ed
.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
.Dl # Nm zpool Cm remove Ar tank sdc
.
.It Sy Example 12 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two, two-way
mirrors and mirrored log devices:
.Bd -literal
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
sde sdf
.Ed
.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
.Dl # Nm zpool Cm create Ar pool Sy mirror Ar sda sdb Sy mirror Ar sdc sdd Sy log mirror Ar sde sdf
.
.It Sy Example 13 : No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Bd -literal
# zpool add pool cache sdc sdd
.Ed
.Dl # Nm zpool Cm add Ar pool Sy cache Ar sdc sdd
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
option as follows:
.Bd -literal
# zpool iostat -v pool 5
.Ed
.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
subcommand as follows:
.Dl # Nm zpool Cm iostat Fl v Ar pool 5
.
.It Sy Example 14 : No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal
.Bd -literal -compact -offset Ds
pool: tank
state: ONLINE
scrub: none requested
@ -383,27 +376,22 @@ config:
.Ed
.Pp
The command to remove the mirrored log
.Sy mirror-2
is:
.Bd -literal
# zpool remove tank mirror-2
.Ed
.Ar mirror-2 No is:
.Dl # Nm zpool Cm remove Ar tank mirror-2
.Pp
The command to remove the mirrored data
.Sy mirror-1
is:
.Bd -literal
# zpool remove tank mirror-1
.Ed
.It Sy Example 15 No Displaying expanded space on a device
.Ar mirror-1 No is:
.Dl # Nm zpool Cm remove Ar tank mirror-1
.
.It Sy Example 15 : No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Em data .
.Ar data .
This pool is comprised of a single raidz vdev where one of its devices
increased its capacity by 10GB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal
# zpool list -v data
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list Fl v Ar data
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
data 23.9G 14.6G 9.30G - 48% 61% 1.00x ONLINE -
raidz1 23.9G 14.6G 9.30G - 48%
@ -411,16 +399,12 @@ data 23.9G 14.6G 9.30G - 48% 61% 1.00x ONLINE -
sdb - - - 10G -
sdc - - - - -
.Ed
.It Sy Example 16 No Adding output columns
.
.It Sy Example 16 : No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output with
.Fl c
option.
.Bd -literal
# zpool status -c vendor,model,size
.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm status Fl c Ar vendor , Ns Ar model , Ns Ar size
NAME STATE READ WRITE CKSUM vendor model size
tank ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
@ -431,7 +415,7 @@ option.
U13 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
U14 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
# zpool iostat -vc size
.No # Nm zpool Cm iostat Fl vc Ar size
capacity operations bandwidth
pool alloc free read write read write size
---------- ----- ----- ----- ----- ----- ----- ----
@ -440,124 +424,104 @@ rpool 14.6G 54.9G 4 55 250K 2.69M
---------- ----- ----- ----- ----- ----- ----- ----
.Ed
.El
.
.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZFS_ABORT"
.It Ev ZFS_ABORT
.Bl -tag -compact -width "ZPOOL_IMPORT_UDEV_TIMEOUT_MS"
.It Sy ZFS_ABORT
Cause
.Nm zpool
.Nm
to dump core on exit for the purposes of running
.Sy ::findleaks .
.El
.Bl -tag -width "ZFS_COLOR"
.It Ev ZFS_COLOR
.It Sy ZFS_COLOR
Use ANSI color in
.Nm zpool status
output.
.El
.Bl -tag -width "ZPOOL_IMPORT_PATH"
.It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool. This is a colon-separated list of directories in which
.Nm zpool
.It Sy ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool import .
.El
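A colon-separated search path like this is conventionally split the same way `PATH` is.
The POSIX-shell sketch below shows that assumed splitting on a sample value; the directories are examples only and zpool itself is not involved.

```shell
# Illustrative only: splitting a ZPOOL_IMPORT_PATH-style value the way a
# PATH-like colon-separated list is conventionally parsed in POSIX shell.
ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"

old_ifs=$IFS
IFS=:
set -- $ZPOOL_IMPORT_PATH   # field-split on ':' into positional parameters
IFS=$old_ifs

first=$1
ndirs=0
for dir in "$@"; do
    ndirs=$((ndirs + 1))
    printf 'search dir: %s\n' "$dir"
done
```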
.Bl -tag -width "ZPOOL_IMPORT_UDEV_TIMEOUT_MS"
.It Ev ZPOOL_IMPORT_UDEV_TIMEOUT_MS
.It Sy ZPOOL_IMPORT_UDEV_TIMEOUT_MS
The maximum time in milliseconds that
.Nm zpool import
will wait for an expected device to be available.
.El
.Bl -tag -width "ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE"
.It Ev ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
.It Sy ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
If set, suppress warning about non-native vdev ashift in
.Nm zpool status .
The value is not used, only the presence or absence of the variable matters.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
.It Ev ZPOOL_VDEV_NAME_GUID
.It Sy ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev guids by default. This behavior is identical to the
.Nm
subcommands to output vdev guids by default.
This behavior is identical to the
.Nm zpool Cm status Fl g
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
.It Sy ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm zpool
subcommands to follow links for vdev names by default. This behavior is identical to the
.Nm
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool Cm status Fl L
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
.It Ev ZPOOL_VDEV_NAME_PATH
.It Sy ZPOOL_VDEV_NAME_PATH
Cause
.Nm zpool
subcommands to output full vdev path names by default. This
behavior is identical to the
.Nm
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool Cm status Fl P
command line option.
.El
.Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
.It Ev ZFS_VDEV_DEVID_OPT_OUT
.It Sy ZFS_VDEV_DEVID_OPT_OUT
Older OpenZFS implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value is present in the pool's config.
.Pp
For example, a pool that originated on illumos platform would have a devid
For example, a pool that originated on the illumos platform would have a
.Sy devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux based pools.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding
them on
.Nm zpool create
.Nm zpool Cm create
or
.Nm zpool add
.Nm zpool Cm add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
.It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run the
.Nm zpool status/iostat
with the
.Fl c
option. Normally, only unprivileged users are allowed to run
.Pp
.It Sy ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
Normally, only unprivileged users are allowed to run
.Fl c .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_PATH"
.It Ev ZPOOL_SCRIPTS_PATH
.It Sy ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool status/iostat
with the
.Fl c
option. This is a colon-separated list of directories and overrides the default
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.El
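The scripts found along this search path are ordinary executables.
As described for the
.Fl c
option, they print name=value pairs on stdout, each of which becomes an extra column.
The script below is a minimal hypothetical example of that shape, not one shipped with OpenZFS, and the values are made up.

```shell
# Minimal sketch of a zpool.d-style column script: each name=value line
# printed on stdout becomes a column in `zpool status -c` / `zpool iostat -c`
# output. This script and its values are hypothetical examples.
emit_columns() {
    echo "vendor=ACME"
    echo "size=7.3T"
}

emit_columns
```

Dropped into a directory on this search path (or `~/.zpool.d`) and made executable, such a script could then be named as the argument to `-c`.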
.Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
.It Ev ZPOOL_SCRIPTS_ENABLED
.It Sy ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool status/iostat
with the
.Fl c
option. If
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool Cm status/iostat Fl c .
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
.El
.
.Sh INTERFACE STABILITY
.Sy Evolving
.
.Sh SEE ALSO
.Xr zfs-events 5 ,
.Xr zfs-module-parameters 5 ,

View File

@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -27,18 +26,20 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd August 9, 2019
.Dd June 2, 2021
.Dt ZPOOLCONCEPTS 8
.Os
.
.Sh NAME
.Nm zpoolconcepts
.Nd overview of ZFS storage pools
.
.Sh DESCRIPTION
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.Bl -tag -width "special"
.It Sy disk
A block device, typically located under
.Pa /dev .
@ -58,13 +59,14 @@ When given a whole disk, ZFS automatically labels the disk, if necessary.
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
file is only as good as the file system on which it resides.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
failing without losing data.
A mirror with
.Em N No disks of size Em X No can hold Em X No bytes and can withstand Em N-1
devices failing without losing data.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
@ -72,7 +74,7 @@ eliminates the RAID-5
.Pq in which data and parity become inconsistent after a power loss .
Data and parity is striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
A raidz group can have single, double, or triple parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
@ -87,39 +89,42 @@ The
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing without losing data.
A raidz group with
.Em N No disks of size Em X No with Em P No parity disks can hold approximately
.Em (N-P)*X No bytes and can withstand Em P No devices failing without losing data.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
.It Sy draid , draid1 , draid2 , draid3
A variant of raidz that provides integrated distributed hot spares which
allows for faster resilvering while retaining the benefits of raidz.
A dRAID vdev is constructed from multiple internal raidz groups, each with D
data devices and P parity devices.
A dRAID vdev is constructed from multiple internal raidz groups, each with
.Em D No data devices and Em P No parity devices.
These groups are distributed over all of the children in order to fully
utilize the available disk performance.
.Pp
Unlike raidz, dRAID uses a fixed stripe width (padding as necessary with
zeros) to allow fully sequential resilvering.
This fixed stripe width significantly affects both usable capacity and IOPS.
For example, with the default D=8 and 4k disk sectors the minimum allocation
size is 32k.
For example, with the default
.Em D=8 No and Em 4kB No disk sectors the minimum allocation size is Em 32kB .
If using compression, this relatively large allocation size can reduce the
effective compression ratio.
When using ZFS volumes and dRAID the default volblocksize property is increased
to account for the allocation size.
When using ZFS volumes and dRAID, the default of the
.Sy volblocksize
property is increased to account for the allocation size.
If a dRAID pool will hold a significant amount of small blocks, it is
recommended to also add a mirrored
.Sy special
vdev to store those blocks.
.Pp
In regards to IO/s, performance is similar to raidz since for any read all D
data disks must be accessed.
In regard to I/O, performance is similar to raidz since for any read all
.Em D No data disks must be accessed.
Delivered random IOPS can be reasonably approximated as
floor((N-S)/(D+P))*<single-drive-IOPS>.
.Sy floor((N-S)/(D+P))*single_drive_IOPS .
.Pp
Like raidz a dRAID can have single-, double-, or triple-parity. The
Like raidz, a dRAID can have single, double, or triple parity.
The
.Sy draid1 ,
.Sy draid2 ,
and
@ -130,33 +135,34 @@ The
vdev type is an alias for
.Sy draid1 .
.Pp
A dRAID with N disks of size X, D data disks per redundancy group, P parity
level, and S distributed hot spares can hold approximately (N-S)*(D/(D+P))*X
bytes and can withstand P device(s) failing without losing data.
.It Sy draid[<parity>][:<data>d][:<children>c][:<spares>s]
A dRAID with
.Em N No disks of size Em X , D No data disks per redundancy group, Em P
.No parity level, and Em S No distributed hot spares can hold approximately
.Em (N-S)*(D/(D+P))*X No bytes and can withstand Em P
devices failing without losing data.
.It Sy draid Ns Oo Ar parity Oc Ns Oo Sy \&: Ns Ar data Ns Sy d Oc Ns Oo Sy \&: Ns Ar children Ns Sy c Oc Ns Oo Sy \&: Ns Ar spares Ns Sy s Oc
A non-default dRAID configuration can be specified by appending one or more
of the following optional arguments to the
.Sy draid
keyword.
.Pp
.Em parity
- The parity level (1-3).
.Pp
.Em data
- The number of data devices per redundancy group.
In general a smaller value of D will increase IOPS, improve the compression ratio, and speed up resilvering at the expense of total usable capacity.
Defaults to 8, unless N-P-S is less than 8.
.Pp
.Em children
- The expected number of children.
keyword:
.Bl -tag -compact -width "children"
.It Ar parity
The parity level (1-3).
.It Ar data
The number of data devices per redundancy group.
In general, a smaller value of
.Em D No will increase IOPS, improve the compression ratio,
and speed up resilvering at the expense of total usable capacity.
Defaults to
.Em 8 , No unless Em N-P-S No is less than Em 8 .
.It Ar children
The expected number of children.
Useful as a cross-check when listing a large number of devices.
An error is returned when the provided number of children differs.
.Pp
.Em spares
- The number of distributed hot spares.
.It Ar spares
The number of distributed hot spares.
Defaults to zero.
.El
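.Pp
For example, the following would create a double-parity dRAID of twelve disks
with eight data disks per group and one distributed spare
.Pq device names are illustrative :
.Dl # Nm zpool Cm create Ar tank Sy draid2:8d:12c:1s Ar sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl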
.It Sy spare
A pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
@ -174,13 +180,15 @@ section.
.It Sy dedup
A device dedicated solely for deduplication tables.
The redundancy of this device should match the redundancy of the other normal
devices in the pool. If more than one dedup device is specified, then
devices in the pool.
If more than one dedup device is specified, then
allocations are load-balanced between those devices.
.It Sy special
A device dedicated solely for allocating various kinds of internal metadata,
and optionally small file blocks.
The redundancy of this device should match the redundancy of the other normal
devices in the pool. If more than one special device is specified, then
devices in the pool.
If more than one special device is specified, then
allocations are load-balanced between those devices.
.Pp
For more information on special allocations, see the
@ -209,17 +217,15 @@ among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
Virtual devices are specified one at a time on the command line,
separated by whitespace.
Keywords like
.Sy mirror No and Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ed
For example, the following creates a pool with two root vdevs,
each a mirror of two disks:
.Dl # Nm zpool Cm create Ar mypool Sy mirror Ar sda sdb Sy mirror Ar sdc sdd
.
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
@ -232,17 +238,17 @@ While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
A pool's health status is described by one of three states:
.Sy online , degraded , No or Sy faulted .
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
The health of the top-level vdev, such as a mirror or raidz device,
is potentially impacted by the state of its associated vdevs,
or component devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
@ -253,7 +259,7 @@ Sufficient replicas exist to continue functioning.
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.Bl -bullet -compact
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
@ -271,7 +277,7 @@ Insufficient replicas exist to continue functioning.
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.Bl -bullet -compact
.It
The device could be opened, but the contents did not match expected values.
.It
@ -303,19 +309,20 @@ The checksum errors are reported in
and
.Nm zpool Cm events .
When a block is stored redundantly, a damaged block may be reconstructed
(e.g. from RAIDZ parity or a mirrored copy).
(e.g. from raidz parity or a mirrored copy).
In this case, ZFS reports the checksum error against the disks that contained
damaged data.
If a block is unable to be reconstructed (e.g. due to 3 disks being damaged
in a RAIDZ2 group), it is not possible to determine which disks were silently
in a raidz2 group), it is not possible to determine which disks were silently
corrupted.
In this case, checksum errors are reported for all disks on which the block
is stored.
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
If a device is removed and later re-attached to the system,
ZFS attempts to online the device automatically.
Device attachment detection is hardware-dependent
and might not be supported on all platforms.
.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
@ -325,9 +332,7 @@ To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Dl # Nm zpool Cm create Ar pool Sy mirror Ar sda sdb Sy spare Ar sdc sdd
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
@ -344,10 +349,11 @@ If a pool has a shared spare that is currently being used, the pool can not be
exported since other pools may use this shared spare, which may lead to
potential data corruption.
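.Pp
A spare can also be added to an existing pool
.Pq device name is illustrative :
.Dl # Nm zpool Cm add Ar pool Sy spare Ar sde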
.Pp
Shared spares add some risk. If the pools are imported on different hosts, and
both pools suffer a device failure at the same time, both could attempt to use
the spare at the same time. This may not be detected, resulting in data
corruption.
Shared spares add some risk.
If the pools are imported on different hosts,
and both pools suffer a device failure at the same time,
both could attempt to use the spare at the same time.
This may not be detected, resulting in data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
@ -357,12 +363,14 @@ pools.
The
.Sy draid
vdev type provides distributed hot spares.
These hot spares are named after the dRAID vdev they're a part of (
.Qq draid1-2-3 specifies spare 3 of vdev 2, which is a single parity dRAID
) and may only be used by that dRAID vdev.
These hot spares are named after the dRAID vdev they're a part of
.Po Sy draid1 Ns - Ns Ar 2 Ns - Ns Ar 3 No specifies spare Ar 3 No of vdev Ar 2 ,
.No which is a single parity dRAID Pc
and may only be used by that dRAID vdev.
Otherwise, they behave the same as normal hot spares.
.Pp
Spares cannot replace log devices.
.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
@ -375,26 +383,25 @@ By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Dl # Nm zpool Cm create Ar pool sda sdb Sy log Ar sdc
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
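.Pp
For instance, a mirrored log can be specified at pool creation time
.Pq device names are illustrative :
.Dl # Nm zpool Cm create Ar pool sda sdb Sy log mirror Ar sdc sdd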
.Pp
Log devices can be added, replaced, attached, detached and removed. In
addition, log devices are imported and exported as part of the pool
Log devices can be added, replaced, attached, detached and removed.
In addition, log devices are imported and exported as part of the pool
that contains them.
Mirrored devices can be removed by specifying the top-level mirror vdev.
.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allow much more of this
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
@ -403,9 +410,7 @@ To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Dl # Nm zpool Cm create Ar pool sda sdb Sy cache Ar sdc sdd
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
@ -415,29 +420,36 @@ configuration.
The content of the cache devices is persistent across reboots and restored
asynchronously when importing the pool in L2ARC (persistent L2ARC).
This can be disabled by setting
.Sy l2arc_rebuild_enabled = 0 .
For cache devices smaller than 1GB we do not write the metadata structures
required for rebuilding the L2ARC in order not to waste space. This can be
changed with
.Sy l2arc_rebuild_enabled Ns = Ns Sy 0 .
For cache devices smaller than
.Em 1GB ,
we do not write the metadata structures
required for rebuilding the L2ARC in order not to waste space.
This can be changed with
.Sy l2arc_rebuild_blocks_min_l2size .
The cache device header (512 bytes) is updated even if no metadata structures
are written. Setting
.Sy l2arc_headroom = 0
The cache device header
.Pq Em 512B
is updated even if no metadata structures are written.
Setting
.Sy l2arc_headroom Ns = Ns Sy 0
will result in scanning the full-length ARC lists for cacheable content to be
written in L2ARC (persistent ARC). If a cache device is added with
written in L2ARC (persistent ARC).
If a cache device is added with
.Nm zpool Cm add
its label and header will be overwritten and its contents are not going to be
restored in L2ARC, even if the device was previously part of the pool. If a
cache device is onlined with
restored in L2ARC, even if the device was previously part of the pool.
If a cache device is onlined with
.Nm zpool Cm online
its contents will be restored in L2ARC. This is useful in case of memory pressure
its contents will be restored in L2ARC.
This is useful in case of memory pressure
where the contents of the cache device are not fully restored in L2ARC.
The user can off/online the cache device when there is less memory pressure
The user can off- and online the cache device when there is less memory pressure
in order to fully restore its contents to L2ARC.
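.Pp
For example, to restore the contents of a previously offlined cache device
.Pq device name is illustrative :
.Dl # Nm zpool Cm online Ar pool sdc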
.
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions (e.g
.Nm zfs Cm destroy
), an administrator can checkpoint the pool's state and in the case of a
Before starting critical procedures that include destructive actions
.Pq like Nm zfs Cm destroy ,
an administrator can checkpoint the pool's state and in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
@ -445,59 +457,56 @@ successfully.
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to vdev
configuration.
Thus, while a pool has a checkpoint certain operations are not allowed.
Thus, certain operations are not allowed while a pool has a checkpoint.
Specifically, vdev removal/attach/detach, mirror splitting, and
changing the pool's guid.
Adding a new vdev is supported but in the case of a rewind it will have to be
changing the pool's GUID.
Adding a new vdev is supported, but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Bd -literal
# zpool checkpoint pool
.Ed
.Dl # Nm zpool Cm checkpoint Ar pool
.Pp
To later rewind to its checkpointed state, you need to first export it and
then rewind it during import:
.Bd -literal
# zpool export pool
# zpool import --rewind-to-checkpoint pool
.Ed
.Dl # Nm zpool Cm export Ar pool
.Dl # Nm zpool Cm import Fl -rewind-to-checkpoint Ar pool
.Pp
To discard the checkpoint from a pool:
.Bd -literal
# zpool checkpoint -d pool
.Ed
.Dl # Nm zpool Cm checkpoint Fl d Ar pool
.Pp
Dataset reservations (controlled by the
.Nm reservation
or
.Nm refreservation
zfs properties) may be unenforceable while a checkpoint exists, because the
.Sy reservation No and Sy refreservation
properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.
.Ss Special Allocation Class
The allocations in the special class are dedicated to specific block types.
Allocations in the special class are dedicated to specific block types.
By default this includes all metadata, the indirect blocks of user data, and
any deduplication tables. The class can also be provisioned to accept
small file blocks.
any deduplication tables.
The class can also be provisioned to accept small file blocks.
.Pp
A pool must always have at least one normal (non-dedup/special) vdev before
other devices can be assigned to the special class. If the special class
becomes full, then allocations intended for it will spill back into the
normal class.
A pool must always have at least one normal
.Pq non- Ns Sy dedup Ns /- Ns Sy special
vdev before
other devices can be assigned to the special class.
If the
.Sy special
class becomes full, then allocations intended for it
will spill back into the normal class.
.Pp
Deduplication tables can be excluded from the special class by setting the
Deduplication tables can be excluded from the special class by unsetting the
.Sy zfs_ddt_data_is_special
zfs module parameter to false (0).
ZFS module parameter.
.Pp
Inclusion of small file blocks in the special class is opt-in. Each dataset
can control the size of small file blocks allowed in the special class by
setting the
Inclusion of small file blocks in the special class is opt-in.
Each dataset can control the size of small file blocks allowed
in the special class by setting the
.Sy special_small_blocks
dataset property. It defaults to zero, so you must opt-in by setting it to a
non-zero value. See
.Xr zfs 8
for more info on setting this property.
property to nonzero.
See
.Xr zfsprops 8
for more info on this property.
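.Pp
For example, to route file blocks of up to 32kB to the special class
.Pq dataset name is illustrative :
.Dl # Nm zfs Cm set Sy special_small_blocks Ns = Ns Sy 32K Ar pool/dataset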

View File

@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@ -28,19 +27,21 @@
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
.\"
.Dd August 9, 2019
.Dd May 27, 2021
.Dt ZPOOLPROPS 8
.Os
.
.Sh NAME
.Nm zpoolprops
.Nd available properties for ZFS storage pools
.Nd properties of ZFS storage pools
.
.Sh DESCRIPTION
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.Bl -tag -width "unsupported@guid"
.It Cm allocated
Amount of storage used within the pool.
See
@ -65,11 +66,13 @@ The space can be claimed for the pool by bringing it online with
or using
.Nm zpool Cm online Fl e .
.It Sy fragmentation
The amount of fragmentation in the pool. As the amount of space
The amount of fragmentation in the pool.
As the amount of space
.Sy allocated
increases, it becomes more difficult to locate
.Sy free
space. This may result in lower write performance compared to pools with more
space.
This may result in lower write performance compared to pools with more
unfragmented free space.
.It Sy free
The amount of free space available in the pool.
@ -81,8 +84,9 @@ The zpool
.Sy free
property is not generally useful for this purpose, and can be substantially more than the zfs
.Sy available
space. This discrepancy is due to several factors, including raidz parity; zfs
reservation, quota, refreservation, and refquota properties; and space set aside by
space.
This discrepancy is due to several factors, including raidz parity;
zfs reservation, quota, refreservation, and refquota properties; and space set aside by
.Sy spa_slop_shift
(see
.Xr zfs-module-parameters 5
@ -107,14 +111,14 @@ A unique identifier for the pool.
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time we load the pool (e.g. does
property, this identifier is generated every time we load the pool (i.e. does
not persist across imports/exports) and never changes while the pool is loaded
(even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
.It Sy unsupported@ Ns Em guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
@ -176,19 +180,24 @@ Pool sector size exponent, to the power of
.Sy ashift ) .
Values from 9 to 16, inclusive, are valid; also, the
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list. I/O operations will be aligned
to the specified size boundaries. Additionally, the minimum (disk)
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size of the
underlying disks. The typical case for setting this property is when
space vs. performance trade-off.
For optimal performance, the pool sector size should be greater than
or equal to the sector size of the underlying disks.
The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift=12
(which is 1<<12 = 4096). When set, this property is
.Sy ashift Ns = Ns Sy 12
(which is
.Sy 1<<12 No = Sy 4096 ) .
When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace). Changing this value will not modify any existing
attach and replace).
Changing this value will not modify any existing
vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B sectors disk with a newer 4KiB
sectors device: this will probably result in bad performance but at the
@ -222,40 +231,44 @@ This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths setup by
vdev_id.conf. See the
vdev_id.conf.
See the
.Xr vdev_id 8
man page for more details.
manual page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running. See the
running.
See the
.Xr zed 8
man page for more details.
manual page for more details.
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed. This allows block device vdevs which support
will be periodically trimmed.
This allows block device vdevs which support
BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
supports hole-punching, to reclaim unused blocks. The default setting for
this property is
supports hole-punching, to reclaim unused blocks.
The default value for this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free. Instead,
it will optimistically delay allowing smaller ranges to be aggregated in to
a few larger ones. These can then be issued more efficiently to the storage.
Automatic TRIM does not immediately reclaim blocks after a free.
Instead, it will optimistically delay allowing smaller ranges to be aggregated
into a few larger ones.
These can then be issued more efficiently to the storage.
TRIM on L2ARC devices is enabled by setting
.Sy l2arc_trim_ahead Ns > Ns Sy 0 .
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices. This will vary
depending of how well the specific device handles these commands. For
lower end devices it is often possible to achieve most of the benefits
significant stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
For lower-end devices it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
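.Pp
For example, to enable automatic TRIM on an existing pool:
.Dl # Nm zpool Cm set Sy autotrim Ns = Ns Sy on Ar pool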
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns Op / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
@ -286,20 +299,24 @@ A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy compatibility Ns = Ns Ar off | legacy | file Bq , Ns Ar file Ns ...
.It Sy compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
Specifies that the pool maintain compatibility with specific feature sets.
When set to
.Sy off
(or unset); compatibility is disabled (all features are enabled); when set to
.Sy legacy Ns ;
no features are enabled. When set to a comma-separated list of
filenames (each filename may either be an absolute path, or relative to
.Pa /etc/zfs/compatibility.d or Pa /usr/share/zfs/compatibility.d Ns )
(or unset) compatibility is disabled (all features may be enabled); when set to
.Sy legacy ,
no features may be enabled.
When set to a comma-separated list of filenames
(each filename may either be an absolute path, or relative to
.Pa /etc/zfs/compatibility.d
or
.Pa /usr/share/zfs/compatibility.d )
the lists of requested features are read from those files, separated by
whitespace and/or commas. Only features present in all files are enabled.
whitespace and/or commas.
Only features present in all files may be enabled.
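.Pp
For example, to restrict a pool to features supported by OpenZFS 2.0
.Pq feature-set file names may vary by distribution :
.Dl # Nm zpool Cm set Sy compatibility Ns = Ns Ar openzfs-2.0-linux Ar pool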
.Pp
See
.Xr zpool-features 5 Ns ,
.Xr zpool-features 5 ,
.Xr zpool-create 8
and
.Xr zpool-upgrade 8
@ -358,25 +375,30 @@ Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option. This property is intended to be used in failover configurations
option.
This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only. It does not protect against an
Multihost provides protection on import only.
It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Sy zpool create.
.Nm zpool Cm create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use. See
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page. In order to enable this property each host must set a unique hostid.
manual page.
In order to enable this property each host must set a unique hostid.
See
.Xr genhostid 1
.Xr zgenhostid 8
.Xr spl-module-parameters 5
for additional details. The default value is
for additional details.
The default value is
.Sy off .
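.Pp
For example, to generate a hostid and then enable multihost protection:
.Dl # Nm zgenhostid
.Dl # Nm zpool Cm set Sy multihost Ns = Ns Sy on Ar pool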
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.

View File

@ -18,14 +18,15 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2020 by Delphix. All rights reserved.
.\"
.Dd May 8, 2021
.Dt ZSTREAM 8
.Os
.
.Sh NAME
.Nm zstream
.Nd manipulate zfs send streams
.Nd manipulate ZFS send streams
.Sh SYNOPSIS
.Nm
.Cm dump
@ -38,11 +39,11 @@
.Nm
.Cm token
.Ar resume_token
.
.Sh DESCRIPTION
.sp
The
.Sy zstream
utility manipulates zfs send streams, which are the output of the
utility manipulates ZFS send streams output by the
.Sy zfs send
command.
.Bl -tag -width ""
@ -102,16 +103,15 @@ command is provided a
containing a deduplicated send stream, and outputs an equivalent
non-deduplicated send stream on standard output.
Therefore, a deduplicated send stream can be received by running:
.Bd -literal
# zstream redup DEDUP_STREAM_FILE | zfs receive ...
.Ed
.Dl # Nm zstream Cm redup Pa DEDUP_STREAM_FILE | Nm zfs Cm receive No …
.Bl -tag -width "-D"
.It Fl v
Verbose.
Print summary of converted records.
.El
.El
.
.Sh SEE ALSO
.Xr zfs 8 ,
.Xr zfs-send 8 ,
.Xr zfs-receive 8
.Xr zfs-receive 8 ,
.Xr zfs-send 8