.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
.\" Copyright (c) 2023, Klara Inc.
.\"
.Dd January 14, 2024
.Dt ZPOOLPROPS 7
.Os
.
.Sh NAME
.Nm zpoolprops
.Nd properties of ZFS storage pools
.
.Sh DESCRIPTION
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
User properties have no effect on ZFS behavior.
Use them to annotate pools in a way that is meaningful in your environment.
For more information about user properties, see the
.Sx User Properties
section.
.Pp
The following are read-only properties:
.Bl -tag -width "unsupported@guid"
.It Sy allocated
Amount of storage used within the pool.
See
.Sy fragmentation
and
.Sy free
for more information.
.It Sy bcloneratio
The ratio of the total amount of storage that would be required to store all
the cloned blocks without cloning to the actual storage used.
The
.Sy bcloneratio
property is calculated as:
.Pp
.Sy ( ( bclonesaved + bcloneused ) * 100 ) / bcloneused
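.Pp
For example (values chosen purely for illustration), a pool storing 1 GiB of
cloned blocks that would have required an additional 3 GiB without cloning
yields ((3 + 1) * 100) / 1 = 400, i.e. a ratio of 4.00x.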
.It Sy bclonesaved
The amount of additional storage that would be required if block cloning
was not used.
.It Sy bcloneused
The amount of storage used by cloned blocks.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy dedup_table_size
Total on-disk size of the deduplication table.
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
On whole-disk vdevs, this is the space beyond the end of the GPT –
typically occurring when a LUN is dynamically expanded
or a disk replaced with a larger one.
On partition vdevs, this is the space appended to the partition after it was
added to the pool – most likely by resizing it in-place.
The space can be claimed for the pool by bringing it online with
.Sy autoexpand=on
or using
.Nm zpool Cm online Fl e .
.It Sy fragmentation
The amount of fragmentation in the pool.
As the amount of space
.Sy allocated
increases, it becomes more difficult to locate
.Sy free
space.
This may result in lower write performance compared to pools with more
unfragmented free space.
.It Sy free
The amount of free space available in the pool.
By contrast, the
.Xr zfs 8
.Sy available
property describes how much new data can be written to ZFS filesystems/volumes.
The zpool
.Sy free
property is not generally useful for this purpose, and can be substantially more
than the zfs
.Sy available
space.
This discrepancy is due to several factors, including raidz parity;
zfs reservation, quota, refreservation, and refquota properties; and space set
aside by
.Sy spa_slop_shift
(see
.Xr zfs 4
for more information).
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy guid
A unique identifier for the pool.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy leaked
Space not released while
.Sy freeing
due to corruption, now permanently leaked into the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time the pool is loaded (i.e. does
not persist across imports/exports) and never changes while the pool is loaded
(even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 7
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
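.Pp
For example, an unknown pool (here named
.Ar tank ,
an illustrative name) could be imported for inspection under an alternate root
with:
.Bd -literal -compact -offset Ds
# zpool import -R /mnt tank
.Ed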
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
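.Pp
For example, a pool named
.Ar tank
(an illustrative name) could be imported read-only with:
.Bd -literal -compact -offset Ds
# zpool import -o readonly=on tank
.Ed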
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Ar ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift ) .
Values from 9 to 16, inclusive, are valid; also, the
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space/performance trade-off.
For optimal performance, the pool sector size should be greater than
or equal to the sector size of the underlying disks.
The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift Ns = Ns Sy 12
(which is
.Sy 1<<12 No = Sy 4096 ) .
When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace).
Changing this value will not modify any existing
vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B sectors disk with a newer 4KiB
sectors device: this will probably result in bad performance but at the
same time could prevent loss of data.
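.Pp
For example, a new pool on 4KiB-sector disks (pool and device names here are
illustrative) could be created with:
.Bd -literal -compact -offset Ds
# zpool create -o ashift=12 tank mirror sda sdb
.Ed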
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf.
See the
.Xr vdev_id 8
manual page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running.
See the
.Xr zed 8
manual page for more details.
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed.
This allows block device vdevs which support
BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
supports hole-punching, to reclaim unused blocks.
The default value for this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free.
Instead, it will optimistically delay allowing smaller ranges to be aggregated
into a few larger ones.
These can then be issued more efficiently to the storage.
TRIM on L2ARC devices is enabled by setting
.Sy l2arc_trim_ahead > 0 .
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
For lower-end devices it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns Op / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the value
.Sy none
creates a temporary pool that is never cached, and the
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
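.Pp
For example, a pool cached in an alternate file (the path and pool name here
are illustrative) could be created and later imported with:
.Bd -literal -compact -offset Ds
# zpool create -o cachefile=/etc/zfs/alternate.cache tank sda
# zpool import -c /etc/zfs/alternate.cache tank
.Ed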
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
Specifies that the pool maintain compatibility with specific feature sets.
When set to
.Sy off
(or unset) compatibility is disabled (all features may be enabled); when set to
.Sy legacy
no features may be enabled.
When set to a comma-separated list of filenames
(each filename may either be an absolute path, or relative to
.Pa /etc/zfs/compatibility.d
or
.Pa /usr/share/zfs/compatibility.d )
the lists of requested features are read from those files, separated by
whitespace and/or commas.
Only features present in all files may be enabled.
.Pp
See
.Xr zpool-features 7 ,
.Xr zpool-create 8
and
.Xr zpool-upgrade 8
for more information on the operation of compatibility feature sets.
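.Pp
For example, a pool restricted to a particular release's feature set (the
feature-set file, pool, and device names here are illustrative) could be
created with:
.Bd -literal -compact -offset Ds
# zpool create -o compatibility=openzfs-2.1-linux tank sda
.Ed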
.It Sy dedup_table_quota Ns = Ns Ar number Ns | Ns Sy none Ns | Ns Sy auto
This property sets a limit on the on-disk size of the pool's dedup table.
Entries will not be added to the dedup table once this size is reached;
if a dedup table already exists and is larger than this size, it will not
be shrunk as part of setting this property.
Existing entries will still have their reference counts updated.
.Pp
The actual size limit of the table may be above or below the quota,
depending on the actual on-disk size of the entries (which may be
approximated for purposes of calculating the quota).
That is, setting a quota size of 1M may result in the maximum size being
slightly below, or slightly above, that value.
Set to
.Sy none
to disable.
In automatic mode, which is the default, the size of a dedicated dedup vdev
is used as the quota limit.
.Pp
The
.Sy dedup_table_quota
property works for both legacy and fast dedup tables.
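.Pp
For example, a fixed 10 GiB quota could be set on an illustrative pool named
.Ar tank
with:
.Bd -literal -compact -offset Ds
# zpool set dedup_table_quota=10G tank
.Ed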
.It Sy dedupditto Ns = Ns Ar number
This property is deprecated and no longer has any effect.
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared with
.Nm zpool Cm clear .
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 7
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only.
It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Nm zpool Cm create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs 4
manual page.
In order to enable this property, each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl 4
for additional details.
The default value is
.Sy off .
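.Pp
For example, after generating a unique hostid on each host, multihost could be
enabled on an illustrative pool named
.Ar tank
with:
.Bd -literal -compact -offset Ds
# zgenhostid
# zpool set multihost=on tank
.Ed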
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.
.Ss User Properties
In addition to the standard native properties, ZFS supports arbitrary user
properties.
User properties have no effect on ZFS behavior, but applications or
administrators can use them to annotate pools.
.Pp
User property names must contain a colon
.Pq Qq Sy \&:
character to distinguish them from native properties.
They may contain lowercase letters, numbers, and the following punctuation
characters: colon
.Pq Qq Sy \&: ,
dash
.Pq Qq Sy - ,
period
.Pq Qq Sy \&. ,
and underscore
.Pq Qq Sy _ .
The expected convention is that the property name is divided into two portions
such as
.Ar module : Ns Ar property ,
but this namespace is not enforced by ZFS.
User property names can be at most 256 characters, and cannot begin with a dash
.Pq Qq Sy - .
.Pp
When making programmatic use of user properties, it is strongly suggested to use
a reversed DNS domain name for the
.Ar module
component of property names to reduce the chance that two
independently-developed packages use the same property name for different
purposes.
.Pp
The values of user properties are arbitrary strings and
are never validated.
All of the commands that operate on properties
.Po Nm zpool Cm list ,
.Nm zpool Cm get ,
.Nm zpool Cm set ,
and so forth
.Pc
can be used to manipulate both native properties and user properties.
Use
.Nm zpool Cm set Ar name Ns =
to clear a user property.
Property values are limited to 8192 bytes.
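.Pp
For example, a user property in a reversed-DNS namespace (names chosen purely
for illustration) could be set and later cleared with:
.Bd -literal -compact -offset Ds
# zpool set com.example:backup-target=offsite1 tank
# zpool set com.example:backup-target= tank
.Ed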