.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2021 by Delphix. All rights reserved.
|
2019-11-13 17:21:07 +00:00
|
|
|
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
|
|
|
|
.\" Copyright (c) 2017 Datto Inc.
|
|
|
|
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
|
|
|
|
.\" Copyright 2017 Nexenta Systems, Inc.
|
|
|
|
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
|
|
|
|
.\"
|
2021-05-27 00:46:40 +00:00
|
|
|
.Dd May 27, 2021
|
2019-11-13 17:21:07 +00:00
|
|
|
.Dt ZPOOL-WAIT 8
|
2020-08-21 18:55:47 +00:00
|
|
|
.Os
|
2021-05-27 00:46:40 +00:00
|
|
|
.
|
2019-11-13 17:21:07 +00:00
|
|
|
.Sh NAME
|
2020-10-22 18:28:10 +00:00
|
|
|
.Nm zpool-wait
|
2021-05-27 00:46:40 +00:00
|
|
|
.Nd wait for activity to stop in a ZFS storage pool
|
2019-11-13 17:21:07 +00:00
|
|
|
.Sh SYNOPSIS
|
2020-10-22 18:28:10 +00:00
|
|
|
.Nm zpool
|
2019-11-13 17:21:07 +00:00
|
|
|
.Cm wait
|
|
|
|
.Op Fl Hp
|
|
|
|
.Op Fl T Sy u Ns | Ns Sy d
|
2021-05-27 00:46:40 +00:00
|
|
|
.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns …
|
2019-11-13 17:21:07 +00:00
|
|
|
.Ar pool
|
|
|
|
.Op Ar interval
|
2021-05-27 00:46:40 +00:00
|
|
|
.
|
2019-11-13 17:21:07 +00:00
|
|
|
.Sh DESCRIPTION
Waits until all background activity of the given types has ceased in the given
pool.
The activity could cease because it has completed, or because it has been
paused or canceled by a user, or because the pool has been exported or
destroyed.
If no activities are specified, the command waits until background activity of
every type listed below has ceased.
If there is no activity of the given types in progress, the command returns
immediately.
.Pp
These are the possible values for
.Ar activity ,
along with what each one waits for:
.Bl -tag -compact -offset Ds -width "raidz_expand"
.It Sy discard
Checkpoint to be discarded
.It Sy free
.Sy freeing
property to become
.Sy 0
.It Sy initialize
All initializations to cease
.It Sy replace
All device replacements to cease
.It Sy remove
Device removal to cease
.It Sy resilver
Resilver to cease
.It Sy scrub
Scrub to cease
.It Sy trim
Manual trim to cease
.It Sy raidz_expand
Attaching to a RAID-Z vdev to complete
.El
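.Pp
For example, to block until an in-progress scrub has finished, one might run
the following (the pool name
.Ar tank
is purely illustrative):
.Bd -literal -compact -offset Ds
# "tank" is an illustrative pool name; wait for its scrub to finish
zpool wait -t scrub tank
.Ed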
.Pp
If an
.Ar interval
is provided, the amount of work remaining, in bytes, for each activity is
printed every
.Ar interval
seconds.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl p
Display numbers in parsable (exact) values.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 1 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.El
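.Pp
Combining these options, one might print the exact number of bytes of resilver
work remaining every 10 seconds, without headers, for the illustrative pool
.Ar tank :
.Bd -literal -compact -offset Ds
# parsable, headerless output refreshed every 10 seconds (pool name illustrative)
zpool wait -Hp -t resilver tank 10
.Ed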
.
.Sh SEE ALSO
.Xr zpool-checkpoint 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-remove 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8 ,
.Xr zpool-scrub 8 ,
.Xr zpool-status 8 ,
.Xr zpool-trim 8