.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2009 Oracle and/or its affiliates. All rights reserved.
.\" Copyright (c) 2009 Michael Gebetsroither <michael.geb@gmx.at>. All rights
.\" reserved.
.\" Copyright (c) 2017, Intel Corporation.
.\"
.Dd May 26, 2021
.Dt ZTEST 1
.Os
.
.Sh NAME
.Nm ztest
.Nd ZFS unit test written by the ZFS Developers
.Sh SYNOPSIS
.Nm
.Op Fl VEG
.Op Fl v Ar vdevs
.Op Fl s Ar size_of_each_vdev
.Op Fl a Ar alignment_shift
.Op Fl m Ar mirror_copies
.Op Fl r Ar raidz_disks/draid_disks
.Op Fl R Ar raid_parity
.Op Fl K Ar raid_kind
.Op Fl D Ar draid_data
.Op Fl S Ar draid_spares
.Op Fl C Ar vdev_class_state
.Op Fl d Ar datasets
.Op Fl t Ar threads
.Op Fl g Ar gang_block_threshold
.Op Fl i Ar initialize_pool_i_times
.Op Fl k Ar kill_percentage
.Op Fl p Ar pool_name
.Op Fl T Ar time
.Op Fl z Ar zil_failure_rate
.
.Sh DESCRIPTION
.Nm
was written by the ZFS Developers as a ZFS unit test.
The tool was developed in tandem with the ZFS functionality and was
executed nightly as one of the many regression tests against the daily build.
As features were added to ZFS, unit tests were also added to
.Nm .
In addition, a separate test development team wrote and
executed more functional and stress tests.
.
.Pp
By default
.Nm
runs for five minutes and uses block files
(stored in
.Pa /tmp )
to create pools rather than using physical disks.
Block files afford
.Nm
the flexibility to exercise
zpool components without requiring large hardware configurations.
However, storing the block files in
.Pa /tmp
may not work for you if you
have a small
.Pa /tmp
directory.
.
.Pp
By default
.Nm
is non-verbose.
This is why invoking
.Nm
without arguments results in it quietly executing for five minutes.
The
.Fl V
option can be used to increase the verbosity of the tool.
Adding multiple
.Fl V
options is allowed and the more you add the more chatty
.Nm
becomes.
.
.Pp
After the
.Nm
run completes, you should notice many
.Pa ztest.*
files lying around.
These can be safely removed once the run completes, but should not be removed while a run is in progress.
You can re-use these files in your next
.Nm
run by using the
.Fl E
option.
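.Pp
For example, a sketch of re-using the block files left by a previous run for another short run:
.Dl # ztest -E -T 60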
.
.Sh OPTIONS
.Bl -tag -width "-v v"
.It Fl h , \&? , -help
Print a help summary.
.It Fl v , -vdevs Ns = (default: Sy 5 )
Number of vdevs.
.It Fl s , -vdev-size Ns = (default: Sy 64M )
Size of each vdev.
.It Fl a , -alignment-shift Ns = (default: Sy 9 ) No (use Sy 0 No for random)
Alignment shift used in test.
.It Fl m , -mirror-copies Ns = (default: Sy 2 )
Number of mirror copies.
.It Fl r , -raid-disks Ns = (default: Sy 4 No for raidz/ Ns Sy 16 No for draid)
Number of raidz/draid disks.
.It Fl R , -raid-parity Ns = (default: Sy 1 )
RAID parity (raidz and draid).
.It Fl K , -raid-kind Ns = Ns Sy raidz Ns | Ns Sy draid Ns | Ns Sy random No (default: Sy random )
The kind of RAID config to use.
With
.Sy random
the kind alternates between raidz and draid.
.It Fl D , -draid-data Ns = (default: Sy 4 )
Number of data disks in a dRAID redundancy group.
.It Fl S , -draid-spares Ns = (default: Sy 1 )
Number of dRAID distributed spare disks.
.It Fl d , -datasets Ns = (default: Sy 7 )
Number of datasets.
.It Fl t , -threads Ns = (default: Sy 23 )
Number of threads.
.It Fl g , -gang-block-threshold Ns = (default: Sy 32K )
Gang block threshold.
.It Fl i , -init-count Ns = (default: Sy 1 )
Number of pool initializations.
.It Fl k , -kill-percentage Ns = (default: Sy 70% )
Kill percentage.
.It Fl p , -pool-name Ns = (default: Sy ztest )
Pool name.
.It Fl f , -vdev-file-directory Ns = (default: Pa /tmp )
File directory for vdev files.
.It Fl M , -multi-host
Multi-host; simulate pool imported on remote host.
.It Fl E , -use-existing-pool
Use an existing pool instead of creating a new one.
.It Fl T , -run-time Ns = (default: Sy 300 Ns s)
Total test run time.
.It Fl P , -pass-time Ns = (default: Sy 60 Ns s)
Time per pass.
.It Fl F , -freeze-loops Ns = (default: Sy 50 )
Max loops in
.Fn spa_freeze .
.It Fl B , -alt-ztest Ns =
Alternate ztest path.
.It Fl C , -vdev-class-state Ns = Ns Sy on Ns | Ns Sy off Ns | Ns Sy random No (default: Sy random )
The vdev allocation class state.
.It Fl o , -option Ns = Ns Ar variable Ns = Ns Ar value
Set global
.Ar variable
to an unsigned 32-bit integer
.Ar value
(little-endian only).
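.Pp
For example (illustrative only; the variable name below is a placeholder, not a real tunable):
.Dl # ztest -o some_global_var=64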
.It Fl G , -dump-debug
Dump zfs_dbgmsg buffer before exiting due to an error.
.It Fl V , -verbose
Verbose (use multiple times for ever more verbosity).
.El
.
.Sh EXAMPLES
To override
.Pa /tmp
as your location for block files, you can use the
.Fl f
option:
.Dl # ztest -f /
.Pp
To get an idea of what
.Nm
is actually testing, try this:
.Dl # ztest -f / -VVV
.Pp
Maybe you'd like to run
.Nm ztest
for longer? To do so simply use the
.Fl T
option and specify the run length in seconds like so:
.Dl # ztest -f / -V -T 120
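.Pp
To exercise one specific dRAID layout rather than a randomly chosen RAID kind, the dRAID options can be combined (an illustrative configuration, not a recommendation):
.Dl # ztest -K draid -D 8 -S 2 -R 2 -r 32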
.
.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZF"
.It Ev ZFS_HOSTID Ns = Ns Em id
Use
.Em id
instead of the SPL hostid to identify this host.
Intended for use with
.Nm ,
but this environment variable will affect any utility which uses
libzpool, including
.Xr zpool 8 .
Since the kernel is unaware of this setting,
results with utilities other than ztest are undefined.
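.Pp
For example, a sketch of pinning the hostid for a single run:
.Dl # ZFS_HOSTID=42 ztest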
.It Ev ZFS_STACK_SIZE Ns = Ns Em stacksize
Limit the default stack size to
.Em stacksize
bytes for the purpose of
detecting and debugging kernel stack overflows.
This value defaults to
.Em 32K
which is double the default
.Em 16K
Linux kernel stack size.
.Pp
In practice, setting the stack size slightly higher is needed because
differences in stack usage between kernel and user space can lead to spurious
stack overflows (especially when debugging is enabled).
The specified value
will be rounded up to a floor of PTHREAD_STACK_MIN, which is the minimum stack
required for a NULL procedure in user space.
.Pp
By default the stack size is limited to
.Em 256K .
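.Pp
For example, a sketch of tightening the limit to the default Linux kernel stack size of 16K:
.Dl # ZFS_STACK_SIZE=16384 ztest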
.El
.
.Sh SEE ALSO
.Xr zdb 1 ,
.Xr zfs 1 ,
.Xr zpool 1 ,
.Xr spl-module-parameters 5