Improve zfs-module-parameters(5)
Various rewrites to the descriptions of module parameters. Corrects
spelling mistakes, makes the descriptions more user-friendly, and
describes some ZFS quirks which should be understood before changing
parameter values.

Signed-off-by: DHE <git@dehacked.net>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #4671
parent cbecb4fb22
commit 8342673502
@@ -30,7 +30,8 @@ Description of the different parameters to the ZFS module.
 \fBl2arc_feed_again\fR (int)
 .ad
 .RS 12n
-Turbo L2ARC warmup
+Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set as
+fast as possible.
 .sp
 Use \fB1\fR for yes (default) and \fB0\fR to disable.
 .RE
@@ -41,7 +42,8 @@ Use \fB1\fR for yes (default) and \fB0\fR to disable.
 \fBl2arc_feed_min_ms\fR (ulong)
 .ad
 .RS 12n
-Min feed interval in milliseconds
+Min feed interval in milliseconds. Requires \fBl2arc_feed_again=1\fR and only
+applicable in related situations.
 .sp
 Default value: \fB200\fR.
 .RE
@@ -63,7 +65,8 @@ Default value: \fB1\fR.
 \fBl2arc_headroom\fR (ulong)
 .ad
 .RS 12n
-Number of max device writes to precache
+How far through the ARC lists to search for L2ARC cacheable content, expressed
+as a multiplier of \fBl2arc_write_max\fR
 .sp
 Default value: \fB2\fR.
 .RE
@@ -74,7 +77,8 @@ Default value: \fB2\fR.
 \fBl2arc_headroom_boost\fR (ulong)
 .ad
 .RS 12n
-Compressed l2arc_headroom multiplier
+Scales \fBl2arc_headroom\fR by this percentage when L2ARC contents are being
+successfully compressed before writing. A value of 100 disables this feature.
 .sp
 Default value: \fB200\fR.
 .RE
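As a rough illustration (not part of the patch itself), the interplay of \fBl2arc_write_max\fR, \fBl2arc_headroom\fR, and \fBl2arc_headroom_boost\fR can be sketched as below. This is a simplified model of the behavior the descriptions above imply, not the kernel code; the helper name and example numbers are hypothetical, using the 8 MiB \fBl2arc_write_max\fR default.

```python
# Hypothetical sketch of how far the L2ARC feed thread scans per interval,
# based on the parameter descriptions above (not the actual kernel code).
def l2arc_scan_headroom(l2arc_write_max, l2arc_headroom,
                        l2arc_headroom_boost, compressed):
    """Approximate bytes of ARC list scanned per feed interval."""
    headroom = l2arc_write_max * l2arc_headroom
    if compressed:
        # l2arc_headroom_boost is a percentage; 100 means "no boost".
        headroom = headroom * l2arc_headroom_boost // 100
    return headroom

# Defaults: write_max = 8 MiB, headroom = 2, boost = 200%.
print(l2arc_scan_headroom(8 * 1024**2, 2, 200, compressed=False))  # 16777216
print(l2arc_scan_headroom(8 * 1024**2, 2, 200, compressed=True))   # 33554432
```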
@@ -110,7 +114,8 @@ Use \fB1\fR for yes and \fB0\fR for no (default).
 \fBl2arc_noprefetch\fR (int)
 .ad
 .RS 12n
-Skip caching prefetched buffers
+Do not write buffers to L2ARC if they were prefetched but not used by
+applications
 .sp
 Use \fB1\fR for yes (default) and \fB0\fR to disable.
 .RE
@@ -132,7 +137,8 @@ Use \fB1\fR for yes and \fB0\fR for no (default).
 \fBl2arc_write_boost\fR (ulong)
 .ad
 .RS 12n
-Extra write bytes during device warmup
+Cold L2ARC devices will have \fBl2arc_write_max\fR increased by this amount
+while they remain cold.
 .sp
 Default value: \fB8,388,608\fR.
 .RE
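For illustration (not part of the patch), the warm-up boost works out as follows; a minimal sketch assuming both parameters at their 8,388,608-byte defaults:

```python
# Sketch of the cold-device write budget described above: while an L2ARC
# device is cold, each feed interval may write up to
# l2arc_write_max + l2arc_write_boost bytes instead of just l2arc_write_max.
def l2arc_write_size(l2arc_write_max, l2arc_write_boost, device_is_cold):
    if device_is_cold:
        return l2arc_write_max + l2arc_write_boost
    return l2arc_write_max

print(l2arc_write_size(8388608, 8388608, True))   # 16777216 (16 MiB, cold)
print(l2arc_write_size(8388608, 8388608, False))  # 8388608 (8 MiB, warm)
```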
@@ -266,7 +272,7 @@ configuration. Pool administrators who understand the factors involved
 may wish to specify a more realistic inflation factor, particularly if
 they operate close to quota or capacity limits.
 .sp
-Default value: 24
+Default value: \fB24\fR.
 .RE

 .sp
@@ -283,7 +289,7 @@ blocks in the pool for verification. If this parameter is set to 0,
 the traversal skips non-metadata blocks. It can be toggled once the
 import has started to stop or start the traversal of non-metadata blocks.
 .sp
-Default value: 1
+Default value: \fB1\fR.
 .RE

 .sp
@@ -300,7 +306,7 @@ blocks in the pool for verification. If this parameter is set to 0,
 the traversal is not performed. It can be toggled once the import has
 started to stop or start the traversal.
 .sp
-Default value: 1
+Default value: \fB1\fR.
 .RE

 .sp
@@ -312,7 +318,7 @@ Default value: 1
 Maximum concurrent I/Os during the traversal performed during an "extreme
 rewind" (\fB-X\fR) pool import.
 .sp
-Default value: 10000
+Default value: \fB10000\fR.
 .RE

 .sp
@@ -328,7 +334,7 @@ It also limits the worst-case time to allocate space. If we have
 less than this amount of free space, most ZPL operations (e.g. write,
 create) will return ENOSPC.
 .sp
-Default value: 5
+Default value: \fB5\fR.
 .RE

 .sp
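For illustration (not part of the patch), and assuming this hunk describes the slop-space shift parameter (\fBspa_slop_shift\fR; the hunk itself does not name it), the reserved free space implied by a value of 5 can be sketched as:

```python
# Hypothetical sketch: a shift of 5 reserves 1/2^5 = 1/32 (about 3.1%) of
# the pool; below this amount of free space most ZPL operations (write,
# create) return ENOSPC, as described above.
def slop_space(pool_size_bytes, slop_shift=5):
    return pool_size_bytes >> slop_shift

print(slop_space(1024 * 1024**3))  # 34359738368 (32 GiB held back on 1 TiB)
```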
@@ -410,7 +416,8 @@ Default value: \fB10\fR.
 \fBzfs_arc_grow_retry\fR (int)
 .ad
 .RS 12n
-Seconds before growing arc size
+After a memory pressure event the ARC will wait this many seconds before trying
+to resume growth
 .sp
 Default value: \fB5\fR.
 .RE
@@ -433,7 +440,12 @@ Default value: \fB10\fR.
 \fBzfs_arc_max\fR (ulong)
 .ad
 .RS 12n
-Max arc size
+Max size of the ARC in bytes. If set to 0 then it will consume 1/2 of system
+RAM. This value must be at least 67108864 (64 megabytes).
+.sp
+This value can be changed dynamically with some caveats. It cannot be set back
+to 0 while running and reducing it below the current ARC size will not cause
+the ARC to shrink without memory pressure to induce shrinking.
 .sp
 Default value: \fB0\fR.
 .RE
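For illustration (not part of the patch), the sizing rules stated above can be sketched as follows; the helper is hypothetical:

```python
# Sketch of the zfs_arc_max rules described above: 0 means "half of system
# RAM", and explicit values below 64 MiB are invalid.
def effective_arc_max(zfs_arc_max, system_ram):
    MIN_ARC_MAX = 67108864  # 64 megabytes
    if zfs_arc_max == 0:
        return system_ram // 2
    if zfs_arc_max < MIN_ARC_MAX:
        raise ValueError("zfs_arc_max must be at least 67108864 (64 MiB)")
    return zfs_arc_max

print(effective_arc_max(0, 16 * 1024**3))  # 8589934592 (8 GiB on 16 GiB RAM)
```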
@@ -450,6 +462,9 @@ be reclaimed even if the overall arc_c_max has not been reached. This
 value defaults to 0 which indicates that 3/4 of the ARC may be used
 for meta data.
 .sp
+This value may be changed dynamically except that it cannot be set back to 0
+for 3/4 of the ARC; it must be set to an explicit value.
+.sp
 Default value: \fB0\fR.
 .RE

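For illustration (not part of the patch), the default metadata limit described above works out as below; the helper is hypothetical:

```python
# Sketch: with zfs_arc_meta_limit = 0 (the default), metadata may use up to
# 3/4 of arc_c_max; an explicit nonzero value overrides that fraction.
def arc_meta_limit(zfs_arc_meta_limit, arc_c_max):
    if zfs_arc_meta_limit == 0:
        return arc_c_max * 3 // 4
    return zfs_arc_meta_limit

print(arc_meta_limit(0, 8 * 1024**3))  # 6442450944 (6 GiB of an 8 GiB ARC)
```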
@@ -513,9 +528,10 @@ Default value: \fB100\fR.
 \fBzfs_arc_min_prefetch_lifespan\fR (int)
 .ad
 .RS 12n
-Min life of prefetch block
+Minimum time prefetched blocks are locked in the ARC, specified in jiffies.
+A value of 0 will default to 1 second.
 .sp
-Default value: \fB100\fR.
+Default value: \fB0\fR.
 .RE

 .sp
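For illustration (not part of the patch), the jiffies semantics above can be sketched as follows. The helper is hypothetical, and HZ (kernel ticks per second) varies by kernel configuration:

```python
# Sketch: the lifespan is in jiffies (kernel ticks); 0 falls back to one
# second, which is HZ jiffies.
def prefetch_lifespan_jiffies(value, hz):
    return hz if value == 0 else value

print(prefetch_lifespan_jiffies(0, 250))    # 250 (1 second at HZ=250)
print(prefetch_lifespan_jiffies(100, 250))  # 100 (0.4 seconds at HZ=250)
```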
@@ -529,7 +545,7 @@ of lists for both data and meta data objects. Locking is performed at
 the level of these "sub-lists". This parameters controls the number of
 sub-lists per ARC state.
 .sp
-Default value: 1 or the number of on-online CPUs, whichever is greater
+Default value: \fB1\fR or the number of online CPUs, whichever is greater
 .RE

 .sp
@@ -652,7 +668,8 @@ Default value: \fB4M\fR.
 \fBzfs_dbuf_state_index\fR (int)
 .ad
 .RS 12n
-Calculate arc header index
+This feature is currently unused. It is normally used for controlling what
+reporting is available under /proc/spl/kstat/zfs.
 .sp
 Default value: \fB0\fR.
 .RE
@@ -663,7 +680,7 @@ Default value: \fB0\fR.
 \fBzfs_deadman_enabled\fR (int)
 .ad
 .RS 12n
-Enable deadman timer
+Enable deadman timer. See description below.
 .sp
 Use \fB1\fR for yes (default) and \fB0\fR to disable.
 .RE
@@ -785,7 +802,7 @@ time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
 The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
 one. See the section "ZFS TRANSACTION DELAY".
 .sp
-Default value: 25
+Default value: \fB25\fR.
 .RE

 .sp
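For illustration (not part of the patch), and assuming this hunk describes the percent-of-RAM dirty data cap (\fBzfs_dirty_data_max_percent\fR; the hunk does not name it), the default works out as below; the helper is hypothetical:

```python
# Sketch: the dirty data cap defaults to this percentage of physical RAM,
# unless zfs_dirty_data_max / zfs_dirty_data_max_max override it as
# described above.
def dirty_data_max(percent, system_ram):
    return system_ram * percent // 100

print(dirty_data_max(25, 16 * 1024**3))  # 4294967296 (4 GiB on 16 GiB RAM)
```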
@@ -841,7 +858,7 @@ Default value: \fB100,000\fR.
 \fBzfs_vdev_async_read_max_active\fR (int)
 .ad
 .RS 12n
-Maxium asynchronous read I/Os active to each device.
+Maximum asynchronous read I/Os active to each device.
 See the section "ZFS I/O SCHEDULER".
 .sp
 Default value: \fB3\fR.
@@ -895,7 +912,7 @@ Default value: \fB30\fR.
 \fBzfs_vdev_async_write_max_active\fR (int)
 .ad
 .RS 12n
-Maxium asynchronous write I/Os active to each device.
+Maximum asynchronous write I/Os active to each device.
 See the section "ZFS I/O SCHEDULER".
 .sp
 Default value: \fB10\fR.
@@ -932,7 +949,7 @@ Default value: \fB1,000\fR.
 \fBzfs_vdev_scrub_max_active\fR (int)
 .ad
 .RS 12n
-Maxium scrub I/Os active to each device.
+Maximum scrub I/Os active to each device.
 See the section "ZFS I/O SCHEDULER".
 .sp
 Default value: \fB2\fR.
@@ -956,7 +973,7 @@ Default value: \fB1\fR.
 \fBzfs_vdev_sync_read_max_active\fR (int)
 .ad
 .RS 12n
-Maxium synchronous read I/Os active to each device.
+Maximum synchronous read I/Os active to each device.
 See the section "ZFS I/O SCHEDULER".
 .sp
 Default value: \fB10\fR.
@@ -980,7 +997,7 @@ Default value: \fB10\fR.
 \fBzfs_vdev_sync_write_max_active\fR (int)
 .ad
 .RS 12n
-Maxium synchronous write I/Os active to each device.
+Maximum synchronous write I/Os active to each device.
 See the section "ZFS I/O SCHEDULER".
 .sp
 Default value: \fB10\fR.
@@ -1125,7 +1142,8 @@ Default value: \fB0\fR.
 \fBzfs_free_min_time_ms\fR (int)
 .ad
 .RS 12n
-Min millisecs to free per txg
+During a \fBzfs destroy\fR operation using \fBfeature@async_destroy\fR a minimum
+of this much time will be spent working on freeing blocks per txg.
 .sp
 Default value: \fB1,000\fR.
 .RE
@@ -1136,7 +1154,8 @@ Default value: \fB1,000\fR.
 \fBzfs_immediate_write_sz\fR (long)
 .ad
 .RS 12n
-Largest data block to write to zil
+Largest data block to write to zil. Larger blocks will be treated as if the
+dataset being written to had the property setting \fBlogbias=throughput\fR.
 .sp
 Default value: \fB32,768\fR.
 .RE
@@ -1191,7 +1210,7 @@ Default value: \fB70\fR.
 .ad
 .RS 12n
 Metaslab groups are considered eligible for allocations if their
-fragmenation metric (measured as a percentage) is less than or equal to
+fragmentation metric (measured as a percentage) is less than or equal to
 this value. If a metaslab group exceeds this threshold then it will be
 skipped unless all metaslab groups within the metaslab class have also
 crossed this threshold.
@@ -1231,7 +1250,8 @@ Default value: \fB0\fR.
 \fBzfs_no_scrub_io\fR (int)
 .ad
 .RS 12n
-Set for no scrub I/O
+Set for no scrub I/O. This results in scrubs not actually scrubbing data and
+simply doing a metadata crawl of the pool instead.
 .sp
 Use \fB1\fR for yes and \fB0\fR for no (default).
 .RE
@@ -1242,7 +1262,7 @@ Use \fB1\fR for yes and \fB0\fR for no (default).
 \fBzfs_no_scrub_prefetch\fR (int)
 .ad
 .RS 12n
-Set for no scrub prefetching
+Set to disable block prefetching for scrubs.
 .sp
 Use \fB1\fR for yes and \fB0\fR for no (default).
 .RE
@@ -1253,7 +1273,8 @@ Use \fB1\fR for yes and \fB0\fR for no (default).
 \fBzfs_nocacheflush\fR (int)
 .ad
 .RS 12n
-Disable cache flushes
+Disable cache flush operations on disks when writing. Beware, this may cause
+corruption if disks re-order writes.
 .sp
 Use \fB1\fR for yes and \fB0\fR for no (default).
 .RE
@@ -1275,7 +1296,8 @@ Use \fB1\fR for yes (default) and \fB0\fR to disable.
 \fBzfs_pd_bytes_max\fR (int)
 .ad
 .RS 12n
-The number of bytes which should be prefetched.
+The number of bytes which should be prefetched during a pool traversal
+(e.g. \fBzfs send\fR or other data crawling operations)
 .sp
 Default value: \fB52,428,800\fR.
 .RE
@@ -1311,9 +1333,10 @@ Default value: \fB1,048,576\fR.
 \fBzfs_read_history\fR (int)
 .ad
 .RS 12n
-Historic statistics for the last N reads
+Historic statistics for the last N reads will be available in
+\fB/proc/spl/kstat/zfs/POOLNAME/reads\fR
 .sp
-Default value: \fB0\fR.
+Default value: \fB0\fR (no data is kept).
 .RE

 .sp
@@ -1358,7 +1381,8 @@ Default value: \fB2\fR.
 \fBzfs_resilver_min_time_ms\fR (int)
 .ad
 .RS 12n
-Min millisecs to resilver per txg
+Resilvers are processed by the sync thread. While resilvering it will spend
+at least this much time working on a resilver between txg flushes.
 .sp
 Default value: \fB3,000\fR.
 .RE
@@ -1383,7 +1407,8 @@ Default value: \fB50\fR.
 \fBzfs_scan_min_time_ms\fR (int)
 .ad
 .RS 12n
-Min millisecs to scrub per txg
+Scrubs are processed by the sync thread. While scrubbing it will spend
+at least this much time working on a scrub between txg flushes.
 .sp
 Default value: \fB1,000\fR.
 .RE
@@ -1407,7 +1432,7 @@ Default value: \fB4\fR.
 \fBzfs_send_corrupt_data\fR (int)
 .ad
 .RS 12n
-Allow to send corrupt data (ignore read/checksum errors when sending data)
+Allow sending of corrupt data (ignore read/checksum errors when sending data)
 .sp
 Use \fB1\fR for yes and \fB0\fR for no (default).
 .RE
@@ -1418,7 +1443,7 @@ Use \fB1\fR for yes and \fB0\fR for no (default).
 \fBzfs_sync_pass_deferred_free\fR (int)
 .ad
 .RS 12n
-Defer frees starting in this pass
+Flushing of data to disk is done in passes. Defer frees starting in this pass
 .sp
 Default value: \fB2\fR.
 .RE
@@ -1440,7 +1465,7 @@ Default value: \fB5\fR.
 \fBzfs_sync_pass_rewrite\fR (int)
 .ad
 .RS 12n
-Rewrite new bps starting in this pass
+Rewrite new block pointers starting in this pass
 .sp
 Default value: \fB2\fR.
 .RE
@@ -1451,7 +1476,8 @@ Default value: \fB2\fR.
 \fBzfs_top_maxinflight\fR (int)
 .ad
 .RS 12n
-Max I/Os per top-level vdev during scrub or resilver operations.
+Max concurrent I/Os per top-level vdev (mirrors or raidz arrays) allowed during
+scrub or resilver operations.
 .sp
 Default value: \fB32\fR.
 .RE
@@ -1462,7 +1488,8 @@ Default value: \fB32\fR.
 \fBzfs_txg_history\fR (int)
 .ad
 .RS 12n
-Historic statistics for the last N txgs
+Historic statistics for the last N txgs will be available in
+\fB/proc/spl/kstat/zfs/POOLNAME/txgs\fR
 .sp
 Default value: \fB0\fR.
 .RE
@@ -1473,7 +1500,7 @@ Default value: \fB0\fR.
 \fBzfs_txg_timeout\fR (int)
 .ad
 .RS 12n
-Max seconds worth of delta per txg
+Flush dirty data to disk at least every N seconds (maximum txg duration)
 .sp
 Default value: \fB5\fR.
 .RE
@@ -1497,7 +1524,7 @@ Default value: \fB131,072\fR.
 .RS 12n
 Shift size to inflate reads too
 .sp
-Default value: \fB16\fR.
+Default value: \fB16\fR (effectively 65536).
 .RE

 .sp
@@ -1506,7 +1533,10 @@ Default value: \fB16\fR.
 \fBzfs_vdev_cache_max\fR (int)
 .ad
 .RS 12n
-Inflate reads small than max
+Inflate reads smaller than this value to meet the \fBzfs_vdev_cache_bshift\fR
+size.
 .sp
 Default value: \fB16384\fR.
 .RE

 .sp
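For illustration (not part of the patch), the read inflation described by \fBzfs_vdev_cache_bshift\fR and \fBzfs_vdev_cache_max\fR can be sketched as below; the helper is hypothetical, using the defaults of 16384 and a shift of 16:

```python
# Sketch: reads smaller than zfs_vdev_cache_max are inflated to
# 1 << zfs_vdev_cache_bshift bytes (65536 with the default shift of 16).
def inflated_read_size(read_size, cache_max=16384, cache_bshift=16):
    if read_size < cache_max:
        return 1 << cache_bshift
    return read_size

print(inflated_read_size(4096))    # 65536 (small read inflated)
print(inflated_read_size(131072))  # 131072 (large read unchanged)
```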
@@ -1515,7 +1545,10 @@ Inflate reads small than max
 \fBzfs_vdev_cache_size\fR (int)
 .ad
 .RS 12n
-Total size of the per-disk cache
+Total size of the per-disk cache in bytes.
+.sp
+Currently this feature is disabled as it has been found to not be helpful
+for performance and in some cases harmful.
 .sp
 Default value: \fB0\fR.
 .RE
@@ -1596,7 +1629,8 @@ Default value: \fB1\fR.
 \fBzfs_vdev_read_gap_limit\fR (int)
 .ad
 .RS 12n
-Aggregate read I/O over gap
+Aggregate read I/O operations if the gap on-disk between them is within this
+threshold.
 .sp
 Default value: \fB32,768\fR.
 .RE
@@ -1607,7 +1641,7 @@ Default value: \fB32,768\fR.
 \fBzfs_vdev_scheduler\fR (charp)
 .ad
 .RS 12n
-I/O scheduler
+Set the Linux I/O scheduler on whole disk vdevs to this scheduler
 .sp
 Default value: \fBnoop\fR.
 .RE
@@ -1629,7 +1663,7 @@ Default value: \fB4,096\fR.
 \fBzfs_zevent_cols\fR (int)
 .ad
 .RS 12n
-Max event column width
+When zevents are logged to the console use this as the word wrap width.
 .sp
 Default value: \fB80\fR.
 .RE
@@ -1651,7 +1685,9 @@ Use \fB1\fR for yes and \fB0\fR for no (default).
 \fBzfs_zevent_len_max\fR (int)
 .ad
 .RS 12n
-Max event queue length
+Max event queue length. A value of 0 will result in a calculated value which
+increases with the number of CPUs in the system (minimum 64 events). Events
+in the queue can be viewed with the \fBzpool events\fR command.
 .sp
 Default value: \fB0\fR.
 .RE
@@ -1662,7 +1698,8 @@ Default value: \fB0\fR.
 \fBzil_replay_disable\fR (int)
 .ad
 .RS 12n
-Disable intent logging replay
+Disable intent logging replay. Can be disabled for recovery from corrupted
+ZIL
 .sp
 Use \fB1\fR for yes and \fB0\fR for no (default).
 .RE
@@ -1684,7 +1721,9 @@ Default value: \fB1,048,576\fR.
 \fBzio_delay_max\fR (int)
 .ad
 .RS 12n
-Max zio millisecond delay before posting event
+A zevent will be logged if a ZIO operation takes more than N milliseconds to
+complete. Note that this is only a logging facility, not a timeout on
+operations.
 .sp
 Default value: \fB30,000\fR.
 .RE
@@ -1723,7 +1762,8 @@ Default value: \fB75\fR.
 \fBzvol_inhibit_dev\fR (uint)
 .ad
 .RS 12n
-Do not create zvol device nodes
+Do not create zvol device nodes. This may slightly improve startup time on
+systems with a very large number of zvols.
 .sp
 Use \fB1\fR for yes and \fB0\fR for no (default).
 .RE
@@ -1734,7 +1774,7 @@ Use \fB1\fR for yes and \fB0\fR for no (default).
 \fBzvol_major\fR (uint)
 .ad
 .RS 12n
-Major number for zvol device
+Major number for zvol block devices
 .sp
 Default value: \fB230\fR.
 .RE
@@ -1745,7 +1785,9 @@ Default value: \fB230\fR.
 \fBzvol_max_discard_blocks\fR (ulong)
 .ad
 .RS 12n
-Max number of blocks to discard at once
+Discard (aka TRIM) operations done on zvols will be done in batches of this
+many blocks, where block size is determined by the \fBvolblocksize\fR property
+of a zvol.
 .sp
 Default value: \fB16,384\fR.
 .RE
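For illustration (not part of the patch), the discard batching described above works out as below; the helper is hypothetical and the 8 KiB \fBvolblocksize\fR is just an example value:

```python
# Sketch: discards on a zvol are issued in batches of
# zvol_max_discard_blocks * volblocksize bytes.
def discard_batch_bytes(zvol_max_discard_blocks, volblocksize):
    return zvol_max_discard_blocks * volblocksize

# Default 16,384 blocks with a hypothetical 8 KiB volblocksize.
print(discard_batch_bytes(16384, 8192))  # 134217728 (128 MiB per batch)
```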
@@ -3782,6 +3782,6 @@ Invalid command line options were specified.

 .SH SEE ALSO
 .LP
-\fBchmod\fR(2), \fBfsync\fR(2), \fBgzip\fR(1), \fBls\fR(1), \fBmount\fR(8), \fBopen\fR(2), \fBreaddir\fR(3), \fBssh\fR(1), \fBstat\fR(2), \fBwrite\fR(2), \fBzpool\fR(8)
+\fBchmod\fR(2), \fBfsync\fR(2), \fBgzip\fR(1), \fBls\fR(1), \fBmount\fR(8), \fBopen\fR(2), \fBreaddir\fR(3), \fBssh\fR(1), \fBstat\fR(2), \fBwrite\fR(2), \fBzpool\fR(8), \fBzfs-module-parameters\fR(5)
 .sp
 On Solaris: \fBdfstab(4)\fR, \fBiscsitadm(1M)\fR, \fBmount(1M)\fR, \fBshare(1M)\fR, \fBsharemgr(1M)\fR, \fBunshare(1M)\fR
@@ -2556,4 +2556,4 @@ them on \fBzpool create\fR or \fBzpool add\fR by setting ZFS_VDEV_DEVID_OPT_OUT.
 .SH SEE ALSO
 .sp
 .LP
-\fBzfs\fR(8), \fBzpool-features\fR(5), \fBzfs-events\fR(5)
+\fBzfs\fR(8), \fBzpool-features\fR(5), \fBzfs-events\fR(5), \fBzfs-module-parameters\fR(5)