/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or https://opensource.org/licenses/CDDL-1.0.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2012, 2017 by Delphix. All rights reserved.
 * Copyright (c) 2017, 2019, Datto Inc. All rights reserved.
 */

#ifndef _SYS_DSL_SCAN_H
#define _SYS_DSL_SCAN_H

#include <sys/zfs_context.h>
#include <sys/zio.h>
#include <sys/zap.h>
#include <sys/ddt.h>
#include <sys/bplist.h>

#ifdef __cplusplus
extern "C" {
#endif

struct objset;
struct dsl_dir;
struct dsl_dataset;
struct dsl_pool;
struct dmu_tx;

extern int zfs_scan_suspend_progress;

/*
 * All members of this structure must be uint64_t, for byteswap
 * purposes.
 */
typedef struct dsl_scan_phys {
        uint64_t scn_func; /* pool_scan_func_t */
        uint64_t scn_state; /* dsl_scan_state_t */
        uint64_t scn_queue_obj; /* on-disk ZAP object of datasets to visit */
        uint64_t scn_min_txg; /* lower bound of block birth txgs to scan */
        uint64_t scn_max_txg; /* upper bound of block birth txgs to scan */
        uint64_t scn_cur_min_txg;
        uint64_t scn_cur_max_txg;
        uint64_t scn_start_time; /* scan start time, unix timestamp */
        uint64_t scn_end_time; /* scan end time, unix timestamp */
        uint64_t scn_to_examine; /* total bytes to be scanned */
        uint64_t scn_examined; /* bytes scanned so far */
        uint64_t scn_skipped; /* bytes skipped by scanner */
        uint64_t scn_processed;
        uint64_t scn_errors; /* scan I/O error count */
        uint64_t scn_ddt_class_max;
        ddt_bookmark_t scn_ddt_bookmark;
        zbookmark_phys_t scn_bookmark;
        uint64_t scn_flags; /* dsl_scan_flags_t */
} dsl_scan_phys_t;

#define SCAN_PHYS_NUMINTS (sizeof (dsl_scan_phys_t) / sizeof (uint64_t))
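
/*
 * Illustrative sketch (not part of the upstream header): because every
 * field of dsl_scan_phys_t is a uint64_t (or a struct composed of them),
 * the on-disk copy can be byteswapped generically by treating it as an
 * array of SCAN_PHYS_NUMINTS 64-bit words. The helper name below is
 * hypothetical and assumes the BSWAP_64 macro used elsewhere in the tree.
 */
static inline void
dsl_scan_phys_byteswap_sketch(dsl_scan_phys_t *scn_phys)
{
        uint64_t *words = (uint64_t *)scn_phys;
        size_t i;

        /* Swap each 64-bit word in place. */
        for (i = 0; i < SCAN_PHYS_NUMINTS; i++)
                words[i] = BSWAP_64(words[i]);
}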

typedef enum dsl_scan_flags {
        DSF_VISIT_DS_AGAIN = 1<<0,
        DSF_SCRUB_PAUSED = 1<<1,
} dsl_scan_flags_t;

#define DSL_SCAN_FLAGS_MASK (DSF_VISIT_DS_AGAIN)

typedef struct dsl_errorscrub_phys {
        uint64_t dep_func; /* pool_scan_func_t */
        uint64_t dep_state; /* dsl_scan_state_t */
        uint64_t dep_cursor; /* serialized zap cursor for tracing progress */
        uint64_t dep_start_time; /* error scrub start time, unix timestamp */
        uint64_t dep_end_time; /* error scrub end time, unix timestamp */
        uint64_t dep_to_examine; /* total error blocks to be scrubbed */
        uint64_t dep_examined; /* blocks scrubbed so far */
        uint64_t dep_errors; /* error scrub I/O error count */
        uint64_t dep_paused_flags; /* flag for paused */
} dsl_errorscrub_phys_t;

#define ERRORSCRUB_PHYS_NUMINTS (sizeof (dsl_errorscrub_phys_t) \
        / sizeof (uint64_t))
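
/*
 * Illustrative sketch (hypothetical helper, not the upstream API): the
 * dep_cursor field above holds a serialized ZAP cursor, so error-scrub
 * progress can be captured with zap_cursor_serialize() and later restored
 * when the scrub resumes. Only the serialization step is shown here.
 */
static inline void
dsl_errorscrub_save_cursor_sketch(dsl_errorscrub_phys_t *dep,
    zap_cursor_t *zc)
{
        /* Record how far the error-log traversal has advanced. */
        dep->dep_cursor = zap_cursor_serialize(zc);
}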

/*
 * Every pool will have one dsl_scan_t and this structure will contain
 * in-memory information about the scan and a pointer to the on-disk
 * representation (i.e. dsl_scan_phys_t). Most of the state of the scan
 * is contained on-disk to allow the scan to resume in the event of a reboot
 * or panic. This structure maintains information about the behavior of a
 * running scan, some caching information, and how it should traverse the pool.
 *
 * The following members of this structure direct the behavior of the scan:
 *
 * scn_suspending -    a scan that cannot be completed in a single txg or
 *                     has exceeded its allotted time will need to suspend.
 *                     When this flag is set the scanner will stop traversing
 *                     the pool and write out the current state to disk.
 *
 * scn_restart_txg -   directs the scanner to either restart or start a
 *                     scan at the specified txg value.
 *
 * scn_done_txg -      when a scan completes its traversal it will set
 *                     the completion txg to the next txg. This is necessary
 *                     to ensure that any blocks that were freed during
 *                     the scan but have not yet been processed (i.e. deferred
 *                     frees) are accounted for.
 *
 * This structure also maintains information about deferred frees which are
 * a special kind of traversal. Deferred frees can exist in either a bptree or
 * a bpobj structure. The scn_is_bptree flag will indicate the type of
 * deferred free that is in progress. If the deferred free is part of an
 * asynchronous destroy then the scn_async_destroying flag will be set.
 */
typedef struct dsl_scan {
        struct dsl_pool *scn_dp;
        uint64_t scn_restart_txg;
        uint64_t scn_done_txg;
        uint64_t scn_sync_start_time;
        uint64_t scn_issued_before_pass;

        /* for freeing blocks */
        boolean_t scn_is_bptree;
        boolean_t scn_async_destroying;
        boolean_t scn_async_stalled;
        uint64_t scn_async_block_min_time_ms;

        /* flags and stats for controlling scan state */
        boolean_t scn_is_sorted; /* doing sequential scan */
        boolean_t scn_clearing; /* scan is issuing sequential extents */
        boolean_t scn_checkpointing; /* scan is issuing all queued extents */
        boolean_t scn_suspending; /* scan is suspending until next txg */
        uint64_t scn_last_checkpoint; /* time of last checkpoint */

        /* members for thread synchronization */
        zio_t *scn_zio_root; /* root zio for waiting on IO */
        taskq_t *scn_taskq; /* task queue for issuing extents */

        /* for controlling scan prefetch, protected by spa_scrub_lock */
        boolean_t scn_prefetch_stop; /* prefetch should stop */
        zbookmark_phys_t scn_prefetch_bookmark; /* prefetch start bookmark */
        avl_tree_t scn_prefetch_queue; /* priority queue of prefetch IOs */
        uint64_t scn_maxinflight_bytes; /* max bytes in flight for pool */

        /* per txg statistics */
        uint64_t scn_visited_this_txg; /* total bps visited this txg */
        uint64_t scn_dedup_frees_this_txg; /* dedup bps freed this txg */
        uint64_t scn_holes_this_txg;
        uint64_t scn_lt_min_this_txg;
        uint64_t scn_gt_max_this_txg;
        uint64_t scn_ddt_contained_this_txg;
        uint64_t scn_objsets_visited_this_txg;
        uint64_t scn_avg_seg_size_this_txg;
        uint64_t scn_segs_this_txg;
        uint64_t scn_avg_zio_size_this_txg;
        uint64_t scn_zios_this_txg;

        /* zap cursor for tracing error scrub progress */
        zap_cursor_t errorscrub_cursor;

        /* members needed for syncing scan status to disk */
        dsl_scan_phys_t scn_phys; /* on disk representation of scan */
        dsl_scan_phys_t scn_phys_cached;
        avl_tree_t scn_queue; /* queue of datasets to scan */
        kmutex_t scn_queue_lock; /* serializes scn_queue inserts */
        uint64_t scn_queues_pending; /* outstanding data to issue */

        /* members needed for syncing error scrub status to disk */
        dsl_errorscrub_phys_t errorscrub_phys;
} dsl_scan_t;
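
/*
 * Illustrative sketch (hypothetical helper, not the actual implementation,
 * which lives in dsl_scan.c): the comment above describes scn_suspending as
 * the flag a traversal sets when it runs out of time in the current txg.
 * A time-budget check along those lines might look like this, with
 * elapsed_ms and budget_ms supplied by the caller.
 */
static inline boolean_t
dsl_scan_should_suspend_sketch(dsl_scan_t *scn, uint64_t elapsed_ms,
    uint64_t budget_ms)
{
        if (elapsed_ms > budget_ms) {
                /* Stop traversing; scn_phys will be synced out to disk. */
                scn->scn_suspending = B_TRUE;
                return (B_TRUE);
        }
        return (B_FALSE);
}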

typedef struct dsl_scan_io_queue dsl_scan_io_queue_t;

void scan_init(void);
void scan_fini(void);
int dsl_scan_init(struct dsl_pool *dp, uint64_t txg);
int dsl_scan_setup_check(void *, dmu_tx_t *);
void dsl_scan_setup_sync(void *, dmu_tx_t *);
void dsl_scan_fini(struct dsl_pool *dp);
void dsl_scan_sync(struct dsl_pool *, dmu_tx_t *);
int dsl_scan_cancel(struct dsl_pool *);
int dsl_scan(struct dsl_pool *, pool_scan_func_t);
void dsl_scan_assess_vdev(struct dsl_pool *dp, vdev_t *vd);
boolean_t dsl_scan_scrubbing(const struct dsl_pool *dp);
boolean_t dsl_errorscrubbing(const struct dsl_pool *dp);
boolean_t dsl_errorscrub_active(dsl_scan_t *scn);
void dsl_scan_restart_resilver(struct dsl_pool *, uint64_t txg);
int dsl_scrub_set_pause_resume(const struct dsl_pool *dp,
    pool_scrub_cmd_t cmd);
void dsl_errorscrub_sync(struct dsl_pool *, dmu_tx_t *);
boolean_t dsl_scan_resilvering(struct dsl_pool *dp);
boolean_t dsl_scan_resilver_scheduled(struct dsl_pool *dp);
boolean_t dsl_dataset_unstable(struct dsl_dataset *ds);
void dsl_scan_ddt_entry(dsl_scan_t *scn, enum zio_checksum checksum,
    ddt_entry_t *dde, dmu_tx_t *tx);
void dsl_scan_ds_destroyed(struct dsl_dataset *ds, struct dmu_tx *tx);
void dsl_scan_ds_snapshotted(struct dsl_dataset *ds, struct dmu_tx *tx);
void dsl_scan_ds_clone_swapped(struct dsl_dataset *ds1, struct dsl_dataset *ds2,
    struct dmu_tx *tx);
boolean_t dsl_scan_active(dsl_scan_t *scn);
boolean_t dsl_scan_is_paused_scrub(const dsl_scan_t *scn);
boolean_t dsl_errorscrub_is_paused(const dsl_scan_t *scn);
void dsl_scan_freed(spa_t *spa, const blkptr_t *bp);
void dsl_scan_io_queue_destroy(dsl_scan_io_queue_t *queue);
void dsl_scan_io_queue_vdev_xfer(vdev_t *svd, vdev_t *tvd);

#ifdef __cplusplus
}
#endif

#endif /* _SYS_DSL_SCAN_H */