/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or https://opensource.org/licenses/CDDL-1.0.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2016, 2017 by Delphix. All rights reserved.
 */

#ifndef _SYS_UBERBLOCK_IMPL_H
#define _SYS_UBERBLOCK_IMPL_H

#include <sys/uberblock.h>

#ifdef __cplusplus
extern "C" {
#endif

/*
 * The uberblock version is incremented whenever an incompatible on-disk
 * format change is made to the SPA, DMU, or ZAP.
 *
 * Note: the first two fields should never be moved. When a storage pool
 * is opened, the uberblock must be read off the disk before the version
 * can be checked. If the ub_version field is moved, we may not detect
 * version mismatch. If the ub_magic field is moved, applications that
 * expect the magic number in the first word won't work.
 */
#define UBERBLOCK_MAGIC         0x00bab10c      /* oo-ba-bloc! */
#define UBERBLOCK_SHIFT         10              /* up to 1K */
#define MMP_MAGIC               0xa11cea11      /* all-see-all */

#define MMP_INTERVAL_VALID_BIT  0x01
#define MMP_SEQ_VALID_BIT       0x02
#define MMP_FAIL_INT_VALID_BIT  0x04

#define MMP_VALID(ubp)          (ubp->ub_magic == UBERBLOCK_MAGIC && \
                                    ubp->ub_mmp_magic == MMP_MAGIC)
#define MMP_INTERVAL_VALID(ubp) (MMP_VALID(ubp) && (ubp->ub_mmp_config & \
                                    MMP_INTERVAL_VALID_BIT))
#define MMP_SEQ_VALID(ubp)      (MMP_VALID(ubp) && (ubp->ub_mmp_config & \
                                    MMP_SEQ_VALID_BIT))
#define MMP_FAIL_INT_VALID(ubp) (MMP_VALID(ubp) && (ubp->ub_mmp_config & \
                                    MMP_FAIL_INT_VALID_BIT))

#define MMP_INTERVAL(ubp)       ((ubp->ub_mmp_config & 0x00000000FFFFFF00) \
                                    >> 8)
#define MMP_SEQ(ubp)            ((ubp->ub_mmp_config & 0x0000FFFF00000000) \
                                    >> 32)
#define MMP_FAIL_INT(ubp)       ((ubp->ub_mmp_config & 0xFFFF000000000000) \
                                    >> 48)

#define MMP_INTERVAL_SET(write) \
            (((uint64_t)(write & 0xFFFFFF) << 8) | MMP_INTERVAL_VALID_BIT)

#define MMP_SEQ_SET(seq) \
            (((uint64_t)(seq & 0xFFFF) << 32) | MMP_SEQ_VALID_BIT)

#define MMP_FAIL_INT_SET(fail) \
            (((uint64_t)(fail & 0xFFFF) << 48) | MMP_FAIL_INT_VALID_BIT)
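
/*
 * Illustrative sketch (not part of this header's interface): how the macros
 * above pack and unpack ub_mmp_config. The variable names are hypothetical.
 *
 *      uint64_t cfg = MMP_INTERVAL_SET(1000) |   // 1000 ms write interval
 *          MMP_SEQ_SET(7) | MMP_FAIL_INT_SET(10);
 *
 *      // Once cfg is stored in ub.ub_mmp_config and ub_magic/ub_mmp_magic
 *      // are valid, MMP_INTERVAL_VALID(&ub), MMP_SEQ_VALID(&ub) and
 *      // MMP_FAIL_INT_VALID(&ub) are all nonzero, and MMP_INTERVAL(&ub),
 *      // MMP_SEQ(&ub), MMP_FAIL_INT(&ub) return 1000, 7 and 10.
 */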

/*
 * RAIDZ expansion reflow information.
 *
 *    64      56      48      40      32      24      16      8       0
 *    +-------+-------+-------+-------+-------+-------+-------+-------+
 *    |Scratch|                        Reflow                         |
 *    | State |                        Offset                         |
 *    +-------+-------+-------+-------+-------+-------+-------+-------+
 */
typedef enum raidz_reflow_scratch_state {
        RRSS_SCRATCH_NOT_IN_USE = 0,
        RRSS_SCRATCH_VALID,
        RRSS_SCRATCH_INVALID_SYNCED,
        RRSS_SCRATCH_INVALID_SYNCED_ON_IMPORT,
        RRSS_SCRATCH_INVALID_SYNCED_REFLOW
} raidz_reflow_scratch_state_t;

#define RRSS_GET_OFFSET(ub) \
        BF64_GET_SB((ub)->ub_raidz_reflow_info, 0, 55, SPA_MINBLOCKSHIFT, 0)
#define RRSS_SET_OFFSET(ub, x) \
        BF64_SET_SB((ub)->ub_raidz_reflow_info, 0, 55, SPA_MINBLOCKSHIFT, 0, x)

#define RRSS_GET_STATE(ub) \
        BF64_GET((ub)->ub_raidz_reflow_info, 55, 9)
#define RRSS_SET_STATE(ub, x) \
        BF64_SET((ub)->ub_raidz_reflow_info, 55, 9, x)

#define RAIDZ_REFLOW_SET(ub, state, offset) do { \
        (ub)->ub_raidz_reflow_info = 0; \
        RRSS_SET_OFFSET(ub, offset); \
        RRSS_SET_STATE(ub, state); \
} while (0)
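
/*
 * Illustrative sketch (hypothetical variable names): using the reflow macros
 * above. The offset is stored in SPA_MINBLOCKSIZE (512-byte) units, so it
 * must be 512-byte aligned.
 *
 *      struct uberblock ub;
 *      RAIDZ_REFLOW_SET(&ub, RRSS_SCRATCH_VALID, 1024 * 1024);
 *      // RRSS_GET_STATE(&ub) == RRSS_SCRATCH_VALID
 *      // RRSS_GET_OFFSET(&ub) == 1024 * 1024
 */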

struct uberblock {
        uint64_t ub_magic;         /* UBERBLOCK_MAGIC */
        uint64_t ub_version;       /* SPA_VERSION */
        uint64_t ub_txg;           /* txg of last sync */
        uint64_t ub_guid_sum;      /* sum of all vdev guids */
        uint64_t ub_timestamp;     /* UTC time of last sync */
        blkptr_t ub_rootbp;        /* MOS objset_phys_t */

        /* highest SPA_VERSION supported by software that wrote this txg */
        uint64_t ub_software_version;

        /* Maybe missing in uberblocks we read, but always written */
        uint64_t ub_mmp_magic;     /* MMP_MAGIC */
        /*
         * If ub_mmp_delay == 0 and ub_mmp_magic is valid, MMP is off.
         * Otherwise, nanosec since last MMP write.
         */
        uint64_t ub_mmp_delay;

        /*
         * The ub_mmp_config contains the multihost write interval, multihost
         * fail intervals, sequence number for sub-second granularity, and
         * valid bit mask. This layout is as follows:
         *
         *    64      56      48      40      32      24      16      8       0
         *    +-------+-------+-------+-------+-------+-------+-------+-------+
         *  0 | Fail Intervals|      Seq      |   Write Interval (ms) | VALID |
         *    +-------+-------+-------+-------+-------+-------+-------+-------+
         *
         * This allows a write_interval of (2^24/1000)s, over 4.5 hours
         *
         * VALID Bits:
         * - 0x01 - Write Interval (ms)
         * - 0x02 - Sequence number exists
         * - 0x04 - Fail Intervals
         * - 0xf8 - Reserved
         */
        uint64_t ub_mmp_config;

        /*
         * ub_checkpoint_txg indicates two things about the current uberblock:
         *
         * 1] If it is not zero then this uberblock is a checkpoint. If it is
         *    zero, then this uberblock is not a checkpoint.
         *
         * 2] On checkpointed uberblocks, the value of ub_checkpoint_txg is
         *    the ub_txg that the uberblock had at the time we moved it to
         *    the MOS config.
         *
         * The field is set when we checkpoint the uberblock and continues to
         * hold that value even after we've rewound (unlike the ub_txg that
         * is reset to a higher value).
         *
         * Besides checks used to determine whether we are reopening the
         * pool from a checkpointed uberblock [see spa_ld_select_uberblock()],
         * the value of the field is used to determine which ZIL blocks have
         * been allocated according to the ms_sm when we are rewinding to a
         * checkpoint. Specifically, if logical birth > ub_checkpoint_txg, then
         * the ZIL block is not allocated [see uses of spa_min_claim_txg()].
         */
        uint64_t ub_checkpoint_txg;

        uint64_t ub_raidz_reflow_info;
};
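
/*
 * Illustrative sketch (hypothetical caller code, not an interface defined
 * here): minimal checks on an uberblock read from a device label, using the
 * fields and macros above.
 *
 *      struct uberblock ub;
 *      // ... fill ub from one slot of a label's uberblock ring ...
 *      if (ub.ub_magic != UBERBLOCK_MAGIC)
 *              return (B_FALSE);  // not a valid native-endian uberblock
 *      if (MMP_VALID(&ub) && ub.ub_mmp_delay != 0) {
 *              // multihost is active; an importing host should complete an
 *              // activity test before claiming the pool
 *      }
 *      if (ub.ub_checkpoint_txg != 0) {
 *              // this uberblock is a checkpointed uberblock
 *      }
 */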

#ifdef __cplusplus
}
#endif

#endif /* _SYS_UBERBLOCK_IMPL_H */