<abi-corpus version='2.0' architecture='elf-amd-x86_64' soname='libzfsbootenv.so.1'>
  <elf-needed>
    <dependency name='libzfs.so.4'/>
    <dependency name='libnvpair.so.3'/>
    <dependency name='libc.so.6'/>
  </elf-needed>
  <elf-function-symbols>
    <elf-symbol name='lzbe_add_pair' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>
    <elf-symbol name='lzbe_bootenv_print' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>
    <elf-symbol name='lzbe_get_boot_device' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>
    <elf-symbol name='lzbe_nvlist_free' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>
    <elf-symbol name='lzbe_nvlist_get' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>
    <elf-symbol name='lzbe_nvlist_set' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>
    <elf-symbol name='lzbe_remove_pair' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>
    <elf-symbol name='lzbe_set_boot_device' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>
  </elf-function-symbols>
  <abi-instr address-size='64' path='lzbe_device.c' language='LANG_C99'>
    <type-decl name='char' size-in-bits='8' id='a84c031d'/>
    <type-decl name='int' size-in-bits='32' id='95e97e5e'/>
    <type-decl name='unnamed-enum-underlying-type-32' is-anonymous='yes' size-in-bits='32' alignment-in-bits='32' id='9cac1fee'/>
    <type-decl name='void' id='48b5725f'/>
    <enum-decl name='lzbe_flags' id='2b77720b'>
      <underlying-type type-id='9cac1fee'/>
      <enumerator name='lzbe_add' value='0'/>
      <enumerator name='lzbe_replace' value='1'/>
    </enum-decl>
    <typedef-decl name='lzbe_flags_t' type-id='2b77720b' id='a1936f04'/>
    <pointer-type-def type-id='a84c031d' size-in-bits='64' id='26a90f95'/>
    <pointer-type-def type-id='26a90f95' size-in-bits='64' id='9b23c9ad'/>
    <qualified-type-def type-id='a84c031d' const='yes' id='9b45d938'/>
    <pointer-type-def type-id='9b45d938' size-in-bits='64' id='80f4b756'/>
    <function-decl name='lzbe_set_boot_device' mangled-name='lzbe_set_boot_device' visibility='default' binding='global' size-in-bits='64' elf-symbol-id='lzbe_set_boot_device'>
      <parameter type-id='80f4b756' name='pool'/>
      <parameter type-id='a1936f04' name='flag'/>
      <parameter type-id='80f4b756' name='device'/>
      <return type-id='95e97e5e'/>
    </function-decl>
    <function-decl name='lzbe_get_boot_device' mangled-name='lzbe_get_boot_device' visibility='default' binding='global' size-in-bits='64' elf-symbol-id='lzbe_get_boot_device'>
      <parameter type-id='80f4b756' name='pool'/>
      <parameter type-id='9b23c9ad' name='device'/>
      <return type-id='95e97e5e'/>
    </function-decl>
  </abi-instr>
  <abi-instr address-size='64' path='lzbe_pair.c' language='LANG_C99'>
    <type-decl name='unsigned long int' size-in-bits='64' id='7359adad'/>
    <typedef-decl name='size_t' type-id='7359adad' id='b59d7dce'/>
    <pointer-type-def type-id='48b5725f' size-in-bits='64' id='eaa32e2f'/>
    <pointer-type-def type-id='eaa32e2f' size-in-bits='64' id='63e171df'/>
    <function-decl name='lzbe_nvlist_get' mangled-name='lzbe_nvlist_get' visibility='default' binding='global' size-in-bits='64' elf-symbol-id='lzbe_nvlist_get'>
      <parameter type-id='80f4b756' name='pool'/>
      <parameter type-id='80f4b756' name='key'/>
      <parameter type-id='63e171df' name='ptr'/>
      <return type-id='95e97e5e'/>
    </function-decl>
    <function-decl name='lzbe_nvlist_set' mangled-name='lzbe_nvlist_set' visibility='default' binding='global' size-in-bits='64' elf-symbol-id='lzbe_nvlist_set'>
      <parameter type-id='80f4b756' name='pool'/>
      <parameter type-id='80f4b756' name='key'/>
      <parameter type-id='eaa32e2f' name='ptr'/>
      <return type-id='95e97e5e'/>
    </function-decl>
    <function-decl name='lzbe_nvlist_free' mangled-name='lzbe_nvlist_free' visibility='default' binding='global' size-in-bits='64' elf-symbol-id='lzbe_nvlist_free'>
      <parameter type-id='eaa32e2f' name='ptr'/>
      <return type-id='48b5725f'/>
    </function-decl>
    <function-decl name='lzbe_add_pair' mangled-name='lzbe_add_pair' visibility='default' binding='global' size-in-bits='64' elf-symbol-id='lzbe_add_pair'>
      <parameter type-id='eaa32e2f' name='ptr'/>
      <parameter type-id='80f4b756' name='key'/>
      <parameter type-id='80f4b756' name='type'/>
      <parameter type-id='eaa32e2f' name='value'/>
      <parameter type-id='b59d7dce' name='size'/>
      <return type-id='95e97e5e'/>
    </function-decl>
    <function-decl name='lzbe_remove_pair' mangled-name='lzbe_remove_pair' visibility='default' binding='global' size-in-bits='64' elf-symbol-id='lzbe_remove_pair'>
      <parameter type-id='eaa32e2f' name='ptr'/>
      <parameter type-id='80f4b756' name='key'/>
      <return type-id='95e97e5e'/>
    </function-decl>
  </abi-instr>
  <abi-instr address-size='64' path='lzbe_util.c' language='LANG_C99'>
    <array-type-def dimensions='1' type-id='a84c031d' size-in-bits='8' id='89feb1ec'>
      <subrange length='1' type-id='7359adad' id='52f813b4'/>
    </array-type-def>
    <array-type-def dimensions='1' type-id='a84c031d' size-in-bits='160' id='664ac0b7'>
      <subrange length='20' type-id='7359adad' id='fdca39cf'/>
    </array-type-def>
    <class-decl name='_IO_codecvt' is-struct='yes' visibility='default' is-declaration-only='yes' id='a4036571'/>
    <class-decl name='_IO_marker' is-struct='yes' visibility='default' is-declaration-only='yes' id='010ae0b9'/>
    <class-decl name='_IO_wide_data' is-struct='yes' visibility='default' is-declaration-only='yes' id='79bd3751'/>
    <type-decl name='long int' size-in-bits='64' id='bd54fe1a'/>
    <type-decl name='signed char' size-in-bits='8' id='28577a57'/>
    <type-decl name='unsigned short int' size-in-bits='16' id='8efea9e5'/>
    <typedef-decl name='__off_t' type-id='bd54fe1a' id='79989e9c'/>
    <typedef-decl name='__off64_t' type-id='bd54fe1a' id='724e4de6'/>
    <typedef-decl name='FILE' type-id='ec1ed955' id='aa12d1ba'/>
    <typedef-decl name='_IO_lock_t' type-id='48b5725f' id='bb4788fa'/>
<class-decl name='_IO_FILE' size-in-bits='1728' is-struct='yes' visibility='default' id='ec1ed955'>
|
2020-11-15 04:38:34 +00:00
|
|
|
<data-member access='public' layout-offset-in-bits='0'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_flags' type-id='95e97e5e' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='64'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_IO_read_ptr' type-id='26a90f95' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='128'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_IO_read_end' type-id='26a90f95' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='192'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_IO_read_base' type-id='26a90f95' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='256'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_IO_write_base' type-id='26a90f95' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='320'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_IO_write_ptr' type-id='26a90f95' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='384'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_IO_write_end' type-id='26a90f95' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='448'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_IO_buf_base' type-id='26a90f95' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='512'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_IO_buf_end' type-id='26a90f95' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='576'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_IO_save_base' type-id='26a90f95' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='640'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_IO_backup_base' type-id='26a90f95' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='704'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_IO_save_end' type-id='26a90f95' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='768'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_markers' type-id='e4c6fa61' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='832'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_chain' type-id='dca988a5' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='896'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_fileno' type-id='95e97e5e' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='928'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_flags2' type-id='95e97e5e' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='960'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_old_offset' type-id='79989e9c' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='1024'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_cur_column' type-id='8efea9e5' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='1040'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_vtable_offset' type-id='28577a57' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='1048'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_shortbuf' type-id='89feb1ec' visibility='default'/>
|
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='1088'>
|
|
|
|
<var-decl name='_lock' type-id='cecf4ea7' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='1152'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_offset' type-id='724e4de6' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='1216'>
|
Improve zpool status output, list all affected datasets
Currently, determining which datasets are affected by corruption is
a manual process.
The primary difficulty in reporting the list of affected snapshots is
that since the error was initially found, the snapshot where the error
originally occurred in, may have been deleted. To solve this issue, we
add the ID of the head dataset of the original snapshot which the error
was detected in, to the stored error report. Then any time a filesystem
is deleted, the errors associated with it are deleted as well. Any time
a clone promote occurs, we modify reports associated with the original
head to refer to the new head. The stored error reports are identified
by this head ID, the birth time of the block which the error occurred
in, as well as some information about the error itself are also stored.
Once this information is stored, we can find the set of datasets
affected by an error by walking back the list of snapshots in the given
head until we find one with the appropriate birth txg, and then traverse
through the snapshots of the clone family, terminating a branch if the
block was replaced in a given snapshot. Then we report this information
back to libzfs, and to the zpool status command, where it is displayed
as follows:
pool: test
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
scan: scrub repaired 0B in 00:00:00 with 800 errors on Fri Dec 3
08:27:57 2021
config:
NAME STATE READ WRITE CKSUM
test ONLINE 0 0 0
sdb ONLINE 0 0 1.58K
errors: Permanent errors have been detected in the following files:
test@1:/test.0.0
/test/test.0.0
/test/1clone/test.0.0
A new feature flag is introduced to mark the presence of this change, as
well as promotion and backwards compatibility logic. This is an updated
version of #9175. Rebase required fixing the tests, updating the ABI of
libzfs, updating the man pages, fixing bugs, fixing the error returns,
and updating the old on-disk error logs to the new format when
activating the feature.
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Co-authored-by: TulsiJain <tulsi.jain@delphix.com>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Closes #9175
Closes #12812
2022-04-26 00:25:42 +00:00
|
|
|
<var-decl name='_codecvt' type-id='570f8c59' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='1280'>
|
Improve zpool status output, list all affected datasets
Currently, determining which datasets are affected by corruption is
a manual process.
The primary difficulty in reporting the list of affected snapshots is
that since the error was initially found, the snapshot where the error
originally occurred in, may have been deleted. To solve this issue, we
add the ID of the head dataset of the original snapshot which the error
was detected in, to the stored error report. Then any time a filesystem
is deleted, the errors associated with it are deleted as well. Any time
a clone promote occurs, we modify reports associated with the original
head to refer to the new head. The stored error reports are identified
by this head ID, the birth time of the block which the error occurred
in, as well as some information about the error itself are also stored.
Once this information is stored, we can find the set of datasets
affected by an error by walking back the list of snapshots in the given
head until we find one with the appropriate birth txg, and then traverse
through the snapshots of the clone family, terminating a branch if the
block was replaced in a given snapshot. Then we report this information
back to libzfs, and to the zpool status command, where it is displayed
as follows:
pool: test
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
scan: scrub repaired 0B in 00:00:00 with 800 errors on Fri Dec 3
08:27:57 2021
config:
NAME STATE READ WRITE CKSUM
test ONLINE 0 0 0
sdb ONLINE 0 0 1.58K
errors: Permanent errors have been detected in the following files:
test@1:/test.0.0
/test/test.0.0
/test/1clone/test.0.0
A new feature flag is introduced to mark the presence of this change, as
well as promotion and backwards compatibility logic. This is an updated
version of #9175. Rebase required fixing the tests, updating the ABI of
libzfs, updating the man pages, fixing bugs, fixing the error returns,
and updating the old on-disk error logs to the new format when
activating the feature.
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Co-authored-by: TulsiJain <tulsi.jain@delphix.com>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Closes #9175
Closes #12812
2022-04-26 00:25:42 +00:00
|
|
|
<var-decl name='_wide_data' type-id='c65a1f29' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='1344'>
|
Improve zpool status output, list all affected datasets
Currently, determining which datasets are affected by corruption is
a manual process.
The primary difficulty in reporting the list of affected snapshots is
that since the error was initially found, the snapshot where the error
originally occurred in, may have been deleted. To solve this issue, we
add the ID of the head dataset of the original snapshot which the error
was detected in, to the stored error report. Then any time a filesystem
is deleted, the errors associated with it are deleted as well. Any time
a clone promote occurs, we modify reports associated with the original
head to refer to the new head. The stored error reports are identified
by this head ID, the birth time of the block which the error occurred
in, as well as some information about the error itself are also stored.
Once this information is stored, we can find the set of datasets
affected by an error by walking back the list of snapshots in the given
head until we find one with the appropriate birth txg, and then traverse
through the snapshots of the clone family, terminating a branch if the
block was replaced in a given snapshot. Then we report this information
back to libzfs, and to the zpool status command, where it is displayed
as follows:
pool: test
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
scan: scrub repaired 0B in 00:00:00 with 800 errors on Fri Dec 3
08:27:57 2021
config:
NAME STATE READ WRITE CKSUM
test ONLINE 0 0 0
sdb ONLINE 0 0 1.58K
errors: Permanent errors have been detected in the following files:
test@1:/test.0.0
/test/test.0.0
/test/1clone/test.0.0
A new feature flag is introduced to mark the presence of this change, as
well as promotion and backwards compatibility logic. This is an updated
version of #9175. Rebase required fixing the tests, updating the ABI of
libzfs, updating the man pages, fixing bugs, fixing the error returns,
and updating the old on-disk error logs to the new format when
activating the feature.
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Co-authored-by: TulsiJain <tulsi.jain@delphix.com>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Closes #9175
Closes #12812
2022-04-26 00:25:42 +00:00
|
|
|
<var-decl name='_freeres_list' type-id='dca988a5' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='1408'>
|
Improve zpool status output, list all affected datasets
Currently, determining which datasets are affected by corruption is
a manual process.
The primary difficulty in reporting the list of affected snapshots is
that since the error was initially found, the snapshot where the error
originally occurred in, may have been deleted. To solve this issue, we
add the ID of the head dataset of the original snapshot which the error
was detected in, to the stored error report. Then any time a filesystem
is deleted, the errors associated with it are deleted as well. Any time
a clone promote occurs, we modify reports associated with the original
head to refer to the new head. The stored error reports are identified
by this head ID, the birth time of the block which the error occurred
in, as well as some information about the error itself are also stored.
Once this information is stored, we can find the set of datasets
affected by an error by walking back the list of snapshots in the given
head until we find one with the appropriate birth txg, and then traverse
through the snapshots of the clone family, terminating a branch if the
block was replaced in a given snapshot. Then we report this information
back to libzfs, and to the zpool status command, where it is displayed
as follows:
pool: test
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
scan: scrub repaired 0B in 00:00:00 with 800 errors on Fri Dec 3
08:27:57 2021
config:
NAME STATE READ WRITE CKSUM
test ONLINE 0 0 0
sdb ONLINE 0 0 1.58K
errors: Permanent errors have been detected in the following files:
test@1:/test.0.0
/test/test.0.0
/test/1clone/test.0.0
A new feature flag is introduced to mark the presence of this change, as
well as promotion and backwards compatibility logic. This is an updated
version of #9175. Rebase required fixing the tests, updating the ABI of
libzfs, updating the man pages, fixing bugs, fixing the error returns,
and updating the old on-disk error logs to the new format when
activating the feature.
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Co-authored-by: TulsiJain <tulsi.jain@delphix.com>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Closes #9175
Closes #12812
2022-04-26 00:25:42 +00:00
|
|
|
<var-decl name='_freeres_buf' type-id='eaa32e2f' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='1472'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='__pad5' type-id='b59d7dce' visibility='default'/>
|
2020-11-15 04:38:34 +00:00
|
|
|
</data-member>
|
|
|
|
<data-member access='public' layout-offset-in-bits='1536'>
|
2021-08-31 19:26:30 +00:00
|
|
|
<var-decl name='_mode' type-id='95e97e5e' visibility='default'/>
</data-member>
<data-member access='public' layout-offset-in-bits='1568'>
<var-decl name='_unused2' type-id='664ac0b7' visibility='default'/>
</data-member>
</class-decl>
<pointer-type-def type-id='aa12d1ba' size-in-bits='64' id='822cd80b'/>
<pointer-type-def type-id='ec1ed955' size-in-bits='64' id='dca988a5'/>
<pointer-type-def type-id='a4036571' size-in-bits='64' id='570f8c59'/>
<pointer-type-def type-id='bb4788fa' size-in-bits='64' id='cecf4ea7'/>
<pointer-type-def type-id='010ae0b9' size-in-bits='64' id='e4c6fa61'/>
<pointer-type-def type-id='79bd3751' size-in-bits='64' id='c65a1f29'/>
<class-decl name='_IO_codecvt' is-struct='yes' visibility='default' is-declaration-only='yes' id='a4036571'/>
<class-decl name='_IO_marker' is-struct='yes' visibility='default' is-declaration-only='yes' id='010ae0b9'/>
<class-decl name='_IO_wide_data' is-struct='yes' visibility='default' is-declaration-only='yes' id='79bd3751'/>
<function-decl name='lzbe_bootenv_print' mangled-name='lzbe_bootenv_print' visibility='default' binding='global' size-in-bits='64' elf-symbol-id='lzbe_bootenv_print'>
<parameter type-id='80f4b756' name='pool'/>
<parameter type-id='80f4b756' name='nvlist'/>
<parameter type-id='822cd80b' name='of'/>
<return type-id='95e97e5e'/>
</function-decl>
</abi-instr>
</abi-corpus>