# zfs/contrib/pyzfs/libzfs_core/_constants.py
#
# Copyright 2015 ClusterHQ
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Important `libzfs_core` constants.
"""
from __future__ import absolute_import, division, print_function
import errno
import sys
# Compat for platform-specific errnos
if sys.platform.startswith('freebsd'):
    ECHRNG = errno.ENXIO
    ECKSUM = 97  # EINTEGRITY
    ETIME = errno.ETIMEDOUT
else:
    ECHRNG = errno.ECHRNG
    ECKSUM = errno.EBADE
    ETIME = errno.ETIME
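# Illustrative sketch (assumed usage, not part of the original module): code
# that inspects an errno returned by libzfs_core can compare it against these
# aliases and behave the same on Linux and FreeBSD, e.g.
#
#   def _is_checksum_error(err):  # hypothetical helper
#       return err == ECKSUM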
# https://stackoverflow.com/a/1695250
def enum_with_offset(offset, sequential, named):
    enums = dict(((b, a + offset) for a, b in enumerate(sequential)), **named)
    return type('Enum', (), enums)


def enum(*sequential, **named):
    return enum_with_offset(0, sequential, named)
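# Example (illustrative only; names below are hypothetical): enum() numbers
# its positional names consecutively from 0, enum_with_offset() from the given
# offset, and keyword arguments supply explicit values, e.g.
#
#   colors = enum('RED', 'GREEN', BLUE=10)
#   assert colors.RED == 0 and colors.GREEN == 1 and colors.BLUE == 10
#
#   errs = enum_with_offset(1024, ['E_FIRST', 'E_SECOND'], {})
#   assert errs.E_FIRST == 1024 and errs.E_SECOND == 1025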
#: Maximum length of any ZFS name.
MAXNAMELEN = 255
#: Default channel program limits
ZCP_DEFAULT_INSTRLIMIT = 10 * 1000 * 1000
ZCP_DEFAULT_MEMLIMIT = 10 * 1024 * 1024
#: Encryption wrapping key length
WRAPPING_KEY_LEN = 32
#: Encryption key location enum
zfs_key_location = enum(
    'ZFS_KEYLOCATION_NONE',
    'ZFS_KEYLOCATION_PROMPT',
    'ZFS_KEYLOCATION_URI'
)
#: Encryption key format enum
zfs_keyformat = enum(
    'ZFS_KEYFORMAT_NONE',
    'ZFS_KEYFORMAT_RAW',
    'ZFS_KEYFORMAT_HEX',
    'ZFS_KEYFORMAT_PASSPHRASE'
)
# Encryption algorithms enum
zio_encrypt = enum(
    'ZIO_CRYPT_INHERIT',
    'ZIO_CRYPT_ON',
    'ZIO_CRYPT_OFF',
    'ZIO_CRYPT_AES_128_CCM',
    'ZIO_CRYPT_AES_192_CCM',
    'ZIO_CRYPT_AES_256_CCM',
    'ZIO_CRYPT_AES_128_GCM',
    'ZIO_CRYPT_AES_192_GCM',
    'ZIO_CRYPT_AES_256_GCM'
)
# ZFS-specific error codes
zfs_errno = enum_with_offset(1024, [
        'ZFS_ERR_CHECKPOINT_EXISTS',
        'ZFS_ERR_DISCARDING_CHECKPOINT',
        'ZFS_ERR_NO_CHECKPOINT',
        'ZFS_ERR_DEVRM_IN_PROGRESS',
        'ZFS_ERR_VDEV_TOO_BIG',
        'ZFS_ERR_IOC_CMD_UNAVAIL',
        'ZFS_ERR_IOC_ARG_UNAVAIL',
        'ZFS_ERR_IOC_ARG_REQUIRED',
        'ZFS_ERR_IOC_ARG_BADTYPE',
        'ZFS_ERR_WRONG_PARENT',
        'ZFS_ERR_FROM_IVSET_GUID_MISSING',
        'ZFS_ERR_FROM_IVSET_GUID_MISMATCH',
        'ZFS_ERR_SPILL_BLOCK_FLAG_MISSING',
        'ZFS_ERR_UNKNOWN_SEND_STREAM_FEATURE',
        'ZFS_ERR_EXPORT_IN_PROGRESS',
        'ZFS_ERR_BOOKMARK_SOURCE_NOT_ANCESTOR',
        'ZFS_ERR_STREAM_TRUNCATED',
        'ZFS_ERR_STREAM_LARGE_BLOCK_MISMATCH',
        'ZFS_ERR_RESILVER_IN_PROGRESS',
        'ZFS_ERR_REBUILD_IN_PROGRESS',
        'ZFS_ERR_BADPROP',
        'ZFS_ERR_VDEV_NOTSUP',
        'ZFS_ERR_NOT_USER_NAMESPACE',
        'ZFS_ERR_RESUME_EXISTS',
        'ZFS_ERR_CRYPTO_NOTSUP',
        'ZFS_ERR_RAIDZ_EXPAND_IN_PROGRESS',
    ],
    {}
)
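# Illustrative note (an assumption, not stated in this module): the 1024
# offset presumably keeps these ZFS-specific codes above the range of
# ordinary errno values, so a caller can tell the two apart, e.g.
#
#   def _is_zfs_specific(err):  # hypothetical helper
#       return err >= zfs_errno.ZFS_ERR_CHECKPOINT_EXISTS  # i.e. err >= 1024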
# compat before we used the enum helper for these values
ZFS_ERR_CHECKPOINT_EXISTS = zfs_errno.ZFS_ERR_CHECKPOINT_EXISTS
assert (ZFS_ERR_CHECKPOINT_EXISTS == 1024)
ZFS_ERR_DISCARDING_CHECKPOINT = zfs_errno.ZFS_ERR_DISCARDING_CHECKPOINT
ZFS_ERR_NO_CHECKPOINT = zfs_errno.ZFS_ERR_NO_CHECKPOINT
ZFS_ERR_DEVRM_IN_PROGRESS = zfs_errno.ZFS_ERR_DEVRM_IN_PROGRESS
ZFS_ERR_VDEV_TOO_BIG = zfs_errno.ZFS_ERR_VDEV_TOO_BIG
ZFS_ERR_WRONG_PARENT = zfs_errno.ZFS_ERR_WRONG_PARENT
ZFS_ERR_VDEV_NOTSUP = zfs_errno.ZFS_ERR_VDEV_NOTSUP
ZFS_ERR_RAIDZ_EXPAND_IN_PROGRESS = zfs_errno.ZFS_ERR_RAIDZ_EXPAND_IN_PROGRESS
# vim: softtabstop=4 tabstop=4 expandtab shiftwidth=4