/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright (c) 2011, 2018 by Delphix. All rights reserved.
 * Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
 * Copyright (c) 2013, Joyent, Inc. All rights reserved.
 * Copyright (c) 2014, Nexenta Systems, Inc. All rights reserved.
 * Copyright (c) 2017, Intel Corporation.
 * Copyright (c) 2019, Klara Inc.
 * Copyright (c) 2019, Allan Jude
 */

#ifndef _KERNEL
#include <errno.h>
#include <string.h>
#include <dirent.h>
#include <search.h>
#include <sys/stat.h>
#endif
#include <sys/debug.h>
#include <sys/fs/zfs.h>
#include <sys/inttypes.h>
#include <sys/types.h>
#include <sys/param.h>
#include <sys/zfs_sysfs.h>
#include "zfeature_common.h"

/*
 * Set to disable all feature checks while opening pools, allowing pools with
 * unsupported features to be opened. Set for testing only.
 */
boolean_t zfeature_checks_disable = B_FALSE;

zfeature_info_t spa_feature_table[SPA_FEATURES];

/*
 * Valid characters for feature guids. This list is mainly for aesthetic
 * purposes and could be expanded in the future. There are different allowed
 * characters in the guids reverse dns portion (before the colon) and its
 * short name (after the colon).
 */
static int
valid_char(char c, boolean_t after_colon)
{
	return ((c >= 'a' && c <= 'z') ||
	    (c >= '0' && c <= '9') ||
	    (after_colon && c == '_') ||
	    (!after_colon && (c == '.' || c == '-')));
}

/*
 * Every feature guid must contain exactly one colon which separates a reverse
 * dns organization name from the feature's "short" name (e.g.
 * "com.company:feature_name").
 */
boolean_t
zfeature_is_valid_guid(const char *name)
{
	int i;
	boolean_t has_colon = B_FALSE;

	i = 0;
	while (name[i] != '\0') {
		char c = name[i++];
		if (c == ':') {
			if (has_colon)
				return (B_FALSE);
			has_colon = B_TRUE;
			continue;
		}
		if (!valid_char(c, has_colon))
			return (B_FALSE);
	}

	return (has_colon);
}
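
/*
 * Illustrative examples (not part of the original source): how
 * zfeature_is_valid_guid() evaluates a few candidate guids.
 *
 *	"com.delphix:async_destroy"	-> B_TRUE
 *	"com.delphix.async_destroy"	-> B_FALSE (no colon)
 *	"com.delphix:async:destroy"	-> B_FALSE (more than one colon)
 *	"com.delphix:Async_Destroy"	-> B_FALSE (upper case is not allowed)
 */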

boolean_t
zfeature_is_supported(const char *guid)
{
	if (zfeature_checks_disable)
		return (B_TRUE);

	for (spa_feature_t i = 0; i < SPA_FEATURES; i++) {
		zfeature_info_t *feature = &spa_feature_table[i];
		if (!feature->fi_zfs_mod_supported)
			continue;
		if (strcmp(guid, feature->fi_guid) == 0)
			return (B_TRUE);
	}
	return (B_FALSE);
}

int
zfeature_lookup_guid(const char *guid, spa_feature_t *res)
{
	for (spa_feature_t i = 0; i < SPA_FEATURES; i++) {
		zfeature_info_t *feature = &spa_feature_table[i];
		if (!feature->fi_zfs_mod_supported)
			continue;
		if (strcmp(guid, feature->fi_guid) == 0) {
			if (res != NULL)
				*res = i;
			return (0);
		}
	}

	return (ENOENT);
}

int
zfeature_lookup_name(const char *name, spa_feature_t *res)
{
	for (spa_feature_t i = 0; i < SPA_FEATURES; i++) {
		zfeature_info_t *feature = &spa_feature_table[i];
		if (!feature->fi_zfs_mod_supported)
			continue;
		if (strcmp(name, feature->fi_uname) == 0) {
			if (res != NULL)
				*res = i;
			return (0);
		}
	}

	return (ENOENT);
}
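
/*
 * Illustrative sketch (not part of the original source): resolving a feature
 * index from its short name or full guid.  ZFEATURE_EXAMPLE_LOOKUP is a
 * hypothetical guard, so this never builds by default.
 */
#ifdef ZFEATURE_EXAMPLE_LOOKUP
static void
zfeature_lookup_example(void)
{
	spa_feature_t fid;

	/* Short names ("hole_birth") are resolved by zfeature_lookup_name(). */
	if (zfeature_lookup_name("hole_birth", &fid) == 0)
		ASSERT3U(fid, ==, SPA_FEATURE_HOLE_BIRTH);

	/* Full guids use zfeature_lookup_guid(); both return ENOENT on miss. */
	if (zfeature_lookup_guid("com.delphix:hole_birth", &fid) == 0)
		ASSERT3U(fid, ==, SPA_FEATURE_HOLE_BIRTH);
}
#endif	/* ZFEATURE_EXAMPLE_LOOKUP */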

boolean_t
zfeature_depends_on(spa_feature_t fid, spa_feature_t check)
{
	zfeature_info_t *feature = &spa_feature_table[fid];

	for (int i = 0; feature->fi_depends[i] != SPA_FEATURE_NONE; i++) {
		if (feature->fi_depends[i] == check)
			return (B_TRUE);
	}
	return (B_FALSE);
}
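
/*
 * Illustrative example (not part of the original source): given the
 * registrations in zpool_feature_init() below, "bookmarks" depends on
 * "extensible_dataset", so
 *
 *	zfeature_depends_on(SPA_FEATURE_BOOKMARKS,
 *	    SPA_FEATURE_EXTENSIBLE_DATASET)
 *
 * returns B_TRUE, while the reverse query returns B_FALSE.
 */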

static boolean_t
deps_contains_feature(const spa_feature_t *deps, const spa_feature_t feature)
{
	for (int i = 0; deps[i] != SPA_FEATURE_NONE; i++)
		if (deps[i] == feature)
			return (B_TRUE);

	return (B_FALSE);
}

#define	STRCMP ((int(*)(const void *, const void *))&strcmp)
struct zfs_mod_supported_features {
	void *tree;
	boolean_t all_features;
};

struct zfs_mod_supported_features *
zfs_mod_list_supported(const char *scope)
{
#if defined(__FreeBSD__) || defined(_KERNEL) || defined(LIB_ZPOOL_BUILD)
	(void) scope;
	return (NULL);
#else
	struct zfs_mod_supported_features *ret = calloc(1, sizeof (*ret));
	if (ret == NULL)
		return (NULL);

	DIR *sysfs_dir = NULL;
	char path[128];

	if (snprintf(path, sizeof (path), "%s/%s",
	    ZFS_SYSFS_DIR, scope) < sizeof (path))
		sysfs_dir = opendir(path);
	if (sysfs_dir == NULL && errno == ENOENT) {
		if (snprintf(path, sizeof (path), "%s/%s",
		    ZFS_SYSFS_ALT_DIR, scope) < sizeof (path))
			sysfs_dir = opendir(path);
	}
	if (sysfs_dir == NULL) {
		ret->all_features = errno == ENOENT &&
		    (access(ZFS_SYSFS_DIR, F_OK) == 0 ||
		    access(ZFS_SYSFS_ALT_DIR, F_OK) == 0);
		return (ret);
	}

	struct dirent *node;
	while ((node = readdir(sysfs_dir)) != NULL) {
		if (strcmp(node->d_name, ".") == 0 ||
		    strcmp(node->d_name, "..") == 0)
			continue;

		char *name = strdup(node->d_name);
		if (name == NULL) {
			goto nomem;
		}

		if (tsearch(name, &ret->tree, STRCMP) == NULL) {
			/*
			 * Don't bother checking for duplicate entries:
			 * we're iterating a single directory.
			 */
			free(name);
			goto nomem;
		}
	}

end:
	closedir(sysfs_dir);
	return (ret);

nomem:
	zfs_mod_list_supported_free(ret);
	ret = NULL;
	goto end;
#endif
}

void
zfs_mod_list_supported_free(struct zfs_mod_supported_features *list)
{
#if !defined(__FreeBSD__) && !defined(_KERNEL) && !defined(LIB_ZPOOL_BUILD)
	if (list) {
		tdestroy(list->tree, free);
		free(list);
	}
#else
	(void) list;
#endif
}

#if !defined(_KERNEL) && !defined(LIB_ZPOOL_BUILD)
static boolean_t
zfs_mod_supported_impl(const char *scope, const char *name, const char *sysfs)
{
	char path[128];
	if (snprintf(path, sizeof (path), "%s%s%s%s%s", sysfs,
	    scope == NULL ? "" : "/", scope ?: "",
	    name == NULL ? "" : "/", name ?: "") < sizeof (path))
		return (access(path, F_OK) == 0);
	else
		return (B_FALSE);
}

boolean_t
zfs_mod_supported(const char *scope, const char *name,
    const struct zfs_mod_supported_features *sfeatures)
{
	boolean_t supported;

	if (sfeatures != NULL)
		return (sfeatures->all_features ||
		    tfind(name, &sfeatures->tree, STRCMP));

	/*
	 * Check both the primary and alternate sysfs locations to determine
	 * if the required functionality is supported.
	 */
	supported = (zfs_mod_supported_impl(scope, name, ZFS_SYSFS_DIR) ||
	    zfs_mod_supported_impl(scope, name, ZFS_SYSFS_ALT_DIR));

	/*
	 * For backwards compatibility with kernel modules that predate
	 * supported feature/property checking.  Report the feature/property
	 * as supported if the kernel module is loaded but the requested
	 * scope directory does not exist.
	 */
	if (supported == B_FALSE) {
		if ((access(ZFS_SYSFS_DIR, F_OK) == 0 &&
		    !zfs_mod_supported_impl(scope, NULL, ZFS_SYSFS_DIR)) ||
		    (access(ZFS_SYSFS_ALT_DIR, F_OK) == 0 &&
		    !zfs_mod_supported_impl(scope, NULL, ZFS_SYSFS_ALT_DIR))) {
			supported = B_TRUE;
		}
	}

	return (supported);
}
#endif
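
/*
 * Illustrative sketch (not part of the original source): how a userland
 * caller might combine zfs_mod_list_supported() with zfs_mod_supported().
 * ZFEATURE_EXAMPLE_SYSFS is a hypothetical guard, so this never builds by
 * default; the functions it calls exist only in non-kernel, non-libzpool
 * builds.
 */
#ifdef ZFEATURE_EXAMPLE_SYSFS
#include <stdio.h>

static void
zfs_mod_supported_example(void)
{
	struct zfs_mod_supported_features *sfeatures =
	    zfs_mod_list_supported(ZFS_SYSFS_POOL_FEATURES);

	/* A NULL list is always valid; callers then fall back to access(). */
	if (zfs_mod_supported(ZFS_SYSFS_POOL_FEATURES,
	    "com.delphix:async_destroy", sfeatures))
		(void) printf("async_destroy is supported\n");

	zfs_mod_list_supported_free(sfeatures);
}
#endif	/* ZFEATURE_EXAMPLE_SYSFS */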

static boolean_t
zfs_mod_supported_feature(const char *name,
    const struct zfs_mod_supported_features *sfeatures)
{
	/*
	 * The zfs module spa_feature_table[], whether in-kernel or in
	 * libzpool, always supports all the features. libzfs needs to
	 * query the running module, via sysfs, to determine which
	 * features are supported.
	 *
	 * The equivalent _can_ be done on FreeBSD by way of the sysctl
	 * tree, but this has not been done yet.  Therefore, we return
	 * that all features are supported.
	 */

#if defined(_KERNEL) || defined(LIB_ZPOOL_BUILD) || defined(__FreeBSD__)
	(void) name, (void) sfeatures;
	return (B_TRUE);
#else
	return (zfs_mod_supported(ZFS_SYSFS_POOL_FEATURES, name, sfeatures));
#endif
}

static void
zfeature_register(spa_feature_t fid, const char *guid, const char *name,
    const char *desc, zfeature_flags_t flags, zfeature_type_t type,
    const spa_feature_t *deps,
    const struct zfs_mod_supported_features *sfeatures)
{
	zfeature_info_t *feature = &spa_feature_table[fid];
	static const spa_feature_t nodeps[] = { SPA_FEATURE_NONE };

	ASSERT(name != NULL);
	ASSERT(desc != NULL);
	ASSERT((flags & ZFEATURE_FLAG_READONLY_COMPAT) == 0 ||
	    (flags & ZFEATURE_FLAG_MOS) == 0);
	ASSERT3U(fid, <, SPA_FEATURES);
	ASSERT(zfeature_is_valid_guid(guid));

	if (deps == NULL)
		deps = nodeps;

	VERIFY(((flags & ZFEATURE_FLAG_PER_DATASET) == 0) ||
	    (deps_contains_feature(deps, SPA_FEATURE_EXTENSIBLE_DATASET)));

	feature->fi_feature = fid;
	feature->fi_guid = guid;
	feature->fi_uname = name;
	feature->fi_desc = desc;
	feature->fi_flags = flags;
	feature->fi_type = type;
	feature->fi_depends = deps;
	feature->fi_zfs_mod_supported =
	    zfs_mod_supported_feature(guid, sfeatures);
}

/*
 * Every feature has a GUID of the form com.example:feature_name.  The
 * reversed DNS name ensures that the feature's GUID is unique across all ZFS
 * implementations.  This allows companies to independently develop and
 * release features.  Examples include org.delphix and org.datto.  Previously,
 * features developed on one implementation have used that implementation's
 * domain name (e.g. org.illumos and org.zfsonlinux).  Use of the org.openzfs
 * domain name is recommended for new features which are developed by the
 * OpenZFS community and its platforms.  This domain may optionally be used by
 * companies developing features for initial release through an OpenZFS
 * implementation.  Use of the org.openzfs domain requires reserving the
 * feature name in advance with the OpenZFS project.
 */
void
zpool_feature_init(void)
{
	struct zfs_mod_supported_features *sfeatures =
	    zfs_mod_list_supported(ZFS_SYSFS_POOL_FEATURES);

	zfeature_register(SPA_FEATURE_ASYNC_DESTROY,
	    "com.delphix:async_destroy", "async_destroy",
	    "Destroy filesystems asynchronously.",
	    ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN, NULL,
	    sfeatures);

	zfeature_register(SPA_FEATURE_EMPTY_BPOBJ,
	    "com.delphix:empty_bpobj", "empty_bpobj",
	    "Snapshots use less space.",
	    ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN, NULL,
	    sfeatures);

	zfeature_register(SPA_FEATURE_LZ4_COMPRESS,
	    "org.illumos:lz4_compress", "lz4_compress",
	    "LZ4 compression algorithm support.",
	    ZFEATURE_FLAG_ACTIVATE_ON_ENABLE, ZFEATURE_TYPE_BOOLEAN, NULL,
	    sfeatures);

	zfeature_register(SPA_FEATURE_MULTI_VDEV_CRASH_DUMP,
	    "com.joyent:multi_vdev_crash_dump", "multi_vdev_crash_dump",
	    "Crash dumps to multiple vdev pools.",
	    0, ZFEATURE_TYPE_BOOLEAN, NULL, sfeatures);
	zfeature_register(SPA_FEATURE_SPACEMAP_HISTOGRAM,
	    "com.delphix:spacemap_histogram", "spacemap_histogram",
	    "Spacemaps maintain space histograms.",
	    ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN, NULL,
	    sfeatures);

	zfeature_register(SPA_FEATURE_ENABLED_TXG,
	    "com.delphix:enabled_txg", "enabled_txg",
	    "Record txg at which a feature is enabled",
	    ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN, NULL,
	    sfeatures);

	{
	static const spa_feature_t hole_birth_deps[] = {
		SPA_FEATURE_ENABLED_TXG,
		SPA_FEATURE_NONE
	};
	zfeature_register(SPA_FEATURE_HOLE_BIRTH,
	    "com.delphix:hole_birth", "hole_birth",
	    "Retain hole birth txg for more precise zfs send",
	    ZFEATURE_FLAG_MOS | ZFEATURE_FLAG_ACTIVATE_ON_ENABLE,
	    ZFEATURE_TYPE_BOOLEAN, hole_birth_deps, sfeatures);
	}

	zfeature_register(SPA_FEATURE_POOL_CHECKPOINT,
	    "com.delphix:zpool_checkpoint", "zpool_checkpoint",
	    "Pool state can be checkpointed, allowing rewind later.",
	    ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN, NULL,
	    sfeatures);

	zfeature_register(SPA_FEATURE_SPACEMAP_V2,
	    "com.delphix:spacemap_v2", "spacemap_v2",
	    "Space maps representing large segments are more efficient.",
	    ZFEATURE_FLAG_READONLY_COMPAT | ZFEATURE_FLAG_ACTIVATE_ON_ENABLE,
	    ZFEATURE_TYPE_BOOLEAN, NULL, sfeatures);

	zfeature_register(SPA_FEATURE_EXTENSIBLE_DATASET,
	    "com.delphix:extensible_dataset", "extensible_dataset",
	    "Enhanced dataset functionality, used by other features.",
	    0, ZFEATURE_TYPE_BOOLEAN, NULL, sfeatures);

	{
	static const spa_feature_t bookmarks_deps[] = {
		SPA_FEATURE_EXTENSIBLE_DATASET,
		SPA_FEATURE_NONE
	};

	zfeature_register(SPA_FEATURE_BOOKMARKS,
	    "com.delphix:bookmarks", "bookmarks",
	    "\"zfs bookmark\" command",
	    ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN,
	    bookmarks_deps, sfeatures);
	}

	{
	static const spa_feature_t filesystem_limits_deps[] = {
		SPA_FEATURE_EXTENSIBLE_DATASET,
		SPA_FEATURE_NONE
	};
	zfeature_register(SPA_FEATURE_FS_SS_LIMIT,
	    "com.joyent:filesystem_limits", "filesystem_limits",
	    "Filesystem and snapshot limits.",
	    ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN,
	    filesystem_limits_deps, sfeatures);
	}

	zfeature_register(SPA_FEATURE_EMBEDDED_DATA,
	    "com.delphix:embedded_data", "embedded_data",
	    "Blocks which compress very well use even less space.",
	    ZFEATURE_FLAG_MOS | ZFEATURE_FLAG_ACTIVATE_ON_ENABLE,
	    ZFEATURE_TYPE_BOOLEAN, NULL, sfeatures);

	{
	static const spa_feature_t livelist_deps[] = {
		SPA_FEATURE_EXTENSIBLE_DATASET,
		SPA_FEATURE_NONE
	};
	zfeature_register(SPA_FEATURE_LIVELIST,
	    "com.delphix:livelist", "livelist",
	    "Improved clone deletion performance.",
	    ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN,
	    livelist_deps, sfeatures);
	}
|
|
|
|
|
Log Spacemap Project
= Motivation
At Delphix we've seen a lot of customer systems where fragmentation
is over 75% and random writes take a performance hit because a lot
of time is spend on I/Os that update on-disk space accounting metadata.
Specifically, we seen cases where 20% to 40% of sync time is spend
after sync pass 1 and ~30% of the I/Os on the system is spent updating
spacemaps.
The problem is that these pools have existed long enough that we've
touched almost every metaslab at least once, and random writes
scatter frees across all metaslabs every TXG, thus appending to
their spacemaps and resulting in many I/Os. To give an example,
assuming that every VDEV has 200 metaslabs and our writes fit within
a single spacemap block (generally 4K) we have 200 I/Os. Then if we
assume 2 levels of indirection, we need 400 additional I/Os and
since we are talking about metadata for which we keep 2 extra copies
for redundancy we need to triple that number, leading to a total of
1800 I/Os per VDEV every TXG.
We could try and decrease the number of metaslabs so we have less
I/Os per TXG but then each metaslab would cover a wider range on
disk and thus would take more time to be loaded in memory from disk.
In addition, after it's loaded, it's range tree would consume more
memory.
Another idea would be to just increase the spacemap block size
which would allow us to fit more entries within an I/O block
resulting in fewer I/Os per metaslab and a speedup in loading time.
The problem is still that we don't deal with the number of I/Os
going up as the number of metaslabs is increasing and the fact
is that we generally write a lot to a few metaslabs and a little
to the rest of them. Thus, just increasing the block size would
actually waste bandwidth because we won't be utilizing our bigger
block size.
= About this patch
This patch introduces the Log Spacemap project which provides the
solution to the above problem while taking into account all the
aforementioned tradeoffs. The details on how it achieves that can
be found in the references sections below and in the code (see
Big Theory Statement in spa_log_spacemap.c).
Even though the change is fairly constraint within the metaslab
and lower-level SPA codepaths, there is a side-change that is
user-facing. The change is that VDEV IDs from VDEV holes will no
longer be reused. To give some background and reasoning for this,
when a log device is removed and its VDEV structure was replaced
with a hole (or was compacted; if at the end of the vdev array),
its vdev_id could be reused by devices added after that. Now
with the pool-wide space maps recording the vdev ID, this behavior
can cause problems (e.g. is this entry referring to a segment in
the new vdev or the removed log?). Thus, to simplify things the
ID reuse behavior is gone and now vdev IDs for top-level vdevs
are truly unique within a pool.
= Testing
The illumos implementation of this feature has been used internally
for a year and has been in production for ~6 months. For this patch
specifically there don't seem to be any regressions introduced to
ZTS and I have been running zloop for a week without any related
problems.
= Performance Analysis (Linux Specific)
All performance results and analysis for illumos can be found in
the links of the references. Redoing the same experiments in Linux
gave similar results. Below are the specifics of the Linux run.
After the pool reached stable state the percentage of the time
spent in pass 1 per TXG was 64% on average for the stock bits
while the log spacemap bits stayed at 95% during the experiment
(graph: sdimitro.github.io/img/linux-lsm/PercOfSyncInPassOne.png).
Sync times per TXG were 37.6 seconds on average for the stock
bits and 22.7 seconds for the log spacemap bits (related graph:
sdimitro.github.io/img/linux-lsm/SyncTimePerTXG.png). As a result
the log spacemap bits were able to push more TXGs, which is also
the reason why all graphs quantified per TXG have more entries for
the log spacemap bits.
Another interesting aspect in terms of txg syncs is that the stock
bits had 22% of their TXGs reach sync pass 7, 55% reach sync pass 8,
and 20% reach 9. The log space map bits reached sync pass 4 in 79%
of their TXGs, sync pass 7 in 19%, and sync pass 8 at 1%. This
emphasizes the fact that not only we spend less time on metadata
but we also iterate less times to convergence in spa_sync() dirtying
objects.
[related graphs:
stock- sdimitro.github.io/img/linux-lsm/NumberOfPassesPerTXGStock.png
lsm- sdimitro.github.io/img/linux-lsm/NumberOfPassesPerTXGLSM.png]
Finally, the improvement in IOPs that the userland gains from the
change is approximately 40%. There is a consistent win in IOPS as
you can see from the graphs below but the absolute amount of
improvement that the log spacemap gives varies within each minute
interval.
sdimitro.github.io/img/linux-lsm/StockVsLog3Days.png
sdimitro.github.io/img/linux-lsm/StockVsLog10Hours.png
= Porting to Other Platforms
For people that want to port this commit to other platforms below
is a list of ZoL commits that this patch depends on:
Make zdb results for checkpoint tests consistent
db587941c5ff6dea01932bb78f70db63cf7f38ba
Update vdev_is_spacemap_addressable() for new spacemap encoding
419ba5914552c6185afbe1dd17b3ed4b0d526547
Simplify spa_sync by breaking it up to smaller functions
8dc2197b7b1e4d7ebc1420ea30e51c6541f1d834
Factor metaslab_load_wait() in metaslab_load()
b194fab0fb6caad18711abccaff3c69ad8b3f6d3
Rename range_tree_verify to range_tree_verify_not_present
df72b8bebe0ebac0b20e0750984bad182cb6564a
Change target size of metaslabs from 256GB to 16GB
c853f382db731e15a87512f4ef1101d14d778a55
zdb -L should skip leak detection altogether
21e7cf5da89f55ce98ec1115726b150e19eefe89
vs_alloc can underflow in L2ARC vdevs
7558997d2f808368867ca7e5234e5793446e8f3f
Simplify log vdev removal code
6c926f426a26ffb6d7d8e563e33fc176164175cb
Get rid of space_map_update() for ms_synced_length
425d3237ee88abc53d8522a7139c926d278b4b7f
Introduce auxiliary metaslab histograms
928e8ad47d3478a3d5d01f0dd6ae74a9371af65e
Error path in metaslab_load_impl() forgets to drop ms_sync_lock
8eef997679ba54547f7d361553d21b3291f41ae7
= References
Background, Motivation, and Internals of the Feature
- OpenZFS 2017 Presentation:
youtu.be/jj2IxRkl5bQ
- Slides:
slideshare.net/SerapheimNikolaosDim/zfs-log-spacemaps-project
Flushing Algorithm Internals & Performance Results
(Illumos Specific)
- Blogpost:
sdimitro.github.io/post/zfs-lsm-flushing/
- OpenZFS 2018 Presentation:
youtu.be/x6D2dHRjkxw
- Slides:
slideshare.net/SerapheimNikolaosDim/zfs-log-spacemap-flushing-algorithm
Upstream Delphix Issues:
DLPX-51539, DLPX-59659, DLPX-57783, DLPX-61438, DLPX-41227, DLPX-59320
DLPX-63385
Reviewed-by: Sean Eric Fagan <sef@ixsystems.com>
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: George Wilson <gwilson@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Closes #8442
2019-07-16 17:11:49 +00:00
|
|
|
{
|
zcommon: pre-iterate over sysfs instead of statting every feature
If sufficient memory (<2K, realistically) is available, libzfs_init()
can be significantly shortened: by iterating over the correct sysfs
directory before registrations, we can turn 168 stats into 15/18
syscalls (3 opens (6 if built in), 3 fstats, 6 getdentses, and 3
closes), a tenfoldish reduction; this is probably a bit faster, too.
The list is always optional, and registration functions (and one-off
users) can simply pass NULL, which will fall back to the previous
mechanism.
Also, don't allocate in zfs_mod_supported_impl, and use access()
instead of stat(), since existence is really what we care about.
Also, fix pre-prop-checking compat in fallback for built-in ZFS.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12089
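To illustrate the idea (this is a minimal standalone sketch, not the
actual libzfs code), the technique is: snapshot a sysfs directory with
one opendir/readdir pass and answer later "is this feature listed?"
questions from the cached names. The sysfs path and all function names
below are assumptions made for the example.

#include <dirent.h>
#include <stdio.h>
#include <string.h>

#define	MAX_FEATURES	512

/* One opendir/readdir/closedir pass instead of one stat() per feature. */
static int
load_feature_names(const char *dir, char names[][64], int max)
{
	DIR *dp = opendir(dir);
	struct dirent *de;
	int n = 0;

	if (dp == NULL)
		return (-1);	/* caller falls back to per-feature checks */
	while (n < max && (de = readdir(dp)) != NULL) {
		if (de->d_name[0] != '.')
			(void) snprintf(names[n++], 64, "%s", de->d_name);
	}
	(void) closedir(dp);
	return (n);
}

/* Membership test against the cached list: no additional syscalls. */
static int
feature_listed(const char names[][64], int n, const char *feat)
{
	for (int i = 0; i < n; i++) {
		if (strcmp(names[i], feat) == 0)
			return (1);
	}
	return (0);
}

int
main(void)
{
	static char names[MAX_FEATURES][64];
	/* The sysfs path below is an assumption for the example. */
	int n = load_feature_names("/sys/module/zfs/features.pool",
	    names, MAX_FEATURES);

	if (n >= 0)
		(void) printf("async_destroy listed: %d\n",
		    feature_listed(names, n, "async_destroy"));
	else
		(void) printf("no sysfs list; would fall back to stat()\n");
	return (0);
}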
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t log_spacemap_deps[] = {
|
|
|
|
SPA_FEATURE_SPACEMAP_V2,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_LOG_SPACEMAP,
|
|
|
|
"com.delphix:log_spacemap", "log_spacemap",
|
|
|
|
"Log metaslab changes on a single spacemap and "
|
|
|
|
"flush them periodically.",
|
|
|
|
ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN,
|
|
|
|
log_spacemap_deps, sfeatures);
|
Log Spacemap Project
= Motivation
At Delphix we've seen a lot of customer systems where fragmentation
is over 75% and random writes take a performance hit because a lot
of time is spent on I/Os that update on-disk space accounting metadata.
Specifically, we've seen cases where 20% to 40% of sync time is spent
after sync pass 1 and ~30% of the I/Os on the system are spent updating
spacemaps.
The problem is that these pools have existed long enough that we've
touched almost every metaslab at least once, and random writes
scatter frees across all metaslabs every TXG, thus appending to
their spacemaps and resulting in many I/Os. To give an example,
assuming that every VDEV has 200 metaslabs and our writes fit within
a single spacemap block (generally 4K), we have 200 I/Os. Then if we
assume 2 levels of indirection, we need 400 additional I/Os, and
since we are talking about metadata for which we keep 2 extra copies
for redundancy, we need to triple that number, leading to a total of
1800 I/Os per VDEV every TXG.
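The arithmetic above, restated as a tiny standalone C sketch (the
figures are the assumed example values from the text, not measured
results):

#include <stdio.h>

int
main(void)
{
	int metaslabs = 200;	/* metaslabs per top-level VDEV */
	int indirection = 2;	/* indirect levels above each spacemap block */
	int copies = 3;		/* metadata is kept with 2 extra copies */

	int leaf_ios = metaslabs;			/* 200 */
	int indirect_ios = metaslabs * indirection;	/* 400 */
	int total = (leaf_ios + indirect_ios) * copies;	/* 1800 */

	(void) printf("spacemap I/Os per VDEV per TXG: %d\n", total);
	return (0);
}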
We could try to decrease the number of metaslabs so we have fewer
I/Os per TXG, but then each metaslab would cover a wider range on
disk and thus would take more time to be loaded into memory from
disk. In addition, after it's loaded, its range tree would consume
more memory.
Another idea would be to just increase the spacemap block size,
which would allow us to fit more entries within an I/O block,
resulting in fewer I/Os per metaslab and a speedup in loading time.
The problem is that this still doesn't deal with the number of I/Os
going up as the number of metaslabs increases, and the fact
is that we generally write a lot to a few metaslabs and a little
to the rest of them. Thus, just increasing the block size would
actually waste bandwidth, because we wouldn't be utilizing the
bigger block size.
= About this patch
This patch introduces the Log Spacemap project which provides the
solution to the above problem while taking into account all the
aforementioned tradeoffs. The details on how it achieves that can
be found in the references sections below and in the code (see
Big Theory Statement in spa_log_spacemap.c).
Even though the change is fairly constrained within the metaslab
and lower-level SPA codepaths, there is one side change that is
user-facing: VDEV IDs from VDEV holes will no longer be reused.
To give some background and reasoning for this, when a log device
is removed its VDEV structure is replaced with a hole (or compacted
away, if it is at the end of the vdev array), and its vdev_id could
then be reused by devices added later. Now that the pool-wide space
maps record the vdev ID, this behavior can cause problems (e.g. is
this entry referring to a segment in the new vdev or the removed
log?). Thus, to simplify things, the ID reuse behavior is gone and
vdev IDs for top-level vdevs are now truly unique within a pool.
= Testing
The illumos implementation of this feature has been used internally
for a year and has been in production for ~6 months. For this patch
specifically there don't seem to be any regressions introduced to
ZTS and I have been running zloop for a week without any related
problems.
= Performance Analysis (Linux Specific)
All performance results and analysis for illumos can be found in
the links of the references. Redoing the same experiments in Linux
gave similar results. Below are the specifics of the Linux run.
After the pool reached a stable state, the percentage of time
spent in pass 1 per TXG was 64% on average for the stock bits,
while the log spacemap bits stayed at 95% during the experiment
(graph: sdimitro.github.io/img/linux-lsm/PercOfSyncInPassOne.png).
Sync times per TXG were 37.6 seconds on average for the stock
bits and 22.7 seconds for the log spacemap bits (related graph:
sdimitro.github.io/img/linux-lsm/SyncTimePerTXG.png). As a result
the log spacemap bits were able to push more TXGs, which is also
the reason why all graphs quantified per TXG have more entries for
the log spacemap bits.
Another interesting aspect in terms of TXG syncs is that the stock
bits had 22% of their TXGs reach sync pass 7, 55% reach sync pass 8,
and 20% reach 9. The log space map bits reached sync pass 4 in 79%
of their TXGs, sync pass 7 in 19%, and sync pass 8 in 1%. This
emphasizes the fact that not only do we spend less time on metadata,
but we also iterate fewer times to convergence in spa_sync() while
dirtying objects.
[related graphs:
stock- sdimitro.github.io/img/linux-lsm/NumberOfPassesPerTXGStock.png
lsm- sdimitro.github.io/img/linux-lsm/NumberOfPassesPerTXGLSM.png]
Finally, the improvement in IOPS that userland gains from the
change is approximately 40%. There is a consistent win in IOPS, as
you can see from the graphs below, but the absolute amount of
improvement that the log spacemap gives varies within each minute
interval.
sdimitro.github.io/img/linux-lsm/StockVsLog3Days.png
sdimitro.github.io/img/linux-lsm/StockVsLog10Hours.png
= Porting to Other Platforms
For people who want to port this commit to other platforms, below
is a list of ZoL commits that this patch depends on:
Make zdb results for checkpoint tests consistent
db587941c5ff6dea01932bb78f70db63cf7f38ba
Update vdev_is_spacemap_addressable() for new spacemap encoding
419ba5914552c6185afbe1dd17b3ed4b0d526547
Simplify spa_sync by breaking it up to smaller functions
8dc2197b7b1e4d7ebc1420ea30e51c6541f1d834
Factor metaslab_load_wait() in metaslab_load()
b194fab0fb6caad18711abccaff3c69ad8b3f6d3
Rename range_tree_verify to range_tree_verify_not_present
df72b8bebe0ebac0b20e0750984bad182cb6564a
Change target size of metaslabs from 256GB to 16GB
c853f382db731e15a87512f4ef1101d14d778a55
zdb -L should skip leak detection altogether
21e7cf5da89f55ce98ec1115726b150e19eefe89
vs_alloc can underflow in L2ARC vdevs
7558997d2f808368867ca7e5234e5793446e8f3f
Simplify log vdev removal code
6c926f426a26ffb6d7d8e563e33fc176164175cb
Get rid of space_map_update() for ms_synced_length
425d3237ee88abc53d8522a7139c926d278b4b7f
Introduce auxiliary metaslab histograms
928e8ad47d3478a3d5d01f0dd6ae74a9371af65e
Error path in metaslab_load_impl() forgets to drop ms_sync_lock
8eef997679ba54547f7d361553d21b3291f41ae7
= References
Background, Motivation, and Internals of the Feature
- OpenZFS 2017 Presentation:
youtu.be/jj2IxRkl5bQ
- Slides:
slideshare.net/SerapheimNikolaosDim/zfs-log-spacemaps-project
Flushing Algorithm Internals & Performance Results
(Illumos Specific)
- Blogpost:
sdimitro.github.io/post/zfs-lsm-flushing/
- OpenZFS 2018 Presentation:
youtu.be/x6D2dHRjkxw
- Slides:
slideshare.net/SerapheimNikolaosDim/zfs-log-spacemap-flushing-algorithm
Upstream Delphix Issues:
DLPX-51539, DLPX-59659, DLPX-57783, DLPX-61438, DLPX-41227, DLPX-59320
DLPX-63385
Reviewed-by: Sean Eric Fagan <sef@ixsystems.com>
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: George Wilson <gwilson@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Closes #8442
2019-07-16 17:11:49 +00:00
|
|
|
}
|
|
|
|
|
2014-11-03 20:15:08 +00:00
|
|
|
{
|
zcommon: pre-iterate over sysfs instead of statting every feature
If sufficient memory (<2K, realistically) is available, libzfs_init()
can be significantly shortened: by iterating over the correct sysfs
directory before registrations, we can turn 168 stats into 15/18
syscalls (3 opens (6 if built in), 3 fstats, 6 getdentses, and 3
closes), a tenfoldish reduction; this is probably a bit faster, too.
The list is always optional, and registration functions (and one-off
users) can simply pass NULL, which will fall back to the previous
mechanism.
Also, don't allocate in zfs_mod_supported_impl, and use access()
instead of stat(), since existence is really what we care about.
Also, fix pre-prop-checking compat in fallback for built-in ZFS.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12089
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t large_blocks_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_LARGE_BLOCKS,
|
|
|
|
"org.open-zfs:large_blocks", "large_blocks",
|
|
|
|
"Support for blocks larger than 128KB.",
|
|
|
|
ZFEATURE_FLAG_PER_DATASET, ZFEATURE_TYPE_BOOLEAN,
|
|
|
|
large_blocks_deps, sfeatures);
|
2014-11-03 20:15:08 +00:00
|
|
|
}
|
Implement large_dnode pool feature
Justification
-------------
This feature adds support for variable length dnodes. Our motivation is
to eliminate the overhead associated with using spill blocks. Spill
blocks are used to store system attribute data (i.e. file metadata) that
does not fit in the dnode's bonus buffer. By allowing a larger bonus
buffer area the use of a spill block can be avoided. Spill blocks
potentially incur an additional read I/O for every dnode in a dnode
block. As a worst case example, reading 32 dnodes from a 16k dnode block
and all of the spill blocks could issue 33 separate reads. Now suppose
those dnodes have size 1024 and therefore don't need spill blocks. Then
the worst case number of blocks read is reduced from 33 to two: one
per dnode block. In practice spill blocks may tend to be co-located on
disk with the dnode blocks so the reduction in I/O would not be this
drastic. In a badly fragmented pool, however, the improvement could be
significant.
ZFS-on-Linux systems that make heavy use of extended attributes would
benefit from this feature. In particular, ZFS-on-Linux supports the
xattr=sa dataset property which allows file extended attribute data
to be stored in the dnode bonus buffer as an alternative to the
traditional directory-based format. Workloads such as SELinux and the
Lustre distributed filesystem often store enough xattr data to force
spill blocks when xattr=sa is in effect. Large dnodes may therefore
provide a performance benefit to such systems.
Other use cases that may benefit from this feature include files with
large ACLs and symbolic links with long target names. Furthermore,
this feature may be desirable on other platforms in case future
applications or features are developed that could make use of a
larger bonus buffer area.
Implementation
--------------
The size of a dnode may be a multiple of 512 bytes up to the size of
a dnode block (currently 16384 bytes). A dn_extra_slots field was
added to the current on-disk dnode_phys_t structure to describe the
size of the physical dnode on disk. The 8 bits for this field were
taken from the zero filled dn_pad2 field. The field represents how
many "extra" dnode_phys_t slots a dnode consumes in its dnode block.
This convention results in a value of 0 for 512 byte dnodes which
preserves on-disk format compatibility with older software.
Similarly, the in-memory dnode_t structure has a new dn_num_slots field
to represent the total number of dnode_phys_t slots consumed on disk.
Thus dn->dn_num_slots is 1 greater than the corresponding
dnp->dn_extra_slots. This difference in convention was adopted
because, unlike on-disk structures, backward compatibility is not a
concern for in-memory objects, so we used a more natural way to
represent size for a dnode_t.
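As a minimal sketch of the two conventions just described (simplified
stand-in structs, not the real dnode_phys_t/dnode_t definitions), a 1k
dnode stores dn_extra_slots = 1 on disk while the in-memory
dn_num_slots is 2:

#include <stdint.h>
#include <stdio.h>

#define	DNODE_SLOT_SIZE	512	/* bytes per dnode_phys_t slot */

struct example_dnode_phys {
	uint8_t	dn_extra_slots;	/* on-disk: 0 means a legacy 512-byte dnode */
};

struct example_dnode {
	uint8_t	dn_num_slots;	/* in-memory: total slots consumed on disk */
};

int
main(void)
{
	struct example_dnode_phys dnp = { .dn_extra_slots = 1 };	/* 1k dnode */
	struct example_dnode dn = { .dn_num_slots = dnp.dn_extra_slots + 1 };

	(void) printf("on-disk size %d bytes, slots consumed %d\n",
	    (dnp.dn_extra_slots + 1) * DNODE_SLOT_SIZE, dn.dn_num_slots);
	return (0);
}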
The default size for newly created dnodes is determined by the value of
a new "dnodesize" dataset property. By default the property is set to
"legacy" which is compatible with older software. Setting the property
to "auto" will allow the filesystem to choose the most suitable dnode
size. Currently this just sets the default dnode size to 1k, but future
code improvements could dynamically choose a size based on observed
workload patterns. Dnodes of varying sizes can coexist within the same
dataset and even within the same dnode block. For example, to enable
automatically-sized dnodes, run
# zfs set dnodesize=auto tank/fish
The user can also specify literal values for the dnodesize property.
These are currently limited to powers of two from 1k to 16k. The
power-of-2 limitation is only for simplicity of the user interface.
Internally the implementation can handle any multiple of 512 up to 16k,
and consumers of the DMU API can specify any legal dnode size.
The size of a new dnode is determined at object allocation time and
stored as a new field in the znode in-memory structure. New DMU
interfaces are added to allow the consumer to specify the dnode size
that a newly allocated object should use. Existing interfaces are
unchanged to avoid having to update every call site and to preserve
compatibility with external consumers such as Lustre. The new
interface names are given below. The versions of these functions that
don't take a dnodesize parameter now just call the _dnsize() versions
with a dnodesize of 0, which means use the legacy dnode size.
New DMU interfaces:
dmu_object_alloc_dnsize()
dmu_object_claim_dnsize()
dmu_object_reclaim_dnsize()
New ZAP interfaces:
zap_create_dnsize()
zap_create_norm_dnsize()
zap_create_flags_dnsize()
zap_create_claim_norm_dnsize()
zap_create_link_dnsize()
The constant DN_MAX_BONUSLEN is renamed to DN_OLD_MAX_BONUSLEN. The
spa_maxdnodesize() function should be used to determine the maximum
bonus length for a pool.
These are a few noteworthy changes to key functions:
* The prototype for dnode_hold_impl() now takes a "slots" parameter.
When the DNODE_MUST_BE_FREE flag is set, this parameter is used to
ensure the hole at the specified object offset is large enough to
hold the dnode being created. The slots parameter is also used
to ensure a dnode does not span multiple dnode blocks. In both of
these cases, if a failure occurs, ENOSPC is returned. Keep in mind,
these failure cases are only possible when using DNODE_MUST_BE_FREE.
If the DNODE_MUST_BE_ALLOCATED flag is set, "slots" must be 0.
dnode_hold_impl() will check if the requested dnode is already
consumed as an extra dnode slot by a large dnode, in which case
it returns ENOENT.
* The function dmu_object_alloc() advances to the next dnode block
if dnode_hold_impl() returns an error for a requested object.
This is because the beginning of the next dnode block is the only
location it can safely assume to either be a hole or a valid
starting point for a dnode.
* dnode_next_offset_level() and other functions that iterate
through dnode blocks may no longer use a simple array indexing
scheme. These now use the current dnode's dn_num_slots field to
advance to the next dnode in the block. This is to ensure we
properly skip the current dnode's bonus area and don't interpret it
as a valid dnode.
zdb
---
The zdb command was updated to display a dnode's size under the
"dnsize" column when the object is dumped.
For ZIL create log records, zdb will now display the slot count for
the object.
ztest
-----
Ztest chooses a random dnodesize for every newly created object. The
random distribution is more heavily weighted toward small dnodes to
better simulate real-world datasets.
Unused bonus buffer space is filled with non-zero values computed from
the object number, dataset id, offset, and generation number. This
helps ensure that the dnode traversal code properly skips the interior
regions of large dnodes, and that these interior regions are not
overwritten by data belonging to other dnodes. A new test visits each
object in a dataset. It verifies that the actual dnode size matches what
was stored in the ztest block tag when it was created. It also verifies
that the unused bonus buffer space is filled with the expected data
patterns.
ZFS Test Suite
--------------
Added six new large dnode-specific tests, and integrated the dnodesize
property into existing tests for zfs allow and send/recv.
Send/Receive
------------
ZFS send streams for datasets containing large dnodes cannot be received
on pools that don't support the large_dnode feature. A send stream with
large dnodes sets a DMU_BACKUP_FEATURE_LARGE_DNODE flag which will be
unrecognized by an incompatible receiving pool so that the zfs receive
will fail gracefully.
While not implemented here, it may be possible to generate a
backward-compatible send stream from a dataset containing large
dnodes. The implementation may be tricky, however, because the send
object record for a large dnode would need to be resized to a 512
byte dnode, possibly kicking in a spill block in the process. This
means we would need to construct a new SA layout and possibly
register it in the SA layout object. The SA layout is normally just
sent as an ordinary object record. But if we are constructing new
layouts while generating the send stream we'd have to build the SA
layout object dynamically and send it at the end of the stream.
For sending and receiving between pools that do support large dnodes,
the drr_object send record type is extended with a new field to store
the dnode slot count. This field was repurposed from unused padding
in the structure.
ZIL Replay
----------
The dnode slot count is stored in the uppermost 8 bits of the lr_foid
field. The bits were unused as the object id is currently capped at
48 bits.
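A minimal sketch of that packing scheme, assuming the object id
occupies the low 48 bits; the macro and function names here are
illustrative and not the actual ZIL definitions:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define	EX_OBJID_BITS	48
#define	EX_SLOT_SHIFT	56
#define	EX_OBJID_MASK	((1ULL << EX_OBJID_BITS) - 1)

/* Store the dnode slot count in the uppermost 8 bits of the object id. */
static uint64_t
example_pack_foid(uint64_t objid, uint64_t slots)
{
	return ((slots << EX_SLOT_SHIFT) | (objid & EX_OBJID_MASK));
}

int
main(void)
{
	uint64_t foid = example_pack_foid(123456789ULL, 2);	/* 1k dnode: 2 slots */

	(void) printf("objid=%" PRIu64 " slots=%" PRIu64 "\n",
	    foid & EX_OBJID_MASK, foid >> EX_SLOT_SHIFT);
	return (0);
}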
Resizing Dnodes
---------------
It should be possible to resize a dnode when it is dirtied if the
current dnodesize dataset property differs from the dnode's size, but
this functionality is not currently implemented. Clearly a dnode can
only grow if there are sufficient contiguous unused slots in the
dnode block, but it should always be possible to shrink a dnode.
Growing dnodes may be useful to reduce fragmentation in a pool with
many spill blocks in use. Shrinking dnodes may be useful to allow
sending a dataset to a pool that doesn't support the large_dnode
feature.
Feature Reference Counting
--------------------------
The reference count for the large_dnode pool feature tracks the
number of datasets that have ever contained a dnode of size larger
than 512 bytes. The first time a large dnode is created in a dataset
the dataset is converted to an extensible dataset. This is a one-way
operation and the only way to decrement the feature count is to
destroy the dataset, even if the dataset no longer contains any large
dnodes. The complexity of reference counting on a per-dnode basis was
too high, so we chose to track it on a per-dataset basis similarly to
the large_block feature.
Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3542
2016-03-17 01:25:34 +00:00
|
|
|
|
|
|
|
{
|
zcommon: pre-iterate over sysfs instead of statting every feature
If sufficient memory (<2K, realistically) is available, libzfs_init()
can be significantly shortened: by iterating over the correct sysfs
directory before registrations, we can turn 168 stats into 15/18
syscalls (3 opens (6 if built in), 3 fstats, 6 getdentses, and 3
closes), a tenfoldish reduction; this is probably a bit faster, too.
The list is always optional, and registration functions (and one-off
users) can simply pass NULL, which will fall back to the previous
mechanism.
Also, don't allocate in zfs_mod_supported_impl, and use access()
instead of stat(), since existence is really what we care about.
Also, fix pre-prop-checking compat in fallback for built-in ZFS.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12089
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t large_dnode_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_LARGE_DNODE,
|
|
|
|
"org.zfsonlinux:large_dnode", "large_dnode",
|
|
|
|
"Variable on-disk size of dnodes.",
|
|
|
|
ZFEATURE_FLAG_PER_DATASET, ZFEATURE_TYPE_BOOLEAN,
|
|
|
|
large_dnode_deps, sfeatures);
|
Implement large_dnode pool feature
Justification
-------------
This feature adds support for variable length dnodes. Our motivation is
to eliminate the overhead associated with using spill blocks. Spill
blocks are used to store system attribute data (i.e. file metadata) that
does not fit in the dnode's bonus buffer. By allowing a larger bonus
buffer area the use of a spill block can be avoided. Spill blocks
potentially incur an additional read I/O for every dnode in a dnode
block. As a worst case example, reading 32 dnodes from a 16k dnode block
and all of the spill blocks could issue 33 separate reads. Now suppose
those dnodes have size 1024 and therefore don't need spill blocks. Then
the worst case number of blocks read is reduced from 33 to two: one
per dnode block. In practice spill blocks may tend to be co-located on
disk with the dnode blocks so the reduction in I/O would not be this
drastic. In a badly fragmented pool, however, the improvement could be
significant.
ZFS-on-Linux systems that make heavy use of extended attributes would
benefit from this feature. In particular, ZFS-on-Linux supports the
xattr=sa dataset property which allows file extended attribute data
to be stored in the dnode bonus buffer as an alternative to the
traditional directory-based format. Workloads such as SELinux and the
Lustre distributed filesystem often store enough xattr data to force
spill blocks when xattr=sa is in effect. Large dnodes may therefore
provide a performance benefit to such systems.
Other use cases that may benefit from this feature include files with
large ACLs and symbolic links with long target names. Furthermore,
this feature may be desirable on other platforms in case future
applications or features are developed that could make use of a
larger bonus buffer area.
Implementation
--------------
The size of a dnode may be a multiple of 512 bytes up to the size of
a dnode block (currently 16384 bytes). A dn_extra_slots field was
added to the current on-disk dnode_phys_t structure to describe the
size of the physical dnode on disk. The 8 bits for this field were
taken from the zero filled dn_pad2 field. The field represents how
many "extra" dnode_phys_t slots a dnode consumes in its dnode block.
This convention results in a value of 0 for 512 byte dnodes which
preserves on-disk format compatibility with older software.
Similarly, the in-memory dnode_t structure has a new dn_num_slots field
to represent the total number of dnode_phys_t slots consumed on disk.
Thus dn->dn_num_slots is 1 greater than the corresponding
dnp->dn_extra_slots. This difference in convention was adopted
because, unlike on-disk structures, backward compatibility is not a
concern for in-memory objects, so we used a more natural way to
represent size for a dnode_t.
The default size for newly created dnodes is determined by the value of
a new "dnodesize" dataset property. By default the property is set to
"legacy" which is compatible with older software. Setting the property
to "auto" will allow the filesystem to choose the most suitable dnode
size. Currently this just sets the default dnode size to 1k, but future
code improvements could dynamically choose a size based on observed
workload patterns. Dnodes of varying sizes can coexist within the same
dataset and even within the same dnode block. For example, to enable
automatically-sized dnodes, run
# zfs set dnodesize=auto tank/fish
The user can also specify literal values for the dnodesize property.
These are currently limited to powers of two from 1k to 16k. The
power-of-2 limitation is only for simplicity of the user interface.
Internally the implementation can handle any multiple of 512 up to 16k,
and consumers of the DMU API can specify any legal dnode size.
The size of a new dnode is determined at object allocation time and
stored as a new field in the znode in-memory structure. New DMU
interfaces are added to allow the consumer to specify the dnode size
that a newly allocated object should use. Existing interfaces are
unchanged to avoid having to update every call site and to preserve
compatibility with external consumers such as Lustre. The new
interface names are given below. The versions of these functions that
don't take a dnodesize parameter now just call the _dnsize() versions
with a dnodesize of 0, which means use the legacy dnode size.
New DMU interfaces:
dmu_object_alloc_dnsize()
dmu_object_claim_dnsize()
dmu_object_reclaim_dnsize()
New ZAP interfaces:
zap_create_dnsize()
zap_create_norm_dnsize()
zap_create_flags_dnsize()
zap_create_claim_norm_dnsize()
zap_create_link_dnsize()
The constant DN_MAX_BONUSLEN is renamed to DN_OLD_MAX_BONUSLEN. The
spa_maxdnodesize() function should be used to determine the maximum
bonus length for a pool.
These are a few noteworthy changes to key functions:
* The prototype for dnode_hold_impl() now takes a "slots" parameter.
When the DNODE_MUST_BE_FREE flag is set, this parameter is used to
ensure the hole at the specified object offset is large enough to
hold the dnode being created. The slots parameter is also used
to ensure a dnode does not span multiple dnode blocks. In both of
these cases, if a failure occurs, ENOSPC is returned. Keep in mind,
these failure cases are only possible when using DNODE_MUST_BE_FREE.
If the DNODE_MUST_BE_ALLOCATED flag is set, "slots" must be 0.
dnode_hold_impl() will check if the requested dnode is already
consumed as an extra dnode slot by a large dnode, in which case
it returns ENOENT.
* The function dmu_object_alloc() advances to the next dnode block
if dnode_hold_impl() returns an error for a requested object.
This is because the beginning of the next dnode block is the only
location it can safely assume to either be a hole or a valid
starting point for a dnode.
* dnode_next_offset_level() and other functions that iterate
through dnode blocks may no longer use a simple array indexing
scheme. These now use the current dnode's dn_num_slots field to
advance to the next dnode in the block. This is to ensure we
properly skip the current dnode's bonus area and don't interpret it
as a valid dnode.
zdb
---
The zdb command was updated to display a dnode's size under the
"dnsize" column when the object is dumped.
For ZIL create log records, zdb will now display the slot count for
the object.
ztest
-----
Ztest chooses a random dnodesize for every newly created object. The
random distribution is more heavily weighted toward small dnodes to
better simulate real-world datasets.
Unused bonus buffer space is filled with non-zero values computed from
the object number, dataset id, offset, and generation number. This
helps ensure that the dnode traversal code properly skips the interior
regions of large dnodes, and that these interior regions are not
overwritten by data belonging to other dnodes. A new test visits each
object in a dataset. It verifies that the actual dnode size matches what
was stored in the ztest block tag when it was created. It also verifies
that the unused bonus buffer space is filled with the expected data
patterns.
ZFS Test Suite
--------------
Added six new large dnode-specific tests, and integrated the dnodesize
property into existing tests for zfs allow and send/recv.
Send/Receive
------------
ZFS send streams for datasets containing large dnodes cannot be received
on pools that don't support the large_dnode feature. A send stream with
large dnodes sets a DMU_BACKUP_FEATURE_LARGE_DNODE flag which will be
unrecognized by an incompatible receiving pool so that the zfs receive
will fail gracefully.
While not implemented here, it may be possible to generate a
backward-compatible send stream from a dataset containing large
dnodes. The implementation may be tricky, however, because the send
object record for a large dnode would need to be resized to a 512
byte dnode, possibly kicking in a spill block in the process. This
means we would need to construct a new SA layout and possibly
register it in the SA layout object. The SA layout is normally just
sent as an ordinary object record. But if we are constructing new
layouts while generating the send stream we'd have to build the SA
layout object dynamically and send it at the end of the stream.
For sending and receiving between pools that do support large dnodes,
the drr_object send record type is extended with a new field to store
the dnode slot count. This field was repurposed from unused padding
in the structure.
ZIL Replay
----------
The dnode slot count is stored in the uppermost 8 bits of the lr_foid
field. The bits were unused as the object id is currently capped at
48 bits.
Resizing Dnodes
---------------
It should be possible to resize a dnode when it is dirtied if the
current dnodesize dataset property differs from the dnode's size, but
this functionality is not currently implemented. Clearly a dnode can
only grow if there are sufficient contiguous unused slots in the
dnode block, but it should always be possible to shrink a dnode.
Growing dnodes may be useful to reduce fragmentation in a pool with
many spill blocks in use. Shrinking dnodes may be useful to allow
sending a dataset to a pool that doesn't support the large_dnode
feature.
Feature Reference Counting
--------------------------
The reference count for the large_dnode pool feature tracks the
number of datasets that have ever contained a dnode of size larger
than 512 bytes. The first time a large dnode is created in a dataset
the dataset is converted to an extensible dataset. This is a one-way
operation and the only way to decrement the feature count is to
destroy the dataset, even if the dataset no longer contains any large
dnodes. The complexity of reference counting on a per-dnode basis was
too high, so we chose to track it on a per-dataset basis similarly to
the large_block feature.
Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3542
2016-03-17 01:25:34 +00:00
|
|
|
}
|
OpenZFS 6585 - sha512, skein, and edonr have an unenforced dependency on extensible dataset
Authored by: ilovezfs <ilovezfs@icloud.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Richard Laager <rlaager@wiktel.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Ported by: Tony Hutter <hutter2@llnl.gov>
In any pool without the extensible dataset feature flag already enabled,
creating a dataset with dedup set to use one of the new checksums would
result in the following panic as soon as any data was added:
panic[cpu0]/thread=ffffff0006761c40: feature_get_refcount(spa, feature,
&refcount) != 48 (0x30 != 0x30), file: ../../common/fs/zfs/zfeature.c
line 390
Inspection showed that feature->fi_feature was 7, which is the value of
SPA_FEATURE_EXTENSIBLE_DATASET in the spa_feature enum. This commit
adds extensible dataset as a dependency for the sha512, edonr, and skein
feature flags, which prevents the panic.
OpenZFS-issue: https://www.illumos.org/issues/6585
OpenZFS-commit: https://github.com/illumos/illumos-gate/commit/892586e8a147c02d7f4053cc405229a13e796928
Porting Notes:
This code was originally from Illumos, but I actually ported it from:
openzfsonosx/zfs@b62a652
2016-01-28 12:51:19 +00:00
|
|
|
|
|
|
|
{
|
zcommon: pre-iterate over sysfs instead of statting every feature
If sufficient memory (<2K, realistically) is available, libzfs_init()
can be significantly shortened: by iterating over the correct sysfs
directory before registrations, we can turn 168 stats into 15/18
syscalls (3 opens (6 if built in), 3 fstats, 6 getdentses, and 3
closes), a tenfoldish reduction; this is probably a bit faster, too.
The list is always optional, and registration functions (and one-off
users) can simply pass NULL, which will fall back to the previous
mechanism.
Also, don't allocate in zfs_mod_supported_impl, and use access()
instead of stat(), since existence is really what we care about.
Also, fix pre-prop-checking compat in fallback for built-in ZFS.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12089
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t sha512_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_SHA512,
|
|
|
|
"org.illumos:sha512", "sha512",
|
|
|
|
"SHA-512/256 hash algorithm.",
|
|
|
|
ZFEATURE_FLAG_PER_DATASET, ZFEATURE_TYPE_BOOLEAN,
|
|
|
|
sha512_deps, sfeatures);
|
OpenZFS 6585 - sha512, skein, and edonr have an unenforced dependency on extensible dataset
Authored by: ilovezfs <ilovezfs@icloud.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Richard Laager <rlaager@wiktel.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Ported by: Tony Hutter <hutter2@llnl.gov>
In any pool without the extensible dataset feature flag already enabled,
creating a dataset with dedup set to use one of the new checksums would
result in the following panic as soon as any data was added:
panic[cpu0]/thread=ffffff0006761c40: feature_get_refcount(spa, feature,
&refcount) != 48 (0x30 != 0x30), file: ../../common/fs/zfs/zfeature.c
line 390
Inspection showed that feature->fi_feature was 7, which is the value of
SPA_FEATURE_EXTENSIBLE_DATASET in the spa_feature enum. This commit
adds extensible dataset as a dependency for the sha512, edonr, and skein
feature flags, which prevents the panic.
OpenZFS-issue: https://www.illumos.org/issues/6585
OpenZFS-commit: https://github.com/illumos/illumos-gate/commit/892586e8a147c02d7f4053cc405229a13e796928
Porting Notes:
This code was originally from Illumos, but I actually ported it from:
openzfsonosx/zfs@b62a652
2016-01-28 12:51:19 +00:00
|
|
|
}
|
2018-10-16 18:15:04 +00:00
|
|
|
|
OpenZFS 6585 - sha512, skein, and edonr have an unenforced dependency on extensible dataset
Authored by: ilovezfs <ilovezfs@icloud.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Richard Laager <rlaager@wiktel.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Ported by: Tony Hutter <hutter2@llnl.gov>
In any pool without the extensible dataset feature flag already enabled,
creating a dataset with dedup set to use one of the new checksums would
result in the following panic as soon as any data was added:
panic[cpu0]/thread=ffffff0006761c40: feature_get_refcount(spa, feature,
&refcount) != 48 (0x30 != 0x30), file: ../../common/fs/zfs/zfeature.c
line 390
Inspection showed that feature->fi_feature was 7, which is the value of
SPA_FEATURE_EXTENSIBLE_DATASET in the spa_feature enum. This commit
adds extensible dataset as a dependency for the sha512, edonr, and skein
feature flags, which prevents the panic.
OpenZFS-issue: https://www.illumos.org/issues/6585
OpenZFS-commit: https://github.com/illumos/illumos-gate/commit/892586e8a147c02d7f4053cc405229a13e796928
Porting Notes:
This code was originally from Illumos, but I actually ported it from:
openzfsonosx/zfs@b62a652
2016-01-28 12:51:19 +00:00
|
|
|
{
|
zcommon: pre-iterate over sysfs instead of statting every feature
If sufficient memory (<2K, realistically) is available, libzfs_init()
can be significantly shortened: by iterating over the correct sysfs
directory before registrations, we can turn 168 stats into 15/18
syscalls (3 opens (6 if built in), 3 fstats, 6 getdentses, and 3
closes), a tenfoldish reduction; this is probably a bit faster, too.
The list is always optional, and registration functions (and one-off
users) can simply pass NULL, which will fall back to the previous
mechanism.
Also, don't allocate in zfs_mod_supported_impl, and use access()
instead of stat(), since existence is really what we care about.
Also, fix pre-prop-checking compat in fallback for built-in ZFS.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12089
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t skein_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_SKEIN,
|
|
|
|
"org.illumos:skein", "skein",
|
|
|
|
"Skein hash algorithm.",
|
|
|
|
ZFEATURE_FLAG_PER_DATASET, ZFEATURE_TYPE_BOOLEAN,
|
|
|
|
skein_deps, sfeatures);
|
OpenZFS 6585 - sha512, skein, and edonr have an unenforced dependency on extensible dataset
Authored by: ilovezfs <ilovezfs@icloud.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Richard Laager <rlaager@wiktel.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Ported by: Tony Hutter <hutter2@llnl.gov>
In any pool without the extensible dataset feature flag already enabled,
creating a dataset with dedup set to use one of the new checksums would
result in the following panic as soon as any data was added:
panic[cpu0]/thread=ffffff0006761c40: feature_get_refcount(spa, feature,
&refcount) != 48 (0x30 != 0x30), file: ../../common/fs/zfs/zfeature.c
line 390
Inspection showed that feature->fi_feature was 7, which is the value of
SPA_FEATURE_EXTENSIBLE_DATASET in the spa_feature enum. This commit
adds extensible dataset as a dependency for the sha512, edonr, and skein
feature flags, which prevents the panic.
OpenZFS-issue: https://www.illumos.org/issues/6585
OpenZFS-commit: https://github.com/illumos/illumos-gate/commit/892586e8a147c02d7f4053cc405229a13e796928
Porting Notes:
This code was originally from Illumos, but I actually ported it from:
openzfsonosx/zfs@b62a652
2016-01-28 12:51:19 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
{
|
zcommon: pre-iterate over sysfs instead of statting every feature
If sufficient memory (<2K, realistically) is available, libzfs_init()
can be significantly shortened: by iterating over the correct sysfs
directory before registrations, we can turn 168 stats into 15/18
syscalls (3 opens (6 if built in), 3 fstats, 6 getdentses, and 3
closes), a tenfoldish reduction; this is probably a bit faster, too.
The list is always optional, and registration functions (and one-off
users) can simply pass NULL, which will fall back to the previous
mechanism.
Also, don't allocate in zfs_mod_supported_impl, and use access()
instead of stat(), since existence is really what we care about.
Also, fix pre-prop-checking compat in fallback for built-in ZFS.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12089
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t edonr_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_EDONR,
|
|
|
|
"org.illumos:edonr", "edonr",
|
|
|
|
"Edon-R hash algorithm.",
|
|
|
|
ZFEATURE_FLAG_PER_DATASET, ZFEATURE_TYPE_BOOLEAN,
|
|
|
|
edonr_deps, sfeatures);
|
OpenZFS 6585 - sha512, skein, and edonr have an unenforced dependency on extensible dataset
Authored by: ilovezfs <ilovezfs@icloud.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Richard Laager <rlaager@wiktel.com>
Approved by: Robert Mustacchi <rm@joyent.com>
Ported by: Tony Hutter <hutter2@llnl.gov>
In any pool without the extensible dataset feature flag already enabled,
creating a dataset with dedup set to use one of the new checksums would
result in the following panic as soon as any data was added:
panic[cpu0]/thread=ffffff0006761c40: feature_get_refcount(spa, feature,
&refcount) != 48 (0x30 != 0x30), file: ../../common/fs/zfs/zfeature.c
line 390
Inspection showed that feature->fi_feature was 7, which is the value of
SPA_FEATURE_EXTENSIBLE_DATASET in the spa_feature enum. This commit
adds extensible dataset as a dependency for the sha512, edonr, and skein
feature flags, which prevents the panic.
OpenZFS-issue: https://www.illumos.org/issues/6585
OpenZFS-commit: https://github.com/illumos/illumos-gate/commit/892586e8a147c02d7f4053cc405229a13e796928
Porting Notes:
This code was originally from Illumos, but I actually ported it from:
openzfsonosx/zfs@b62a652
2016-01-28 12:51:19 +00:00
|
|
|
}
|
2018-10-16 18:15:04 +00:00
|
|
|
|
Implement Redacted Send/Receive
Redacted send/receive allows users to send subsets of their data to
a target system. One possible use case for this feature is to not
transmit sensitive information to a data warehousing, test/dev, or
analytics environment. Another is to save space by not replicating
unimportant data within a given dataset, for example in backup tools
like zrepl.
Redacted send/receive is a three-stage process. First, a clone (or
clones) is made of the snapshot to be sent to the target. In this
clone (or clones), all unnecessary or unwanted data is removed or
modified. This clone is then snapshotted to create the "redaction
snapshot" (or snapshots). Second, the new zfs redact command is used
to create a redaction bookmark. The redaction bookmark stores the
list of blocks in a snapshot that were modified by the redaction
snapshot(s). Finally, the redaction bookmark is passed as a parameter
to zfs send. When sending to the snapshot that was redacted, the
redaction bookmark is used to filter out blocks that contain sensitive
or unwanted information, and those blocks are not included in the send
stream. When sending from the redaction bookmark, the blocks it
contains are considered as candidate blocks in addition to those
blocks in the destination snapshot that were modified since the
creation_txg of the redaction bookmark. This step is necessary to
allow the target to rehydrate data in the case where some blocks are
accidentally or unnecessarily modified in the redaction snapshot.
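As a much-simplified illustration of that filtering step (not the
actual send code; the redaction bookmark conceptually yields a list of
redacted block offsets per object), a sketch that skips listed blocks
when deciding what goes into the stream might look like this:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static int
cmp_u64(const void *a, const void *b)
{
	uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;

	return ((x < y) ? -1 : (x > y));
}

/* Binary search: is this block offset in the (sorted) redaction list? */
static int
block_is_redacted(const uint64_t *list, size_t n, uint64_t blkoff)
{
	return (bsearch(&blkoff, list, n, sizeof (uint64_t), cmp_u64) != NULL);
}

int
main(void)
{
	uint64_t redacted[] = { 8192, 131072, 262144 };	/* example offsets */
	size_t n = sizeof (redacted) / sizeof (redacted[0]);

	for (uint64_t off = 0; off <= 262144; off += 131072)
		(void) printf("block at %llu: %s\n", (unsigned long long)off,
		    block_is_redacted(redacted, n, off) ?
		    "redacted (skip)" : "send");
	return (0);
}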
The changes to bookmarks to enable fast space estimation involve
adding deadlists to bookmarks. There is also logic to manage the
life cycles of these deadlists.
The new size estimation process operates in cases where previously
an accurate estimate could not be provided. In those cases, a send
is performed where no data blocks are read, reducing the runtime
significantly and providing a byte-accurate size estimate.
Reviewed-by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed-by: Matt Ahrens <mahrens@delphix.com>
Reviewed-by: Prashanth Sreenivasa <pks@delphix.com>
Reviewed-by: John Kennedy <john.kennedy@delphix.com>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Reviewed-by: Chris Williamson <chris.williamson@delphix.com>
Reviewed-by: Pavel Zhakarov <pavel.zakharov@delphix.com>
Reviewed-by: Sebastien Roy <sebastien.roy@delphix.com>
Reviewed-by: Prakash Surya <prakash.surya@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #7958
2019-06-19 16:48:13 +00:00
|
|
|
{
|
zcommon: pre-iterate over sysfs instead of statting every feature
If sufficient memory (<2K, realistically) is available, libzfs_init()
can be significantly shortened: by iterating over the correct sysfs
directory before registrations, we can turn 168 stats into 15/18
syscalls (3 opens (6 if built in), 3 fstats, 6 getdentses, and 3
closes), a tenfoldish reduction; this is probably a bit faster, too.
The list is always optional, and registration functions (and one-off
users) can simply pass NULL, which will fall back to the previous
mechanism.
Also, don't allocate in zfs_mod_supported_impl, and use access()
instead of stat(), since existence is really what we care about.
Also, fix pre-prop-checking compat in fallback for built-in ZFS.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12089
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t redact_books_deps[] = {
|
|
|
|
SPA_FEATURE_BOOKMARK_V2,
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_BOOKMARKS,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_REDACTION_BOOKMARKS,
|
|
|
|
"com.delphix:redaction_bookmarks", "redaction_bookmarks",
|
|
|
|
"Support for bookmarks which store redaction lists for zfs "
|
|
|
|
"redacted send/recv.", 0, ZFEATURE_TYPE_BOOLEAN,
|
|
|
|
redact_books_deps, sfeatures);
|
Implement Redacted Send/Receive
Redacted send/receive allows users to send subsets of their data to
a target system. One possible use case for this feature is to not
transmit sensitive information to a data warehousing, test/dev, or
analytics environment. Another is to save space by not replicating
unimportant data within a given dataset, for example in backup tools
like zrepl.
Redacted send/receive is a three-stage process. First, a clone (or
clones) is made of the snapshot to be sent to the target. In this
clone (or clones), all unnecessary or unwanted data is removed or
modified. This clone is then snapshotted to create the "redaction
snapshot" (or snapshots). Second, the new zfs redact command is used
to create a redaction bookmark. The redaction bookmark stores the
list of blocks in a snapshot that were modified by the redaction
snapshot(s). Finally, the redaction bookmark is passed as a parameter
to zfs send. When sending to the snapshot that was redacted, the
redaction bookmark is used to filter out blocks that contain sensitive
or unwanted information, and those blocks are not included in the send
stream. When sending from the redaction bookmark, the blocks it
contains are considered as candidate blocks in addition to those
blocks in the destination snapshot that were modified since the
creation_txg of the redaction bookmark. This step is necessary to
allow the target to rehydrate data in the case where some blocks are
accidentally or unnecessarily modified in the redaction snapshot.
The changes to bookmarks to enable fast space estimation involve
adding deadlists to bookmarks. There is also logic to manage the
life cycles of these deadlists.
The new size estimation process operates in cases where previously
an accurate estimate could not be provided. In those cases, a send
is performed where no data blocks are read, reducing the runtime
significantly and providing a byte-accurate size estimate.
Reviewed-by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed-by: Matt Ahrens <mahrens@delphix.com>
Reviewed-by: Prashanth Sreenivasa <pks@delphix.com>
Reviewed-by: John Kennedy <john.kennedy@delphix.com>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Reviewed-by: Chris Williamson <chris.williamson@delphix.com>
Reviewed-by: Pavel Zhakarov <pavel.zakharov@delphix.com>
Reviewed-by: Sebastien Roy <sebastien.roy@delphix.com>
Reviewed-by: Prakash Surya <prakash.surya@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #7958
2019-06-19 16:48:13 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
{
|
zcommon: pre-iterate over sysfs instead of statting every feature
If sufficient memory (<2K, realistically) is available, libzfs_init()
can be significantly shortened: by iterating over the correct sysfs
directory before registrations, we can turn 168 stats into 15/18
syscalls (3 opens (6 if built in), 3 fstats, 6 getdentses, and 3
closes), a tenfoldish reduction; this is probably a bit faster, too.
The list is always optional, and registration functions (and one-off
users) can simply pass NULL, which will fall back to the previous
mechanism.
Also, don't allocate in zfs_mod_supported_impl, and use access()
instead of stat(), since existence is really what we care about.
Also, fix pre-prop-checking compat in fallback for built-in ZFS.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12089
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t redact_datasets_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_REDACTED_DATASETS,
|
|
|
|
"com.delphix:redacted_datasets", "redacted_datasets",
|
|
|
|
"Support for redacted datasets, produced by receiving "
|
|
|
|
"a redacted zfs send stream.",
|
|
|
|
ZFEATURE_FLAG_PER_DATASET, ZFEATURE_TYPE_UINT64_ARRAY,
|
|
|
|
redact_datasets_deps, sfeatures);
|
Implement Redacted Send/Receive
Redacted send/receive allows users to send subsets of their data to
a target system. One possible use case for this feature is to not
transmit sensitive information to a data warehousing, test/dev, or
analytics environment. Another is to save space by not replicating
unimportant data within a given dataset, for example in backup tools
like zrepl.
Redacted send/receive is a three-stage process. First, a clone (or
clones) is made of the snapshot to be sent to the target. In this
clone (or clones), all unnecessary or unwanted data is removed or
modified. This clone is then snapshotted to create the "redaction
snapshot" (or snapshots). Second, the new zfs redact command is used
to create a redaction bookmark. The redaction bookmark stores the
list of blocks in a snapshot that were modified by the redaction
snapshot(s). Finally, the redaction bookmark is passed as a parameter
to zfs send. When sending to the snapshot that was redacted, the
redaction bookmark is used to filter out blocks that contain sensitive
or unwanted information, and those blocks are not included in the send
stream. When sending from the redaction bookmark, the blocks it
contains are considered as candidate blocks in addition to those
blocks in the destination snapshot that were modified since the
creation_txg of the redaction bookmark. This step is necessary to
allow the target to rehydrate data in the case where some blocks are
accidentally or unnecessarily modified in the redaction snapshot.
The changes to bookmarks to enable fast space estimation involve
adding deadlists to bookmarks. There is also logic to manage the
life cycles of these deadlists.
The new size estimation process operates in cases where previously
an accurate estimate could not be provided. In those cases, a send
is performed where no data blocks are read, reducing the runtime
significantly and providing a byte-accurate size estimate.
Reviewed-by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed-by: Matt Ahrens <mahrens@delphix.com>
Reviewed-by: Prashanth Sreenivasa <pks@delphix.com>
Reviewed-by: John Kennedy <john.kennedy@delphix.com>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Reviewed-by: Chris Williamson <chris.williamson@delphix.com>
Reviewed-by: Pavel Zhakarov <pavel.zakharov@delphix.com>
Reviewed-by: Sebastien Roy <sebastien.roy@delphix.com>
Reviewed-by: Prakash Surya <prakash.surya@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #7958
2019-06-19 16:48:13 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
{
|
zcommon: pre-iterate over sysfs instead of statting every feature
If sufficient memory (<2K, realistically) is available, libzfs_init()
can be significantly shortened: by iterating over the correct sysfs
directory before registrations, we can turn 168 stats into 15/18
syscalls (3 opens (6 if built in), 3 fstats, 6 getdentses, and 3
closes), a tenfoldish reduction; this is probably a bit faster, too.
The list is always optional, and registration functions (and one-off
users) can simply pass NULL, which will fall back to the previous
mechanism.
Also, don't allocate in zfs_mod_supported_impl, and use access()
instead of stat(), since existence is really what we care about.
Also, fix pre-prop-checking compat in fallback for built-in ZFS.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12089
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t bookmark_written_deps[] = {
|
|
|
|
SPA_FEATURE_BOOKMARK_V2,
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_BOOKMARKS,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_BOOKMARK_WRITTEN,
|
|
|
|
"com.delphix:bookmark_written", "bookmark_written",
|
|
|
|
"Additional accounting, enabling the written#<bookmark> "
|
|
|
|
"property (space written since a bookmark), "
|
|
|
|
"and estimates of send stream sizes for incrementals from "
|
|
|
|
"bookmarks.",
|
|
|
|
0, ZFEATURE_TYPE_BOOLEAN, bookmark_written_deps, sfeatures);
|
Implement Redacted Send/Receive
Redacted send/receive allows users to send subsets of their data to
a target system. One possible use case for this feature is to not
transmit sensitive information to a data warehousing, test/dev, or
analytics environment. Another is to save space by not replicating
unimportant data within a given dataset, for example in backup tools
like zrepl.
Redacted send/receive is a three-stage process. First, a clone (or
clones) is made of the snapshot to be sent to the target. In this
clone (or clones), all unnecessary or unwanted data is removed or
modified. This clone is then snapshotted to create the "redaction
snapshot" (or snapshots). Second, the new zfs redact command is used
to create a redaction bookmark. The redaction bookmark stores the
list of blocks in a snapshot that were modified by the redaction
snapshot(s). Finally, the redaction bookmark is passed as a parameter
to zfs send. When sending to the snapshot that was redacted, the
redaction bookmark is used to filter out blocks that contain sensitive
or unwanted information, and those blocks are not included in the send
stream. When sending from the redaction bookmark, the blocks it
contains are considered as candidate blocks in addition to those
blocks in the destination snapshot that were modified since the
creation_txg of the redaction bookmark. This step is necessary to
allow the target to rehydrate data in the case where some blocks are
accidentally or unnecessarily modified in the redaction snapshot.
The changes to bookmarks to enable fast space estimation involve
adding deadlists to bookmarks. There is also logic to manage the
life cycles of these deadlists.
The new size estimation process operates in cases where previously
an accurate estimate could not be provided. In those cases, a send
is performed where no data blocks are read, reducing the runtime
significantly and providing a byte-accurate size estimate.
Reviewed-by: Dan Kimmel <dan.kimmel@delphix.com>
Reviewed-by: Matt Ahrens <mahrens@delphix.com>
Reviewed-by: Prashanth Sreenivasa <pks@delphix.com>
Reviewed-by: John Kennedy <john.kennedy@delphix.com>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Reviewed-by: Chris Williamson <chris.williamson@delphix.com>
Reviewed-by: Pavel Zhakarov <pavel.zakharov@delphix.com>
Reviewed-by: Sebastien Roy <sebastien.roy@delphix.com>
Reviewed-by: Prakash Surya <prakash.surya@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #7958
2019-06-19 16:48:13 +00:00
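To make the two selection rules concrete, here is a toy sketch (hypothetical types and helpers, not the actual dmu_send/redaction-list code): when sending to the redacted snapshot, blocks named by the redaction bookmark are filtered out; when sending from the redaction bookmark, a block is a candidate if the bookmark names it or it was born after the bookmark's creation_txg.
```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins; the on-disk redaction list differs. */
typedef struct blk {
	uint64_t object;
	uint64_t offset;
	uint64_t birth_txg;
} blk_t;

typedef struct redaction_list {
	const blk_t *entries;	/* blocks modified by the redaction snaps */
	size_t nentries;
	uint64_t creation_txg;
} redaction_list_t;

static bool
redaction_list_contains(const redaction_list_t *rl, const blk_t *b)
{
	for (size_t i = 0; i < rl->nentries; i++) {
		if (rl->entries[i].object == b->object &&
		    rl->entries[i].offset == b->offset)
			return (true);
	}
	return (false);
}

/* Sending TO the redacted snapshot: drop the sensitive blocks. */
static bool
include_in_redacted_send(const redaction_list_t *rl, const blk_t *b)
{
	return (!redaction_list_contains(rl, b));
}

/* Sending FROM the redaction bookmark: the union described above,
 * so the target can rehydrate blocks the redaction clone touched. */
static bool
candidate_for_incremental(const redaction_list_t *rl, const blk_t *b)
{
	return (redaction_list_contains(rl, b) ||
	    b->birth_txg > rl->creation_txg);
}
```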
|
|
|
}
|
|
|
|
|
OpenZFS 7614, 9064 - zfs device evacuation/removal
OpenZFS 7614 - zfs device evacuation/removal
OpenZFS 9064 - remove_mirror should wait for device removal to complete
This project allows top-level vdevs to be removed from the storage pool
with "zpool remove", reducing the total amount of storage in the pool.
This operation copies all allocated regions of the device to be removed
onto other devices, recording the mapping from old to new location.
After the removal is complete, read and free operations to the removed
(now "indirect") vdev must be remapped and performed at the new location
on disk. The indirect mapping table is kept in memory whenever the pool
is loaded, so there is minimal performance overhead when doing operations
on the indirect vdev.
The size of the in-memory mapping table will be reduced when its entries
become "obsolete" because they are no longer used by any block pointers
in the pool. An entry becomes obsolete when all the blocks that use
it are freed. An entry can also become obsolete when all the snapshots
that reference it are deleted, and the block pointers that reference it
have been "remapped" in all filesystems/zvols (and clones). Whenever an
indirect block is written, all the block pointers in it will be "remapped"
to their new (concrete) locations if possible. This process can be
accelerated by using the "zfs remap" command to proactively rewrite all
indirect blocks that reference indirect (removed) vdevs.
Note that when a device is removed, we do not verify the checksum of
the data that is copied. This makes the process much faster, but if it
were used on redundant vdevs (i.e. mirror or raidz vdevs), it would be
possible to copy the wrong data, when we have the correct data on e.g.
the other side of the mirror.
At the moment, only mirrors and simple top-level vdevs can be removed
and no removal is allowed if any of the top-level vdevs are raidz.
Porting Notes:
* Avoid zero-sized kmem_alloc() in vdev_compact_children().
The device evacuation code adds a dependency that
vdev_compact_children() be able to properly empty the vdev_child
array by setting it to NULL and zeroing vdev_children. Under Linux,
kmem_alloc() and related functions return a sentinel pointer rather
than NULL for zero-sized allocations.
* Remove comment regarding "mpt" driver where zfs_remove_max_segment
is initialized to SPA_MAXBLOCKSIZE.
* Change zfs_condense_indirect_commit_entry_delay_ticks to
zfs_condense_indirect_commit_entry_delay_ms for consistency with
most other tunables in which delays are specified in ms.
* ZTS changes:
Use set_tunable rather than mdb
Use zpool sync as appropriate
Use sync_pool instead of sync
Kill jobs during test_removal_with_operation to allow unmount/export
Don't add non-disk names such as "mirror" or "raidz" to $DISKS
Use $TEST_BASE_DIR instead of /tmp
Increase HZ from 100 to 1000 which is more common on Linux
removal_multiple_indirection.ksh
Reduce iterations in order to not time out on the code
coverage builders.
removal_resume_export:
Functionally, the test case is correct but there exists a race
where the kernel thread hasn't been fully started yet and is
not visible. Wait for up to 1 second for the removal thread
to be started before giving up on it. Also, increase the
amount of data copied in order that the removal not finish
before the export has a chance to fail.
* MMP compatibility: the concept of concrete versus non-concrete devices
has slightly changed the semantics of vdev_writeable(). Update
mmp_random_leaf_impl() accordingly.
* Updated dbuf_remap() to handle the org.zfsonlinux:large_dnode pool
feature which is not supported by OpenZFS.
* Added support for new vdev removal tracepoints.
* Test cases removal_with_zdb and removal_condense_export have been
intentionally disabled. When run manually they pass as intended,
but when running in the automated test environment they produce
unreliable results on the latest Fedora release.
They may work better once the upstream pool import refactoring is
merged into ZoL at which point they will be re-enabled.
Authored by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Alex Reece <alex@delphix.com>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Reviewed-by: John Kennedy <john.kennedy@delphix.com>
Reviewed-by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Richard Laager <rlaager@wiktel.com>
Reviewed by: Tim Chase <tim@chase2k.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Garrett D'Amore <garrett@damore.org>
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://www.illumos.org/issues/7614
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/f539f1eb
Closes #6900
2016-09-22 16:30:13 +00:00
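A simplified sketch of how such a mapping can be consulted (hypothetical structure; the real indirect vdev mapping has its own on-disk format and condensing logic): each entry records a source range on the removed vdev and the concrete location it was copied to, entries are kept sorted by source offset, and a read is redirected with a binary search.
```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical in-memory entry: [src_offset, src_offset + size) on the
 * removed vdev was copied to dst_vdev at dst_offset during evacuation. */
typedef struct indirect_entry {
	uint64_t src_offset;
	uint64_t size;
	uint64_t dst_vdev;
	uint64_t dst_offset;
} indirect_entry_t;

/*
 * Remap an offset on the removed ("indirect") vdev to its new,
 * concrete location.  Entries are sorted and non-overlapping, so a
 * binary search keeps the per-I/O overhead small.
 */
static bool
indirect_remap(const indirect_entry_t *tbl, size_t n, uint64_t offset,
    uint64_t *dst_vdev, uint64_t *dst_offset)
{
	size_t lo = 0, hi = n;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;
		const indirect_entry_t *e = &tbl[mid];

		if (offset < e->src_offset) {
			hi = mid;
		} else if (offset >= e->src_offset + e->size) {
			lo = mid + 1;
		} else {
			*dst_vdev = e->dst_vdev;
			*dst_offset = e->dst_offset +
			    (offset - e->src_offset);
			return (true);
		}
	}
	return (false);	/* never allocated on the removed vdev */
}
```
Entries whose source ranges are no longer referenced by any block pointer are the "obsolete" ones tracked by the obsolete_counts feature registered below, and can be dropped when the table is condensed.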
|
|
|
zfeature_register(SPA_FEATURE_DEVICE_REMOVAL,
|
|
|
|
"com.delphix:device_removal", "device_removal",
|
|
|
|
"Top-level vdevs can be removed, reducing logical pool size.",
|
2021-12-17 00:43:10 +00:00
|
|
|
ZFEATURE_FLAG_MOS, ZFEATURE_TYPE_BOOLEAN, NULL, sfeatures);
|
2018-10-16 18:15:04 +00:00
|
|
|
|
2016-09-22 16:30:13 +00:00
|
|
|
{
|
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t obsolete_counts_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_DEVICE_REMOVAL,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_OBSOLETE_COUNTS,
|
|
|
|
"com.delphix:obsolete_counts", "obsolete_counts",
|
|
|
|
"Reduce memory used by removed devices when their blocks "
|
|
|
|
"are freed or remapped.",
|
|
|
|
ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN,
|
|
|
|
obsolete_counts_deps, sfeatures);
|
2016-09-22 16:30:13 +00:00
|
|
|
}
|
2018-10-16 18:15:04 +00:00
|
|
|
|
2016-10-04 18:46:10 +00:00
|
|
|
{
|
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t userobj_accounting_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_USEROBJ_ACCOUNTING,
|
|
|
|
"org.zfsonlinux:userobj_accounting", "userobj_accounting",
|
|
|
|
"User/Group object accounting.",
|
|
|
|
ZFEATURE_FLAG_READONLY_COMPAT | ZFEATURE_FLAG_PER_DATASET,
|
|
|
|
ZFEATURE_TYPE_BOOLEAN, userobj_accounting_deps, sfeatures);
|
2016-10-04 18:46:10 +00:00
|
|
|
}
|
Native Encryption for ZFS on Linux
This change incorporates three major pieces:
The first change is a keystore that manages wrapping
and encryption keys for encrypted datasets. These
commands mostly involve manipulating the new
DSL Crypto Key ZAP Objects that live in the MOS. Each
encrypted dataset has its own DSL Crypto Key that is
protected with a user's key. This level of indirection
allows users to change their keys without re-encrypting
their entire datasets. The change implements the new
subcommands "zfs load-key", "zfs unload-key" and
"zfs change-key" which allow the user to manage their
encryption keys and settings. In addition, several new
flags and properties have been added to allow dataset
creation and to make mounting and unmounting more
convenient.
The second piece of this patch provides the ability to
encrypt, decrypt, and authenticate protected datasets.
Each object set maintains a Merkle tree of Message
Authentication Codes that protect the lower layers,
similarly to how checksums are maintained. This part
impacts the zio layer, which handles the actual
encryption and generation of MACs, as well as the ARC
and DMU, which need to be able to handle encrypted
buffers and protected data.
The last addition is the ability to do raw, encrypted
sends and receives. The idea here is to send raw
encrypted and compressed data and receive it exactly
as is on a backup system. This means that the dataset
on the receiving system is protected using the same
user key that is in use on the sending side. By doing
so, datasets can be efficiently backed up to an
untrusted system without fear of data being
compromised.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Tom Caputi <tcaputi@datto.com>
Closes #494
Closes #5769
2017-08-14 17:36:48 +00:00
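The key indirection can be pictured with a deliberately toy sketch (XOR stands in for the real AES-based key wrapping, and nothing here matches the on-disk DSL Crypto Key layout): the per-dataset master key never changes, so "zfs change-key" only has to unwrap it with the old user key and re-wrap it with the new one, leaving the encrypted data untouched.
```c
#include <stdint.h>
#include <string.h>

#define	TOY_KEYLEN	32

/* Toy stand-in for a DSL Crypto Key object. */
typedef struct toy_crypto_key {
	uint8_t wrapped_master[TOY_KEYLEN]; /* master key, wrapped by user key */
} toy_crypto_key_t;

/* XOR "wrapping" purely for illustration; ZFS uses real AEAD wrapping. */
static void
toy_wrap(uint8_t *out, const uint8_t *in, const uint8_t *user_key)
{
	for (int i = 0; i < TOY_KEYLEN; i++)
		out[i] = in[i] ^ user_key[i];
}

/* "zfs change-key": re-wrap the master key; no data is re-encrypted. */
static void
toy_change_key(toy_crypto_key_t *dck, const uint8_t *old_user_key,
    const uint8_t *new_user_key)
{
	uint8_t master[TOY_KEYLEN];

	toy_wrap(master, dck->wrapped_master, old_user_key);	/* unwrap */
	toy_wrap(dck->wrapped_master, master, new_user_key);	/* re-wrap */
	memset(master, 0, sizeof (master));
}
```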
|
|
|
|
2019-02-04 19:15:01 +00:00
|
|
|
{
|
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t bookmark_v2_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_BOOKMARKS,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_BOOKMARK_V2,
|
|
|
|
"com.datto:bookmark_v2", "bookmark_v2",
|
|
|
|
"Support for larger bookmarks",
|
|
|
|
0, ZFEATURE_TYPE_BOOLEAN, bookmark_v2_deps, sfeatures);
|
2019-02-04 19:15:01 +00:00
|
|
|
}
|
|
|
|
|
2017-08-14 17:36:48 +00:00
|
|
|
{
|
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t encryption_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_BOOKMARK_V2,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_ENCRYPTION,
|
|
|
|
"com.datto:encryption", "encryption",
|
|
|
|
"Support for dataset level encryption",
|
|
|
|
ZFEATURE_FLAG_PER_DATASET, ZFEATURE_TYPE_BOOLEAN,
|
|
|
|
encryption_deps, sfeatures);
|
2017-08-14 17:36:48 +00:00
|
|
|
}
|
2018-02-13 22:54:54 +00:00
|
|
|
|
|
|
|
{
|
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t project_quota_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_PROJECT_QUOTA,
|
|
|
|
"org.zfsonlinux:project_quota", "project_quota",
|
|
|
|
"space/object accounting based on project ID.",
|
|
|
|
ZFEATURE_FLAG_READONLY_COMPAT | ZFEATURE_FLAG_PER_DATASET,
|
|
|
|
ZFEATURE_TYPE_BOOLEAN, project_quota_deps, sfeatures);
|
2018-02-13 22:54:54 +00:00
|
|
|
}
|
2018-09-06 01:33:36 +00:00
|
|
|
|
|
|
|
zfeature_register(SPA_FEATURE_ALLOCATION_CLASSES,
|
|
|
|
"org.zfsonlinux:allocation_classes", "allocation_classes",
|
|
|
|
"Support for separate allocation classes.",
|
2021-12-17 00:43:10 +00:00
|
|
|
ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN, NULL,
|
|
|
|
sfeatures);
|
2018-10-19 04:06:18 +00:00
|
|
|
|
|
|
|
zfeature_register(SPA_FEATURE_RESILVER_DEFER,
|
|
|
|
"com.datto:resilver_defer", "resilver_defer",
|
2019-08-30 21:32:18 +00:00
|
|
|
"Support for deferring new resilvers when one is already running.",
|
2021-12-17 00:43:10 +00:00
|
|
|
ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN, NULL,
|
|
|
|
sfeatures);
|
2020-07-03 18:05:50 +00:00
|
|
|
|
|
|
|
zfeature_register(SPA_FEATURE_DEVICE_REBUILD,
|
|
|
|
"org.openzfs:device_rebuild", "device_rebuild",
|
Distributed Spare (dRAID) Feature
This patch adds a new top-level vdev type called dRAID, which stands
for Distributed parity RAID. This pool configuration allows all dRAID
vdevs to participate when rebuilding to a distributed hot spare device.
This can substantially reduce the total time required to restore full
parity to a pool with a failed device.
A dRAID pool can be created using the new top-level `draid` type.
Like `raidz`, the desired redundancy is specified after the type:
`draid[1,2,3]`. No additional information is required to create the
pool and reasonable default values will be chosen based on the number
of child vdevs in the dRAID vdev.
zpool create <pool> draid[1,2,3] <vdevs...>
Unlike raidz, additional optional dRAID configuration values can be
provided as part of the draid type as colon separated values. This
allows administrators to fully specify a layout for either performance
or capacity reasons. The supported options include:
zpool create <pool> \
draid[<parity>][:<data>d][:<children>c][:<spares>s] \
<vdevs...>
- draid[<parity>] - Parity level (default 1)
- draid[:<data>d] - Data devices per group (default 8)
- draid[:<children>c] - Expected number of child vdevs
- draid[:<spares>s] - Distributed hot spares (default 0)
Abbreviated example `zpool status` output for a 68 disk dRAID pool
with two distributed spares using special allocation classes.
```
pool: tank
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
slag7 ONLINE 0 0 0
draid2:8d:68c:2s-0 ONLINE 0 0 0
L0 ONLINE 0 0 0
L1 ONLINE 0 0 0
...
U25 ONLINE 0 0 0
U26 ONLINE 0 0 0
spare-53 ONLINE 0 0 0
U27 ONLINE 0 0 0
draid2-0-0 ONLINE 0 0 0
U28 ONLINE 0 0 0
U29 ONLINE 0 0 0
...
U42 ONLINE 0 0 0
U43 ONLINE 0 0 0
special
mirror-1 ONLINE 0 0 0
L5 ONLINE 0 0 0
U5 ONLINE 0 0 0
mirror-2 ONLINE 0 0 0
L6 ONLINE 0 0 0
U6 ONLINE 0 0 0
spares
draid2-0-0 INUSE currently in use
draid2-0-1 AVAIL
```
When adding test coverage for the new dRAID vdev type the following
options were added to the ztest command. These options are leveraged
by zloop.sh to test a wide range of dRAID configurations.
-K draid|raidz|random - kind of RAID to test
-D <value> - dRAID data drives per group
-S <value> - dRAID distributed hot spares
-R <value> - RAID parity (raidz or dRAID)
The zpool_create, zpool_import, redundancy, replacement and fault
test groups have all been updated to provide test coverage for the
dRAID feature.
Co-authored-by: Isaac Huang <he.huang@intel.com>
Co-authored-by: Mark Maybee <mmaybee@cray.com>
Co-authored-by: Don Brady <don.brady@delphix.com>
Co-authored-by: Matthew Ahrens <mahrens@delphix.com>
Co-authored-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mark Maybee <mmaybee@cray.com>
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #10102
2020-11-13 21:51:51 +00:00
|
|
|
"Support for sequential mirror/dRAID device rebuilds",
|
2021-12-17 00:43:10 +00:00
|
|
|
ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN, NULL,
|
|
|
|
sfeatures);
|
Add zstd support to zfs
This PR adds two new compression types, based on ZStandard:
- zstd: A basic ZStandard compression algorithm. Available compression
levels for zstd are zstd-1 through zstd-19, where the compression
increases with every level, but speed decreases.
- zstd-fast: A faster version of the ZStandard compression algorithm.
zstd-fast is basically a "negative" level of zstd. The compression
decreases with every level, but speed increases.
Available compression levels for zstd-fast:
- zstd-fast-1 through zstd-fast-10
- zstd-fast-20 through zstd-fast-100 (in increments of 10)
- zstd-fast-500 and zstd-fast-1000
For more information check the man page.
Implementation details:
Rather than treat each level of zstd as a different algorithm (as was
done historically with gzip), the block pointer `enum zio_compress`
value is simply zstd for all levels, including zstd-fast, since they all
use the same decompression function.
The compress= property (a 64bit unsigned integer) uses the lower 7 bits
to store the compression algorithm (matching the number of bits used in
a block pointer, as the 8th bit was borrowed for embedded block
pointers). The upper bits are used to store the compression level.
It is necessary to be able to determine what compression level was used
when later reading a block back, so the concept used in LZ4, where the
first 32bits of the on-disk value are the size of the compressed data
(since the allocation is rounded up to the nearest ashift), was
extended, and we store the version of ZSTD and the level as well as the
compressed size. This value is returned when decompressing a block, so
that if the block needs to be recompressed (L2ARC, nop-write, etc), that
the same parameters will be used to result in the matching checksum.
All of the internal ZFS code ( `arc_buf_hdr_t`, `objset_t`,
`zio_prop_t`, etc.) uses the separated _compress and _complevel
variables. Only the properties ZAP contains the combined/bit-shifted
value. The combined value is split when the compression_changed_cb()
callback is called, and sets both objset members (os_compress and
os_complevel).
The userspace tools all use the combined/bit-shifted value.
Additional notes:
zdb can now also decode the ZSTD compression header (flag -Z) and
inspect the size, version and compression level saved in that header.
For each record, if it is ZSTD compressed, the parameters of the decoded
compression header get printed.
ZSTD is included with all current tests and new tests are added
as-needed.
Per-dataset feature flags now get activated when the property is set.
If a compression algorithm requires a feature flag, zfs activates the
feature when the property is set, rather than waiting for the first
block to be born. This is currently only used by zstd but can be
extended as needed.
Portions-Sponsored-By: The FreeBSD Foundation
Co-authored-by: Allan Jude <allanjude@freebsd.org>
Co-authored-by: Brian Behlendorf <behlendorf1@llnl.gov>
Co-authored-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Co-authored-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
Co-authored-by: Michael Niewöhner <foss@mniewoehner.de>
Signed-off-by: Allan Jude <allan@klarasystems.com>
Signed-off-by: Allan Jude <allanjude@freebsd.org>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Signed-off-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
Signed-off-by: Michael Niewöhner <foss@mniewoehner.de>
Closes #6247
Closes #9024
Closes #10277
Closes #10278
2020-08-18 17:10:17 +00:00
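A small sketch of the property encoding described above (hypothetical macro and helper names; the real constants live in the ZFS headers): the low 7 bits of the compress= property carry the compression algorithm and the remaining bits carry the requested level, which is split back out when compression_changed_cb() fires.
```c
#include <stdint.h>

#define	TOY_COMPRESS_ALGO_BITS	7
#define	TOY_COMPRESS_ALGO_MASK	((1ULL << TOY_COMPRESS_ALGO_BITS) - 1)

/* Pack algorithm + level into the single combined property value. */
static inline uint64_t
toy_compress_prop_pack(uint64_t algo, uint64_t level)
{
	return ((level << TOY_COMPRESS_ALGO_BITS) |
	    (algo & TOY_COMPRESS_ALGO_MASK));
}

/* Split the combined value back into the two objset-side fields. */
static inline uint64_t
toy_compress_prop_algo(uint64_t prop)
{
	return (prop & TOY_COMPRESS_ALGO_MASK);
}

static inline uint64_t
toy_compress_prop_level(uint64_t prop)
{
	return (prop >> TOY_COMPRESS_ALGO_BITS);
}
```
So, for example, zstd-19 would be stored as the algorithm value in the low 7 bits with 19 in the upper bits, while the level stored alongside the compressed size on disk lets a later rewrite (L2ARC, nop-write) reproduce the identical checksum.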
|
|
|
|
|
|
|
{
|
2021-12-17 00:43:10 +00:00
|
|
|
static const spa_feature_t zstd_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_ZSTD_COMPRESS,
|
|
|
|
"org.freebsd:zstd_compress", "zstd_compress",
|
|
|
|
"zstd compression algorithm support.",
|
|
|
|
ZFEATURE_FLAG_PER_DATASET, ZFEATURE_TYPE_BOOLEAN, zstd_deps,
|
|
|
|
sfeatures);
|
2020-08-18 17:10:17 +00:00
|
|
|
}
|
2020-11-13 21:51:51 +00:00
|
|
|
|
|
|
|
zfeature_register(SPA_FEATURE_DRAID,
|
2020-11-23 22:49:17 +00:00
|
|
|
"org.openzfs:draid", "draid", "Support for distributed spare RAID",
|
2021-12-17 00:43:10 +00:00
|
|
|
ZFEATURE_FLAG_MOS, ZFEATURE_TYPE_BOOLEAN, NULL, sfeatures);
|
|
|
|
|
log xattr=sa create/remove/update to ZIL
As such, there are no specific synchronous semantics defined for
the xattrs. But for xattr=on, it does log to ZIL and zil_commit() is
done, if sync=always is set on dataset. This provides sync semantics
for xattr=on with sync=always set on dataset.
For the xattr=sa implementation, it doesn't log to ZIL, so, even with
sync=always, xattrs are not guaranteed to be synced before xattr call
returns to caller. So, xattr can be lost if system crash happens, before
txg carrying xattr transaction is synced.
This change adds xattr=sa logging to ZIL on xattr create/remove/update
and xattrs are synced to ZIL (zil_commit() done) for sync=always.
This makes xattr=sa behavior similar to xattr=on.
Implementation notes:
The actual logging is fairly straight-forward and does not warrant
additional explanation.
However, it has been 14 years since we last added new TX types
to the ZIL [1], hence this is the first time we do it after the
introduction of zpool features. Therefore, here is an overview of the
feature activation and deactivation workflow:
1. The feature must be enabled. Otherwise, we don't log the new
record type. This ensures compatibility with older software.
2. The feature is activated per-dataset, since the ZIL is per-dataset.
3. If the feature is enabled and dataset is not for zvol, any append to
the ZIL chain will activate the feature for the dataset. Likewise
for starting a new ZIL chain.
4. A dataset that doesn't have a ZIL chain has the feature deactivated.
We ensure (3) by activating on the first zil_commit() after the feature
was enabled. Since activating the features requires waiting for txg
sync, the first zil_commit() after enabling the feature will be slower
than usual. The downside is that this is really a conservative
approximation: even if we never append a 'TX_SETSAXATTR' to the ZIL
chain, we pay the penalty for feature activation. The upside is that the
user is in control of when we pay the penalty, i.e., upon enabling the
feature.
We ensure (4) by hooking into zil_sync(), where ZIL destroy actually
happens.
One more piece on feature activation, since it's spread across
multiple functions:
zil_commit()
zil_process_commit_list()
if lwb == NULL // first zil_commit since zil_open
zil_create()
if no log block pointer in ZIL header:
if feature enabled and not active:
// CASE 1
enable, COALESCE txg wait with dmu_tx that allocated the
log block
else // log block was allocated earlier than this zil_open
if feature enabled and not active:
// CASE 2
enable, EXPLICIT txg wait
else // already have an in-DRAM LWB
if feature enabled and not active:
// this happens when we enable the feature after zil_create
// CASE 3
enable, EXPLICIT txg wait
[1] https://github.com/illumos/illumos-gate/commit/da6c28aaf62fa55f0fdb8004aa40f88f23bf53f0
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Christian Schwarz <christian.schwarz@nutanix.com>
Reviewed-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Reviewed-by: Ryan Moeller <freqlabs@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jitendra Patidar <jitendra.patidar@nutanix.com>
Closes #8768
Closes #9078
2022-02-22 21:06:43 +00:00
|
|
|
{
|
Introduce BLAKE3 checksums as an OpenZFS feature
This commit adds BLAKE3 checksums to OpenZFS, it has similar
performance to Edon-R, but without the caveats around the latter.
Homepage of BLAKE3: https://github.com/BLAKE3-team/BLAKE3
Wikipedia: https://en.wikipedia.org/wiki/BLAKE_(hash_function)#BLAKE3
Short description from Wikipedia:
BLAKE3 is a cryptographic hash function based on Bao and BLAKE2,
created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and
Zooko Wilcox-O'Hearn. It was announced on January 9, 2020, at Real
World Crypto. BLAKE3 is a single algorithm with many desirable
features (parallelism, XOF, KDF, PRF and MAC), in contrast to BLAKE
and BLAKE2, which are algorithm families with multiple variants.
BLAKE3 has a binary tree structure, so it supports a practically
unlimited degree of parallelism (both SIMD and multithreading) given
enough input. The official Rust and C implementations are
dual-licensed as public domain (CC0) and the Apache License.
Along with adding the BLAKE3 hash into the OpenZFS infrastructure a
new benchmarking file called chksum_bench was introduced. When read
it reports the speed of the available checksum functions.
On Linux: cat /proc/spl/kstat/zfs/chksum_bench
On FreeBSD: sysctl kstat.zfs.misc.chksum_bench
This is an example output of an i3-1005G1 test system with Debian 11:
implementation 1k 4k 16k 64k 256k 1m 4m
edonr-generic 1196 1602 1761 1749 1762 1759 1751
skein-generic 546 591 608 615 619 612 616
sha256-generic 240 300 316 314 304 285 276
sha512-generic 353 441 467 476 472 467 426
blake3-generic 308 313 313 313 312 313 312
blake3-sse2 402 1289 1423 1446 1432 1458 1413
blake3-sse41 427 1470 1625 1704 1679 1607 1629
blake3-avx2 428 1920 3095 3343 3356 3318 3204
blake3-avx512 473 2687 4905 5836 5844 5643 5374
Output on Debian 5.10.0-10-amd64 system: (Ryzen 7 5800X)
implementation 1k 4k 16k 64k 256k 1m 4m
edonr-generic 1840 2458 2665 2719 2711 2723 2693
skein-generic 870 966 996 992 1003 1005 1009
sha256-generic 415 442 453 455 457 457 457
sha512-generic 608 690 711 718 719 720 721
blake3-generic 301 313 311 309 309 310 310
blake3-sse2 343 1865 2124 2188 2180 2181 2186
blake3-sse41 364 2091 2396 2509 2463 2482 2488
blake3-avx2 365 2590 4399 4971 4915 4802 4764
Output on Debian 5.10.0-9-powerpc64le system: (POWER 9)
implementation 1k 4k 16k 64k 256k 1m 4m
edonr-generic 1213 1703 1889 1918 1957 1902 1907
skein-generic 434 492 520 522 511 525 525
sha256-generic 167 183 187 188 188 187 188
sha512-generic 186 216 222 221 225 224 224
blake3-generic 153 152 154 153 151 153 153
blake3-sse2 391 1170 1366 1406 1428 1426 1414
blake3-sse41 352 1049 1212 1174 1262 1258 1259
Output on Debian 5.10.0-11-arm64 system: (Pi400)
implementation 1k 4k 16k 64k 256k 1m 4m
edonr-generic 487 603 629 639 643 641 641
skein-generic 271 299 303 308 309 309 307
sha256-generic 117 127 128 130 130 129 130
sha512-generic 145 165 170 172 173 174 175
blake3-generic 81 29 71 89 89 89 89
blake3-sse2 112 323 368 379 380 371 374
blake3-sse41 101 315 357 368 369 364 360
Structurally, the new code is mainly split into these parts:
- 1x cross platform generic c variant: blake3_generic.c
- 4x assembly for X86-64 (SSE2, SSE4.1, AVX2, AVX512)
- 2x assembly for ARMv8 (NEON converted from SSE2)
- 2x assembly for PPC64-LE (POWER8 converted from SSE2)
- one file for switching between the implementations
Note the PPC64 assembly requires the VSX instruction set and the
kfpu_begin() / kfpu_end() calls on PowerPC were updated accordingly.
Reviewed-by: Felix Dörre <felix@dogcraft.de>
Reviewed-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tino Reichardt <milky-zfs@mcmilk.de>
Co-authored-by: Rich Ercolani <rincebrain@gmail.com>
Closes #10058
Closes #12918
2022-06-08 22:55:57 +00:00
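For orientation, this is what hashing a buffer looks like with the upstream BLAKE3 reference C API (blake3.h from the BLAKE3 repository); the ZFS-internal wiring goes through the usual checksum templates and the SIMD implementation switching described above instead:
```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include "blake3.h"	/* upstream reference implementation header */

int
main(void)
{
	const char *msg = "hello, world";
	uint8_t out[BLAKE3_OUT_LEN];	/* 32-byte default output */
	blake3_hasher hasher;

	blake3_hasher_init(&hasher);
	blake3_hasher_update(&hasher, msg, strlen(msg));
	blake3_hasher_finalize(&hasher, out, BLAKE3_OUT_LEN);

	for (size_t i = 0; i < BLAKE3_OUT_LEN; i++)
		printf("%02x", out[i]);
	printf("\n");
	return (0);
}
```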
|
|
|
static const spa_feature_t zilsaxattr_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_ZILSAXATTR,
|
|
|
|
"org.openzfs:zilsaxattr", "zilsaxattr",
|
|
|
|
"Support for xattr=sa extended attribute logging in ZIL.",
|
|
|
|
ZFEATURE_FLAG_PER_DATASET | ZFEATURE_FLAG_READONLY_COMPAT,
|
|
|
|
ZFEATURE_TYPE_BOOLEAN, zilsaxattr_deps, sfeatures);
|
log xattr=sa create/remove/update to ZIL
As such, there are no specific synchronous semantics defined for
the xattrs. But for xattr=on, it does log to ZIL and zil_commit() is
done, if sync=always is set on dataset. This provides sync semantics
for xattr=on with sync=always set on dataset.
For the xattr=sa implementation, it doesn't log to ZIL, so, even with
sync=always, xattrs are not guaranteed to be synced before xattr call
returns to caller. So, xattr can be lost if system crash happens, before
txg carrying xattr transaction is synced.
This change adds xattr=sa logging to ZIL on xattr create/remove/update
and xattrs are synced to ZIL (zil_commit() done) for sync=always.
This makes xattr=sa behavior similar to xattr=on.
Implementation notes:
The actual logging is fairly straight-forward and does not warrant
additional explanation.
However, it has been 14 years since we last added new TX types
to the ZIL [1], hence this is the first time we do it after the
introduction of zpool features. Therefore, here is an overview of the
feature activation and deactivation workflow:
1. The feature must be enabled. Otherwise, we don't log the new
record type. This ensures compatibility with older software.
2. The feature is activated per-dataset, since the ZIL is per-dataset.
3. If the feature is enabled and the dataset is not a zvol, any append
to the ZIL chain will activate the feature for the dataset, as will
starting a new ZIL chain.
4. A dataset that doesn't have a ZIL chain has the feature deactivated.
We ensure (3) by activating on the first zil_commit() after the feature
was enabled. Since activating the feature requires waiting for txg
sync, the first zil_commit() after enabling the feature will be slower
than usual. The downside is that this is really a conservative
approximation: even if we never append a 'TX_SETSAXATTR' to the ZIL
chain, we pay the penalty for feature activation. The upside is that the
user is in control of when we pay the penalty, i.e., upon enabling the
feature.
We ensure (4) by hooking into zil_sync(), where ZIL destroy actually
happens.
One more piece on feature activation, since it's spread across
multiple functions:
zil_commit()
  zil_process_commit_list()
    if lwb == NULL // first zil_commit since zil_open
      zil_create()
        if no log block pointer in ZIL header:
          if feature enabled and not active:
            // CASE 1
            enable, COALESCE txg wait with dmu_tx that allocated the
            log block
        else // log block was allocated earlier than this zil_open
          if feature enabled and not active:
            // CASE 2
            enable, EXPLICIT txg wait
    else // already have an in-DRAM LWB
      if feature enabled and not active:
        // this happens when we enable the feature after zil_create
        // CASE 3
        enable, EXPLICIT txg wait
[1] https://github.com/illumos/illumos-gate/commit/da6c28aaf62fa55f0fdb8004aa40f88f23bf53f0
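Below is a minimal, self-contained sketch of the decision logic in the
flow above. All names here (struct zil_model, model_zil_commit(), the
txg_wait_t values) are hypothetical stand-ins used only for
illustration, not the actual OpenZFS functions or types:

#include <stdbool.h>
#include <stdio.h>

typedef enum { WAIT_NONE, WAIT_COALESCED, WAIT_EXPLICIT } txg_wait_t;

struct zil_model {
	bool feature_enabled;	/* pool feature enabled by the admin */
	bool feature_active;	/* feature already active on this dataset */
	bool have_lwb;		/* an in-DRAM log write block exists */
	bool have_log_block;	/* ZIL header already points at a log block */
};

/* Decide how the feature gets activated on the next zil_commit(). */
static txg_wait_t
model_zil_commit(struct zil_model *z)
{
	if (!z->feature_enabled || z->feature_active)
		return (WAIT_NONE);

	z->feature_active = true;
	if (!z->have_lwb && !z->have_log_block)
		return (WAIT_COALESCED);	/* CASE 1: ride the dmu_tx txg */
	return (WAIT_EXPLICIT);			/* CASE 2 and CASE 3 */
}

int
main(void)
{
	struct zil_model z = { .feature_enabled = true };
	printf("txg wait type: %d\n", model_zil_commit(&z));
	return (0);
}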
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Christian Schwarz <christian.schwarz@nutanix.com>
Reviewed-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Reviewed-by: Ryan Moeller <freqlabs@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jitendra Patidar <jitendra.patidar@nutanix.com>
Closes #8768
Closes #9078
2022-02-22 21:06:43 +00:00
|
|
|
}
|
|
|
|
|
Improve zpool status output, list all affected datasets
Currently, determining which datasets are affected by corruption is
a manual process.
The primary difficulty in reporting the list of affected snapshots is
that the snapshot in which the error originally occurred may have been
deleted since the error was first found. To solve this issue, we add
the ID of the head dataset of the snapshot in which the error was
detected to the stored error report. Then, any time a filesystem is
deleted, the errors associated with it are deleted as well, and any
time a clone promote occurs, we modify reports associated with the
original head to refer to the new head. Stored error reports are thus
identified by this head ID and the birth time of the block in which the
error occurred; some information about the error itself is stored as
well.
Once this information is stored, we can find the set of datasets
affected by an error by walking back the list of snapshots in the given
head until we find one with the appropriate birth txg, and then
traversing through the snapshots of the clone family, terminating a
branch if the block was replaced in a given snapshot. We then report
this information back to libzfs and to the zpool status command, where
it is displayed as follows:
  pool: test
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:00:00 with 800 errors on Fri Dec 3 08:27:57 2021
config:

        NAME        STATE     READ WRITE CKSUM
        test        ONLINE       0     0     0
          sdb       ONLINE       0     0 1.58K

errors: Permanent errors have been detected in the following files:

        test@1:/test.0.0
        /test/test.0.0
        /test/1clone/test.0.0
A new feature flag is introduced to mark the presence of this change,
along with promotion and backwards-compatibility logic. This is an
updated version of #9175. The rebase required fixing the tests,
updating the ABI of libzfs, updating the man pages, fixing bugs, fixing
the error returns, and updating the old on-disk error logs to the new
format when activating the feature.
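The following is a small, self-contained sketch of the walk described
above, with the clone-family traversal omitted. The types and the
block_replaced flag are hypothetical stand-ins, not the on-disk format
or the actual OpenZFS code:

#include <stdbool.h>
#include <stdio.h>

struct snap {
	const char *name;
	unsigned long long creation_txg;
	bool block_replaced;	/* block was overwritten in this snapshot */
};

/* Report every snapshot that still contains the block born at birth_txg. */
static void
report_affected(const struct snap *snaps, int nsnaps,
    unsigned long long birth_txg)
{
	for (int i = 0; i < nsnaps; i++) {
		if (snaps[i].creation_txg < birth_txg)
			continue;	/* snapshot predates the bad block */
		if (snaps[i].block_replaced)
			break;		/* branch terminates: block rewritten */
		printf("affected: %s\n", snaps[i].name);
	}
}

int
main(void)
{
	/* snapshots ordered oldest to newest */
	const struct snap snaps[] = {
		{ "test@0", 10, false },
		{ "test@1", 20, false },
		{ "test@2", 30, true },
	};
	report_affected(snaps, 3, 15);	/* block born in txg 15 */
	return (0);
}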
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Co-authored-by: TulsiJain <tulsi.jain@delphix.com>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Closes #9175
Closes #12812
2022-04-26 00:25:42 +00:00
|
|
|
zfeature_register(SPA_FEATURE_HEAD_ERRLOG,
|
|
|
|
"com.delphix:head_errlog", "head_errlog",
|
|
|
|
"Support for per-dataset on-disk error logs.",
|
|
|
|
ZFEATURE_FLAG_ACTIVATE_ON_ENABLE, ZFEATURE_TYPE_BOOLEAN, NULL,
|
|
|
|
sfeatures);
|
|
|
|
|
Introduce BLAKE3 checksums as an OpenZFS feature
This commit adds BLAKE3 checksums to OpenZFS. BLAKE3 offers performance
similar to Edon-R, but without the caveats around the latter.
Homepage of BLAKE3: https://github.com/BLAKE3-team/BLAKE3
Wikipedia: https://en.wikipedia.org/wiki/BLAKE_(hash_function)#BLAKE3
Short description from Wikipedia:
BLAKE3 is a cryptographic hash function based on Bao and BLAKE2,
created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and
Zooko Wilcox-O'Hearn. It was announced on January 9, 2020, at Real
World Crypto. BLAKE3 is a single algorithm with many desirable
features (parallelism, XOF, KDF, PRF and MAC), in contrast to BLAKE
and BLAKE2, which are algorithm families with multiple variants.
BLAKE3 has a binary tree structure, so it supports a practically
unlimited degree of parallelism (both SIMD and multithreading) given
enough input. The official Rust and C implementations are
dual-licensed as public domain (CC0) and the Apache License.
Along with adding the BLAKE3 hash to the OpenZFS infrastructure, a
new benchmarking file called chksum_bench was introduced. When read,
it reports the speed of the available checksum functions.
On Linux: cat /proc/spl/kstat/zfs/chksum_bench
On FreeBSD: sysctl kstat.zfs.misc.chksum_bench
This is example output from an i3-1005G1 test system running Debian 11:
implementation 1k 4k 16k 64k 256k 1m 4m
edonr-generic 1196 1602 1761 1749 1762 1759 1751
skein-generic 546 591 608 615 619 612 616
sha256-generic 240 300 316 314 304 285 276
sha512-generic 353 441 467 476 472 467 426
blake3-generic 308 313 313 313 312 313 312
blake3-sse2 402 1289 1423 1446 1432 1458 1413
blake3-sse41 427 1470 1625 1704 1679 1607 1629
blake3-avx2 428 1920 3095 3343 3356 3318 3204
blake3-avx512 473 2687 4905 5836 5844 5643 5374
Output on Debian 5.10.0-10-amd64 system: (Ryzen 7 5800X)
implementation 1k 4k 16k 64k 256k 1m 4m
edonr-generic 1840 2458 2665 2719 2711 2723 2693
skein-generic 870 966 996 992 1003 1005 1009
sha256-generic 415 442 453 455 457 457 457
sha512-generic 608 690 711 718 719 720 721
blake3-generic 301 313 311 309 309 310 310
blake3-sse2 343 1865 2124 2188 2180 2181 2186
blake3-sse41 364 2091 2396 2509 2463 2482 2488
blake3-avx2 365 2590 4399 4971 4915 4802 4764
Output on Debian 5.10.0-9-powerpc64le system: (POWER 9)
implementation 1k 4k 16k 64k 256k 1m 4m
edonr-generic 1213 1703 1889 1918 1957 1902 1907
skein-generic 434 492 520 522 511 525 525
sha256-generic 167 183 187 188 188 187 188
sha512-generic 186 216 222 221 225 224 224
blake3-generic 153 152 154 153 151 153 153
blake3-sse2 391 1170 1366 1406 1428 1426 1414
blake3-sse41 352 1049 1212 1174 1262 1258 1259
Output on Debian 5.10.0-11-arm64 system: (Pi400)
implementation 1k 4k 16k 64k 256k 1m 4m
edonr-generic 487 603 629 639 643 641 641
skein-generic 271 299 303 308 309 309 307
sha256-generic 117 127 128 130 130 129 130
sha512-generic 145 165 170 172 173 174 175
blake3-generic 81 29 71 89 89 89 89
blake3-sse2 112 323 368 379 380 371 374
blake3-sse41 101 315 357 368 369 364 360
Structurally, the new code is mainly split into these parts:
- 1x cross-platform generic C variant: blake3_generic.c
- 4x assembly for X86-64 (SSE2, SSE4.1, AVX2, AVX512)
- 2x assembly for ARMv8 (NEON converted from SSE2)
- 2x assembly for PPC64-LE (POWER8 converted from SSE2)
- one file for switching between the implementations
Note that the PPC64 assembly requires the VSX instruction set; the
kfpu_begin() / kfpu_end() calls on PowerPC were updated accordingly.
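As an illustration of the "switching" file's role, here is a minimal
sketch of runtime dispatch over implementation variants. The names
(blake3_ops_t, choose_impl(), always_supported()) are hypothetical
stand-ins and the hash body is a placeholder, not the real dispatcher:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

typedef struct {
	const char *name;
	bool (*is_supported)(void);	/* CPU feature check */
	void (*hash)(const void *buf, size_t len, unsigned char out[32]);
} blake3_ops_t;

static bool always_supported(void) { return (true); }

static void
hash_generic(const void *buf, size_t len, unsigned char out[32])
{
	(void) buf; (void) len;
	memset(out, 0, 32);	/* placeholder for the real BLAKE3 rounds */
}

static const blake3_ops_t impls[] = {
	{ "generic", always_supported, hash_generic },
	/* SSE2/SSE4.1/AVX2/AVX512/NEON/POWER8 variants would follow here */
};

static const blake3_ops_t *
choose_impl(void)
{
	/* pick the last (fastest) supported implementation */
	const blake3_ops_t *best = &impls[0];
	for (size_t i = 0; i < sizeof (impls) / sizeof (impls[0]); i++)
		if (impls[i].is_supported())
			best = &impls[i];
	return (best);
}

int
main(void)
{
	printf("selected implementation: %s\n", choose_impl()->name);
	return (0);
}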
Reviewed-by: Felix Dörre <felix@dogcraft.de>
Reviewed-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tino Reichardt <milky-zfs@mcmilk.de>
Co-authored-by: Rich Ercolani <rincebrain@gmail.com>
Closes #10058
Closes #12918
2022-06-08 22:55:57 +00:00
|
|
|
{
|
|
|
|
static const spa_feature_t blake3_deps[] = {
|
|
|
|
SPA_FEATURE_EXTENSIBLE_DATASET,
|
|
|
|
SPA_FEATURE_NONE
|
|
|
|
};
|
|
|
|
zfeature_register(SPA_FEATURE_BLAKE3,
|
|
|
|
"org.openzfs:blake3", "blake3",
|
|
|
|
"BLAKE3 hash algorithm.",
|
|
|
|
ZFEATURE_FLAG_PER_DATASET, ZFEATURE_TYPE_BOOLEAN,
|
|
|
|
blake3_deps, sfeatures);
|
|
|
|
}
|
|
|
|
|
zcommon: pre-iterate over sysfs instead of statting every feature
If sufficient memory (<2K, realistically) is available, libzfs_init()
can be significantly shortened by iterating over the correct sysfs
directory before registrations: we can turn 168 stats into 15/18
syscalls (3 opens (6 if built in), 3 fstats, 6 getdentses, and 3
closes), a roughly tenfold reduction; this is probably a bit faster,
too.
The list is always optional, and registration functions (and one-off
users) can simply pass NULL, which falls back to the previous
mechanism.
Also, don't allocate in zfs_mod_supported_impl, and use access()
instead of stat(), since existence is really what we care about.
Also, fix pre-prop-checking compat in the fallback for built-in ZFS.
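A rough, self-contained sketch of the idea (hypothetical path and
helper names, not the libzfs code): read the directory once with
opendir()/readdir(), keep the names in memory, and answer later
membership queries without further syscalls, returning NULL so callers
can fall back to per-name checks if the directory cannot be read:

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct name_list {
	char **names;
	size_t count;
};

/* One pass over a directory; NULL means "fall back to per-name access()". */
static struct name_list *
list_dir(const char *path)
{
	DIR *dir = opendir(path);
	if (dir == NULL)
		return (NULL);

	struct name_list *l = calloc(1, sizeof (*l));
	struct dirent *de;
	while ((de = readdir(dir)) != NULL) {
		l->names = realloc(l->names, (l->count + 1) * sizeof (char *));
		l->names[l->count++] = strdup(de->d_name);
	}
	closedir(dir);
	return (l);
}

static int
name_listed(const struct name_list *l, const char *name)
{
	for (size_t i = 0; i < l->count; i++)
		if (strcmp(l->names[i], name) == 0)
			return (1);
	return (0);
}

int
main(void)
{
	/* hypothetical directory; the real code walks module sysfs dirs */
	struct name_list *l = list_dir("/sys/module");
	if (l != NULL)
		printf("zfs listed: %d\n", name_listed(l, "zfs"));
	return (0);
}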
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12089
2021-12-17 00:43:10 +00:00
|
|
|
zfs_mod_list_supported_free(sfeatures);
|
2012-12-13 23:24:15 +00:00
|
|
|
}
|
2017-08-09 22:31:08 +00:00
|
|
|
|
2018-02-16 01:53:18 +00:00
|
|
|
#if defined(_KERNEL)
|
2018-09-02 19:09:53 +00:00
|
|
|
EXPORT_SYMBOL(zfeature_lookup_guid);
|
2017-08-09 22:31:08 +00:00
|
|
|
EXPORT_SYMBOL(zfeature_lookup_name);
|
|
|
|
EXPORT_SYMBOL(zfeature_is_supported);
|
|
|
|
EXPORT_SYMBOL(zfeature_is_valid_guid);
|
|
|
|
EXPORT_SYMBOL(zfeature_depends_on);
|
|
|
|
EXPORT_SYMBOL(zpool_feature_init);
|
|
|
|
EXPORT_SYMBOL(spa_feature_table);
|
|
|
|
#endif
|