Cleanup: 64-bit kernel module parameters should use fixed width types
Various module parameters such as `zfs_arc_max` were originally
`uint64_t` on OpenSolaris/Illumos, but were changed to `unsigned long`
for Linux compatibility because Linux's default kernel module parameter
implementation did not support 64-bit types on 32-bit platforms.

This caused problems when porting OpenZFS to Windows because its LLP64
memory model made `unsigned long` a 32-bit type on 64-bit, which created
the undesirable situation that parameters that should accept 64-bit
values could not on 64-bit Windows.

Upon inspection, it turns out that the Linux kernel module parameter
interface is extensible, such that we are allowed to define our own
types. Rather than maintaining the original type change via hacks to
continue shrinking module parameters on 32-bit Linux, we implement
support for 64-bit module parameters on Linux.

After reviewing all 64-bit kernel parameters (found via the man page
and also proposed changes by Andrew Innes), the kernel module
parameters fell into a few groups:

Parameters that were originally 64-bit on Illumos:

 * dbuf_cache_max_bytes
 * dbuf_metadata_cache_max_bytes
 * l2arc_feed_min_ms
 * l2arc_feed_secs
 * l2arc_headroom
 * l2arc_headroom_boost
 * l2arc_write_boost
 * l2arc_write_max
 * metaslab_aliquot
 * metaslab_force_ganging
 * zfetch_array_rd_sz
 * zfs_arc_max
 * zfs_arc_meta_limit
 * zfs_arc_meta_min
 * zfs_arc_min
 * zfs_async_block_max_blocks
 * zfs_condense_max_obsolete_bytes
 * zfs_condense_min_mapping_bytes
 * zfs_deadman_checktime_ms
 * zfs_deadman_synctime_ms
 * zfs_initialize_chunk_size
 * zfs_initialize_value
 * zfs_lua_max_instrlimit
 * zfs_lua_max_memlimit
 * zil_slog_bulk

Parameters that were originally 32-bit on Illumos:

 * zfs_per_txg_dirty_frees_percent

Parameters that were originally `ssize_t` on Illumos:

 * zfs_immediate_write_sz

Note that `ssize_t` is `int32_t` on 32-bit and `int64_t` on 64-bit. It
has been upgraded to 64-bit.

Parameters that were `long`/`unsigned long` because of Linux/FreeBSD
influence:

 * l2arc_rebuild_blocks_min_l2size
 * zfs_key_max_salt_uses
 * zfs_max_log_walking
 * zfs_max_logsm_summary_length
 * zfs_metaslab_max_size_cache_sec
 * zfs_min_metaslabs_to_flush
 * zfs_multihost_interval
 * zfs_unflushed_log_block_max
 * zfs_unflushed_log_block_min
 * zfs_unflushed_log_block_pct
 * zfs_unflushed_max_mem_amt
 * zfs_unflushed_max_mem_ppm

New parameters that do not exist in Illumos:

 * l2arc_trim_ahead
 * vdev_file_logical_ashift
 * vdev_file_physical_ashift
 * zfs_arc_dnode_limit
 * zfs_arc_dnode_limit_percent
 * zfs_arc_dnode_reduce_percent
 * zfs_arc_meta_limit_percent
 * zfs_arc_sys_free
 * zfs_deadman_ziotime_ms
 * zfs_delete_blocks
 * zfs_history_output_max
 * zfs_livelist_max_entries
 * zfs_max_async_dedup_frees
 * zfs_max_nvlist_src_size
 * zfs_rebuild_max_segment
 * zfs_rebuild_vdev_limit
 * zfs_unflushed_log_txg_max
 * zfs_vdev_max_auto_ashift
 * zfs_vdev_min_auto_ashift
 * zfs_vnops_read_chunk_size
 * zvol_max_discard_blocks

Rather than clutter the lists with commentary, the module parameters
that need comments are repeated below.
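Since the Linux interface lets a module supply its own
`kernel_param_ops`, adding a 64-bit type is mostly a matter of
providing 64-bit set/get callbacks and registering tunables with
`module_param_cb()`. The following is a minimal sketch of that
mechanism, assuming hypothetical `example_*` names; the commit's
actual `spl_param_ops_u64` implementation in the SPL differs in
detail:

```c
/*
 * Illustrative sketch only: a user-defined 64-bit module parameter
 * type for Linux, in the spirit of the spl_param_ops_u64 added by
 * this commit. Names and error handling are hypothetical.
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static int
example_param_set_u64(const char *val, const struct kernel_param *kp)
{
	/* kstrtoull() parses into a fixed-width 64-bit integer. */
	return (kstrtoull(val, 0, (unsigned long long *)kp->arg));
}

static int
example_param_get_u64(char *buffer, const struct kernel_param *kp)
{
	/* %llu is 64 bits wide on both 32-bit and 64-bit kernels. */
	return (scnprintf(buffer, PAGE_SIZE, "%llu\n",
	    *(unsigned long long *)kp->arg));
}

static const struct kernel_param_ops example_param_ops_u64 = {
	.set = example_param_set_u64,
	.get = example_param_get_u64,
};

/* Registered with module_param_cb() instead of plain module_param(). */
static unsigned long long example_tunable = 0;
module_param_cb(example_tunable, &example_param_ops_u64,
    &example_tunable, 0644);
MODULE_PARM_DESC(example_tunable, "Example 64-bit tunable");
```

With such ops in place, the `ZFS_MODULE_PARAM` macro can expand to
`module_param_cb()` with a per-type ops table, which is the
substitution visible in the Linux `mod_os.h` hunk in the diff below.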
A few parameters were defined in Linux/FreeBSD specific code, where the
use of ulong/long is not an issue for portability, so we leave them
alone:

 * zfs_delete_blocks
 * zfs_key_max_salt_uses
 * zvol_max_discard_blocks

The documentation for a few parameters was found to be incorrect:

 * zfs_deadman_checktime_ms - incorrectly documented as int
 * zfs_delete_blocks - not documented as Linux only
 * zfs_history_output_max - incorrectly documented as int
 * zfs_vnops_read_chunk_size - incorrectly documented as long
 * zvol_max_discard_blocks - incorrectly documented as ulong

The documentation for these has been fixed, alongside the changes to
document the switch to fixed width types.

In addition, several kernel module parameters were percentages or held
ashift values, so being 64-bit never made sense for them. They have
been downgraded to 32-bit:

 * vdev_file_logical_ashift
 * vdev_file_physical_ashift
 * zfs_arc_dnode_limit_percent
 * zfs_arc_dnode_reduce_percent
 * zfs_arc_meta_limit_percent
 * zfs_per_txg_dirty_frees_percent
 * zfs_unflushed_log_block_pct
 * zfs_vdev_max_auto_ashift
 * zfs_vdev_min_auto_ashift

Of special note are `zfs_vdev_max_auto_ashift` and
`zfs_vdev_min_auto_ashift`, which were already defined as `uint64_t`,
and passed to the kernel as `ulong`. This is inherently buggy on big
endian 32-bit Linux, since the values would not be written to the
correct locations. 32-bit FreeBSD was unaffected because its sysctl
code correctly treated this as a `uint64_t`.

Lastly, a code comment suggests that `zfs_arc_sys_free` is
Linux-specific, but there is nothing to indicate to me that it is
Linux-specific. Nothing was done about that.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Original-patch-by: Andrew Innes <andrew.c12@gmail.com>
Original-patch-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Richard Yao <richard.yao@alumni.stonybrook.edu>
Closes #13984
Closes #14004
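To make the big-endian hazard described above concrete, here is a
small, hypothetical user-space demonstration (not part of the commit)
of a `uint64_t` that is only written through a 32-bit store, as
happens when a `uint64_t` tunable is registered with `ulong` parameter
ops on 32-bit Linux:

```c
/*
 * Hypothetical demonstration (not from the commit) of the aliasing
 * bug: a uint64_t variable updated through a 32-bit store.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	uint64_t ashift = 0;
	uint32_t store = 12;	/* what a 32-bit "ulong" store writes */

	/* Only the first four bytes of ashift are written. */
	memcpy(&ashift, &store, sizeof (store));

	/*
	 * Little endian: prints 12, masking the bug.
	 * Big endian: prints 51539607552 (12 << 32), because the
	 * first four bytes are the *high* half of the 64-bit value.
	 */
	printf("%" PRIu64 "\n", ashift);
	return (0);
}
```

On little-endian 32-bit systems the two layouts happen to agree for
small values, which is why the bug went unnoticed there; FreeBSD was
unaffected because its sysctl handler already read and wrote the full
64 bits.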
parent ff7a0a108f
commit ab8d9c1783
@@ -252,7 +252,7 @@ static const ztest_shared_opts_t ztest_opts_defaults = {
 
 extern uint64_t metaslab_force_ganging;
 extern uint64_t metaslab_df_alloc_threshold;
-extern unsigned long zfs_deadman_synctime_ms;
+extern uint64_t zfs_deadman_synctime_ms;
 extern uint_t metaslab_preload_limit;
 extern int zfs_compressed_arc_enabled;
 extern int zfs_abd_scatter_enabled;
@@ -7119,9 +7119,9 @@ ztest_deadman_thread(void *arg)
 	 */
 	if (spa_suspended(spa) || spa->spa_root_vdev == NULL) {
 		fatal(B_FALSE,
-		    "aborting test after %lu seconds because "
+		    "aborting test after %llu seconds because "
 		    "pool has transitioned to a suspended state.",
-		    zfs_deadman_synctime_ms / 1000);
+		    (u_longlong_t)zfs_deadman_synctime_ms / 1000);
 	}
 	vdev_deadman(spa->spa_root_vdev, FTAG);
 
@@ -52,17 +52,17 @@
 
 #define ZFS_MODULE_VIRTUAL_PARAM_CALL ZFS_MODULE_PARAM_CALL
 
-#define param_set_arc_long_args(var) \
-	CTLTYPE_ULONG, &var, 0, param_set_arc_long, "LU"
+#define param_set_arc_u64_args(var) \
+	CTLTYPE_U64, &var, 0, param_set_arc_u64, "QU"
 
 #define param_set_arc_int_args(var) \
 	CTLTYPE_INT, &var, 0, param_set_arc_int, "I"
 
 #define param_set_arc_min_args(var) \
-	CTLTYPE_ULONG, NULL, 0, param_set_arc_min, "LU"
+	CTLTYPE_U64, NULL, 0, param_set_arc_min, "QU"
 
 #define param_set_arc_max_args(var) \
-	CTLTYPE_ULONG, NULL, 0, param_set_arc_max, "LU"
+	CTLTYPE_U64, NULL, 0, param_set_arc_max, "QU"
 
 #define param_set_arc_free_target_args(var) \
 	CTLTYPE_UINT, NULL, 0, param_set_arc_free_target, "IU"
@@ -74,22 +74,22 @@
 	CTLTYPE_STRING, NULL, 0, param_set_deadman_failmode, "A"
 
 #define param_set_deadman_synctime_args(var) \
-	CTLTYPE_ULONG, NULL, 0, param_set_deadman_synctime, "LU"
+	CTLTYPE_U64, NULL, 0, param_set_deadman_synctime, "QU"
 
 #define param_set_deadman_ziotime_args(var) \
-	CTLTYPE_ULONG, NULL, 0, param_set_deadman_ziotime, "LU"
+	CTLTYPE_U64, NULL, 0, param_set_deadman_ziotime, "QU"
 
 #define param_set_multihost_interval_args(var) \
-	CTLTYPE_ULONG, NULL, 0, param_set_multihost_interval, "LU"
+	CTLTYPE_U64, NULL, 0, param_set_multihost_interval, "QU"
 
 #define param_set_slop_shift_args(var) \
 	CTLTYPE_INT, NULL, 0, param_set_slop_shift, "I"
 
 #define param_set_min_auto_ashift_args(var) \
-	CTLTYPE_U64, NULL, 0, param_set_min_auto_ashift, "QU"
+	CTLTYPE_UINT, NULL, 0, param_set_min_auto_ashift, "IU"
 
 #define param_set_max_auto_ashift_args(var) \
-	CTLTYPE_U64, NULL, 0, param_set_max_auto_ashift, "QU"
+	CTLTYPE_UINT, NULL, 0, param_set_max_auto_ashift, "IU"
 
 #define fletcher_4_param_set_args(var) \
 	CTLTYPE_STRING, NULL, 0, fletcher_4_param, "A"
@@ -44,14 +44,6 @@ typedef const struct kernel_param zfs_kernel_param_t;
 #define ZMOD_RW 0644
 #define ZMOD_RD 0444
 
-#define INT int
-#define LONG long
-/* BEGIN CSTYLED */
-#define UINT uint
-#define ULONG ulong
-/* END CSTYLED */
-#define STRING charp
-
 enum scope_prefix_types {
 	zfs,
 	zfs_arc,
@@ -84,6 +76,50 @@ enum scope_prefix_types {
 	zfs_zil
 };
 
+/*
+ * While we define our own s64/u64 types, there is no reason to reimplement the
+ * existing Linux kernel types, so we use the preprocessor to remap our
+ * "custom" implementations to the kernel ones. This is done because the CPP
+ * does not allow us to write conditional definitions. The fourth definition
+ * exists because the CPP will not allow us to replace things like INT with int
+ * before string concatenation.
+ */
+
+#define spl_param_set_int param_set_int
+#define spl_param_get_int param_get_int
+#define spl_param_ops_int param_ops_int
+#define spl_param_ops_INT param_ops_int
+
+#define spl_param_set_long param_set_long
+#define spl_param_get_long param_get_long
+#define spl_param_ops_long param_ops_long
+#define spl_param_ops_LONG param_ops_long
+
+#define spl_param_set_uint param_set_uint
+#define spl_param_get_uint param_get_uint
+#define spl_param_ops_uint param_ops_uint
+#define spl_param_ops_UINT param_ops_uint
+
+#define spl_param_set_ulong param_set_ulong
+#define spl_param_get_ulong param_get_ulong
+#define spl_param_ops_ulong param_ops_ulong
+#define spl_param_ops_ULONG param_ops_ulong
+
+#define spl_param_set_charp param_set_charp
+#define spl_param_get_charp param_get_charp
+#define spl_param_ops_charp param_ops_charp
+#define spl_param_ops_STRING param_ops_charp
+
+int spl_param_set_s64(const char *val, zfs_kernel_param_t *kp);
+extern int spl_param_get_s64(char *buffer, zfs_kernel_param_t *kp);
+extern const struct kernel_param_ops spl_param_ops_s64;
+#define spl_param_ops_S64 spl_param_ops_s64
+
+extern int spl_param_set_u64(const char *val, zfs_kernel_param_t *kp);
+extern int spl_param_get_u64(char *buffer, zfs_kernel_param_t *kp);
+extern const struct kernel_param_ops spl_param_ops_u64;
+#define spl_param_ops_U64 spl_param_ops_u64
+
 /*
  * Declare a module parameter / sysctl node
  *
@@ -116,7 +152,8 @@ enum scope_prefix_types {
 	_Static_assert( \
 	    sizeof (scope_prefix) == sizeof (enum scope_prefix_types), \
 	    "" #scope_prefix " size mismatch with enum scope_prefix_types"); \
-	module_param(name_prefix ## name, type, perm); \
+	module_param_cb(name_prefix ## name, &spl_param_ops_ ## type, \
+	    &name_prefix ## name, perm); \
 	MODULE_PARM_DESC(name_prefix ## name, desc)
 
 /*
@@ -985,8 +985,8 @@ extern arc_state_t ARC_mfu;
 extern arc_state_t ARC_mru;
 extern uint_t zfs_arc_pc_percent;
 extern uint_t arc_lotsfree_percent;
-extern unsigned long zfs_arc_min;
-extern unsigned long zfs_arc_max;
+extern uint64_t zfs_arc_min;
+extern uint64_t zfs_arc_max;
 
 extern void arc_reduce_target_size(int64_t to_free);
 extern boolean_t arc_reclaim_needed(void);
@@ -1003,7 +1003,7 @@ extern void arc_tuning_update(boolean_t);
 extern void arc_register_hotplug(void);
 extern void arc_unregister_hotplug(void);
 
-extern int param_set_arc_long(ZFS_MODULE_PARAM_ARGS);
+extern int param_set_arc_u64(ZFS_MODULE_PARAM_ARGS);
 extern int param_set_arc_int(ZFS_MODULE_PARAM_ARGS);
 extern int param_set_arc_min(ZFS_MODULE_PARAM_ARGS);
 extern int param_set_arc_max(ZFS_MODULE_PARAM_ARGS);
@@ -36,7 +36,7 @@
 extern "C" {
 #endif
 
-extern unsigned long zfetch_array_rd_sz;
+extern uint64_t zfetch_array_rd_sz;
 
 struct dnode;	/* so we can reference dnode */
 
@@ -84,7 +84,7 @@ typedef struct livelist_condense_entry {
 	boolean_t cancelled;
 } livelist_condense_entry_t;
 
-extern unsigned long zfs_livelist_max_entries;
+extern uint64_t zfs_livelist_max_entries;
 extern int zfs_livelist_min_percent_shared;
 
 typedef int deadlist_iter_t(void *args, dsl_deadlist_entry_t *dle);
@@ -57,13 +57,13 @@ struct dsl_scan;
 struct dsl_crypto_params;
 struct dsl_deadlist;
 
-extern unsigned long zfs_dirty_data_max;
-extern unsigned long zfs_dirty_data_max_max;
-extern unsigned long zfs_wrlog_data_max;
+extern uint64_t zfs_dirty_data_max;
+extern uint64_t zfs_dirty_data_max_max;
+extern uint64_t zfs_wrlog_data_max;
 extern uint_t zfs_dirty_data_max_percent;
 extern uint_t zfs_dirty_data_max_max_percent;
 extern uint_t zfs_delay_min_dirty_percent;
-extern unsigned long zfs_delay_scale;
+extern uint64_t zfs_delay_scale;
 
 /* These macros are for indexing into the zfs_all_blkstats_t. */
 #define DMU_OT_DEFERRED DMU_OT_NONE
@@ -64,7 +64,7 @@ extern void mmp_signal_all_threads(void);
 
 /* Global tuning */
 extern int param_set_multihost_interval(ZFS_MODULE_PARAM_ARGS);
-extern ulong_t zfs_multihost_interval;
+extern uint64_t zfs_multihost_interval;
 extern uint_t zfs_multihost_fail_intervals;
 extern uint_t zfs_multihost_import_intervals;
 
@@ -1218,9 +1218,9 @@ int param_set_deadman_failmode(ZFS_MODULE_PARAM_ARGS);
 
 extern spa_mode_t spa_mode_global;
 extern int zfs_deadman_enabled;
-extern unsigned long zfs_deadman_synctime_ms;
-extern unsigned long zfs_deadman_ziotime_ms;
-extern unsigned long zfs_deadman_checktime_ms;
+extern uint64_t zfs_deadman_synctime_ms;
+extern uint64_t zfs_deadman_ziotime_ms;
+extern uint64_t zfs_deadman_checktime_ms;
 
 extern kmem_cache_t *zio_buf_cache[];
 extern kmem_cache_t *zio_data_buf_cache[];
@@ -649,8 +649,8 @@ uint64_t vdev_best_ashift(uint64_t logical, uint64_t a, uint64_t b);
 /*
  * Vdev ashift optimization tunables
  */
-extern uint64_t zfs_vdev_min_auto_ashift;
-extern uint64_t zfs_vdev_max_auto_ashift;
+extern uint_t zfs_vdev_min_auto_ashift;
+extern uint_t zfs_vdev_max_auto_ashift;
 int param_set_min_auto_ashift(ZFS_MODULE_PARAM_ARGS);
 int param_set_max_auto_ashift(ZFS_MODULE_PARAM_ARGS);
 
@@ -33,8 +33,8 @@ extern "C" {
 
 #define ZCP_RUN_INFO_KEY "runinfo"
 
-extern unsigned long zfs_lua_max_instrlimit;
-extern unsigned long zfs_lua_max_memlimit;
+extern uint64_t zfs_lua_max_instrlimit;
+extern uint64_t zfs_lua_max_memlimit;
 
 int zcp_argerror(lua_State *, int, const char *, ...);
 
@@ -24,7 +24,7 @@
 #define _ZFS_IOCTL_IMPL_H_
 
 extern kmutex_t zfsdev_state_lock;
-extern unsigned long zfs_max_nvlist_src_size;
+extern uint64_t zfs_max_nvlist_src_size;
 
 typedef int zfs_ioc_legacy_func_t(zfs_cmd_t *);
 typedef int zfs_ioc_func_t(const char *, nvlist_t *, nvlist_t *);
man/man4/zfs.4 (117 changed lines)
@@ -26,7 +26,7 @@
 .Sh DESCRIPTION
 The ZFS module supports these parameters:
 .Bl -tag -width Ds
-.It Sy dbuf_cache_max_bytes Ns = Ns Sy ULONG_MAX Ns B Pq ulong
+.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
 Maximum size in bytes of the dbuf cache.
 The target size is determined by the MIN versus
 .No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd
@@ -36,7 +36,7 @@ can be observed via the
 .Pa /proc/spl/kstat/zfs/dbufstats
 kstat.
 .
-.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy ULONG_MAX Ns B Pq ulong
+.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
 Maximum size in bytes of the metadata dbuf cache.
 The target size is determined by the MIN versus
 .No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th
@@ -88,16 +88,16 @@ Alias for
 Turbo L2ARC warm-up.
 When the L2ARC is cold the fill interval will be set as fast as possible.
 .
-.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq ulong
+.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
 Min feed interval in milliseconds.
 Requires
 .Sy l2arc_feed_again Ns = Ns Ar 1
 and only applicable in related situations.
 .
-.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq ulong
+.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
 Seconds between L2ARC writing.
 .
-.It Sy l2arc_headroom Ns = Ns Sy 2 Pq ulong
+.It Sy l2arc_headroom Ns = Ns Sy 2 Pq u64
 How far through the ARC lists to search for L2ARC cacheable content,
 expressed as a multiplier of
 .Sy l2arc_write_max .
@@ -106,7 +106,7 @@ by setting this parameter to
 .Sy 0 ,
 allowing the full length of ARC lists to be searched for cacheable content.
 .
-.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq ulong
+.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
 Scales
 .Sy l2arc_headroom
 by this percentage when L2ARC contents are being successfully compressed
@@ -162,7 +162,7 @@ too many headers on a system with an irrationally large L2ARC
 can render it slow or unusable.
 This parameter limits L2ARC writes and rebuilds to achieve the target.
 .
-.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq ulong
+.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
 Trims ahead of the current write size
 .Pq Sy l2arc_write_max
 on L2ARC devices by this percentage of write size if we have filled the device.
@@ -200,12 +200,12 @@ to enable caching/reading prefetches to/from L2ARC.
 .It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
 No reads during writes.
 .
-.It Sy l2arc_write_boost Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq ulong
+.It Sy l2arc_write_boost Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
 Cold L2ARC devices will have
 .Sy l2arc_write_max
 increased by this amount while they remain cold.
 .
-.It Sy l2arc_write_max Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq ulong
+.It Sy l2arc_write_max Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
 Max write bytes per interval.
 .
 .It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
@@ -215,7 +215,7 @@ or attaching an L2ARC device (e.g. the L2ARC device is slow
 in reading stored log metadata, or the metadata
 has become somehow fragmented/unusable).
 .
-.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq ulong
+.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
 Mininum size of an L2ARC device required in order to write log blocks in it.
 The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
 .Pp
@@ -224,7 +224,7 @@ For L2ARC devices less than 1 GiB, the amount of data
 evicts is significant compared to the amount of restored L2ARC data.
 In this case, do not write log blocks in L2ARC in order not to waste space.
 .
-.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq ulong
+.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
 Metaslab granularity, in bytes.
 This is roughly similar to what would be referred to as the "stripe size"
 in traditional RAID arrays.
@@ -235,11 +235,11 @@ before moving on to the next top-level vdev.
 Enable metaslab group biasing based on their vdevs' over- or under-utilization
 relative to the pool.
 .
-.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq ulong
+.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
 Make some blocks above a certain size be gang blocks.
 This option is used by the test suite to facilitate testing.
 .
-.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
+.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
 When attempting to log an output nvlist of an ioctl in the on-disk history,
 the output will not be stored if it is larger than this size (in bytes).
 This must be less than
@@ -299,7 +299,7 @@ this tunable controls which segment is used.
 If set, we will use the largest free segment.
 If unset, we will use a segment of at least the requested size.
 .
-.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq ulong
+.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
 When we unload a metaslab, we cache the size of the largest free chunk.
 We use that cached size to determine whether or not to load a metaslab
 for a given allocation.
@@ -353,14 +353,14 @@ When a vdev is added, target this number of metaslabs per top-level vdev.
 .It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
 Default limit for metaslab size.
 .
-.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq ulong
+.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
 Maximum ashift used when optimizing for logical \[->] physical sector size on new
 top-level vdevs.
 May be increased up to
 .Sy ASHIFT_MAX Po 16 Pc ,
 but this may negatively impact pool space efficiency.
 .
-.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq ulong
+.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
 Minimum ashift used when creating new top-level vdevs.
 .
 .It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
@@ -481,10 +481,10 @@ The default value here was chosen to align with
 which is a similar concept when doing
 regular reads (but there's no reason it has to be the same).
 .
-.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq ulong
+.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
 Logical ashift for file-based devices.
 .
-.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq ulong
+.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
 Physical ashift for file-based devices.
 .
 .It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
@@ -493,7 +493,7 @@ prefetch the entire object (all leaf blocks).
 However, this is limited by
 .Sy dmu_prefetch_max .
 .
-.It Sy zfetch_array_rd_sz Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq ulong
+.It Sy zfetch_array_rd_sz Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
 If prefetching is enabled, disable prefetching for reads larger than this size.
 .
 .It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
@@ -537,7 +537,7 @@ depends on kernel configuration.
 This is the minimum allocation size that will use scatter (page-based) ABDs.
 Smaller allocations will use linear ABDs.
 .
-.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq ulong
+.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
 When the number of bytes consumed by dnodes in the ARC exceeds this number of
 bytes, try to unpin some of it in response to demand for non-metadata.
 This value acts as a ceiling to the amount of dnode metadata, and defaults to
@@ -553,14 +553,14 @@ when the amount of metadata in the ARC exceeds
 .Sy zfs_arc_meta_limit
 rather than in response to overall demand for non-metadata.
 .
-.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq ulong
+.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
 Percentage that can be consumed by dnodes of ARC meta buffers.
 .Pp
 See also
 .Sy zfs_arc_dnode_limit ,
 which serves a similar purpose but has a higher priority if nonzero.
 .
-.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq ulong
+.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
 Percentage of ARC dnodes to try to scan in response to demand for non-metadata
 when the number of bytes consumed by dnodes exceeds
 .Sy zfs_arc_dnode_limit .
@@ -613,7 +613,7 @@ Setting this value to
 .Sy 0
 will disable the throttle.
 .
-.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq ulong
+.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
 Max size of ARC in bytes.
 If
 .Sy 0 ,
@@ -642,7 +642,7 @@ the free buffers in order to stay below the
 This value should not need to be tuned but is available to facilitate
 performance analysis.
 .
-.It Sy zfs_arc_meta_limit Ns = Ns Sy 0 Ns B Pq ulong
+.It Sy zfs_arc_meta_limit Ns = Ns Sy 0 Ns B Pq u64
 The maximum allowed size in bytes that metadata buffers are allowed to
 consume in the ARC.
 When this limit is reached, metadata buffers will be reclaimed,
@@ -658,14 +658,14 @@ of the ARC may be used for metadata.
 This value my be changed dynamically, except that must be set to an explicit value
 .Pq cannot be set back to Sy 0 .
 .
-.It Sy zfs_arc_meta_limit_percent Ns = Ns Sy 75 Ns % Pq ulong
+.It Sy zfs_arc_meta_limit_percent Ns = Ns Sy 75 Ns % Pq u64
 Percentage of ARC buffers that can be used for metadata.
 .Pp
 See also
 .Sy zfs_arc_meta_limit ,
 which serves a similar purpose but has a higher priority if nonzero.
 .
-.It Sy zfs_arc_meta_min Ns = Ns Sy 0 Ns B Pq ulong
+.It Sy zfs_arc_meta_min Ns = Ns Sy 0 Ns B Pq u64
 The minimum allowed size in bytes that metadata buffers may consume in
 the ARC.
 .
@@ -691,7 +691,7 @@ additional data buffers may be evicted if required
 to evict the required number of metadata buffers.
 .El
 .
-.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq ulong
+.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
 Min size of ARC in bytes.
 .No If set to Sy 0 , arc_c_min
 will default to consuming the larger of
@@ -718,7 +718,7 @@ but that was not proven to be useful.
 Number of missing top-level vdevs which will be allowed during
 pool import (only in read-only mode).
 .
-.It Sy zfs_max_nvlist_src_size Ns = Sy 0 Pq ulong
+.It Sy zfs_max_nvlist_src_size Ns = Sy 0 Pq u64
 Maximum size in bytes allowed to be passed as
 .Sy zc_nvlist_src_size
 for ioctls on
@@ -822,7 +822,7 @@ even with a small average compressed block size of ~8 KiB.
 The parameter can be set to 0 (zero) to disable the limit,
 and only applies on Linux.
 .
-.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq ulong
+.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
 The target number of bytes the ARC should leave as free memory on the system.
 If zero, equivalent to the bigger of
 .Sy 512 KiB No and Sy all_system_memory/64 .
@@ -866,12 +866,12 @@ bytes of memory and if the obsolete space map object uses more than
 bytes on-disk.
 The condensing process is an attempt to save memory by removing obsolete mappings.
 .
-.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq ulong
+.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
 Only attempt to condense indirect vdev mappings if the on-disk size
 of the obsolete space map object is greater than this number of bytes
 .Pq see Sy zfs_condense_indirect_vdevs_enable .
 .
-.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq ulong
+.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
 Minimum size vdev mapping to attempt to condense
 .Pq see Sy zfs_condense_indirect_vdevs_enable .
 .
@@ -927,21 +927,21 @@ This can be used to facilitate automatic fail-over
 to a properly configured fail-over partner.
 .El
 .
-.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq int
+.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
 Check time in milliseconds.
 This defines the frequency at which we check for hung I/O requests
 and potentially invoke the
 .Sy zfs_deadman_failmode
 behavior.
 .
-.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq ulong
+.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
 Interval in milliseconds after which the deadman is triggered and also
 the interval after which a pool sync operation is considered to be "hung".
 Once this limit is exceeded the deadman will be invoked every
 .Sy zfs_deadman_checktime_ms
 milliseconds until the pool sync completes.
 .
-.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq ulong
+.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
 Interval in milliseconds after which the deadman is triggered and an
 individual I/O operation is considered to be "hung".
 As long as the operation remains "hung",
@@ -994,15 +994,15 @@ same object.
 Rate limit delay and deadman zevents (which report slow I/O operations) to this many per
 second.
 .
-.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq ulong
+.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
 Upper-bound limit for unflushed metadata changes to be held by the
 log spacemap in memory, in bytes.
 .
-.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq ulong
+.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
 Part of overall system memory that ZFS allows to be used
 for unflushed metadata changes by the log spacemap, in millionths.
 .
-.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq ulong
+.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
 Describes the maximum number of log spacemap blocks allowed for each pool.
 The default value means that the space in all the log spacemaps
 can add up to no more than
@@ -1030,17 +1030,17 @@ one extra logical I/O issued.
 This is the reason why this tunable is exposed in terms of blocks rather
 than space used.
 .
-.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq ulong
+.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
 If the number of metaslabs is small and our incoming rate is high,
 we could get into a situation that we are flushing all our metaslabs every TXG.
 Thus we always allow at least this many log blocks.
 .
-.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq ulong
+.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
 Tunable used to determine the number of blocks that can be used for
 the spacemap log, expressed as a percentage of the total number of
 unflushed metaslabs in the pool.
 .
-.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq ulong
+.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
 Tunable limiting maximum time in TXGs any metaslab may remain unflushed.
 It effectively limits maximum number of unflushed per-TXG spacemap logs
 that need to be read after unclean pool export.
@@ -1060,6 +1060,7 @@ will be deleted asynchronously, while smaller files are deleted synchronously.
 Decreasing this value will reduce the time spent in an
 .Xr unlink 2
 system call, at the expense of a longer delay before the freed space is available.
+This only applies on Linux.
 .
 .It Sy zfs_dirty_data_max Ns = Pq int
 Determines the dirty space limit in bytes.
@@ -1185,10 +1186,10 @@ benchmark results by reading this kstat file:
 .It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
 Enable/disable the processing of the free_bpobj object.
 .
-.It Sy zfs_async_block_max_blocks Ns = Ns Sy ULONG_MAX Po unlimited Pc Pq ulong
+.It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64
 Maximum number of blocks freed in a single TXG.
 .
-.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq ulong
+.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64
 Maximum number of dedup blocks freed in a single TXG.
 .
 .It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint
@@ -1444,22 +1445,22 @@ Similar to
 .Sy zfs_free_min_time_ms ,
 but for cleanup of old indirection records for removed vdevs.
 .
-.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq long
+.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64
 Largest data block to write to the ZIL.
 Larger blocks will be treated as if the dataset being written to had the
 .Sy logbias Ns = Ns Sy throughput
 property set.
 .
-.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq ulong
+.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64
 Pattern written to vdev free space by
 .Xr zpool-initialize 8 .
 .
-.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq ulong
+.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
 Size of writes used by
 .Xr zpool-initialize 8 .
 This option is used by the test suite.
 .
-.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq ulong
+.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64
 The threshold size (in block pointers) at which we create a new sub-livelist.
 Larger sublists are more costly from a memory perspective but the fewer
 sublists there are, the lower the cost of insertion.
@@ -1498,11 +1499,11 @@ executing the open context condensing work in
 .Fn spa_livelist_condense_cb .
 This option is used by the test suite to trigger race conditions.
 .
-.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq ulong
+.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64
 The maximum execution time limit that can be set for a ZFS channel program,
 specified as a number of Lua instructions.
 .
-.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq ulong
+.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64
 The maximum memory limit that can be set for a ZFS channel program, specified
 in bytes.
 .
@@ -1511,11 +1512,11 @@ The maximum depth of nested datasets.
 This value can be tuned temporarily to
 fix existing datasets that exceed the predefined limit.
 .
-.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq ulong
+.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64
 The number of past TXGs that the flushing algorithm of the log spacemap
 feature uses to estimate incoming log blocks.
 .
-.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq ulong
+.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64
 Maximum number of rows allowed in the summary of the spacemap log.
 .
 .It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint
@@ -1534,7 +1535,7 @@ regardless of this setting.
 Allow datasets received with redacted send/receive to be mounted.
 Normally disabled because these datasets may be missing key data.
 .
-.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq ulong
+.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64
 Minimum number of metaslabs to flush per dirty TXG.
 .
 .It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 70 Ns % Pq uint
@@ -1584,7 +1585,7 @@ into the special allocation class.
 Historical statistics for this many latest multihost updates will be available in
 .Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost .
 .
-.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq ulong
+.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64
 Used to control the frequency of multihost writes which are performed when the
 .Sy multihost
 pool property is on.
@@ -1677,7 +1678,7 @@ prefetched during a pool traversal, like
 .Nm zfs Cm send
 or other data crawling operations.
 .
-.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq ulong
+.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64
 Control percentage of dirtied indirect blocks from frees allowed into one TXG.
 After this threshold is crossed, additional frees will wait until the next TXG.
 .Sy 0 No disables this throttle.
@@ -1705,7 +1706,7 @@ Disable QAT hardware acceleration for AES-GCM encryption.
 May be unset after the ZFS modules have been loaded to initialize the QAT
 hardware as long as support is compiled in and the QAT driver is present.
 .
-.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq long
+.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
 Bytes to read per chunk.
 .
 .It Sy zfs_read_history Ns = Ns Sy 0 Pq uint
@@ -1715,7 +1716,7 @@ Historical statistics for this many latest reads will be available in
 .It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int
 Include cache hits in read history
 .
-.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq ulong
+.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
 Maximum read segment size to issue when sequentially resilvering a
 top-level vdev.
 .
@@ -1725,7 +1726,7 @@ completes in order to verify the checksums of all blocks which have been
 resilvered.
 This is enabled by default and strongly recommended.
 .
-.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq ulong
+.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64
 Maximum amount of I/O that can be concurrently issued for a sequential
 resilver per leaf device, given in bytes.
 .
@@ -2166,7 +2167,7 @@ if a volatile out-of-order write cache is enabled.
 Disable intent logging replay.
 Can be disabled for recovery from corrupted ZIL.
 .
-.It Sy zil_slog_bulk Ns = Ns Sy 786432 Ns B Po 768 KiB Pc Pq ulong
+.It Sy zil_slog_bulk Ns = Ns Sy 786432 Ns B Po 768 KiB Pc Pq u64
 Limit SLOG write size per commit executed with synchronous priority.
 Any writes above that will be executed with lower (asynchronous) priority
 to limit potential SLOG device abuse by single active ZIL writer.
@@ -2276,7 +2277,7 @@ systems with a very large number of zvols.
 .It Sy zvol_major Ns = Ns Sy 230 Pq uint
 Major number for zvol block devices.
 .
-.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq ulong
+.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long
 Discard (TRIM) operations done on zvols will be done in batches of this
 many blocks, where block size is determined by the
 .Sy volblocksize
@@ -137,11 +137,11 @@ SYSCTL_CONST_STRING(_vfs_zfs_version, OID_AUTO, module, CTLFLAG_RD,
 /* arc.c */

 int
-param_set_arc_long(SYSCTL_HANDLER_ARGS)
+param_set_arc_u64(SYSCTL_HANDLER_ARGS)
 {
 	int err;

-	err = sysctl_handle_long(oidp, arg1, 0, req);
+	err = sysctl_handle_64(oidp, arg1, 0, req);
 	if (err != 0 || req->newptr == NULL)
 		return (err);

@@ -171,7 +171,7 @@ param_set_arc_max(SYSCTL_HANDLER_ARGS)
 	int err;

 	val = zfs_arc_max;
-	err = sysctl_handle_long(oidp, &val, 0, req);
+	err = sysctl_handle_64(oidp, &val, 0, req);
 	if (err != 0 || req->newptr == NULL)
 		return (SET_ERROR(err));

@@ -203,7 +203,7 @@ param_set_arc_min(SYSCTL_HANDLER_ARGS)
 	int err;

 	val = zfs_arc_min;
-	err = sysctl_handle_long(oidp, &val, 0, req);
+	err = sysctl_handle_64(oidp, &val, 0, req);
 	if (err != 0 || req->newptr == NULL)
 		return (SET_ERROR(err));

@@ -599,7 +599,7 @@ param_set_multihost_interval(SYSCTL_HANDLER_ARGS)
 {
 	int err;

-	err = sysctl_handle_long(oidp, &zfs_multihost_interval, 0, req);
+	err = sysctl_handle_64(oidp, &zfs_multihost_interval, 0, req);
 	if (err != 0 || req->newptr == NULL)
 		return (err);

@@ -676,7 +676,7 @@ param_set_deadman_synctime(SYSCTL_HANDLER_ARGS)
 	int err;

 	val = zfs_deadman_synctime_ms;
-	err = sysctl_handle_long(oidp, &val, 0, req);
+	err = sysctl_handle_64(oidp, &val, 0, req);
 	if (err != 0 || req->newptr == NULL)
 		return (err);
 	zfs_deadman_synctime_ms = val;
@@ -693,7 +693,7 @@ param_set_deadman_ziotime(SYSCTL_HANDLER_ARGS)
 	int err;

 	val = zfs_deadman_ziotime_ms;
-	err = sysctl_handle_long(oidp, &val, 0, req);
+	err = sysctl_handle_64(oidp, &val, 0, req);
 	if (err != 0 || req->newptr == NULL)
 		return (err);
 	zfs_deadman_ziotime_ms = val;
@@ -761,11 +761,11 @@ SYSCTL_INT(_vfs_zfs, OID_AUTO, space_map_ibs, CTLFLAG_RWTUN,
 int
 param_set_min_auto_ashift(SYSCTL_HANDLER_ARGS)
 {
-	uint64_t val;
+	int val;
 	int err;

 	val = zfs_vdev_min_auto_ashift;
-	err = sysctl_handle_64(oidp, &val, 0, req);
+	err = sysctl_handle_int(oidp, &val, 0, req);
 	if (err != 0 || req->newptr == NULL)
 		return (SET_ERROR(err));

@@ -779,20 +779,20 @@ param_set_min_auto_ashift(SYSCTL_HANDLER_ARGS)

 /* BEGIN CSTYLED */
 SYSCTL_PROC(_vfs_zfs, OID_AUTO, min_auto_ashift,
-    CTLTYPE_U64 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE,
+    CTLTYPE_UINT | CTLFLAG_RWTUN | CTLFLAG_MPSAFE,
     &zfs_vdev_min_auto_ashift, sizeof (zfs_vdev_min_auto_ashift),
-    param_set_min_auto_ashift, "QU",
+    param_set_min_auto_ashift, "IU",
     "Min ashift used when creating new top-level vdev. (LEGACY)");
 /* END CSTYLED */

 int
 param_set_max_auto_ashift(SYSCTL_HANDLER_ARGS)
 {
-	uint64_t val;
+	int val;
 	int err;

 	val = zfs_vdev_max_auto_ashift;
-	err = sysctl_handle_64(oidp, &val, 0, req);
+	err = sysctl_handle_int(oidp, &val, 0, req);
 	if (err != 0 || req->newptr == NULL)
 		return (SET_ERROR(err));

@@ -806,9 +806,9 @@ param_set_max_auto_ashift(SYSCTL_HANDLER_ARGS)

 /* BEGIN CSTYLED */
 SYSCTL_PROC(_vfs_zfs, OID_AUTO, max_auto_ashift,
-    CTLTYPE_U64 | CTLFLAG_RWTUN | CTLFLAG_MPSAFE,
+    CTLTYPE_UINT | CTLFLAG_RWTUN | CTLFLAG_MPSAFE,
     &zfs_vdev_max_auto_ashift, sizeof (zfs_vdev_max_auto_ashift),
-    param_set_max_auto_ashift, "QU",
+    param_set_max_auto_ashift, "IU",
     "Max ashift used when optimizing for logical -> physical sector size on"
     " new top-level vdevs. (LEGACY)");
 /* END CSTYLED */

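A note on the sysctl changes above: switching from `sysctl_handle_long()` to `sysctl_handle_64()` (and, for the ashift tunables, to `sysctl_handle_int()` with a matching narrower backing variable) keeps the handler width and the variable width in agreement. The big-endian hazard described in the commit message comes from mismatching them; a minimal sketch of the failure mode, with hypothetical names:

    #include <stdint.h>

    /*
     * Hypothetical illustration of the width-mismatch bug: a 64-bit
     * variable updated through a pointer cast to a 32-bit type. On
     * little-endian machines the store aliases the low word and appears
     * to work; on big-endian 32-bit machines it lands in the HIGH word,
     * so the tunable reads back as the requested value shifted up 32 bits.
     */
    static uint64_t example_tunable;            /* assumed 64-bit tunable */

    static void
    store_mismatched(unsigned long val)         /* 32 bits on ILP32 */
    {
        *(unsigned long *)&example_tunable = val;   /* buggy on 32-bit BE */
    }

    static void
    store_correct(uint64_t val)
    {
        example_tunable = val;                  /* width-correct */
    }

This is why 32-bit FreeBSD was unaffected only where its sysctl code already treated the variable as a full `uint64_t`.
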
@@ -40,8 +40,8 @@

 static taskq_t *vdev_file_taskq;

-static unsigned long vdev_file_logical_ashift = SPA_MINBLOCKSHIFT;
-static unsigned long vdev_file_physical_ashift = SPA_MINBLOCKSHIFT;
+static uint_t vdev_file_logical_ashift = SPA_MINBLOCKSHIFT;
+static uint_t vdev_file_physical_ashift = SPA_MINBLOCKSHIFT;

 void
 vdev_file_init(void)
@@ -350,7 +350,7 @@ vdev_ops_t vdev_disk_ops = {

 #endif

-ZFS_MODULE_PARAM(zfs_vdev_file, vdev_file_, logical_ashift, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_vdev_file, vdev_file_, logical_ashift, UINT, ZMOD_RW,
 	"Logical ashift for file-based devices");
-ZFS_MODULE_PARAM(zfs_vdev_file, vdev_file_, physical_ashift, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_vdev_file, vdev_file_, physical_ashift, UINT, ZMOD_RW,
 	"Physical ashift for file-based devices");

@@ -48,6 +48,7 @@
 #include <sys/cred.h>
 #include <sys/vnode.h>
 #include <sys/misc.h>
+#include <linux/mod_compat.h>

 unsigned long spl_hostid = 0;
 EXPORT_SYMBOL(spl_hostid);
@@ -518,6 +519,29 @@ ddi_copyin(const void *from, void *to, size_t len, int flags)
 }
 EXPORT_SYMBOL(ddi_copyin);

+#define	define_spl_param(type, fmt)					\
+int									\
+spl_param_get_##type(char *buf, zfs_kernel_param_t *kp)		\
+{									\
+	return (scnprintf(buf, PAGE_SIZE, fmt "\n",			\
+	    *(type *)kp->arg));						\
+}									\
+int									\
+spl_param_set_##type(const char *buf, zfs_kernel_param_t *kp)		\
+{									\
+	return (kstrto##type(buf, 0, (type *)kp->arg));			\
+}									\
+const struct kernel_param_ops spl_param_ops_##type = {			\
+	.set = spl_param_set_##type,					\
+	.get = spl_param_get_##type,					\
+};									\
+EXPORT_SYMBOL(spl_param_get_##type);					\
+EXPORT_SYMBOL(spl_param_set_##type);					\
+EXPORT_SYMBOL(spl_param_ops_##type);
+
+define_spl_param(s64, "%lld")
+define_spl_param(u64, "%llu")
+
 /*
  * Post a uevent to userspace whenever a new vdev adds to the pool. It is
  * necessary to sync blkid information with udev, which zed daemon uses

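The macro above is easiest to read once expanded. Instantiating `define_spl_param(u64, "%llu")` generates, in effect, the following (expansion written out by hand; `kstrtou64()` is the kernel's string-to-u64 parser):

    int
    spl_param_get_u64(char *buf, zfs_kernel_param_t *kp)
    {
        return (scnprintf(buf, PAGE_SIZE, "%llu\n", *(u64 *)kp->arg));
    }

    int
    spl_param_set_u64(const char *buf, zfs_kernel_param_t *kp)
    {
        return (kstrtou64(buf, 0, (u64 *)kp->arg));
    }

    const struct kernel_param_ops spl_param_ops_u64 = {
        .set = spl_param_set_u64,
        .get = spl_param_get_u64,
    };

These `kernel_param_ops` tables are the kernel's standard extension point for custom parameter types (the same mechanism `module_param_cb()` consumes), which is what makes 64-bit module parameters possible even on 32-bit Linux.
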
@@ -358,11 +358,11 @@ arc_lowmem_fini(void)
 }

 int
-param_set_arc_long(const char *buf, zfs_kernel_param_t *kp)
+param_set_arc_u64(const char *buf, zfs_kernel_param_t *kp)
 {
 	int error;

-	error = param_set_long(buf, kp);
+	error = spl_param_set_u64(buf, kp);
 	if (error < 0)
 		return (SET_ERROR(error));

@@ -374,13 +374,13 @@ param_set_arc_long(const char *buf, zfs_kernel_param_t *kp)
 int
 param_set_arc_min(const char *buf, zfs_kernel_param_t *kp)
 {
-	return (param_set_arc_long(buf, kp));
+	return (param_set_arc_u64(buf, kp));
 }

 int
 param_set_arc_max(const char *buf, zfs_kernel_param_t *kp)
 {
-	return (param_set_arc_long(buf, kp));
+	return (param_set_arc_u64(buf, kp));
 }

 int

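One pattern worth calling out in the ARC setter above: `param_set_arc_u64()` delegates the store to `spl_param_set_u64()` and then, in lines elided from this hunk, presumably performs ARC-specific recalculation so limits derived from the tunable are refreshed immediately after a write. A sketch of that set-then-recompute shape, with a hypothetical post-write hook standing in for the elided work:

    /*
     * Sketch of the set-then-recompute pattern used by the ARC setters.
     * spl_param_set_u64() stores the parsed value; example_recalculate()
     * is a hypothetical stand-in for the tunable-specific follow-up.
     */
    static int
    param_set_example_u64(const char *buf, zfs_kernel_param_t *kp)
    {
        int error;

        error = spl_param_set_u64(buf, kp);
        if (error < 0)
            return (SET_ERROR(error));

        example_recalculate();  /* hypothetical post-write hook */

        return (0);
    }
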
@@ -30,7 +30,7 @@ param_set_multihost_interval(const char *val, zfs_kernel_param_t *kp)
 {
 	int ret;

-	ret = param_set_ulong(val, kp);
+	ret = spl_param_set_u64(val, kp);
 	if (ret < 0)
 		return (ret);

@@ -60,7 +60,7 @@ param_set_deadman_ziotime(const char *val, zfs_kernel_param_t *kp)
 {
 	int error;

-	error = param_set_ulong(val, kp);
+	error = spl_param_set_u64(val, kp);
 	if (error < 0)
 		return (SET_ERROR(error));

@@ -74,7 +74,7 @@ param_set_deadman_synctime(const char *val, zfs_kernel_param_t *kp)
 {
 	int error;

-	error = param_set_ulong(val, kp);
+	error = spl_param_set_u64(val, kp);
 	if (error < 0)
 		return (SET_ERROR(error));

@@ -1006,17 +1006,17 @@ MODULE_PARM_DESC(zfs_vdev_scheduler, "I/O scheduler");
 int
 param_set_min_auto_ashift(const char *buf, zfs_kernel_param_t *kp)
 {
-	uint64_t val;
+	uint_t val;
 	int error;

-	error = kstrtoull(buf, 0, &val);
+	error = kstrtouint(buf, 0, &val);
 	if (error < 0)
 		return (SET_ERROR(error));

 	if (val < ASHIFT_MIN || val > zfs_vdev_max_auto_ashift)
 		return (SET_ERROR(-EINVAL));

-	error = param_set_ulong(buf, kp);
+	error = param_set_uint(buf, kp);
 	if (error < 0)
 		return (SET_ERROR(error));

@@ -1026,17 +1026,17 @@ param_set_min_auto_ashift(const char *buf, zfs_kernel_param_t *kp)
 int
 param_set_max_auto_ashift(const char *buf, zfs_kernel_param_t *kp)
 {
-	uint64_t val;
+	uint_t val;
 	int error;

-	error = kstrtoull(buf, 0, &val);
+	error = kstrtouint(buf, 0, &val);
 	if (error < 0)
 		return (SET_ERROR(error));

 	if (val > ASHIFT_MAX || val < zfs_vdev_min_auto_ashift)
 		return (SET_ERROR(-EINVAL));

-	error = param_set_ulong(buf, kp);
+	error = param_set_uint(buf, kp);
 	if (error < 0)
 		return (SET_ERROR(error));

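The two setters above share a parse-validate-delegate shape: parse with `kstrtouint()`, reject out-of-range ashift values, then hand the original buffer to the stock `param_set_uint()` so the kernel performs the store with its normal bookkeeping. A minimal sketch of the same shape for a hypothetical bounded tunable (name and bounds are illustrative, not part of this change):

    #include <linux/kernel.h>
    #include <linux/moduleparam.h>

    static unsigned int example_shift = 9;  /* hypothetical tunable */

    static int
    param_set_example_shift(const char *buf, const struct kernel_param *kp)
    {
        unsigned int val;
        int error;

        error = kstrtouint(buf, 0, &val);
        if (error < 0)
            return (error);

        if (val < 9 || val > 16)        /* assumed valid range */
            return (-EINVAL);

        /* Delegate the actual store to the stock setter. */
        return (param_set_uint(buf, kp));
    }

    module_param_call(example_shift, param_set_example_shift,
        param_get_uint, &example_shift, 0644);
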
@@ -53,8 +53,8 @@ static taskq_t *vdev_file_taskq;
  * impact the vdev_ashift setting which can only be set at vdev creation
  * time.
  */
-static unsigned long vdev_file_logical_ashift = SPA_MINBLOCKSHIFT;
-static unsigned long vdev_file_physical_ashift = SPA_MINBLOCKSHIFT;
+static uint_t vdev_file_logical_ashift = SPA_MINBLOCKSHIFT;
+static uint_t vdev_file_physical_ashift = SPA_MINBLOCKSHIFT;

 static void
 vdev_file_hold(vdev_t *vd)
@@ -376,7 +376,7 @@ vdev_ops_t vdev_disk_ops = {

 #endif

-ZFS_MODULE_PARAM(zfs_vdev_file, vdev_file_, logical_ashift, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_vdev_file, vdev_file_, logical_ashift, UINT, ZMOD_RW,
 	"Logical ashift for file-based devices");
-ZFS_MODULE_PARAM(zfs_vdev_file, vdev_file_, physical_ashift, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_vdev_file, vdev_file_, physical_ashift, UINT, ZMOD_RW,
 	"Physical ashift for file-based devices");

@@ -419,12 +419,12 @@ boolean_t arc_warm;
 /*
  * These tunables are for performance analysis.
  */
-unsigned long zfs_arc_max = 0;
-unsigned long zfs_arc_min = 0;
-unsigned long zfs_arc_meta_limit = 0;
-unsigned long zfs_arc_meta_min = 0;
-static unsigned long zfs_arc_dnode_limit = 0;
-static unsigned long zfs_arc_dnode_reduce_percent = 10;
+uint64_t zfs_arc_max = 0;
+uint64_t zfs_arc_min = 0;
+uint64_t zfs_arc_meta_limit = 0;
+uint64_t zfs_arc_meta_min = 0;
+static uint64_t zfs_arc_dnode_limit = 0;
+static uint_t zfs_arc_dnode_reduce_percent = 10;
 static uint_t zfs_arc_grow_retry = 0;
 static uint_t zfs_arc_shrink_shift = 0;
 static uint_t zfs_arc_p_min_shift = 0;
@@ -449,17 +449,17 @@ int zfs_compressed_arc_enabled = B_TRUE;
  * ARC will evict meta buffers that exceed arc_meta_limit. This
  * tunable make arc_meta_limit adjustable for different workloads.
  */
-static unsigned long zfs_arc_meta_limit_percent = 75;
+static uint64_t zfs_arc_meta_limit_percent = 75;

 /*
  * Percentage that can be consumed by dnodes of ARC meta buffers.
  */
-static unsigned long zfs_arc_dnode_limit_percent = 10;
+static uint_t zfs_arc_dnode_limit_percent = 10;

 /*
  * These tunables are Linux-specific
  */
-static unsigned long zfs_arc_sys_free = 0;
+static uint64_t zfs_arc_sys_free = 0;
 static uint_t zfs_arc_min_prefetch_ms = 0;
 static uint_t zfs_arc_min_prescient_prefetch_ms = 0;
 static int zfs_arc_p_dampener_disable = 1;
@@ -781,12 +781,12 @@ uint64_t zfs_crc64_table[256];
 #define	L2ARC_FEED_TYPES	4

 /* L2ARC Performance Tunables */
-unsigned long l2arc_write_max = L2ARC_WRITE_SIZE;	/* def max write size */
-unsigned long l2arc_write_boost = L2ARC_WRITE_SIZE;	/* extra warmup write */
-unsigned long l2arc_headroom = L2ARC_HEADROOM;		/* # of dev writes */
-unsigned long l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
-unsigned long l2arc_feed_secs = L2ARC_FEED_SECS;	/* interval seconds */
-unsigned long l2arc_feed_min_ms = L2ARC_FEED_MIN_MS;	/* min interval msecs */
+uint64_t l2arc_write_max = L2ARC_WRITE_SIZE;	/* def max write size */
+uint64_t l2arc_write_boost = L2ARC_WRITE_SIZE;	/* extra warmup write */
+uint64_t l2arc_headroom = L2ARC_HEADROOM;	/* # of dev writes */
+uint64_t l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
+uint64_t l2arc_feed_secs = L2ARC_FEED_SECS;	/* interval seconds */
+uint64_t l2arc_feed_min_ms = L2ARC_FEED_MIN_MS;	/* min interval msecs */
 int l2arc_noprefetch = B_TRUE;			/* don't cache prefetch bufs */
 int l2arc_feed_again = B_TRUE;			/* turbo warmup */
 int l2arc_norw = B_FALSE;			/* no reads during writes */
@@ -909,7 +909,7 @@ static int l2arc_mfuonly = 0;
  * will vary depending of how well the specific device handles
  * these commands.
  */
-static unsigned long l2arc_trim_ahead = 0;
+static uint64_t l2arc_trim_ahead = 0;

 /*
  * Performance tuning of L2ARC persistence:
@@ -925,7 +925,7 @@ static unsigned long l2arc_trim_ahead = 0;
  * not to waste space.
  */
 static int l2arc_rebuild_enabled = B_TRUE;
-static unsigned long l2arc_rebuild_blocks_min_l2size = 1024 * 1024 * 1024;
+static uint64_t l2arc_rebuild_blocks_min_l2size = 1024 * 1024 * 1024;

 /* L2ARC persistence rebuild control routines. */
 void l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen);
@@ -11077,20 +11077,20 @@ EXPORT_SYMBOL(arc_add_prune_callback);
 EXPORT_SYMBOL(arc_remove_prune_callback);

 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min, param_set_arc_min,
-	param_get_ulong, ZMOD_RW, "Minimum ARC size in bytes");
+	spl_param_get_u64, ZMOD_RW, "Minimum ARC size in bytes");

 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, max, param_set_arc_max,
-	param_get_ulong, ZMOD_RW, "Maximum ARC size in bytes");
+	spl_param_get_u64, ZMOD_RW, "Maximum ARC size in bytes");

-ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, meta_limit, param_set_arc_long,
-	param_get_ulong, ZMOD_RW, "Metadata limit for ARC size in bytes");
+ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, meta_limit, param_set_arc_u64,
+	spl_param_get_u64, ZMOD_RW, "Metadata limit for ARC size in bytes");

 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, meta_limit_percent,
-	param_set_arc_long, param_get_ulong, ZMOD_RW,
+	param_set_arc_int, param_get_uint, ZMOD_RW,
 	"Percent of ARC size for ARC meta limit");

-ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, meta_min, param_set_arc_long,
-	param_get_ulong, ZMOD_RW, "Minimum ARC metadata size in bytes");
+ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, meta_min, param_set_arc_u64,
+	spl_param_get_u64, ZMOD_RW, "Minimum ARC metadata size in bytes");

 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, meta_prune, INT, ZMOD_RW,
 	"Meta objects to scan for prune");
@@ -11129,25 +11129,25 @@ ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prescient_prefetch_ms,
 	param_set_arc_int, param_get_uint, ZMOD_RW,
 	"Min life of prescient prefetched block in ms");

-ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_max, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_max, U64, ZMOD_RW,
 	"Max write bytes per interval");

-ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_boost, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_boost, U64, ZMOD_RW,
 	"Extra write bytes during device warmup");

-ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom, U64, ZMOD_RW,
 	"Number of max device writes to precache");

-ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom_boost, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom_boost, U64, ZMOD_RW,
 	"Compressed l2arc_headroom multiplier");

-ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, trim_ahead, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, trim_ahead, U64, ZMOD_RW,
 	"TRIM ahead L2ARC write size multiplier");

-ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_secs, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_secs, U64, ZMOD_RW,
 	"Seconds between L2ARC writing");

-ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_min_ms, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_min_ms, U64, ZMOD_RW,
 	"Min feed interval in milliseconds");

 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, noprefetch, INT, ZMOD_RW,
@@ -11165,7 +11165,7 @@ ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, meta_percent, UINT, ZMOD_RW,
 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_enabled, INT, ZMOD_RW,
 	"Rebuild the L2ARC when importing a pool");

-ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_blocks_min_l2size, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_blocks_min_l2size, U64, ZMOD_RW,
 	"Min size in bytes to write rebuild log blocks in L2ARC");

 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, mfuonly, INT, ZMOD_RW,
@@ -11177,17 +11177,17 @@ ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, exclude_special, INT, ZMOD_RW,
 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, lotsfree_percent, param_set_arc_int,
 	param_get_uint, ZMOD_RW, "System free memory I/O throttle in bytes");

-ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, sys_free, param_set_arc_long,
-	param_get_ulong, ZMOD_RW, "System free memory target size in bytes");
+ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, sys_free, param_set_arc_u64,
+	spl_param_get_u64, ZMOD_RW, "System free memory target size in bytes");

-ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit, param_set_arc_long,
-	param_get_ulong, ZMOD_RW, "Minimum bytes of dnodes in ARC");
+ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit, param_set_arc_u64,
+	spl_param_get_u64, ZMOD_RW, "Minimum bytes of dnodes in ARC");

 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit_percent,
-	param_set_arc_long, param_get_ulong, ZMOD_RW,
+	param_set_arc_int, param_get_uint, ZMOD_RW,
 	"Percent of ARC meta buffers for dnodes");

-ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, dnode_reduce_percent, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, dnode_reduce_percent, UINT, ZMOD_RW,
 	"Percentage of excess dnodes to try to unpin");

 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, eviction_pct, UINT, ZMOD_RW,

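With the SPL pieces above in place, a 64-bit tunable is declared the same way on every platform by naming the fixed-width type token. A hypothetical example (the parameter name and description are illustrative only, not part of this change):

    /*
     * Hypothetical tunable: the U64 token selects a 64-bit parameter on
     * all platforms -- on Linux via the new spl_param_ops_u64, on FreeBSD
     * via a 64-bit sysctl -- instead of the width-varying ULONG.
     */
    static uint64_t zfs_example_limit_bytes = 0;

    ZFS_MODULE_PARAM(zfs, zfs_, example_limit_bytes, U64, ZMOD_RW,
        "Hypothetical 64-bit byte limit (0 = unlimited)");
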
@@ -227,8 +227,8 @@ typedef struct dbuf_cache {
 dbuf_cache_t dbuf_caches[DB_CACHE_MAX];

 /* Size limits for the caches */
-static unsigned long dbuf_cache_max_bytes = ULONG_MAX;
-static unsigned long dbuf_metadata_cache_max_bytes = ULONG_MAX;
+static uint64_t dbuf_cache_max_bytes = UINT64_MAX;
+static uint64_t dbuf_metadata_cache_max_bytes = UINT64_MAX;

 /* Set the default sizes of the caches to log2 fraction of arc size */
 static uint_t dbuf_cache_shift = 5;
@@ -5122,7 +5122,7 @@ EXPORT_SYMBOL(dmu_buf_set_user_ie);
 EXPORT_SYMBOL(dmu_buf_get_user);
 EXPORT_SYMBOL(dmu_buf_get_blkptr);

-ZFS_MODULE_PARAM(zfs_dbuf_cache, dbuf_cache_, max_bytes, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_dbuf_cache, dbuf_cache_, max_bytes, U64, ZMOD_RW,
 	"Maximum size in bytes of the dbuf cache.");

 ZFS_MODULE_PARAM(zfs_dbuf_cache, dbuf_cache_, hiwater_pct, UINT, ZMOD_RW,
@@ -5131,7 +5131,7 @@ ZFS_MODULE_PARAM(zfs_dbuf_cache, dbuf_cache_, hiwater_pct, UINT, ZMOD_RW,
 ZFS_MODULE_PARAM(zfs_dbuf_cache, dbuf_cache_, lowater_pct, UINT, ZMOD_RW,
 	"Percentage below dbuf_cache_max_bytes when dbuf eviction stops.");

-ZFS_MODULE_PARAM(zfs_dbuf, dbuf_, metadata_cache_max_bytes, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_dbuf, dbuf_, metadata_cache_max_bytes, U64, ZMOD_RW,
 	"Maximum size in bytes of dbuf metadata cache.");

 ZFS_MODULE_PARAM(zfs_dbuf, dbuf_, cache_shift, UINT, ZMOD_RW,

@@ -70,7 +70,7 @@ static int zfs_nopwrite_enabled = 1;
  * will wait until the next TXG.
  * A value of zero will disable this throttle.
  */
-static unsigned long zfs_per_txg_dirty_frees_percent = 30;
+static uint_t zfs_per_txg_dirty_frees_percent = 30;

 /*
  * Enable/disable forcing txg sync when dirty checking for holes with lseek().
@@ -2355,7 +2355,7 @@ EXPORT_SYMBOL(dmu_ot);
 ZFS_MODULE_PARAM(zfs, zfs_, nopwrite_enabled, INT, ZMOD_RW,
 	"Enable NOP writes");

-ZFS_MODULE_PARAM(zfs, zfs_, per_txg_dirty_frees_percent, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, per_txg_dirty_frees_percent, UINT, ZMOD_RW,
 	"Percentage of dirtied blocks from frees in one TXG");

 ZFS_MODULE_PARAM(zfs, zfs_, dmu_offset_next_sync, INT, ZMOD_RW,

@@ -58,7 +58,7 @@ unsigned int zfetch_max_distance = 64 * 1024 * 1024;
 /* max bytes to prefetch indirects for per stream (default 64MB) */
 unsigned int zfetch_max_idistance = 64 * 1024 * 1024;
 /* max number of bytes in an array_read in which we allow prefetching (1MB) */
-unsigned long zfetch_array_rd_sz = 1024 * 1024;
+uint64_t zfetch_array_rd_sz = 1024 * 1024;

 typedef struct zfetch_stats {
 	kstat_named_t zfetchstat_hits;
@@ -565,5 +565,5 @@ ZFS_MODULE_PARAM(zfs_prefetch, zfetch_, max_distance, UINT, ZMOD_RW,
 ZFS_MODULE_PARAM(zfs_prefetch, zfetch_, max_idistance, UINT, ZMOD_RW,
 	"Max bytes to prefetch indirects for per stream");

-ZFS_MODULE_PARAM(zfs_prefetch, zfetch_, array_rd_sz, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_prefetch, zfetch_, array_rd_sz, U64, ZMOD_RW,
 	"Number of bytes in a array_read");

@@ -92,7 +92,7 @@
  * will be loaded into memory and shouldn't take up an inordinate amount of
  * space. We settled on ~500000 entries, corresponding to roughly 128M.
  */
-unsigned long zfs_livelist_max_entries = 500000;
+uint64_t zfs_livelist_max_entries = 500000;

 /*
  * We can approximate how much of a performance gain a livelist will give us
@@ -1040,7 +1040,7 @@ dsl_process_sub_livelist(bpobj_t *bpobj, bplist_t *to_free, zthr_t *t,
 	return (err);
 }

-ZFS_MODULE_PARAM(zfs_livelist, zfs_livelist_, max_entries, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_livelist, zfs_livelist_, max_entries, U64, ZMOD_RW,
 	"Size to start the next sub-livelist in a livelist");

 ZFS_MODULE_PARAM(zfs_livelist, zfs_livelist_, min_percent_shared, INT, ZMOD_RW,

@@ -99,8 +99,8 @@
  * capped at zfs_dirty_data_max_max. It can also be overridden with a module
  * parameter.
  */
-unsigned long zfs_dirty_data_max = 0;
-unsigned long zfs_dirty_data_max_max = 0;
+uint64_t zfs_dirty_data_max = 0;
+uint64_t zfs_dirty_data_max_max = 0;
 uint_t zfs_dirty_data_max_percent = 10;
 uint_t zfs_dirty_data_max_max_percent = 25;

@@ -109,7 +109,7 @@ uint_t zfs_dirty_data_max_max_percent = 25;
  * when approaching the limit until log data is cleared out after txg sync.
  * It only counts TX_WRITE log with WR_COPIED or WR_NEED_COPY.
  */
-unsigned long zfs_wrlog_data_max = 0;
+uint64_t zfs_wrlog_data_max = 0;

 /*
  * If there's at least this much dirty data (as a percentage of
@@ -138,7 +138,7 @@ uint_t zfs_delay_min_dirty_percent = 60;
  * Note: zfs_delay_scale * zfs_dirty_data_max must be < 2^64, due to the
  * multiply in dmu_tx_delay().
  */
-unsigned long zfs_delay_scale = 1000 * 1000 * 1000 / 2000;
+uint64_t zfs_delay_scale = 1000 * 1000 * 1000 / 2000;

 /*
  * This determines the number of threads used by the dp_sync_taskq.
@@ -1465,20 +1465,20 @@ ZFS_MODULE_PARAM(zfs, zfs_, dirty_data_max_max_percent, UINT, ZMOD_RD,
 ZFS_MODULE_PARAM(zfs, zfs_, delay_min_dirty_percent, UINT, ZMOD_RW,
 	"Transaction delay threshold");

-ZFS_MODULE_PARAM(zfs, zfs_, dirty_data_max, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, dirty_data_max, U64, ZMOD_RW,
 	"Determines the dirty space limit");

-ZFS_MODULE_PARAM(zfs, zfs_, wrlog_data_max, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, wrlog_data_max, U64, ZMOD_RW,
 	"The size limit of write-transaction zil log data");

 /* zfs_dirty_data_max_max only applied at module load in arc_init(). */
-ZFS_MODULE_PARAM(zfs, zfs_, dirty_data_max_max, ULONG, ZMOD_RD,
+ZFS_MODULE_PARAM(zfs, zfs_, dirty_data_max_max, U64, ZMOD_RD,
 	"zfs_dirty_data_max upper bound in bytes");

 ZFS_MODULE_PARAM(zfs, zfs_, dirty_data_sync_percent, UINT, ZMOD_RW,
 	"Dirty data txg sync threshold as a percentage of zfs_dirty_data_max");

-ZFS_MODULE_PARAM(zfs, zfs_, delay_scale, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, delay_scale, U64, ZMOD_RW,
 	"How quickly delay approaches infinity");

 ZFS_MODULE_PARAM(zfs, zfs_, sync_taskq_batch_pct, INT, ZMOD_RW,

@@ -147,13 +147,13 @@ static int zfs_scan_strict_mem_lim = B_FALSE;
  * overload the drives with I/O, since that is protected by
  * zfs_vdev_scrub_max_active.
  */
-static unsigned long zfs_scan_vdev_limit = 4 << 20;
+static uint64_t zfs_scan_vdev_limit = 4 << 20;

 static uint_t zfs_scan_issue_strategy = 0;

 /* don't queue & sort zios, go direct */
 static int zfs_scan_legacy = B_FALSE;
-static unsigned long zfs_scan_max_ext_gap = 2 << 20; /* in bytes */
+static uint64_t zfs_scan_max_ext_gap = 2 << 20; /* in bytes */

 /*
  * fill_weight is non-tunable at runtime, so we copy it at module init from
@@ -192,9 +192,9 @@ static int zfs_no_scrub_io = B_FALSE; /* set to disable scrub i/o */
 static int zfs_no_scrub_prefetch = B_FALSE; /* set to disable scrub prefetch */
 static const enum ddt_class zfs_scrub_ddt_class_max = DDT_CLASS_DUPLICATE;
 /* max number of blocks to free in a single TXG */
-static unsigned long zfs_async_block_max_blocks = ULONG_MAX;
+static uint64_t zfs_async_block_max_blocks = UINT64_MAX;
 /* max number of dedup blocks to free in a single TXG */
-static unsigned long zfs_max_async_dedup_frees = 100000;
+static uint64_t zfs_max_async_dedup_frees = 100000;

 /* set to disable resilver deferring */
 static int zfs_resilver_disable_defer = B_FALSE;
@@ -4447,7 +4447,7 @@ dsl_scan_assess_vdev(dsl_pool_t *dp, vdev_t *vd)
 		spa_async_request(dp->dp_spa, SPA_ASYNC_RESILVER);
 }

-ZFS_MODULE_PARAM(zfs, zfs_, scan_vdev_limit, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, scan_vdev_limit, U64, ZMOD_RW,
 	"Max bytes in flight per leaf vdev for scrubs and resilvers");

 ZFS_MODULE_PARAM(zfs, zfs_, scrub_min_time_ms, UINT, ZMOD_RW,
@@ -4471,10 +4471,10 @@ ZFS_MODULE_PARAM(zfs, zfs_, no_scrub_io, INT, ZMOD_RW,
 ZFS_MODULE_PARAM(zfs, zfs_, no_scrub_prefetch, INT, ZMOD_RW,
 	"Set to disable scrub prefetching");

-ZFS_MODULE_PARAM(zfs, zfs_, async_block_max_blocks, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, async_block_max_blocks, U64, ZMOD_RW,
 	"Max number of blocks freed in one txg");

-ZFS_MODULE_PARAM(zfs, zfs_, max_async_dedup_frees, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, max_async_dedup_frees, U64, ZMOD_RW,
 	"Max number of dedup blocks freed in one txg");

 ZFS_MODULE_PARAM(zfs, zfs_, free_bpobj_enabled, INT, ZMOD_RW,
@@ -4495,7 +4495,7 @@ ZFS_MODULE_PARAM(zfs, zfs_, scan_legacy, INT, ZMOD_RW,
 ZFS_MODULE_PARAM(zfs, zfs_, scan_checkpoint_intval, UINT, ZMOD_RW,
 	"Scan progress on-disk checkpointing interval");

-ZFS_MODULE_PARAM(zfs, zfs_, scan_max_ext_gap, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, scan_max_ext_gap, U64, ZMOD_RW,
 	"Max gap in bytes between sequential scrub / resilver I/Os");

 ZFS_MODULE_PARAM(zfs, zfs_, scan_mem_lim_soft_fact, UINT, ZMOD_RW,

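One behavioral nuance in this file: the "no limit" default for `zfs_async_block_max_blocks` changed from `ULONG_MAX` to `UINT64_MAX`. The two are identical on LP64, but on a 32-bit build `ULONG_MAX` is only 2^32 - 1, so the old sentinel was a real (if enormous) cap rather than "unlimited". A standalone illustration:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        /*
         * On ILP32: ULONG_MAX == 4294967295 (2^32 - 1).
         * On LP64:  ULONG_MAX == UINT64_MAX == 18446744073709551615.
         * UINT64_MAX is the same "unlimited" sentinel everywhere.
         */
        printf("ULONG_MAX  = %lu\n", ULONG_MAX);
        printf("UINT64_MAX = %llu\n", (unsigned long long)UINT64_MAX);
        return (0);
    }
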
@@ -51,12 +51,12 @@
  * operation, we will try to write this amount of data to each disk before
  * moving on to the next top-level vdev.
  */
-static unsigned long metaslab_aliquot = 1024 * 1024;
+static uint64_t metaslab_aliquot = 1024 * 1024;

 /*
  * For testing, make some blocks above a certain size be gang blocks.
  */
-unsigned long metaslab_force_ganging = SPA_MAXBLOCKSIZE + 1;
+uint64_t metaslab_force_ganging = SPA_MAXBLOCKSIZE + 1;

 /*
  * In pools where the log space map feature is not enabled we touch
@@ -286,7 +286,7 @@ static const int max_disabled_ms = 3;
  * Time (in seconds) to respect ms_max_size when the metaslab is not loaded.
  * To avoid 64-bit overflow, don't set above UINT32_MAX.
  */
-static unsigned long zfs_metaslab_max_size_cache_sec = 1 * 60 * 60; /* 1 hour */
+static uint64_t zfs_metaslab_max_size_cache_sec = 1 * 60 * 60; /* 1 hour */

 /*
  * Maximum percentage of memory to use on storing loaded metaslabs. If loading
@@ -6202,7 +6202,7 @@ metaslab_unflushed_txg(metaslab_t *ms)
 	return (ms->ms_unflushed_txg);
 }

-ZFS_MODULE_PARAM(zfs_metaslab, metaslab_, aliquot, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_metaslab, metaslab_, aliquot, U64, ZMOD_RW,
 	"Allocation granularity (a.k.a. stripe size)");

 ZFS_MODULE_PARAM(zfs_metaslab, metaslab_, debug_load, INT, ZMOD_RW,
@@ -6250,7 +6250,7 @@ ZFS_MODULE_PARAM(zfs_metaslab, zfs_metaslab_, segment_weight_enabled, INT,
 ZFS_MODULE_PARAM(zfs_metaslab, zfs_metaslab_, switch_threshold, INT, ZMOD_RW,
 	"Segment-based metaslab selection maximum buckets before switching");

-ZFS_MODULE_PARAM(zfs_metaslab, metaslab_, force_ganging, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_metaslab, metaslab_, force_ganging, U64, ZMOD_RW,
 	"Blocks larger than this size are forced to be gang blocks");

 ZFS_MODULE_PARAM(zfs_metaslab, metaslab_, df_max_search, UINT, ZMOD_RW,
@@ -6259,7 +6259,7 @@ ZFS_MODULE_PARAM(zfs_metaslab, metaslab_, df_max_search, UINT, ZMOD_RW,
 ZFS_MODULE_PARAM(zfs_metaslab, metaslab_, df_use_largest_segment, INT, ZMOD_RW,
 	"When looking in size tree, use largest segment instead of exact fit");

-ZFS_MODULE_PARAM(zfs_metaslab, zfs_metaslab_, max_size_cache_sec, ULONG,
+ZFS_MODULE_PARAM(zfs_metaslab, zfs_metaslab_, max_size_cache_sec, U64,
 	ZMOD_RW, "How long to trust the cached max chunk size of a metaslab");

 ZFS_MODULE_PARAM(zfs_metaslab, zfs_metaslab_, mem_limit, UINT, ZMOD_RW,

@@ -156,7 +156,7 @@
  * vary with the I/O load and this observed value is the ub_mmp_delay which is
  * stored in the uberblock. The minimum allowed value is 100 ms.
  */
-ulong_t zfs_multihost_interval = MMP_DEFAULT_INTERVAL;
+uint64_t zfs_multihost_interval = MMP_DEFAULT_INTERVAL;

 /*
  * Used to control the duration of the activity test on import. Smaller values
@@ -736,7 +736,7 @@ mmp_signal_all_threads(void)

 /* BEGIN CSTYLED */
 ZFS_MODULE_PARAM_CALL(zfs_multihost, zfs_multihost_, interval,
-	param_set_multihost_interval, param_get_ulong, ZMOD_RW,
+	param_set_multihost_interval, spl_param_get_u64, ZMOD_RW,
 	"Milliseconds between mmp writes to each leaf");
 /* END CSTYLED */

@@ -218,7 +218,7 @@ static int spa_load_print_vdev_tree = B_FALSE;
  * there are also risks of performing an inadvertent rewind as we might be
  * missing all the vdevs with the latest uberblocks.
  */
-unsigned long zfs_max_missing_tvds = 0;
+uint64_t zfs_max_missing_tvds = 0;

 /*
  * The parameters below are similar to zfs_max_missing_tvds but are only
@@ -10016,7 +10016,7 @@ ZFS_MODULE_PARAM(zfs_zio, zio_, taskq_batch_tpq, UINT, ZMOD_RD,
 	"Number of threads per IO worker taskqueue");

 /* BEGIN CSTYLED */
-ZFS_MODULE_PARAM(zfs, zfs_, max_missing_tvds, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, max_missing_tvds, U64, ZMOD_RW,
 	"Allow importing pool with up to this number of missing top-level "
 	"vdevs (in read-only mode)");
 /* END CSTYLED */

@@ -158,7 +158,7 @@
  * amount of checkpointed data that has been freed within them while
  * the pool had a checkpoint.
  */
-static unsigned long zfs_spa_discard_memory_limit = 16 * 1024 * 1024;
+static uint64_t zfs_spa_discard_memory_limit = 16 * 1024 * 1024;

 int
 spa_checkpoint_get_stats(spa_t *spa, pool_checkpoint_stat_t *pcs)
@@ -631,7 +631,7 @@ EXPORT_SYMBOL(spa_checkpoint_discard_thread);
 EXPORT_SYMBOL(spa_checkpoint_discard_thread_check);

 /* BEGIN CSTYLED */
-ZFS_MODULE_PARAM(zfs_spa, zfs_spa_, discard_memory_limit, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_spa, zfs_spa_, discard_memory_limit, U64, ZMOD_RW,
 	"Limit for memory used in prefetching the checkpoint space map done "
 	"on each vdev while discarding the checkpoint");
 /* END CSTYLED */

@@ -188,13 +188,13 @@ static const unsigned long zfs_log_sm_blksz = 1ULL << 17;
  * (thus the _ppm suffix; reads as "parts per million"). As an example,
  * the default of 1000 allows 0.1% of memory to be used.
  */
-static unsigned long zfs_unflushed_max_mem_ppm = 1000;
+static uint64_t zfs_unflushed_max_mem_ppm = 1000;

 /*
  * Specific hard-limit in memory that ZFS allows to be used for
  * unflushed changes.
  */
-static unsigned long zfs_unflushed_max_mem_amt = 1ULL << 30;
+static uint64_t zfs_unflushed_max_mem_amt = 1ULL << 30;

 /*
  * The following tunable determines the number of blocks that can be used for
@@ -243,33 +243,33 @@ static unsigned long zfs_unflushed_max_mem_amt = 1ULL << 30;
  * provide upper and lower bounds for the log block limit.
  * [see zfs_unflushed_log_block_{min,max}]
  */
-static unsigned long zfs_unflushed_log_block_pct = 400;
+static uint_t zfs_unflushed_log_block_pct = 400;

 /*
  * If the number of metaslabs is small and our incoming rate is high, we could
  * get into a situation that we are flushing all our metaslabs every TXG. Thus
  * we always allow at least this many log blocks.
  */
-static unsigned long zfs_unflushed_log_block_min = 1000;
+static uint64_t zfs_unflushed_log_block_min = 1000;

 /*
  * If the log becomes too big, the import time of the pool can take a hit in
  * terms of performance. Thus we have a hard limit in the size of the log in
  * terms of blocks.
  */
-static unsigned long zfs_unflushed_log_block_max = (1ULL << 17);
+static uint64_t zfs_unflushed_log_block_max = (1ULL << 17);

 /*
  * Also we have a hard limit in the size of the log in terms of dirty TXGs.
  */
-static unsigned long zfs_unflushed_log_txg_max = 1000;
+static uint64_t zfs_unflushed_log_txg_max = 1000;

 /*
  * Max # of rows allowed for the log_summary. The tradeoff here is accuracy and
  * stability of the flushing algorithm (longer summary) vs its runtime overhead
  * (smaller summary is faster to traverse).
  */
-static unsigned long zfs_max_logsm_summary_length = 10;
+static uint64_t zfs_max_logsm_summary_length = 10;

 /*
  * Tunable that sets the lower bound on the metaslabs to flush every TXG.
@@ -282,7 +282,7 @@ static unsigned long zfs_max_logsm_summary_length = 10;
  * The point of this tunable is to be used in extreme cases where we really
  * want to flush more metaslabs than our adaptable heuristic plans to flush.
  */
-static unsigned long zfs_min_metaslabs_to_flush = 1;
+static uint64_t zfs_min_metaslabs_to_flush = 1;

 /*
  * Tunable that specifies how far in the past do we want to look when trying to
@@ -293,7 +293,7 @@ static unsigned long zfs_min_metaslabs_to_flush = 1;
  * average over all the blocks that we walk
  * [see spa_estimate_incoming_log_blocks].
  */
-static unsigned long zfs_max_log_walking = 5;
+static uint64_t zfs_max_log_walking = 5;

 /*
  * This tunable exists solely for testing purposes. It ensures that the log
@@ -1357,34 +1357,34 @@ spa_ld_log_spacemaps(spa_t *spa)
 }

 /* BEGIN CSTYLED */
-ZFS_MODULE_PARAM(zfs, zfs_, unflushed_max_mem_amt, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, unflushed_max_mem_amt, U64, ZMOD_RW,
 	"Specific hard-limit in memory that ZFS allows to be used for "
 	"unflushed changes");

-ZFS_MODULE_PARAM(zfs, zfs_, unflushed_max_mem_ppm, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, unflushed_max_mem_ppm, U64, ZMOD_RW,
 	"Percentage of the overall system memory that ZFS allows to be "
 	"used for unflushed changes (value is calculated over 1000000 for "
 	"finer granularity)");

-ZFS_MODULE_PARAM(zfs, zfs_, unflushed_log_block_max, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, unflushed_log_block_max, U64, ZMOD_RW,
 	"Hard limit (upper-bound) in the size of the space map log "
 	"in terms of blocks.");

-ZFS_MODULE_PARAM(zfs, zfs_, unflushed_log_block_min, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, unflushed_log_block_min, U64, ZMOD_RW,
 	"Lower-bound limit for the maximum amount of blocks allowed in "
 	"log spacemap (see zfs_unflushed_log_block_max)");

-ZFS_MODULE_PARAM(zfs, zfs_, unflushed_log_txg_max, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, unflushed_log_txg_max, U64, ZMOD_RW,
 	"Hard limit (upper-bound) in the size of the space map log "
 	"in terms of dirty TXGs.");

-ZFS_MODULE_PARAM(zfs, zfs_, unflushed_log_block_pct, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, unflushed_log_block_pct, UINT, ZMOD_RW,
 	"Tunable used to determine the number of blocks that can be used for "
 	"the spacemap log, expressed as a percentage of the total number of "
 	"metaslabs in the pool (e.g. 400 means the number of log blocks is "
 	"capped at 4 times the number of metaslabs)");

-ZFS_MODULE_PARAM(zfs, zfs_, max_log_walking, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, max_log_walking, U64, ZMOD_RW,
 	"The number of past TXGs that the flushing algorithm of the log "
 	"spacemap feature uses to estimate incoming log blocks");

@@ -1393,8 +1393,8 @@ ZFS_MODULE_PARAM(zfs, zfs_, keep_log_spacemaps_at_export, INT, ZMOD_RW,
 	"during pool export/destroy");
 /* END CSTYLED */

-ZFS_MODULE_PARAM(zfs, zfs_, max_logsm_summary_length, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, max_logsm_summary_length, U64, ZMOD_RW,
 	"Maximum number of rows allowed in the summary of the spacemap log");

-ZFS_MODULE_PARAM(zfs, zfs_, min_metaslabs_to_flush, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, min_metaslabs_to_flush, U64, ZMOD_RW,
 	"Minimum number of metaslabs to flush per dirty TXG");

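Since `zfs_unflushed_log_block_pct` is a percentage (and now a 32-bit `uint_t`, as a percentage never needs 64 bits), the cap it expresses is easy to state: the log may hold pct/100 times the pool's metaslab count in blocks, clamped by the min/max block tunables above. A sketch of that arithmetic as the help text describes it (the in-tree computation may differ in detail):

    #include <stdint.h>

    typedef unsigned int uint_t;

    /*
     * Sketch of the described cap: with pct = 400 and 1000 metaslabs,
     * the limit is 1000 * 400 / 100 = 4000 log blocks, clamped to
     * [lb_min, lb_max].
     */
    static uint64_t
    example_log_block_limit(uint64_t metaslabs, uint_t pct,
        uint64_t lb_min, uint64_t lb_max)
    {
        uint64_t limit = metaslabs * pct / 100;

        if (limit < lb_min)
            limit = lb_min;
        if (limit > lb_max)
            limit = lb_max;
        return (limit);
    }
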
@@ -304,20 +304,20 @@ int zfs_free_leak_on_eio = B_FALSE;
  * has not completed in zfs_deadman_synctime_ms is considered "hung" resulting
  * in one of three behaviors controlled by zfs_deadman_failmode.
  */
-unsigned long zfs_deadman_synctime_ms = 600000UL;	/* 10 min. */
+uint64_t zfs_deadman_synctime_ms = 600000UL;	/* 10 min. */

 /*
  * This value controls the maximum amount of time zio_wait() will block for an
  * outstanding IO. By default this is 300 seconds at which point the "hung"
  * behavior will be applied as described for zfs_deadman_synctime_ms.
  */
-unsigned long zfs_deadman_ziotime_ms = 300000UL;	/* 5 min. */
+uint64_t zfs_deadman_ziotime_ms = 300000UL;	/* 5 min. */

 /*
  * Check time in milliseconds. This defines the frequency at which we check
  * for hung I/O.
  */
-unsigned long zfs_deadman_checktime_ms = 60000UL;	/* 1 min. */
+uint64_t zfs_deadman_checktime_ms = 60000UL;	/* 1 min. */

 /*
  * By default the deadman is enabled.

@@ -2922,7 +2922,7 @@ ZFS_MODULE_PARAM(zfs, zfs_, recover, INT, ZMOD_RW,
 ZFS_MODULE_PARAM(zfs, zfs_, free_leak_on_eio, INT, ZMOD_RW,
 	"Set to ignore IO errors during free and permanently leak the space");

-ZFS_MODULE_PARAM(zfs_deadman, zfs_deadman_, checktime_ms, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_deadman, zfs_deadman_, checktime_ms, U64, ZMOD_RW,
 	"Dead I/O check interval in milliseconds");

 ZFS_MODULE_PARAM(zfs_deadman, zfs_deadman_, enabled, INT, ZMOD_RW,

@@ -2943,11 +2943,11 @@ ZFS_MODULE_PARAM_CALL(zfs_deadman, zfs_deadman_, failmode,
 	"Failmode for deadman timer");

 ZFS_MODULE_PARAM_CALL(zfs_deadman, zfs_deadman_, synctime_ms,
-	param_set_deadman_synctime, param_get_ulong, ZMOD_RW,
+	param_set_deadman_synctime, spl_param_get_u64, ZMOD_RW,
 	"Pool sync expiration time in milliseconds");

 ZFS_MODULE_PARAM_CALL(zfs_deadman, zfs_deadman_, ziotime_ms,
-	param_set_deadman_ziotime, param_get_ulong, ZMOD_RW,
+	param_set_deadman_ziotime, spl_param_get_u64, ZMOD_RW,
 	"IO expiration time in milliseconds");

 ZFS_MODULE_PARAM(zfs, zfs_, special_class_metadata_reserve_pct, UINT, ZMOD_RW,
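The ZFS_MODULE_PARAM_CALL declarations above pair a parameter-specific setter with the generic 64-bit getter: writes get validated, reads go through spl_param_get_u64. A hypothetical setter in that style might look like the sketch below; the name and the validation rule are invented for illustration, and the real deadman setters do more work (such as pushing the new value into live pool state).

    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/moduleparam.h>

    /* Hypothetical validating setter for a 64-bit tunable. */
    static int
    example_set_deadman_ms(const char *val, const struct kernel_param *kp)
    {
    	unsigned long long ms;
    	int error;

    	error = kstrtoull(val, 0, &ms);
    	if (error != 0)
    		return (error);

    	if (ms == 0)		/* arbitrary example check */
    		return (-EINVAL);

    	*(u64 *)kp->arg = ms;
    	return (0);
    }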
@@ -144,8 +144,8 @@ int zfs_nocacheflush = 0;
  * be forced by vdev logical ashift or by user via ashift property, but won't
  * be set automatically as a performance optimization.
  */
-uint64_t zfs_vdev_max_auto_ashift = 14;
-uint64_t zfs_vdev_min_auto_ashift = ASHIFT_MIN;
+uint_t zfs_vdev_max_auto_ashift = 14;
+uint_t zfs_vdev_min_auto_ashift = ASHIFT_MIN;

 void
 vdev_dbgmsg(vdev_t *vd, const char *fmt, ...)

@@ -6156,11 +6156,11 @@ ZFS_MODULE_PARAM(zfs, zfs_, embedded_slog_min_ms, UINT, ZMOD_RW,

 /* BEGIN CSTYLED */
 ZFS_MODULE_PARAM_CALL(zfs_vdev, zfs_vdev_, min_auto_ashift,
-	param_set_min_auto_ashift, param_get_ulong, ZMOD_RW,
+	param_set_min_auto_ashift, param_get_uint, ZMOD_RW,
 	"Minimum ashift used when creating new top-level vdevs");

 ZFS_MODULE_PARAM_CALL(zfs_vdev, zfs_vdev_, max_auto_ashift,
-	param_set_max_auto_ashift, param_get_ulong, ZMOD_RW,
+	param_set_max_auto_ashift, param_get_uint, ZMOD_RW,
 	"Maximum ashift used when optimizing for logical -> physical sector "
 	"size on new top-level vdevs");
 /* END CSTYLED */
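These two tunables were declared uint64_t but read and written through the kernel's ulong accessors, and the hunks above make the declared type and the accessor agree on every platform. A standalone userspace illustration (not ZFS code) of why the old mismatch was endian-sensitive on 32-bit targets:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
    	uint64_t ashift = 0;

    	/*
    	 * What a 32-bit ulong accessor effectively does to a
    	 * uint64_t (assumes sizeof (unsigned long) == 4).
    	 */
    	*(unsigned long *)&ashift = 12;

    	/*
    	 * Little-endian ILP32 happens to print 12; big-endian ILP32
    	 * prints 51539607552 (12 << 32), because the store landed in
    	 * the high half of the 64-bit value.
    	 */
    	printf("%llu\n", (unsigned long long)ashift);
    	return (0);
    }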
@@ -189,14 +189,14 @@ static uint_t zfs_condense_indirect_obsolete_pct = 25;
  * consumed by the obsolete space map; the default of 1GB is small enough
  * that we typically don't mind "wasting" it.
  */
-static unsigned long zfs_condense_max_obsolete_bytes = 1024 * 1024 * 1024;
+static uint64_t zfs_condense_max_obsolete_bytes = 1024 * 1024 * 1024;

 /*
  * Don't bother condensing if the mapping uses less than this amount of
  * memory. The default of 128KB is considered a "trivial" amount of
  * memory and not worth reducing.
  */
-static unsigned long zfs_condense_min_mapping_bytes = 128 * 1024;
+static uint64_t zfs_condense_min_mapping_bytes = 128 * 1024;

 /*
  * This is used by the test suite so that it can ensure that certain

@@ -1892,11 +1892,11 @@ ZFS_MODULE_PARAM(zfs_condense, zfs_condense_, indirect_obsolete_pct, UINT,
 	"Minimum obsolete percent of bytes in the mapping "
 	"to attempt condensing");

-ZFS_MODULE_PARAM(zfs_condense, zfs_condense_, min_mapping_bytes, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_condense, zfs_condense_, min_mapping_bytes, U64, ZMOD_RW,
 	"Don't bother condensing if the mapping uses less than this amount of "
 	"memory");

-ZFS_MODULE_PARAM(zfs_condense, zfs_condense_, max_obsolete_bytes, ULONG,
-	ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_condense, zfs_condense_, max_obsolete_bytes, U64,
+	ZMOD_RW,
 	"Minimum size obsolete spacemap to attempt condensing");
@@ -36,17 +36,13 @@
 /*
  * Value that is written to disk during initialization.
  */
-#ifdef _ILP32
-static unsigned long zfs_initialize_value = 0xdeadbeefUL;
-#else
-static unsigned long zfs_initialize_value = 0xdeadbeefdeadbeeeULL;
-#endif
+static uint64_t zfs_initialize_value = 0xdeadbeefdeadbeeeULL;

 /* maximum number of I/Os outstanding per leaf vdev */
 static const int zfs_initialize_limit = 1;

 /* size of initializing writes; default 1MiB, see zfs_remove_max_segment */
-static unsigned long zfs_initialize_chunk_size = 1024 * 1024;
+static uint64_t zfs_initialize_chunk_size = 1024 * 1024;

 static boolean_t
 vdev_initialize_should_stop(vdev_t *vd)

@@ -261,15 +257,9 @@ vdev_initialize_block_fill(void *buf, size_t len, void *unused)
 	(void) unused;

 	ASSERT0(len % sizeof (uint64_t));
-#ifdef _ILP32
-	for (uint64_t i = 0; i < len; i += sizeof (uint32_t)) {
-		*(uint32_t *)((char *)(buf) + i) = zfs_initialize_value;
-	}
-#else
 	for (uint64_t i = 0; i < len; i += sizeof (uint64_t)) {
 		*(uint64_t *)((char *)(buf) + i) = zfs_initialize_value;
 	}
-#endif
 	return (0);
 }

@@ -765,8 +755,8 @@ EXPORT_SYMBOL(vdev_initialize_stop_all);
 EXPORT_SYMBOL(vdev_initialize_stop_wait);
 EXPORT_SYMBOL(vdev_initialize_restart);

-ZFS_MODULE_PARAM(zfs, zfs_, initialize_value, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, initialize_value, U64, ZMOD_RW,
 	"Value written during zpool initialize");

-ZFS_MODULE_PARAM(zfs, zfs_, initialize_chunk_size, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, initialize_chunk_size, U64, ZMOD_RW,
 	"Size in bytes of writes by zpool initialize");
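Before this change, ILP32 builds truncated the pattern to 32 bits and stamped the buffer in 4-byte steps; with the tunable fixed at uint64_t, the single 8-byte stamping loop works on every platform. A tiny userspace sketch of the retained fill behavior (hypothetical buffer size, memcpy used to sidestep alignment concerns):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
    	static const uint64_t pattern = 0xdeadbeefdeadbeeeULL;
    	char buf[64];	/* stand-in for one initializing write */

    	/* Same 8-byte stamping the patched loop performs. */
    	for (size_t i = 0; i < sizeof (buf); i += sizeof (pattern))
    		memcpy(buf + i, &pattern, sizeof (pattern));

    	uint64_t first;
    	memcpy(&first, buf, sizeof (first));
    	printf("0x%llx\n", (unsigned long long)first);
    	return (0);
    }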
@@ -103,7 +103,7 @@
  * Size of rebuild reads; defaults to 1MiB per data disk and is capped at
  * SPA_MAXBLOCKSIZE.
  */
-static unsigned long zfs_rebuild_max_segment = 1024 * 1024;
+static uint64_t zfs_rebuild_max_segment = 1024 * 1024;

 /*
  * Maximum number of parallelly executed bytes per leaf vdev caused by a

@@ -121,7 +121,7 @@ static unsigned long zfs_rebuild_max_segment = 1024 * 1024;
  * With a value of 32MB the sequential resilver write rate was measured at
  * 800MB/s sustained while rebuilding to a distributed spare.
  */
-static unsigned long zfs_rebuild_vdev_limit = 32 << 20;
+static uint64_t zfs_rebuild_vdev_limit = 32 << 20;

 /*
  * Automatically start a pool scrub when the last active sequential resilver

@@ -1138,10 +1138,10 @@ vdev_rebuild_get_stats(vdev_t *tvd, vdev_rebuild_stat_t *vrs)
 	return (error);
 }

-ZFS_MODULE_PARAM(zfs, zfs_, rebuild_max_segment, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, rebuild_max_segment, U64, ZMOD_RW,
 	"Max segment size in bytes of rebuild reads");

-ZFS_MODULE_PARAM(zfs, zfs_, rebuild_vdev_limit, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, rebuild_vdev_limit, U64, ZMOD_RW,
 	"Max bytes in flight per leaf vdev for sequential resilvers");

 ZFS_MODULE_PARAM(zfs, zfs_, rebuild_scrub_enabled, INT, ZMOD_RW,
@@ -109,8 +109,8 @@
 #define	ZCP_NVLIST_MAX_DEPTH 20

 static const uint64_t zfs_lua_check_instrlimit_interval = 100;
-unsigned long zfs_lua_max_instrlimit = ZCP_MAX_INSTRLIMIT;
-unsigned long zfs_lua_max_memlimit = ZCP_MAX_MEMLIMIT;
+uint64_t zfs_lua_max_instrlimit = ZCP_MAX_INSTRLIMIT;
+uint64_t zfs_lua_max_memlimit = ZCP_MAX_MEMLIMIT;

 /*
  * Forward declarations for mutually recursive functions

@@ -1443,8 +1443,8 @@ zcp_parse_args(lua_State *state, const char *fname, const zcp_arg_t *pargs,
 	}
 }

-ZFS_MODULE_PARAM(zfs_lua, zfs_lua_, max_instrlimit, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_lua, zfs_lua_, max_instrlimit, U64, ZMOD_RW,
 	"Max instruction limit that can be specified for a channel program");

-ZFS_MODULE_PARAM(zfs_lua, zfs_lua_, max_memlimit, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_lua, zfs_lua_, max_memlimit, U64, ZMOD_RW,
 	"Max memory limit that can be specified for a channel program");
@@ -229,14 +229,14 @@ static zfsdev_state_t *zfsdev_state_list;
  * for zc->zc_nvlist_src_size, since we will need to allocate that much memory.
  * Defaults to 0=auto which is handled by platform code.
  */
-unsigned long zfs_max_nvlist_src_size = 0;
+uint64_t zfs_max_nvlist_src_size = 0;

 /*
  * When logging the output nvlist of an ioctl in the on-disk history, limit
  * the logged size to this many bytes. This must be less than DMU_MAX_ACCESS.
  * This applies primarily to zfs_ioc_channel_program().
  */
-static unsigned long zfs_history_output_max = 1024 * 1024;
+static uint64_t zfs_history_output_max = 1024 * 1024;

 uint_t zfs_fsyncer_key;
 uint_t zfs_allow_log_key;

@@ -7884,8 +7884,8 @@ zfs_kmod_fini(void)
 	tsd_destroy(&zfs_allow_log_key);
 }

-ZFS_MODULE_PARAM(zfs, zfs_, max_nvlist_src_size, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, max_nvlist_src_size, U64, ZMOD_RW,
 	"Maximum size in bytes allowed for src nvlist passed with ZFS ioctls");

-ZFS_MODULE_PARAM(zfs, zfs_, history_output_max, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, history_output_max, U64, ZMOD_RW,
 	"Maximum size in bytes of ZFS ioctl output that will be logged");
@@ -525,7 +525,7 @@ zfs_log_rename(zilog_t *zilog, dmu_tx_t *tx, uint64_t txtype, znode_t *sdzp,
  * called as soon as the write is on stable storage (be it via a DMU sync or a
  * ZIL commit).
  */
-static long zfs_immediate_write_sz = 32768;
+static int64_t zfs_immediate_write_sz = 32768;

 void
 zfs_log_write(zilog_t *zilog, dmu_tx_t *tx, int txtype,

@@ -815,5 +815,5 @@ zfs_log_acl(zilog_t *zilog, dmu_tx_t *tx, znode_t *zp,
 	zil_itx_assign(zilog, itx, tx);
 }

-ZFS_MODULE_PARAM(zfs, zfs_, immediate_write_sz, LONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs, zfs_, immediate_write_sz, S64, ZMOD_RW,
 	"Largest data block to write to zil");
@@ -176,7 +176,7 @@ zfs_access(znode_t *zp, int mode, int flag, cred_t *cr)
 	return (error);
 }

-static unsigned long zfs_vnops_read_chunk_size = 1024 * 1024; /* Tunable */
+static uint64_t zfs_vnops_read_chunk_size = 1024 * 1024; /* Tunable */

 /*
  * Read bytes from specified file into supplied buffer.

@@ -991,5 +991,5 @@ EXPORT_SYMBOL(zfs_write);
 EXPORT_SYMBOL(zfs_getsecattr);
 EXPORT_SYMBOL(zfs_setsecattr);

-ZFS_MODULE_PARAM(zfs_vnops, zfs_vnops_, read_chunk_size, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_vnops, zfs_vnops_, read_chunk_size, U64, ZMOD_RW,
 	"Bytes to read per chunk");
@@ -132,7 +132,7 @@ static int zil_nocacheflush = 0;
  * Any writes above that will be executed with lower (asynchronous) priority
  * to limit potential SLOG device abuse by single active ZIL writer.
  */
-static unsigned long zil_slog_bulk = 768 * 1024;
+static uint64_t zil_slog_bulk = 768 * 1024;

 static kmem_cache_t *zil_lwb_cache;
 static kmem_cache_t *zil_zcw_cache;

@@ -3946,7 +3946,7 @@ ZFS_MODULE_PARAM(zfs_zil, zil_, replay_disable, INT, ZMOD_RW,
 ZFS_MODULE_PARAM(zfs_zil, zil_, nocacheflush, INT, ZMOD_RW,
 	"Disable ZIL cache flushes");

-ZFS_MODULE_PARAM(zfs_zil, zil_, slog_bulk, ULONG, ZMOD_RW,
+ZFS_MODULE_PARAM(zfs_zil, zil_, slog_bulk, U64, ZMOD_RW,
 	"Limit in bytes slog sync writes per commit");

 ZFS_MODULE_PARAM(zfs_zil, zil_, maxblocksize, UINT, ZMOD_RW,
@@ -44,7 +44,7 @@ verify_runnable "global"

 function cleanup
 {
-	log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
+	log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
 	poolexists $TESTPOOL && destroy_pool $TESTPOOL
 	rm -f $disk1 $disk2
 }

@@ -77,13 +77,13 @@ do
 	# Make sure we can also set the ashift using the tunable.
 	#
 	log_must zpool create $TESTPOOL $disk1
-	log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT $ashift
+	log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT $ashift
 	log_must zpool add $TESTPOOL $disk2
 	exp=$(( (ashift <= max_auto_ashift) ? ashift : logical_ashift ))
 	log_must verify_ashift $disk2 $exp

 	# clean things for the next run
-	log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
+	log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
 	log_must zpool destroy $TESTPOOL
 	log_must zpool labelclear $disk1
 	log_must zpool labelclear $disk2

@@ -44,7 +44,7 @@ verify_runnable "global"

 function cleanup
 {
-	log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
+	log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
 	poolexists $TESTPOOL && destroy_pool $TESTPOOL
 	log_must rm -f $disk1 $disk2
 }

@@ -63,7 +63,7 @@ orig_ashift=$(get_tunable VDEV_FILE_PHYSICAL_ASHIFT)
 # the ashift using the -o ashift property should still
 # be honored.
 #
-log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT 16
+log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT 16

 typeset ashifts=("9" "10" "11" "12" "13" "14" "15" "16")
 for ashift in ${ashifts[@]}

@@ -42,7 +42,7 @@ verify_runnable "global"

 function cleanup
 {
-	log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
+	log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
 	poolexists $TESTPOOL1 && destroy_pool $TESTPOOL1
 	rm -f $disk1 $disk2
 }

@@ -61,7 +61,7 @@ orig_ashift=$(get_tunable VDEV_FILE_PHYSICAL_ASHIFT)
 # the ashift using the -o ashift property should still
 # be honored.
 #
-log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT 16
+log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT 16

 typeset ashifts=("9" "10" "11" "12" "13" "14" "15" "16")
 for ashift in ${ashifts[@]}

@@ -42,7 +42,7 @@ verify_runnable "global"

 function cleanup
 {
-	log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
+	log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
 	poolexists $TESTPOOL1 && destroy_pool $TESTPOOL1
 	rm -f $disk1 $disk2
 }

@@ -61,7 +61,7 @@ orig_ashift=$(get_tunable VDEV_FILE_PHYSICAL_ASHIFT)
 # the ashift using the -o ashift property should still
 # be honored.
 #
-log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT 16
+log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT 16

 typeset ashifts=("9" "10" "11" "12" "13" "14" "15" "16")
 for ashift in ${ashifts[@]}

@@ -44,7 +44,7 @@ verify_runnable "global"

 function cleanup
 {
-	log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
+	log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
 	poolexists $TESTPOOL1 && destroy_pool $TESTPOOL1
 	rm -f $disk1 $disk2
 }

@@ -63,7 +63,7 @@ orig_ashift=$(get_tunable VDEV_FILE_PHYSICAL_ASHIFT)
 # the ashift using the -o ashift property should still
 # be honored.
 #
-log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT 16
+log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT 16

 typeset ashifts=("9" "10" "11" "12" "13" "14" "15" "16")
 for ashift in ${ashifts[@]}

@@ -42,7 +42,7 @@ verify_runnable "global"

 function cleanup
 {
-	log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
+	log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT $orig_ashift
 	destroy_pool $TESTPOOL1
 	rm -f $disk
 }

@@ -60,7 +60,7 @@ orig_ashift=$(get_tunable VDEV_FILE_PHYSICAL_ASHIFT)
 # the ashift using the -o ashift property should still
 # be honored.
 #
-log_must set_tunable64 VDEV_FILE_PHYSICAL_ASHIFT 16
+log_must set_tunable32 VDEV_FILE_PHYSICAL_ASHIFT 16

 disk=$TEST_BASE_DIR/disk
 log_must mkfile $SIZE $disk