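# libzpool: the ZFS pool code from module/ built as a userspace library,
# linked by consumers such as zdb, ztest, and raidz_test.  ZLIB_CFLAGS is
# needed because the userspace build backs gzip compression with zlib.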
libzpool_la_CFLAGS = $(AM_CFLAGS) $(KERNEL_CFLAGS) $(LIBRARY_CFLAGS)
libzpool_la_CFLAGS += $(ZLIB_CFLAGS)
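
# FORCEDEBUG_CPPFLAGS keeps assertions enabled in this library regardless
# of the build type, and -DLIB_ZPOOL_BUILD lets shared headers tell the
# userspace libzpool build apart from the kernel build.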
libzpool_la_CPPFLAGS = $(AM_CPPFLAGS) $(FORCEDEBUG_CPPFLAGS)
libzpool_la_CPPFLAGS += -I$(srcdir)/include/os/@ac_system_l@/zfs
libzpool_la_CPPFLAGS += -DLIB_ZPOOL_BUILD
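
# Build and install the library, and include it in static analysis runs.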
lib_LTLIBRARIES += libzpool.la
CPPCHECKTARGETS += libzpool.la
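
# Sources that live in this directory (automake expands %D% to the
# fragment's directory) and ship in the distribution tarball: userspace
# shims for kernel primitives (kernel.c), a taskq implementation, and
# small helpers.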
dist_libzpool_la_SOURCES = \
	%D%/kernel.c \
	%D%/taskq.c \
	%D%/util.c
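
# Kernel sources compiled a second time for userspace; the nodist_ prefix
# keeps them out of this fragment's dist list, since they are distributed
# with the module tree.  The module/lua block is the embedded Lua
# interpreter that backs ZFS channel programs (`zfs program`).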
nodist_libzpool_la_SOURCES = \
	module/lua/lapi.c \
	module/lua/lauxlib.c \
	module/lua/lbaselib.c \
	module/lua/lcode.c \
	module/lua/lcompat.c \
	module/lua/lcorolib.c \
	module/lua/lctype.c \
	module/lua/ldebug.c \
	module/lua/ldo.c \
	module/lua/lfunc.c \
	module/lua/lgc.c \
	module/lua/llex.c \
	module/lua/lmem.c \
	module/lua/lobject.c \
	module/lua/lopcodes.c \
	module/lua/lparser.c \
	module/lua/lstate.c \
	module/lua/lstring.c \
	module/lua/lstrlib.c \
	module/lua/ltable.c \
	module/lua/ltablib.c \
	module/lua/ltm.c \
	module/lua/lvm.c \
	module/lua/lzio.c \
	\
	module/os/linux/zfs/abd_os.c \
	module/os/linux/zfs/arc_os.c \
	module/os/linux/zfs/trace.c \
	module/os/linux/zfs/vdev_file.c \
	module/os/linux/zfs/vdev_label_os.c \
	module/os/linux/zfs/zfs_debug.c \
	module/os/linux/zfs/zfs_racct.c \
	module/os/linux/zfs/zfs_znode.c \
	module/os/linux/zfs/zio_crypt.c \
	\
	module/zcommon/cityhash.c \
	module/zcommon/zfeature_common.c \
	module/zcommon/zfs_comutil.c \
	module/zcommon/zfs_deleg.c \
	module/zcommon/zfs_fletcher.c \
	module/zcommon/zfs_fletcher_aarch64_neon.c \
	module/zcommon/zfs_fletcher_avx512.c \
	module/zcommon/zfs_fletcher_intel.c \
	module/zcommon/zfs_fletcher_sse.c \
	module/zcommon/zfs_fletcher_superscalar.c \
	module/zcommon/zfs_fletcher_superscalar4.c \
	module/zcommon/zfs_namecheck.c \
	module/zcommon/zfs_prop.c \
	module/zcommon/zpool_prop.c \
	module/zcommon/zprop_common.c \
	\
	module/zfs/abd.c \
	module/zfs/aggsum.c \
	module/zfs/arc.c \
	module/zfs/blake3_zfs.c \
	module/zfs/blkptr.c \
	module/zfs/bplist.c \
	module/zfs/bpobj.c \
	module/zfs/bptree.c \
	module/zfs/bqueue.c \
	module/zfs/btree.c \
	module/zfs/brt.c \
	module/zfs/dbuf.c \
	module/zfs/dbuf_stats.c \
	module/zfs/ddt.c \
	module/zfs/ddt_log.c \
	module/zfs/ddt_stats.c \
	module/zfs/ddt_zap.c \
	module/zfs/dmu.c \
	module/zfs/dmu_diff.c \
	module/zfs/dmu_object.c \
	module/zfs/dmu_objset.c \
	module/zfs/dmu_recv.c \
	module/zfs/dmu_redact.c \
	module/zfs/dmu_send.c \
	module/zfs/dmu_traverse.c \
	module/zfs/dmu_tx.c \
	module/zfs/dmu_zfetch.c \
	module/zfs/dnode.c \
	module/zfs/dnode_sync.c \
	module/zfs/dsl_bookmark.c \
	module/zfs/dsl_crypt.c \
	module/zfs/dsl_dataset.c \
	module/zfs/dsl_deadlist.c \
	module/zfs/dsl_deleg.c \
	module/zfs/dsl_destroy.c \
	module/zfs/dsl_dir.c \
	module/zfs/dsl_pool.c \
	module/zfs/dsl_prop.c \
	module/zfs/dsl_scan.c \
	module/zfs/dsl_synctask.c \
	module/zfs/dsl_userhold.c \
	module/zfs/edonr_zfs.c \
	module/zfs/fm.c \
	module/zfs/gzip.c \
	module/zfs/hkdf.c \
	module/zfs/lz4.c \
	module/zfs/lz4_zfs.c \
	module/zfs/lzjb.c \
	module/zfs/metaslab.c \
	module/zfs/mmp.c \
	module/zfs/multilist.c \
	module/zfs/objlist.c \
	module/zfs/pathname.c \
	module/zfs/range_tree.c \
	module/zfs/refcount.c \
	module/zfs/rrwlock.c \
	module/zfs/sa.c \
	module/zfs/sha2_zfs.c \
	module/zfs/skein_zfs.c \
	module/zfs/spa.c \
	module/zfs/spa_checkpoint.c \
	module/zfs/spa_config.c \
	module/zfs/spa_errlog.c \
	module/zfs/spa_history.c \
	module/zfs/spa_log_spacemap.c \
	module/zfs/spa_misc.c \
	module/zfs/spa_stats.c \
	module/zfs/space_map.c \
	module/zfs/space_reftree.c \
	module/zfs/txg.c \
	module/zfs/uberblock.c \
	module/zfs/unique.c \
	module/zfs/vdev.c \
	module/zfs/vdev_draid.c \
	module/zfs/vdev_draid_rand.c \
	module/zfs/vdev_indirect.c \
	module/zfs/vdev_indirect_births.c \
	module/zfs/vdev_indirect_mapping.c \
	module/zfs/vdev_initialize.c \
	module/zfs/vdev_label.c \
	module/zfs/vdev_mirror.c \
	module/zfs/vdev_missing.c \
	module/zfs/vdev_queue.c \
	module/zfs/vdev_raidz.c \
	module/zfs/vdev_raidz_math.c \
	module/zfs/vdev_raidz_math_aarch64_neon.c \
	module/zfs/vdev_raidz_math_aarch64_neonx2.c \
	module/zfs/vdev_raidz_math_avx2.c \
	module/zfs/vdev_raidz_math_avx512bw.c \
	module/zfs/vdev_raidz_math_avx512f.c \
	module/zfs/vdev_raidz_math_powerpc_altivec.c \
	module/zfs/vdev_raidz_math_scalar.c \
	module/zfs/vdev_raidz_math_sse2.c \
	module/zfs/vdev_raidz_math_ssse3.c \
	module/zfs/vdev_rebuild.c \
	module/zfs/vdev_removal.c \
	module/zfs/vdev_root.c \
	module/zfs/vdev_trim.c \
	module/zfs/zap.c \
	module/zfs/zap_leaf.c \
	module/zfs/zap_micro.c \
	module/zfs/zcp.c \
	module/zfs/zcp_get.c \
	module/zfs/zcp_global.c \
	module/zfs/zcp_iter.c \
	module/zfs/zcp_set.c \
	module/zfs/zcp_synctask.c \
	module/zfs/zfeature.c \
	module/zfs/zfs_byteswap.c \
	module/zfs/zfs_chksum.c \
	module/zfs/zfs_fm.c \
	module/zfs/zfs_fuid.c \
	module/zfs/zfs_ratelimit.c \
	module/zfs/zfs_rlock.c \
	module/zfs/zfs_sa.c \
	module/zfs/zil.c \
	module/zfs/zio.c \
	module/zfs/zio_checksum.c \
	module/zfs/zio_compress.c \
	module/zfs/zio_inject.c \
	module/zfs/zle.c \
	module/zfs/zrlock.c \
	module/zfs/zthr.c
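
# Convenience libraries folded into libzpool.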
libzpool_la_LIBADD = \
	libicp.la \
	libunicode.la \
	libnvpair.la \
	libzstd.la \
	libzutil.la
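
# System libraries: LIBCLOCK_GETTIME is -lrt only where clock_gettime()
# is not already provided by libc (see the configure check).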
libzpool_la_LIBADD += $(LIBCLOCK_GETTIME) $(ZLIB_LIBS) -ldl -lm
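
# The userspace taskq and kernel shims are built on POSIX threads.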
libzpool_la_LDFLAGS = -pthread
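
# -Wl,-z,defs rejects unresolved symbols at link time.  Skip it under
# ASan, whose runtime symbols are resolved only when the sanitizer
# library is loaded.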
if !ASAN_ENABLED
libzpool_la_LDFLAGS += -Wl,-z,defs
endif
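
# FreeBSD builds additionally link libgeom, FreeBSD's userspace interface
# to the GEOM storage framework.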
if BUILD_FREEBSD
libzpool_la_LIBADD += -lgeom
endif
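
# libtool shared-library version (current:revision:age).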
libzpool_la_LDFLAGS += -version-info 5:0:0
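
# Only the AltiVec RAID-Z math objects are compiled with -maltivec, so
# the rest of the library still runs on PowerPC CPUs without AltiVec.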
if TARGET_CPU_POWERPC
module/zfs/libzpool_la-vdev_raidz_math_powerpc_altivec.$(OBJEXT): CFLAGS += -maltivec
module/zfs/libzpool_la-vdev_raidz_math_powerpc_altivec.l$(OBJEXT): CFLAGS += -maltivec
endif