Compare commits

146 Commits

Author SHA1 Message Date
Martin Matuška 459c99ff23 Fix block cloning between unencrypted and encrypted datasets
Block cloning from an encrypted dataset into an unencrypted dataset
and vice versa is not possible. The current code did allow cloning
unencrypted files into an encrypted dataset, causing a panic when
these were accessed. Block cloning between encrypted datasets is
currently supported within the same filesystem only.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Reviewed-by: Rob N <robn@despairlabs.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Martin Matuska <mm@FreeBSD.org>
Closes #15464
Closes #15465
2023-11-06 10:40:50 -08:00
Brian Behlendorf 95785196f2 Tag 2.2.0
New Features
- Block cloning (#13392)
- Linux container support (#14070, #14097, #12263)
- Scrub error log (#12812, #12355)
- BLAKE3 checksums (#12918)
- Corrective "zfs receive"
- Vdev and zpool user properties

Performance
- Fully adaptive ARC (#14359)
- SHA2 checksums (#13741)
- Edon-R checksums (#13618)
- Zstd early abort (#13244)
- Prefetch improvements (#14603, #14516, #14402, #14243, #13452)
- General optimization (#14121, #14123, #14039, #13680, #13613,
  #13606, #13576, #13553, #12789, #14925, #14948)

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2023-10-12 16:19:17 -07:00
Jason King 2bba9fd479 Zpool can start allocating from metaslab before TRIMs have completed
When doing a manual TRIM on a zpool, the metaslab being TRIMmed is
potentially re-enabled before all queued TRIM zios for that metaslab
have completed. Since TRIM zios have the lowest priority, it is 
possible to get into a situation where allocations occur from the 
just re-enabled metaslab and cut ahead of queued TRIMs to the same 
metaslab.  If the ranges overlap, this will cause corruption.

We were able to trigger this pretty consistently with a small single 
top-level vdev zpool (i.e. small number of metaslabs) with heavy 
parallel write activity while performing a manual TRIM against a 
somewhat 'slow' device (so TRIMs took a bit of time to complete). 
With the patch, we've not been able to recreate it since. It was on 
illumos, but inspection of the OpenZFS trim code looks like the 
relevant pieces are largely unchanged and so it appears it would be 
vulnerable to the same issue.
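
A hedged sketch of the kind of reproduction described above, using standard commands (pool and device names and the fio workload are illustrative, not taken from the original report):

```
# Small pool with a single top-level vdev, i.e. few metaslabs
zpool create tank /dev/da1

# Heavy parallel write activity in the background
fio --name=writers --directory=/tank --rw=write --numjobs=8 --size=1G &

# Manual TRIM against the (slow) device while writes are in flight
zpool trim tank

# Watch TRIM progress; before this fix, allocations from a re-enabled
# metaslab could overlap still-queued TRIM zios and corrupt data
zpool status -t tank
```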

Reviewed-by: Igor Kozhukhov <igor@dilos.org>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jason King <jking@racktopsystems.com>
Illumos-issue: https://www.illumos.org/issues/15939
Closes #15395
2023-10-12 11:05:20 -07:00
Brian Behlendorf 30ee2ee8ec spec: define _bashcompletiondir if undefined
Always define _bashcompletiondir in the spec file to a reasonable value
when it is undefined.  Required for `rpmbuild --rebuild <srpm>`.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #15396
2023-10-11 16:58:06 -07:00
Brian Behlendorf d7b6e470ff ZTS: Debug zfs_share_concurrent_shares failure
Update zfs_share_concurrent_shares test case to wait a few seconds
and recheck that the filesystem isn't shared.  The intent here is
to determine the nature of the error and whether it may be a race.

Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Reviewed-by:  Umer Saleem <usaleem@ixsystems.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #15379
2023-10-10 19:19:09 -07:00
Brian Behlendorf 04186d33be CI: Move perl script to dist_noinst_DATA
Everything listed in dist_noinst_SCRIPTS is assumed to be a shell
script; this generates a shellcheck SC1071 error since perl is not
supported.  Move update_authors.pl to dist_noinst_DATA with the
other perl scripts.

Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Reviewed-by: Rob N <robn@despairlabs.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #15392
2023-10-10 19:19:09 -07:00
Daniel Berlin 810fc49a3e Ensure we call fput when cloning fails due to different devices.
Right now, zpl_ioctl_ficlone and zpl_ioctl_ficlonerange do not call
put on the src fd if the source and destination are on two different
devices.  This leaves the source file held open in this case.

Reviewed-by: Kay Pedersen <mail@mkwg.de>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Daniel Berlin <dberlin@dberlin.org>
Closes #15386
2023-10-10 19:19:09 -07:00
Brian Behlendorf 75a7740574 ZTS: Remove zfs_allow_010_pos exception for FreeBSD
This issue should now be addressed by PR #15376 and the exception
for this test case can be removed.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by:  Umer Saleem <usaleem@ixsystems.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #15382
2023-10-10 19:19:09 -07:00
Tony Hutter a80e1f1c90 zvol: Temporarily disable blk-mq
There was a report of zvol data loss (#15351) after enabling blk-mq on a
zvol backed with 16k physical block sized disks.  Out of an abundance of
caution, do not allow the user to enable blk-mq until we can look into
the issue.

Note that blk-mq was not enabled by default on zvols.  It was always
opt-in via the zvol_use_blk_mq module parameter.
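
For reference, a sketch of how the opt-in worked on Linux (the sysfs path is the usual module-parameter location, assumed here); with this change, enabling it is temporarily refused:

```
# blk-mq for zvols was never on by default; it had to be requested, e.g.:
#   modprobe zfs zvol_use_blk_mq=1
# The current setting can be inspected via the module parameter:
cat /sys/module/zfs/parameters/zvol_use_blk_mq
```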

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Nguyen <tony.nguyen@delphix.com>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Addresses: #15351
Closes #15378
2023-10-10 19:19:09 -07:00
Rob Norris 111ae3364c AUTHORS: update with missing names
This is generated by scripts/update_authors.pl. I've looked over the
results fairly closely and while I don't think they're bad, they could
be improved somewhat, but also, I don't know if it's good form to just
update this without explicit consent from those named.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tino Reichardt <milky-zfs@mcmilk.de>
Signed-off-by: Rob Norris <robn@despairlabs.com>
Closes #15374
2023-10-10 19:19:09 -07:00
Rob Norris 3990273ffe update_authors: add missing names from commits to AUTHORS
Full description of what's happening in comments.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tino Reichardt <milky-zfs@mcmilk.de>
Signed-off-by: Rob Norris <robn@despairlabs.com>
Closes #15374
2023-10-10 19:19:09 -07:00
Rob Norris da93b72c91 mailmap: initial, trying to tidy up a lot of the commit history
This comes from the observation that a huge number of commit author
fields look quite strange (to my eyes), but quite often the
Signed-off-by: trailer has the correct name. For these I have updated
the name where it was obvious how to do so; however, I have not created
a mapping for the commit email to the Signed-off-by email, as whatever I
choose for email will become the prime candidate for inclusion in the
AUTHORS file, and care needs to be taken when acting without explicit
consent.

There's a small handful of commits that look like they were done on
local machines, or CI hosts, or similar, where the git authorship config
wasn't set up properly. It's obvious what this should look like, so I've
just done them.

The remainder is mapping Github noreply emails to either an
obviously-correct Signed-off-by trailer, or to an author from another
commit. This was mostly done by hand, so there may be errors, but I
think it's close. I do not understand where these come from - I know that
they're what commits made via Github web look like when there's no real
address set on the account, but I find it hard to believe that so many
of these came through the web, especially given the complexity of most
of the changes. I suspect there's some kind of merge helper tool in play
here. Regardless, the history is set now, and this tries to get it back
on track.

Obviously, all of this helps the history look tidy, but this also feeds
into the AUTHORS update script. See next commit.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tino Reichardt <milky-zfs@mcmilk.de>
Signed-off-by: Rob Norris <robn@despairlabs.com>
Closes #15374
2023-10-10 19:19:09 -07:00
Umer Saleem 9fa06c5574 ZTS: Fix verify_fs_mount in delegate_common.kshlib
verify_fs_mount expects the dataset to remain unmounted after
updating the mountpoint property in delegate_common.kshlib.

This commit updates verify_fs_mount and uses nomount parameter
for zfs set to update the mountpoint property without mounting
the dataset.

This fixes the zfs_allow_010_pos test case, which was failing on
FreeBSD after the behavior update in setting the mountpoint
property.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Umer Saleem <usaleem@ixsystems.com>
Closes #15376
2023-10-10 19:19:09 -07:00
Brian Behlendorf 8d47d2d579 ZTS: Move zpool_import_hostid_changed* tests to Linux runfile
Relocate the zpool_import_hostid_changed* test cases to the Linux
runfile until these tests are modified to run cleanly on FreeBSD.

Reviewed-by: George Melikov <mail@gmelikov.ru>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #15377
2023-10-10 19:19:09 -07:00
Alexander Motin f6e6e77ed8 FreeBSD: Reduce divergence from in-tree sources
This includes random small tweaks, primarily build fixes, required
when ZFS is built as part of FreeBSD base.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tino Reichardt <milky-zfs@mcmilk.de>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15368
2023-10-10 19:19:09 -07:00
Sam James 120d1787d7 config/zfs-build.m4: add Gentoo's bash-completion path
Follow up on e69ade32e1 by adding Gentoo's
bash completion path.

We should probably consider using/honouring the standard --with-bashcompletiondir
autoconf option as well, but that's something to do later.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Umer Saleem <usaleem@ixsystems.com>
Signed-off-by: Sam James <sam@gentoo.org>
Closes #15372
2023-10-10 19:19:09 -07:00
Brian Behlendorf 2407f30bda Tag 2.2.0-rc5
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2023-10-07 09:14:37 -07:00
Alexander Motin 9be8ddfb3c ZIL: Reduce maximum size of WR_COPIED to 7.5K
Benchmarks show that at certain write sizes the range lock/unlock
takes less time than the extra memory copy.  The exact threshold is
not obvious due to other overheads, but it is definitely lower than
the ~63KB used before.  Make it configurable, defaulting to 7.5KB,
that is, the nearest 8KB malloc() size minus the itx and lr structs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15353
2023-10-07 09:08:20 -07:00
siv0 3755cde22a rpm: Fix `make rpm` on Debian/Ubuntu
The recent patch to change the bash completion install location based
on the distribution ignored that it should still be possible to
create RPMs on Debian-derived systems. Additionally `make deb` itself
creates RPMs and converts them via `alien`.

This patch adds the bashcompletiondir variable to the rpm defines and
uses this for the location, where to get the bash completion file.

It still changes the location on Debian/Ubuntu systems in the final
packages from /etc/bash_completion.d to
/usr/share/bash-completion/completions

Fixes: e69ade32e1

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Closes #15355
Closes #15365
2023-10-07 09:08:20 -07:00
Rob Norris 33d7c2d165 import: require force when cachefile hostid doesn't match on-disk
Previously, if a cachefile is passed to zpool import, the cached config
is mostly offered as-is to ZFS_IOC_POOL_TRYIMPORT->spa_tryimport(), and
the results are taken as the canonical pool config and handed back to
ZFS_IOC_POOL_IMPORT.

In the course of its operation, spa_load() will inspect the pool and
build a new config from what it finds on disk. However, it then
regenerates a new config ready to import, and so rightly sets the hostid
and hostname for the local host in the config it returns.

Because of this, the "require force" checks always decide the pool is
exported and last touched by the local host, even if this is not true,
which is possible in a HA environment when MMP is not enabled. The pool
may be imported on another head, but the import checks still pass here,
so the pool ends up imported on both.

(This doesn't happen when a cachefile isn't used, because the pool
config is discovered in userspace in zpool_find_import(), and that does
find the on-disk hostid and hostname correctly).

Since the systemd zfs-import-cache.service unit uses cachefile imports,
this can lead to a system returning after a crash with a "valid"
cachefile on disk and automatically, quietly, importing a pool that has
already been taken up by a secondary head.

This commit causes the on-disk hostid and hostname to be included in the
ZPOOL_CONFIG_LOAD_INFO item in the returned config, and then changes the
"force" checks for zpool import to use them if present.

This method should give no change in behaviour for old userspace on new
kernels (they won't know to look for the new config items) and for new
userspace on old kernels (they won't find the new config items).
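
A hedged illustration of the resulting behaviour (the pool name and cachefile path are the conventional defaults, not part of this commit):

```
# What zfs-import-cache.service effectively runs:
zpool import -c /etc/zfs/zpool.cache -aN

# With this change, if the on-disk hostid/hostname indicate the pool was
# last used by another head, the import is refused and must be forced:
zpool import -c /etc/zfs/zpool.cache -f tank
```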

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15290
2023-10-07 09:08:20 -07:00
Rob Norris 2919784be2 tests: add tests for zpool import behaviour when hostid changes
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15290
2023-10-07 09:08:20 -07:00
Rob N 8495536f7f zfsconcepts: add description of block cloning
Here I'm trying to succinctly introduce the concept, the basics of its
construction, how it's different from dedup, how to use it, and where its
limitations lie, in four paragraphs and with enough searchable terms to
help the reader find more information both within OpenZFS and elsewhere.

Phew.

Sponsored-By: Klara, Inc.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Closes #15362
2023-10-07 09:08:20 -07:00
Alexander Motin bcd010d3a5 Reduce number of metaslab preload taskq threads.
Before this change ZFS created threads for 50% of CPUs for each top-
level vdev.  Plus it created the same number of threads for embedded
log groups (that have only one metaslab and don't need any preload).
As a result, on a system with 80 CPUs and a pool of 60 vdevs this
resulted in 4800 metaslab preload threads, which is absolutely insane.

This patch changes the preload threads to 50% of CPUs in one taskq
per pool, so on the mentioned system it will be only 40 threads.

Among other things this fixes zdb on the mentioned system and pool
on FreeBSD, which failed to create so many threads in one process.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15319
2023-10-07 09:08:20 -07:00
Martin Matuška c27277daac CI: add FreeBSD build with Cirrus CI
As a first step for automatic FreeBSD testing add a build and install
for FreeBSD versions 12.4, 13.2 and 14-snapshot using Cirrus CI.

Reviewed-by: Jose Luis Duran 
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Martin Matuska <mm@FreeBSD.org>
Closes #15332
2023-10-07 09:08:20 -07:00
Rob N bf54da84fb tests/block_cloning: sync before write in fallback test
We're still seeing this test fail intermittently (that is, the clone
happens), which must mean the write and the clone can still be happening
on different txgs.

It might be that there's still activity after the pool is created. So
here we force a sync before starting the write.
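
A minimal sketch of the approach (using the standard `zpool sync` command; the pool name follows the ZTS convention and the test commands are simplified here):

```
# Flush any pool-creation activity into its own txg first...
zpool sync $TESTPOOL

# ...so that the following write and the clone attempt the test then makes
# are much more likely to land in the same open txg.
dd if=/dev/urandom of=/$TESTPOOL/file bs=128k count=4
```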

Sponsored-By: Klara Inc.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Closes #15359
2023-10-07 09:08:20 -07:00
Alexander Motin 3158b5d718 ARC: Drop different size headers for crypto
To reduce memory usage, ZFS crypto allocated ARC headers 56 bytes
bigger only when the specific block was encrypted on disk.  It was a
nice optimization, except in some cases the code reallocated them on
the fly, which invalidated header pointers held by the buffers.  Since
the buffers use different locking, this created a number of races that
were originally covered (at least partially) by b_evict_lock, also
used to protect evictions.  But that lock went away as part of #14340.
As a result, as was found in #15293, arc_hdr_realloc_crypt() ended
up unprotected and causing a use-after-free.

Instead of introducing some even more elaborate locking, this patch
just drops the difference between normal and protected headers. It
costs us an additional 56 bytes per header, but with a couple of
patches saving 24 bytes, the net growth is only 32 bytes, for a total
header size of 232 bytes on FreeBSD, which IMHO is an acceptable price
for simplicity.  Additional locking would also end up consuming space,
time or both.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.
Closes #15293
Closes #15347
2023-10-07 09:08:20 -07:00
Alexander Motin ba7797c8db ARC: Remove b_bufcnt/b_ebufcnt from ARC headers
In most cases we do not care about the exact number of buffers linked
to the header; we just need to know if it is zero, non-zero, or one.
That can easily be checked by just looking at the b_buf pointer or, in
some cases, dereferencing it.

b_ebufcnt is read only once, and in that case we already traverse
the list as part of arc_buf_remove(), so a second traversal should
not be expensive.

This reduces L1 ARC header size by 8 bytes and full crypto header by
16 bytes, down to 176 and 232 bytes on FreeBSD respectively.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15350
2023-10-07 09:08:20 -07:00
Alexander Motin bc77a0c85e ARC: Remove b_cv from struct l1arc_buf_hdr
Earlier as part of #14123 I've removed one use of b_cv.  This patch
reuses the same approach to remove the other one from a much rarer
code path.

This saves 16 bytes of L1 ARC header on FreeBSD (reducing it from
200 to 184 bytes) and seems even more on Linux.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15340
2023-10-07 09:08:20 -07:00
Andrew Turner 1611b8e56e Add BTI landing pads to the AArch64 SHA2 assembly
The Arm Branch Target Identification (BTI) extension guards against
branching to an unintended instruction.

To support BTI add the landing pad instructions to the SHA2 functions.
These are from the hint space so are a nop on hardware that lacks BTI
support or if BTI isn't enabled.

Reviewed-by: Allan Jude <allan@klarasystems.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tino Reichardt <milky-zfs@mcmilk.de>
Signed-off-by: Andrew Turner <andrew.turner4@arm.com>
Closes #14862
Closes #15339
2023-10-04 12:36:21 -07:00
Umer Saleem 8015e2ea66 Add '-u' - nomount flag for zfs set
This commit adds '-u' flag for zfs set operation. With this flag,
mountpoint, sharenfs and sharesmb properties can be updated
without actually mounting or sharing the dataset.

Previously, if the dataset was unmounted and the mountpoint property
was updated, the dataset was not mounted after the update. This
behavior was changed in #15240: we mount the dataset whenever the
mountpoint property is updated, regardless of whether it is mounted.

To provide the user with the option to keep the dataset unmounted
and still update the mountpoint without mounting the dataset, the
'-u' flag can be used.

If any of the mountpoint, sharenfs or sharesmb properties is updated
with the '-u' flag, the property is set to the desired value but the
operation to (re/un)mount and/or (re/un)share the dataset is not
performed, and the dataset remains as it was before.
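
For example (dataset names are illustrative):

```
# Update the mountpoint of an unmounted dataset without mounting it
zfs set -u mountpoint=/var/data tank/data

# Update the NFS share property without (re)sharing the dataset
zfs set -u sharenfs=on tank/exports
```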

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Umer Saleem <usaleem@ixsystems.com>
Closes #15322
2023-10-03 15:41:46 -07:00
Umer Saleem c53bc3837c Improve the handling of sharesmb,sharenfs properties
For the sharesmb and sharenfs properties, the status of setting the
property is tied to whether we succeed in sharing the dataset or
not. In case sharing the dataset is not successful, this is
treated as overall failure of setting the property. In this case,
if we check the property after the failure, it is set to on.

This commit updates this behavior so that setting the share
properties is not reported as a failure when we fail to share
the dataset.

For the sharenfs property, if an access list is provided, syntax
errors in the access list/host addresses are not validated until
after setting the property, during the postfix phase while trying
to share the dataset. This is not correct, since the property has
already been set by the time we reach there.

Now, syntax errors in the access list/host addresses are checked
while validating the property list, before setting the property,
and a failure is returned to the user when there are errors in
the access list.
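
A hedged example of the sharenfs part (the option string and dataset name are illustrative):

```
# Access-list/host syntax is now checked while the property list is
# validated, so an invalid list fails before the property is changed:
zfs set sharenfs='rw=@10.0.0.0/8' tank/data
```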

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Ameer Hamza <ahamza@ixsystems.com>
Signed-off-by: Umer Saleem <usaleem@ixsystems.com>
Closes #15240
2023-10-03 15:41:46 -07:00
Umer Saleem e9dc31c74e Update the behavior of mountpoint property
There are some inconsistencies in the handling of mountpoint
property. This commit updates the behavior and makes it
consistent.

If the mountpoint property is set while the dataset is unmounted,
the property is simply updated, whether the new mountpoint is valid
or invalid, and the operation succeeds. The dataset remains
unmounted.

On the other hand, if the dataset is mounted and the mountpoint
property is updated to something invalid where a mount cannot
succeed (for example, a mountpoint inside a read-only directory),
this unmounts the dataset, sets the mountpoint property to the
requested value, and tries to mount the dataset. The mount
operation returns an error, and this error is treated as an overall
failure of setting the property, even though the property is
actually set.

To make the behavior consistent whether the dataset is mounted or
unmounted, we should try to mount the dataset whenever the
mountpoint property is updated. This results in mounting the dataset
if the canmount property is set to on, regardless of whether the
dataset was previously unmounted.

A failure in the mount operation while setting the mountpoint
property should not be treated as a failure, since the property is
actually set to the user-requested value.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Ameer Hamza <ahamza@ixsystems.com>
Signed-off-by: Umer Saleem <usaleem@ixsystems.com>
Closes #15240
2023-10-03 15:41:46 -07:00
Stoiko Ivanov b04b13ae79 contrib: debian: drop bashcompletion mangling after install
tested by running:
```
./configure --with-config=user; cp -a contrib/debian .
dpkg-buildpackage -b -uc -us
```
on a Debian 12 based system.

and checking where the completion file got installed.

Reviewed-by: Umer Saleem <usaleem@ixsystems.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Closes #15304
2023-10-03 09:06:07 -07:00
Stoiko Ivanov 7b1d421adf contrib: debian: switch to dh-sequence-dkms
Follows b191f9a13d3005621ead9a727b811892264505ef from Debian's
packaging team at:
https://salsa.debian.org/zfsonlinux-team/zfs/

The previous build-dependency is kept as option, to still be able to
build on older Debian based distros (e.g. Ubuntu 20.04).

Without this building on Debian 12/bookworm does not work, as `dkms`
is a virtual package.

Reviewed-by: Umer Saleem <usaleem@ixsystems.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Closes #15304
2023-10-03 09:06:07 -07:00
Stoiko Ivanov db5c3b4c76 contrib: bash_completion.d: make install destination vendor dependent
Certain Linux distributions (Debian/Ubuntu at least) expect
bash-completion snippets to be installed in
/usr/share/bash-completion/completions instead of
/etc/bash_completion.d.

This patch sets the bashcompletiondir variable based on the vendor,
inspired by similar settings for initdir and initconfdir.

It seems that commit 612b8dff5b
caused the file to be installed in the first place (thus the error
when building Debian packages only became apparent when testing a
2.2.0-rc4 build).

The change only sets the variable in Makefile context - the
rpm/zfs.spec.in file has the path hardcoded as
%{_sysconfdir}/bash_completion.d/zfs, but since running
```
./configure --sysconfdir=/myetc  ; make rpm
```
also results in all relevant files to be installed in /etc instead of
/myetc I assume this can remain as is.

Reviewed-by: Umer Saleem <usaleem@ixsystems.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Closes #15304
2023-10-03 09:06:07 -07:00
Chunwei Chen 0d870a1775 Fix invalid pointer access in trace_dbuf.h
In dnode_destroy, dn_objset is invalidated. However, it will later call
into dbuf_destroy, in which DTRACE_SET_STATE will try to access spa_name
via dn_objset, causing an illegal pointer access.

Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Chunwei Chen <david.chen@nutanix.com>
Closes #15333
2023-10-03 09:06:07 -07:00
George Amanakis 608741d062 Report ashift of L2ARC devices in zdb
Commit 8af1104f does not actually store the ashift of cache devices in
their label. However, in order to facilitate reporting the ashift
through zdb, we enable this in the present commit. We also document
how the retrieval of the ashift is done.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Closes #15331
2023-10-03 09:06:07 -07:00
Alexander Motin 3079bf2e6c Restrict short block cloning requests
If we are copying only one block and it is smaller than the
recordsize property, do not allow the destination to grow beyond one
block if it is not there yet.  Otherwise the destination will get
stuck with that block size forever, which can be as small as 512
bytes, no matter how big the destination grows later.

Reviewed-by: Kay Pedersen <mail@mkwg.de>
Reviewed-by: Rob Norris <rob.norris@klarasystems.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15321
2023-10-03 09:06:07 -07:00
Brian Behlendorf b34bf2d5f6 Tweak rebuild in-flight hard limit
Vendor testing shows we should be able to get a little more
performance if we further relax the hard limit which we're hitting.

Authored-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Closes #15324
2023-10-03 09:06:07 -07:00
Akash B 229ca7d738 Fix ENOSPC for extended quota
When unlinking multiple files from a pool at 100% capacity, it
was possible for ENOSPC to be returned after the first few unlinks.
This issue was fixed previously by PR #13172 but was then
reintroduced by PR #13839.

This is resolved using the existing mechanism of returning ERESTART
when over quota as long as we know enough space will shortly be
available after processing the pending deferred frees.

Also, updated the existing testcase which reliably reproduced the
issue without this patch.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Dipak Ghosh <dipak.ghosh@hpe.com>
Signed-off-by: Akash B <akash-b@hpe.com>
Closes #15312
2023-09-28 14:28:21 -07:00
Paul Dagnelie 9e36c5769f Don't allocate from new metaslabs
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #15307
Closes #15308
2023-09-28 14:28:21 -07:00
Paul Dagnelie d38f4664a6 Reduce trim min size even lower for tests to reduce flakiness
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: George Melikov <mail@gmelikov.ru>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #15315
2023-09-28 14:28:21 -07:00
Paul Dagnelie 99dc1fc340 ZTS: Fix introduced test bug in block_cloning_copyfilerange
Reviewed-by: John Wren Kennedy <john.kennedy@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #15316
2023-09-28 14:28:21 -07:00
Brian Behlendorf ba4dbbdae7 ZTS: Add additional exceptions
"zfs_share_concurrent_shares" may fail on FreeBSD and some Linux
distributions (Fedora).  Move it to the common list.

"zfs_allow_010_pos" has been observed to fail on FreeBSD 13.

Reviewed-by: George Melikov <mail@gmelikov.ru>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Reviewed-by: Umer Saleem <usaleem@ixsystems.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #15306
2023-09-28 14:28:21 -07:00
Paul Dagnelie 8526b12f3d Set timeout before creating pool in test
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Rob Norris <rob.norris@klarasystems.com>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #15309
2023-09-28 14:28:21 -07:00
Paul Dagnelie 0ce1b2ca19 Invoke zdb by guid to avoid import errors
The problem that was occurring is basically that a device was removed 
by ztest and replaced with another device. It was then reguided. The 
import then failed because there were two possible imports with the 
same name; one with the new guid, and one with the old. This can 
happen because the label writes from the device removal/replacement 
can be subject to ztest's error injection. 

The other ways to fix this would be to change the error injection to 
not trigger on removals (which may not be technically feasible), or 
to change the import code to not report configurations that are so 
short on devices (which would potentially have unpleasant end-user 
effects when trying to recover from data losses/device configuration 
issues).

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: George Melikov <mail@gmelikov.ru>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #15298
2023-09-28 14:28:21 -07:00
Alexander Motin 0aabd6b482 ZIL: Avoid dbuf_read() in ztest_get_data()
While working on similar patches for zfs and zvol in #15153 I
forgot about ztest.  Update it also so that we test the same code
paths as used in production.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15301
2023-09-28 14:28:21 -07:00
Rob N 5f30698670 tests/block_cloning: try harder to stay on same txg in fallback test
We've observed this test failing intermittently. When it does, the
"same block" check shows that both files have the same content, that is,
the file was cloned.

The only way this could have happened is if the open txg moved between
the dd and clonefile calls. That's possible because although we set
zfs_txg_timeout to be large, that only affects the wait time in the sync
thread at the start of a new txg; it doesn't change anything if its
currently waiting or working.

So here we just force the txgs to move immediately before, which should
get both operations onto the same txg as intended.

Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Closes #15303
2023-09-22 16:13:20 -07:00
Rob N a199cac6cd status: report pool suspension state under failmode=continue
When failmode=continue is set and the pool suspends, both 'zpool status'
and the 'zfs/pool/state' kstat ignore it and report the normal vdev tree
state. There's no clear indicator that the pool is suspended. This is
unlike suspend in failmode=wait, or suspend due to MMP check failure,
which both report "SUSPENDED" explicitly.

This commit changes it so SUSPENDED is reported for failmode=continue
the same as for other modes.

Rationale:

The historical behaviour of failmode=continue is roughly, "press on as
though all is well". To this end, the fact that the pool had suspended
was not shown, to maintain the façade that all is well.

It's unclear why hiding this information was considered appropriate. One
possibility is that it was expected that a true pool fault would always
be reported as DEGRADED or FAULTED, and that the pool could not suspend
without these happening.

That is not necessarily true, as vdev health and suspend state are only
loosely connected, such that a pool in (apparent) good health can be
suspended for good reasons, and of course a degraded pool does not lead
to suspension. Even if that expectation were true, there's still a
difference in urgency - a degraded pool may not need to be attended to
for hours, while a suspended pool is most often unusable until an
operator intervenes.

An operator that has set failmode=continue has presumably done so
because their workload is one that can continue to operate in a useful
way when the pool suspends. In this case the operator still needs a
clear indicator that there is a problem that needs attending to.
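
A brief illustration of the operator-visible change (the pool name is illustrative):

```
zpool set failmode=continue tank
# ...later, an I/O failure suspends the pool...
zpool status tank    # the pool state is now reported as SUSPENDED,
                     # matching the behaviour of failmode=wait
```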

Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Closes #15297
2023-09-22 16:13:20 -07:00
Paul Dagnelie 729507d309 Fix occasional rsend test crashes
We have occasional crashes in the rsend tests. Debugging revealed 
that this is because the send_worker thread is getting EINTR from 
splice(). This happens when a non-fatal signal is received during 
the syscall. We should retry the syscall, rather than exiting failure.
Tweak the loop to only break if the splice is finished or we receive 
a non-EINTR error.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #15273
2023-09-22 16:13:20 -07:00
Rob N 3af63683fe cmd: add 'help' subcommand to zpool and zfs
'program help subcommand' is a reasonably common pattern for
multifunction command-line programs. This commit adds support for that
style to the zpool and zfs commands.

When run as 'zpool help [<topic>]' or 'zfs help [<topic>]', executes the
'man' program on the PATH with the most likely manpage name for the
requested topic: "zpool-<topic>" or "zfs-<topic>" for subcommands, or
"zpool<topic>" or "zfs<topic>" for the "concepts" and "props" topics.
If no topic is supplied, uses the top "zpool" or "zfs" pages.
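
For example, with the mapping described above:

```
zpool help status      # shows the zpool-status(8) man page
zfs help snapshot      # shows the zfs-snapshot(8) man page
zfs help props         # shows zfsprops(7)
zpool help             # shows the top-level zpool(8) page
```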

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <robn@despairlabs.com>
Closes #15288
2023-09-22 16:13:20 -07:00
Paul Dagnelie 9aa1a2878e Fix incorrect expected error in ztest
There is an occasional ztest failure that looks like ztest: attach 
(/var/tmp/zloop-run/ztest.13a 570425344, draid1-1-0 532152320, 1) 
returned 22, expected 95. This is because the value that we return 
is EINVAL, but expected_error is set incorrectly.

Change the expected_error value to match both the comment and the 
actual error value.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #15295
2023-09-22 16:13:20 -07:00
Paul Dagnelie cc75c816c5 Fix l2arc_apply_transforms ztest crash
In #13375 we modified the allocation size of the buffer that we use 
to apply l2arc transforms to be the size of the arc hdr we're using, 
rather than the allocation size that will be in place on the disk, 
because sometimes the hdr size is larger. Unfortunately, sometimes 
the allocation size is larger, which means that we overflow the buffer 
in that case. This change modifies the allocation to be the max of 
the two values.

Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #15177
Closes #15248
2023-09-22 16:13:20 -07:00
Rob N 1c2aee7a52 tests: install missing PAM tests
'pam_change_unmounted' and 'pam_recursive' both exist and are referenced
by the test run config, but weren't being installed and so are excluded.
This gets them installed so they will run as expected.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <robn@despairlabs.com>
Closes #15291
2023-09-22 16:13:20 -07:00
Alexander Motin 62677576a7 ZIL: Fix potential race on flush deferring.
zil_lwb_set_zio_dependency() cannot set a write ZIO dependency on
the previous LWB's write ZIO if that one is already in its done
handler and has set its state to LWB_STATE_WRITE_DONE.  So
theoretically the done handler of the next LWB's write ZIO may run
before the done handler of the previous LWB's write ZIO completes.
In such a case we cannot defer flushes, since the flush issue
process is not locked.

This may fix some reported assertions of lwb_vdev_tree not being
empty inside zil_free_lwb().

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15278
2023-09-20 16:41:23 -07:00
Mateusz Guzik f7a07d76ee Retire z_nr_znodes
Added in ab26409db7 ("Linux 3.1 compat, super_block->s_shrink"), with
the only consumer which needed the count getting retired in 066e825221
("Linux compat: Minimum kernel version 3.10").

The counter gets in the way of not maintaining the list to begin with.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Closes #15274
2023-09-19 08:52:06 -07:00
Tony Hutter 54c6fbd378 zed: Allow autoreplace and fault LEDs for removed vdevs
Allow zed to autoreplace vdevs marked as REMOVED.  Also update
statechange-led zedlet to toggle fault LEDs for REMOVED vdevs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Closes #15281
2023-09-19 08:52:06 -07:00
наб 0ce7a068e9 check-zstd-symbols: also ignore __pfx_ symbols
Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b341b20d648bb7e9a3307c33163e7399f0913e66

Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #15282 
Closes #15284
2023-09-19 08:52:06 -07:00
Laura Hild 228b064d1b Remove implication that child `disk`s aren't vdevs in zpoolconcepts(7)
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Laura Hild <lsh@jlab.org>
Closes #15247
2023-09-19 08:52:06 -07:00
ednadolski-ix b9b9cdcdb1 update max_variance limit in zdb_block_size_histogram test for CI
Commit 2d7843401a had previously
updated this hardcoded limit to allow for CI testing. As there
is no deterministic pass/fail value, the need has arisen for
one more small increase.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Edmund Nadolski <edmund.nadolski@ixsystems.com>
Closes #15252
2023-09-19 08:52:06 -07:00
George Amanakis 11943656f9 Update the MOS directory on spa_upgrade_errlog()
spa_upgrade_errlog() does not update the MOS directory when the
head_errlog feature is enabled. In this case if spa_errlog_sync() is not
called, the MOS dir references the old errlog_last and errlog_sync
objects. Thus when doing a scrub a panic will occur:

Call Trace:
 dump_stack+0x6d/0x8b
 panic+0x101/0x2e3
 spl_panic+0xcf/0x102 [spl]
 delete_errlog+0x124/0x130 [zfs]
 spa_errlog_sync+0x256/0x260 [zfs]
 spa_sync_iterate_to_convergence+0xe5/0x250 [zfs]
 spa_sync+0x2f7/0x670 [zfs]
 txg_sync_thread+0x22d/0x2d0 [zfs]
 thread_generic_wrapper+0x83/0xa0 [spl]
 kthread+0x104/0x140
 ret_from_fork+0x1f/0x40

Fix this by updating the related MOS directory objects in
spa_upgrade_errlog().

Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Closes #15279 
Closes #15277
2023-09-19 08:51:00 -07:00
Tony Hutter c011ef8c91 Linux 6.5 compat: META (#15265)
Update the META file to reflect compatibility with the 6.5
kernel.

Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
2023-09-19 08:50:01 -07:00
Andrea Righi cacc599aa2 Linux 6.5 compat: spl: properly unregister sysctl entries
When register_sysctl_table() is unavailable we fail to properly
unregister sysctl entries under "kernel/spl".

This leads to errors like the following when spl is unloaded/reloaded,
making it impossible to properly reload the spl module:

[  746.995704] sysctl duplicate entry: /kernel/spl/kmem/slab_kvmem_total

Fix by cleaning up all the sub-entries inside "kernel/spl" when the
spl module is unloaded.

Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Closes #15239
2023-09-19 08:50:01 -07:00
Andrea Righi c7ee59a160 Linux 6.5 compat: safe cleanup in spl_proc_fini()
If we fail to create a proc entry in spl_proc_init() we may end up
calling unregister_sysctl_table() twice: one in the failure path of
spl_proc_init() and another time during spl_proc_fini().

Avoid the double call to unregister_sysctl_table() and while at it
refactor the code a bit to reduce code duplication.

This was accidentally introduced when the spl code was
updated for Linux 6.5 compatibility.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Ameer Hamza <ahamza@ixsystems.com>
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Closes #15234 
Closes #15235
2023-09-19 08:50:01 -07:00
Coleman Kane 58a707375f Linux 6.5 compat: Use copy_splice_read instead of filemap_splice_read
Using the filemap_splice_read function for the splice_read handler was
leading to occasional data corruption under certain circumstances. Favor
using copy_splice_read instead, which does not demonstrate the same
erroneous behavior under the tested failure cases.

Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Coleman Kane <ckane@colemankane.org>
Closes #15164
2023-09-19 08:50:01 -07:00
Coleman Kane 5a22de144a Linux 6.5 compat: replace generic_file_splice_read with filemap_splice_read
The generic_file_splice_read function was removed in Linux 6.5 in favor
of filemap_splice_read. Add an autoconf test for filemap_splice_read and
use it if it is found as the handler for .splice_read in the
file_operations struct. Additionally, ITER_PIPE was removed in 6.5. This
change removes the ITER_* macros that OpenZFS doesn't use from being
tested in config/kernel-vfs-iov_iter.m4. The removal of ITER_PIPE was
causing the test to fail, which also affected the code responsible for
setting the .splice_read handler, above. That behavior caused run-time
panics on Linux 6.5.

Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Coleman Kane <ckane@colemankane.org>
Closes #15155
2023-09-19 08:50:01 -07:00
Coleman Kane 31a4673c05 Linux 6.5 compat: register_sysctl_table removed
Additionally, the .child element of ctl_table has been removed in 6.5.
This change adds a new test for the pre-6.5 register_sysctl_table()
function, and uses the old code in that case. If it isn't found, then
the parentage entries in the tables are removed, and the register_sysctl
call is provided the paths of "kernel/spl", "kernel/spl/kmem", and
"kernel/spl/kstat" directly, to populate each subdirectory over three
calls, as is the new API.

Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Coleman Kane <ckane@colemankane.org>
Closes #15138
2023-09-19 08:50:01 -07:00
Brian Atkinson 3a68f3c50f Revert "Linux 6.5 compat: register_sysctl_table removed"
This reverts commit b35374fd64 as there
are error messages when loading the SPL module. Errors seemed to be tied
to a duplicate entry.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
Closes #15134
2023-09-19 08:50:01 -07:00
Coleman Kane 8be6308e85 Linux 4.20 compat: wrapper function for iov_iter type access
An iov_iter_type() function to access the "type" member of the struct
iov_iter was added at one point. Move the conditional logic to decide
which method to use for accessing it into a macro and simplify the
zpl_uio_init code.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Coleman Kane <ckane@colemankane.org>
Closes #15100
2023-09-19 08:50:01 -07:00
Coleman Kane 0bf2c5365e Linux 6.4 compat: iter_iov() function now used to get old iov member
The iov_iter->iov member is now iov_iter->__iov and must be accessed via
the accessor function iter_iov(). Create a wrapper that is conditionally
compiled to use the access method appropriate for the target kernel
version.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Coleman Kane <ckane@colemankane.org>
Closes #15100
2023-09-19 08:50:01 -07:00
Coleman Kane d76de9fb17 Linux 6.5 compat: blkdev changes
Multiple changes to the blkdev API were introduced in Linux 6.5. This
includes passing (void* holder) to blkdev_put, adding a new
blk_holder_ops* arg to blkdev_get_by_path, adding a new blk_mode_t type
that replaces uses of fmode_t, and removing an argument from the release
handler on block_device_operations that we weren't using. The open
function definition has also changed to take gendisk* and blk_mode_t, so
update it accordingly, too.

Implement local wrappers for blkdev_get_by_path() and
vdev_blkdev_put() so that the in-line calls are cleaner, and place the
conditionally-compiled implementation details inside of both of these
local wrappers. Both calls are exclusively used within vdev_disk.c, at
this time.

Add blk_mode_is_open_write() to test FMODE_WRITE / BLK_OPEN_WRITE
The wrapper function is now used for testing using the appropriate
method for the kernel, whether the open mode is writable or not.

Emphasize fmode_t arg in zvol_release is not used

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Coleman Kane <ckane@colemankane.org>
Closes #15099
2023-09-19 08:50:01 -07:00
Coleman Kane c0f075c06b Linux 6.5 compat: use disk_check_media_change when it exists
When disk_check_media_change() exists, then define
zfs_check_media_change() to simply call disk_check_media_change() on
the bd_disk member of its argument. Since disk_check_media_change()
is newer than when revalidate_disk was present in bops, we should
be able to safely do this via a macro, instead of recreating a new
implementation of the inline function that forces revalidation.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Coleman Kane <ckane@colemankane.org>
Closes #15101
2023-09-19 08:50:01 -07:00
Coleman Kane 6c2fc56916 Linux 6.5 compat: register_sysctl_table removed
Additionally, the .child element of ctl_table has been removed in 6.5.
This change adds a new test for the pre-6.5 register_sysctl_table()
function, and uses the old code in that case. If it isn't found, then
the parentage entries in the tables are removed, and the register_sysctl
call is provided the paths of "kernel/spl", "kernel/spl/kmem", and
"kernel/spl/kstat" directly, to populate each subdirectory over three
calls, as is the new API.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Coleman Kane <ckane@colemankane.org>
Closes #15098
2023-09-19 08:50:01 -07:00
Alexander Motin e96fbdba34 Add more constraints for block cloning.
- We cannot clone into files with a smaller block size if there is
  more than one block, since we cannot grow the block size.
- Block size must be a power of 2 if the destination offset != 0,
  since there can be no multiple blocks of non-power-of-2 size.

The first should handle the case when the destination file has several
blocks but is still not bigger than one block of the source file.
The second fixes panic in dmu_buf_hold_array_by_dnode() on attempt
to concatenate files with equal but non-power-of-2 block sizes.

While there, assert that error is reported if we made no progress.

Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
2023-09-10 14:02:52 -07:00
Brian Behlendorf 739db06ce7 Tag 2.2.0-rc4
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2023-09-07 16:11:40 -07:00
Volker Mauel 4da8c7d11e Intel QAT 1.7 compatibility
Based on the Intel QAT samples which are bundled in the 1.x drivers,
this is the preferred approach since API version 1.6.  See:

https://www.intel.de/content/www/de/de/download/19734/intel-quickassist-technology-driver-for-linux-hw-version-1-x.html?

Reviewed-by: Weigang Li <weigang.li@intel.com>
Signed-off-by: Volker Mauel <volkermauel@gmail.com>
Closes #15190
2023-09-07 16:10:52 -07:00
Umer Saleem 32949f2560 Relax error reporting in zpool import and zpool split
For zpool import and zpool split, zpool_enable_datasets is called
to mount and share all datasets in a pool. If there is an error
while mounting or sharing any dataset in the pool, the status of
import or split is reported as failure. However, the changes do
show up in zpool list.

This commit updates the error reporting in zpool import and zpool
split path. More descriptive messages are shown to user in case
there is an error during mount or share. Errors in mount or share
do not affect the overall status of zpool import and zpool split.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Umer Saleem <usaleem@ixsystems.com>
Closes #15216
2023-09-02 10:30:38 -07:00
Alexander Motin 79ac1b29d5 ZIL: Change ZIOs issue order.
In zil_lwb_write_issue(), after issuing lwb_root_zio/lwb_write_zio,
we have no right to access lwb->lwb_child_zio. If it was not there,
the first two ZIOs may have already completed and freed the lwb.
Issuing the ZIOs in the opposite order, from children to parent,
should keep the lwb valid till the end, since the lwb can be freed
only after the lwb_root_zio completion callback.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15233
2023-09-02 10:30:38 -07:00
Alexander Motin 7dc2baaa1f ZIL: Revert zl_lock scope reduction.
While I have no reports of it, I suspect a possible use-after-free
scenario when zil_commit_waiter() tries to dereference zcw_lwb
for an lwb already freed by zil_sync(), while zcw_done is not set.
Extending the zl_lock scope back to what it originally was should
block zil_sync() from freeing the lwb, closing this race.

This reverts #14959 and a couple of chunks of #14841.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15228
2023-09-02 10:30:38 -07:00
Alexander Motin 5a7cb0b065 ZIL: Tune some assertions.
In zil_free_lwb() we should first assert lwb_state, or the rest of
the assertions can be misleading if it is false.

Add lwb_state assertions in zil_lwb_add_block() to make sure we are
not trying to add elements to lwb_vdev_tree after it was processed.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15227
2023-09-02 10:30:38 -07:00
Dimitry Andric 400f56e3f8 dmu_buf_will_clone: change assertion to fix 32-bit compiler warning
Building module/zfs/dbuf.c for 32-bit targets can result in a warning:

In file included from
/usr/src/sys/contrib/openzfs/include/sys/zfs_context.h:97,
                 from /usr/src/sys/contrib/openzfs/module/zfs/dbuf.c:32:
/usr/src/sys/contrib/openzfs/module/zfs/dbuf.c: In function
'dmu_buf_will_clone':
/usr/src/sys/contrib/openzfs/lib/libspl/include/assert.h:116:33: error:
cast from pointer to integer of different size
[-Werror=pointer-to-int-cast]
  116 |         const uint64_t __left = (uint64_t)(LEFT);
  \
      |                                 ^
/usr/src/sys/contrib/openzfs/lib/libspl/include/assert.h:148:25: note:
in expansion of macro 'VERIFY0'
  148 | #define ASSERT0         VERIFY0
      |                         ^~~~~~~
/usr/src/sys/contrib/openzfs/module/zfs/dbuf.c:2704:9: note: in
expansion of macro 'ASSERT0'
 2704 |         ASSERT0(dbuf_find_dirty_eq(db, tx->tx_txg));
      |         ^~~~~~~

This is because dbuf_find_dirty_eq() returns a pointer, which, if
pointers are 32-bit, results in a warning about the cast to uint64_t.

Instead, use the ASSERT3P() macro, with == and NULL as second and third
arguments, which should work regardless of the target's bitness.

Reviewed-by: Kay Pedersen <mail@mkwg.de>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Dimitry Andric <dimitry@andric.com>
Closes #15224
2023-09-01 09:33:33 -07:00
Serapheim Dimitropoulos 63159e5bda checkstyle: fix action failures
Reviewed-by: Don Brady <dev.fs.zfs@gmail.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Closes #15220
2023-09-01 09:33:33 -07:00
Paul Dagnelie 7eabb0af37 Try to clarify wording to reduce zpool add incidents
Try to clarify wording to reduce zpool add incidents.
Add an attach example.
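
A hedged sketch of the distinction the reworded text tries to drive home (device names are illustrative):

```
# attach: mirror a new disk onto an existing device, preserving redundancy
zpool attach tank sda sdb

# add: creates a new top-level vdev; if done by accident instead of attach,
# it changes the pool layout and is not simply undone
zpool add tank mirror sdc sdd
```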

Reviewed-by: Rich Ercolani <Rincebrain@gmail.com>
Reviewed-by: George Melikov <mail@gmelikov.ru>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #15179
2023-08-27 08:25:42 -07:00
Rich Ercolani c65aaa8387 Avoid save/restoring AMX registers to avoid a SPR erratum
Intel SPR erratum SPR4 says that if you trip into a vmexit while
doing FPU save/restore, your AMX register state might misbehave...
and by misbehave, I mean save all zeroes incorrectly, leading to
explosions if you restore it.

Since we're not using AMX for anything, the simple way to avoid
this is to just not save/restore those when we do anything, since
we're killing preemption of any sort across our save/restores.

If we ever decide to use AMX, it's not clear that we have any
way to mitigate this, on Linux...but I am not an expert.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rich Ercolani <rincebrain@gmail.com>
Closes #14989
Closes #15168
2023-08-27 08:25:42 -07:00
Brian Behlendorf e99e684b33 zed: update zed.d/statechange-slot_off.sh
The statechange-slot_off.sh zedlet which was added in #15200
needed to be installed so it's included by the packages.

Additional testing has also shown that multiple retries are
often needed for the script to operate reliably.

Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #15210
2023-08-27 08:25:42 -07:00
наб 1b696429c1 Make zoned/jailed zfsprops(7) make more sense.
- Distribute zfs-[un]jail.8 on FreeBSD and zfs-[un]zone.8 on Linux
- zfsprops.7: mirror zoned/jailed, only available on respective platforms

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #15161
2023-08-27 08:25:42 -07:00
Rob N 084ff4abd2 tests/block_cloning: rename and document get_same_blocks helper
`get_same_blocks` is a helper to compare two files and return a list of
the blocks that are clones of each other. Its very necessary for block
cloning tests.

Previously it was incorrectly called `unique_blocks`, which is the
_inverse_ of what it does (an early version did list unique blocks; it
was changed but the name was not). So if nothing else, it should be
called `duplicate_blocks`.

But, keeping the details of a clone operation in your head is actually
quite difficult, without the additional overhead of wondering how the
tools work. So I've renamed it to better describe what it does, added a
usage note, and changed it to return block indexes from 0 instead of 1,
to match how L0 blocks are normally counted.

Reviewed-by: Umer Saleem <usaleem@ixsystems.com>
Reviewed-by:  Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <robn@despairlabs.com>
Closes #15181
2023-08-26 11:18:11 -07:00
Serapheim Dimitropoulos ab999406fe Update outdated assertion from zio_write_compress
As part of some internal gang block testing within Delphix
we hit the assertion removed by this patch. The assertion
was triggered by a ZIO that had two copies and was a gang
block making the following expression equal to 3:
```
MIN(zp->zp_copies + BP_IS_GANG(bp), spa_max_replication(spa))
```
and failing when we expected the above to be equal to
`BP_GET_NDVAS(bp)`.

The assertion is no longer valid since the following commit:
```
commit 14872aaa4f
Author: Matthew Ahrens <matthew.ahrens@delphix.com>
Date:   Mon Feb 6 09:37:06 2023 -0800

  EIO caused by encryption + recursive gang
```

The above commit changed gang block headers so they can't
have more than 2 copies but the assertion in question from
this PR was never updated.
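
For reference, a minimal arithmetic sketch of why the removed check fired
for such a ZIO (the values are illustrative, not taken from the test run):
```
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int
main(void)
{
	int zp_copies = 2;            /* ZIO requested two copies          */
	int bp_is_gang = 1;           /* BP_IS_GANG(bp) evaluates to 1     */
	int spa_max_replication = 3;  /* typical replication cap           */
	int bp_get_ndvas = 2;         /* gang headers now hold only 2 DVAs */

	/* The removed assertion effectively compared these two values. */
	int expected = MIN(zp_copies + bp_is_gang, spa_max_replication);
	printf("expected %d, actual %d -> assertion %s\n",
	    expected, bp_get_ndvas,
	    expected == bp_get_ndvas ? "holds" : "fires");
	return (0);
}
```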

Reviewed-by: George Wilson <george.wilson@delphix.com>
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Signed-off-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Closes #15180
2023-08-26 11:18:11 -07:00
Tony Hutter d19304ffee zed: Add zedlet to power off slot when drive is faulted
If ZED_POWER_OFF_ENCLOUSRE_SLOT_ON_FAULT is enabled in zed.rc, then
power off the drive's slot in the enclosure if it becomes FAULTED.
This can help silence misbehaving drives.  This assumes your drive
enclosure fully supports slot power control via sysfs.
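
As an illustration of the sysfs mechanism the zedlet relies on (the slot
path and retry count below are assumptions, not taken from the script),
a userspace sketch might look like:
```
#include <stdio.h>

/*
 * Sketch only: power off one enclosure slot by writing "off" to its
 * sysfs power_status attribute.  The slot path is hypothetical; real
 * systems expose it under /sys/class/enclosure/<id>/<slot>/.
 */
static int
slot_power_off(const char *slot_path)
{
	char path[512];
	FILE *fp;

	snprintf(path, sizeof (path), "%s/power_status", slot_path);
	if ((fp = fopen(path, "w")) == NULL)
		return (-1);
	fprintf(fp, "off\n");
	return (fclose(fp));
}

int
main(void)
{
	/* Retry a few times; a single attempt may not always stick. */
	for (int i = 0; i < 5; i++) {
		if (slot_power_off("/sys/class/enclosure/0:0:8:0/Slot01") == 0)
			return (0);
	}
	return (1);
}
```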

Reviewed-by: @AllKind
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Closes #15200
2023-08-25 13:33:40 -07:00
Rob N 92f095a903 copy_file_range: fix fallback when source create on same txg
In 019dea0a5 we removed the conversion from EAGAIN->EXDEV inside
zfs_clone_range(), but forgot to add a test for EAGAIN to the
copy_file_range() entry points to trigger fallback to a content copy.

This commit fixes that.
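
From userspace the contract is simply that copy_file_range(2) succeeds
even when cloning cannot be used; a hedged sketch of a caller that also
copes with filesystems that still surface EXDEV or EOPNOTSUPP:
```
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/*
 * Copy len bytes, preferring copy_file_range(2) and falling back to a
 * plain read/write loop when the kernel or filesystem reports it cannot
 * service the request (illustrative error handling, not the ZFS code).
 */
static int
copy_with_fallback(int src, int dst, size_t len)
{
	while (len > 0) {
		ssize_t n = copy_file_range(src, NULL, dst, NULL, len, 0);
		if (n > 0) {
			len -= (size_t)n;
			continue;
		}
		if (n == 0)
			return (0);	/* EOF on the source */
		if (errno == EXDEV || errno == EOPNOTSUPP || errno == EINVAL) {
			char buf[65536];
			ssize_t r = 0;
			while (len > 0 && (r = read(src, buf,
			    len < sizeof (buf) ? len : sizeof (buf))) > 0) {
				if (write(dst, buf, (size_t)r) != r)
					return (-1);
				len -= (size_t)r;
			}
			return (r < 0 ? -1 : 0);
		}
		return (-1);
	}
	return (0);
}
```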

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <robn@despairlabs.com>
Closes #15170
Closes #15172
2023-08-25 13:33:40 -07:00
Umer Saleem 645a7e4d95 Move zinject from openzfs-zfs-test to openzfs-zfsutils
For native Debian packaging, the zinject binary and man page are
packaged in the ZFS test package. zinject is not directly related
to ZTS and should be packaged with the other utilities, as it is
in the zfs_<ver>.rpm/deb packages.

This commit moves zinject binary and man page from openzfs-zfs-test
to openzfs-zfsutils package.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Ameer Hamza <ahamza@ixsystems.com>
Signed-off-by: Umer Saleem <usaleem@ixsystems.com>
Closes #15160
2023-08-25 13:33:40 -07:00
Rafael Kitover 95649854ba dracut: support mountpoint=legacy for root dataset
Support mountpoint=legacy for the root dataset in the dracut zfs support
scripts.

mountpoint=/ or mountpoint=/sysroot also works.

Change zfs-env-bootfs.service to add zfsutil to BOOTFSFLAGS only for
root datasets with mountpoint != legacy.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Signed-off-by: Rafael Kitover <rkitover@gmail.com>
Closes #15149
2023-08-25 13:33:40 -07:00
oromenahar 895cb689d3 zfs_clone_range should return descriptive error codes
Return the more descriptive error codes instead of `EXDEV` when
the parameters don't match the requirements of the clone function.
Updated the comments in `brt.c` accordingly.
The first three errors are just invalid parameters, which ZFS cannot
handle.
The fourth error indicates that the block which should be cloned was
created, and then cloned or modified, in the same transaction
group (`txg`).
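
A hedged sketch of how a caller might interpret such errors; the exact
set of codes is whatever zfs_clone_range() reports, so this mapping is
only illustrative:
```
#include <errno.h>
#include <string.h>

/* Illustrative interpretation of clone errors, not the authoritative set. */
static const char *
clone_errno_hint(int err)
{
	switch (err) {
	case EINVAL:	 /* misaligned or otherwise invalid parameters */
		return ("invalid parameters for cloning");
	case EOPNOTSUPP: /* pool or kernel cannot clone at all */
		return ("cloning not supported, fall back to a copy");
	case EAGAIN:	 /* source written or cloned in the same txg */
		return ("source block too new, wait for txg sync or copy");
	case EXDEV:	 /* different pools or other cross-device limits */
		return ("cross-device clone not possible");
	default:
		return (strerror(err));
	}
}
```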

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Rob Norris <rob.norris@klarasystems.com>
Signed-off-by: Kay Pedersen <mail@mkwg.de>
Closes #15148
2023-08-25 13:33:40 -07:00
наб 6bdc7259d1 libzfs: sendrecv: send_progress_thread: handle SIGINFO/SIGUSR1
POSIX timers target the process, not the thread (as does SIGINFO),
so we need to block it in the main thread which will die if interrupted.

Ref: https://101010.pl/@ed1conf@bsd.network/110731819189629373
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #15113
2023-08-25 13:33:40 -07:00
Ryan Lahfa 1e488eec60 linux/spl/kmem_cache: undefine `kmem_cache_alloc` before defining it
When compiling a kernel with both bcachefs and zfs,
the two macros collide, making it impossible
to have both filesystems.

It is sufficient to just undefine the macro before defining it.

As for why this should be in ZFS rather than bcachefs: bcachefs is
currently not an in-tree filesystem, but it has a reasonably high
chance of being included soon.

This avoids the breakage in ZFS early; this patch may be distributed
downstream in NixOS and is already used there.
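
A minimal preprocessor sketch of the idea (the macro body shown is
illustrative of the SPL wrapper, not copied from it):
```
/*
 * If another header already defined kmem_cache_alloc as a macro, drop
 * that definition before providing the SPL one, so both filesystems can
 * be built into the same kernel.
 */
#ifdef kmem_cache_alloc
#undef kmem_cache_alloc
#endif
#define kmem_cache_alloc(skc, flags)	spl_kmem_cache_alloc((skc), (flags))
```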

Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ryan Lahfa <ryan@lahfa.xyz>
Closes #15144
2023-08-25 13:33:40 -07:00
Mateusz Piotrowski c418edf1d3 Fix some typos
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Mateusz Piotrowski <0mp@FreeBSD.org>
Closes #15141
2023-08-25 13:33:40 -07:00
Alexander Motin df8c9f351d ZIL: Second attempt to reduce scope of zl_issuer_lock.
The previous patch #14841 appeared to have a significant flaw, causing
deadlocks if the zl_get_data callback got blocked waiting for TXG sync.  I
already handled some such cases in the original patch, but issue
#14982 showed cases that were impossible to solve in that design.

This patch fixes the problem by postponing log block allocation till
the very end, just before the zios are issued, leaving nothing after
that point that could block and cause deadlocks.  Any sleeps before that
point are now allowed and do not block the sync thread.  This requires a
slightly more complicated lwb state machine to allocate blocks and issue
zios in the proper order.  But with the removal of the special early-issue
workarounds the new code is much cleaner, and should even be more
efficient.

Since this patch uses null zios between writes, I've found that null
zios do not wait for the logical children's ready status in zio_ready(),
which makes the parent write proceed prematurely, producing incorrect
log blocks.  Adding ZIO_CHILD_LOGICAL_BIT to zio_wait_for_children()
fixes it.

Reviewed-by: Rob Norris <rob.norris@klarasystems.com>
Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15122
2023-08-25 11:58:44 -07:00
Alexander Motin bb31ded68b ZIL: Replay blocks without next block pointer.
If we get a next-block allocation error during a log write, we trigger
a transaction commit.  But the block we have just completed is still
written, and the transactions it covers will be acknowledged normally.
If after that we ignore the block during replay just because it is
the last in the chain, we may fail to replay transactions that we
have already acknowledged as synced, which is not right.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15132
2023-08-25 11:58:44 -07:00
Alexander Motin c1801cbe59 ZIL: Avoid dbuf_read() before dmu_sync().
In most cases dmu_sync() works with dirty records directly and does
not need actual data. The only exception is dmu_sync_late_arrival().
To save some CPU time use dmu_buf_hold_noread*() in z*_get_data()
and explicitly call dbuf_read() in dmu_sync_late_arrival(). There
is also a chance that by that time TXG will already be synced and
we won't have to do it at all.

Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15153
2023-08-25 11:58:44 -07:00
Alexander Motin ffaedf0a44 Remove fastwrite mechanism.
Fastwrite was introduced many years ago to improve the spread of ZIL
writes between multiple top-level vdevs by tracking the number of
allocated but not yet written blocks and choosing the vdev with the
smaller count.  It was supposed to reduce the ZIL's knowledge about
allocation, but it actually made the ZIL report allocations to the
allocation code even more actively, complicating both the ZIL and
metaslab code.

On top of that, it seems the ZIO_FLAG_FASTWRITE setting in dmu_sync()
was lost many years ago, and that was one of the declared benefits. Plus,
the introduction of the embedded log metaslab class solved another
problem with the allocation rotor accounting for both normal and log
allocations, since in most cases those are now in different metaslab
classes.

After all that, I'd prefer to simplify the already too complicated ZIL,
ZIO and metaslab code if the benefit of the complexity is not obvious.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15107
2023-08-25 11:58:44 -07:00
Alexander Motin 02ce9030e6 Avoid waiting in dmu_sync_late_arrival().
The transaction there does not produce any dirty data or log blocks,
so it should not be throttled. All other cases wait for TXG sync, by
which time the log block we are writing will be obsolete, so we can
skip waiting and just return error here instead.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15096
2023-08-25 11:58:44 -07:00
Serapheim Dimitropoulos 0ae7bfc0a4 zpool_vdev_remove() should handle EALREADY error return
When the vdev properties feature was merged, an extra check
was added in `spa_vdev_remove_top_check()` which checked
whether the vdev that we want to remove is already being
removed and, if so, returned an EALREADY error.

```
static int
spa_vdev_remove_top_check(vdev_t *vd)
{
	... <snip> ...
	/*
	 * This device is already being removed
	 */
	if (vd->vdev_removing)
		return (SET_ERROR(EALREADY));
```

Before that change we'd still fail with an error but it
was a more generic one - here is the check that failed
later in the same function:
```
	/*
	 * There can not be a removal in progress.
	 */
	if (spa->spa_removing_phys.sr_state == DSS_SCANNING)
		return (SET_ERROR(EBUSY));
```

Changing the error code returned from that function changed
the behavior of the removal's library interface exposed to
userland - `spa_vdev_remove()` now returns `EZFS_UNKNOWN`
instead of the `EZFS_EBUSY` it returned before.

This patch adds logic to make `spa_vdev_remove()` mindful
of the new EALREADY code and propagate `EZFS_EBUSY`,
reverting to the previously established semantics of that
function.
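
A hedged sketch of the userland-side mapping being restored; the
constants are hypothetical stand-ins for the libzfs error codes named
above:
```
#include <errno.h>

/* Hypothetical stand-ins for the libzfs error codes named in the commit. */
enum { SKETCH_EZFS_EBUSY = 1, SKETCH_EZFS_UNKNOWN = 2 };

/*
 * Illustrative mapping only: make "this vdev is already being removed"
 * (EALREADY) surface as the same busy error callers saw historically,
 * instead of falling through to the generic/unknown case.
 */
static int
map_remove_errno(int err)
{
	switch (err) {
	case EALREADY:	/* this vdev is already being removed */
	case EBUSY:	/* a removal is in progress elsewhere  */
		return (SKETCH_EZFS_EBUSY);
	default:
		return (SKETCH_EZFS_UNKNOWN);
	}
}
```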

Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Signed-off-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Closes #15013
Closes #15129
2023-08-02 08:54:09 -07:00
наб bd1eab16eb linux: zfs: ctldir: set [amc]time to snapshot's creation property
If looking up a snapdir inode failed, hold pool config – hold the 
snapshot – get its creation property – release it – release it, 
then use that as the [amc]time in the allocated inode. If that 
fails then fall back to current time. No performance impact since 
this is only done when allocating a new snapdir inode.
                                                       
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #15110
Closes #15117
2023-08-02 08:53:45 -07:00
Zach Dykstra b3c1807d77 readmmap.c: fix building with MUSL libc
glibc includes sys/types.h from stdlib.h. This is not the case for MUSL,
so explicitly include it. Fixes usage of uint_t.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Zach Dykstra <dykstra.zachary@gmail.com>
Closes #15130
2023-08-02 08:53:06 -07:00
oromenahar b5e2456333 Check the return value in clonefile test
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Rob Norris <rob.norris@klarasystems.com>
Signed-off-by: Kay Pedersen <mail@mkwg.de>
Closes #15128
2023-08-02 08:52:40 -07:00
Rob N c47f0f4417 linux/copy_file_range: properly request a fallback copy on Linux <5.3
Before Linux 5.3, the filesystem's copy_file_range handler had to signal
back to the kernel that we can't fulfill the request and that it should
fall back to a content copy. This is done by returning -EOPNOTSUPP.

This commit converts the EXDEV return from zfs_clone_range to
EOPNOTSUPP, to force the kernel to fall back for all the valid reasons it
might be unable to clone. Without it the copy_file_range() syscall will
return EXDEV to userspace, breaking its semantics.

Add test for copy_file_range fallbacks.  copy_file_range should always
fallback to a content copy whenever ZFS can't service the request with
cloning.

Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <robn@despairlabs.com>
Closes #15131
2023-08-02 08:52:40 -07:00
Rob N 12f2b1f65e zdb: include cloned blocks in block statistics
This gives `zdb -b` support for clone blocks.

Previously, it didn't know what clones were, so it would count their space
allocation multiple times and then report leaked space (or, in debug,
would assert when trying to claim blocks a second time).

This commit fixes those bugs, and reports the number of clones and the
space "used" (saved) by them.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
Closes #15123
2023-08-02 08:52:40 -07:00
Brian Behlendorf 4a104ac047 Tag 2.2.0-rc3
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2023-07-27 16:15:44 -07:00
oromenahar c24a480631 BRT should return EOPNOTSUPP
Return the more descriptive EOPNOTSUPP instead of EXDEV when the
storage pool doesn't support block cloning.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Rob Norris <rob.norris@klarasystems.com>
Signed-off-by: Kay Pedersen <mail@mkwg.de>
Closes #15097
2023-07-27 16:11:54 -07:00
Rob Norris 36d1a3ef4e zts: block cloning tests
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
Closes #15050
Closes #405
Closes #13349
2023-07-26 08:46:58 -07:00
Rob Norris 2768dc04cc linux: implement filesystem-side copy/clone functions for EL7
Redhat have backported copy_file_range and clone_file_range to the EL7
kernel using an "extended file operations" wrapper structure. This
connects all that up to let cloning work there too.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
Closes #15050
2023-07-26 08:46:58 -07:00
Rob Norris 3366ceaf3a linux: implement filesystem-side clone ioctls
Prior to Linux 4.5, the FICLONE etc ioctls were specific to BTRFS, and
were implemented as regular filesystem-specific ioctls. This implements
those ioctls directly in OpenZFS, allowing cloning to work on older
kernels.

There's no need to gate these behind version checks; on later kernels
Linux will simply never deliver these ioctls, instead calling the
appropriate VFS op.
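
For context, the userspace request looks the same on any kernel version;
a minimal sketch using the FICLONE ioctl:
```
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* FICLONE */

int
main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
		return (1);
	}
	int src = open(argv[1], O_RDONLY);
	int dst = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (src < 0 || dst < 0)
		return (1);

	/* Ask the filesystem to clone src into dst (reflink). */
	if (ioctl(dst, FICLONE, src) != 0) {
		perror("FICLONE");
		return (1);
	}
	return (0);
}
```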

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
Closes #15050
2023-07-26 08:46:58 -07:00
Rob Norris 5d12545da8 linux: implement filesystem-side copy/clone functions
This implements the Linux VFS ops required to service the file
copy/clone APIs:

  .copy_file_range    (4.5+)
  .clone_file_range   (4.5-4.19)
  .dedupe_file_range  (4.5-4.19)
  .remap_file_range   (4.20+)

Note that dedupe_file_range() and remap_file_range(REMAP_FILE_DEDUP) are
hooked up here, but are not implemented yet.
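
Conceptually the wiring is just a file_operations table; an abridged
sketch for a 4.20+ kernel, with illustrative stub handlers rather than
the real zpl_* functions:
```
#include <linux/fs.h>

/* Illustrative stubs; the real handlers live in the OpenZFS zpl code. */
static ssize_t
sketch_copy_file_range(struct file *src, loff_t soff, struct file *dst,
    loff_t doff, size_t len, unsigned int flags)
{
	return (-EOPNOTSUPP);
}

static loff_t
sketch_remap_file_range(struct file *src, loff_t soff, struct file *dst,
    loff_t doff, loff_t len, unsigned int flags)
{
	return (-EXDEV);
}

/*
 * Abridged sketch only.  On 4.5-4.19 kernels the equivalent members
 * were .clone_file_range and .dedupe_file_range instead.
 */
static const struct file_operations sketch_file_operations = {
	.copy_file_range	= sketch_copy_file_range,	/* 4.5+  */
	.remap_file_range	= sketch_remap_file_range,	/* 4.20+ */
	/* read/write/mmap/etc. elided */
};
```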

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
Closes #15050
2023-07-26 08:46:58 -07:00
Rob Norris a3ea8c8ee6 dbuf_sync_leaf: check DB_READ in state assertions
Block cloning introduced a new state transition from DB_NOFILL to
DB_READ. This occurs when a block is cloned and then read on the
current txg.

In this case, the clone will move the dbuf to DB_NOFILL, and then the
read will be issued for the overridden block pointer. If that read is
still outstanding when it comes time to write, the dbuf will be in
DB_READ, which is not handled by the checks in dbuf_sync_leaf, thus
tripping the assertions.

This updates those checks to allow DB_READ as a valid state iff the
dirty record is for a BRT write and there is an override block pointer.
This is a safe situation because the block already exists, so there's
nothing that could change from underneath the read.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Original-patch-by: Kay Pedersen <mail@mkwg.de>
Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
Closes #15050
2023-07-26 08:46:58 -07:00
Rob Norris 0426e13271 dmu_buf_will_clone: only check that current txg is clean
dbuf_undirty() will (correctly) only remove dirty records for the given
(open) txg. If there is a dirty record for an earlier closed txg that
has not been synced out yet, then db_dirty_records will still have
entries on it, tripping the assertion.

Instead, change the assertion to only consider the current txg. To some
extent this is redundant, as it's really just saying "did dbuf_undirty()
work?", but it doesn't hurt and accurately expresses our
expectations.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Original-patch-by: Kay Pedersen <mail@mkwg.de>
Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
Closes #15050
2023-07-26 08:46:58 -07:00
Rob Norris 8aa4f0f0fc brt_vdev_realloc: use vmem_alloc for large allocation
bv_entcount can be a relatively large allocation (see comment for
BRT_RANGESIZE), so get it from the big allocator.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
Closes #15050
2023-07-26 08:46:58 -07:00
Rob Norris 7698503dca zfs_clone_range: use vmem_malloc for large allocation
Just silencing the warning about large allocations.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Kay Pedersen <mail@mkwg.de>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-By: OpenDrives Inc.
Sponsored-By: Klara Inc.
Closes #15050
2023-07-26 08:46:58 -07:00
Brian Behlendorf b9aa32ff39 zed: Reduce log noise for large JBODs
For large JBODs the log message "zfs_iter_vdev: no match" can
account for the bulk of the log messages (over 70%).  Since this
message is purely informational and not that useful, we remove it.

Reviewed-by: Olaf Faaland <faaland1@llnl.gov>
Reviewed-by: Brian Atkinson <batkinson@lanl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #15086
Closes #15094
2023-07-26 08:46:58 -07:00
Brian Behlendorf 571762b290 Linux 6.4 compat: META
Update the META file to reflect compatibility with the 6.4 kernel.

Reviewed-by: George Melikov <mail@gmelikov.ru>
Reviewed-by: Rob Norris <rob.norris@klarasystems.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #15095
2023-07-26 08:46:58 -07:00
Alexander Motin 991834f5dc Remove zl_issuer_lock from zil_suspend().
This locking was recently added as part of #14979. But it appears it
is illegal to take zl_issuer_lock while holding dp_config_rwlock,
taken by dsl_pool_hold().  It causes a deadlock with the sync thread in
spa_sync_upgrades().  On second thought, we should not
need this locking, since zil_commit_impl(), which we call below, takes
zl_issuer_lock, which should sufficiently protect zl_suspend reads,
combined with other logic from #14979.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15103
2023-07-25 13:54:02 -07:00
Alexander Motin 41a0f66279 ZIL: Fix config lock deadlock.
When we have some LWBs closed and their ZIOs ready to be issued, we
cannot afford to sleep on the config lock if somebody else tries to take
it as a writer, or it will cause a deadlock.

To solve it, move spa_config_enter() from zil_lwb_write_issue() to
zil_lwb_write_close() under zl_issuer_lock to enforce lock ordering
with other threads.  Now if we can't immediately lock the config, issue
all previously closed LWBs so that they can drop their config
locks after completion, and only then allow sleeping on our lock.

Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Reviewed-by: Prakash Surya <prakash.surya@delphix.com>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored by: iXsystems, Inc.
Closes #15078
Closes #15080
2023-07-25 13:54:02 -07:00
Umer Saleem c79d1bae75
Update changelog for OpenZFS 2.2.0 release
This commit updates changelog for native Debian packages for
OpenZFS 2.2.0 release.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Umer Saleem <usaleem@ixsystems.com>
Closes #15104
2023-07-25 09:01:27 -07:00
Brian Behlendorf 70232483b4 Tag 2.2.0-rc2
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2023-07-21 16:36:34 -07:00
Rob N c5273e0c31 shellcheck: disable "unreachable command" check [SC2317]
This new check in 0.9.0 appears to have some issues with various forms
of "early return", like trap, exit and return. This is tripping up (at
least):

  cmd/zed/zed.d/history_event-zfs-list-cacher.sh
  /etc/zfs/zfs-functions

It's not obvious what it's complaining about or what the remedy is, so it
seems sensible to disable this check for now.

See also:

  https://www.shellcheck.net/wiki/SC2317
  https://github.com/koalaman/shellcheck/issues/2542
  https://github.com/koalaman/shellcheck/issues/2613

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <robn@despairlabs.com>
Closes #15089
2023-07-21 16:35:12 -07:00
Rob N 685ae4429f metaslab: tuneable to better control force ganging
metaslab_force_ganging isn't enough to actually force ganging, because
it still only forces 3% of the time. This adds
metaslab_force_ganging_pct so we can configure how often to force
ganging.
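
The shape of the check is roughly as below (a userspace sketch with
made-up defaults, not the exact metaslab.c logic):
```
#include <stdint.h>
#include <stdlib.h>

/* Illustrative tunables; the real ones are module parameters. */
static uint64_t metaslab_force_ganging = 16777217ULL;	/* bytes */
static unsigned metaslab_force_ganging_pct = 3;		/* percent */

/* Return nonzero if this allocation should be forced to gang. */
static int
force_gang(uint64_t asize)
{
	return (asize >= metaslab_force_ganging &&
	    (unsigned)(random() % 100) < metaslab_force_ganging_pct);
}
```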

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Closes #15088
2023-07-21 16:35:12 -07:00
Alexander Motin 81be809a25 Adjust prefetch parameters.
- Reduce the maximum prefetch distance for 32-bit platforms to 8MB, as it
was previously.  Those systems probably didn't grow much, so it is better
to stay conservative there.
- Retire the array_rd_sz tunable, which blocked prefetch for large requests.
We should not penalize applications trying to be more efficient. The
speculative prefetcher by itself has reasonable distance limits, and
1MB is not much at all these days.

Reviewed-by: Allan Jude <allan@klarasystems.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15072
2023-07-21 16:35:12 -07:00
Alexander Motin 8a6fde8213 Add explicit prefetches to bpobj_iterate().
To simplify error handling, bpobj_iterate_blkptrs() iterates through
the list of block pointers backwards.  Unfortunately the speculative
prefetcher is currently unable to detect such patterns, which makes
each block read there synchronous and very slow on HDD pools.

According to my tests, the added explicit prefetch reduces the time needed
to asynchronously delete 8 snapshots of 4 million blocks each from
20 seconds to less than one, which should free the sync thread for other
useful work, such as async writes, scrub, etc.

While there, plug one memory leak in case of bpobj_open() error and
harmonize some variable names.

Reviewed-by: Allan Jude <allan@klarasystems.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15071
2023-07-21 16:35:12 -07:00
Alan Somers b6f618f8ff Don't emit cksum_{actual_expected} in ereport.fs.zfs.checksum events
With anything but fletcher-4, even a tiny change in the input will cause
the checksum value to change completely.  So knowing the actual and
expected checksums doesn't provide much more information than "they
don't match".  The harm in sending them is simply that they bloat the
event.  In particular, on FreeBSD the event must fit into a 1016 byte
buffer.

Fixes #14717 for mirrored pools.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Rich Ercolani <rincebrain@gmail.com>
Signed-off-by: Alan Somers <asomers@gmail.com>
Sponsored-by: Axcient
Closes #14717
Closes #15052
2023-07-21 16:35:12 -07:00
Alan Somers 51a2b59767 Don't emit checksum histograms in ereport.fs.zfs.checksum events
The checksum histograms were intended to be used with ATA and parallel
SCSI, which are obsolete.  With modern storage hardware, they will
almost always look like white noise; all bits will be wrong.  They only
serve to bloat the event.  That's a particular problem on FreeBSD, where
events must fit into a 1016 byte buffer.

This fixes issue #14717 for RAIDZ pools, but not for mirror pools.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Rich Ercolani <rincebrain@gmail.com>
Signed-off-by: Alan Somers <asomers@gmail.com>
Sponsored-by: Axcient
Closes #15052
2023-07-21 16:35:12 -07:00
Tony Hutter 8c81c0b05d zed: Fix zed ASSERT on slot power cycle
We would see zed assert on one of our systems if we powered off a
slot.  Further examination showed zfs_retire_recv() was reporting
a GUID of 0, which in turn would return a NULL nvlist.  Add
in a check for a zero GUID.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Closes #15084
2023-07-21 16:35:12 -07:00
Chunwei Chen b221f43943 Fix zpl_test_super race with zfs_umount
We cannot call zpl_enter in zpl_test_super, because zpl_test_super is
under spinlock so we can't sleep, and also because zpl_test_super is
called without sb->s_umount taken, so it's possible we would race with
zfs_umount and call zpl_enter on freed zfsvfs.

Here's a stack trace when this happens:
[ 2379.114837] VERIFY(cvp->cv_magic == CV_MAGIC) failed
[ 2379.114845] PANIC at spl-condvar.c:497:__cv_broadcast()
[ 2379.114854] Kernel panic - not syncing: VERIFY(cvp->cv_magic == CV_MAGIC) failed
[ 2379.115012] Call Trace:
[ 2379.115019]  dump_stack+0x74/0x96
[ 2379.115024]  panic+0x114/0x2f6
[ 2379.115035]  spl_panic+0xcf/0xfc [spl]
[ 2379.115477]  __cv_broadcast+0x68/0xa0 [spl]
[ 2379.115585]  rrw_exit+0xb8/0x310 [zfs]
[ 2379.115696]  rrm_exit+0x4a/0x80 [zfs]
[ 2379.115808]  zpl_test_super+0xa9/0xd0 [zfs]
[ 2379.115920]  sget+0xd1/0x230
[ 2379.116033]  zpl_mount+0xdc/0x230 [zfs]
[ 2379.116037]  legacy_get_tree+0x28/0x50
[ 2379.116039]  vfs_get_tree+0x27/0xc0
[ 2379.116045]  path_mount+0x2fe/0xa70
[ 2379.116048]  do_mount+0x80/0xa0
[ 2379.116050]  __x64_sys_mount+0x8b/0xe0
[ 2379.116052]  do_syscall_64+0x35/0x50
[ 2379.116054]  entry_SYSCALL_64_after_hwframe+0x61/0xc6
[ 2379.116057] RIP: 0033:0x7f9912e8b26a

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Chunwei Chen <david.chen@nutanix.com>
Closes #15077
2023-07-21 16:35:12 -07:00
Ameer Hamza e037327bfe spa_min_alloc should be GCD, not min
Since spa_min_alloc may not be a power of 2 (unlike ashifts) in the
case of DRAID, we should not select the minimal value among several
vdevs. Rounding to a multiple of it is unlikely to work for other
vdevs. Instead, using the greatest common divisor produces smaller
yet more reasonable results.
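
A small worked example of the difference (the per-vdev sizes are made up):
```
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

static uint64_t
gcd(uint64_t a, uint64_t b)
{
	while (b != 0) {
		uint64_t t = a % b;
		a = b;
		b = t;
	}
	return (a);
}

int
main(void)
{
	/* Hypothetical per-vdev minimum allocation sizes (bytes). */
	uint64_t minalloc[] = { 8192, 12288 };
	uint64_t m = minalloc[0], d = minalloc[0];

	for (size_t i = 1; i < sizeof (minalloc) / sizeof (minalloc[0]); i++) {
		if (minalloc[i] < m)
			m = minalloc[i];
		d = gcd(d, minalloc[i]);
	}

	/*
	 * min = 8192, but 12288 is not a multiple of 8192, so rounding
	 * to the minimum does not suit every vdev.  gcd = 4096 works
	 * for both.
	 */
	printf("min = %llu, gcd = %llu\n",
	    (unsigned long long)m, (unsigned long long)d);
	return (0);
}
```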

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Ameer Hamza <ahamza@ixsystems.com>
Closes #15067
2023-07-21 16:35:12 -07:00
Yuri Pankov 1a2e486d25 Don't panic if setting vdev properties is unsupported for this vdev type
Check that vdev has valid zap and bail out early.

While here, move objid selection out of the loop, it's not going to
change.

Reviewed-by: Allan Jude <allan@klarasystems.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Yuri Pankov <yuripv@FreeBSD.org>
Closes #15063
2023-07-21 16:35:12 -07:00
Ameer Hamza d8011707cc Ignore pool ashift property during vdev attachment
Ashift can be set for a vdev only during its creation, and the
top-level vdev does not change when a vdev is attached or replaced.
The ashift property should not be used during attachment, as it
does not allow attaching/replacing a vdev if the pool's ashift
property is increased after the existing vdev was created. Instead,
we should be able to attach the vdev if the attached vdev can
satisfy the ashift requirement with its parent.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Ameer Hamza <ahamza@ixsystems.com>
Closes #15061
2023-07-21 16:35:12 -07:00
Wojciech Małota-Wójcik f5f5a2db95 Rollback before zfs root is mounted
On my machines I observe random failures caused by rollback happening 
after zfs root is mounted. I've observed two types of failures:

- zfs-rollback-bootfs.service fails saying that rollback must be
  done just before mounting the dataset
- boot process fails and rescue console is entered.

After making this modification and testing it for a couple of days,
none of those problems have been observed anymore.

I don't know if `dracut-mount.service` is still needed in the 
`After` directive. Maybe someone else is able to address this?

Reviewed-by: Gregory Bartholomew <gregory.lee.bartholomew@gmail.com>
Signed-off-by: Wojciech Małota-Wójcik <59281144+outofforest@users.noreply.github.com>
Closes #15025
2023-07-21 16:35:12 -07:00
Alexander Motin 83b0967c1f Do not request data L1 buffers on scan prefetch.
Set ARC_FLAG_NO_BUF when prefetching data L1 buffers for scan.  We
do not prefetch data L0 buffers, so we do not need the L1 buffers,
only want them to be ready in ARC. This saves some CPU time on the
buffers decompression.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15029
2023-07-21 16:35:12 -07:00
Coleman Kane 73ba5df31a Linux 6.5 compat: disk_check_media_change() was added
The disk_check_media_change() function was added, replacing
bdev_check_media_change().  This change was introduced in 6.5-rc1 commit
444aa2c58cb3b6cfe3b7cc7db6c294d73393a894, and the new function takes a
gendisk* as its argument instead of a block_device*. Thus, bdev->bd_disk
is now used to pass the expected data.
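
The usual compat approach is a configure-time probe plus an #ifdef; a
hedged sketch where HAVE_DISK_CHECK_MEDIA_CHANGE is an illustrative
symbol, not necessarily the one used in the tree:
```
#include <linux/blkdev.h>

static inline bool
sketch_check_media_change(struct block_device *bdev)
{
#ifdef HAVE_DISK_CHECK_MEDIA_CHANGE
	/* 6.5+: takes the gendisk, so pass bdev->bd_disk. */
	return (disk_check_media_change(bdev->bd_disk));
#else
	/* Older kernels: takes the block_device directly. */
	return (bdev_check_media_change(bdev));
#endif
}
```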

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Coleman Kane <ckane@colemankane.org>
Closes #15060
2023-07-21 16:35:12 -07:00
Coleman Kane 1bc244ae93 Linux 6.5 compat: BLK_STS_NEXUS renamed to BLK_STS_RESV_CONFLICT
This change was introduced in Linux commit
7ba150834b840f6f5cdd07ca69a4ccf39df59a66

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Coleman Kane <ckane@colemankane.org>
Closes #15059
2023-07-21 16:35:12 -07:00
Coleman Kane 931dc70550 Linux 6.5 compat: intptr_t definition is canonically signed
Make the version here match that elsewhere in the kernel and system
headers.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Coleman Kane <ckane@colemankane.org>
Closes #15058
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2023-07-21 16:35:12 -07:00
Yuri Pankov 5299f4f289 set autotrim default to 'off' everywhere
As it turns out, having autotrim default to 'on' on FreeBSD never really
worked, due to a mess with defines where the userland and the kernel module
were getting different default values (userland was defaulting to 'off',
while the module thought it was 'on').

Reviewed-by: Tino Reichardt <milky-zfs@mcmilk.de>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Yuri Pankov <yuripv@FreeBSD.org>
Closes #15079
2023-07-21 16:35:12 -07:00
Alan Somers f917cf1c03 Fix the ZFS checksum error histograms with larger record sizes
My analysis in PR #14716 was incorrect.  Each histogram bucket contains
the number of incorrect bits, by position in a 64-bit word, over the
entire record.  8-bit buckets can overflow for record sizes above 2k.
To forestall that, saturate each bucket at 255.  That should still get
the point across: either all bits are equally wrong, or just a couple
are.
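
The saturation itself is a one-line guard; a tiny sketch (the array name
is illustrative):
```
#include <stdint.h>

/* 64 buckets, one per bit position in a 64-bit word, as described above. */
static uint8_t zcr_histogram[64];

/*
 * Record one wrong bit at position `bit`, saturating at 255 instead of
 * letting the 8-bit counter wrap on large records.
 */
static void
record_bad_bit(unsigned bit)
{
	if (zcr_histogram[bit] < UINT8_MAX)
		zcr_histogram[bit]++;
}
```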

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Alan Somers <asomers@gmail.com>
Sponsored-by: Axcient
Closes #15049
2023-07-21 16:35:12 -07:00
Alexander Motin 56ed389a57 Fix raw receive with different indirect block size.
Unlike regular receive, raw receive requires the destination to have the
same block structure as the source.  In the case of dnode reclaim this
triggers two special cases, requiring special handling:
 - If dn_nlevels == 1, we can change the ibs, but dnode_set_blksz()
should not dirty the data buffer if the block size does not change, or
during receive dbuf_dirty_lightweight() will trigger an assertion.
 - If dn_nlevels > 1, we just can't change the ibs; dnode_set_blksz()
would fail and receive_object would trigger an assertion, so we should
destroy and recreate the dnode from scratch.

Reviewed-by: Paul Dagnelie <pcd@delphix.com>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15039

(cherry picked from commit c4e8742149)
2023-07-20 08:58:29 -07:00
Alexander Motin e613e4bbe3 Avoid extra snprintf() in dsl_deadlist_merge().
Since we are already iterating the ZAP, we have the exact string key to
remove, so we do not need to call zap_remove_int() with the int key we
just converted; we can call zap_remove() with the original string.

This should make no functional change, only a micro-optimization.
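
In other words, a before/after sketch (variable names are illustrative):
```
/*
 * Before (illustrative): the iteration already yielded za.za_name, yet
 * the key was converted back to an integer and re-formatted inside
 * zap_remove_int().
 */
mintxg = zfs_strtonum(za.za_name, NULL);
VERIFY0(zap_remove_int(mos, obj, mintxg, tx));

/* After: remove by the string key we already have. */
VERIFY0(zap_remove(mos, obj, za.za_name, tx));
```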

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15056

(cherry picked from commit fdba8cbb79)
2023-07-20 08:58:29 -07:00
Alexander Motin b4e630b00c Add missed DMU_PROJECTUSED_OBJECT prefetch.
It seems 9c5167d19f "Project Quota on ZFS" missed adding a prefetch
for DMU_PROJECTUSED_OBJECT during scan (scrub/resilver).  It should
not cause visible problems, but may affect scrub/resilver performance.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by:	Alexander Motin <mav@FreeBSD.org>
Sponsored by:	iXsystems, Inc.
Closes #15024
2023-07-20 08:58:29 -07:00
Mateusz Guzik bf6cd30796 FreeBSD: catch up to __FreeBSD_version 1400093
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Closes #15036
2023-07-20 08:58:29 -07:00
Alexander Motin 1266cebf87 FreeBSD: Fix build on stable/13 after 1302506.
Starting approximately from version 1302506, vn_lock_pair() grew two
additional arguments, following head.  There is a one-week hole, but
that is the closest reference point we have.

Reviewed-by: Mateusz Guzik <mjguzik@gmail.com>
Signed-off-by:  Alexander Motin <mav@FreeBSD.org>
Sponsored by:   iXsystems, Inc.
Closes #15047
2023-07-20 08:58:29 -07:00
694 changed files with 13460 additions and 41103 deletions

View File

@ -1,4 +0,0 @@
name: "Custom CodeQL Analysis"
queries:
- uses: ./.github/codeql/custom-queries/cpp/deprecatedFunctionUsage.ql

View File

@ -1,4 +0,0 @@
name: "Custom CodeQL Analysis"
paths-ignore:
- tests

View File

@ -1,59 +0,0 @@
/**
* @name Deprecated function usage detection
* @description Detects functions whose usage is banned from the OpenZFS
* codebase due to QA concerns.
* @kind problem
* @severity error
* @id cpp/deprecated-function-usage
*/
import cpp
predicate isDeprecatedFunction(Function f) {
f.getName() = "strtok" or
f.getName() = "__xpg_basename" or
f.getName() = "basename" or
f.getName() = "dirname" or
f.getName() = "bcopy" or
f.getName() = "bcmp" or
f.getName() = "bzero" or
f.getName() = "asctime" or
f.getName() = "asctime_r" or
f.getName() = "gmtime" or
f.getName() = "localtime" or
f.getName() = "strncpy"
}
string getReplacementMessage(Function f) {
if f.getName() = "strtok" then
result = "Use strtok_r(3) instead!"
else if f.getName() = "__xpg_basename" then
result = "basename(3) is underspecified. Use zfs_basename() instead!"
else if f.getName() = "basename" then
result = "basename(3) is underspecified. Use zfs_basename() instead!"
else if f.getName() = "dirname" then
result = "dirname(3) is underspecified. Use zfs_dirnamelen() instead!"
else if f.getName() = "bcopy" then
result = "bcopy(3) is deprecated. Use memcpy(3)/memmove(3) instead!"
else if f.getName() = "bcmp" then
result = "bcmp(3) is deprecated. Use memcmp(3) instead!"
else if f.getName() = "bzero" then
result = "bzero(3) is deprecated. Use memset(3) instead!"
else if f.getName() = "asctime" then
result = "Use strftime(3) instead!"
else if f.getName() = "asctime_r" then
result = "Use strftime(3) instead!"
else if f.getName() = "gmtime" then
result = "gmtime(3) isn't thread-safe. Use gmtime_r(3) instead!"
else if f.getName() = "localtime" then
result = "localtime(3) isn't thread-safe. Use localtime_r(3) instead!"
else
result = "strncpy(3) is deprecated. Use strlcpy(3) instead!"
}
from FunctionCall fc, Function f
where
fc.getTarget() = f and
isDeprecatedFunction(f)
select fc, getReplacementMessage(f)

View File

@ -1,4 +0,0 @@
name: openzfs-cpp-queries
version: 0.0.0
libraryPathDependencies: codeql-cpp
suites: openzfs-cpp-suite

View File

@ -4,54 +4,44 @@
```mermaid
flowchart TB
subgraph CleanUp and Summary
CleanUp+Summary
Part1-20.04-->CleanUp+nice+Summary
Part2-20.04-->CleanUp+nice+Summary
PartN-20.04-->CleanUp+nice+Summary
Part1-22.04-->CleanUp+nice+Summary
Part2-22.04-->CleanUp+nice+Summary
PartN-22.04-->CleanUp+nice+Summary
end
subgraph Functional Testings
sanity-checks-20.04
zloop-checks-20.04
functional-testing-20.04-->Part1-20.04
functional-testing-20.04-->Part2-20.04
functional-testing-20.04-->Part3-20.04
functional-testing-20.04-->Part4-20.04
functional-testing-20.04-->PartN-20.04
functional-testing-22.04-->Part1-22.04
functional-testing-22.04-->Part2-22.04
functional-testing-22.04-->Part3-22.04
functional-testing-22.04-->Part4-22.04
sanity-checks-22.04
zloop-checks-22.04
functional-testing-22.04-->PartN-22.04
end
subgraph Sanity and zloop Testings
sanity-checks-20.04-->functional-testing-20.04
sanity-checks-22.04-->functional-testing-22.04
zloop-checks-20.04-->functional
zloop-checks-22.04-->functional
end
subgraph Code Checking + Building
Build-Ubuntu-20.04
codeql.yml
checkstyle.yml
Build-Ubuntu-22.04
end
Build-Ubuntu-20.04-->sanity-checks-20.04
Build-Ubuntu-20.04-->zloop-checks-20.04
Build-Ubuntu-20.04-->functional-testing-20.04
Build-Ubuntu-22.04-->sanity-checks-22.04
Build-Ubuntu-20.04-->zloop-checks-20.04
Build-Ubuntu-22.04-->zloop-checks-22.04
Build-Ubuntu-22.04-->functional-testing-22.04
sanity-checks-20.04-->CleanUp+Summary
Part1-20.04-->CleanUp+Summary
Part2-20.04-->CleanUp+Summary
Part3-20.04-->CleanUp+Summary
Part4-20.04-->CleanUp+Summary
Part1-22.04-->CleanUp+Summary
Part2-22.04-->CleanUp+Summary
Part3-22.04-->CleanUp+Summary
Part4-22.04-->CleanUp+Summary
sanity-checks-22.04-->CleanUp+Summary
end
```
1) build zfs modules for Ubuntu 20.04 and 22.04 (~15m)
2) 2x zloop test (~10m) + 2x sanity test (~25m)
3) 4x functional testings in parts 1..4 (each ~1h)
3) functional testings in parts 1..5 (each ~1h)
4) cleanup and create summary
- content of summary depends on the results of the steps

View File

@ -8,7 +8,7 @@ jobs:
checkstyle:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Install dependencies
@ -52,7 +52,7 @@ jobs:
if: failure() && steps.CheckABI.outcome == 'failure'
run: |
find -name *.abi | tar -cf abi_files.tar -T -
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@v3
if: failure() && steps.CheckABI.outcome == 'failure'
with:
name: New ABI files (use only if you're sure about interface changes)

View File

@ -24,12 +24,11 @@ jobs:
echo "MAKEFLAGS=-j$(nproc)" >> $GITHUB_ENV
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v3
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
config-file: .github/codeql-${{ matrix.language }}.yml
languages: ${{ matrix.language }}
- name: Autobuild

View File

@ -87,7 +87,7 @@ function summarize_f() {
output "\n## $headline\n"
rm -rf testfiles
for i in $(seq 1 $FUNCTIONAL_PARTS); do
tarfile="$2-part$i/part$i.tar"
tarfile="$2/part$i.tar"
check_tarfile "$tarfile"
check_logfile "testfiles/log"
done

View File

@ -55,24 +55,29 @@ function mod_install() {
cat /proc/spl/kstat/zfs/chksum_bench
echo "::endgroup::"
echo "::group::Optimize storage for ZFS testings"
# remove swap and umount fast storage
# 89GiB -> rootfs + bootfs with ~80MB/s -> don't care
# 64GiB -> /mnt with 420MB/s -> new testing ssd
sudo swapoff -a
echo "::group::Reclaim and report disk space"
# remove 4GiB of images
sudo systemd-run docker system prune --force --all --volumes
# this one is fast and mounted @ /mnt
# -> we reformat with ext4 + move it to /var/tmp
DEV="/dev/disk/azure/resource-part1"
sudo umount /mnt
sudo mkfs.ext4 -O ^has_journal -F $DEV
sudo mount -o noatime,barrier=0 $DEV /var/tmp
sudo chmod 1777 /var/tmp
# remove unused software
sudo systemd-run --wait rm -rf \
"$AGENT_TOOLSDIRECTORY" \
/opt/* \
/usr/local/* \
/usr/share/az* \
/usr/share/dotnet \
/usr/share/gradle* \
/usr/share/miniconda \
/usr/share/swift \
/var/lib/gems \
/var/lib/mysql \
/var/lib/snapd
# trim the cleaned space
sudo fstrim /
# disk usage afterwards
sudo df -h /
sudo df -h /var/tmp
sudo fstrim -a
df -h /
echo "::endgroup::"
}

View File

@ -13,10 +13,10 @@ jobs:
zloop:
runs-on: ubuntu-${{ inputs.os }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: actions/download-artifact@v4
- uses: actions/download-artifact@v3
with:
name: modules-${{ inputs.os }}
- name: Install modules
@ -34,19 +34,19 @@ jobs:
if: failure()
run: |
sudo chmod +r -R /var/tmp/zloop/
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@v3
if: failure()
with:
name: Zloop-logs-${{ inputs.os }}
name: Zpool-logs-${{ inputs.os }}
path: |
/var/tmp/zloop/*/
!/var/tmp/zloop/*/vdev/
retention-days: 14
if-no-files-found: ignore
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@v3
if: failure()
with:
name: Zloop-files-${{ inputs.os }}
name: Zpool-files-${{ inputs.os }}
path: |
/var/tmp/zloop/*/vdev/
retention-days: 14
@ -55,10 +55,10 @@ jobs:
sanity:
runs-on: ubuntu-${{ inputs.os }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: actions/download-artifact@v4
- uses: actions/download-artifact@v3
with:
name: modules-${{ inputs.os }}
- name: Install modules
@ -77,7 +77,7 @@ jobs:
RESPATH="/var/tmp/test_results"
mv -f $RESPATH/current $RESPATH/testfiles
tar cf $RESPATH/sanity.tar -h -C $RESPATH testfiles
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@v3
if: success() || failure()
with:
name: Logs-${{ inputs.os }}-sanity
@ -91,10 +91,10 @@ jobs:
matrix:
tests: [ part1, part2, part3, part4 ]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: actions/download-artifact@v4
- uses: actions/download-artifact@v3
with:
name: modules-${{ inputs.os }}
- name: Install modules
@ -116,9 +116,9 @@ jobs:
RESPATH="/var/tmp/test_results"
mv -f $RESPATH/current $RESPATH/testfiles
tar cf $RESPATH/${{ matrix.tests }}.tar -h -C $RESPATH testfiles
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@v3
if: success() || failure()
with:
name: Logs-${{ inputs.os }}-functional-${{ matrix.tests }}
name: Logs-${{ inputs.os }}-functional
path: /var/tmp/test_results/${{ matrix.tests }}.tar
if-no-files-found: ignore

View File

@ -14,14 +14,14 @@ jobs:
os: [20.04, 22.04]
runs-on: ubuntu-${{ matrix.os }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Build modules
run: .github/workflows/scripts/setup-dependencies.sh build
- name: Prepare modules upload
run: tar czf modules-${{ matrix.os }}.tgz *.deb .github tests/test-runner tests/ImageOS.txt
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@v3
with:
name: modules-${{ matrix.os }}
path: modules-${{ matrix.os }}.tgz
@ -44,7 +44,7 @@ jobs:
runs-on: ubuntu-22.04
needs: testings
steps:
- uses: actions/download-artifact@v4
- uses: actions/download-artifact@v3
- name: Generating summary
run: |
tar xzf modules-22.04/modules-22.04.tgz .github tests
@ -58,7 +58,7 @@ jobs:
run: .github/workflows/scripts/generate-summary.sh 3
- name: Summary for errors #4
run: .github/workflows/scripts/generate-summary.sh 4
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@v3
with:
name: Summary Files
path: Summary/

1
.gitignore vendored
View File

@ -83,7 +83,6 @@
modules.order
Makefile
Makefile.in
changelog
*.patch
*.orig
*.tmp

View File

@ -30,7 +30,6 @@ Andreas Dilger <adilger@dilger.ca>
Andrew Walker <awalker@ixsystems.com>
Benedikt Neuffer <github@itfriend.de>
Chengfei Zhu <chengfeix.zhu@intel.com>
ChenHao Lu <18302010006@fudan.edu.cn>
Chris Lindee <chris.lindee+github@gmail.com>
Colm Buckley <colm@tuatha.org>
Crag Wang <crag0715@gmail.com>
@ -44,7 +43,6 @@ Glenn Washburn <development@efficientek.com>
Gordan Bobic <gordan.bobic@gmail.com>
Gregory Bartholomew <gregory.lee.bartholomew@gmail.com>
hedong zhang <h_d_zhang@163.com>
Ilkka Sovanto <github@ilkka.kapsi.fi>
InsanePrawn <Insane.Prawny@gmail.com>
Jason Cohen <jwittlincohen@gmail.com>
Jason Harmening <jason.harmening@gmail.com>
@ -59,7 +57,6 @@ KernelOfTruth <kerneloftruth@gmail.com>
Liu Hua <liu.hua130@zte.com.cn>
Liu Qing <winglq@gmail.com>
loli10K <ezomori.nozomu@gmail.com>
Mart Frauenlob <allkind@fastest.cc>
Matthias Blankertz <matthias@blankertz.org>
Michael Gmelin <grembo@FreeBSD.org>
Olivier Mazouffre <olivier.mazouffre@ims-bordeaux.fr>
@ -76,12 +73,6 @@ WHR <msl0000023508@gmail.com>
Yanping Gao <yanping.gao@xtaotech.com>
Youzhong Yang <youzhong@gmail.com>
# Signed-off-by: overriding Author:
Ryan <errornointernet@envs.net> <error.nointernet@gmail.com>
Qiuhao Chen <chenqiuhao1997@gmail.com> <haohao0924@126.com>
Yuxin Wang <yuxinwang9999@gmail.com> <Bi11gates9999@gmail.com>
Zhenlei Huang <zlei@FreeBSD.org> <zlei.huang@gmail.com>
# Commits from strange places, long ago
Brian Behlendorf <behlendorf1@llnl.gov> <behlendo@7e1ea52c-4ff2-0310-8f11-9dd32ca42a1c>
Brian Behlendorf <behlendorf1@llnl.gov> <behlendo@fedora-17-amd64.(none)>
@ -98,7 +89,6 @@ Alek Pinchuk <apinchuk@axcient.com> <alek-p@users.noreply.github.com>
Alexander Lobakin <alobakin@pm.me> <solbjorn@users.noreply.github.com>
Alexey Smirnoff <fling@member.fsf.org> <fling-@users.noreply.github.com>
Allen Holl <allen.m.holl@gmail.com> <65494904+allen-4@users.noreply.github.com>
Alphan Yılmaz <alphanyilmaz@gmail.com> <a1ea321@users.noreply.github.com>
Ameer Hamza <ahamza@ixsystems.com> <106930537+ixhamza@users.noreply.github.com>
Andrew J. Hesford <ajh@sideband.org> <48421688+ahesford@users.noreply.github.com>>
Andrew Sun <me@andrewsun.com> <as-com@users.noreply.github.com>
@ -106,22 +96,18 @@ Aron Xu <happyaron.xu@gmail.com> <happyaron@users.noreply.github.com>
Arun KV <arun.kv@datacore.com> <65647132+arun-kv@users.noreply.github.com>
Ben Wolsieffer <benwolsieffer@gmail.com> <lopsided98@users.noreply.github.com>
bernie1995 <bernie.pikes@gmail.com> <42413912+bernie1995@users.noreply.github.com>
Bojan Novković <bnovkov@FreeBSD.org> <72801811+bnovkov@users.noreply.github.com>
Boris Protopopov <boris.protopopov@actifio.com> <bprotopopov@users.noreply.github.com>
Brad Forschinger <github@bnjf.id.au> <bnjf@users.noreply.github.com>
Brandon Thetford <brandon@dodecatec.com> <dodexahedron@users.noreply.github.com>
buzzingwires <buzzingwires@outlook.com> <131118055+buzzingwires@users.noreply.github.com>
Cedric Maunoury <cedric.maunoury@gmail.com> <38213715+cedricmaunoury@users.noreply.github.com>
Charles Suh <charles.suh@gmail.com> <charlessuh@users.noreply.github.com>
Chris Peredun <chris.peredun@ixsystems.com> <126915832+chrisperedun@users.noreply.github.com>
Dacian Reece-Stremtan <dacianstremtan@gmail.com> <35844628+dacianstremtan@users.noreply.github.com>
Damian Szuberski <szuberskidamian@gmail.com> <30863496+szubersk@users.noreply.github.com>
Daniel Hiepler <d-git@coderdu.de> <32984777+heeplr@users.noreply.github.com>
Daniel Kobras <d.kobras@science-computing.de> <sckobras@users.noreply.github.com>
Daniel Reichelt <hacking@nachtgeist.net> <nachtgeist@users.noreply.github.com>
David Quigley <david.quigley@intel.com> <dpquigl@users.noreply.github.com>
Dennis R. Friedrichsen <dennis.r.friedrichsen@gmail.com> <31087738+dennisfriedrichsen@users.noreply.github.com>
Dex Wood <slash2314@gmail.com> <slash2314@users.noreply.github.com>
DHE <git@dehacked.net> <DeHackEd@users.noreply.github.com>
Dmitri John Ledkov <dimitri.ledkov@canonical.com> <19779+xnox@users.noreply.github.com>
Dries Michiels <driesm.michiels@gmail.com> <32487486+driesmp@users.noreply.github.com>
@ -142,7 +128,6 @@ Harry Mallon <hjmallon@gmail.com> <1816667+hjmallon@users.noreply.github.com>
Hiếu Lê <leorize+oss@disroot.org> <alaviss@users.noreply.github.com>
Jake Howard <git@theorangeone.net> <RealOrangeOne@users.noreply.github.com>
James Cowgill <james.cowgill@mips.com> <jcowgill@users.noreply.github.com>
Jaron Kent-Dobias <jaron@kent-dobias.com> <kentdobias@users.noreply.github.com>
Jason King <jason.king@joyent.com> <jasonbking@users.noreply.github.com>
Jeff Dike <jdike@akamai.com> <52420226+jdike@users.noreply.github.com>
Jitendra Patidar <jitendra.patidar@nutanix.com> <53164267+jsai20@users.noreply.github.com>
@ -152,9 +137,7 @@ John L. Hammond <john.hammond@intel.com> <35266395+jhammond-intel@users.noreply.
John-Mark Gurney <jmg@funkthat.com> <jmgurney@users.noreply.github.com>
John Ramsden <johnramsden@riseup.net> <johnramsden@users.noreply.github.com>
Jonathon Fernyhough <jonathon@m2x.dev> <559369+jonathonf@users.noreply.github.com>
Jose Luis Duran <jlduran@gmail.com> <jlduran@users.noreply.github.com>
Justin Hibbits <chmeeedalf@gmail.com> <chmeeedalf@users.noreply.github.com>
Kevin Greene <kevin.greene@delphix.com> <104801862+kxgreene@users.noreply.github.com>
Kevin Jin <lostking2008@hotmail.com> <33590050+jxdking@users.noreply.github.com>
Kevin P. Fleming <kevin@km6g.us> <kpfleming@users.noreply.github.com>
Krzysztof Piecuch <piecuch@kpiecuch.pl> <3964215+pikrzysztof@users.noreply.github.com>
@ -165,11 +148,9 @@ Lorenz Hüdepohl <dev@stellardeath.org> <lhuedepohl@users.noreply.github.com>
Luís Henriques <henrix@camandro.org> <73643340+lumigch@users.noreply.github.com>
Marcin Skarbek <git@skarbek.name> <mskarbek@users.noreply.github.com>
Matt Fiddaman <github@m.fiddaman.uk> <81489167+matt-fidd@users.noreply.github.com>
Maxim Filimonov <che@bein.link> <part1zano@users.noreply.github.com>
Max Zettlmeißl <max@zettlmeissl.de> <6818198+maxz@users.noreply.github.com>
Michael Niewöhner <foss@mniewoehner.de> <c0d3z3r0@users.noreply.github.com>
Michael Zhivich <mzhivich@akamai.com> <33133421+mzhivich@users.noreply.github.com>
MigeljanImeri <ImeriMigel@gmail.com> <78048439+MigeljanImeri@users.noreply.github.com>
Mo Zhou <cdluminate@gmail.com> <5723047+cdluminate@users.noreply.github.com>
Nick Mattis <nickm970@gmail.com> <nmattis@users.noreply.github.com>
omni <omni+vagant@hack.org> <79493359+omnivagant@users.noreply.github.com>
@ -183,7 +164,6 @@ Ping Huang <huangping@smartx.com> <101400146+hpingfs@users.noreply.github.com>
Piotr P. Stefaniak <pstef@freebsd.org> <pstef@users.noreply.github.com>
Richard Allen <belperite@gmail.com> <33836503+belperite@users.noreply.github.com>
Rich Ercolani <rincebrain@gmail.com> <214141+rincebrain@users.noreply.github.com>
Rick Macklem <rmacklem@uoguelph.ca> <64620010+rmacklem@users.noreply.github.com>
Rob Wing <rob.wing@klarasystems.com> <98866084+rob-wing@users.noreply.github.com>
Roman Strashkin <roman.strashkin@nexenta.com> <Ramzec@users.noreply.github.com>
Ryan Hirasaki <ryanhirasaki@gmail.com> <4690732+RyanHir@users.noreply.github.com>
@ -194,17 +174,13 @@ Scott Colby <scott@scolby.com> <scolby33@users.noreply.github.com>
Sean Eric Fagan <kithrup@mac.com> <kithrup@users.noreply.github.com>
Spencer Kinny <spencerkinny1995@gmail.com> <30333052+Spencer-Kinny@users.noreply.github.com>
Srikanth N S <srikanth.nagasubbaraoseetharaman@hpe.com> <75025422+nssrikanth@users.noreply.github.com>
Stefan Lendl <s.lendl@proxmox.com> <1321542+stfl@users.noreply.github.com>
Thomas Bertschinger <bertschinger@lanl.gov> <101425190+bertschinger@users.noreply.github.com>
Thomas Geppert <geppi@digitx.de> <geppi@users.noreply.github.com>
Tim Crawford <tcrawford@datto.com> <crawfxrd@users.noreply.github.com>
Todd Seidelmann <18294602+seidelma@users.noreply.github.com>
Tom Matthews <tom@axiom-partners.com> <tomtastic@users.noreply.github.com>
Tony Perkins <tperkins@datto.com> <62951051+tony-zfs@users.noreply.github.com>
Torsten Wörtwein <twoertwein@gmail.com> <twoertwein@users.noreply.github.com>
Tulsi Jain <tulsi.jain@delphix.com> <TulsiJain@users.noreply.github.com>
Václav Skála <skala@vshosting.cz> <33496485+vaclavskala@users.noreply.github.com>
Vaibhav Bhanawat <vaibhav.bhanawat@delphix.com> <88050553+vaibhav-delphix@users.noreply.github.com>
Violet Purcell <vimproved@inventati.org> <66446404+vimproved@users.noreply.github.com>
Vipin Kumar Verma <vipin.verma@hpe.com> <75025470+vermavipinkumar@users.noreply.github.com>
Wolfgang Bumiller <w.bumiller@proxmox.com> <Blub@users.noreply.github.com>

48
AUTHORS
View File

@ -46,7 +46,6 @@ CONTRIBUTORS:
Alex Zhuravlev <alexey.zhuravlev@intel.com>
Allan Jude <allanjude@freebsd.org>
Allen Holl <allen.m.holl@gmail.com>
Alphan Yılmaz <alphanyilmaz@gmail.com>
alteriks <alteriks@gmail.com>
Alyssa Ross <hi@alyssa.is>
Ameer Hamza <ahamza@ixsystems.com>
@ -89,18 +88,15 @@ CONTRIBUTORS:
Bassu <bassu@phi9.com>
Ben Allen <bsallen@alcf.anl.gov>
Ben Cordero <bencord0@condi.me>
Benda Xu <orv@debian.org>
Benedikt Neuffer <github@itfriend.de>
Benjamin Albrecht <git@albrecht.io>
Benjamin Gentil <benjgentil.pro@gmail.com>
Benjamin Sherman <benjamin@holyarmy.org>
Ben McGough <bmcgough@fredhutch.org>
Ben Rubson <ben.rubson@gmail.com>
Ben Wolsieffer <benwolsieffer@gmail.com>
bernie1995 <bernie.pikes@gmail.com>
Bill McGonigle <bill-github.com-public1@bfccomputing.com>
Bill Pijewski <wdp@joyent.com>
Bojan Novković <bnovkov@FreeBSD.org>
Boris Protopopov <boris.protopopov@nexenta.com>
Brad Forschinger <github@bnjf.id.au>
Brad Lewis <brad.lewis@delphix.com>
@ -115,7 +111,6 @@ CONTRIBUTORS:
bzzz77 <bzzz.tomas@gmail.com>
cable2999 <cable2999@users.noreply.github.com>
Caleb James DeLisle <calebdelisle@lavabit.com>
Cameron Harr <harr1@llnl.gov>
Cao Xuewen <cao.xuewen@zte.com.cn>
Carlo Landmeter <clandmeter@gmail.com>
Carlos Alberto Lopez Perez <clopez@igalia.com>
@ -125,15 +120,12 @@ CONTRIBUTORS:
Chen Can <chen.can2@zte.com.cn>
Chengfei Zhu <chengfeix.zhu@intel.com>
Chen Haiquan <oc@yunify.com>
ChenHao Lu <18302010006@fudan.edu.cn>
Chip Parker <aparker@enthought.com>
Chris Burroughs <chris.burroughs@gmail.com>
Chris Davidson <christopher.davidson@gmail.com>
Chris Dunlap <cdunlap@llnl.gov>
Chris Dunlop <chris@onthe.net.au>
Chris Lindee <chris.lindee+github@gmail.com>
Chris McDonough <chrism@plope.com>
Chris Peredun <chris.peredun@ixsystems.com>
Chris Siden <chris.siden@delphix.com>
Chris Siebenmann <cks.github@cs.toronto.edu>
Christer Ekholm <che@chrekh.se>
@ -152,7 +144,6 @@ CONTRIBUTORS:
Clint Armstrong <clint@clintarmstrong.net>
Coleman Kane <ckane@colemankane.org>
Colin Ian King <colin.king@canonical.com>
Colin Percival <cperciva@tarsnap.com>
Colm Buckley <colm@tuatha.org>
Crag Wang <crag0715@gmail.com>
Craig Loomis <cloomis@astro.princeton.edu>
@ -165,12 +156,10 @@ CONTRIBUTORS:
Damiano Albani <damiano.albani@gmail.com>
Damian Szuberski <szuberskidamian@gmail.com>
Damian Wojsław <damian@wojslaw.pl>
Daniel Berlin <dberlin@dberlin.org>
Daniel Hiepler <d-git@coderdu.de>
Daniel Hoffman <dj.hoffman@delphix.com>
Daniel Kobras <d.kobras@science-computing.de>
Daniel Kolesa <daniel@octaforge.org>
Daniel Perry <dtperry@amazon.com>
Daniel Reichelt <hacking@nachtgeist.net>
Daniel Stevenson <bot@dstev.net>
Daniel Verite <daniel@verite.pro>
@ -187,11 +176,8 @@ CONTRIBUTORS:
David Quigley <david.quigley@intel.com>
Debabrata Banerjee <dbanerje@akamai.com>
D. Ebdrup <debdrup@freebsd.org>
Dennis R. Friedrichsen <dennis.r.friedrichsen@gmail.com>
Denys Rtveliashvili <denys@rtveliashvili.name>
Derek Dai <daiderek@gmail.com>
Derek Schrock <dereks@lifeofadishwasher.com>
Dex Wood <slash2314@gmail.com>
DHE <git@dehacked.net>
Didier Roche <didrocks@ubuntu.com>
Dimitri John Ledkov <xnox@ubuntu.com>
@ -249,12 +235,9 @@ CONTRIBUTORS:
Gionatan Danti <g.danti@assyoma.it>
Giuseppe Di Natale <guss80@gmail.com>
Glenn Washburn <development@efficientek.com>
glibg10b <glibg10b@users.noreply.github.com>
gofaster <felix.gofaster@gmail.com>
Gordan Bobic <gordan@redsleeve.org>
Gordon Bergling <gbergling@googlemail.com>
Gordon Ross <gwr@nexenta.com>
Gordon Tetlow <gordon@freebsd.org>
Graham Christensen <graham@grahamc.com>
Graham Perrin <grahamperrin@gmail.com>
Gregor Kopka <gregor@kopka.net>
@ -282,7 +265,6 @@ CONTRIBUTORS:
Igor Kozhukhov <ikozhukhov@gmail.com>
Igor Lvovsky <ilvovsky@gmail.com>
ilbsmart <wgqimut@gmail.com>
Ilkka Sovanto <github@ilkka.kapsi.fi>
illiliti <illiliti@protonmail.com>
ilovezfs <ilovezfs@icloud.com>
InsanePrawn <Insane.Prawny@gmail.com>
@ -298,11 +280,9 @@ CONTRIBUTORS:
Jan Engelhardt <jengelh@inai.de>
Jan Kryl <jan.kryl@nexenta.com>
Jan Sanislo <oystr@cs.washington.edu>
Jaron Kent-Dobias <jaron@kent-dobias.com>
Jason Cohen <jwittlincohen@gmail.com>
Jason Harmening <jason.harmening@gmail.com>
Jason King <jason.brian.king@gmail.com>
Jason Lee <jasonlee@lanl.gov>
Jason Zaman <jasonzaman@gmail.com>
Javen Wu <wu.javen@gmail.com>
Jean-Baptiste Lallement <jean-baptiste@ubuntu.com>
@ -333,7 +313,6 @@ CONTRIBUTORS:
Jonathon Fernyhough <jonathon@m2x.dev>
Jorgen Lundman <lundman@lundman.net>
Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Jose Luis Duran <jlduran@gmail.com>
Josh Soref <jsoref@users.noreply.github.com>
Joshua M. Clulow <josh@sysmgr.org>
José Luis Salvador Rufo <salvador.joseluis@gmail.com>
@ -357,10 +336,8 @@ CONTRIBUTORS:
Kash Pande <kash@tripleback.net>
Kay Pedersen <christianpe96@gmail.com>
Keith M Wesolowski <wesolows@foobazco.org>
Kent Ross <k@mad.cash>
KernelOfTruth <kerneloftruth@gmail.com>
Kevin Bowling <kevin.bowling@kev009.com>
Kevin Greene <kevin.greene@delphix.com>
Kevin Jin <lostking2008@hotmail.com>
Kevin P. Fleming <kevin@km6g.us>
Kevin Tanguy <kevin.tanguy@ovh.net>
@ -412,10 +389,8 @@ CONTRIBUTORS:
Mark Shellenbaum <Mark.Shellenbaum@Oracle.COM>
marku89 <mar42@kola.li>
Mark Wright <markwright@internode.on.net>
Mart Frauenlob <allkind@fastest.cc>
Martin Matuska <mm@FreeBSD.org>
Martin Rüegg <martin.rueegg@metaworx.ch>
Martin Wagner <martin.wagner.dev@gmail.com>
Massimo Maggi <me@massimo-maggi.eu>
Mateusz Guzik <mjguzik@gmail.com>
Mateusz Piotrowski <0mp@FreeBSD.org>
@ -430,7 +405,6 @@ CONTRIBUTORS:
Matus Kral <matuskral@me.com>
Mauricio Faria de Oliveira <mfo@canonical.com>
Max Grossman <max.grossman@delphix.com>
Maxim Filimonov <che@bein.link>
Maximilian Mehnert <maximilian.mehnert@gmx.de>
Max Zettlmeißl <max@zettlmeissl.de>
Md Islam <mdnahian@outlook.com>
@ -443,7 +417,6 @@ CONTRIBUTORS:
Michael Niewöhner <foss@mniewoehner.de>
Michael Zhivich <mzhivich@akamai.com>
Michal Vasilek <michal@vasilek.cz>
MigeljanImeri <ImeriMigel@gmail.com>
Mike Gerdts <mike.gerdts@joyent.com>
Mike Harsch <mike@harschsystems.com>
Mike Leddy <mike.leddy@gmail.com>
@ -475,7 +448,6 @@ CONTRIBUTORS:
Olaf Faaland <faaland1@llnl.gov>
Oleg Drokin <green@linuxhacker.ru>
Oleg Stepura <oleg@stepura.com>
Olivier Certner <olce.freebsd@certner.fr>
Olivier Mazouffre <olivier.mazouffre@ims-bordeaux.fr>
omni <omni+vagant@hack.org>
Orivej Desh <orivej@gmx.fr>
@ -494,7 +466,6 @@ CONTRIBUTORS:
Peng <peng.hse@xtaotech.com>
Peter Ashford <ashford@accs.com>
Peter Dave Hello <hsu@peterdavehello.org>
Peter Doherty <peterd@acranox.org>
Peter Levine <plevine457@gmail.com>
Peter Wirdemo <peter.wirdemo@gmail.com>
Petros Koutoupis <petros@petroskoutoupis.com>
@ -508,8 +479,6 @@ CONTRIBUTORS:
Prasad Joshi <prasadjoshi124@gmail.com>
privb0x23 <privb0x23@users.noreply.github.com>
P.SCH <p88@yahoo.com>
Qiuhao Chen <chenqiuhao1997@gmail.com>
Quartz <yyhran@163.com>
Quentin Zdanis <zdanisq@gmail.com>
Rafael Kitover <rkitover@gmail.com>
RageLtMan <sempervictus@users.noreply.github.com>
@ -522,15 +491,11 @@ CONTRIBUTORS:
Riccardo Schirone <rschirone91@gmail.com>
Richard Allen <belperite@gmail.com>
Richard Elling <Richard.Elling@RichardElling.com>
Richard Kojedzinszky <richard@kojedz.in>
Richard Laager <rlaager@wiktel.com>
Richard Lowe <richlowe@richlowe.net>
Richard Sharpe <rsharpe@samba.org>
Richard Yao <ryao@gentoo.org>
Rich Ercolani <rincebrain@gmail.com>
Rick Macklem <rmacklem@uoguelph.ca>
rilysh <nightquick@proton.me>
Robert Evans <evansr@google.com>
Robert Novak <sailnfool@gmail.com>
Roberto Ricci <ricci@disroot.org>
Rob Norris <robn@despairlabs.com>
@ -540,14 +505,11 @@ CONTRIBUTORS:
Roman Strashkin <roman.strashkin@nexenta.com>
Ross Williams <ross@ross-williams.net>
Ruben Kerkhof <ruben@rubenkerkhof.com>
Ryan <errornointernet@envs.net>
Ryan Hirasaki <ryanhirasaki@gmail.com>
Ryan Lahfa <masterancpp@gmail.com>
Ryan Libby <rlibby@FreeBSD.org>
Ryan Moeller <freqlabs@FreeBSD.org>
Sam Atkinson <samatk@amazon.com>
Sam Hathaway <github.com@munkynet.org>
Sam James <sam@gentoo.org>
Sam Lunt <samuel.j.lunt@gmail.com>
Samuel VERSCHELDE <stormi-github@ylix.fr>
Samuel Wycliffe <samuelwycliffe@gmail.com>
@ -565,12 +527,9 @@ CONTRIBUTORS:
Sen Haerens <sen@senhaerens.be>
Serapheim Dimitropoulos <serapheim@delphix.com>
Seth Forshee <seth.forshee@canonical.com>
Seth Troisi <sethtroisi@google.com>
Shaan Nobee <sniper111@gmail.com>
Shampavman <sham.pavman@nexenta.com>
Shaun Tancheff <shaun@aeonazure.com>
Shawn Bayern <sbayern@law.fsu.edu>
Shengqi Chen <harry-chen@outlook.com>
Shen Yan <shenyanxxxy@qq.com>
Simon Guest <simon.guest@tesujimath.org>
Simon Klinkert <simon.klinkert@gmail.com>
@ -578,7 +537,6 @@ CONTRIBUTORS:
Spencer Kinny <spencerkinny1995@gmail.com>
Srikanth N S <srikanth.nagasubbaraoseetharaman@hpe.com>
Stanislav Seletskiy <s.seletskiy@gmail.com>
Stefan Lendl <s.lendl@proxmox.com>
Steffen Müthing <steffen.muething@iwr.uni-heidelberg.de>
Stephen Blinick <stephen.blinick@delphix.com>
sterlingjensen <sterlingjensen@users.noreply.github.com>
@ -599,7 +557,6 @@ CONTRIBUTORS:
Teodor Spæren <teodor_spaeren@riseup.net>
TerraTech <TerraTech@users.noreply.github.com>
Thijs Cramer <thijs.cramer@gmail.com>
Thomas Bertschinger <bertschinger@lanl.gov>
Thomas Geppert <geppi@digitx.de>
Thomas Lamprecht <guggentom@hotmail.de>
Till Maas <opensource@till.name>
@ -612,7 +569,6 @@ CONTRIBUTORS:
Tim Schumacher <timschumi@gmx.de>
Tino Reichardt <milky-zfs@mcmilk.de>
Tobin Harding <me@tobin.cc>
Todd Seidelmann <seidelma@users.noreply.github.com>
Tom Caputi <tcaputi@datto.com>
Tom Matthews <tom@axiom-partners.com>
Tomohiro Kusumi <kusumi.tomohiro@gmail.com>
@ -630,7 +586,6 @@ CONTRIBUTORS:
Turbo Fredriksson <turbo@bayour.com>
Tyler J. Stachecki <stachecki.tyler@gmail.com>
Umer Saleem <usaleem@ixsystems.com>
Vaibhav Bhanawat <vaibhav.bhanawat@delphix.com>
Valmiky Arquissandas <kayvlim@gmail.com>
Val Packett <val@packett.cool>
Vince van Oosten <techhazard@codeforyouand.me>
@ -659,13 +614,10 @@ CONTRIBUTORS:
yuina822 <ayuichi@club.kyutech.ac.jp>
YunQiang Su <syq@debian.org>
Yuri Pankov <yuri.pankov@gmail.com>
Yuxin Wang <yuxinwang9999@gmail.com>
Yuxuan Shui <yshuiv7@gmail.com>
Zachary Bedell <zac@thebedells.org>
Zach Dykstra <dykstra.zachary@gmail.com>
zgock <zgock@nuc.base.zgock-lab.net>
Zhao Yongming <zym@apache.org>
Zhenlei Huang <zlei@FreeBSD.org>
Zhu Chuang <chuang@melty.land>
Érico Nogueira <erico.erc@gmail.com>
Đoàn Trần Công Danh <congdanhqx@gmail.com>

META

@ -1,10 +1,10 @@
Meta: 1
Name: zfs
Branch: 1.0
Version: 2.2.99
Version: 2.2.0
Release: 1
Release-Tags: relext
License: CDDL
Author: OpenZFS
Linux-Maximum: 6.10
Linux-Maximum: 6.5
Linux-Minimum: 3.10


@ -32,4 +32,4 @@ For more details see the NOTICE, LICENSE and COPYRIGHT files; `UCRL-CODE-235197`
# Supported Kernels
* The `META` file contains the officially recognized supported Linux kernel versions.
* Supported FreeBSD versions are any supported branches and releases starting from 13.0-RELEASE.
* Supported FreeBSD versions are any supported branches and releases starting from 12.2-RELEASE.


@ -24,7 +24,7 @@ zfs_ids_to_path_LDADD = \
libzfs.la
zhack_CPPFLAGS = $(AM_CPPFLAGS) $(LIBZPOOL_CPPFLAGS)
zhack_CPPFLAGS = $(AM_CPPFLAGS) $(FORCEDEBUG_CPPFLAGS)
sbin_PROGRAMS += zhack
CPPCHECKTARGETS += zhack
@ -39,7 +39,9 @@ zhack_LDADD = \
ztest_CFLAGS = $(AM_CFLAGS) $(KERNEL_CFLAGS)
ztest_CPPFLAGS = $(AM_CPPFLAGS) $(LIBZPOOL_CPPFLAGS)
# Get rid of compiler warning for unchecked truncating snprintfs on gcc 7.1.1
ztest_CFLAGS += $(NO_FORMAT_TRUNCATION)
ztest_CPPFLAGS = $(AM_CPPFLAGS) $(FORCEDEBUG_CPPFLAGS)
sbin_PROGRAMS += ztest
CPPCHECKTARGETS += ztest


@ -260,34 +260,33 @@ def draw_graph(kstats_dict):
arc_stats = isolate_section('arcstats', kstats_dict)
GRAPH_INDENT = ' '*4
GRAPH_WIDTH = 70
arc_max = int(arc_stats['c_max'])
GRAPH_WIDTH = 60
arc_size = f_bytes(arc_stats['size'])
arc_perc = f_perc(arc_stats['size'], arc_max)
data_size = f_bytes(arc_stats['data_size'])
meta_size = f_bytes(arc_stats['metadata_size'])
arc_perc = f_perc(arc_stats['size'], arc_stats['c_max'])
mfu_size = f_bytes(arc_stats['mfu_size'])
mru_size = f_bytes(arc_stats['mru_size'])
meta_size = f_bytes(arc_stats['arc_meta_used'])
dnode_limit = f_bytes(arc_stats['arc_dnode_limit'])
dnode_size = f_bytes(arc_stats['dnode_size'])
info_form = ('ARC: {0} ({1}) Data: {2} Meta: {3} Dnode: {4}')
info_line = info_form.format(arc_size, arc_perc, data_size, meta_size,
dnode_size)
info_form = ('ARC: {0} ({1}) MFU: {2} MRU: {3} META: {4} '
'DNODE {5} ({6})')
info_line = info_form.format(arc_size, arc_perc, mfu_size, mru_size,
meta_size, dnode_size, dnode_limit)
info_spc = ' '*int((GRAPH_WIDTH-len(info_line))/2)
info_line = GRAPH_INDENT+info_spc+info_line
graph_line = GRAPH_INDENT+'+'+('-'*(GRAPH_WIDTH-2))+'+'
arc_perc = float(int(arc_stats['size'])/arc_max)
data_perc = float(int(arc_stats['data_size'])/arc_max)
meta_perc = float(int(arc_stats['metadata_size'])/arc_max)
dnode_perc = float(int(arc_stats['dnode_size'])/arc_max)
mfu_perc = float(int(arc_stats['mfu_size'])/int(arc_stats['c_max']))
mru_perc = float(int(arc_stats['mru_size'])/int(arc_stats['c_max']))
arc_perc = float(int(arc_stats['size'])/int(arc_stats['c_max']))
total_ticks = float(arc_perc)*GRAPH_WIDTH
data_ticks = data_perc*GRAPH_WIDTH
meta_ticks = meta_perc*GRAPH_WIDTH
dnode_ticks = dnode_perc*GRAPH_WIDTH
other_ticks = total_ticks-(data_ticks+meta_ticks+dnode_ticks)
mfu_ticks = mfu_perc*GRAPH_WIDTH
mru_ticks = mru_perc*GRAPH_WIDTH
other_ticks = total_ticks-(mfu_ticks+mru_ticks)
core_form = 'D'*int(data_ticks)+'M'*int(meta_ticks)+'N'*int(dnode_ticks)+\
'O'*int(other_ticks)
core_form = 'F'*int(mfu_ticks)+'R'*int(mru_ticks)+'O'*int(other_ticks)
core_spc = ' '*(GRAPH_WIDTH-(2+len(core_form)))
core_line = GRAPH_INDENT+'|'+core_form+core_spc+'|'
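The tick arithmetic is the same in both versions of draw_graph(): each component is taken as a fraction of c_max, scaled to GRAPH_WIDTH, and rendered as a run of letter codes (D/M/N/O in the newer layout). A minimal standalone Python sketch of that math, with invented byte counts:

    # Standalone sketch of the graph arithmetic above; the byte counts are made up.
    GRAPH_WIDTH = 70

    arc_stats = {
        'c_max': 8 << 30,          # maximum ARC size
        'size': 6 << 30,           # current ARC size
        'data_size': 4 << 30,
        'metadata_size': 1 << 30,
        'dnode_size': 256 << 20,
    }

    arc_max = int(arc_stats['c_max'])
    total_ticks = int(arc_stats['size']) / arc_max * GRAPH_WIDTH
    data_ticks = int(arc_stats['data_size']) / arc_max * GRAPH_WIDTH
    meta_ticks = int(arc_stats['metadata_size']) / arc_max * GRAPH_WIDTH
    dnode_ticks = int(arc_stats['dnode_size']) / arc_max * GRAPH_WIDTH
    other_ticks = total_ticks - (data_ticks + meta_ticks + dnode_ticks)

    # D = data, M = metadata, N = dnodes, O = everything else currently in the ARC
    core_form = ('D' * int(data_ticks) + 'M' * int(meta_ticks) +
                 'N' * int(dnode_ticks) + 'O' * int(other_ticks))
    print('|' + core_form.ljust(GRAPH_WIDTH - 2) + '|')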
@ -537,87 +536,56 @@ def section_arc(kstats_dict):
arc_stats = isolate_section('arcstats', kstats_dict)
memory_all = arc_stats['memory_all_bytes']
memory_free = arc_stats['memory_free_bytes']
memory_avail = arc_stats['memory_available_bytes']
throttle = arc_stats['memory_throttle_count']
if throttle == '0':
health = 'HEALTHY'
else:
health = 'THROTTLED'
prt_1('ARC status:', health)
prt_i1('Memory throttle count:', throttle)
print()
arc_size = arc_stats['size']
arc_target_size = arc_stats['c']
arc_max = arc_stats['c_max']
arc_min = arc_stats['c_min']
dnode_limit = arc_stats['arc_dnode_limit']
print('ARC status:')
prt_i1('Total memory size:', f_bytes(memory_all))
prt_i2('Min target size:', f_perc(arc_min, memory_all), f_bytes(arc_min))
prt_i2('Max target size:', f_perc(arc_max, memory_all), f_bytes(arc_max))
prt_i2('Target size (adaptive):',
f_perc(arc_size, arc_max), f_bytes(arc_target_size))
prt_i2('Current size:', f_perc(arc_size, arc_max), f_bytes(arc_size))
prt_i1('Free memory size:', f_bytes(memory_free))
prt_i1('Available memory size:', f_bytes(memory_avail))
print()
compressed_size = arc_stats['compressed_size']
overhead_size = arc_stats['overhead_size']
bonus_size = arc_stats['bonus_size']
dnode_size = arc_stats['dnode_size']
dbuf_size = arc_stats['dbuf_size']
hdr_size = arc_stats['hdr_size']
l2_hdr_size = arc_stats['l2_hdr_size']
abd_chunk_waste_size = arc_stats['abd_chunk_waste_size']
prt_1('ARC structural breakdown (current size):', f_bytes(arc_size))
prt_i2('Compressed size:',
f_perc(compressed_size, arc_size), f_bytes(compressed_size))
prt_i2('Overhead size:',
f_perc(overhead_size, arc_size), f_bytes(overhead_size))
prt_i2('Bonus size:',
f_perc(bonus_size, arc_size), f_bytes(bonus_size))
prt_i2('Dnode size:',
f_perc(dnode_size, arc_size), f_bytes(dnode_size))
prt_i2('Dbuf size:',
f_perc(dbuf_size, arc_size), f_bytes(dbuf_size))
prt_i2('Header size:',
f_perc(hdr_size, arc_size), f_bytes(hdr_size))
prt_i2('L2 header size:',
f_perc(l2_hdr_size, arc_size), f_bytes(l2_hdr_size))
prt_i2('ABD chunk waste size:',
f_perc(abd_chunk_waste_size, arc_size), f_bytes(abd_chunk_waste_size))
print()
meta = arc_stats['meta']
pd = arc_stats['pd']
pm = arc_stats['pm']
data_size = arc_stats['data_size']
metadata_size = arc_stats['metadata_size']
anon_data = arc_stats['anon_data']
anon_metadata = arc_stats['anon_metadata']
mfu_data = arc_stats['mfu_data']
mfu_metadata = arc_stats['mfu_metadata']
mfu_edata = arc_stats['mfu_evictable_data']
mfu_emetadata = arc_stats['mfu_evictable_metadata']
mru_data = arc_stats['mru_data']
mru_metadata = arc_stats['mru_metadata']
mru_edata = arc_stats['mru_evictable_data']
mru_emetadata = arc_stats['mru_evictable_metadata']
mfug_data = arc_stats['mfu_ghost_data']
mfug_metadata = arc_stats['mfu_ghost_metadata']
mrug_data = arc_stats['mru_ghost_data']
mrug_metadata = arc_stats['mru_ghost_metadata']
unc_data = arc_stats['uncached_data']
unc_metadata = arc_stats['uncached_metadata']
bonus_size = arc_stats['bonus_size']
dnode_limit = arc_stats['arc_dnode_limit']
dnode_size = arc_stats['dnode_size']
dbuf_size = arc_stats['dbuf_size']
hdr_size = arc_stats['hdr_size']
l2_hdr_size = arc_stats['l2_hdr_size']
abd_chunk_waste_size = arc_stats['abd_chunk_waste_size']
target_size_ratio = '{0}:1'.format(int(arc_max) // int(arc_min))
prt_2('ARC size (current):',
f_perc(arc_size, arc_max), f_bytes(arc_size))
prt_i2('Target size (adaptive):',
f_perc(arc_target_size, arc_max), f_bytes(arc_target_size))
prt_i2('Min size (hard limit):',
f_perc(arc_min, arc_max), f_bytes(arc_min))
prt_i2('Max size (high water):',
target_size_ratio, f_bytes(arc_max))
caches_size = int(anon_data)+int(anon_metadata)+\
int(mfu_data)+int(mfu_metadata)+int(mru_data)+int(mru_metadata)+\
int(unc_data)+int(unc_metadata)
prt_1('ARC types breakdown (compressed + overhead):', f_bytes(caches_size))
prt_i2('Data size:',
f_perc(data_size, caches_size), f_bytes(data_size))
prt_i2('Metadata size:',
f_perc(metadata_size, caches_size), f_bytes(metadata_size))
print()
prt_1('ARC states breakdown (compressed + overhead):', f_bytes(caches_size))
prt_i2('Anonymous data size:',
f_perc(anon_data, caches_size), f_bytes(anon_data))
prt_i2('Anonymous metadata size:',
@ -628,37 +596,43 @@ def section_arc(kstats_dict):
f_bytes(v / 65536 * caches_size / 65536))
prt_i2('MFU data size:',
f_perc(mfu_data, caches_size), f_bytes(mfu_data))
prt_i2('MFU evictable data size:',
f_perc(mfu_edata, caches_size), f_bytes(mfu_edata))
prt_i1('MFU ghost data size:', f_bytes(mfug_data))
v = (s-int(pm))*int(meta)/s
prt_i2('MFU metadata target:', f_perc(v, s),
f_bytes(v / 65536 * caches_size / 65536))
prt_i2('MFU metadata size:',
f_perc(mfu_metadata, caches_size), f_bytes(mfu_metadata))
prt_i2('MFU evictable metadata size:',
f_perc(mfu_emetadata, caches_size), f_bytes(mfu_emetadata))
prt_i1('MFU ghost metadata size:', f_bytes(mfug_metadata))
v = int(pd)*(s-int(meta))/s
prt_i2('MRU data target:', f_perc(v, s),
f_bytes(v / 65536 * caches_size / 65536))
prt_i2('MRU data size:',
f_perc(mru_data, caches_size), f_bytes(mru_data))
prt_i2('MRU evictable data size:',
f_perc(mru_edata, caches_size), f_bytes(mru_edata))
prt_i1('MRU ghost data size:', f_bytes(mrug_data))
v = int(pm)*int(meta)/s
prt_i2('MRU metadata target:', f_perc(v, s),
f_bytes(v / 65536 * caches_size / 65536))
prt_i2('MRU metadata size:',
f_perc(mru_metadata, caches_size), f_bytes(mru_metadata))
prt_i2('MRU evictable metadata size:',
f_perc(mru_emetadata, caches_size), f_bytes(mru_emetadata))
prt_i1('MRU ghost metadata size:', f_bytes(mrug_metadata))
prt_i2('Uncached data size:',
f_perc(unc_data, caches_size), f_bytes(unc_data))
prt_i2('Uncached metadata size:',
f_perc(unc_metadata, caches_size), f_bytes(unc_metadata))
prt_i2('Bonus size:',
f_perc(bonus_size, arc_size), f_bytes(bonus_size))
prt_i2('Dnode cache target:',
f_perc(dnode_limit, arc_max), f_bytes(dnode_limit))
prt_i2('Dnode cache size:',
f_perc(dnode_size, dnode_limit), f_bytes(dnode_size))
prt_i2('Dbuf size:',
f_perc(dbuf_size, arc_size), f_bytes(dbuf_size))
prt_i2('Header size:',
f_perc(hdr_size, arc_size), f_bytes(hdr_size))
prt_i2('L2 header size:',
f_perc(l2_hdr_size, arc_size), f_bytes(l2_hdr_size))
prt_i2('ABD chunk waste size:',
f_perc(abd_chunk_waste_size, arc_size), f_bytes(abd_chunk_waste_size))
print()
print('ARC hash breakdown:')
@ -673,9 +647,6 @@ def section_arc(kstats_dict):
print()
print('ARC misc:')
prt_i1('Memory throttles:', arc_stats['memory_throttle_count'])
prt_i1('Memory direct reclaims:', arc_stats['memory_direct_count'])
prt_i1('Memory indirect reclaims:', arc_stats['memory_indirect_count'])
prt_i1('Deleted:', f_hits(arc_stats['deleted']))
prt_i1('Mutex misses:', f_hits(arc_stats['mutex_miss']))
prt_i1('Eviction skips:', f_hits(arc_stats['evict_skip']))
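For the states breakdown above, caches_size is the single denominator: the sum of the anonymous, MFU, MRU, and uncached buckets, data plus metadata, and every per-state line is reported as a share of it. A small sketch with hypothetical kstat values:

    # Hypothetical arcstats values, in bytes; real values come from the kstats.
    arc_stats = {
        'anon_data': 16 << 20, 'anon_metadata': 1 << 20,
        'mfu_data': 3 << 30, 'mfu_metadata': 300 << 20,
        'mru_data': 2 << 30, 'mru_metadata': 200 << 20,
        'uncached_data': 0, 'uncached_metadata': 0,
    }

    # Same denominator section_arc() builds: all cached states, data + metadata.
    caches_size = sum(int(v) for v in arc_stats.values())

    for name, value in sorted(arc_stats.items()):
        pct = 100.0 * value / caches_size if caches_size else 0.0
        print('{0:<22} {1:5.1f} %  {2:>12} bytes'.format(name, pct, value))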
@ -740,7 +711,7 @@ def section_archits(kstats_dict):
pd_total = int(arc_stats['prefetch_data_hits']) +\
int(arc_stats['prefetch_data_iohits']) +\
int(arc_stats['prefetch_data_misses'])
prt_2('ARC prefetch data accesses:', f_perc(pd_total, all_accesses),
prt_2('ARC prefetch metadata accesses:', f_perc(pd_total, all_accesses),
f_hits(pd_total))
pd_todo = (('Prefetch data hits:', arc_stats['prefetch_data_hits']),
('Prefetch data I/O hits:', arc_stats['prefetch_data_iohits']),
@ -822,27 +793,18 @@ def section_dmu(kstats_dict):
zfetch_stats = isolate_section('zfetchstats', kstats_dict)
zfetch_access_total = int(zfetch_stats['hits']) +\
int(zfetch_stats['future']) + int(zfetch_stats['stride']) +\
int(zfetch_stats['past']) + int(zfetch_stats['misses'])
zfetch_access_total = int(zfetch_stats['hits'])+int(zfetch_stats['misses'])
prt_1('DMU predictive prefetcher calls:', f_hits(zfetch_access_total))
prt_i2('Stream hits:',
f_perc(zfetch_stats['hits'], zfetch_access_total),
f_hits(zfetch_stats['hits']))
future = int(zfetch_stats['future']) + int(zfetch_stats['stride'])
prt_i2('Hits ahead of stream:', f_perc(future, zfetch_access_total),
f_hits(future))
prt_i2('Hits behind stream:',
f_perc(zfetch_stats['past'], zfetch_access_total),
f_hits(zfetch_stats['past']))
prt_i2('Stream misses:',
f_perc(zfetch_stats['misses'], zfetch_access_total),
f_hits(zfetch_stats['misses']))
prt_i2('Streams limit reached:',
f_perc(zfetch_stats['max_streams'], zfetch_stats['misses']),
f_hits(zfetch_stats['max_streams']))
prt_i1('Stream strides:', f_hits(zfetch_stats['stride']))
prt_i1('Prefetches issued', f_hits(zfetch_stats['io_issued']))
print()
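The newer section_dmu() widens the zfetch accounting: the call total now also counts 'future', 'stride', and 'past' hits, and 'Hits ahead of stream' is future + stride. A compact sketch of that arithmetic with invented counters:

    # Invented zfetchstats counters; real ones come from the zfetchstats kstat.
    zfetch_stats = {'hits': 900000, 'future': 40000, 'stride': 10000,
                    'past': 5000, 'misses': 45000, 'io_issued': 120000}

    total = (int(zfetch_stats['hits']) + int(zfetch_stats['future']) +
             int(zfetch_stats['stride']) + int(zfetch_stats['past']) +
             int(zfetch_stats['misses']))
    ahead = int(zfetch_stats['future']) + int(zfetch_stats['stride'])

    def perc(part, whole):
        return '{0:.1f} %'.format(100.0 * part / whole)

    print('DMU predictive prefetcher calls:', total)
    print('  Stream hits:          ', perc(zfetch_stats['hits'], total))
    print('  Hits ahead of stream: ', perc(ahead, total))
    print('  Hits behind stream:   ', perc(zfetch_stats['past'], total))
    print('  Stream misses:        ', perc(zfetch_stats['misses'], total))
    print('  Prefetches issued:    ', zfetch_stats['io_issued'])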


@ -157,16 +157,6 @@ cols = {
"free": [5, 1024, "ARC free memory"],
"avail": [5, 1024, "ARC available memory"],
"waste": [5, 1024, "Wasted memory due to round up to pagesize"],
"ztotal": [6, 1000, "zfetch total prefetcher calls per second"],
"zhits": [5, 1000, "zfetch stream hits per second"],
"zahead": [6, 1000, "zfetch hits ahead of streams per second"],
"zpast": [5, 1000, "zfetch hits behind streams per second"],
"zmisses": [7, 1000, "zfetch stream misses per second"],
"zmax": [4, 1000, "zfetch limit reached per second"],
"zfuture": [7, 1000, "zfetch stream future per second"],
"zstride": [7, 1000, "zfetch stream strides per second"],
"zissued": [7, 1000, "zfetch prefetches issued per second"],
"zactive": [7, 1000, "zfetch prefetches active per second"],
}
v = {}
@ -174,8 +164,6 @@ hdr = ["time", "read", "ddread", "ddh%", "dmread", "dmh%", "pread", "ph%",
"size", "c", "avail"]
xhdr = ["time", "mfu", "mru", "mfug", "mrug", "unc", "eskip", "mtxmis",
"dread", "pread", "read"]
zhdr = ["time", "ztotal", "zhits", "zahead", "zpast", "zmisses", "zmax",
"zfuture", "zstride", "zissued", "zactive"]
sint = 1 # Default interval is 1 second
count = 1 # Default count is 1
hdr_intr = 20 # Print header every 20 lines of output
@ -200,8 +188,6 @@ if sys.platform.startswith('freebsd'):
k = [ctl for ctl in sysctl.filter('kstat.zfs.misc.arcstats')
if ctl.type != sysctl.CTLTYPE_NODE]
k += [ctl for ctl in sysctl.filter('kstat.zfs.misc.zfetchstats')
if ctl.type != sysctl.CTLTYPE_NODE]
if not k:
sys.exit(1)
@ -213,28 +199,19 @@ if sys.platform.startswith('freebsd'):
continue
name, value = s.name, s.value
if "arcstats" in name:
# Trims 'kstat.zfs.misc.arcstats' from the name
kstat[name[24:]] = int(value)
else:
kstat["zfetch_" + name[27:]] = int(value)
# Trims 'kstat.zfs.misc.arcstats' from the name
kstat[name[24:]] = int(value)
elif sys.platform.startswith('linux'):
def kstat_update():
global kstat
k1 = [line.strip() for line in open('/proc/spl/kstat/zfs/arcstats')]
k = [line.strip() for line in open('/proc/spl/kstat/zfs/arcstats')]
k2 = ["zfetch_" + line.strip() for line in
open('/proc/spl/kstat/zfs/zfetchstats')]
if k1 is None or k2 is None:
if not k:
sys.exit(1)
del k1[0:2]
del k2[0:2]
k = k1 + k2
del k[0:2]
kstat = {}
for s in k:
@ -262,7 +239,6 @@ def usage():
sys.stderr.write("\t -v : List all possible field headers and definitions"
"\n")
sys.stderr.write("\t -x : Print extended stats\n")
sys.stderr.write("\t -z : Print zfetch stats\n")
sys.stderr.write("\t -f : Specify specific fields to print (see -v)\n")
sys.stderr.write("\t -o : Redirect output to the specified file\n")
sys.stderr.write("\t -s : Override default field separator with custom "
@ -381,7 +357,6 @@ def init():
global count
global hdr
global xhdr
global zhdr
global opfile
global sep
global out
@ -393,17 +368,15 @@ def init():
xflag = False
hflag = False
vflag = False
zflag = False
i = 1
try:
opts, args = getopt.getopt(
sys.argv[1:],
"axzo:hvs:f:p",
"axo:hvs:f:p",
[
"all",
"extended",
"zfetch",
"outfile",
"help",
"verbose",
@ -437,15 +410,13 @@ def init():
i += 1
if opt in ('-p', '--parsable'):
pretty_print = False
if opt in ('-z', '--zfetch'):
zflag = True
i += 1
argv = sys.argv[i:]
sint = int(argv[0]) if argv else sint
count = int(argv[1]) if len(argv) > 1 else (0 if len(argv) > 0 else 1)
if hflag or (xflag and zflag) or ((zflag or xflag) and desired_cols):
if hflag or (xflag and desired_cols):
usage()
if vflag:
@ -454,9 +425,6 @@ def init():
if xflag:
hdr = xhdr
if zflag:
hdr = zhdr
update_hdr_intr()
# check if L2ARC exists
@ -601,17 +569,6 @@ def calculate():
v["el2mru"] = d["evict_l2_eligible_mru"] // sint
v["el2inel"] = d["evict_l2_ineligible"] // sint
v["mtxmis"] = d["mutex_miss"] // sint
v["ztotal"] = (d["zfetch_hits"] + d["zfetch_future"] + d["zfetch_stride"] +
d["zfetch_past"] + d["zfetch_misses"]) // sint
v["zhits"] = d["zfetch_hits"] // sint
v["zahead"] = (d["zfetch_future"] + d["zfetch_stride"]) // sint
v["zpast"] = d["zfetch_past"] // sint
v["zmisses"] = d["zfetch_misses"] // sint
v["zmax"] = d["zfetch_max_streams"] // sint
v["zfuture"] = d["zfetch_future"] // sint
v["zstride"] = d["zfetch_stride"] // sint
v["zissued"] = d["zfetch_io_issued"] // sint
v["zactive"] = d["zfetch_io_active"] // sint
if l2exist:
v["l2hits"] = d["l2_hits"] // sint
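arcstat turns the same counters into rates: each new z-column is a kstat delta over the sampling interval divided by sint. A short sketch with two invented snapshots taken one second apart:

    # Two invented kstat snapshots taken 'sint' seconds apart.
    sint = 1
    prev = {'zfetch_hits': 1000, 'zfetch_future': 40, 'zfetch_stride': 10,
            'zfetch_past': 5, 'zfetch_misses': 50}
    cur = {'zfetch_hits': 1600, 'zfetch_future': 70, 'zfetch_stride': 25,
           'zfetch_past': 9, 'zfetch_misses': 80}

    d = {k: cur[k] - prev[k] for k in cur}   # per-interval deltas

    v = {}
    v['ztotal'] = (d['zfetch_hits'] + d['zfetch_future'] + d['zfetch_stride'] +
                   d['zfetch_past'] + d['zfetch_misses']) // sint
    v['zhits'] = d['zfetch_hits'] // sint
    v['zahead'] = (d['zfetch_future'] + d['zfetch_stride']) // sint
    v['zmisses'] = d['zfetch_misses'] // sint
    print(v)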


@ -37,7 +37,7 @@ import re
bhdr = ["pool", "objset", "object", "level", "blkid", "offset", "dbsize"]
bxhdr = ["pool", "objset", "object", "level", "blkid", "offset", "dbsize",
"usize", "meta", "state", "dbholds", "dbc", "list", "atype", "flags",
"meta", "state", "dbholds", "dbc", "list", "atype", "flags",
"count", "asize", "access", "mru", "gmru", "mfu", "gmfu", "l2",
"l2_dattr", "l2_asize", "l2_comp", "aholds", "dtype", "btype",
"data_bs", "meta_bs", "bsize", "lvls", "dholds", "blocks", "dsize"]
@ -47,17 +47,17 @@ dhdr = ["pool", "objset", "object", "dtype", "cached"]
dxhdr = ["pool", "objset", "object", "dtype", "btype", "data_bs", "meta_bs",
"bsize", "lvls", "dholds", "blocks", "dsize", "cached", "direct",
"indirect", "bonus", "spill"]
dincompat = ["level", "blkid", "offset", "dbsize", "usize", "meta", "state",
"dbholds", "dbc", "list", "atype", "flags", "count", "asize",
"access", "mru", "gmru", "mfu", "gmfu", "l2", "l2_dattr",
"l2_asize", "l2_comp", "aholds"]
dincompat = ["level", "blkid", "offset", "dbsize", "meta", "state", "dbholds",
"dbc", "list", "atype", "flags", "count", "asize", "access",
"mru", "gmru", "mfu", "gmfu", "l2", "l2_dattr", "l2_asize",
"l2_comp", "aholds"]
thdr = ["pool", "objset", "dtype", "cached"]
txhdr = ["pool", "objset", "dtype", "cached", "direct", "indirect",
"bonus", "spill"]
tincompat = ["object", "level", "blkid", "offset", "dbsize", "usize", "meta",
"state", "dbc", "dbholds", "list", "atype", "flags", "count",
"asize", "access", "mru", "gmru", "mfu", "gmfu", "l2", "l2_dattr",
tincompat = ["object", "level", "blkid", "offset", "dbsize", "meta", "state",
"dbc", "dbholds", "list", "atype", "flags", "count", "asize",
"access", "mru", "gmru", "mfu", "gmfu", "l2", "l2_dattr",
"l2_asize", "l2_comp", "aholds", "btype", "data_bs", "meta_bs",
"bsize", "lvls", "dholds", "blocks", "dsize"]
@ -70,7 +70,6 @@ cols = {
"blkid": [8, -1, "block number of buffer"],
"offset": [12, 1024, "offset in object of buffer"],
"dbsize": [7, 1024, "size of buffer"],
"usize": [7, 1024, "size of attached user data"],
"meta": [4, -1, "is this buffer metadata?"],
"state": [5, -1, "state of buffer (read, cached, etc)"],
"dbholds": [7, 1000, "number of holds on buffer"],
@ -400,7 +399,6 @@ def update_dict(d, k, line, labels):
key = line[labels[k]]
dbsize = int(line[labels['dbsize']])
usize = int(line[labels['usize']])
blkid = int(line[labels['blkid']])
level = int(line[labels['level']])
@ -418,7 +416,7 @@ def update_dict(d, k, line, labels):
d[pool][objset][key]['indirect'] = 0
d[pool][objset][key]['spill'] = 0
d[pool][objset][key]['cached'] += dbsize + usize
d[pool][objset][key]['cached'] += dbsize
if blkid == -1:
d[pool][objset][key]['bonus'] += dbsize


@ -269,7 +269,8 @@ main(int argc, char **argv)
return (MOUNT_USAGE);
}
if (sloppy || libzfs_envvar_is_set("ZFS_MOUNT_HELPER")) {
if (!zfsutil || sloppy ||
libzfs_envvar_is_set("ZFS_MOUNT_HELPER")) {
zfs_adjust_mount_options(zhp, mntpoint, mntopts, mtabopt);
}
@ -336,7 +337,7 @@ main(int argc, char **argv)
dataset, mntpoint, mntflags, zfsflags, mntopts, mtabopt);
if (!fake) {
if (!remount && !sloppy &&
if (zfsutil && !sloppy &&
!libzfs_envvar_is_set("ZFS_MOUNT_HELPER")) {
error = zfs_mount_at(zhp, mntopts, mntflags, mntpoint);
if (error) {


@ -1,5 +1,5 @@
raidz_test_CFLAGS = $(AM_CFLAGS) $(KERNEL_CFLAGS)
raidz_test_CPPFLAGS = $(AM_CPPFLAGS) $(LIBZPOOL_CPPFLAGS)
raidz_test_CPPFLAGS = $(AM_CPPFLAGS) $(FORCEDEBUG_CPPFLAGS)
bin_PROGRAMS += raidz_test
CPPCHECKTARGETS += raidz_test


@ -84,10 +84,10 @@ run_gen_bench_impl(const char *impl)
if (rto_opts.rto_expand) {
rm_bench = vdev_raidz_map_alloc_expanded(
&zio_bench,
zio_bench.io_abd,
zio_bench.io_size, zio_bench.io_offset,
rto_opts.rto_ashift, ncols+1, ncols,
fn+1, rto_opts.rto_expand_offset,
0, B_FALSE);
fn+1, rto_opts.rto_expand_offset);
} else {
rm_bench = vdev_raidz_map_alloc(&zio_bench,
BENCH_ASHIFT, ncols, fn+1);
@ -172,10 +172,10 @@ run_rec_bench_impl(const char *impl)
if (rto_opts.rto_expand) {
rm_bench = vdev_raidz_map_alloc_expanded(
&zio_bench,
zio_bench.io_abd,
zio_bench.io_size, zio_bench.io_offset,
BENCH_ASHIFT, ncols+1, ncols,
PARITY_PQR,
rto_opts.rto_expand_offset, 0, B_FALSE);
PARITY_PQR, rto_opts.rto_expand_offset);
} else {
rm_bench = vdev_raidz_map_alloc(&zio_bench,
BENCH_ASHIFT, ncols, PARITY_PQR);


@ -327,12 +327,14 @@ init_raidz_golden_map(raidz_test_opts_t *opts, const int parity)
if (opts->rto_expand) {
opts->rm_golden =
vdev_raidz_map_alloc_expanded(opts->zio_golden,
vdev_raidz_map_alloc_expanded(opts->zio_golden->io_abd,
opts->zio_golden->io_size, opts->zio_golden->io_offset,
opts->rto_ashift, total_ncols+1, total_ncols,
parity, opts->rto_expand_offset, 0, B_FALSE);
rm_test = vdev_raidz_map_alloc_expanded(zio_test,
parity, opts->rto_expand_offset);
rm_test = vdev_raidz_map_alloc_expanded(zio_test->io_abd,
zio_test->io_size, zio_test->io_offset,
opts->rto_ashift, total_ncols+1, total_ncols,
parity, opts->rto_expand_offset, 0, B_FALSE);
parity, opts->rto_expand_offset);
} else {
opts->rm_golden = vdev_raidz_map_alloc(opts->zio_golden,
opts->rto_ashift, total_ncols, parity);
@ -359,6 +361,187 @@ init_raidz_golden_map(raidz_test_opts_t *opts, const int parity)
return (err);
}
/*
* If reflow is not in progress, reflow_offset should be UINT64_MAX.
* For each row, if the row is entirely before reflow_offset, it will
* come from the new location. Otherwise this row will come from the
* old location. Therefore, rows that straddle the reflow_offset will
* come from the old location.
*
* NOTE: Until raidz expansion is implemented this function is only
* needed by raidz_test.c to test the multi-row raidz_map_t functionality.
*/
raidz_map_t *
vdev_raidz_map_alloc_expanded(abd_t *abd, uint64_t size, uint64_t offset,
uint64_t ashift, uint64_t physical_cols, uint64_t logical_cols,
uint64_t nparity, uint64_t reflow_offset)
{
/* The zio's size in units of the vdev's minimum sector size. */
uint64_t s = size >> ashift;
uint64_t q, r, bc, devidx, asize = 0, tot;
/*
* "Quotient": The number of data sectors for this stripe on all but
* the "big column" child vdevs that also contain "remainder" data.
* AKA "full rows"
*/
q = s / (logical_cols - nparity);
/*
* "Remainder": The number of partial stripe data sectors in this I/O.
* This will add a sector to some, but not all, child vdevs.
*/
r = s - q * (logical_cols - nparity);
/* The number of "big columns" - those which contain remainder data. */
bc = (r == 0 ? 0 : r + nparity);
/*
* The total number of data and parity sectors associated with
* this I/O.
*/
tot = s + nparity * (q + (r == 0 ? 0 : 1));
/* How many rows contain data (not skip) */
uint64_t rows = howmany(tot, logical_cols);
int cols = MIN(tot, logical_cols);
raidz_map_t *rm = kmem_zalloc(offsetof(raidz_map_t, rm_row[rows]),
KM_SLEEP);
rm->rm_nrows = rows;
for (uint64_t row = 0; row < rows; row++) {
raidz_row_t *rr = kmem_alloc(offsetof(raidz_row_t,
rr_col[cols]), KM_SLEEP);
rm->rm_row[row] = rr;
/* The starting RAIDZ (parent) vdev sector of the row. */
uint64_t b = (offset >> ashift) + row * logical_cols;
/*
* If we are in the middle of a reflow, and any part of this
* row has not been copied, then use the old location of
* this row.
*/
int row_phys_cols = physical_cols;
if (b + (logical_cols - nparity) > reflow_offset >> ashift)
row_phys_cols--;
/* starting child of this row */
uint64_t child_id = b % row_phys_cols;
/* The starting byte offset on each child vdev. */
uint64_t child_offset = (b / row_phys_cols) << ashift;
/*
* We set cols to the entire width of the block, even
* if this row is shorter. This is needed because parity
* generation (for Q and R) needs to know the entire width,
* because it treats the short row as though it was
* full-width (and the "phantom" sectors were zero-filled).
*
* Another approach to this would be to set cols shorter
* (to just the number of columns that we might do i/o to)
* and have another mechanism to tell the parity generation
* about the "entire width". Reconstruction (at least
* vdev_raidz_reconstruct_general()) would also need to
* know about the "entire width".
*/
rr->rr_cols = cols;
rr->rr_bigcols = bc;
rr->rr_missingdata = 0;
rr->rr_missingparity = 0;
rr->rr_firstdatacol = nparity;
rr->rr_abd_empty = NULL;
rr->rr_nempty = 0;
for (int c = 0; c < rr->rr_cols; c++, child_id++) {
if (child_id >= row_phys_cols) {
child_id -= row_phys_cols;
child_offset += 1ULL << ashift;
}
rr->rr_col[c].rc_devidx = child_id;
rr->rr_col[c].rc_offset = child_offset;
rr->rr_col[c].rc_orig_data = NULL;
rr->rr_col[c].rc_error = 0;
rr->rr_col[c].rc_tried = 0;
rr->rr_col[c].rc_skipped = 0;
rr->rr_col[c].rc_need_orig_restore = B_FALSE;
uint64_t dc = c - rr->rr_firstdatacol;
if (c < rr->rr_firstdatacol) {
rr->rr_col[c].rc_size = 1ULL << ashift;
rr->rr_col[c].rc_abd =
abd_alloc_linear(rr->rr_col[c].rc_size,
B_TRUE);
} else if (row == rows - 1 && bc != 0 && c >= bc) {
/*
* Past the end; this is for parity generation.
*/
rr->rr_col[c].rc_size = 0;
rr->rr_col[c].rc_abd = NULL;
} else {
/*
* "data column" (col excluding parity)
* Add an ASCII art diagram here
*/
uint64_t off;
if (c < bc || r == 0) {
off = dc * rows + row;
} else {
off = r * rows +
(dc - r) * (rows - 1) + row;
}
rr->rr_col[c].rc_size = 1ULL << ashift;
rr->rr_col[c].rc_abd = abd_get_offset_struct(
&rr->rr_col[c].rc_abdstruct,
abd, off << ashift, 1 << ashift);
}
asize += rr->rr_col[c].rc_size;
}
/*
* If all data stored spans all columns, there's a danger that
* parity will always be on the same device and, since parity
* isn't read during normal operation, that that device's I/O
* bandwidth won't be used effectively. We therefore switch
* the parity every 1MB.
*
* ...at least that was, ostensibly, the theory. As a practical
* matter unless we juggle the parity between all devices
* evenly, we won't see any benefit. Further, occasional writes
* that aren't a multiple of the LCM of the number of children
* and the minimum stripe width are sufficient to avoid pessimal
* behavior. Unfortunately, this decision created an implicit
* on-disk format requirement that we need to support for all
* eternity, but only for single-parity RAID-Z.
*
* If we intend to skip a sector in the zeroth column for
* padding we must make sure to note this swap. We will never
* intend to skip the first column since at least one data and
* one parity column must appear in each row.
*/
if (rr->rr_firstdatacol == 1 && rr->rr_cols > 1 &&
(offset & (1ULL << 20))) {
ASSERT(rr->rr_cols >= 2);
ASSERT(rr->rr_col[0].rc_size == rr->rr_col[1].rc_size);
devidx = rr->rr_col[0].rc_devidx;
uint64_t o = rr->rr_col[0].rc_offset;
rr->rr_col[0].rc_devidx = rr->rr_col[1].rc_devidx;
rr->rr_col[0].rc_offset = rr->rr_col[1].rc_offset;
rr->rr_col[1].rc_devidx = devidx;
rr->rr_col[1].rc_offset = o;
}
}
ASSERT3U(asize, ==, tot << ashift);
/* init RAIDZ parity ops */
rm->rm_ops = vdev_raidz_math_get_ops();
return (rm);
}
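To make the row and column bookkeeping in vdev_raidz_map_alloc_expanded() easier to follow, here is the same arithmetic as a short Python sketch: sector count, full rows, remainder ('big') columns, and the per-row choice of physical width relative to reflow_offset. The inputs are invented; the formulas mirror the C code above.

    # Invented geometry: a 128 KiB write to a 5-wide map laid out on 6 children,
    # as the raidz_test callers do (ncols+1 physical, ncols logical).
    size = 128 * 1024
    offset = 0
    ashift = 12                                  # 4 KiB sectors
    physical_cols, logical_cols = 6, 5
    nparity = 1
    reflow_offset = (1 << 64) - 1                # UINT64_MAX: no reflow in progress

    s = size >> ashift                           # sectors in this zio
    q = s // (logical_cols - nparity)            # full rows ("quotient")
    r = s - q * (logical_cols - nparity)         # partial-row sectors ("remainder")
    bc = 0 if r == 0 else r + nparity            # "big columns" holding remainder data
    tot = s + nparity * (q + (0 if r == 0 else 1))   # data + parity sectors overall
    rows = -(-tot // logical_cols)               # howmany(tot, logical_cols)
    cols = min(tot, logical_cols)

    for row in range(rows):
        b = (offset >> ashift) + row * logical_cols
        row_phys_cols = physical_cols
        # Rows not yet fully copied by the reflow still live at the old width.
        if b + (logical_cols - nparity) > (reflow_offset >> ashift):
            row_phys_cols -= 1
        print('row', row, 'starts at child', b % row_phys_cols,
              'offset', (b // row_phys_cols) << ashift,
              'using', row_phys_cols, 'physical columns')

    print('sectors:', s, 'rows:', rows, 'cols:', cols, 'big cols:', bc)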
static raidz_map_t *
init_raidz_map(raidz_test_opts_t *opts, zio_t **zio, const int parity)
{
@ -378,9 +561,10 @@ init_raidz_map(raidz_test_opts_t *opts, zio_t **zio, const int parity)
init_zio_abd(*zio);
if (opts->rto_expand) {
rm = vdev_raidz_map_alloc_expanded(*zio,
rm = vdev_raidz_map_alloc_expanded((*zio)->io_abd,
(*zio)->io_size, (*zio)->io_offset,
opts->rto_ashift, total_ncols+1, total_ncols,
parity, opts->rto_expand_offset, 0, B_FALSE);
parity, opts->rto_expand_offset);
} else {
rm = vdev_raidz_map_alloc(*zio, opts->rto_ashift,
total_ncols, parity);

View File

@ -119,4 +119,7 @@ void init_zio_abd(zio_t *zio);
void run_raidz_benchmark(void);
struct raidz_map *vdev_raidz_map_alloc_expanded(abd_t *, uint64_t, uint64_t,
uint64_t, uint64_t, uint64_t, uint64_t, uint64_t);
#endif /* RAIDZ_TEST_H */


@ -1,4 +1,4 @@
zdb_CPPFLAGS = $(AM_CPPFLAGS) $(LIBZPOOL_CPPFLAGS)
zdb_CPPFLAGS = $(AM_CPPFLAGS) $(FORCEDEBUG_CPPFLAGS)
zdb_CFLAGS = $(AM_CFLAGS) $(LIBCRYPTO_CFLAGS)
sbin_PROGRAMS += zdb
@ -10,7 +10,6 @@ zdb_SOURCES = \
%D%/zdb_il.c
zdb_LDADD = \
libzdb.la \
libzpool.la \
libzfs_core.la \
libnvpair.la

File diff suppressed because it is too large.


@ -168,25 +168,23 @@ zil_prt_rec_write(zilog_t *zilog, int txtype, const void *arg)
(u_longlong_t)lr->lr_foid, (u_longlong_t)lr->lr_offset,
(u_longlong_t)lr->lr_length);
if (txtype == TX_WRITE2 || verbose < 4)
if (txtype == TX_WRITE2 || verbose < 5)
return;
if (lr->lr_common.lrc_reclen == sizeof (lr_write_t)) {
(void) printf("%shas blkptr, %s\n", tab_prefix,
!BP_IS_HOLE(bp) && BP_GET_LOGICAL_BIRTH(bp) >=
spa_min_claim_txg(zilog->zl_spa) ?
!BP_IS_HOLE(bp) &&
bp->blk_birth >= spa_min_claim_txg(zilog->zl_spa) ?
"will claim" : "won't claim");
print_log_bp(bp, tab_prefix);
if (verbose < 5)
return;
if (BP_IS_HOLE(bp)) {
(void) printf("\t\t\tLSIZE 0x%llx\n",
(u_longlong_t)BP_GET_LSIZE(bp));
(void) printf("%s<hole>\n", tab_prefix);
return;
}
if (BP_GET_LOGICAL_BIRTH(bp) < zilog->zl_header->zh_claim_txg) {
if (bp->blk_birth < zilog->zl_header->zh_claim_txg) {
(void) printf("%s<block already committed>\n",
tab_prefix);
return;
@ -204,9 +202,6 @@ zil_prt_rec_write(zilog_t *zilog, int txtype, const void *arg)
if (error)
goto out;
} else {
if (verbose < 5)
return;
/* data is stored after the end of the lr_write record */
data = abd_alloc(lr->lr_length, B_FALSE);
abd_copy_from_buf(data, lr + 1, lr->lr_length);
@ -222,28 +217,6 @@ out:
abd_free(data);
}
static void
zil_prt_rec_write_enc(zilog_t *zilog, int txtype, const void *arg)
{
(void) txtype;
const lr_write_t *lr = arg;
const blkptr_t *bp = &lr->lr_blkptr;
int verbose = MAX(dump_opt['d'], dump_opt['i']);
(void) printf("%s(encrypted)\n", tab_prefix);
if (verbose < 4)
return;
if (lr->lr_common.lrc_reclen == sizeof (lr_write_t)) {
(void) printf("%shas blkptr, %s\n", tab_prefix,
!BP_IS_HOLE(bp) && BP_GET_LOGICAL_BIRTH(bp) >=
spa_min_claim_txg(zilog->zl_spa) ?
"will claim" : "won't claim");
print_log_bp(bp, tab_prefix);
}
}
static void
zil_prt_rec_truncate(zilog_t *zilog, int txtype, const void *arg)
{
@ -339,34 +312,11 @@ zil_prt_rec_clone_range(zilog_t *zilog, int txtype, const void *arg)
{
(void) zilog, (void) txtype;
const lr_clone_range_t *lr = arg;
int verbose = MAX(dump_opt['d'], dump_opt['i']);
(void) printf("%sfoid %llu, offset %llx, length %llx, blksize %llx\n",
tab_prefix, (u_longlong_t)lr->lr_foid, (u_longlong_t)lr->lr_offset,
(u_longlong_t)lr->lr_length, (u_longlong_t)lr->lr_blksz);
if (verbose < 4)
return;
for (unsigned int i = 0; i < lr->lr_nbps; i++) {
(void) printf("%s[%u/%llu] ", tab_prefix, i + 1,
(u_longlong_t)lr->lr_nbps);
print_log_bp(&lr->lr_bps[i], "");
}
}
static void
zil_prt_rec_clone_range_enc(zilog_t *zilog, int txtype, const void *arg)
{
(void) zilog, (void) txtype;
const lr_clone_range_t *lr = arg;
int verbose = MAX(dump_opt['d'], dump_opt['i']);
(void) printf("%s(encrypted)\n", tab_prefix);
if (verbose < 4)
return;
for (unsigned int i = 0; i < lr->lr_nbps; i++) {
(void) printf("%s[%u/%llu] ", tab_prefix, i + 1,
(u_longlong_t)lr->lr_nbps);
@ -377,7 +327,6 @@ zil_prt_rec_clone_range_enc(zilog_t *zilog, int txtype, const void *arg)
typedef void (*zil_prt_rec_func_t)(zilog_t *, int, const void *);
typedef struct zil_rec_info {
zil_prt_rec_func_t zri_print;
zil_prt_rec_func_t zri_print_enc;
const char *zri_name;
uint64_t zri_count;
} zil_rec_info_t;
@ -392,9 +341,7 @@ static zil_rec_info_t zil_rec_info[TX_MAX_TYPE] = {
{.zri_print = zil_prt_rec_remove, .zri_name = "TX_RMDIR "},
{.zri_print = zil_prt_rec_link, .zri_name = "TX_LINK "},
{.zri_print = zil_prt_rec_rename, .zri_name = "TX_RENAME "},
{.zri_print = zil_prt_rec_write,
.zri_print_enc = zil_prt_rec_write_enc,
.zri_name = "TX_WRITE "},
{.zri_print = zil_prt_rec_write, .zri_name = "TX_WRITE "},
{.zri_print = zil_prt_rec_truncate, .zri_name = "TX_TRUNCATE "},
{.zri_print = zil_prt_rec_setattr, .zri_name = "TX_SETATTR "},
{.zri_print = zil_prt_rec_acl, .zri_name = "TX_ACL_V0 "},
@ -411,7 +358,6 @@ static zil_rec_info_t zil_rec_info[TX_MAX_TYPE] = {
{.zri_print = zil_prt_rec_rename, .zri_name = "TX_RENAME_EXCHANGE "},
{.zri_print = zil_prt_rec_rename, .zri_name = "TX_RENAME_WHITEOUT "},
{.zri_print = zil_prt_rec_clone_range,
.zri_print_enc = zil_prt_rec_clone_range_enc,
.zri_name = "TX_CLONE_RANGE "},
};
@ -438,8 +384,6 @@ print_log_record(zilog_t *zilog, const lr_t *lr, void *arg, uint64_t claim_txg)
if (txtype && verbose >= 3) {
if (!zilog->zl_os->os_encrypted) {
zil_rec_info[txtype].zri_print(zilog, txtype, lr);
} else if (zil_rec_info[txtype].zri_print_enc) {
zil_rec_info[txtype].zri_print_enc(zilog, txtype, lr);
} else {
(void) printf("%s(encrypted)\n", tab_prefix);
}
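The dispatch added above gives each ZIL record type an optional second printer for encrypted objsets: use zri_print_enc when it exists, otherwise fall back to a bare '(encrypted)' line. A tiny sketch of that pattern with stand-in printers:

    # Stand-in printers; the real table maps ZIL record types to print functions.
    def prt_write(lr):
        print('TX_WRITE', lr)

    def prt_write_enc(lr):
        print('TX_WRITE (encrypted header only)', lr)

    zil_rec_info = {
        'TX_WRITE': {'print': prt_write, 'print_enc': prt_write_enc},
        'TX_SETATTR': {'print': lambda lr: print('TX_SETATTR', lr)},  # no enc printer
    }

    def print_log_record(txtype, lr, os_encrypted):
        rec = zil_rec_info[txtype]
        if not os_encrypted:
            rec['print'](lr)
        elif 'print_enc' in rec:
            rec['print_enc'](lr)      # print what is safe without the key
        else:
            print('(encrypted)')

    print_log_record('TX_WRITE', {'foid': 7}, os_encrypted=True)
    print_log_record('TX_SETATTR', {'foid': 7}, os_encrypted=True)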
@ -473,7 +417,7 @@ print_log_block(zilog_t *zilog, const blkptr_t *bp, void *arg,
if (claim_txg != 0)
claim = "already claimed";
else if (BP_GET_LOGICAL_BIRTH(bp) >= spa_min_claim_txg(zilog->zl_spa))
else if (bp->blk_birth >= spa_min_claim_txg(zilog->zl_spa))
claim = "will claim";
else
claim = "won't claim";


@ -22,7 +22,6 @@
* Copyright (c) 2004, 2010, Oracle and/or its affiliates. All rights reserved.
*
* Copyright (c) 2016, Intel Corporation.
* Copyright (c) 2023, Klara Inc.
*/
/*
@ -232,6 +231,28 @@ fmd_prop_get_int32(fmd_hdl_t *hdl, const char *name)
if (strcmp(name, "spare_on_remove") == 0)
return (1);
if (strcmp(name, "io_N") == 0 || strcmp(name, "checksum_N") == 0)
return (10); /* N = 10 events */
return (0);
}
int64_t
fmd_prop_get_int64(fmd_hdl_t *hdl, const char *name)
{
(void) hdl;
/*
* These can be looked up in mp->modinfo->fmdi_props
* For now we just hard code for phase 2. In the
* future, there can be a ZED based override.
*/
if (strcmp(name, "remove_timeout") == 0)
return (15ULL * 1000ULL * 1000ULL * 1000ULL); /* 15 sec */
if (strcmp(name, "io_T") == 0 || strcmp(name, "checksum_T") == 0)
return (1000ULL * 1000ULL * 1000ULL * 600ULL); /* 10 min */
return (0);
}
@ -514,19 +535,6 @@ fmd_serd_exists(fmd_hdl_t *hdl, const char *name)
return (fmd_serd_eng_lookup(&mp->mod_serds, name) != NULL);
}
int
fmd_serd_active(fmd_hdl_t *hdl, const char *name)
{
fmd_module_t *mp = (fmd_module_t *)hdl;
fmd_serd_eng_t *sgp;
if ((sgp = fmd_serd_eng_lookup(&mp->mod_serds, name)) == NULL) {
zed_log_msg(LOG_ERR, "serd engine '%s' does not exist", name);
return (0);
}
return (fmd_serd_eng_fired(sgp) || !fmd_serd_eng_empty(sgp));
}
void
fmd_serd_reset(fmd_hdl_t *hdl, const char *name)
{
@ -535,10 +543,12 @@ fmd_serd_reset(fmd_hdl_t *hdl, const char *name)
if ((sgp = fmd_serd_eng_lookup(&mp->mod_serds, name)) == NULL) {
zed_log_msg(LOG_ERR, "serd engine '%s' does not exist", name);
} else {
fmd_serd_eng_reset(sgp);
fmd_hdl_debug(hdl, "serd_reset %s", name);
return;
}
fmd_serd_eng_reset(sgp);
fmd_hdl_debug(hdl, "serd_reset %s", name);
}
int
@ -546,21 +556,16 @@ fmd_serd_record(fmd_hdl_t *hdl, const char *name, fmd_event_t *ep)
{
fmd_module_t *mp = (fmd_module_t *)hdl;
fmd_serd_eng_t *sgp;
int err;
if ((sgp = fmd_serd_eng_lookup(&mp->mod_serds, name)) == NULL) {
zed_log_msg(LOG_ERR, "failed to add record to SERD engine '%s'",
name);
return (0);
}
return (fmd_serd_eng_record(sgp, ep->ev_hrt));
}
err = fmd_serd_eng_record(sgp, ep->ev_hrt);
void
fmd_serd_gc(fmd_hdl_t *hdl)
{
fmd_module_t *mp = (fmd_module_t *)hdl;
fmd_serd_hash_apply(&mp->mod_serds, fmd_serd_eng_gc, NULL);
return (err);
}
/* FMD Timers */
@ -574,7 +579,7 @@ _timer_notify(union sigval sv)
const fmd_hdl_ops_t *ops = mp->mod_info->fmdi_ops;
struct itimerspec its;
fmd_hdl_debug(hdl, "%s timer fired (%p)", mp->mod_name, ftp->ft_tid);
fmd_hdl_debug(hdl, "timer fired (%p)", ftp->ft_tid);
/* disarm the timer */
memset(&its, 0, sizeof (struct itimerspec));


@ -151,6 +151,7 @@ extern void fmd_hdl_vdebug(fmd_hdl_t *, const char *, va_list);
extern void fmd_hdl_debug(fmd_hdl_t *, const char *, ...);
extern int32_t fmd_prop_get_int32(fmd_hdl_t *, const char *);
extern int64_t fmd_prop_get_int64(fmd_hdl_t *, const char *);
#define FMD_STAT_NOALLOC 0x0 /* fmd should use caller's memory */
#define FMD_STAT_ALLOC 0x1 /* fmd should allocate stats memory */
@ -194,12 +195,10 @@ extern size_t fmd_buf_size(fmd_hdl_t *, fmd_case_t *, const char *);
extern void fmd_serd_create(fmd_hdl_t *, const char *, uint_t, hrtime_t);
extern void fmd_serd_destroy(fmd_hdl_t *, const char *);
extern int fmd_serd_exists(fmd_hdl_t *, const char *);
extern int fmd_serd_active(fmd_hdl_t *, const char *);
extern void fmd_serd_reset(fmd_hdl_t *, const char *);
extern int fmd_serd_record(fmd_hdl_t *, const char *, fmd_event_t *);
extern int fmd_serd_fired(fmd_hdl_t *, const char *);
extern int fmd_serd_empty(fmd_hdl_t *, const char *);
extern void fmd_serd_gc(fmd_hdl_t *);
extern id_t fmd_timer_install(fmd_hdl_t *, void *, fmd_event_t *, hrtime_t);
extern void fmd_timer_remove(fmd_hdl_t *, id_t);


@ -310,9 +310,8 @@ fmd_serd_eng_reset(fmd_serd_eng_t *sgp)
}
void
fmd_serd_eng_gc(fmd_serd_eng_t *sgp, void *arg)
fmd_serd_eng_gc(fmd_serd_eng_t *sgp)
{
(void) arg;
fmd_serd_elem_t *sep, *nep;
hrtime_t hrt;


@ -77,7 +77,7 @@ extern int fmd_serd_eng_fired(fmd_serd_eng_t *);
extern int fmd_serd_eng_empty(fmd_serd_eng_t *);
extern void fmd_serd_eng_reset(fmd_serd_eng_t *);
extern void fmd_serd_eng_gc(fmd_serd_eng_t *, void *);
extern void fmd_serd_eng_gc(fmd_serd_eng_t *);
#ifdef __cplusplus
}


@ -23,7 +23,6 @@
* Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright 2015 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2016, Intel Corporation.
* Copyright (c) 2023, Klara Inc.
*/
#include <stddef.h>
@ -48,16 +47,11 @@
#define DEFAULT_CHECKSUM_T 600 /* seconds */
#define DEFAULT_IO_N 10 /* events */
#define DEFAULT_IO_T 600 /* seconds */
#define DEFAULT_SLOW_IO_N 10 /* events */
#define DEFAULT_SLOW_IO_T 30 /* seconds */
#define CASE_GC_TIMEOUT_SECS 43200 /* 12 hours */
/*
* Our serd engines are named in the following format:
* 'zfs_<pool_guid>_<vdev_guid>_{checksum,io,slow_io}'
* This #define reserves enough space for two 64-bit hex values plus the
* length of the longest string.
* Our serd engines are named 'zfs_<pool_guid>_<vdev_guid>_{checksum,io}'. This
* #define reserves enough space for two 64-bit hex values plus the length of
* the longest string.
*/
#define MAX_SERDLEN (16 * 2 + sizeof ("zfs___checksum"))
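MAX_SERDLEN budgets for names of the form zfs_<pool_guid>_<vdev_guid>_<type>: two 64-bit GUIDs at up to 16 hex digits each plus the longest type string. A quick sketch checking that budget; the GUIDs are invented and the hex formatting is assumed from the comment above:

    # Invented GUIDs; names follow the zfs_<pool_guid>_<vdev_guid>_<type> pattern.
    pool_guid = 0xdeadbeefcafef00d
    vdev_guid = 0x0123456789abcdef

    def zfs_serd_name(pool_guid, vdev_guid, kind):
        return 'zfs_{0:x}_{1:x}_{2}'.format(pool_guid, vdev_guid, kind)

    # 16 hex digits per GUID plus the longest type string, mirroring MAX_SERDLEN.
    MAX_SERDLEN = 16 * 2 + len('zfs___checksum') + 1   # +1 for the NUL in C

    for kind in ('checksum', 'io', 'slow_io'):
        name = zfs_serd_name(pool_guid, vdev_guid, kind)
        print(name, len(name), '<=', MAX_SERDLEN - 1, len(name) <= MAX_SERDLEN - 1)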
@ -74,7 +68,6 @@ typedef struct zfs_case_data {
int zc_pool_state;
char zc_serd_checksum[MAX_SERDLEN];
char zc_serd_io[MAX_SERDLEN];
char zc_serd_slow_io[MAX_SERDLEN];
int zc_has_remove_timer;
} zfs_case_data_t;
@ -121,8 +114,7 @@ zfs_de_stats_t zfs_stats = {
{ "resource_drops", FMD_TYPE_UINT64, "resource related ereports" }
};
/* wait 15 seconds after a removal */
static hrtime_t zfs_remove_timeout = SEC2NSEC(15);
static hrtime_t zfs_remove_timeout;
uu_list_pool_t *zfs_case_pool;
uu_list_t *zfs_cases;
@ -132,8 +124,6 @@ uu_list_t *zfs_cases;
#define ZFS_MAKE_EREPORT(type) \
FM_EREPORT_CLASS "." ZFS_ERROR_CLASS "." type
static void zfs_purge_cases(fmd_hdl_t *hdl);
/*
* Write out the persistent representation of an active case.
*/
@ -180,42 +170,6 @@ zfs_case_unserialize(fmd_hdl_t *hdl, fmd_case_t *cp)
return (zcp);
}
/*
* count other unique slow-io cases in a pool
*/
static uint_t
zfs_other_slow_cases(fmd_hdl_t *hdl, const zfs_case_data_t *zfs_case)
{
zfs_case_t *zcp;
uint_t cases = 0;
static hrtime_t next_check = 0;
/*
* Note that plumbing in some external GC would require adding locking,
* since most of this module code is not thread safe and assumes there
* is only one thread running against the module. So we perform GC here
* inline periodically so that future delay induced faults will be
* possible once the issue causing multiple vdev delays is resolved.
*/
if (gethrestime_sec() > next_check) {
/* Periodically purge old SERD entries and stale cases */
fmd_serd_gc(hdl);
zfs_purge_cases(hdl);
next_check = gethrestime_sec() + CASE_GC_TIMEOUT_SECS;
}
for (zcp = uu_list_first(zfs_cases); zcp != NULL;
zcp = uu_list_next(zfs_cases, zcp)) {
if (zcp->zc_data.zc_pool_guid == zfs_case->zc_pool_guid &&
zcp->zc_data.zc_vdev_guid != zfs_case->zc_vdev_guid &&
zcp->zc_data.zc_serd_slow_io[0] != '\0' &&
fmd_serd_active(hdl, zcp->zc_data.zc_serd_slow_io)) {
cases++;
}
}
return (cases);
}
/*
* Iterate over any active cases. If any cases are associated with a pool or
* vdev which is no longer present on the system, close the associated case.
@ -422,14 +376,6 @@ zfs_serd_name(char *buf, uint64_t pool_guid, uint64_t vdev_guid,
(long long unsigned int)vdev_guid, type);
}
static void
zfs_case_retire(fmd_hdl_t *hdl, zfs_case_t *zcp)
{
fmd_hdl_debug(hdl, "retiring case");
fmd_case_close(hdl, zcp->zc_case);
}
/*
* Solve a given ZFS case. This first checks to make sure the diagnosis is
* still valid, as well as cleaning up any pending timer associated with the
@ -686,7 +632,9 @@ zfs_fm_recv(fmd_hdl_t *hdl, fmd_event_t *ep, nvlist_t *nvl, const char *class)
if (strcmp(class,
ZFS_MAKE_EREPORT(FM_EREPORT_ZFS_DATA)) == 0 ||
strcmp(class,
ZFS_MAKE_EREPORT(FM_EREPORT_ZFS_CONFIG_CACHE_WRITE)) == 0) {
ZFS_MAKE_EREPORT(FM_EREPORT_ZFS_CONFIG_CACHE_WRITE)) == 0 ||
strcmp(class,
ZFS_MAKE_EREPORT(FM_EREPORT_ZFS_DELAY)) == 0) {
zfs_stats.resource_drops.fmds_value.ui64++;
return;
}
@ -754,9 +702,6 @@ zfs_fm_recv(fmd_hdl_t *hdl, fmd_event_t *ep, nvlist_t *nvl, const char *class)
if (zcp->zc_data.zc_serd_checksum[0] != '\0')
fmd_serd_reset(hdl,
zcp->zc_data.zc_serd_checksum);
if (zcp->zc_data.zc_serd_slow_io[0] != '\0')
fmd_serd_reset(hdl,
zcp->zc_data.zc_serd_slow_io);
} else if (fmd_nvl_class_match(hdl, nvl,
ZFS_MAKE_RSRC(FM_RESOURCE_STATECHANGE))) {
uint64_t state = 0;
@ -785,11 +730,7 @@ zfs_fm_recv(fmd_hdl_t *hdl, fmd_event_t *ep, nvlist_t *nvl, const char *class)
if (fmd_case_solved(hdl, zcp->zc_case))
return;
if (vdev_guid)
fmd_hdl_debug(hdl, "error event '%s', vdev %llu", class,
vdev_guid);
else
fmd_hdl_debug(hdl, "error event '%s'", class);
fmd_hdl_debug(hdl, "error event '%s'", class);
/*
* Determine if we should solve the case and generate a fault. We solve
@ -838,12 +779,11 @@ zfs_fm_recv(fmd_hdl_t *hdl, fmd_event_t *ep, nvlist_t *nvl, const char *class)
fmd_nvl_class_match(hdl, nvl,
ZFS_MAKE_EREPORT(FM_EREPORT_ZFS_IO_FAILURE)) ||
fmd_nvl_class_match(hdl, nvl,
ZFS_MAKE_EREPORT(FM_EREPORT_ZFS_DELAY)) ||
fmd_nvl_class_match(hdl, nvl,
ZFS_MAKE_EREPORT(FM_EREPORT_ZFS_PROBE_FAILURE))) {
const char *failmode = NULL;
boolean_t checkremove = B_FALSE;
uint32_t pri = 0;
int32_t flags = 0;
/*
* If this is a checksum or I/O error, then toss it into the
@ -874,75 +814,20 @@ zfs_fm_recv(fmd_hdl_t *hdl, fmd_event_t *ep, nvlist_t *nvl, const char *class)
}
if (fmd_serd_record(hdl, zcp->zc_data.zc_serd_io, ep))
checkremove = B_TRUE;
} else if (fmd_nvl_class_match(hdl, nvl,
ZFS_MAKE_EREPORT(FM_EREPORT_ZFS_DELAY))) {
uint64_t slow_io_n, slow_io_t;
/*
* Create a slow io SERD engine when the VDEV has the
* 'vdev_slow_io_n' and 'vdev_slow_io_t' properties.
*/
if (zcp->zc_data.zc_serd_slow_io[0] == '\0' &&
nvlist_lookup_uint64(nvl,
FM_EREPORT_PAYLOAD_ZFS_VDEV_SLOW_IO_N,
&slow_io_n) == 0 &&
nvlist_lookup_uint64(nvl,
FM_EREPORT_PAYLOAD_ZFS_VDEV_SLOW_IO_T,
&slow_io_t) == 0) {
zfs_serd_name(zcp->zc_data.zc_serd_slow_io,
pool_guid, vdev_guid, "slow_io");
fmd_serd_create(hdl,
zcp->zc_data.zc_serd_slow_io,
slow_io_n,
SEC2NSEC(slow_io_t));
zfs_case_serialize(zcp);
}
/* Pass event to SERD engine and see if this triggers */
if (zcp->zc_data.zc_serd_slow_io[0] != '\0' &&
fmd_serd_record(hdl, zcp->zc_data.zc_serd_slow_io,
ep)) {
/*
* Ignore a slow io diagnosis when other
* VDEVs in the pool show signs of being slow.
*/
if (zfs_other_slow_cases(hdl, &zcp->zc_data)) {
zfs_case_retire(hdl, zcp);
fmd_hdl_debug(hdl, "pool %llu has "
"multiple slow io cases -- skip "
"degrading vdev %llu",
(u_longlong_t)
zcp->zc_data.zc_pool_guid,
(u_longlong_t)
zcp->zc_data.zc_vdev_guid);
} else {
zfs_case_solve(hdl, zcp,
"fault.fs.zfs.vdev.slow_io");
}
}
} else if (fmd_nvl_class_match(hdl, nvl,
ZFS_MAKE_EREPORT(FM_EREPORT_ZFS_CHECKSUM))) {
uint64_t flags = 0;
int32_t flags32 = 0;
/*
* We ignore ereports for checksum errors generated by
* scrub/resilver I/O to avoid potentially further
* degrading the pool while it's being repaired.
*
* Note that FM_EREPORT_PAYLOAD_ZFS_ZIO_FLAGS used to
* be int32. To allow newer zed to work on older
* kernels, if we don't find the flags, we look for
* the older ones too.
*/
if (((nvlist_lookup_uint32(nvl,
FM_EREPORT_PAYLOAD_ZFS_ZIO_PRIORITY, &pri) == 0) &&
(pri == ZIO_PRIORITY_SCRUB ||
pri == ZIO_PRIORITY_REBUILD)) ||
((nvlist_lookup_uint64(nvl,
FM_EREPORT_PAYLOAD_ZFS_ZIO_FLAGS, &flags) == 0) &&
(flags & (ZIO_FLAG_SCRUB | ZIO_FLAG_RESILVER))) ||
((nvlist_lookup_int32(nvl,
FM_EREPORT_PAYLOAD_ZFS_ZIO_FLAGS, &flags32) == 0) &&
(flags32 & (ZIO_FLAG_SCRUB | ZIO_FLAG_RESILVER)))) {
FM_EREPORT_PAYLOAD_ZFS_ZIO_FLAGS, &flags) == 0) &&
(flags & (ZIO_FLAG_SCRUB | ZIO_FLAG_RESILVER)))) {
fmd_hdl_debug(hdl, "ignoring '%s' for "
"scrub/resilver I/O", class);
return;
@ -1039,8 +924,6 @@ zfs_fm_close(fmd_hdl_t *hdl, fmd_case_t *cs)
fmd_serd_destroy(hdl, zcp->zc_data.zc_serd_checksum);
if (zcp->zc_data.zc_serd_io[0] != '\0')
fmd_serd_destroy(hdl, zcp->zc_data.zc_serd_io);
if (zcp->zc_data.zc_serd_slow_io[0] != '\0')
fmd_serd_destroy(hdl, zcp->zc_data.zc_serd_slow_io);
if (zcp->zc_data.zc_has_remove_timer)
fmd_timer_remove(hdl, zcp->zc_remove_timer);
@ -1049,15 +932,30 @@ zfs_fm_close(fmd_hdl_t *hdl, fmd_case_t *cs)
fmd_hdl_free(hdl, zcp, sizeof (zfs_case_t));
}
/*
* We use the fmd gc entry point to look for old cases that no longer apply.
* This allows us to keep our set of case data small in a long running system.
*/
static void
zfs_fm_gc(fmd_hdl_t *hdl)
{
zfs_purge_cases(hdl);
}
static const fmd_hdl_ops_t fmd_ops = {
zfs_fm_recv, /* fmdo_recv */
zfs_fm_timeout, /* fmdo_timeout */
zfs_fm_close, /* fmdo_close */
NULL, /* fmdo_stats */
NULL, /* fmdo_gc */
zfs_fm_gc, /* fmdo_gc */
};
static const fmd_prop_t fmd_props[] = {
{ "checksum_N", FMD_TYPE_UINT32, "10" },
{ "checksum_T", FMD_TYPE_TIME, "10min" },
{ "io_N", FMD_TYPE_UINT32, "10" },
{ "io_T", FMD_TYPE_TIME, "10min" },
{ "remove_timeout", FMD_TYPE_TIME, "15sec" },
{ NULL, 0, NULL }
};
@ -1098,6 +996,8 @@ _zfs_diagnosis_init(fmd_hdl_t *hdl)
(void) fmd_stat_create(hdl, FMD_STAT_NOALLOC, sizeof (zfs_stats) /
sizeof (fmd_stat_t), (fmd_stat_t *)&zfs_stats);
zfs_remove_timeout = fmd_prop_get_int64(hdl, "remove_timeout");
}
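A SERD engine, used throughout the hunks above, fires once N qualifying events land inside a sliding window of T seconds (the checksum_N/checksum_T, io_N/io_T, and slow_io_N/slow_io_T pairs). The sketch below is a generic N-in-T detector to illustrate the idea, not the zed implementation:

    from collections import deque

    class SerdEngine:
        """Fire when at least N events are recorded within any T-second window."""
        def __init__(self, n, t_seconds):
            self.n = n
            self.t = t_seconds
            self.events = deque()

        def record(self, now):
            self.events.append(now)
            # Drop events that have aged out of the window.
            while self.events and now - self.events[0] > self.t:
                self.events.popleft()
            return len(self.events) >= self.n

    # Defaults mirroring the property table above: 10 events in 10 minutes.
    io_engine = SerdEngine(n=10, t_seconds=600)
    fired = False
    for t in range(0, 100, 10):          # ten events, ten seconds apart
        fired = io_engine.record(t)
    print('fired:', fired)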
void


@ -24,7 +24,6 @@
* Copyright 2014 Nexenta Systems, Inc. All rights reserved.
* Copyright (c) 2016, 2017, Intel Corporation.
* Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
* Copyright (c) 2023, Klara Inc.
*/
/*
@ -147,17 +146,6 @@ zfs_unavail_pool(zpool_handle_t *zhp, void *data)
return (0);
}
/*
* Write an array of strings to the zed log
*/
static void lines_to_zed_log_msg(char **lines, int lines_cnt)
{
int i;
for (i = 0; i < lines_cnt; i++) {
zed_log_msg(LOG_INFO, "%s", lines[i]);
}
}
/*
* Two stage replace on Linux
* since we get disk notifications
@ -205,21 +193,14 @@ zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t labeled)
uint64_t is_spare = 0;
const char *physpath = NULL, *new_devid = NULL, *enc_sysfs_path = NULL;
char rawpath[PATH_MAX], fullpath[PATH_MAX];
char pathbuf[PATH_MAX];
char devpath[PATH_MAX];
int ret;
int online_flag = ZFS_ONLINE_CHECKREMOVE | ZFS_ONLINE_UNSPARE;
boolean_t is_sd = B_FALSE;
boolean_t is_mpath_wholedisk = B_FALSE;
uint_t c;
vdev_stat_t *vs;
char **lines = NULL;
int lines_cnt = 0;
/*
* Get the persistent path, typically under the '/dev/disk/by-id' or
* '/dev/disk/by-vdev' directories. Note that this path can change
* when a vdev is replaced with a new disk.
*/
if (nvlist_lookup_string(vdev, ZPOOL_CONFIG_PATH, &path) != 0)
return;
@ -233,12 +214,8 @@ zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t labeled)
}
(void) nvlist_lookup_string(vdev, ZPOOL_CONFIG_PHYS_PATH, &physpath);
update_vdev_config_dev_sysfs_path(vdev, path,
ZPOOL_CONFIG_VDEV_ENC_SYSFS_PATH);
(void) nvlist_lookup_string(vdev, ZPOOL_CONFIG_VDEV_ENC_SYSFS_PATH,
&enc_sysfs_path);
(void) nvlist_lookup_uint64(vdev, ZPOOL_CONFIG_WHOLE_DISK, &wholedisk);
(void) nvlist_lookup_uint64(vdev, ZPOOL_CONFIG_OFFLINE, &offline);
(void) nvlist_lookup_uint64(vdev, ZPOOL_CONFIG_FAULTED, &faulted);
@ -380,17 +357,15 @@ zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t labeled)
(void) snprintf(rawpath, sizeof (rawpath), "%s%s",
is_sd ? DEV_BYVDEV_PATH : DEV_BYPATH_PATH, physpath);
if (realpath(rawpath, pathbuf) == NULL && !is_mpath_wholedisk) {
if (realpath(rawpath, devpath) == NULL && !is_mpath_wholedisk) {
zed_log_msg(LOG_INFO, " realpath: %s failed (%s)",
rawpath, strerror(errno));
int err = zpool_vdev_online(zhp, fullpath,
ZFS_ONLINE_FORCEFAULT, &newstate);
(void) zpool_vdev_online(zhp, fullpath, ZFS_ONLINE_FORCEFAULT,
&newstate);
zed_log_msg(LOG_INFO, " zpool_vdev_online: %s FORCEFAULT (%s) "
"err %d, new state %d",
fullpath, libzfs_error_description(g_zfshdl), err,
err ? (int)newstate : 0);
zed_log_msg(LOG_INFO, " zpool_vdev_online: %s FORCEFAULT (%s)",
fullpath, libzfs_error_description(g_zfshdl));
return;
}
@ -408,22 +383,6 @@ zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t labeled)
if (is_mpath_wholedisk) {
/* Don't label device mapper or multipath disks. */
zed_log_msg(LOG_INFO,
" it's a multipath wholedisk, don't label");
if (zpool_prepare_disk(zhp, vdev, "autoreplace", &lines,
&lines_cnt) != 0) {
zed_log_msg(LOG_INFO,
" zpool_prepare_disk: could not "
"prepare '%s' (%s)", fullpath,
libzfs_error_description(g_zfshdl));
if (lines_cnt > 0) {
zed_log_msg(LOG_INFO,
" zfs_prepare_disk output:");
lines_to_zed_log_msg(lines, lines_cnt);
}
libzfs_free_str_array(lines, lines_cnt);
return;
}
} else if (!labeled) {
/*
* we're auto-replacing a raw disk, so label it first
@ -440,24 +399,16 @@ zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t labeled)
* to trigger a ZFS fault for the device (and any hot spare
* replacement).
*/
leafname = strrchr(pathbuf, '/') + 1;
leafname = strrchr(devpath, '/') + 1;
/*
* If this is a request to label a whole disk, then attempt to
* write out the label.
*/
if (zpool_prepare_and_label_disk(g_zfshdl, zhp, leafname,
vdev, "autoreplace", &lines, &lines_cnt) != 0) {
zed_log_msg(LOG_WARNING,
" zpool_prepare_and_label_disk: could not "
if (zpool_label_disk(g_zfshdl, zhp, leafname) != 0) {
zed_log_msg(LOG_INFO, " zpool_label_disk: could not "
"label '%s' (%s)", leafname,
libzfs_error_description(g_zfshdl));
if (lines_cnt > 0) {
zed_log_msg(LOG_INFO,
" zfs_prepare_disk output:");
lines_to_zed_log_msg(lines, lines_cnt);
}
libzfs_free_str_array(lines, lines_cnt);
(void) zpool_vdev_online(zhp, fullpath,
ZFS_ONLINE_FORCEFAULT, &newstate);
@ -480,7 +431,7 @@ zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t labeled)
sizeof (device->pd_physpath));
list_insert_tail(&g_device_list, device);
zed_log_msg(LOG_NOTICE, " zpool_label_disk: async '%s' (%llu)",
zed_log_msg(LOG_INFO, " zpool_label_disk: async '%s' (%llu)",
leafname, (u_longlong_t)guid);
return; /* resumes at EC_DEV_ADD.ESC_DISK for partition */
@ -503,8 +454,8 @@ zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t labeled)
}
if (!found) {
/* unexpected partition slice encountered */
zed_log_msg(LOG_WARNING, "labeled disk %s was "
"unexpected here", fullpath);
zed_log_msg(LOG_INFO, "labeled disk %s unexpected here",
fullpath);
(void) zpool_vdev_online(zhp, fullpath,
ZFS_ONLINE_FORCEFAULT, &newstate);
return;
@ -513,21 +464,10 @@ zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t labeled)
zed_log_msg(LOG_INFO, " zpool_label_disk: resume '%s' (%llu)",
physpath, (u_longlong_t)guid);
/*
* Paths that begin with '/dev/disk/by-id/' will change and so
* they must be updated before calling zpool_vdev_attach().
*/
if (strncmp(path, DEV_BYID_PATH, strlen(DEV_BYID_PATH)) == 0) {
(void) snprintf(pathbuf, sizeof (pathbuf), "%s%s",
DEV_BYID_PATH, new_devid);
zed_log_msg(LOG_INFO, " zpool_label_disk: path '%s' "
"replaced by '%s'", path, pathbuf);
path = pathbuf;
}
(void) snprintf(devpath, sizeof (devpath), "%s%s",
DEV_BYID_PATH, new_devid);
}
libzfs_free_str_array(lines, lines_cnt);
/*
* Construct the root vdev to pass to zpool_vdev_attach(). While adding
* the entire vdev structure is harmless, we construct a reduced set of
@ -566,11 +506,9 @@ zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t labeled)
* Wait for udev to verify the links exist, then auto-replace
* the leaf disk at same physical location.
*/
if (zpool_label_disk_wait(path, DISK_LABEL_WAIT) != 0) {
zed_log_msg(LOG_WARNING, "zfs_mod: pool '%s', after labeling "
"replacement disk, the expected disk partition link '%s' "
"is missing after waiting %u ms",
zpool_get_name(zhp), path, DISK_LABEL_WAIT);
if (zpool_label_disk_wait(path, 3000) != 0) {
zed_log_msg(LOG_WARNING, "zfs_mod: expected replacement "
"disk %s is missing", path);
nvlist_free(nvroot);
return;
}
@ -585,7 +523,7 @@ zfs_process_add(zpool_handle_t *zhp, nvlist_t *vdev, boolean_t labeled)
B_TRUE, B_FALSE);
}
zed_log_msg(LOG_WARNING, " zpool_vdev_replace: %s with %s (%s)",
zed_log_msg(LOG_INFO, " zpool_vdev_replace: %s with %s (%s)",
fullpath, path, (ret == 0) ? "no errors" :
libzfs_error_description(g_zfshdl));
@ -683,7 +621,7 @@ zfs_iter_vdev(zpool_handle_t *zhp, nvlist_t *nvl, void *data)
dp->dd_prop, path);
dp->dd_found = B_TRUE;
/* pass the new devid for use by auto-replacing code */
/* pass the new devid for use by replacing code */
if (dp->dd_new_devid != NULL) {
(void) nvlist_add_string(nvl, "new_devid",
dp->dd_new_devid);
@ -702,7 +640,7 @@ zfs_enable_ds(void *arg)
{
unavailpool_t *pool = (unavailpool_t *)arg;
(void) zpool_enable_datasets(pool->uap_zhp, NULL, 0, 512);
(void) zpool_enable_datasets(pool->uap_zhp, NULL, 0);
zpool_close(pool->uap_zhp);
free(pool);
}


@ -523,9 +523,6 @@ zfs_retire_recv(fmd_hdl_t *hdl, fmd_event_t *ep, nvlist_t *nvl,
} else if (fmd_nvl_class_match(hdl, fault,
"fault.fs.zfs.vdev.checksum")) {
degrade_device = B_TRUE;
} else if (fmd_nvl_class_match(hdl, fault,
"fault.fs.zfs.vdev.slow_io")) {
degrade_device = B_TRUE;
} else if (fmd_nvl_class_match(hdl, fault,
"fault.fs.zfs.device")) {
fault_device = B_FALSE;


@ -9,7 +9,6 @@ dist_zedexec_SCRIPTS = \
%D%/all-debug.sh \
%D%/all-syslog.sh \
%D%/data-notify.sh \
%D%/deadman-slot_off.sh \
%D%/generic-notify.sh \
%D%/pool_import-led.sh \
%D%/resilver_finish-notify.sh \
@ -30,7 +29,6 @@ SUBSTFILES += $(nodist_zedexec_SCRIPTS)
zedconfdefaults = \
all-syslog.sh \
data-notify.sh \
deadman-slot_off.sh \
history_event-zfs-list-cacher.sh \
pool_import-led.sh \
resilver_finish-notify.sh \


@ -1,71 +0,0 @@
#!/bin/sh
# shellcheck disable=SC3014,SC2154,SC2086,SC2034
#
# Turn off a disk's enclosure slot if a hung I/O triggers the deadman.
#
# It's possible for outstanding I/O to a misbehaving SCSI disk to neither
# promptly complete nor return an error. This can occur due to retry and
# recovery actions taken by the SCSI layer, driver, or disk. When it occurs
# the pool will be unresponsive even though there may be sufficient redundancy
# configured to proceed without this single disk.
#
# When a hung I/O is detected by the kmods it will be posted as a deadman
# event. By default an I/O is considered to be hung after 5 minutes. This
# value can be changed with the zfs_deadman_ziotime_ms module parameter.
# If ZED_POWER_OFF_ENCLOSURE_SLOT_ON_DEADMAN is set the disk's enclosure
# slot will be powered off causing the outstanding I/O to fail. The ZED
# will then handle this like a normal disk failure and FAULT the vdev.
#
# We assume the user will be responsible for turning the slot back on
# after replacing the disk.
#
# Note that this script requires that your enclosure be supported by the
# Linux SCSI Enclosure services (SES) driver. The script will do nothing
# if you have no enclosure, or if your enclosure isn't supported.
#
# Exit codes:
# 0: slot successfully powered off
# 1: enclosure not available
# 2: ZED_POWER_OFF_ENCLOSURE_SLOT_ON_DEADMAN disabled
# 3: System not configured to wait on deadman
# 4: The enclosure sysfs path passed from ZFS does not exist
# 5: Enclosure slot didn't actually turn off after we told it to
[ -f "${ZED_ZEDLET_DIR}/zed.rc" ] && . "${ZED_ZEDLET_DIR}/zed.rc"
. "${ZED_ZEDLET_DIR}/zed-functions.sh"
if [ ! -d /sys/class/enclosure ] ; then
# No JBOD enclosure or NVMe slots
exit 1
fi
if [ "${ZED_POWER_OFF_ENCLOSURE_SLOT_ON_DEADMAN}" != "1" ] ; then
exit 2
fi
if [ "$ZEVENT_POOL_FAILMODE" != "wait" ] ; then
exit 3
fi
if [ ! -f "$ZEVENT_VDEV_ENC_SYSFS_PATH/power_status" ] ; then
exit 4
fi
# Turn off the slot and wait for sysfs to report that the slot is off.
# It can take ~400ms on some enclosures and multiple retries may be needed.
for i in $(seq 1 20) ; do
echo "off" | tee "$ZEVENT_VDEV_ENC_SYSFS_PATH/power_status"
for j in $(seq 1 5) ; do
if [ "$(cat $ZEVENT_VDEV_ENC_SYSFS_PATH/power_status)" == "off" ] ; then
break 2
fi
sleep 0.1
done
done
if [ "$(cat $ZEVENT_VDEV_ENC_SYSFS_PATH/power_status)" != "off" ] ; then
exit 5
fi
zed_log_msg "powered down slot $ZEVENT_VDEV_ENC_SYSFS_PATH for $ZEVENT_VDEV_PATH"


@ -5,7 +5,7 @@
#
# Bad SCSI disks can often "disappear and reappear" causing all sorts of chaos
# as they flip between FAULTED and ONLINE. If
# ZED_POWER_OFF_ENCLOSURE_SLOT_ON_FAULT is set in zed.rc, and the disk gets
# ZED_POWER_OFF_ENCLOUSRE_SLOT_ON_FAULT is set in zed.rc, and the disk gets
# FAULTED, then power down the slot via sysfs:
#
# /sys/class/enclosure/<enclosure>/<slot>/power_status
@ -19,7 +19,7 @@
# Exit codes:
# 0: slot successfully powered off
# 1: enclosure not available
# 2: ZED_POWER_OFF_ENCLOSURE_SLOT_ON_FAULT disabled
# 2: ZED_POWER_OFF_ENCLOUSRE_SLOT_ON_FAULT disabled
# 3: vdev was not FAULTED
# 4: The enclosure sysfs path passed from ZFS does not exist
# 5: Enclosure slot didn't actually turn off after we told it to
@ -32,7 +32,7 @@ if [ ! -d /sys/class/enclosure ] ; then
exit 1
fi
if [ "${ZED_POWER_OFF_ENCLOSURE_SLOT_ON_FAULT}" != "1" ] ; then
if [ "${ZED_POWER_OFF_ENCLOUSRE_SLOT_ON_FAULT}" != "1" ] ; then
exit 2
fi


@ -205,14 +205,6 @@ zed_notify()
[ "${rv}" -eq 0 ] && num_success=$((num_success + 1))
[ "${rv}" -eq 1 ] && num_failure=$((num_failure + 1))
zed_notify_ntfy "${subject}" "${pathname}"; rv=$?
[ "${rv}" -eq 0 ] && num_success=$((num_success + 1))
[ "${rv}" -eq 1 ] && num_failure=$((num_failure + 1))
zed_notify_gotify "${subject}" "${pathname}"; rv=$?
[ "${rv}" -eq 0 ] && num_success=$((num_success + 1))
[ "${rv}" -eq 1 ] && num_failure=$((num_failure + 1))
[ "${num_success}" -gt 0 ] && return 0
[ "${num_failure}" -gt 0 ] && return 1
return 2
@ -535,191 +527,6 @@ zed_notify_pushover()
}
# zed_notify_ntfy (subject, pathname)
#
# Send a notification via Ntfy.sh <https://ntfy.sh/>.
# The ntfy topic (ZED_NTFY_TOPIC) identifies the topic on the ntfy server
# that the notification will be sent to. The ntfy url (ZED_NTFY_URL) defines the
# self-hosted or provided hosted ntfy service location. The ntfy access token
# <https://docs.ntfy.sh/publish/#access-tokens> (ZED_NTFY_ACCESS_TOKEN) represents an
# access token that can be used if a topic is read/write protected. If a
# topic can be written to publicly, a ZED_NTFY_ACCESS_TOKEN is not required.
#
# Requires curl and sed executables to be installed in the standard PATH.
#
# References
# https://docs.ntfy.sh
#
# Arguments
# subject: notification subject
# pathname: pathname containing the notification message (OPTIONAL)
#
# Globals
# ZED_NTFY_TOPIC
# ZED_NTFY_ACCESS_TOKEN (OPTIONAL)
# ZED_NTFY_URL
#
# Return
# 0: notification sent
# 1: notification failed
# 2: not configured
#
zed_notify_ntfy()
{
local subject="$1"
local pathname="${2:-"/dev/null"}"
local msg_body
local msg_out
local msg_err
[ -n "${ZED_NTFY_TOPIC}" ] || return 2
local url="${ZED_NTFY_URL:-"https://ntfy.sh"}/${ZED_NTFY_TOPIC}"
if [ ! -r "${pathname}" ]; then
zed_log_err "ntfy cannot read \"${pathname}\""
return 1
fi
zed_check_cmd "curl" "sed" || return 1
# Read the message body in.
#
msg_body="$(cat "${pathname}")"
if [ -z "${msg_body}" ]
then
msg_body=$subject
subject=""
fi
# Send the POST request and check for errors.
#
if [ -n "${ZED_NTFY_ACCESS_TOKEN}" ]; then
msg_out="$( \
curl \
-u ":${ZED_NTFY_ACCESS_TOKEN}" \
-H "Title: ${subject}" \
-d "${msg_body}" \
-H "Priority: high" \
"${url}" \
2>/dev/null \
)"; rv=$?
else
msg_out="$( \
curl \
-H "Title: ${subject}" \
-d "${msg_body}" \
-H "Priority: high" \
"${url}" \
2>/dev/null \
)"; rv=$?
fi
if [ "${rv}" -ne 0 ]; then
zed_log_err "curl exit=${rv}"
return 1
fi
msg_err="$(echo "${msg_out}" \
| sed -n -e 's/.*"errors" *:.*\[\(.*\)\].*/\1/p')"
if [ -n "${msg_err}" ]; then
zed_log_err "ntfy \"${msg_err}"\"
return 1
fi
return 0
}
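
For illustration, the POST that zed_notify_ntfy() builds amounts to roughly the following when no access token is configured; the topic name and message text are made-up examples, and the flags mirror the curl invocation above.

    curl -H "Title: ZFS event" -H "Priority: high" \
        -d "eid=123 class=statechange pool=tank" \
        "https://ntfy.sh/zfs-events"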
# zed_notify_gotify (subject, pathname)
#
# Send a notification via Gotify <https://gotify.net/>.
# The Gotify URL (ZED_GOTIFY_URL) defines a self-hosted Gotify location.
# The Gotify application token (ZED_GOTIFY_APPTOKEN) identifies the
# Gotify application that messages are associated with.
# The optional Gotify priority value (ZED_GOTIFY_PRIORITY) overrides the
# default or configured priority at the Gotify server for the application.
#
# Requires curl and sed executables to be installed in the standard PATH.
#
# References
# https://gotify.net/docs/index
#
# Arguments
# subject: notification subject
# pathname: pathname containing the notification message (OPTIONAL)
#
# Globals
# ZED_GOTIFY_URL
# ZED_GOTIFY_APPTOKEN
# ZED_GOTIFY_PRIORITY
#
# Return
# 0: notification sent
# 1: notification failed
# 2: not configured
#
zed_notify_gotify()
{
local subject="$1"
local pathname="${2:-"/dev/null"}"
local msg_body
local msg_out
local msg_err
[ -n "${ZED_GOTIFY_URL}" ] && [ -n "${ZED_GOTIFY_APPTOKEN}" ] || return 2
local url="${ZED_GOTIFY_URL}/message?token=${ZED_GOTIFY_APPTOKEN}"
if [ ! -r "${pathname}" ]; then
zed_log_err "gotify cannot read \"${pathname}\""
return 1
fi
zed_check_cmd "curl" "sed" || return 1
# Read the message body in.
#
msg_body="$(cat "${pathname}")"
if [ -z "${msg_body}" ]
then
msg_body=$subject
subject=""
fi
# Send the POST request and check for errors.
#
if [ -n "${ZED_GOTIFY_PRIORITY}" ]; then
msg_out="$( \
curl \
--form-string "title=${subject}" \
--form-string "message=${msg_body}" \
--form-string "priority=${ZED_GOTIFY_PRIORITY}" \
"${url}" \
2>/dev/null \
)"; rv=$?
else
msg_out="$( \
curl \
--form-string "title=${subject}" \
--form-string "message=${msg_body}" \
"${url}" \
2>/dev/null \
)"; rv=$?
fi
if [ "${rv}" -ne 0 ]; then
zed_log_err "curl exit=${rv}"
return 1
fi
msg_err="$(echo "${msg_out}" \
| sed -n -e 's/.*"errors" *:.*\[\(.*\)\].*/\1/p')"
if [ -n "${msg_err}" ]; then
zed_log_err "gotify \"${msg_err}"\"
return 1
fi
return 0
}
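
Similarly, the request zed_notify_gotify() issues is roughly the following; the server URL, token, and message text are placeholders, and the form fields mirror the curl invocation above.

    curl --form-string "title=ZFS event" \
        --form-string "message=eid=123 class=statechange pool=tank" \
        "https://gotify.example.com/message?token=A1b2C3d4"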
# zed_rate_limit (tag, [interval])
#
# Check whether an event of a given type [tag] has already occurred within the


@ -146,54 +146,4 @@ ZED_SYSLOG_SUBCLASS_EXCLUDE="history_event"
# Power off the drive's slot in the enclosure if it becomes FAULTED. This can
# help silence misbehaving drives. This assumes your drive enclosure fully
# supports slot power control via sysfs.
#ZED_POWER_OFF_ENCLOSURE_SLOT_ON_FAULT=1
##
# Power off the drive's slot in the enclosure if there is a hung I/O which
# exceeds the deadman timeout. This can help prevent a single misbehaving
# drive from rendering a redundant pool unavailable. This assumes your drive
# enclosure fully supports slot power control via sysfs.
#ZED_POWER_OFF_ENCLOSURE_SLOT_ON_DEADMAN=1
##
# Ntfy topic
# This defines which topic will receive the ntfy notification.
# <https://docs.ntfy.sh/publish/>
# Disabled by default; uncomment to enable.
#ZED_NTFY_TOPIC=""
##
# Ntfy access token (optional for public topics)
# This defines an access token used to authenticate
# when sending to access-protected topics
# <https://docs.ntfy.sh/publish/#access-tokens>
# Disabled by default; uncomment to enable.
#ZED_NTFY_ACCESS_TOKEN=""
##
# Ntfy Service URL
# This defines which service the ntfy call will be directed toward
# <https://docs.ntfy.sh/install/>
# https://ntfy.sh by default; uncomment to enable an alternative service url.
#ZED_NTFY_URL="https://ntfy.sh"
##
# Gotify server URL
# This defines a URL that the Gotify call will be directed toward.
# <https://gotify.net/docs/index>
# Disabled by default; uncomment to enable.
#ZED_GOTIFY_URL=""
##
# Gotify application token
# This defines the Gotify application token with which messages are associated.
# This token is generated when an application is created on the Gotify server.
# Disabled by default; uncomment to enable.
#ZED_GOTIFY_APPTOKEN=""
##
# Gotify priority (optional)
# If defined, this overrides the default priority of the
# Gotify application associated with ZED_GOTIFY_APPTOKEN.
# The value is an integer, 0 or higher.
#ZED_GOTIFY_PRIORITY=""
#ZED_POWER_OFF_ENCLOUSRE_SLOT_ON_FAULT=1


@ -35,7 +35,6 @@
#include "zed_strings.h"
#include "agents/zfs_agents.h"
#include <libzutil.h>
#define MAXBUF 4096
@ -923,25 +922,6 @@ _zed_event_add_time_strings(uint64_t eid, zed_strings_t *zsp, int64_t etime[])
}
}
static void
_zed_event_update_enc_sysfs_path(nvlist_t *nvl)
{
const char *vdev_path;
if (nvlist_lookup_string(nvl, FM_EREPORT_PAYLOAD_ZFS_VDEV_PATH,
&vdev_path) != 0) {
return; /* some other kind of event, ignore it */
}
if (vdev_path == NULL) {
return;
}
update_vdev_config_dev_sysfs_path(nvl, vdev_path,
FM_EREPORT_PAYLOAD_ZFS_VDEV_ENC_SYSFS_PATH);
}
/*
* Service the next zevent, blocking until one is available.
*/
@ -989,17 +969,6 @@ zed_event_service(struct zed_conf *zcp)
zed_log_msg(LOG_WARNING,
"Failed to lookup zevent class (eid=%llu)", eid);
} else {
/*
* Special case: If we can dynamically detect an enclosure sysfs
* path, then use that value rather than the one stored in the
* vd->vdev_enc_sysfs_path. There have been rare cases where
* vd->vdev_enc_sysfs_path becomes outdated. However, there
* will be other times when we can not dynamically detect the
* sysfs path (like if a disk disappears) and have to rely on
* the old value for things like turning on the fault LED.
*/
_zed_event_update_enc_sysfs_path(nvl);
/* let internal modules see this event first */
zfs_agent_post_event(class, NULL, nvl);


@ -134,10 +134,6 @@ static int zfs_do_unzone(int argc, char **argv);
static int zfs_do_help(int argc, char **argv);
enum zfs_options {
ZFS_OPTION_JSON_NUMS_AS_INT = 1024
};
/*
* Enable a reasonable set of defaults for libumem debugging on DEBUG builds.
*/
@ -276,8 +272,6 @@ static zfs_command_t command_table[] = {
#define NCOMMAND (sizeof (command_table) / sizeof (command_table[0]))
#define MAX_CMD_LEN 256
zfs_command_t *current_command;
static const char *
@ -298,7 +292,7 @@ get_usage(zfs_help_t idx)
"<filesystem|volume>@<snap>[%<snap>][,...]\n"
"\tdestroy <filesystem|volume>#<bookmark>\n"));
case HELP_GET:
return (gettext("\tget [-rHp] [-j [--json-int]] [-d max] "
return (gettext("\tget [-rHp] [-d max] "
"[-o \"all\" | field[,...]]\n"
"\t [-t type[,...]] [-s source[,...]]\n"
"\t <\"all\" | property[,...]> "
@ -310,14 +304,12 @@ get_usage(zfs_help_t idx)
return (gettext("\tupgrade [-v]\n"
"\tupgrade [-r] [-V version] <-a | filesystem ...>\n"));
case HELP_LIST:
return (gettext("\tlist [-Hp] [-j [--json-int]] [-r|-d max] "
"[-o property[,...]] [-s property]...\n\t "
"[-S property]... [-t type[,...]] "
return (gettext("\tlist [-Hp] [-r|-d max] [-o property[,...]] "
"[-s property]...\n\t [-S property]... [-t type[,...]] "
"[filesystem|volume|snapshot] ...\n"));
case HELP_MOUNT:
return (gettext("\tmount [-j]\n"
"\tmount [-flvO] [-o opts] <-a|-R filesystem|"
"filesystem>\n"));
return (gettext("\tmount\n"
"\tmount [-flvO] [-o opts] <-a | filesystem>\n"));
case HELP_PROMOTE:
return (gettext("\tpromote <clone-filesystem>\n"));
case HELP_RECEIVE:
@ -427,7 +419,7 @@ get_usage(zfs_help_t idx)
"\t <filesystem|volume>\n"
"\tchange-key -i [-l] <filesystem|volume>\n"));
case HELP_VERSION:
return (gettext("\tversion [-j]\n"));
return (gettext("\tversion\n"));
case HELP_REDACT:
return (gettext("\tredact <snapshot> <bookmark> "
"<redaction_snapshot> ...\n"));
@ -1892,89 +1884,7 @@ is_recvd_column(zprop_get_cbdata_t *cbp)
}
/*
* Generates an nvlist with the output version for every command based on params.
* The purpose of this is to version the JSON output, since the schema
* format might be updated for each command in the future.
*
* Schema:
*
* "output_version": {
* "command": string,
* "vers_major": integer,
* "vers_minor": integer,
* }
*/
static nvlist_t *
zfs_json_schema(int maj_v, int min_v)
{
nvlist_t *sch = NULL;
nvlist_t *ov = NULL;
char cmd[MAX_CMD_LEN];
snprintf(cmd, MAX_CMD_LEN, "zfs %s", current_command->name);
sch = fnvlist_alloc();
ov = fnvlist_alloc();
fnvlist_add_string(ov, "command", cmd);
fnvlist_add_uint32(ov, "vers_major", maj_v);
fnvlist_add_uint32(ov, "vers_minor", min_v);
fnvlist_add_nvlist(sch, "output_version", ov);
fnvlist_free(ov);
return (sch);
}
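
As a usage sketch of the JSON output path being diffed here: the dataset and property names below are examples, and the top-level keys follow zfs_json_schema() above and fill_dataset_info() below.

    # Emit properties as JSON, with numeric values as integers.
    zfs get -j --json-int used,available tank/home
    # Abridged shape of the output:
    #   { "output_version": { "command": "zfs get", "vers_major": 0, "vers_minor": 1 },
    #     "datasets": { "tank/home": { "name": "tank/home", "type": "FILESYSTEM",
    #                     "pool": "tank", "createtxg": ..., "properties": { ... } } } }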
static void
fill_dataset_info(nvlist_t *list, zfs_handle_t *zhp, boolean_t as_int)
{
char createtxg[ZFS_MAXPROPLEN];
zfs_type_t type = zfs_get_type(zhp);
nvlist_add_string(list, "name", zfs_get_name(zhp));
switch (type) {
case ZFS_TYPE_FILESYSTEM:
fnvlist_add_string(list, "type", "FILESYSTEM");
break;
case ZFS_TYPE_VOLUME:
fnvlist_add_string(list, "type", "VOLUME");
break;
case ZFS_TYPE_SNAPSHOT:
fnvlist_add_string(list, "type", "SNAPSHOT");
break;
case ZFS_TYPE_POOL:
fnvlist_add_string(list, "type", "POOL");
break;
case ZFS_TYPE_BOOKMARK:
fnvlist_add_string(list, "type", "BOOKMARK");
break;
default:
fnvlist_add_string(list, "type", "UNKNOWN");
break;
}
if (type != ZFS_TYPE_POOL)
fnvlist_add_string(list, "pool", zfs_get_pool_name(zhp));
if (as_int) {
fnvlist_add_uint64(list, "createtxg", zfs_prop_get_int(zhp,
ZFS_PROP_CREATETXG));
} else {
if (zfs_prop_get(zhp, ZFS_PROP_CREATETXG, createtxg,
sizeof (createtxg), NULL, NULL, 0, B_TRUE) == 0)
fnvlist_add_string(list, "createtxg", createtxg);
}
if (type == ZFS_TYPE_SNAPSHOT) {
char *ds, *snap;
ds = snap = strdup(zfs_get_name(zhp));
ds = strsep(&snap, "@");
fnvlist_add_string(list, "dataset", ds);
fnvlist_add_string(list, "snapshot_name", snap);
free(ds);
}
}
/*
* zfs get [-rHp] [-j [--json-int]] [-o all | field[,field]...]
* [-s source[,source]...]
* zfs get [-rHp] [-o all | field[,field]...] [-s source[,source]...]
* < all | property[,property]... > < fs | snap | vol > ...
*
* -r recurse over any child datasets
@ -1987,8 +1897,6 @@ fill_dataset_info(nvlist_t *list, zfs_handle_t *zhp, boolean_t as_int)
* "local,default,inherited,received,temporary,none". Default is
* all six.
* -p Display values in parsable (literal) format.
* -j Display output in JSON format.
* --json-int Display numbers as integers instead of strings.
*
* Prints properties for the given datasets. The user can control which
* columns to display as well as which property types to allow.
@ -2008,21 +1916,9 @@ get_callback(zfs_handle_t *zhp, void *data)
nvlist_t *user_props = zfs_get_user_props(zhp);
zprop_list_t *pl = cbp->cb_proplist;
nvlist_t *propval;
nvlist_t *item, *d, *props;
item = d = props = NULL;
const char *strval;
const char *sourceval;
boolean_t received = is_recvd_column(cbp);
int err = 0;
if (cbp->cb_json) {
d = fnvlist_lookup_nvlist(cbp->cb_jsobj, "datasets");
if (d == NULL) {
fprintf(stderr, "datasets obj not found.\n");
exit(1);
}
props = fnvlist_alloc();
}
for (; pl != NULL; pl = pl->pl_next) {
char *recvdval = NULL;
@ -2057,9 +1953,9 @@ get_callback(zfs_handle_t *zhp, void *data)
cbp->cb_literal) == 0))
recvdval = rbuf;
err = zprop_collect_property(zfs_get_name(zhp), cbp,
zprop_print_one_property(zfs_get_name(zhp), cbp,
zfs_prop_to_name(pl->pl_prop),
buf, sourcetype, source, recvdval, props);
buf, sourcetype, source, recvdval);
} else if (zfs_prop_userquota(pl->pl_user_prop)) {
sourcetype = ZPROP_SRC_LOCAL;
@ -2069,9 +1965,8 @@ get_callback(zfs_handle_t *zhp, void *data)
(void) strlcpy(buf, "-", sizeof (buf));
}
err = zprop_collect_property(zfs_get_name(zhp), cbp,
pl->pl_user_prop, buf, sourcetype, source, NULL,
props);
zprop_print_one_property(zfs_get_name(zhp), cbp,
pl->pl_user_prop, buf, sourcetype, source, NULL);
} else if (zfs_prop_written(pl->pl_user_prop)) {
sourcetype = ZPROP_SRC_LOCAL;
@ -2081,9 +1976,8 @@ get_callback(zfs_handle_t *zhp, void *data)
(void) strlcpy(buf, "-", sizeof (buf));
}
err = zprop_collect_property(zfs_get_name(zhp), cbp,
pl->pl_user_prop, buf, sourcetype, source, NULL,
props);
zprop_print_one_property(zfs_get_name(zhp), cbp,
pl->pl_user_prop, buf, sourcetype, source, NULL);
} else {
if (nvlist_lookup_nvlist(user_props,
pl->pl_user_prop, &propval) != 0) {
@ -2115,24 +2009,9 @@ get_callback(zfs_handle_t *zhp, void *data)
cbp->cb_literal) == 0))
recvdval = rbuf;
err = zprop_collect_property(zfs_get_name(zhp), cbp,
zprop_print_one_property(zfs_get_name(zhp), cbp,
pl->pl_user_prop, strval, sourcetype,
source, recvdval, props);
}
if (err != 0)
return (err);
}
if (cbp->cb_json) {
if (!nvlist_empty(props)) {
item = fnvlist_alloc();
fill_dataset_info(item, zhp, cbp->cb_json_as_int);
fnvlist_add_nvlist(item, "properties", props);
fnvlist_add_nvlist(d, zfs_get_name(zhp), item);
fnvlist_free(props);
fnvlist_free(item);
} else {
fnvlist_free(props);
source, recvdval);
}
}
@ -2149,7 +2028,6 @@ zfs_do_get(int argc, char **argv)
int ret = 0;
int limit = 0;
zprop_list_t fake_name = { 0 };
nvlist_t *data;
/*
* Set up default columns and sources.
@ -2161,14 +2039,8 @@ zfs_do_get(int argc, char **argv)
cb.cb_columns[3] = GET_COL_SOURCE;
cb.cb_type = ZFS_TYPE_DATASET;
struct option long_options[] = {
{"json-int", no_argument, NULL, ZFS_OPTION_JSON_NUMS_AS_INT},
{0, 0, 0, 0}
};
/* check options */
while ((c = getopt_long(argc, argv, ":d:o:s:jrt:Hp", long_options,
NULL)) != -1) {
while ((c = getopt(argc, argv, ":d:o:s:rt:Hp")) != -1) {
switch (c) {
case 'p':
cb.cb_literal = B_TRUE;
@ -2182,17 +2054,6 @@ zfs_do_get(int argc, char **argv)
case 'H':
cb.cb_scripted = B_TRUE;
break;
case 'j':
cb.cb_json = B_TRUE;
cb.cb_jsobj = zfs_json_schema(0, 1);
data = fnvlist_alloc();
fnvlist_add_nvlist(cb.cb_jsobj, "datasets", data);
fnvlist_free(data);
break;
case ZFS_OPTION_JSON_NUMS_AS_INT:
cb.cb_json_as_int = B_TRUE;
cb.cb_literal = B_TRUE;
break;
case ':':
(void) fprintf(stderr, gettext("missing argument for "
"'%c' option\n"), optopt);
@ -2284,25 +2145,15 @@ found2:;
for (char *tok; (tok = strsep(&optarg, ",")); ) {
static const char *const type_opts[] = {
"filesystem",
"fs",
"volume",
"vol",
"snapshot",
"snap",
"filesystem", "volume",
"snapshot", "snap",
"bookmark",
"all"
};
"all" };
static const int type_types[] = {
ZFS_TYPE_FILESYSTEM,
ZFS_TYPE_FILESYSTEM,
ZFS_TYPE_VOLUME,
ZFS_TYPE_VOLUME,
ZFS_TYPE_SNAPSHOT,
ZFS_TYPE_SNAPSHOT,
ZFS_TYPE_FILESYSTEM, ZFS_TYPE_VOLUME,
ZFS_TYPE_SNAPSHOT, ZFS_TYPE_SNAPSHOT,
ZFS_TYPE_BOOKMARK,
ZFS_TYPE_DATASET | ZFS_TYPE_BOOKMARK
};
ZFS_TYPE_DATASET | ZFS_TYPE_BOOKMARK };
for (i = 0; i < ARRAY_SIZE(type_opts); ++i)
if (strcmp(tok, type_opts[i]) == 0) {
@ -2316,6 +2167,7 @@ found2:;
found3:;
}
break;
case '?':
(void) fprintf(stderr, gettext("invalid option '%c'\n"),
optopt);
@ -2332,12 +2184,6 @@ found3:;
usage(B_FALSE);
}
if (!cb.cb_json && cb.cb_json_as_int) {
(void) fprintf(stderr, gettext("'--json-int' only works with"
" '-j' option\n"));
usage(B_FALSE);
}
fields = argv[0];
/*
@ -2378,11 +2224,6 @@ found3:;
ret = zfs_for_each(argc, argv, flags, types, NULL,
&cb.cb_proplist, limit, get_callback, &cb);
if (ret == 0 && cb.cb_json)
zcmd_print_json(cb.cb_jsobj);
else if (ret != 0 && cb.cb_json)
nvlist_free(cb.cb_jsobj);
if (cb.cb_proplist == &fake_name)
zprop_free_list(fake_name.pl_next);
else
@ -3590,9 +3431,6 @@ typedef struct list_cbdata {
boolean_t cb_literal;
boolean_t cb_scripted;
zprop_list_t *cb_proplist;
boolean_t cb_json;
nvlist_t *cb_jsobj;
boolean_t cb_json_as_int;
} list_cbdata_t;
/*
@ -3663,11 +3501,10 @@ zfs_list_avail_color(zfs_handle_t *zhp)
/*
* Given a dataset and a list of fields, print out all the properties according
* to the described layout, or return an nvlist containing all the fields, later
* to be printed out as JSON object.
* to the described layout.
*/
static void
collect_dataset(zfs_handle_t *zhp, list_cbdata_t *cb)
print_dataset(zfs_handle_t *zhp, list_cbdata_t *cb)
{
zprop_list_t *pl = cb->cb_proplist;
boolean_t first = B_TRUE;
@ -3676,23 +3513,9 @@ collect_dataset(zfs_handle_t *zhp, list_cbdata_t *cb)
nvlist_t *propval;
const char *propstr;
boolean_t right_justify;
nvlist_t *item, *d, *props;
item = d = props = NULL;
zprop_source_t sourcetype = ZPROP_SRC_NONE;
char source[ZFS_MAX_DATASET_NAME_LEN];
if (cb->cb_json) {
d = fnvlist_lookup_nvlist(cb->cb_jsobj, "datasets");
if (d == NULL) {
fprintf(stderr, "datasets obj not found.\n");
exit(1);
}
item = fnvlist_alloc();
props = fnvlist_alloc();
fill_dataset_info(item, zhp, cb->cb_json_as_int);
}
for (; pl != NULL; pl = pl->pl_next) {
if (!cb->cb_json && !first) {
if (!first) {
if (cb->cb_scripted)
(void) putchar('\t');
else
@ -3708,112 +3531,69 @@ collect_dataset(zfs_handle_t *zhp, list_cbdata_t *cb)
right_justify = zfs_prop_align_right(pl->pl_prop);
} else if (pl->pl_prop != ZPROP_USERPROP) {
if (zfs_prop_get(zhp, pl->pl_prop, property,
sizeof (property), &sourcetype, source,
sizeof (source), cb->cb_literal) != 0)
sizeof (property), NULL, NULL, 0,
cb->cb_literal) != 0)
propstr = "-";
else
propstr = property;
right_justify = zfs_prop_align_right(pl->pl_prop);
} else if (zfs_prop_userquota(pl->pl_user_prop)) {
sourcetype = ZPROP_SRC_LOCAL;
if (zfs_prop_get_userquota(zhp, pl->pl_user_prop,
property, sizeof (property), cb->cb_literal) != 0) {
sourcetype = ZPROP_SRC_NONE;
property, sizeof (property), cb->cb_literal) != 0)
propstr = "-";
} else {
else
propstr = property;
}
right_justify = B_TRUE;
} else if (zfs_prop_written(pl->pl_user_prop)) {
sourcetype = ZPROP_SRC_LOCAL;
if (zfs_prop_get_written(zhp, pl->pl_user_prop,
property, sizeof (property), cb->cb_literal) != 0) {
sourcetype = ZPROP_SRC_NONE;
property, sizeof (property), cb->cb_literal) != 0)
propstr = "-";
} else {
else
propstr = property;
}
right_justify = B_TRUE;
} else {
if (nvlist_lookup_nvlist(userprops,
pl->pl_user_prop, &propval) != 0) {
pl->pl_user_prop, &propval) != 0)
propstr = "-";
} else {
else
propstr = fnvlist_lookup_string(propval,
ZPROP_VALUE);
strlcpy(source,
fnvlist_lookup_string(propval,
ZPROP_SOURCE), ZFS_MAX_DATASET_NAME_LEN);
if (strcmp(source,
zfs_get_name(zhp)) == 0) {
sourcetype = ZPROP_SRC_LOCAL;
} else if (strcmp(source,
ZPROP_SOURCE_VAL_RECVD) == 0) {
sourcetype = ZPROP_SRC_RECEIVED;
} else {
sourcetype = ZPROP_SRC_INHERITED;
}
}
right_justify = B_FALSE;
}
if (cb->cb_json) {
if (pl->pl_prop == ZFS_PROP_NAME)
continue;
if (zprop_nvlist_one_property(
zfs_prop_to_name(pl->pl_prop), propstr,
sourcetype, source, NULL, props,
cb->cb_json_as_int) != 0)
nomem();
} else {
/*
* zfs_list_avail_color() needs
* ZFS_PROP_AVAILABLE + USED, so we need another
* for() search for the USED part when no colors
* wanted, we can skip the whole thing
*/
if (use_color() && pl->pl_prop == ZFS_PROP_AVAILABLE) {
zprop_list_t *pl2 = cb->cb_proplist;
for (; pl2 != NULL; pl2 = pl2->pl_next) {
if (pl2->pl_prop == ZFS_PROP_USED) {
color_start(
zfs_list_avail_color(zhp));
/*
* found it, no need for more
* loops
*/
break;
}
/*
* zfs_list_avail_color() needs ZFS_PROP_AVAILABLE + USED
* - so we need another for() search for the USED part
* - when no colors wanted, we can skip the whole thing
*/
if (use_color() && pl->pl_prop == ZFS_PROP_AVAILABLE) {
zprop_list_t *pl2 = cb->cb_proplist;
for (; pl2 != NULL; pl2 = pl2->pl_next) {
if (pl2->pl_prop == ZFS_PROP_USED) {
color_start(zfs_list_avail_color(zhp));
/* found it, no need for more loops */
break;
}
}
/*
* If this is being called in scripted mode, or if
* this is the last column and it is left-justified,
* don't include a width format specifier.
*/
if (cb->cb_scripted || (pl->pl_next == NULL &&
!right_justify))
(void) fputs(propstr, stdout);
else if (right_justify) {
(void) printf("%*s", (int)pl->pl_width,
propstr);
} else {
(void) printf("%-*s", (int)pl->pl_width,
propstr);
}
if (pl->pl_prop == ZFS_PROP_AVAILABLE)
color_end();
}
/*
* If this is being called in scripted mode, or if this is the
* last column and it is left-justified, don't include a width
* format specifier.
*/
if (cb->cb_scripted || (pl->pl_next == NULL && !right_justify))
(void) fputs(propstr, stdout);
else if (right_justify)
(void) printf("%*s", (int)pl->pl_width, propstr);
else
(void) printf("%-*s", (int)pl->pl_width, propstr);
if (pl->pl_prop == ZFS_PROP_AVAILABLE)
color_end();
}
if (cb->cb_json) {
fnvlist_add_nvlist(item, "properties", props);
fnvlist_add_nvlist(d, zfs_get_name(zhp), item);
fnvlist_free(props);
fnvlist_free(item);
} else
(void) putchar('\n');
(void) putchar('\n');
}
/*
@ -3825,12 +3605,12 @@ list_callback(zfs_handle_t *zhp, void *data)
list_cbdata_t *cbp = data;
if (cbp->cb_first) {
if (!cbp->cb_scripted && !cbp->cb_json)
if (!cbp->cb_scripted)
print_header(cbp);
cbp->cb_first = B_FALSE;
}
collect_dataset(zhp, cbp);
print_dataset(zhp, cbp);
return (0);
}
@ -3849,16 +3629,9 @@ zfs_do_list(int argc, char **argv)
int ret = 0;
zfs_sort_column_t *sortcol = NULL;
int flags = ZFS_ITER_PROP_LISTSNAPS | ZFS_ITER_ARGS_CAN_BE_PATHS;
nvlist_t *data = NULL;
struct option long_options[] = {
{"json-int", no_argument, NULL, ZFS_OPTION_JSON_NUMS_AS_INT},
{0, 0, 0, 0}
};
/* check options */
while ((c = getopt_long(argc, argv, "jHS:d:o:prs:t:", long_options,
NULL)) != -1) {
while ((c = getopt(argc, argv, "HS:d:o:prs:t:")) != -1) {
switch (c) {
case 'o':
fields = optarg;
@ -3873,17 +3646,6 @@ zfs_do_list(int argc, char **argv)
case 'r':
flags |= ZFS_ITER_RECURSE;
break;
case 'j':
cb.cb_json = B_TRUE;
cb.cb_jsobj = zfs_json_schema(0, 1);
data = fnvlist_alloc();
fnvlist_add_nvlist(cb.cb_jsobj, "datasets", data);
fnvlist_free(data);
break;
case ZFS_OPTION_JSON_NUMS_AS_INT:
cb.cb_json_as_int = B_TRUE;
cb.cb_literal = B_TRUE;
break;
case 'H':
cb.cb_scripted = B_TRUE;
break;
@ -3910,25 +3672,15 @@ zfs_do_list(int argc, char **argv)
for (char *tok; (tok = strsep(&optarg, ",")); ) {
static const char *const type_subopts[] = {
"filesystem",
"fs",
"volume",
"vol",
"snapshot",
"snap",
"filesystem", "volume",
"snapshot", "snap",
"bookmark",
"all"
};
"all" };
static const int type_types[] = {
ZFS_TYPE_FILESYSTEM,
ZFS_TYPE_FILESYSTEM,
ZFS_TYPE_VOLUME,
ZFS_TYPE_VOLUME,
ZFS_TYPE_SNAPSHOT,
ZFS_TYPE_SNAPSHOT,
ZFS_TYPE_FILESYSTEM, ZFS_TYPE_VOLUME,
ZFS_TYPE_SNAPSHOT, ZFS_TYPE_SNAPSHOT,
ZFS_TYPE_BOOKMARK,
ZFS_TYPE_DATASET | ZFS_TYPE_BOOKMARK
};
ZFS_TYPE_DATASET | ZFS_TYPE_BOOKMARK };
for (c = 0; c < ARRAY_SIZE(type_subopts); ++c)
if (strcmp(tok, type_subopts[c]) == 0) {
@ -3957,12 +3709,6 @@ found3:;
argc -= optind;
argv += optind;
if (!cb.cb_json && cb.cb_json_as_int) {
(void) fprintf(stderr, gettext("'--json-int' only works with"
" '-j' option\n"));
usage(B_FALSE);
}
/*
* If "-o space" and no types were specified, don't display snapshots.
*/
@ -4002,11 +3748,6 @@ found3:;
ret = zfs_for_each(argc, argv, flags, types, sortcol, &cb.cb_proplist,
limit, list_callback, &cb);
if (ret == 0 && cb.cb_json)
zcmd_print_json(cb.cb_jsobj);
else if (ret != 0 && cb.cb_json)
nvlist_free(cb.cb_jsobj);
zprop_free_list(cb.cb_proplist);
zfs_free_sort_columns(sortcol);
@ -4242,10 +3983,6 @@ zfs_do_redact(int argc, char **argv)
(void) fprintf(stderr, gettext("potentially invalid redaction "
"snapshot; full dataset names required\n"));
break;
case ESRCH:
(void) fprintf(stderr, gettext("attempted to resume redaction "
" with a mismatched redaction list\n"));
break;
default:
(void) fprintf(stderr, gettext("internal error: %s\n"),
strerror(errno));
@ -7003,8 +6740,6 @@ zfs_do_holds(int argc, char **argv)
#define MOUNT_TIME 1 /* seconds */
typedef struct get_all_state {
char **ga_datasets;
int ga_count;
boolean_t ga_verbose;
get_all_cb_t *ga_cbp;
} get_all_state_t;
@ -7051,35 +6786,19 @@ get_one_dataset(zfs_handle_t *zhp, void *data)
return (0);
}
static int
get_recursive_datasets(zfs_handle_t *zhp, void *data)
{
get_all_state_t *state = data;
int len = strlen(zfs_get_name(zhp));
for (int i = 0; i < state->ga_count; ++i) {
if (strcmp(state->ga_datasets[i], zfs_get_name(zhp)) == 0)
return (get_one_dataset(zhp, data));
else if ((strncmp(state->ga_datasets[i], zfs_get_name(zhp),
len) == 0) && state->ga_datasets[i][len] == '/') {
(void) zfs_iter_filesystems_v2(zhp, 0,
get_recursive_datasets, data);
}
}
zfs_close(zhp);
return (0);
}
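
A short usage sketch of the recursive-mount path added here; the dataset name is an example. get_recursive_datasets() accepts the named datasets plus any child whose name extends them with a '/'.

    # Mount tank/home and every filesystem beneath it.
    zfs mount -R tank/home
    # Mount everything, as before.
    zfs mount -a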
static void
get_all_datasets(get_all_state_t *state)
get_all_datasets(get_all_cb_t *cbp, boolean_t verbose)
{
if (state->ga_verbose)
set_progress_header(gettext("Reading ZFS config"));
if (state->ga_datasets == NULL)
(void) zfs_iter_root(g_zfs, get_one_dataset, state);
else
(void) zfs_iter_root(g_zfs, get_recursive_datasets, state);
get_all_state_t state = {
.ga_verbose = verbose,
.ga_cbp = cbp
};
if (state->ga_verbose)
if (verbose)
set_progress_header(gettext("Reading ZFS config"));
(void) zfs_iter_root(g_zfs, get_one_dataset, &state);
if (verbose)
finish_progress(gettext("done."));
}
@ -7425,38 +7144,24 @@ static int
share_mount(int op, int argc, char **argv)
{
int do_all = 0;
int recursive = 0;
boolean_t verbose = B_FALSE;
boolean_t json = B_FALSE;
int c, ret = 0;
char *options = NULL;
int flags = 0;
nvlist_t *jsobj, *data, *item;
const uint_t mount_nthr = 512;
uint_t nthr;
jsobj = data = item = NULL;
/* check options */
while ((c = getopt(argc, argv, op == OP_MOUNT ? ":ajRlvo:Of" : "al"))
while ((c = getopt(argc, argv, op == OP_MOUNT ? ":alvo:Of" : "al"))
!= -1) {
switch (c) {
case 'a':
do_all = 1;
break;
case 'R':
recursive = 1;
break;
case 'v':
verbose = B_TRUE;
break;
case 'l':
flags |= MS_CRYPT;
break;
case 'j':
json = B_TRUE;
jsobj = zfs_json_schema(0, 1);
data = fnvlist_alloc();
break;
case 'o':
if (*optarg == '\0') {
(void) fprintf(stderr, gettext("empty mount "
@ -7491,13 +7196,8 @@ share_mount(int op, int argc, char **argv)
argc -= optind;
argv += optind;
if (json && argc != 0) {
(void) fprintf(stderr, gettext("too many arguments\n"));
usage(B_FALSE);
}
/* check number of arguments */
if (do_all || recursive) {
if (do_all) {
enum sa_protocol protocol = SA_NO_PROTOCOL;
if (op == OP_SHARE && argc > 0) {
@ -7506,38 +7206,14 @@ share_mount(int op, int argc, char **argv)
argv++;
}
if (argc != 0 && do_all) {
if (argc != 0) {
(void) fprintf(stderr, gettext("too many arguments\n"));
usage(B_FALSE);
}
if (argc == 0 && recursive) {
(void) fprintf(stderr,
gettext("no dataset provided\n"));
usage(B_FALSE);
}
start_progress_timer();
get_all_cb_t cb = { 0 };
get_all_state_t state = { 0 };
if (argc == 0) {
state.ga_datasets = NULL;
state.ga_count = -1;
} else {
zfs_handle_t *zhp;
for (int i = 0; i < argc; i++) {
zhp = zfs_open(g_zfs, argv[i],
ZFS_TYPE_FILESYSTEM);
if (zhp == NULL)
usage(B_FALSE);
zfs_close(zhp);
}
state.ga_datasets = argv;
state.ga_count = argc;
}
state.ga_verbose = verbose;
state.ga_cbp = &cb;
get_all_datasets(&state);
get_all_datasets(&cb, verbose);
if (cb.cb_used == 0) {
free(options);
@ -7554,8 +7230,7 @@ share_mount(int op, int argc, char **argv)
pthread_mutex_init(&share_mount_state.sm_lock, NULL);
/* For a 'zfs share -a' operation start with a clean slate. */
if (op == OP_SHARE)
zfs_truncate_shares(NULL);
zfs_truncate_shares(NULL);
/*
* libshare isn't mt-safe, so only do the operation in parallel
@ -7563,9 +7238,9 @@ share_mount(int op, int argc, char **argv)
* be serialized so that we can prompt the user for their keys
* in a consistent manner.
*/
nthr = op == OP_MOUNT && !(flags & MS_CRYPT) ? mount_nthr : 1;
zfs_foreach_mountpoint(g_zfs, cb.cb_handles, cb.cb_used,
share_mount_one_cb, &share_mount_state, nthr);
share_mount_one_cb, &share_mount_state,
op == OP_MOUNT && !(flags & MS_CRYPT));
zfs_commit_shares(NULL);
ret = share_mount_state.sm_status;
@ -7599,30 +7274,12 @@ share_mount(int op, int argc, char **argv)
if (strcmp(entry.mnt_fstype, MNTTYPE_ZFS) != 0 ||
strchr(entry.mnt_special, '@') != NULL)
continue;
if (json) {
item = fnvlist_alloc();
fnvlist_add_string(item, "filesystem",
entry.mnt_special);
fnvlist_add_string(item, "mountpoint",
entry.mnt_mountp);
fnvlist_add_nvlist(data, entry.mnt_special,
item);
fnvlist_free(item);
} else {
(void) printf("%-30s %s\n", entry.mnt_special,
entry.mnt_mountp);
}
(void) printf("%-30s %s\n", entry.mnt_special,
entry.mnt_mountp);
}
(void) fclose(mnttab);
if (json) {
fnvlist_add_nvlist(jsobj, "datasets", data);
if (nvlist_empty(data))
fnvlist_free(jsobj);
else
zcmd_print_json(jsobj);
fnvlist_free(data);
}
} else {
zfs_handle_t *zhp;
@ -9080,39 +8737,8 @@ found:;
static int
zfs_do_version(int argc, char **argv)
{
int c;
nvlist_t *jsobj = NULL, *zfs_ver = NULL;
boolean_t json = B_FALSE;
while ((c = getopt(argc, argv, "j")) != -1) {
switch (c) {
case 'j':
json = B_TRUE;
jsobj = zfs_json_schema(0, 1);
break;
case '?':
(void) fprintf(stderr, gettext("invalid option '%c'\n"),
optopt);
usage(B_FALSE);
}
}
argc -= optind;
if (argc != 0) {
(void) fprintf(stderr, "too many arguments\n");
usage(B_FALSE);
}
if (json) {
zfs_ver = zfs_version_nvlist();
if (zfs_ver) {
fnvlist_add_nvlist(jsobj, "zfs_version", zfs_ver);
zcmd_print_json(jsobj);
fnvlist_free(zfs_ver);
return (0);
} else
return (-1);
} else
return (zfs_version_print() != 0);
(void) argc, (void) argv;
return (zfs_version_print() != 0);
}
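
For completeness, a usage sketch of the JSON variant added above:

    # Print version information as JSON: an "output_version" header from
    # zfs_json_schema() plus the "zfs_version" nvlist.
    zfs version -j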
/* Display documentation */


@ -612,8 +612,8 @@ zhack_repair_undetach(uberblock_t *ub, nvlist_t *cfg, const int l)
* Uberblock root block pointer has valid birth TXG.
* Copying it to the label NVlist
*/
if (BP_GET_LOGICAL_BIRTH(&ub->ub_rootbp) != 0) {
const uint64_t txg = BP_GET_LOGICAL_BIRTH(&ub->ub_rootbp);
if (ub->ub_rootbp.blk_birth != 0) {
const uint64_t txg = ub->ub_rootbp.blk_birth;
ub->ub_txg = txg;
if (nvlist_remove_all(cfg, ZPOOL_CONFIG_CREATE_TXG) != 0) {


@ -43,9 +43,6 @@ cols = {
"obj": [12, -1, "objset"],
"cc": [5, 1000, "zil_commit_count"],
"cwc": [5, 1000, "zil_commit_writer_count"],
"cec": [5, 1000, "zil_commit_error_count"],
"csc": [5, 1000, "zil_commit_stall_count"],
"cSc": [5, 1000, "zil_commit_suspend_count"],
"ic": [5, 1000, "zil_itx_count"],
"iic": [5, 1000, "zil_itx_indirect_count"],
"iib": [5, 1024, "zil_itx_indirect_bytes"],


@ -22,7 +22,6 @@
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2015 by Delphix. All rights reserved.
* Copyright (c) 2017, Intel Corporation.
* Copyright (c) 2023-2024, Klara Inc.
*/
/*
@ -209,38 +208,6 @@ type_to_name(uint64_t type)
}
}
struct errstr {
int err;
const char *str;
};
static const struct errstr errstrtable[] = {
{ EIO, "io" },
{ ECKSUM, "checksum" },
{ EINVAL, "decompress" },
{ EACCES, "decrypt" },
{ ENXIO, "nxio" },
{ ECHILD, "dtl" },
{ EILSEQ, "corrupt" },
{ ENOSYS, "noop" },
{ 0, NULL },
};
static int
str_to_err(const char *str)
{
for (int i = 0; errstrtable[i].str != NULL; i++)
if (strcasecmp(errstrtable[i].str, str) == 0)
return (errstrtable[i].err);
return (-1);
}
static const char *
err_to_str(int err)
{
for (int i = 0; errstrtable[i].str != NULL; i++)
if (errstrtable[i].err == err)
return (errstrtable[i].str);
return ("[unknown]");
}
/*
* Print usage message.
@ -266,12 +233,12 @@ usage(void)
"\t\tspa_vdev_exit() will trigger a panic.\n"
"\n"
"\tzinject -d device [-e errno] [-L <nvlist|uber|pad1|pad2>] [-F]\n"
"\t\t[-T <read|write|free|claim|flush|all>] [-f frequency] pool\n\n"
"\t\t[-T <read|write|free|claim|all>] [-f frequency] pool\n\n"
"\t\tInject a fault into a particular device or the device's\n"
"\t\tlabel. Label injection can either be 'nvlist', 'uber',\n "
"\t\t'pad1', or 'pad2'.\n"
"\t\t'errno' can be 'nxio' (the default), 'io', 'dtl',\n"
"\t\t'corrupt' (bit flip), or 'noop' (successfully do nothing).\n"
"\t\t'errno' can be 'nxio' (the default), 'io', 'dtl', or\n"
"\t\t'corrupt' (bit flip).\n"
"\t\t'frequency' is a value between 0.0001 and 100.0 that limits\n"
"\t\tdevice error injection to a percentage of the IOs.\n"
"\n"
@ -310,11 +277,6 @@ usage(void)
"\t\tcreate 3 lanes on the device; one lane with a latency\n"
"\t\tof 10 ms and two lanes with a 25 ms latency.\n"
"\n"
"\tzinject -P import|export -s <seconds> pool\n"
"\t\tAdd an artificial delay to a future pool import or export,\n"
"\t\tsuch that the operation takes a minimum of supplied seconds\n"
"\t\tto complete.\n"
"\n"
"\tzinject -I [-s <seconds> | -g <txgs>] pool\n"
"\t\tCause the pool to stop writing blocks yet not\n"
"\t\treport errors for a duration. Simulates buggy hardware\n"
@ -397,10 +359,8 @@ print_data_handler(int id, const char *pool, zinject_record_t *record,
{
int *count = data;
if (record->zi_guid != 0 || record->zi_func[0] != '\0' ||
record->zi_duration != 0) {
if (record->zi_guid != 0 || record->zi_func[0] != '\0')
return (0);
}
if (*count == 0) {
(void) printf("%3s %-15s %-6s %-6s %-8s %3s %-4s "
@ -432,10 +392,6 @@ static int
print_device_handler(int id, const char *pool, zinject_record_t *record,
void *data)
{
static const char *iotypestr[] = {
"null", "read", "write", "free", "claim", "flush", "trim", "all",
};
int *count = data;
if (record->zi_guid == 0 || record->zi_func[0] != '\0')
@ -445,21 +401,14 @@ print_device_handler(int id, const char *pool, zinject_record_t *record,
return (0);
if (*count == 0) {
(void) printf("%3s %-15s %-16s %-5s %-10s %-9s\n",
"ID", "POOL", "GUID", "TYPE", "ERROR", "FREQ");
(void) printf(
"--- --------------- ---------------- "
"----- ---------- ---------\n");
(void) printf("%3s %-15s %s\n", "ID", "POOL", "GUID");
(void) printf("--- --------------- ----------------\n");
}
*count += 1;
double freq = record->zi_freq == 0 ? 100.0f :
(((double)record->zi_freq) / ZI_PERCENTAGE_MAX) * 100.0f;
(void) printf("%3d %-15s %llx %-5s %-10s %8.4f%%\n", id, pool,
(u_longlong_t)record->zi_guid, iotypestr[record->zi_iotype],
err_to_str(record->zi_error), freq);
(void) printf("%3d %-15s %llx\n", id, pool,
(u_longlong_t)record->zi_guid);
return (0);
}
@ -514,33 +463,6 @@ print_panic_handler(int id, const char *pool, zinject_record_t *record,
return (0);
}
static int
print_pool_delay_handler(int id, const char *pool, zinject_record_t *record,
void *data)
{
int *count = data;
if (record->zi_cmd != ZINJECT_DELAY_IMPORT &&
record->zi_cmd != ZINJECT_DELAY_EXPORT) {
return (0);
}
if (*count == 0) {
(void) printf("%3s %-19s %-11s %s\n",
"ID", "POOL", "DELAY (sec)", "COMMAND");
(void) printf("--- ------------------- -----------"
" -------\n");
}
*count += 1;
(void) printf("%3d %-19s %-11llu %s\n",
id, pool, (u_longlong_t)record->zi_duration,
record->zi_cmd == ZINJECT_DELAY_IMPORT ? "import": "export");
return (0);
}
/*
* Print all registered error handlers. Returns the number of handlers
* registered.
@ -571,13 +493,6 @@ print_all_handlers(void)
count = 0;
}
(void) iter_handlers(print_pool_delay_handler, &count);
if (count > 0) {
total += count;
(void) printf("\n");
count = 0;
}
(void) iter_handlers(print_panic_handler, &count);
return (count + total);
@ -650,27 +565,9 @@ register_handler(const char *pool, int flags, zinject_record_t *record,
zc.zc_guid = flags;
if (zfs_ioctl(g_zfs, ZFS_IOC_INJECT_FAULT, &zc) != 0) {
const char *errmsg = strerror(errno);
switch (errno) {
case EDOM:
errmsg = "block level exceeds max level of object";
break;
case EEXIST:
if (record->zi_cmd == ZINJECT_DELAY_IMPORT)
errmsg = "pool already imported";
if (record->zi_cmd == ZINJECT_DELAY_EXPORT)
errmsg = "a handler already exists";
break;
case ENOENT:
/* import delay injector running on older zfs module */
if (record->zi_cmd == ZINJECT_DELAY_IMPORT)
errmsg = "import delay injector not supported";
break;
default:
break;
}
(void) fprintf(stderr, "failed to add handler: %s\n", errmsg);
(void) fprintf(stderr, "failed to add handler: %s\n",
errno == EDOM ? "block level exceeds max level of object" :
strerror(errno));
return (1);
}
@ -695,9 +592,6 @@ register_handler(const char *pool, int flags, zinject_record_t *record,
} else if (record->zi_duration < 0) {
(void) printf(" txgs: %lld \n",
(u_longlong_t)-record->zi_duration);
} else if (record->zi_timer > 0) {
(void) printf(" timer: %lld ms\n",
(u_longlong_t)NSEC2MSEC(record->zi_timer));
} else {
(void) printf("objset: %llu\n",
(u_longlong_t)record->zi_objset);
@ -896,7 +790,7 @@ main(int argc, char **argv)
}
while ((c = getopt(argc, argv,
":aA:b:C:d:D:f:Fg:qhIc:t:T:l:mr:s:e:uL:p:P:")) != -1) {
":aA:b:C:d:D:f:Fg:qhIc:t:T:l:mr:s:e:uL:p:")) != -1) {
switch (c) {
case 'a':
flags |= ZINJECT_FLUSH_ARC;
@ -948,12 +842,24 @@ main(int argc, char **argv)
}
break;
case 'e':
error = str_to_err(optarg);
if (error < 0) {
if (strcasecmp(optarg, "io") == 0) {
error = EIO;
} else if (strcasecmp(optarg, "checksum") == 0) {
error = ECKSUM;
} else if (strcasecmp(optarg, "decompress") == 0) {
error = EINVAL;
} else if (strcasecmp(optarg, "decrypt") == 0) {
error = EACCES;
} else if (strcasecmp(optarg, "nxio") == 0) {
error = ENXIO;
} else if (strcasecmp(optarg, "dtl") == 0) {
error = ECHILD;
} else if (strcasecmp(optarg, "corrupt") == 0) {
error = EILSEQ;
} else {
(void) fprintf(stderr, "invalid error type "
"'%s': must be one of: io decompress "
"decrypt nxio dtl corrupt noop\n",
optarg);
"'%s': must be 'io', 'checksum' or "
"'nxio'\n", optarg);
usage();
libzfs_fini(g_zfs);
return (1);
@ -1014,19 +920,6 @@ main(int argc, char **argv)
sizeof (record.zi_func));
record.zi_cmd = ZINJECT_PANIC;
break;
case 'P':
if (strcasecmp(optarg, "import") == 0) {
record.zi_cmd = ZINJECT_DELAY_IMPORT;
} else if (strcasecmp(optarg, "export") == 0) {
record.zi_cmd = ZINJECT_DELAY_EXPORT;
} else {
(void) fprintf(stderr, "invalid command '%s': "
"must be 'import' or 'export'\n", optarg);
usage();
libzfs_fini(g_zfs);
return (1);
}
break;
case 'q':
quiet = 1;
break;
@ -1054,14 +947,12 @@ main(int argc, char **argv)
io_type = ZIO_TYPE_FREE;
} else if (strcasecmp(optarg, "claim") == 0) {
io_type = ZIO_TYPE_CLAIM;
} else if (strcasecmp(optarg, "flush") == 0) {
io_type = ZIO_TYPE_FLUSH;
} else if (strcasecmp(optarg, "all") == 0) {
io_type = ZIO_TYPES;
} else {
(void) fprintf(stderr, "invalid I/O type "
"'%s': must be 'read', 'write', 'free', "
"'claim', 'flush' or 'all'\n", optarg);
"'claim' or 'all'\n", optarg);
usage();
libzfs_fini(g_zfs);
return (1);
@ -1108,7 +999,7 @@ main(int argc, char **argv)
argc -= optind;
argv += optind;
if (record.zi_duration != 0 && record.zi_cmd == 0)
if (record.zi_duration != 0)
record.zi_cmd = ZINJECT_IGNORED_WRITES;
if (cancel != NULL) {
@ -1192,22 +1083,6 @@ main(int argc, char **argv)
libzfs_fini(g_zfs);
return (1);
}
if (record.zi_nlanes) {
switch (io_type) {
case ZIO_TYPE_READ:
case ZIO_TYPE_WRITE:
case ZIO_TYPES:
break;
default:
(void) fprintf(stderr, "I/O type for a delay "
"must be 'read' or 'write'\n");
usage();
libzfs_fini(g_zfs);
return (1);
}
}
if (!error)
error = ENXIO;
@ -1254,8 +1129,8 @@ main(int argc, char **argv)
if (raw != NULL || range != NULL || type != TYPE_INVAL ||
level != 0 || device != NULL || record.zi_freq > 0 ||
dvas != 0) {
(void) fprintf(stderr, "%s incompatible with other "
"options\n", "import|export delay (-P)");
(void) fprintf(stderr, "panic (-p) incompatible with "
"other options\n");
usage();
libzfs_fini(g_zfs);
return (2);
@ -1273,28 +1148,6 @@ main(int argc, char **argv)
if (argv[1] != NULL)
record.zi_type = atoi(argv[1]);
dataset[0] = '\0';
} else if (record.zi_cmd == ZINJECT_DELAY_IMPORT ||
record.zi_cmd == ZINJECT_DELAY_EXPORT) {
if (raw != NULL || range != NULL || type != TYPE_INVAL ||
level != 0 || device != NULL || record.zi_freq > 0 ||
dvas != 0) {
(void) fprintf(stderr, "%s incompatible with other "
"options\n", "import|export delay (-P)");
usage();
libzfs_fini(g_zfs);
return (2);
}
if (argc != 1 || record.zi_duration <= 0) {
(void) fprintf(stderr, "import|export delay (-P) "
"injection requires a duration (-s) and a single "
"pool name\n");
usage();
libzfs_fini(g_zfs);
return (2);
}
(void) strlcpy(pool, argv[0], sizeof (pool));
} else if (record.zi_cmd == ZINJECT_IGNORED_WRITES) {
if (raw != NULL || range != NULL || type != TYPE_INVAL ||
level != 0 || record.zi_freq > 0 || dvas != 0) {


@ -1,9 +1,6 @@
# Features which are supported by GRUB2
allocation_classes
async_destroy
block_cloning
bookmarks
device_rebuild
embedded_data
empty_bpobj
enabled_txg
@ -12,12 +9,6 @@ filesystem_limits
hole_birth
large_blocks
livelist
log_spacemap
lz4_compress
project_quota
resilver_defer
spacemap_histogram
spacemap_v2
userobj_accounting
zilsaxattr
zpool_checkpoint
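
This list is a feature-set file for the pool 'compatibility' property; a minimal usage sketch, with pool and device names as examples:

    # Restrict a boot pool to features GRUB2 can read.
    zpool create -o compatibility=grub2 bootpool mirror /dev/sda2 /dev/sdb2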


@ -6,6 +6,7 @@ edonr
embedded_data
empty_bpobj
enabled_txg
encryption
extensible_dataset
filesystem_limits
hole_birth


@ -124,24 +124,3 @@ check_file(const char *file, boolean_t force, boolean_t isspare)
{
return (check_file_generic(file, force, isspare));
}
int
zpool_power_current_state(zpool_handle_t *zhp, char *vdev)
{
(void) zhp;
(void) vdev;
/* Enclosure slot power not supported on FreeBSD yet */
return (-1);
}
int
zpool_power(zpool_handle_t *zhp, char *vdev, boolean_t turn_on)
{
(void) zhp;
(void) vdev;
(void) turn_on;
/* Enclosure slot power not supported on FreeBSD yet */
return (ENOTSUP);
}


@ -416,258 +416,3 @@ check_file(const char *file, boolean_t force, boolean_t isspare)
{
return (check_file_generic(file, force, isspare));
}
/*
* Read from a sysfs file and return an allocated string. Removes
* the newline from the end of the string if there is one.
*
* Returns a string on success (which must be freed), or NULL on error.
*/
static char *zpool_sysfs_gets(char *path)
{
int fd;
struct stat statbuf;
char *buf = NULL;
ssize_t count = 0;
fd = open(path, O_RDONLY);
if (fd < 0)
return (NULL);
if (fstat(fd, &statbuf) != 0) {
close(fd);
return (NULL);
}
buf = calloc(statbuf.st_size + 1, sizeof (*buf));
if (buf == NULL) {
close(fd);
return (NULL);
}
/*
* Note, we can read fewer bytes than st_size, and that's ok. Sysfs
* files will report their size is 4k even if they only return a small
* string.
*/
count = read(fd, buf, statbuf.st_size);
if (count < 0) {
/* Error doing read() or we overran the buffer */
close(fd);
free(buf);
return (NULL);
}
/* Remove trailing newline */
if (count > 0 && buf[count - 1] == '\n')
buf[count - 1] = 0;
close(fd);
return (buf);
}
/*
* Write a string to a sysfs file.
*
* Returns 0 on success, non-zero otherwise.
*/
static int zpool_sysfs_puts(char *path, char *str)
{
FILE *file;
file = fopen(path, "w");
if (!file) {
return (-1);
}
if (fputs(str, file) < 0) {
fclose(file);
return (-2);
}
fclose(file);
return (0);
}
/* Given a vdev nvlist_t, rescan its enclosure sysfs path */
static void
rescan_vdev_config_dev_sysfs_path(nvlist_t *vdev_nv)
{
update_vdev_config_dev_sysfs_path(vdev_nv,
fnvlist_lookup_string(vdev_nv, ZPOOL_CONFIG_PATH),
ZPOOL_CONFIG_VDEV_ENC_SYSFS_PATH);
}
/*
* Given a power string: "on", "off", "1", or "0", return 0 if it's an
* off value, 1 if it's an on value, and -1 if the value is unrecognized.
*/
static int zpool_power_parse_value(char *str)
{
if ((strcmp(str, "off") == 0) || (strcmp(str, "0") == 0))
return (0);
if ((strcmp(str, "on") == 0) || (strcmp(str, "1") == 0))
return (1);
return (-1);
}
/*
* Given a vdev string return an allocated string containing the sysfs path to
* its power control file. Also check whether the power control file really
* exists and has the correct permissions.
*
* Example returned strings:
*
* /sys/class/enclosure/0:0:122:0/10/power_status
* /sys/bus/pci/slots/10/power
*
* Returns allocated string on success (which must be freed), NULL on failure.
*/
static char *
zpool_power_sysfs_path(zpool_handle_t *zhp, char *vdev)
{
const char *enc_sysfs_dir = NULL;
char *path = NULL;
nvlist_t *vdev_nv = zpool_find_vdev(zhp, vdev, NULL, NULL, NULL);
if (vdev_nv == NULL) {
return (NULL);
}
/* Make sure we're getting the updated enclosure sysfs path */
rescan_vdev_config_dev_sysfs_path(vdev_nv);
if (nvlist_lookup_string(vdev_nv, ZPOOL_CONFIG_VDEV_ENC_SYSFS_PATH,
&enc_sysfs_dir) != 0) {
return (NULL);
}
if (asprintf(&path, "%s/power_status", enc_sysfs_dir) == -1)
return (NULL);
if (access(path, W_OK) != 0) {
free(path);
path = NULL;
/* No HDD 'power_control' file, maybe it's NVMe? */
if (asprintf(&path, "%s/power", enc_sysfs_dir) == -1) {
return (NULL);
}
if (access(path, R_OK | W_OK) != 0) {
/* Not NVMe either */
free(path);
return (NULL);
}
}
return (path);
}
/*
* Given a path to a sysfs power control file, return B_TRUE if you should use
* "on/off" words to control it, or B_FALSE otherwise ("0/1" to control).
*/
static boolean_t
zpool_power_use_word(char *sysfs_path)
{
if (strcmp(&sysfs_path[strlen(sysfs_path) - strlen("power_status")],
"power_status") == 0) {
return (B_TRUE);
}
return (B_FALSE);
}
/*
* Check the sysfs power control value for a vdev.
*
* Returns:
* 0 - Power is off
* 1 - Power is on
* -1 - Error or unsupported
*/
int
zpool_power_current_state(zpool_handle_t *zhp, char *vdev)
{
char *val;
int rc;
char *path = zpool_power_sysfs_path(zhp, vdev);
if (path == NULL)
return (-1);
val = zpool_sysfs_gets(path);
if (val == NULL) {
free(path);
return (-1);
}
rc = zpool_power_parse_value(val);
free(val);
free(path);
return (rc);
}
/*
* Turn on or off the slot to a device
*
* Device path is the full path to the device (like /dev/sda or /dev/sda1).
*
* Return code:
* 0: Success
* ENOTSUP: Power control not supported for OS
* EBADSLT: Couldn't read current power state
* ENOENT: No sysfs path to power control
* EIO: Couldn't write sysfs power value
* EBADE: Sysfs power value didn't change
*/
int
zpool_power(zpool_handle_t *zhp, char *vdev, boolean_t turn_on)
{
char *sysfs_path;
const char *val;
int rc;
int timeout_ms;
rc = zpool_power_current_state(zhp, vdev);
if (rc == -1) {
return (EBADSLT);
}
/* Already correct value? */
if (rc == (int)turn_on)
return (0);
sysfs_path = zpool_power_sysfs_path(zhp, vdev);
if (sysfs_path == NULL)
return (ENOENT);
if (zpool_power_use_word(sysfs_path)) {
val = turn_on ? "on" : "off";
} else {
val = turn_on ? "1" : "0";
}
rc = zpool_sysfs_puts(sysfs_path, (char *)val);
free(sysfs_path);
if (rc != 0) {
return (EIO);
}
/*
* Wait up to 30 seconds for sysfs power value to change after
* writing it.
*/
timeout_ms = zpool_getenv_int("ZPOOL_POWER_ON_SLOT_TIMEOUT_MS", 30000);
for (int i = 0; i < MAX(1, timeout_ms / 200); i++) {
rc = zpool_power_current_state(zhp, vdev);
if (rc == (int)turn_on)
return (0); /* success */
fsleep(0.200); /* 200ms */
}
/* sysfs value never changed */
return (EBADE);
}
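
To make the sysfs mechanics concrete, this is roughly what zpool_power() ends up doing for an SES enclosure slot; the slot path is the illustrative one from the comment above.

    # Enclosure (SES) slots take on/off words in .../power_status.
    echo off > /sys/class/enclosure/0:0:122:0/10/power_status
    cat /sys/class/enclosure/0:0:122:0/10/power_status
    # PCIe/NVMe slots take 0/1 in .../power instead. zpool_power() then polls the
    # value for up to ZPOOL_POWER_ON_SLOT_TIMEOUT_MS (30000 ms by default).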


@ -33,18 +33,10 @@ for i in $scripts ; do
val=""
case $i in
enc)
if echo "$VDEV_ENC_SYSFS_PATH" | grep -q '/sys/bus/pci/slots' ; then
val="$VDEV_ENC_SYSFS_PATH"
else
val="$(ls """$VDEV_ENC_SYSFS_PATH/../../""" 2>/dev/null)"
fi
val=$(ls "$VDEV_ENC_SYSFS_PATH/../../" 2>/dev/null)
;;
slot)
if echo "$VDEV_ENC_SYSFS_PATH" | grep -q '/sys/bus/pci/slots' ; then
val="$(basename """$VDEV_ENC_SYSFS_PATH""")"
else
val="$(cat """$VDEV_ENC_SYSFS_PATH/slot""" 2>/dev/null)"
fi
val=$(cat "$VDEV_ENC_SYSFS_PATH/slot" 2>/dev/null)
;;
encdev)
val=$(ls "$VDEV_ENC_SYSFS_PATH/../device/scsi_generic" 2>/dev/null)

View File

@ -443,22 +443,37 @@ vdev_run_cmd(vdev_cmd_data_t *data, char *cmd)
{
int rc;
char *argv[2] = {cmd};
char **env;
char *env[5] = {(char *)"PATH=/bin:/sbin:/usr/bin:/usr/sbin"};
char **lines = NULL;
int lines_cnt = 0;
int i;
env = zpool_vdev_script_alloc_env(data->pool, data->path, data->upath,
data->vdev_enc_sysfs_path, NULL, NULL);
if (env == NULL)
/* Setup our custom environment variables */
rc = asprintf(&env[1], "VDEV_PATH=%s",
data->path ? data->path : "");
if (rc == -1) {
env[1] = NULL;
goto out;
}
rc = asprintf(&env[2], "VDEV_UPATH=%s",
data->upath ? data->upath : "");
if (rc == -1) {
env[2] = NULL;
goto out;
}
rc = asprintf(&env[3], "VDEV_ENC_SYSFS_PATH=%s",
data->vdev_enc_sysfs_path ?
data->vdev_enc_sysfs_path : "");
if (rc == -1) {
env[3] = NULL;
goto out;
}
/* Run the command */
rc = libzfs_run_process_get_stdout_nopath(cmd, argv, env, &lines,
&lines_cnt);
zpool_vdev_script_free_env(env);
if (rc != 0)
goto out;
@ -470,6 +485,10 @@ vdev_run_cmd(vdev_cmd_data_t *data, char *cmd)
out:
if (lines != NULL)
libzfs_free_str_array(lines, lines_cnt);
/* Start with i = 1 since env[0] was statically allocated */
for (i = 1; i < ARRAY_SIZE(env); i++)
free(env[i]);
}
/*
@ -554,10 +573,6 @@ for_each_vdev_run_cb(void *zhp_data, nvlist_t *nv, void *cb_vcdl)
if (nvlist_lookup_string(nv, ZPOOL_CONFIG_PATH, &path) != 0)
return (1);
/* Make sure we're getting the updated enclosure sysfs path */
update_vdev_config_dev_sysfs_path(nv, path,
ZPOOL_CONFIG_VDEV_ENC_SYSFS_PATH);
nvlist_lookup_string(nv, ZPOOL_CONFIG_VDEV_ENC_SYSFS_PATH,
&vdev_enc_sysfs_path);

File diff suppressed because it is too large

View File

@ -126,10 +126,6 @@ vdev_cmd_data_list_t *all_pools_for_each_vdev_run(int argc, char **argv,
void free_vdev_cmd_data_list(vdev_cmd_data_list_t *vcdl);
void free_vdev_cmd_data(vdev_cmd_data_t *data);
int vdev_run_cmd_simple(char *path, char *cmd);
int check_device(const char *path, boolean_t force,
boolean_t isspare, boolean_t iswholedisk);
boolean_t check_sector_size_database(char *path, int *sector_size);
@ -138,9 +134,6 @@ int check_file(const char *file, boolean_t force, boolean_t isspare);
void after_zpool_upgrade(zpool_handle_t *zhp);
int check_file_generic(const char *file, boolean_t force, boolean_t isspare);
int zpool_power(zpool_handle_t *zhp, char *vdev, boolean_t turn_on);
int zpool_power_current_state(zpool_handle_t *zhp, char *vdev);
#ifdef __cplusplus
}
#endif

View File

@ -372,10 +372,6 @@ make_leaf_vdev(nvlist_t *props, const char *arg, boolean_t is_primary)
verify(nvlist_add_string(vdev, ZPOOL_CONFIG_PATH, path) == 0);
verify(nvlist_add_string(vdev, ZPOOL_CONFIG_TYPE, type) == 0);
/* Lookup and add the enclosure sysfs path (if exists) */
update_vdev_config_dev_sysfs_path(vdev, path,
ZPOOL_CONFIG_VDEV_ENC_SYSFS_PATH);
if (strcmp(type, VDEV_TYPE_DISK) == 0)
verify(nvlist_add_uint64(vdev, ZPOOL_CONFIG_WHOLE_DISK,
(uint64_t)wholedisk) == 0);
@ -940,15 +936,6 @@ zero_label(const char *path)
return (0);
}
static void
lines_to_stderr(char *lines[], int lines_cnt)
{
int i;
for (i = 0; i < lines_cnt; i++) {
fprintf(stderr, "%s\n", lines[i]);
}
}
/*
* Go through and find any whole disks in the vdev specification, labelling them
* as appropriate. When constructing the vdev spec, we were unable to open this
@ -960,7 +947,7 @@ lines_to_stderr(char *lines[], int lines_cnt)
* need to get the devid after we label the disk.
*/
static int
make_disks(zpool_handle_t *zhp, nvlist_t *nv, boolean_t replacing)
make_disks(zpool_handle_t *zhp, nvlist_t *nv)
{
nvlist_t **child;
uint_t c, children;
@ -1045,8 +1032,6 @@ make_disks(zpool_handle_t *zhp, nvlist_t *nv, boolean_t replacing)
*/
if (!is_exclusive && !is_spare(NULL, udevpath)) {
char *devnode = strrchr(devpath, '/') + 1;
char **lines = NULL;
int lines_cnt = 0;
ret = strncmp(udevpath, UDISK_ROOT, strlen(UDISK_ROOT));
if (ret == 0) {
@ -1058,27 +1043,9 @@ make_disks(zpool_handle_t *zhp, nvlist_t *nv, boolean_t replacing)
/*
* When labeling a pool the raw device node name
* is provided as it appears under /dev/.
*
* Note that 'zhp' will be NULL when we're creating a
* pool.
*/
if (zpool_prepare_and_label_disk(g_zfs, zhp, devnode,
nv, zhp == NULL ? "create" :
replacing ? "replace" : "add", &lines,
&lines_cnt) != 0) {
(void) fprintf(stderr,
gettext(
"Error preparing/labeling disk.\n"));
if (lines_cnt > 0) {
(void) fprintf(stderr,
gettext("zfs_prepare_disk output:\n"));
lines_to_stderr(lines, lines_cnt);
}
libzfs_free_str_array(lines, lines_cnt);
if (zpool_label_disk(g_zfs, zhp, devnode) == -1)
return (-1);
}
libzfs_free_str_array(lines, lines_cnt);
/*
* Wait for udev to signal the device is available
@ -1115,19 +1082,19 @@ make_disks(zpool_handle_t *zhp, nvlist_t *nv, boolean_t replacing)
}
for (c = 0; c < children; c++)
if ((ret = make_disks(zhp, child[c], replacing)) != 0)
if ((ret = make_disks(zhp, child[c])) != 0)
return (ret);
if (nvlist_lookup_nvlist_array(nv, ZPOOL_CONFIG_SPARES,
&child, &children) == 0)
for (c = 0; c < children; c++)
if ((ret = make_disks(zhp, child[c], replacing)) != 0)
if ((ret = make_disks(zhp, child[c])) != 0)
return (ret);
if (nvlist_lookup_nvlist_array(nv, ZPOOL_CONFIG_L2CACHE,
&child, &children) == 0)
for (c = 0; c < children; c++)
if ((ret = make_disks(zhp, child[c], replacing)) != 0)
if ((ret = make_disks(zhp, child[c])) != 0)
return (ret);
return (0);
@ -1785,7 +1752,7 @@ split_mirror_vdev(zpool_handle_t *zhp, char *newname, nvlist_t *props,
return (NULL);
}
if (!flags.dryrun && make_disks(zhp, newroot, B_FALSE) != 0) {
if (!flags.dryrun && make_disks(zhp, newroot) != 0) {
nvlist_free(newroot);
return (NULL);
}
@ -1906,7 +1873,7 @@ make_root_vdev(zpool_handle_t *zhp, nvlist_t *props, int force, int check_rep,
/*
* Run through the vdev specification and label any whole disks found.
*/
if (!dryrun && make_disks(zhp, newroot, replacing) != 0) {
if (!dryrun && make_disks(zhp, newroot) != 0) {
nvlist_free(newroot);
return (NULL);
}

View File

@ -1,5 +1,3 @@
zstream_CPPFLAGS = $(AM_CPPFLAGS) $(LIBZPOOL_CPPFLAGS)
sbin_PROGRAMS += zstream
CPPCHECKTARGETS += zstream

View File

@ -22,8 +22,6 @@
/*
* Copyright 2022 Axcient. All rights reserved.
* Use is subject to license terms.
*
* Copyright (c) 2024, Klara, Inc.
*/
#include <err.h>
@ -259,73 +257,83 @@ zstream_do_decompress(int argc, char *argv[])
ENTRY e = {.key = key};
p = hsearch(e, FIND);
if (p == NULL) {
if (p != NULL) {
zio_decompress_func_t *xfunc = NULL;
switch ((enum zio_compress)(intptr_t)p->data) {
case ZIO_COMPRESS_OFF:
xfunc = NULL;
break;
case ZIO_COMPRESS_LZJB:
xfunc = lzjb_decompress;
break;
case ZIO_COMPRESS_GZIP_1:
xfunc = gzip_decompress;
break;
case ZIO_COMPRESS_ZLE:
xfunc = zle_decompress;
break;
case ZIO_COMPRESS_LZ4:
xfunc = lz4_decompress_zfs;
break;
case ZIO_COMPRESS_ZSTD:
xfunc = zfs_zstd_decompress;
break;
default:
assert(B_FALSE);
}
/*
* Read and decompress the block
*/
char *lzbuf = safe_calloc(payload_size);
(void) sfread(lzbuf, payload_size, stdin);
if (xfunc == NULL) {
memcpy(buf, lzbuf, payload_size);
drrw->drr_compressiontype =
ZIO_COMPRESS_OFF;
if (verbose)
fprintf(stderr, "Resetting "
"compression type to off "
"for ino %llu offset "
"%llu\n",
(u_longlong_t)
drrw->drr_object,
(u_longlong_t)
drrw->drr_offset);
} else if (0 != xfunc(lzbuf, buf,
payload_size, payload_size, 0)) {
/*
* The block must not be compressed,
* at least not with this compression
* type, possibly because it gets
* written multiple times in this
* stream.
*/
warnx("decompression failed for "
"ino %llu offset %llu",
(u_longlong_t)drrw->drr_object,
(u_longlong_t)drrw->drr_offset);
memcpy(buf, lzbuf, payload_size);
} else if (verbose) {
drrw->drr_compressiontype =
ZIO_COMPRESS_OFF;
fprintf(stderr, "successfully "
"decompressed ino %llu "
"offset %llu\n",
(u_longlong_t)drrw->drr_object,
(u_longlong_t)drrw->drr_offset);
} else {
drrw->drr_compressiontype =
ZIO_COMPRESS_OFF;
}
free(lzbuf);
} else {
/*
* Read the contents of the block unaltered
*/
(void) sfread(buf, payload_size, stdin);
break;
}
/*
* Read and decompress the block
*/
enum zio_compress c =
(enum zio_compress)(intptr_t)p->data;
if (c == ZIO_COMPRESS_OFF) {
(void) sfread(buf, payload_size, stdin);
drrw->drr_compressiontype = 0;
drrw->drr_compressed_size = 0;
if (verbose)
fprintf(stderr,
"Resetting compression type to "
"off for ino %llu offset %llu\n",
(u_longlong_t)drrw->drr_object,
(u_longlong_t)drrw->drr_offset);
break;
}
uint64_t lsize = drrw->drr_logical_size;
ASSERT3U(payload_size, <=, lsize);
char *lzbuf = safe_calloc(payload_size);
(void) sfread(lzbuf, payload_size, stdin);
abd_t sabd, dabd;
abd_get_from_buf_struct(&sabd, lzbuf, payload_size);
abd_get_from_buf_struct(&dabd, buf, lsize);
int err = zio_decompress_data(c, &sabd, &dabd,
payload_size, lsize, NULL);
abd_free(&dabd);
abd_free(&sabd);
if (err == 0) {
drrw->drr_compressiontype = 0;
drrw->drr_compressed_size = 0;
payload_size = lsize;
if (verbose) {
fprintf(stderr,
"successfully decompressed "
"ino %llu offset %llu\n",
(u_longlong_t)drrw->drr_object,
(u_longlong_t)drrw->drr_offset);
}
} else {
/*
* The block must not be compressed, at least
* not with this compression type, possibly
* because it gets written multiple times in
* this stream.
*/
warnx("decompression failed for "
"ino %llu offset %llu",
(u_longlong_t)drrw->drr_object,
(u_longlong_t)drrw->drr_offset);
memcpy(buf, lzbuf, payload_size);
}
free(lzbuf);
break;
}

View File

@ -22,9 +22,10 @@
/*
* Copyright 2022 Axcient. All rights reserved.
* Use is subject to license terms.
*
*/
/*
* Copyright (c) 2022 by Delphix. All rights reserved.
* Copyright (c) 2024, Klara, Inc.
*/
#include <err.h>
@ -71,12 +72,12 @@ zstream_do_recompress(int argc, char *argv[])
dmu_replay_record_t *drr = &thedrr;
zio_cksum_t stream_cksum;
int c;
int level = 0;
int level = -1;
while ((c = getopt(argc, argv, "l:")) != -1) {
switch (c) {
case 'l':
if (sscanf(optarg, "%d", &level) != 1) {
if (sscanf(optarg, "%d", &level) != 0) {
fprintf(stderr,
"failed to parse level '%s'\n",
optarg);
@ -96,22 +97,34 @@ zstream_do_recompress(int argc, char *argv[])
if (argc != 1)
zstream_usage();
enum zio_compress ctype;
if (strcmp(argv[0], "off") == 0) {
ctype = ZIO_COMPRESS_OFF;
int type = 0;
zio_compress_info_t *cinfo = NULL;
if (0 == strcmp(argv[0], "off")) {
type = ZIO_COMPRESS_OFF;
cinfo = &zio_compress_table[type];
} else if (0 == strcmp(argv[0], "inherit") ||
0 == strcmp(argv[0], "empty") ||
0 == strcmp(argv[0], "on")) {
// Fall through to invalid compression type case
} else {
for (ctype = 0; ctype < ZIO_COMPRESS_FUNCTIONS; ctype++) {
if (strcmp(argv[0],
zio_compress_table[ctype].ci_name) == 0)
for (int i = 0; i < ZIO_COMPRESS_FUNCTIONS; i++) {
if (0 == strcmp(zio_compress_table[i].ci_name,
argv[0])) {
cinfo = &zio_compress_table[i];
type = i;
break;
}
}
if (ctype == ZIO_COMPRESS_FUNCTIONS ||
zio_compress_table[ctype].ci_compress == NULL) {
fprintf(stderr, "Invalid compression type %s.\n",
argv[0]);
exit(2);
}
}
if (cinfo == NULL) {
fprintf(stderr, "Invalid compression type %s.\n",
argv[0]);
exit(2);
}
if (cinfo->ci_compress == NULL) {
type = 0;
cinfo = &zio_compress_table[0];
}
if (isatty(STDIN_FILENO)) {
@ -122,7 +135,6 @@ zstream_do_recompress(int argc, char *argv[])
exit(1);
}
abd_init();
fletcher_4_init();
zio_init();
zstd_init();
@ -235,78 +247,63 @@ zstream_do_recompress(int argc, char *argv[])
(void) sfread(buf, payload_size, stdin);
break;
}
enum zio_compress dtype = drrw->drr_compressiontype;
if (dtype >= ZIO_COMPRESS_FUNCTIONS) {
if (drrw->drr_compressiontype >=
ZIO_COMPRESS_FUNCTIONS) {
fprintf(stderr, "Invalid compression type in "
"stream: %d\n", dtype);
"stream: %d\n", drrw->drr_compressiontype);
exit(3);
}
if (zio_compress_table[dtype].ci_decompress == NULL)
dtype = ZIO_COMPRESS_OFF;
zio_compress_info_t *dinfo =
&zio_compress_table[drrw->drr_compressiontype];
/* Set up buffers to minimize memcpys */
char *cbuf, *dbuf;
if (ctype == ZIO_COMPRESS_OFF)
if (cinfo->ci_compress == NULL)
dbuf = buf;
else
dbuf = safe_calloc(bufsz);
if (dtype == ZIO_COMPRESS_OFF)
if (dinfo->ci_decompress == NULL)
cbuf = dbuf;
else
cbuf = safe_calloc(payload_size);
/* Read and decompress the payload */
(void) sfread(cbuf, payload_size, stdin);
if (dtype != ZIO_COMPRESS_OFF) {
abd_t cabd, dabd;
abd_get_from_buf_struct(&cabd,
cbuf, payload_size);
abd_get_from_buf_struct(&dabd, dbuf,
MIN(bufsz, drrw->drr_logical_size));
if (zio_decompress_data(dtype, &cabd, &dabd,
payload_size, abd_get_size(&dabd),
NULL) != 0) {
if (dinfo->ci_decompress != NULL) {
if (0 != dinfo->ci_decompress(cbuf, dbuf,
payload_size, MIN(bufsz,
drrw->drr_logical_size), dinfo->ci_level)) {
warnx("decompression type %d failed "
"for ino %llu offset %llu",
dtype,
type,
(u_longlong_t)drrw->drr_object,
(u_longlong_t)drrw->drr_offset);
exit(4);
}
payload_size = drrw->drr_logical_size;
abd_free(&dabd);
abd_free(&cabd);
free(cbuf);
}
/* Recompress the payload */
if (ctype != ZIO_COMPRESS_OFF) {
abd_t dabd, abd;
abd_get_from_buf_struct(&dabd,
dbuf, drrw->drr_logical_size);
abd_t *pabd =
abd_get_from_buf_struct(&abd, buf, bufsz);
size_t csize = zio_compress_data(ctype, &dabd,
&pabd, drrw->drr_logical_size, level);
size_t rounded =
P2ROUNDUP(csize, SPA_MINBLOCKSIZE);
if (rounded >= drrw->drr_logical_size) {
if (cinfo->ci_compress != NULL) {
payload_size = P2ROUNDUP(cinfo->ci_compress(
dbuf, buf, drrw->drr_logical_size,
MIN(payload_size, bufsz), (level == -1 ?
cinfo->ci_level : level)),
SPA_MINBLOCKSIZE);
if (payload_size != drrw->drr_logical_size) {
drrw->drr_compressiontype = type;
drrw->drr_compressed_size =
payload_size;
} else {
memcpy(buf, dbuf, payload_size);
drrw->drr_compressiontype = 0;
drrw->drr_compressed_size = 0;
} else {
abd_zero_off(pabd, csize,
rounded - csize);
drrw->drr_compressiontype = ctype;
drrw->drr_compressed_size =
payload_size = rounded;
}
abd_free(&abd);
abd_free(&dabd);
free(dbuf);
} else {
drrw->drr_compressiontype = 0;
drrw->drr_compressiontype = type;
drrw->drr_compressed_size = 0;
}
break;
@ -374,7 +371,6 @@ zstream_do_recompress(int argc, char *argv[])
fletcher_4_fini();
zio_fini();
zstd_fini();
abd_fini();
return (0);
}

View File

@ -56,6 +56,15 @@ typedef struct redup_table {
int numhashbits;
} redup_table_t;
int
highbit64(uint64_t i)
{
if (i == 0)
return (0);
return (NBBY * sizeof (uint64_t) - __builtin_clzll(i));
}
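/*
 * Illustrative values (not part of this change): highbit64(0) == 0,
 * highbit64(1) == 1, highbit64(0x8000) == 16, highbit64(UINT64_MAX) == 64.
 */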
void *
safe_calloc(size_t n)
{
@ -177,7 +186,7 @@ static void
zfs_redup_stream(int infd, int outfd, boolean_t verbose)
{
int bufsz = SPA_MAXBLOCKSIZE;
dmu_replay_record_t thedrr;
dmu_replay_record_t thedrr = { 0 };
dmu_replay_record_t *drr = &thedrr;
redup_table_t rdt;
zio_cksum_t stream_cksum;
@ -185,8 +194,6 @@ zfs_redup_stream(int infd, int outfd, boolean_t verbose)
uint64_t num_records = 0;
uint64_t num_write_byref_records = 0;
memset(&thedrr, 0, sizeof (dmu_replay_record_t));
#ifdef _ILP32
uint64_t max_rde_size = SMALLEST_POSSIBLE_MAX_RDT_MB << 20;
#else

File diff suppressed because it is too large

View File

@ -10,8 +10,7 @@ AM_CPPFLAGS = \
-I$(top_srcdir)/include \
-I$(top_srcdir)/module/icp/include \
-I$(top_srcdir)/lib/libspl/include \
-I$(top_srcdir)/lib/libspl/include/os/@ac_system_l@ \
-I$(top_srcdir)/lib/libzpool/include
-I$(top_srcdir)/lib/libspl/include/os/@ac_system_l@
AM_LIBTOOLFLAGS = --silent
@ -22,9 +21,7 @@ AM_CFLAGS += $(IMPLICIT_FALLTHROUGH)
AM_CFLAGS += $(DEBUG_CFLAGS)
AM_CFLAGS += $(ASAN_CFLAGS)
AM_CFLAGS += $(UBSAN_CFLAGS)
AM_CFLAGS += $(CODE_COVERAGE_CFLAGS)
AM_CFLAGS += $(NO_FORMAT_ZERO_LENGTH)
AM_CFLAGS += $(NO_FORMAT_TRUNCATION)
AM_CFLAGS += $(CODE_COVERAGE_CFLAGS) $(NO_FORMAT_ZERO_LENGTH)
if BUILD_FREEBSD
AM_CFLAGS += -fPIC -Werror -Wno-unknown-pragmas -Wno-enum-conversion
AM_CFLAGS += -include $(top_srcdir)/include/os/freebsd/spl/sys/ccompile.h
@ -36,7 +33,6 @@ AM_CPPFLAGS += -D_REENTRANT
AM_CPPFLAGS += -D_FILE_OFFSET_BITS=64
AM_CPPFLAGS += -D_LARGEFILE64_SOURCE
AM_CPPFLAGS += -DLIBEXECDIR=\"$(libexecdir)\"
AM_CPPFLAGS += -DZFSEXECDIR=\"$(zfsexecdir)\"
AM_CPPFLAGS += -DRUNSTATEDIR=\"$(runstatedir)\"
AM_CPPFLAGS += -DSBINDIR=\"$(sbindir)\"
AM_CPPFLAGS += -DSYSCONFDIR=\"$(sysconfdir)\"
@ -45,6 +41,21 @@ AM_CPPFLAGS += $(DEBUG_CPPFLAGS)
AM_CPPFLAGS += $(CODE_COVERAGE_CPPFLAGS)
AM_CPPFLAGS += -DTEXT_DOMAIN=\"zfs-@ac_system_l@-user\"
AM_CPPFLAGS_NOCHECK = -D"strtok(...)=strtok(__VA_ARGS__) __attribute__((deprecated(\"Use strtok_r(3) instead!\")))"
AM_CPPFLAGS_NOCHECK += -D"__xpg_basename(...)=__xpg_basename(__VA_ARGS__) __attribute__((deprecated(\"basename(3) is underspecified. Use zfs_basename() instead!\")))"
AM_CPPFLAGS_NOCHECK += -D"basename(...)=basename(__VA_ARGS__) __attribute__((deprecated(\"basename(3) is underspecified. Use zfs_basename() instead!\")))"
AM_CPPFLAGS_NOCHECK += -D"dirname(...)=dirname(__VA_ARGS__) __attribute__((deprecated(\"dirname(3) is underspecified. Use zfs_dirnamelen() instead!\")))"
AM_CPPFLAGS_NOCHECK += -D"bcopy(...)=__attribute__((deprecated(\"bcopy(3) is deprecated. Use memcpy(3)/memmove(3) instead!\"))) bcopy(__VA_ARGS__)"
AM_CPPFLAGS_NOCHECK += -D"bcmp(...)=__attribute__((deprecated(\"bcmp(3) is deprecated. Use memcmp(3) instead!\"))) bcmp(__VA_ARGS__)"
AM_CPPFLAGS_NOCHECK += -D"bzero(...)=__attribute__((deprecated(\"bzero(3) is deprecated. Use memset(3) instead!\"))) bzero(__VA_ARGS__)"
AM_CPPFLAGS_NOCHECK += -D"asctime(...)=__attribute__((deprecated(\"Use strftime(3) instead!\"))) asctime(__VA_ARGS__)"
AM_CPPFLAGS_NOCHECK += -D"asctime_r(...)=__attribute__((deprecated(\"Use strftime(3) instead!\"))) asctime_r(__VA_ARGS__)"
AM_CPPFLAGS_NOCHECK += -D"gmtime(...)=__attribute__((deprecated(\"gmtime(3) isn't thread-safe. Use gmtime_r(3) instead!\"))) gmtime(__VA_ARGS__)"
AM_CPPFLAGS_NOCHECK += -D"localtime(...)=__attribute__((deprecated(\"localtime(3) isn't thread-safe. Use localtime_r(3) instead!\"))) localtime(__VA_ARGS__)"
AM_CPPFLAGS_NOCHECK += -D"strncpy(...)=__attribute__((deprecated(\"strncpy(3) is deprecated. Use strlcpy(3) instead!\"))) strncpy(__VA_ARGS__)"
AM_CPPFLAGS += $(AM_CPPFLAGS_NOCHECK)
if ASAN_ENABLED
AM_CPPFLAGS += -DZFS_ASAN_ENABLED
endif
@ -58,7 +69,7 @@ AM_LDFLAGS += $(ASAN_LDFLAGS)
AM_LDFLAGS += $(UBSAN_LDFLAGS)
if BUILD_FREEBSD
AM_LDFLAGS += -fstack-protector-strong
AM_LDFLAGS += -fstack-protector-strong -shared
AM_LDFLAGS += -Wl,-x -Wl,--fatal-warnings -Wl,--warn-shared-textrel
AM_LDFLAGS += -lm
endif
@ -71,7 +82,4 @@ KERNEL_CFLAGS = $(FRAME_LARGER_THAN)
LIBRARY_CFLAGS = -no-suppress
# Forcibly enable asserts/debugging for libzpool &al.
# Since ZFS_DEBUG can change shared data structures, all libzpool users must
# be compiled with the same flags.
# See https://github.com/openzfs/zfs/issues/16476
LIBZPOOL_CPPFLAGS = -DDEBUG -UNDEBUG -DZFS_DEBUG
FORCEDEBUG_CPPFLAGS = -DDEBUG -UNDEBUG -DZFS_DEBUG

View File

@ -18,7 +18,6 @@ subst_sed_cmd = \
-e 's|@ASAN_ENABLED[@]|$(ASAN_ENABLED)|g' \
-e 's|@DEFAULT_INIT_NFS_SERVER[@]|$(DEFAULT_INIT_NFS_SERVER)|g' \
-e 's|@DEFAULT_INIT_SHELL[@]|$(DEFAULT_INIT_SHELL)|g' \
-e 's|@IS_SYSV_RC[@]|$(IS_SYSV_RC)|g' \
-e 's|@LIBFETCH_DYNAMIC[@]|$(LIBFETCH_DYNAMIC)|g' \
-e 's|@LIBFETCH_SONAME[@]|$(LIBFETCH_SONAME)|g' \
-e 's|@PYTHON[@]|$(PYTHON)|g' \
@ -44,4 +43,4 @@ SUBSTFILES =
CLEANFILES += $(SUBSTFILES)
dist_noinst_DATA += $(SUBSTFILES:=.in)
$(SUBSTFILES): $(call SUBST,%,)
$(call SUBST,%,)

View File

@ -80,11 +80,10 @@ AC_DEFUN([ZFS_AC_CONFIG_ALWAYS_PYZFS], [
[AC_MSG_ERROR("Python $PYTHON_VERSION unknown")]
)
AS_IF([test "x$enable_pyzfs" = xyes], [
AX_PYTHON_DEVEL([$PYTHON_REQUIRED_VERSION])
], [
AX_PYTHON_DEVEL([$PYTHON_REQUIRED_VERSION], [true])
AS_IF([test "x$ax_python_devel_found" = xno], [
AX_PYTHON_DEVEL([$PYTHON_REQUIRED_VERSION], [
AS_IF([test "x$enable_pyzfs" = xyes], [
AC_MSG_ERROR("Python $PYTHON_REQUIRED_VERSION development library is not installed")
], [test "x$enable_pyzfs" != xno], [
enable_pyzfs=no
])
])

View File

@ -4,13 +4,18 @@
#
# SYNOPSIS
#
# AX_PYTHON_DEVEL([version[,optional]])
# AX_PYTHON_DEVEL([version], [action-if-not-found])
#
# DESCRIPTION
#
# Note: Defines as a precious variable "PYTHON_VERSION". Don't override it
# in your configure.ac.
#
# Note: this is a slightly modified version of the original AX_PYTHON_DEVEL
# macro which accepts an additional [action-if-not-found] argument. This
# allows detecting whether Python development support is available without
# aborting the configure phase with a hard error when it is not.
#
# This macro checks for Python and tries to get the include path to
# 'Python.h'. It provides the $(PYTHON_CPPFLAGS) and $(PYTHON_LIBS) output
# variables. It also exports $(PYTHON_EXTRA_LIBS) and
@ -23,11 +28,6 @@
# version number. Don't use "PYTHON_VERSION" for this: that environment
# variable is declared as precious and thus reserved for the end-user.
#
# By default this will fail if it does not detect a development version of
# python. If you want it to continue, set optional to true, like
# AX_PYTHON_DEVEL([], [true]). The ax_python_devel_found variable will be
# "no" if it fails.
#
# This macro should work for all versions of Python >= 2.1.0. As an end
# user, you can disable the check for the python version by setting the
# PYTHON_NOVERSIONCHECK environment variable to something else than the
@ -45,6 +45,7 @@
# Copyright (c) 2009 Matteo Settenvini <matteo@member.fsf.org>
# Copyright (c) 2009 Horst Knorr <hk_classes@knoda.org>
# Copyright (c) 2013 Daniel Mullner <muellner@math.stanford.edu>
# Copyright (c) 2018 loli10K <ezomori.nozomu@gmail.com>
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
@ -72,18 +73,10 @@
# modified version of the Autoconf Macro, you may extend this special
# exception to the GPL to apply to your modified version as well.
#serial 36
#serial 21
AU_ALIAS([AC_PYTHON_DEVEL], [AX_PYTHON_DEVEL])
AC_DEFUN([AX_PYTHON_DEVEL],[
# Get whether it's optional
if test -z "$2"; then
ax_python_devel_optional=false
else
ax_python_devel_optional=$2
fi
ax_python_devel_found=yes
#
# Allow the use of a (user set) custom python version
#
@ -94,26 +87,23 @@ AC_DEFUN([AX_PYTHON_DEVEL],[
AC_PATH_PROG([PYTHON],[python[$PYTHON_VERSION]])
if test -z "$PYTHON"; then
AC_MSG_WARN([Cannot find python$PYTHON_VERSION in your system path])
if ! $ax_python_devel_optional; then
AC_MSG_ERROR([Giving up, python development not available])
fi
ax_python_devel_found=no
PYTHON_VERSION=""
m4_ifvaln([$2],[$2],[
AC_MSG_ERROR([Cannot find python$PYTHON_VERSION in your system path])
PYTHON_VERSION=""
])
fi
if test $ax_python_devel_found = yes; then
#
# Check for a version of Python >= 2.1.0
#
AC_MSG_CHECKING([for a version of Python >= '2.1.0'])
ac_supports_python_ver=`$PYTHON -c "import sys; \
#
# Check for a version of Python >= 2.1.0
#
AC_MSG_CHECKING([for a version of Python >= '2.1.0'])
ac_supports_python_ver=`$PYTHON -c "import sys; \
ver = sys.version.split ()[[0]]; \
print (ver >= '2.1.0')"`
if test "$ac_supports_python_ver" != "True"; then
if test "$ac_supports_python_ver" != "True"; then
if test -z "$PYTHON_NOVERSIONCHECK"; then
AC_MSG_RESULT([no])
AC_MSG_WARN([
AC_MSG_FAILURE([
This version of the AC@&t@_PYTHON_DEVEL macro
doesn't work properly with versions of Python before
2.1.0. You may need to re-run configure, setting the
@ -122,27 +112,20 @@ PYTHON_EXTRA_LIBS and PYTHON_EXTRA_LDFLAGS by hand.
Moreover, to disable this check, set PYTHON_NOVERSIONCHECK
to something else than an empty string.
])
if ! $ax_python_devel_optional; then
AC_MSG_FAILURE([Giving up])
fi
ax_python_devel_found=no
PYTHON_VERSION=""
else
AC_MSG_RESULT([skip at user request])
fi
else
else
AC_MSG_RESULT([yes])
fi
fi
if test $ax_python_devel_found = yes; then
#
# If the macro parameter ``version'' is set, honour it.
# A Python shim class, VPy, is used to implement correct version comparisons via
# string expressions, since e.g. a naive textual ">= 2.7.3" won't work for
# Python 2.7.10 (the ".1" being evaluated as less than ".3").
#
if test -n "$1"; then
#
# If the macro parameter ``version'' is set, honour it.
# A Python shim class, VPy, is used to implement correct version comparisons via
# string expressions, since e.g. a naive textual ">= 2.7.3" won't work for
# Python 2.7.10 (the ".1" being evaluated as less than ".3").
#
if test -n "$1"; then
AC_MSG_CHECKING([for a version of Python $1])
cat << EOF > ax_python_devel_vpy.py
class VPy:
@ -150,7 +133,7 @@ class VPy:
return tuple(map(int, s.strip().replace("rc", ".").split(".")))
def __init__(self):
import sys
self.vpy = tuple(sys.version_info)[[:3]]
self.vpy = tuple(sys.version_info)
def __eq__(self, s):
return self.vpy == self.vtup(s)
def __ne__(self, s):
@ -172,69 +155,25 @@ EOF
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
AC_MSG_WARN([this package requires Python $1.
AC_MSG_ERROR([this package requires Python $1.
If you have it installed, but it isn't the default Python
interpreter in your system path, please pass the PYTHON_VERSION
variable to configure. See ``configure --help'' for reference.
])
if ! $ax_python_devel_optional; then
AC_MSG_ERROR([Giving up])
fi
ax_python_devel_found=no
PYTHON_VERSION=""
fi
fi
fi
if test $ax_python_devel_found = yes; then
#
# Check if you have distutils, else fail
#
AC_MSG_CHECKING([for the sysconfig Python package])
ac_sysconfig_result=`$PYTHON -c "import sysconfig" 2>&1`
if test $? -eq 0; then
AC_MSG_RESULT([yes])
IMPORT_SYSCONFIG="import sysconfig"
else
AC_MSG_RESULT([no])
AC_MSG_CHECKING([for the distutils Python package])
ac_sysconfig_result=`$PYTHON -c "from distutils import sysconfig" 2>&1`
if test $? -eq 0; then
AC_MSG_RESULT([yes])
IMPORT_SYSCONFIG="from distutils import sysconfig"
else
AC_MSG_WARN([cannot import Python module "distutils".
Please check your Python installation. The error was:
$ac_sysconfig_result])
if ! $ax_python_devel_optional; then
AC_MSG_ERROR([Giving up])
fi
ax_python_devel_found=no
PYTHON_VERSION=""
fi
fi
fi
if test $ax_python_devel_found = yes; then
#
# Check for Python include path
#
AC_MSG_CHECKING([for Python include path])
if test -z "$PYTHON_CPPFLAGS"; then
if test "$IMPORT_SYSCONFIG" = "import sysconfig"; then
# sysconfig module has different functions
python_path=`$PYTHON -c "$IMPORT_SYSCONFIG; \
print (sysconfig.get_path ('include'));"`
plat_python_path=`$PYTHON -c "$IMPORT_SYSCONFIG; \
print (sysconfig.get_path ('platinclude'));"`
else
# old distutils way
python_path=`$PYTHON -c "$IMPORT_SYSCONFIG; \
print (sysconfig.get_python_inc ());"`
plat_python_path=`$PYTHON -c "$IMPORT_SYSCONFIG; \
print (sysconfig.get_python_inc (plat_specific=1));"`
fi
#
# Check for Python include path
#
#
AC_MSG_CHECKING([for Python include path])
if test -z "$PYTHON_CPPFLAGS"; then
python_path=`$PYTHON -c "import sysconfig; \
print (sysconfig.get_path('include'));"`
plat_python_path=`$PYTHON -c "import sysconfig; \
print (sysconfig.get_path('platinclude'));"`
if test -n "${python_path}"; then
if test "${plat_python_path}" != "${python_path}"; then
python_path="-I$python_path -I$plat_python_path"
@ -243,15 +182,15 @@ $ac_sysconfig_result])
fi
fi
PYTHON_CPPFLAGS=$python_path
fi
AC_MSG_RESULT([$PYTHON_CPPFLAGS])
AC_SUBST([PYTHON_CPPFLAGS])
fi
AC_MSG_RESULT([$PYTHON_CPPFLAGS])
AC_SUBST([PYTHON_CPPFLAGS])
#
# Check for Python library path
#
AC_MSG_CHECKING([for Python library path])
if test -z "$PYTHON_LIBS"; then
#
# Check for Python library path
#
AC_MSG_CHECKING([for Python library path])
if test -z "$PYTHON_LIBS"; then
# (makes two attempts to ensure we've got a version number
# from the interpreter)
ac_python_version=`cat<<EOD | $PYTHON -
@ -269,7 +208,7 @@ EOD`
ac_python_version=$PYTHON_VERSION
else
ac_python_version=`$PYTHON -c "import sys; \
print ("%d.%d" % sys.version_info[[:2]])"`
print ('.'.join(sys.version.split('.')[[:2]]))"`
fi
fi
@ -281,7 +220,7 @@ EOD`
ac_python_libdir=`cat<<EOD | $PYTHON -
# There should be only one
$IMPORT_SYSCONFIG
import sysconfig
e = sysconfig.get_config_var('LIBDIR')
if e is not None:
print (e)
@ -290,7 +229,7 @@ EOD`
# Now, for the library:
ac_python_library=`cat<<EOD | $PYTHON -
$IMPORT_SYSCONFIG
import sysconfig
c = sysconfig.get_config_vars()
if 'LDVERSION' in c:
print ('python'+c[['LDVERSION']])
@ -310,140 +249,88 @@ EOD`
else
# old way: use libpython from python_configdir
ac_python_libdir=`$PYTHON -c \
"from sysconfig import get_python_lib as f; \
"import sysconfig; \
import os; \
print (os.path.join(f(plat_specific=1, standard_lib=1), 'config'));"`
print (os.path.join(sysconfig.get_path('platstdlib'), 'config'));"`
PYTHON_LIBS="-L$ac_python_libdir -lpython$ac_python_version"
fi
if test -z "PYTHON_LIBS"; then
AC_MSG_WARN([
m4_ifvaln([$2],[$2],[
AC_MSG_ERROR([
Cannot determine location of your Python DSO. Please check it was installed with
dynamic libraries enabled, or try setting PYTHON_LIBS by hand.
])
])
if ! $ax_python_devel_optional; then
AC_MSG_ERROR([Giving up])
fi
ax_python_devel_found=no
PYTHON_VERSION=""
fi
fi
fi
AC_MSG_RESULT([$PYTHON_LIBS])
AC_SUBST([PYTHON_LIBS])
if test $ax_python_devel_found = yes; then
AC_MSG_RESULT([$PYTHON_LIBS])
AC_SUBST([PYTHON_LIBS])
#
# Check for site packages
#
AC_MSG_CHECKING([for Python site-packages path])
if test -z "$PYTHON_SITE_PKG"; then
PYTHON_SITE_PKG=`$PYTHON -c "import distutils.sysconfig; \
print (distutils.sysconfig.get_python_lib(0,0));" 2>/dev/null || \
$PYTHON -c "import sysconfig; \
print (sysconfig.get_path('purelib'));"`
fi
AC_MSG_RESULT([$PYTHON_SITE_PKG])
AC_SUBST([PYTHON_SITE_PKG])
#
# Check for site packages
#
AC_MSG_CHECKING([for Python site-packages path])
if test -z "$PYTHON_SITE_PKG"; then
if test "$IMPORT_SYSCONFIG" = "import sysconfig"; then
PYTHON_SITE_PKG=`$PYTHON -c "
$IMPORT_SYSCONFIG;
if hasattr(sysconfig, 'get_default_scheme'):
scheme = sysconfig.get_default_scheme()
else:
scheme = sysconfig._get_default_scheme()
if scheme == 'posix_local':
# Debian's default scheme installs to /usr/local/ but we want to find headers in /usr/
scheme = 'posix_prefix'
prefix = '$prefix'
if prefix == 'NONE':
prefix = '$ac_default_prefix'
sitedir = sysconfig.get_path('purelib', scheme, vars={'base': prefix})
print(sitedir)"`
else
# distutils.sysconfig way
PYTHON_SITE_PKG=`$PYTHON -c "$IMPORT_SYSCONFIG; \
print (sysconfig.get_python_lib(0,0));"`
fi
fi
AC_MSG_RESULT([$PYTHON_SITE_PKG])
AC_SUBST([PYTHON_SITE_PKG])
#
# Check for platform-specific site packages
#
AC_MSG_CHECKING([for Python platform specific site-packages path])
if test -z "$PYTHON_PLATFORM_SITE_PKG"; then
if test "$IMPORT_SYSCONFIG" = "import sysconfig"; then
PYTHON_PLATFORM_SITE_PKG=`$PYTHON -c "
$IMPORT_SYSCONFIG;
if hasattr(sysconfig, 'get_default_scheme'):
scheme = sysconfig.get_default_scheme()
else:
scheme = sysconfig._get_default_scheme()
if scheme == 'posix_local':
# Debian's default scheme installs to /usr/local/ but we want to find headers in /usr/
scheme = 'posix_prefix'
prefix = '$prefix'
if prefix == 'NONE':
prefix = '$ac_default_prefix'
sitedir = sysconfig.get_path('platlib', scheme, vars={'platbase': prefix})
print(sitedir)"`
else
# distutils.sysconfig way
PYTHON_PLATFORM_SITE_PKG=`$PYTHON -c "$IMPORT_SYSCONFIG; \
print (sysconfig.get_python_lib(1,0));"`
fi
fi
AC_MSG_RESULT([$PYTHON_PLATFORM_SITE_PKG])
AC_SUBST([PYTHON_PLATFORM_SITE_PKG])
#
# libraries which must be linked in when embedding
#
AC_MSG_CHECKING(python extra libraries)
if test -z "$PYTHON_EXTRA_LIBS"; then
PYTHON_EXTRA_LIBS=`$PYTHON -c "$IMPORT_SYSCONFIG; \
#
# libraries which must be linked in when embedding
#
AC_MSG_CHECKING(python extra libraries)
if test -z "$PYTHON_EXTRA_LIBS"; then
PYTHON_EXTRA_LIBS=`$PYTHON -c "import sysconfig; \
conf = sysconfig.get_config_var; \
print (conf('LIBS') + ' ' + conf('SYSLIBS'))"`
fi
AC_MSG_RESULT([$PYTHON_EXTRA_LIBS])
AC_SUBST(PYTHON_EXTRA_LIBS)
fi
AC_MSG_RESULT([$PYTHON_EXTRA_LIBS])
AC_SUBST(PYTHON_EXTRA_LIBS)
#
# linking flags needed when embedding
#
AC_MSG_CHECKING(python extra linking flags)
if test -z "$PYTHON_EXTRA_LDFLAGS"; then
PYTHON_EXTRA_LDFLAGS=`$PYTHON -c "$IMPORT_SYSCONFIG; \
#
# linking flags needed when embedding
#
AC_MSG_CHECKING(python extra linking flags)
if test -z "$PYTHON_EXTRA_LDFLAGS"; then
PYTHON_EXTRA_LDFLAGS=`$PYTHON -c "import sysconfig; \
conf = sysconfig.get_config_var; \
print (conf('LINKFORSHARED'))"`
# Hack for macOS, which sticks this in here.
PYTHON_EXTRA_LDFLAGS=`echo $PYTHON_EXTRA_LDFLAGS | sed 's/CoreFoundation.*$/CoreFoundation/'`
fi
AC_MSG_RESULT([$PYTHON_EXTRA_LDFLAGS])
AC_SUBST(PYTHON_EXTRA_LDFLAGS)
fi
AC_MSG_RESULT([$PYTHON_EXTRA_LDFLAGS])
AC_SUBST(PYTHON_EXTRA_LDFLAGS)
#
# final check to see if everything compiles alright
#
AC_MSG_CHECKING([consistency of all components of python development environment])
# save current global flags
ac_save_LIBS="$LIBS"
ac_save_LDFLAGS="$LDFLAGS"
ac_save_CPPFLAGS="$CPPFLAGS"
LIBS="$ac_save_LIBS $PYTHON_LIBS $PYTHON_EXTRA_LIBS"
LDFLAGS="$ac_save_LDFLAGS $PYTHON_EXTRA_LDFLAGS"
CPPFLAGS="$ac_save_CPPFLAGS $PYTHON_CPPFLAGS"
AC_LANG_PUSH([C])
AC_LINK_IFELSE([
#
# final check to see if everything compiles alright
#
AC_MSG_CHECKING([consistency of all components of python development environment])
# save current global flags
ac_save_LIBS="$LIBS"
ac_save_LDFLAGS="$LDFLAGS"
ac_save_CPPFLAGS="$CPPFLAGS"
LIBS="$ac_save_LIBS $PYTHON_LIBS $PYTHON_EXTRA_LIBS $PYTHON_EXTRA_LIBS"
LDFLAGS="$ac_save_LDFLAGS $PYTHON_EXTRA_LDFLAGS"
CPPFLAGS="$ac_save_CPPFLAGS $PYTHON_CPPFLAGS"
AC_LANG_PUSH([C])
AC_LINK_IFELSE([
AC_LANG_PROGRAM([[#include <Python.h>]],
[[Py_Initialize();]])
],[pythonexists=yes],[pythonexists=no])
AC_LANG_POP([C])
# turn back to default flags
CPPFLAGS="$ac_save_CPPFLAGS"
LIBS="$ac_save_LIBS"
LDFLAGS="$ac_save_LDFLAGS"
AC_LANG_POP([C])
# turn back to default flags
CPPFLAGS="$ac_save_CPPFLAGS"
LIBS="$ac_save_LIBS"
LDFLAGS="$ac_save_LDFLAGS"
AC_MSG_RESULT([$pythonexists])
AC_MSG_RESULT([$pythonexists])
if test ! "x$pythonexists" = "xyes"; then
AC_MSG_WARN([
if test ! "x$pythonexists" = "xyes"; then
m4_ifvaln([$2],[$2],[
AC_MSG_FAILURE([
Could not link test program to Python. Maybe the main Python library has been
installed in some non-standard library path. If so, pass it to configure,
via the LIBS environment variable.
@ -453,13 +340,9 @@ print(sitedir)"`
You probably have to install the development version of the Python package
for your distribution. The exact name of this package varies among them.
============================================================================
])
if ! $ax_python_devel_optional; then
AC_MSG_ERROR([Giving up])
fi
ax_python_devel_found=no
PYTHON_VERSION=""
fi
])
PYTHON_VERSION=""
])
fi
#

View File

@ -90,8 +90,8 @@ AC_DEFUN([ZFS_AC_FIND_SYSTEM_LIBRARY], [
AC_DEFINE([HAVE_][$1], [1], [Define if you have [$5]])
$7
],[dnl ELSE
AC_SUBST([$1]_CFLAGS, [""])
AC_SUBST([$1]_LIBS, [""])
AC_SUBST([$1]_CFLAGS, [])
AC_SUBST([$1]_LIBS, [])
AC_MSG_WARN([cannot find [$5] via pkg-config or in the standard locations])
$8
])

View File

@ -172,7 +172,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_OPERATIONS_GET_ACL], [
ZFS_LINUX_TEST_SRC([inode_operations_get_acl], [
#include <linux/fs.h>
static struct posix_acl *get_acl_fn(struct inode *inode, int type)
struct posix_acl *get_acl_fn(struct inode *inode, int type)
{ return NULL; }
static const struct inode_operations
@ -184,7 +184,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_OPERATIONS_GET_ACL], [
ZFS_LINUX_TEST_SRC([inode_operations_get_acl_rcu], [
#include <linux/fs.h>
static struct posix_acl *get_acl_fn(struct inode *inode, int type,
struct posix_acl *get_acl_fn(struct inode *inode, int type,
bool rcu) { return NULL; }
static const struct inode_operations
@ -196,7 +196,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_OPERATIONS_GET_ACL], [
ZFS_LINUX_TEST_SRC([inode_operations_get_inode_acl], [
#include <linux/fs.h>
static struct posix_acl *get_inode_acl_fn(struct inode *inode, int type,
struct posix_acl *get_inode_acl_fn(struct inode *inode, int type,
bool rcu) { return NULL; }
static const struct inode_operations
@ -243,7 +243,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_OPERATIONS_SET_ACL], [
ZFS_LINUX_TEST_SRC([inode_operations_set_acl_mnt_idmap_dentry], [
#include <linux/fs.h>
static int set_acl_fn(struct mnt_idmap *idmap,
int set_acl_fn(struct mnt_idmap *idmap,
struct dentry *dent, struct posix_acl *acl,
int type) { return 0; }
@ -255,7 +255,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_OPERATIONS_SET_ACL], [
ZFS_LINUX_TEST_SRC([inode_operations_set_acl_userns_dentry], [
#include <linux/fs.h>
static int set_acl_fn(struct user_namespace *userns,
int set_acl_fn(struct user_namespace *userns,
struct dentry *dent, struct posix_acl *acl,
int type) { return 0; }
@ -267,7 +267,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_OPERATIONS_SET_ACL], [
ZFS_LINUX_TEST_SRC([inode_operations_set_acl_userns], [
#include <linux/fs.h>
static int set_acl_fn(struct user_namespace *userns,
int set_acl_fn(struct user_namespace *userns,
struct inode *inode, struct posix_acl *acl,
int type) { return 0; }
@ -279,7 +279,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_OPERATIONS_SET_ACL], [
ZFS_LINUX_TEST_SRC([inode_operations_set_acl], [
#include <linux/fs.h>
static int set_acl_fn(struct inode *inode, struct posix_acl *acl,
int set_acl_fn(struct inode *inode, struct posix_acl *acl,
int type) { return 0; }
static const struct inode_operations

View File

@ -8,7 +8,7 @@ dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_AUTOMOUNT], [
ZFS_LINUX_TEST_SRC([dentry_operations_d_automount], [
#include <linux/dcache.h>
static struct vfsmount *d_automount(struct path *p) { return NULL; }
struct vfsmount *d_automount(struct path *p) { return NULL; }
struct dentry_operations dops __attribute__ ((unused)) = {
.d_automount = d_automount,
};

View File

@ -247,7 +247,7 @@ dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_BIO_END_IO_T_ARGS], [
ZFS_LINUX_TEST_SRC([bio_end_io_t_args], [
#include <linux/bio.h>
static void wanted_end_io(struct bio *bio) { return; }
void wanted_end_io(struct bio *bio) { return; }
bio_end_io_t *end_io __attribute__ ((unused)) = wanted_end_io;
], [])
])

View File

@ -25,8 +25,6 @@ AC_DEFUN([ZFS_AC_KERNEL_BLK_QUEUE_PLUG], [
dnl #
dnl # 2.6.32 - 4.11: statically allocated bdi in request_queue
dnl # 4.12: dynamically allocated bdi in request_queue
dnl # 6.11: bdi no longer available through request_queue, so get it from
dnl # the gendisk attached to the queue
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_BLK_QUEUE_BDI], [
ZFS_LINUX_TEST_SRC([blk_queue_bdi], [
@ -49,30 +47,6 @@ AC_DEFUN([ZFS_AC_KERNEL_BLK_QUEUE_BDI], [
])
])
AC_DEFUN([ZFS_AC_KERNEL_SRC_BLK_QUEUE_DISK_BDI], [
ZFS_LINUX_TEST_SRC([blk_queue_disk_bdi], [
#include <linux/blkdev.h>
#include <linux/backing-dev.h>
], [
struct request_queue q;
struct gendisk disk;
struct backing_dev_info bdi __attribute__ ((unused));
q.disk = &disk;
q.disk->bdi = &bdi;
])
])
AC_DEFUN([ZFS_AC_KERNEL_BLK_QUEUE_DISK_BDI], [
AC_MSG_CHECKING([whether backing_dev_info is available through queue gendisk])
ZFS_LINUX_TEST_RESULT([blk_queue_disk_bdi], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BLK_QUEUE_DISK_BDI, 1,
[backing_dev_info is available through queue gendisk])
],[
AC_MSG_RESULT(no)
])
])
dnl #
dnl # 5.9: added blk_queue_update_readahead(),
dnl # 5.15: renamed to disk_update_readahead()
@ -358,7 +332,7 @@ AC_DEFUN([ZFS_AC_KERNEL_BLK_QUEUE_MAX_HW_SECTORS], [
ZFS_LINUX_TEST_RESULT([blk_queue_max_hw_sectors], [
AC_MSG_RESULT(yes)
],[
AC_MSG_RESULT(no)
ZFS_LINUX_TEST_ERROR([blk_queue_max_hw_sectors])
])
])
@ -381,7 +355,7 @@ AC_DEFUN([ZFS_AC_KERNEL_BLK_QUEUE_MAX_SEGMENTS], [
ZFS_LINUX_TEST_RESULT([blk_queue_max_segments], [
AC_MSG_RESULT(yes)
], [
AC_MSG_RESULT(no)
ZFS_LINUX_TEST_ERROR([blk_queue_max_segments])
])
])
@ -403,14 +377,6 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_BLK_MQ], [
(void) blk_mq_alloc_tag_set(&tag_set);
return BLK_STS_OK;
], [])
ZFS_LINUX_TEST_SRC([blk_mq_rq_hctx], [
#include <linux/blk-mq.h>
#include <linux/blkdev.h>
], [
struct request rq = {0};
struct blk_mq_hw_ctx *hctx = NULL;
rq.mq_hctx = hctx;
], [])
])
AC_DEFUN([ZFS_AC_KERNEL_BLK_MQ], [
@ -418,13 +384,6 @@ AC_DEFUN([ZFS_AC_KERNEL_BLK_MQ], [
ZFS_LINUX_TEST_RESULT([blk_mq], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BLK_MQ, 1, [block multiqueue is available])
AC_MSG_CHECKING([whether block multiqueue hardware context is cached in struct request])
ZFS_LINUX_TEST_RESULT([blk_mq_rq_hctx], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BLK_MQ_RQ_HCTX, 1, [block multiqueue hardware context is cached in struct request])
], [
AC_MSG_RESULT(no)
])
], [
AC_MSG_RESULT(no)
])
@ -433,7 +392,6 @@ AC_DEFUN([ZFS_AC_KERNEL_BLK_MQ], [
AC_DEFUN([ZFS_AC_KERNEL_SRC_BLK_QUEUE], [
ZFS_AC_KERNEL_SRC_BLK_QUEUE_PLUG
ZFS_AC_KERNEL_SRC_BLK_QUEUE_BDI
ZFS_AC_KERNEL_SRC_BLK_QUEUE_DISK_BDI
ZFS_AC_KERNEL_SRC_BLK_QUEUE_UPDATE_READAHEAD
ZFS_AC_KERNEL_SRC_BLK_QUEUE_DISCARD
ZFS_AC_KERNEL_SRC_BLK_QUEUE_SECURE_ERASE
@ -448,7 +406,6 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_BLK_QUEUE], [
AC_DEFUN([ZFS_AC_KERNEL_BLK_QUEUE], [
ZFS_AC_KERNEL_BLK_QUEUE_PLUG
ZFS_AC_KERNEL_BLK_QUEUE_BDI
ZFS_AC_KERNEL_BLK_QUEUE_DISK_BDI
ZFS_AC_KERNEL_BLK_QUEUE_UPDATE_READAHEAD
ZFS_AC_KERNEL_BLK_QUEUE_DISCARD
ZFS_AC_KERNEL_BLK_QUEUE_SECURE_ERASE

View File

@ -35,45 +35,6 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_BLKDEV_GET_BY_PATH_4ARG], [
])
])
dnl #
dnl # 6.8.x API change
dnl # bdev_open_by_path() replaces blkdev_get_by_path()
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_BLKDEV_BDEV_OPEN_BY_PATH], [
ZFS_LINUX_TEST_SRC([bdev_open_by_path], [
#include <linux/fs.h>
#include <linux/blkdev.h>
], [
struct bdev_handle *bdh __attribute__ ((unused)) = NULL;
const char *path = "path";
fmode_t mode = 0;
void *holder = NULL;
struct blk_holder_ops h;
bdh = bdev_open_by_path(path, mode, holder, &h);
])
])
dnl #
dnl # 6.9.x API change
dnl # bdev_file_open_by_path() replaced bdev_open_by_path(),
dnl # and returns struct file*
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_BDEV_FILE_OPEN_BY_PATH], [
ZFS_LINUX_TEST_SRC([bdev_file_open_by_path], [
#include <linux/fs.h>
#include <linux/blkdev.h>
], [
struct file *file __attribute__ ((unused)) = NULL;
const char *path = "path";
fmode_t mode = 0;
void *holder = NULL;
struct blk_holder_ops h;
file = bdev_file_open_by_path(path, mode, holder, &h);
])
])
AC_DEFUN([ZFS_AC_KERNEL_BLKDEV_GET_BY_PATH], [
AC_MSG_CHECKING([whether blkdev_get_by_path() exists and takes 3 args])
ZFS_LINUX_TEST_RESULT([blkdev_get_by_path], [
@ -86,24 +47,7 @@ AC_DEFUN([ZFS_AC_KERNEL_BLKDEV_GET_BY_PATH], [
[blkdev_get_by_path() exists and takes 4 args])
AC_MSG_RESULT(yes)
], [
AC_MSG_RESULT(no)
AC_MSG_CHECKING([whether bdev_open_by_path() exists])
ZFS_LINUX_TEST_RESULT([bdev_open_by_path], [
AC_DEFINE(HAVE_BDEV_OPEN_BY_PATH, 1,
[bdev_open_by_path() exists])
AC_MSG_RESULT(yes)
], [
AC_MSG_RESULT(no)
AC_MSG_CHECKING([whether bdev_file_open_by_path() exists])
ZFS_LINUX_TEST_RESULT([bdev_file_open_by_path], [
AC_DEFINE(HAVE_BDEV_FILE_OPEN_BY_PATH, 1,
[bdev_file_open_by_path() exists])
AC_MSG_RESULT(yes)
], [
AC_MSG_RESULT(no)
ZFS_LINUX_TEST_ERROR([blkdev_get_by_path()])
])
])
ZFS_LINUX_TEST_ERROR([blkdev_get_by_path()])
])
])
])
@ -164,50 +108,18 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_BLKDEV_PUT_HOLDER], [
])
])
dnl #
dnl # 6.8.x API change
dnl # bdev_release() replaces blkdev_put()
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_BLKDEV_BDEV_RELEASE], [
ZFS_LINUX_TEST_SRC([bdev_release], [
#include <linux/fs.h>
#include <linux/blkdev.h>
], [
struct bdev_handle *bdh = NULL;
bdev_release(bdh);
])
])
dnl #
dnl # 6.9.x API change
dnl #
dnl # bdev_release() now private, but because bdev_file_open_by_path() returns
dnl # struct file*, we can just use fput(). So the blkdev_put test no longer
dnl # fails if not found.
dnl #
AC_DEFUN([ZFS_AC_KERNEL_BLKDEV_PUT], [
AC_MSG_CHECKING([whether blkdev_put() exists])
ZFS_LINUX_TEST_RESULT([blkdev_put], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BLKDEV_PUT, 1, [blkdev_put() exists])
], [
AC_MSG_RESULT(no)
AC_MSG_CHECKING([whether blkdev_put() accepts void* as arg 2])
ZFS_LINUX_TEST_RESULT([blkdev_put_holder], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BLKDEV_PUT_HOLDER, 1,
[blkdev_put() accepts void* as arg 2])
], [
AC_MSG_RESULT(no)
AC_MSG_CHECKING([whether bdev_release() exists])
ZFS_LINUX_TEST_RESULT([bdev_release], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BDEV_RELEASE, 1,
[bdev_release() exists])
], [
AC_MSG_RESULT(no)
])
ZFS_LINUX_TEST_ERROR([blkdev_put()])
])
])
])
@ -534,30 +446,6 @@ AC_DEFUN([ZFS_AC_KERNEL_BLKDEV_BDEV_WHOLE], [
])
])
dnl #
dnl # 5.16 API change
dnl # Added bdev_nr_bytes() helper.
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_BLKDEV_BDEV_NR_BYTES], [
ZFS_LINUX_TEST_SRC([bdev_nr_bytes], [
#include <linux/blkdev.h>
],[
struct block_device *bdev = NULL;
loff_t nr_bytes __attribute__ ((unused)) = 0;
nr_bytes = bdev_nr_bytes(bdev);
])
])
AC_DEFUN([ZFS_AC_KERNEL_BLKDEV_BDEV_NR_BYTES], [
AC_MSG_CHECKING([whether bdev_nr_bytes() is available])
ZFS_LINUX_TEST_RESULT([bdev_nr_bytes], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BDEV_NR_BYTES, 1, [bdev_nr_bytes() is available])
],[
AC_MSG_RESULT(no)
])
])
dnl #
dnl # 5.20 API change,
dnl # Removed bdevname(), snprintf(.., %pg) should be used.
@ -585,29 +473,11 @@ AC_DEFUN([ZFS_AC_KERNEL_BLKDEV_BDEVNAME], [
])
dnl #
dnl # TRIM support: discard and secure erase. We make use of asynchronous
dnl # functions when available.
dnl # 5.19 API: blkdev_issue_secure_erase()
dnl # 3.10 API: blkdev_issue_discard(..., BLKDEV_DISCARD_SECURE)
dnl #
dnl # 3.10:
dnl # sync discard: blkdev_issue_discard(..., 0)
dnl # sync erase: blkdev_issue_discard(..., BLKDEV_DISCARD_SECURE)
dnl # async discard: [not available]
dnl # async erase: [not available]
dnl #
dnl # 4.7:
dnl # sync discard: blkdev_issue_discard(..., 0)
dnl # sync erase: blkdev_issue_discard(..., BLKDEV_DISCARD_SECURE)
dnl # async discard: __blkdev_issue_discard(..., 0)
dnl # async erase: __blkdev_issue_discard(..., BLKDEV_DISCARD_SECURE)
dnl #
dnl # 5.19:
dnl # sync discard: blkdev_issue_discard(...)
dnl # sync erase: blkdev_issue_secure_erase(...)
dnl # async discard: __blkdev_issue_discard(...)
dnl # async erase: [not available]
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_BLKDEV_ISSUE_DISCARD], [
ZFS_LINUX_TEST_SRC([blkdev_issue_discard_noflags], [
AC_DEFUN([ZFS_AC_KERNEL_SRC_BLKDEV_ISSUE_SECURE_ERASE], [
ZFS_LINUX_TEST_SRC([blkdev_issue_secure_erase], [
#include <linux/blkdev.h>
],[
struct block_device *bdev = NULL;
@ -615,9 +485,10 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_BLKDEV_ISSUE_DISCARD], [
sector_t nr_sects = 0;
int error __attribute__ ((unused));
error = blkdev_issue_discard(bdev,
error = blkdev_issue_secure_erase(bdev,
sector, nr_sects, GFP_KERNEL);
])
ZFS_LINUX_TEST_SRC([blkdev_issue_discard_flags], [
#include <linux/blkdev.h>
],[
@ -630,77 +501,9 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_BLKDEV_ISSUE_DISCARD], [
error = blkdev_issue_discard(bdev,
sector, nr_sects, GFP_KERNEL, flags);
])
ZFS_LINUX_TEST_SRC([blkdev_issue_discard_async_noflags], [
#include <linux/blkdev.h>
],[
struct block_device *bdev = NULL;
sector_t sector = 0;
sector_t nr_sects = 0;
struct bio *biop = NULL;
int error __attribute__ ((unused));
error = __blkdev_issue_discard(bdev,
sector, nr_sects, GFP_KERNEL, &biop);
])
ZFS_LINUX_TEST_SRC([blkdev_issue_discard_async_flags], [
#include <linux/blkdev.h>
],[
struct block_device *bdev = NULL;
sector_t sector = 0;
sector_t nr_sects = 0;
unsigned long flags = 0;
struct bio *biop = NULL;
int error __attribute__ ((unused));
error = __blkdev_issue_discard(bdev,
sector, nr_sects, GFP_KERNEL, flags, &biop);
])
ZFS_LINUX_TEST_SRC([blkdev_issue_secure_erase], [
#include <linux/blkdev.h>
],[
struct block_device *bdev = NULL;
sector_t sector = 0;
sector_t nr_sects = 0;
int error __attribute__ ((unused));
error = blkdev_issue_secure_erase(bdev,
sector, nr_sects, GFP_KERNEL);
])
])
AC_DEFUN([ZFS_AC_KERNEL_BLKDEV_ISSUE_DISCARD], [
AC_MSG_CHECKING([whether blkdev_issue_discard() is available])
ZFS_LINUX_TEST_RESULT([blkdev_issue_discard_noflags], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BLKDEV_ISSUE_DISCARD_NOFLAGS, 1,
[blkdev_issue_discard() is available])
],[
AC_MSG_RESULT(no)
])
AC_MSG_CHECKING([whether blkdev_issue_discard(flags) is available])
ZFS_LINUX_TEST_RESULT([blkdev_issue_discard_flags], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BLKDEV_ISSUE_DISCARD_FLAGS, 1,
[blkdev_issue_discard(flags) is available])
],[
AC_MSG_RESULT(no)
])
AC_MSG_CHECKING([whether __blkdev_issue_discard() is available])
ZFS_LINUX_TEST_RESULT([blkdev_issue_discard_async_noflags], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BLKDEV_ISSUE_DISCARD_ASYNC_NOFLAGS, 1,
[__blkdev_issue_discard() is available])
],[
AC_MSG_RESULT(no)
])
AC_MSG_CHECKING([whether __blkdev_issue_discard(flags) is available])
ZFS_LINUX_TEST_RESULT([blkdev_issue_discard_async_flags], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BLKDEV_ISSUE_DISCARD_ASYNC_FLAGS, 1,
[__blkdev_issue_discard(flags) is available])
],[
AC_MSG_RESULT(no)
])
AC_DEFUN([ZFS_AC_KERNEL_BLKDEV_ISSUE_SECURE_ERASE], [
AC_MSG_CHECKING([whether blkdev_issue_secure_erase() is available])
ZFS_LINUX_TEST_RESULT([blkdev_issue_secure_erase], [
AC_MSG_RESULT(yes)
@ -708,6 +511,15 @@ AC_DEFUN([ZFS_AC_KERNEL_BLKDEV_ISSUE_DISCARD], [
[blkdev_issue_secure_erase() is available])
],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING([whether blkdev_issue_discard() is available])
ZFS_LINUX_TEST_RESULT([blkdev_issue_discard_flags], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_BLKDEV_ISSUE_DISCARD, 1,
[blkdev_issue_discard() is available])
],[
ZFS_LINUX_TEST_ERROR([blkdev_issue_discard()])
])
])
])
@ -758,11 +570,8 @@ AC_DEFUN([ZFS_AC_KERNEL_BLKDEV_BLK_STS_RESV_CONFLICT], [
AC_DEFUN([ZFS_AC_KERNEL_SRC_BLKDEV], [
ZFS_AC_KERNEL_SRC_BLKDEV_GET_BY_PATH
ZFS_AC_KERNEL_SRC_BLKDEV_GET_BY_PATH_4ARG
ZFS_AC_KERNEL_SRC_BLKDEV_BDEV_OPEN_BY_PATH
ZFS_AC_KERNEL_SRC_BDEV_FILE_OPEN_BY_PATH
ZFS_AC_KERNEL_SRC_BLKDEV_PUT
ZFS_AC_KERNEL_SRC_BLKDEV_PUT_HOLDER
ZFS_AC_KERNEL_SRC_BLKDEV_BDEV_RELEASE
ZFS_AC_KERNEL_SRC_BLKDEV_REREAD_PART
ZFS_AC_KERNEL_SRC_BLKDEV_INVALIDATE_BDEV
ZFS_AC_KERNEL_SRC_BLKDEV_LOOKUP_BDEV
@ -771,9 +580,8 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_BLKDEV], [
ZFS_AC_KERNEL_SRC_BLKDEV_CHECK_DISK_CHANGE
ZFS_AC_KERNEL_SRC_BLKDEV_BDEV_CHECK_MEDIA_CHANGE
ZFS_AC_KERNEL_SRC_BLKDEV_BDEV_WHOLE
ZFS_AC_KERNEL_SRC_BLKDEV_BDEV_NR_BYTES
ZFS_AC_KERNEL_SRC_BLKDEV_BDEVNAME
ZFS_AC_KERNEL_SRC_BLKDEV_ISSUE_DISCARD
ZFS_AC_KERNEL_SRC_BLKDEV_ISSUE_SECURE_ERASE
ZFS_AC_KERNEL_SRC_BLKDEV_BDEV_KOBJ
ZFS_AC_KERNEL_SRC_BLKDEV_PART_TO_DEV
ZFS_AC_KERNEL_SRC_BLKDEV_DISK_CHECK_MEDIA_CHANGE
@ -792,10 +600,9 @@ AC_DEFUN([ZFS_AC_KERNEL_BLKDEV], [
ZFS_AC_KERNEL_BLKDEV_CHECK_DISK_CHANGE
ZFS_AC_KERNEL_BLKDEV_BDEV_CHECK_MEDIA_CHANGE
ZFS_AC_KERNEL_BLKDEV_BDEV_WHOLE
ZFS_AC_KERNEL_BLKDEV_BDEV_NR_BYTES
ZFS_AC_KERNEL_BLKDEV_BDEVNAME
ZFS_AC_KERNEL_BLKDEV_GET_ERESTARTSYS
ZFS_AC_KERNEL_BLKDEV_ISSUE_DISCARD
ZFS_AC_KERNEL_BLKDEV_ISSUE_SECURE_ERASE
ZFS_AC_KERNEL_BLKDEV_BDEV_KOBJ
ZFS_AC_KERNEL_BLKDEV_PART_TO_DEV
ZFS_AC_KERNEL_BLKDEV_DISK_CHECK_MEDIA_CHANGE

View File

@ -5,7 +5,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_BLOCK_DEVICE_OPERATIONS_CHECK_EVENTS], [
ZFS_LINUX_TEST_SRC([block_device_operations_check_events], [
#include <linux/blkdev.h>
static unsigned int blk_check_events(struct gendisk *disk,
unsigned int blk_check_events(struct gendisk *disk,
unsigned int clearing) {
(void) disk, (void) clearing;
return (0);
@ -34,7 +34,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_BLOCK_DEVICE_OPERATIONS_RELEASE_VOID], [
ZFS_LINUX_TEST_SRC([block_device_operations_release_void], [
#include <linux/blkdev.h>
static void blk_release(struct gendisk *g, fmode_t mode) {
void blk_release(struct gendisk *g, fmode_t mode) {
(void) g, (void) mode;
return;
}
@ -56,7 +56,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_BLOCK_DEVICE_OPERATIONS_RELEASE_1ARG], [
ZFS_LINUX_TEST_SRC([block_device_operations_release_void_1arg], [
#include <linux/blkdev.h>
static void blk_release(struct gendisk *g) {
void blk_release(struct gendisk *g) {
(void) g;
return;
}
@ -96,7 +96,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_BLOCK_DEVICE_OPERATIONS_REVALIDATE_DISK], [
ZFS_LINUX_TEST_SRC([block_device_operations_revalidate_disk], [
#include <linux/blkdev.h>
static int blk_revalidate_disk(struct gendisk *disk) {
int blk_revalidate_disk(struct gendisk *disk) {
(void) disk;
return(0);
}

View File

@ -7,7 +7,7 @@ dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_COMMIT_METADATA], [
ZFS_LINUX_TEST_SRC([export_operations_commit_metadata], [
#include <linux/exportfs.h>
static int commit_metadata(struct inode *inode) { return 0; }
int commit_metadata(struct inode *inode) { return 0; }
static struct export_operations eops __attribute__ ((unused))={
.commit_metadata = commit_metadata,
};

View File

@ -2,15 +2,12 @@ dnl #
dnl # 4.9, current_time() added
dnl # 4.18, return type changed from timespec to timespec64
dnl #
dnl # Note that we don't care about the return type in this check. If we have
dnl # to implement a fallback, we'll know we're <4.9, which was timespec.
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_CURRENT_TIME], [
ZFS_LINUX_TEST_SRC([current_time], [
#include <linux/fs.h>
], [
struct inode ip __attribute__ ((unused));
(void) current_time(&ip);
ip.i_atime = current_time(&ip);
])
])

View File

@ -98,7 +98,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_D_REVALIDATE_NAMEIDATA], [
#include <linux/dcache.h>
#include <linux/sched.h>
static int revalidate (struct dentry *dentry,
int revalidate (struct dentry *dentry,
struct nameidata *nidata) { return 0; }
static const struct dentry_operations

View File

@ -8,7 +8,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_DIRTY_INODE], [
ZFS_LINUX_TEST_SRC([dirty_inode_with_flags], [
#include <linux/fs.h>
static void dirty_inode(struct inode *a, int b) { return; }
void dirty_inode(struct inode *a, int b) { return; }
static const struct super_operations
sops __attribute__ ((unused)) = {

View File

@ -7,7 +7,7 @@ dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_ENCODE_FH_WITH_INODE], [
ZFS_LINUX_TEST_SRC([export_operations_encode_fh], [
#include <linux/exportfs.h>
static int encode_fh(struct inode *inode, __u32 *fh, int *max_len,
int encode_fh(struct inode *inode, __u32 *fh, int *max_len,
struct inode *parent) { return 0; }
static struct export_operations eops __attribute__ ((unused))={
.encode_fh = encode_fh,

View File

@ -6,7 +6,7 @@ dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_EVICT_INODE], [
ZFS_LINUX_TEST_SRC([evict_inode], [
#include <linux/fs.h>
static void evict_inode (struct inode * t) { return; }
void evict_inode (struct inode * t) { return; }
static struct super_operations sops __attribute__ ((unused)) = {
.evict_inode = evict_inode,
};

View File

@ -11,7 +11,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_FALLOCATE], [
ZFS_LINUX_TEST_SRC([file_fallocate], [
#include <linux/fs.h>
static long test_fallocate(struct file *file, int mode,
long test_fallocate(struct file *file, int mode,
loff_t offset, loff_t len) { return 0; }
static const struct file_operations

View File

@ -4,7 +4,6 @@ dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_FILEMAP], [
ZFS_LINUX_TEST_SRC([filemap_range_has_page], [
#include <linux/fs.h>
#include <linux/pagemap.h>
],[
struct address_space *mapping = NULL;
loff_t lstart = 0;

View File

@ -1,8 +1,7 @@
dnl #
dnl # Starting from Linux 5.13, flush_dcache_page() becomes an inline
dnl # function and may indirectly reference GPL-only symbols:
dnl # on powerpc: cpu_feature_keys
dnl # on riscv: PageHuge (added from 6.2)
dnl # function and may indirectly reference GPL-only cpu_feature_keys on
dnl # powerpc
dnl #
dnl #

View File

@ -79,12 +79,6 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_FPU], [
__kernel_fpu_end();
], [], [ZFS_META_LICENSE])
ZFS_LINUX_TEST_SRC([kernel_neon], [
#include <asm/neon.h>
], [
kernel_neon_begin();
kernel_neon_end();
], [], [ZFS_META_LICENSE])
])
AC_DEFUN([ZFS_AC_KERNEL_FPU], [
@ -111,20 +105,9 @@ AC_DEFUN([ZFS_AC_KERNEL_FPU], [
AC_DEFINE(KERNEL_EXPORTS_X86_FPU, 1,
[kernel exports FPU functions])
],[
dnl #
dnl # ARM neon symbols (only on arm and arm64)
dnl # could be GPL-only on arm64 after Linux 6.2
dnl #
ZFS_LINUX_TEST_RESULT([kernel_neon_license],[
AC_MSG_RESULT(kernel_neon_*)
AC_DEFINE(HAVE_KERNEL_NEON, 1,
[kernel has kernel_neon_* functions])
],[
# catch-all
AC_MSG_RESULT(internal)
AC_DEFINE(HAVE_KERNEL_FPU_INTERNAL, 1,
[kernel fpu internal])
])
AC_MSG_RESULT(internal)
AC_DEFINE(HAVE_KERNEL_FPU_INTERNAL, 1,
[kernel fpu internal])
])
])
])

View File

@ -1,36 +0,0 @@
dnl #
dnl # 6.6 API change,
dnl # fsync_bdev was removed in favor of sync_blockdev
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_SYNC_BDEV], [
ZFS_LINUX_TEST_SRC([fsync_bdev], [
#include <linux/blkdev.h>
],[
fsync_bdev(NULL);
])
ZFS_LINUX_TEST_SRC([sync_blockdev], [
#include <linux/blkdev.h>
],[
sync_blockdev(NULL);
])
])
AC_DEFUN([ZFS_AC_KERNEL_SYNC_BDEV], [
AC_MSG_CHECKING([whether fsync_bdev() exists])
ZFS_LINUX_TEST_RESULT([fsync_bdev], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_FSYNC_BDEV, 1,
[fsync_bdev() is declared in include/blkdev.h])
],[
AC_MSG_CHECKING([whether sync_blockdev() exists])
ZFS_LINUX_TEST_RESULT([sync_blockdev], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_SYNC_BLOCKDEV, 1,
[sync_blockdev() is declared in include/blkdev.h])
],[
ZFS_LINUX_TEST_ERROR(
[neither fsync_bdev() nor sync_blockdev() exist])
])
])
])
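
A minimal sketch (not ZFS code) of bridging the two APIs probed above; HAVE_FSYNC_BDEV and HAVE_SYNC_BLOCKDEV are the AC_DEFINEs from this check, and bdev_flush() is a hypothetical wrapper.

#include <linux/blkdev.h>

static inline void bdev_flush(struct block_device *bdev)
{
#if defined(HAVE_SYNC_BLOCKDEV)
    sync_blockdev(bdev);    /* 6.6+: fsync_bdev() was removed in its favor */
#elif defined(HAVE_FSYNC_BDEV)
    fsync_bdev(bdev);       /* older kernels */
#else
#error "neither fsync_bdev() nor sync_blockdev() detected"
#endif
}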

@@ -5,7 +5,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_FSYNC], [
ZFS_LINUX_TEST_SRC([fsync_without_dentry], [
#include <linux/fs.h>
static int test_fsync(struct file *f, int x) { return 0; }
int test_fsync(struct file *f, int x) { return 0; }
static const struct file_operations
fops __attribute__ ((unused)) = {
@@ -16,7 +16,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_FSYNC], [
ZFS_LINUX_TEST_SRC([fsync_range], [
#include <linux/fs.h>
static int test_fsync(struct file *f, loff_t a, loff_t b, int c)
int test_fsync(struct file *f, loff_t a, loff_t b, int c)
{ return 0; }
static const struct file_operations

@@ -7,10 +7,6 @@ dnl #
dnl # 6.3 API
dnl # generic_fillattr() now takes struct mnt_idmap* as the first argument
dnl #
dnl # 6.6 API
dnl # generic_fillattr() now takes u32 as second argument, representing a
dnl # request_mask for statx
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_GENERIC_FILLATTR], [
ZFS_LINUX_TEST_SRC([generic_fillattr_userns], [
#include <linux/fs.h>
@@ -29,39 +25,22 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_GENERIC_FILLATTR], [
struct kstat *k = NULL;
generic_fillattr(idmap, in, k);
])
ZFS_LINUX_TEST_SRC([generic_fillattr_mnt_idmap_reqmask], [
#include <linux/fs.h>
],[
struct mnt_idmap *idmap = NULL;
struct inode *in = NULL;
struct kstat *k = NULL;
generic_fillattr(idmap, 0, in, k);
])
])
AC_DEFUN([ZFS_AC_KERNEL_GENERIC_FILLATTR], [
AC_MSG_CHECKING(
[whether generic_fillattr requires struct mnt_idmap* and request_mask])
ZFS_LINUX_TEST_RESULT([generic_fillattr_mnt_idmap_reqmask], [
AC_MSG_CHECKING([whether generic_fillattr requires struct mnt_idmap*])
ZFS_LINUX_TEST_RESULT([generic_fillattr_mnt_idmap], [
AC_MSG_RESULT([yes])
AC_DEFINE(HAVE_GENERIC_FILLATTR_IDMAP_REQMASK, 1,
[generic_fillattr requires struct mnt_idmap* and u32 request_mask])
AC_DEFINE(HAVE_GENERIC_FILLATTR_IDMAP, 1,
[generic_fillattr requires struct mnt_idmap*])
],[
AC_MSG_CHECKING([whether generic_fillattr requires struct mnt_idmap*])
ZFS_LINUX_TEST_RESULT([generic_fillattr_mnt_idmap], [
AC_MSG_CHECKING([whether generic_fillattr requires struct user_namespace*])
ZFS_LINUX_TEST_RESULT([generic_fillattr_userns], [
AC_MSG_RESULT([yes])
AC_DEFINE(HAVE_GENERIC_FILLATTR_IDMAP, 1,
[generic_fillattr requires struct mnt_idmap*])
AC_DEFINE(HAVE_GENERIC_FILLATTR_USERNS, 1,
[generic_fillattr requires struct user_namespace*])
],[
AC_MSG_CHECKING([whether generic_fillattr requires struct user_namespace*])
ZFS_LINUX_TEST_RESULT([generic_fillattr_userns], [
AC_MSG_RESULT([yes])
AC_DEFINE(HAVE_GENERIC_FILLATTR_USERNS, 1,
[generic_fillattr requires struct user_namespace*])
],[
AC_MSG_RESULT([no])
])
AC_MSG_RESULT([no])
])
])
])
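
A minimal sketch (not ZFS code) of a getattr path selecting the right generic_fillattr() call; the HAVE_* macros are the AC_DEFINEs above, fill_stat() is a hypothetical helper, and nop_mnt_idmap/init_user_ns stand in for whatever idmap or namespace the caller was handed.

#include <linux/fs.h>

static void fill_stat(struct inode *ip, struct kstat *stat, u32 request_mask)
{
#if defined(HAVE_GENERIC_FILLATTR_IDMAP_REQMASK)
    generic_fillattr(&nop_mnt_idmap, request_mask, ip, stat);  /* 6.6+ */
#elif defined(HAVE_GENERIC_FILLATTR_IDMAP)
    generic_fillattr(&nop_mnt_idmap, ip, stat);                /* 6.3 - 6.5 */
#elif defined(HAVE_GENERIC_FILLATTR_USERNS)
    generic_fillattr(&init_user_ns, ip, stat);                 /* 5.12 - 6.2 */
#else
    generic_fillattr(ip, stat);                                /* older kernels */
#endif
}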

@@ -5,7 +5,7 @@ dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_GET_LINK], [
ZFS_LINUX_TEST_SRC([inode_operations_get_link], [
#include <linux/fs.h>
static const char *get_link(struct dentry *de, struct inode *ip,
const char *get_link(struct dentry *de, struct inode *ip,
struct delayed_call *done) { return "symlink"; }
static struct inode_operations
iops __attribute__ ((unused)) = {
@@ -15,7 +15,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_GET_LINK], [
ZFS_LINUX_TEST_SRC([inode_operations_get_link_cookie], [
#include <linux/fs.h>
static const char *get_link(struct dentry *de, struct
const char *get_link(struct dentry *de, struct
inode *ip, void **cookie) { return "symlink"; }
static struct inode_operations
iops __attribute__ ((unused)) = {
@@ -25,7 +25,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_GET_LINK], [
ZFS_LINUX_TEST_SRC([inode_operations_follow_link], [
#include <linux/fs.h>
static const char *follow_link(struct dentry *de,
const char *follow_link(struct dentry *de,
void **cookie) { return "symlink"; }
static struct inode_operations
iops __attribute__ ((unused)) = {
@@ -35,7 +35,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_GET_LINK], [
ZFS_LINUX_TEST_SRC([inode_operations_follow_link_nameidata], [
#include <linux/fs.h>
static void *follow_link(struct dentry *de, struct
void *follow_link(struct dentry *de, struct
nameidata *nd) { return (void *)NULL; }
static struct inode_operations
iops __attribute__ ((unused)) = {

@@ -23,28 +23,3 @@ AC_DEFUN([ZFS_AC_KERNEL_IDMAP_MNT_API], [
])
])
dnl #
dnl # 6.8 decouples mnt_idmap from user_namespace. This is all internal
dnl # to mnt_idmap so we can't detect it directly, but we detect a related
dnl # change and use that as a signal.
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_IDMAP_NO_USERNS], [
ZFS_LINUX_TEST_SRC([idmap_no_userns], [
#include <linux/uidgid.h>
], [
struct uid_gid_map *map = NULL;
map_id_down(map, 0);
])
])
AC_DEFUN([ZFS_AC_KERNEL_IDMAP_NO_USERNS], [
AC_MSG_CHECKING([whether idmapped mounts have a user namespace])
ZFS_LINUX_TEST_RESULT([idmap_no_userns], [
AC_MSG_RESULT([yes])
AC_DEFINE(HAVE_IDMAP_NO_USERNS, 1,
[mnt_idmap does not have user_namespace])
], [
AC_MSG_RESULT([no])
])
])

@@ -7,7 +7,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_CREATE], [
#include <linux/fs.h>
#include <linux/sched.h>
static int inode_create(struct mnt_idmap *idmap,
int inode_create(struct mnt_idmap *idmap,
struct inode *inode ,struct dentry *dentry,
umode_t umode, bool flag) { return 0; }
@@ -25,7 +25,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_CREATE], [
#include <linux/fs.h>
#include <linux/sched.h>
static int inode_create(struct user_namespace *userns,
int inode_create(struct user_namespace *userns,
struct inode *inode ,struct dentry *dentry,
umode_t umode, bool flag) { return 0; }
@@ -42,7 +42,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_CREATE], [
#include <linux/fs.h>
#include <linux/sched.h>
static int inode_create(struct inode *inode ,struct dentry *dentry,
int inode_create(struct inode *inode ,struct dentry *dentry,
umode_t umode, bool flag) { return 0; }
static const struct inode_operations

@@ -7,7 +7,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_GETATTR], [
ZFS_LINUX_TEST_SRC([inode_operations_getattr_mnt_idmap], [
#include <linux/fs.h>
static int test_getattr(
int test_getattr(
struct mnt_idmap *idmap,
const struct path *p, struct kstat *k,
u32 request_mask, unsigned int query_flags)
@@ -28,7 +28,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_GETATTR], [
ZFS_LINUX_TEST_SRC([inode_operations_getattr_userns], [
#include <linux/fs.h>
static int test_getattr(
int test_getattr(
struct user_namespace *userns,
const struct path *p, struct kstat *k,
u32 request_mask, unsigned int query_flags)
@@ -47,7 +47,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_GETATTR], [
ZFS_LINUX_TEST_SRC([inode_operations_getattr_path], [
#include <linux/fs.h>
static int test_getattr(
int test_getattr(
const struct path *p, struct kstat *k,
u32 request_mask, unsigned int query_flags)
{ return 0; }
@@ -61,7 +61,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_GETATTR], [
ZFS_LINUX_TEST_SRC([inode_operations_getattr_vfsmount], [
#include <linux/fs.h>
static int test_getattr(
int test_getattr(
struct vfsmount *mnt, struct dentry *d,
struct kstat *k)
{ return 0; }

@@ -6,7 +6,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_LOOKUP_FLAGS], [
#include <linux/fs.h>
#include <linux/sched.h>
static struct dentry *inode_lookup(struct inode *inode,
struct dentry *inode_lookup(struct inode *inode,
struct dentry *dentry, unsigned int flags) { return NULL; }
static const struct inode_operations iops

@@ -8,12 +8,12 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_PERMISSION], [
#include <linux/fs.h>
#include <linux/sched.h>
static int test_permission(struct mnt_idmap *idmap,
int inode_permission(struct mnt_idmap *idmap,
struct inode *inode, int mask) { return 0; }
static const struct inode_operations
iops __attribute__ ((unused)) = {
.permission = test_permission,
.permission = inode_permission,
};
],[])
@@ -25,12 +25,12 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_PERMISSION], [
#include <linux/fs.h>
#include <linux/sched.h>
static int test_permission(struct user_namespace *userns,
int inode_permission(struct user_namespace *userns,
struct inode *inode, int mask) { return 0; }
static const struct inode_operations
iops __attribute__ ((unused)) = {
.permission = test_permission,
.permission = inode_permission,
};
],[])
])

@@ -7,7 +7,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_SETATTR], [
ZFS_LINUX_TEST_SRC([inode_operations_setattr_mnt_idmap], [
#include <linux/fs.h>
static int test_setattr(
int test_setattr(
struct mnt_idmap *idmap,
struct dentry *de, struct iattr *ia)
{ return 0; }
@@ -27,7 +27,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_SETATTR], [
ZFS_LINUX_TEST_SRC([inode_operations_setattr_userns], [
#include <linux/fs.h>
static int test_setattr(
int test_setattr(
struct user_namespace *userns,
struct dentry *de, struct iattr *ia)
{ return 0; }
@@ -41,7 +41,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_SETATTR], [
ZFS_LINUX_TEST_SRC([inode_operations_setattr], [
#include <linux/fs.h>
static int test_setattr(
int test_setattr(
struct dentry *de, struct iattr *ia)
{ return 0; }

@@ -27,73 +27,6 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_INODE_TIMES], [
memset(&ip, 0, sizeof(ip));
ts = ip.i_mtime;
])
dnl #
dnl # 6.6 API change
dnl # i_ctime no longer directly accessible, must use
dnl # inode_get_ctime(ip), inode_set_ctime*(ip) to
dnl # read/write.
dnl #
ZFS_LINUX_TEST_SRC([inode_get_ctime], [
#include <linux/fs.h>
],[
struct inode ip;
memset(&ip, 0, sizeof(ip));
inode_get_ctime(&ip);
])
ZFS_LINUX_TEST_SRC([inode_set_ctime_to_ts], [
#include <linux/fs.h>
],[
struct inode ip;
struct timespec64 ts = {0};
memset(&ip, 0, sizeof(ip));
inode_set_ctime_to_ts(&ip, ts);
])
dnl #
dnl # 6.7 API change
dnl # i_atime/i_mtime no longer directly accessible, must use
dnl # inode_get_mtime(ip), inode_set_mtime*(ip) to
dnl # read/write.
dnl #
ZFS_LINUX_TEST_SRC([inode_get_atime], [
#include <linux/fs.h>
],[
struct inode ip;
memset(&ip, 0, sizeof(ip));
inode_get_atime(&ip);
])
ZFS_LINUX_TEST_SRC([inode_get_mtime], [
#include <linux/fs.h>
],[
struct inode ip;
memset(&ip, 0, sizeof(ip));
inode_get_mtime(&ip);
])
ZFS_LINUX_TEST_SRC([inode_set_atime_to_ts], [
#include <linux/fs.h>
],[
struct inode ip;
struct timespec64 ts = {0};
memset(&ip, 0, sizeof(ip));
inode_set_atime_to_ts(&ip, ts);
])
ZFS_LINUX_TEST_SRC([inode_set_mtime_to_ts], [
#include <linux/fs.h>
],[
struct inode ip;
struct timespec64 ts = {0};
memset(&ip, 0, sizeof(ip));
inode_set_mtime_to_ts(&ip, ts);
])
])
AC_DEFUN([ZFS_AC_KERNEL_INODE_TIMES], [
@@ -114,58 +47,4 @@ AC_DEFUN([ZFS_AC_KERNEL_INODE_TIMES], [
AC_DEFINE(HAVE_INODE_TIMESPEC64_TIMES, 1,
[inode->i_*time's are timespec64])
])
AC_MSG_CHECKING([whether inode_get_ctime() exists])
ZFS_LINUX_TEST_RESULT([inode_get_ctime], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_INODE_GET_CTIME, 1,
[inode_get_ctime() exists in linux/fs.h])
],[
AC_MSG_RESULT(no)
])
AC_MSG_CHECKING([whether inode_set_ctime_to_ts() exists])
ZFS_LINUX_TEST_RESULT([inode_set_ctime_to_ts], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_INODE_SET_CTIME_TO_TS, 1,
[inode_set_ctime_to_ts() exists in linux/fs.h])
],[
AC_MSG_RESULT(no)
])
AC_MSG_CHECKING([whether inode_get_atime() exists])
ZFS_LINUX_TEST_RESULT([inode_get_atime], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_INODE_GET_ATIME, 1,
[inode_get_atime() exists in linux/fs.h])
],[
AC_MSG_RESULT(no)
])
AC_MSG_CHECKING([whether inode_set_atime_to_ts() exists])
ZFS_LINUX_TEST_RESULT([inode_set_atime_to_ts], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_INODE_SET_ATIME_TO_TS, 1,
[inode_set_atime_to_ts() exists in linux/fs.h])
],[
AC_MSG_RESULT(no)
])
AC_MSG_CHECKING([whether inode_get_mtime() exists])
ZFS_LINUX_TEST_RESULT([inode_get_mtime], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_INODE_GET_MTIME, 1,
[inode_get_mtime() exists in linux/fs.h])
],[
AC_MSG_RESULT(no)
])
AC_MSG_CHECKING([whether inode_set_mtime_to_ts() exists])
ZFS_LINUX_TEST_RESULT([inode_set_mtime_to_ts], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_INODE_SET_MTIME_TO_TS, 1,
[inode_set_mtime_to_ts() exists in linux/fs.h])
],[
AC_MSG_RESULT(no)
])
])
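
A minimal sketch (not ZFS code) of updating timestamps under the accessor changes probed above; the HAVE_* names are the AC_DEFINEs from this check and touch_times() is a hypothetical helper.

#include <linux/fs.h>

static void touch_times(struct inode *ip)
{
    struct timespec64 now = current_time(ip);

#if defined(HAVE_INODE_SET_CTIME_TO_TS)
    inode_set_ctime_to_ts(ip, now);     /* 6.6+: i_ctime is no longer directly accessible */
#else
    ip->i_ctime = now;
#endif
#if defined(HAVE_INODE_SET_MTIME_TO_TS)
    inode_set_mtime_to_ts(ip, now);     /* 6.7+: same for i_mtime and i_atime */
#else
    ip->i_mtime = now;
#endif
#if defined(HAVE_INODE_SET_ATIME_TO_TS)
    inode_set_atime_to_ts(ip, now);
#else
    ip->i_atime = now;
#endif
}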

@@ -1,23 +0,0 @@
dnl #
dnl # 5.11 API change
dnl # kmap_atomic() was deprecated in favor of kmap_local_page()
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_KMAP_LOCAL_PAGE], [
ZFS_LINUX_TEST_SRC([kmap_local_page], [
#include <linux/highmem.h>
],[
struct page page;
kmap_local_page(&page);
])
])
AC_DEFUN([ZFS_AC_KERNEL_KMAP_LOCAL_PAGE], [
AC_MSG_CHECKING([whether kmap_local_page exists])
ZFS_LINUX_TEST_RESULT([kmap_local_page], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_KMAP_LOCAL_PAGE, 1,
[kernel has kmap_local_page])
],[
AC_MSG_RESULT(no)
])
])
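
A minimal sketch (not ZFS code) of the mapping pattern behind this probe; HAVE_KMAP_LOCAL_PAGE is the AC_DEFINE above and copy_into_page() is a hypothetical helper.

#include <linux/highmem.h>
#include <linux/string.h>

static void copy_into_page(struct page *pp, const void *src, size_t len)
{
#if defined(HAVE_KMAP_LOCAL_PAGE)
    void *va = kmap_local_page(pp);     /* 5.11+ replacement for kmap_atomic() */

    memcpy(va, src, len);
    kunmap_local(va);
#else
    void *va = kmap_atomic(pp);         /* deprecated, but still present on older kernels */

    memcpy(va, src, len);
    kunmap_atomic(va);
#endif
}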

@@ -4,7 +4,7 @@ dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_MAKE_REQUEST_FN], [
ZFS_LINUX_TEST_SRC([make_request_fn_void], [
#include <linux/blkdev.h>
static void make_request(struct request_queue *q,
void make_request(struct request_queue *q,
struct bio *bio) { return; }
],[
blk_queue_make_request(NULL, &make_request);
@@ -12,7 +12,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_MAKE_REQUEST_FN], [
ZFS_LINUX_TEST_SRC([make_request_fn_blk_qc_t], [
#include <linux/blkdev.h>
static blk_qc_t make_request(struct request_queue *q,
blk_qc_t make_request(struct request_queue *q,
struct bio *bio) { return (BLK_QC_T_NONE); }
],[
blk_queue_make_request(NULL, &make_request);
@@ -20,7 +20,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_MAKE_REQUEST_FN], [
ZFS_LINUX_TEST_SRC([blk_alloc_queue_request_fn], [
#include <linux/blkdev.h>
static blk_qc_t make_request(struct request_queue *q,
blk_qc_t make_request(struct request_queue *q,
struct bio *bio) { return (BLK_QC_T_NONE); }
],[
struct request_queue *q __attribute__ ((unused));
@@ -29,7 +29,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_MAKE_REQUEST_FN], [
ZFS_LINUX_TEST_SRC([blk_alloc_queue_request_fn_rh], [
#include <linux/blkdev.h>
static blk_qc_t make_request(struct request_queue *q,
blk_qc_t make_request(struct request_queue *q,
struct bio *bio) { return (BLK_QC_T_NONE); }
],[
struct request_queue *q __attribute__ ((unused));
@@ -50,21 +50,6 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_MAKE_REQUEST_FN], [
disk = blk_alloc_disk(NUMA_NO_NODE);
])
ZFS_LINUX_TEST_SRC([blk_alloc_disk_2arg], [
#include <linux/blkdev.h>
],[
struct queue_limits *lim = NULL;
struct gendisk *disk __attribute__ ((unused));
disk = blk_alloc_disk(lim, NUMA_NO_NODE);
])
ZFS_LINUX_TEST_SRC([blkdev_queue_limits_features], [
#include <linux/blkdev.h>
],[
struct queue_limits *lim = NULL;
lim->features = 0;
])
ZFS_LINUX_TEST_SRC([blk_cleanup_disk], [
#include <linux/blkdev.h>
],[
@@ -111,45 +96,6 @@ AC_DEFUN([ZFS_AC_KERNEL_MAKE_REQUEST_FN], [
], [
AC_MSG_RESULT(no)
])
dnl #
dnl # Linux 6.9 API Change:
dnl # blk_alloc_queue() takes a nullable queue_limits arg.
dnl #
AC_MSG_CHECKING([whether blk_alloc_disk() exists and takes 2 args])
ZFS_LINUX_TEST_RESULT([blk_alloc_disk_2arg], [
AC_MSG_RESULT(yes)
AC_DEFINE([HAVE_BLK_ALLOC_DISK_2ARG], 1, [blk_alloc_disk() exists and takes 2 args])
dnl #
dnl # Linux 6.11 API change:
dnl # struct queue_limits gains a 'features' field,
dnl # used to set flushing options
dnl #
AC_MSG_CHECKING([whether struct queue_limits has a features field])
ZFS_LINUX_TEST_RESULT([blkdev_queue_limits_features], [
AC_MSG_RESULT(yes)
AC_DEFINE([HAVE_BLKDEV_QUEUE_LIMITS_FEATURES], 1,
[struct queue_limits has a features field])
], [
AC_MSG_RESULT(no)
])
dnl #
dnl # 5.20 API change,
dnl # Removed blk_cleanup_disk(), put_disk() should be used.
dnl #
AC_MSG_CHECKING([whether blk_cleanup_disk() exists])
ZFS_LINUX_TEST_RESULT([blk_cleanup_disk], [
AC_MSG_RESULT(yes)
AC_DEFINE([HAVE_BLK_CLEANUP_DISK], 1,
[blk_cleanup_disk() exists])
], [
AC_MSG_RESULT(no)
])
], [
AC_MSG_RESULT(no)
])
],[
AC_MSG_RESULT(no)
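
A minimal sketch (not ZFS code) of allocating and releasing a gendisk across the variants probed above; the HAVE_* macros are the AC_DEFINEs from this check, and passing NULL for the 6.9+ queue_limits argument relies on it being nullable, as the comment above notes.

#include <linux/blkdev.h>

static struct gendisk *disk_alloc(void)
{
#if defined(HAVE_BLK_ALLOC_DISK_2ARG)
    /*
     * 6.9+: NULL selects default queue limits; on 6.11+
     * (HAVE_BLKDEV_QUEUE_LIMITS_FEATURES) cache and flush options are
     * set through queue_limits.features rather than queue flags.
     */
    return blk_alloc_disk(NULL, NUMA_NO_NODE);
#else
    return blk_alloc_disk(NUMA_NO_NODE);    /* 5.14 - 6.8 */
#endif
}

static void disk_free(struct gendisk *disk)
{
#if defined(HAVE_BLK_CLEANUP_DISK)
    blk_cleanup_disk(disk);     /* removed in 5.20 in favor of put_disk() */
#else
    put_disk(disk);
#endif
}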

@@ -9,7 +9,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_MKDIR], [
ZFS_LINUX_TEST_SRC([mkdir_mnt_idmap], [
#include <linux/fs.h>
static int mkdir(struct mnt_idmap *idmap,
int mkdir(struct mnt_idmap *idmap,
struct inode *inode, struct dentry *dentry,
umode_t umode) { return 0; }
static const struct inode_operations
@@ -26,7 +26,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_MKDIR], [
ZFS_LINUX_TEST_SRC([mkdir_user_namespace], [
#include <linux/fs.h>
static int mkdir(struct user_namespace *userns,
int mkdir(struct user_namespace *userns,
struct inode *inode, struct dentry *dentry,
umode_t umode) { return 0; }
@@ -47,7 +47,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_MKDIR], [
ZFS_LINUX_TEST_SRC([inode_operations_mkdir], [
#include <linux/fs.h>
static int mkdir(struct inode *inode, struct dentry *dentry,
int mkdir(struct inode *inode, struct dentry *dentry,
umode_t umode) { return 0; }
static const struct inode_operations

@@ -7,7 +7,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_MKNOD], [
#include <linux/fs.h>
#include <linux/sched.h>
static int tmp_mknod(struct mnt_idmap *idmap,
int tmp_mknod(struct mnt_idmap *idmap,
struct inode *inode ,struct dentry *dentry,
umode_t u, dev_t d) { return 0; }
@@ -25,7 +25,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_MKNOD], [
#include <linux/fs.h>
#include <linux/sched.h>
static int tmp_mknod(struct user_namespace *userns,
int tmp_mknod(struct user_namespace *userns,
struct inode *inode ,struct dentry *dentry,
umode_t u, dev_t d) { return 0; }

@@ -1,36 +0,0 @@
AC_DEFUN([ZFS_AC_KERNEL_SRC_MM_PAGE_SIZE], [
ZFS_LINUX_TEST_SRC([page_size], [
#include <linux/mm.h>
],[
unsigned long s;
s = page_size(NULL);
])
])
AC_DEFUN([ZFS_AC_KERNEL_MM_PAGE_SIZE], [
AC_MSG_CHECKING([whether page_size() is available])
ZFS_LINUX_TEST_RESULT([page_size], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_MM_PAGE_SIZE, 1, [page_size() is available])
],[
AC_MSG_RESULT(no)
])
])
AC_DEFUN([ZFS_AC_KERNEL_SRC_MM_PAGE_MAPPING], [
ZFS_LINUX_TEST_SRC([page_mapping], [
#include <linux/pagemap.h>
],[
struct page *p = NULL;
struct address_space *m = page_mapping(NULL);
])
])
AC_DEFUN([ZFS_AC_KERNEL_MM_PAGE_MAPPING], [
AC_MSG_CHECKING([whether page_mapping() is available])
ZFS_LINUX_TEST_RESULT([page_mapping], [
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_MM_PAGE_MAPPING, 1, [page_mapping() is available])
],[
AC_MSG_RESULT(no)
])
])
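
A minimal sketch (not ZFS code) of wrappers over the two probes above; the HAVE_* macros are the AC_DEFINEs from this file, and the fallbacks shown (compound_order() and folio_mapping()) are assumptions about what a caller might use rather than what ZFS itself does.

#include <linux/mm.h>
#include <linux/pagemap.h>

static inline unsigned long my_page_size(struct page *pp)
{
#if defined(HAVE_MM_PAGE_SIZE)
    return page_size(pp);                       /* compound-page aware */
#else
    return PAGE_SIZE << compound_order(pp);     /* the same computation, spelled out */
#endif
}

static inline struct address_space *my_page_mapping(struct page *pp)
{
#if defined(HAVE_MM_PAGE_MAPPING)
    return page_mapping(pp);
#else
    return folio_mapping(page_folio(pp));       /* page_mapping() was folded into the folio API */
#endif
}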

@@ -7,14 +7,14 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_PROC_OPERATIONS], [
ZFS_LINUX_TEST_SRC([proc_ops_struct], [
#include <linux/proc_fs.h>
static int test_open(struct inode *ip, struct file *fp) { return 0; }
static ssize_t test_read(struct file *fp, char __user *ptr,
int test_open(struct inode *ip, struct file *fp) { return 0; }
ssize_t test_read(struct file *fp, char __user *ptr,
size_t size, loff_t *offp) { return 0; }
static ssize_t test_write(struct file *fp, const char __user *ptr,
ssize_t test_write(struct file *fp, const char __user *ptr,
size_t size, loff_t *offp) { return 0; }
static loff_t test_lseek(struct file *fp, loff_t off, int flag)
loff_t test_lseek(struct file *fp, loff_t off, int flag)
{ return 0; }
static int test_release(struct inode *ip, struct file *fp)
int test_release(struct inode *ip, struct file *fp)
{ return 0; }
const struct proc_ops test_ops __attribute__ ((unused)) = {
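
The hunk above only probes for struct proc_ops, the 5.6 replacement for registering procfs handlers through file_operations. A minimal sketch (not ZFS code) of how such a table is typically filled and registered; the entry name and handlers are hypothetical.

#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static int example_show(struct seq_file *sf, void *data)
{
    seq_puts(sf, "example\n");
    return 0;
}

static int example_open(struct inode *ip, struct file *fp)
{
    return single_open(fp, example_show, NULL);
}

static const struct proc_ops example_proc_ops = {
    .proc_open    = example_open,
    .proc_read    = seq_read,
    .proc_lseek   = seq_lseek,
    .proc_release = single_release,
};

/* Registered with: proc_create("example", 0444, NULL, &example_proc_ops); */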

@@ -4,7 +4,7 @@ dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_PUT_LINK], [
ZFS_LINUX_TEST_SRC([put_link_cookie], [
#include <linux/fs.h>
static void put_link(struct inode *ip, void *cookie)
void put_link(struct inode *ip, void *cookie)
{ return; }
static struct inode_operations
iops __attribute__ ((unused)) = {
@@ -14,7 +14,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_PUT_LINK], [
ZFS_LINUX_TEST_SRC([put_link_nameidata], [
#include <linux/fs.h>
static void put_link(struct dentry *de, struct
void put_link(struct dentry *de, struct
nameidata *nd, void *ptr) { return; }
static struct inode_operations
iops __attribute__ ((unused)) = {

@@ -25,62 +25,3 @@ AC_DEFUN([ZFS_AC_KERNEL_REGISTER_SYSCTL_TABLE], [
AC_MSG_RESULT([no])
])
])
dnl #
dnl # Linux 6.11 register_sysctl() enforces that sysctl tables no longer
dnl # supply a sentinel end-of-table element. 6.6 introduces
dnl # register_sysctl_sz() to enable callers to choose, so we use it if
dnl # available for backward compatibility.
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_REGISTER_SYSCTL_SZ], [
ZFS_LINUX_TEST_SRC([has_register_sysctl_sz], [
#include <linux/sysctl.h>
],[
struct ctl_table test_table[] __attribute__((unused)) = {0};
register_sysctl_sz("", test_table, 0);
])
])
AC_DEFUN([ZFS_AC_KERNEL_REGISTER_SYSCTL_SZ], [
AC_MSG_CHECKING([whether register_sysctl_sz exists])
ZFS_LINUX_TEST_RESULT([has_register_sysctl_sz], [
AC_MSG_RESULT([yes])
AC_DEFINE(HAVE_REGISTER_SYSCTL_SZ, 1,
[register_sysctl_sz exists])
],[
AC_MSG_RESULT([no])
])
])
dnl #
dnl # Linux 6.11 makes const the ctl_table arg of proc_handler
dnl #
AC_DEFUN([ZFS_AC_KERNEL_SRC_PROC_HANDLER_CTL_TABLE_CONST], [
ZFS_LINUX_TEST_SRC([has_proc_handler_ctl_table_const], [
#include <linux/sysctl.h>
static int test_handler(
const struct ctl_table *ctl __attribute((unused)),
int write __attribute((unused)),
void *buffer __attribute((unused)),
size_t *lenp __attribute((unused)),
loff_t *ppos __attribute((unused)))
{
return (0);
}
], [
proc_handler *ph __attribute((unused)) =
&test_handler;
])
])
AC_DEFUN([ZFS_AC_KERNEL_PROC_HANDLER_CTL_TABLE_CONST], [
AC_MSG_CHECKING([whether proc_handler ctl_table arg is const])
ZFS_LINUX_TEST_RESULT([has_proc_handler_ctl_table_const], [
AC_MSG_RESULT([yes])
AC_DEFINE(HAVE_PROC_HANDLER_CTL_TABLE_CONST, 1,
[proc_handler ctl_table arg is const])
], [
AC_MSG_RESULT([no])
])
])
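
A minimal sketch (not ZFS code) of how the register_sysctl_sz() probe above is typically used: pass an explicit entry count and drop the sentinel when the new interface exists (required on 6.11+), keep the sentinel otherwise. The tunable, table, and "example" path are hypothetical; the HAVE_* macro is the AC_DEFINE above.

#include <linux/sysctl.h>

static unsigned long example_tunable;

static struct ctl_table example_table[] = {
    {
        .procname     = "example_tunable",
        .data         = &example_tunable,
        .maxlen       = sizeof(example_tunable),
        .mode         = 0644,
        .proc_handler = proc_doulongvec_minmax,
    },
#if !defined(HAVE_REGISTER_SYSCTL_SZ)
    { },    /* sentinel entry, still expected by plain register_sysctl() */
#endif
};

static struct ctl_table_header *example_header;

static void example_sysctl_register(void)
{
#if defined(HAVE_REGISTER_SYSCTL_SZ)
    example_header = register_sysctl_sz("example", example_table, 1);
#else
    example_header = register_sysctl("example", example_table);
#endif
}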

@@ -8,7 +8,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_RENAME], [
dnl #
ZFS_LINUX_TEST_SRC([inode_operations_rename2], [
#include <linux/fs.h>
static int rename2_fn(struct inode *sip, struct dentry *sdp,
int rename2_fn(struct inode *sip, struct dentry *sdp,
struct inode *tip, struct dentry *tdp,
unsigned int flags) { return 0; }
@@ -26,7 +26,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_RENAME], [
dnl #
ZFS_LINUX_TEST_SRC([inode_operations_rename_flags], [
#include <linux/fs.h>
static int rename_fn(struct inode *sip, struct dentry *sdp,
int rename_fn(struct inode *sip, struct dentry *sdp,
struct inode *tip, struct dentry *tdp,
unsigned int flags) { return 0; }
@@ -44,7 +44,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_RENAME], [
dnl #
ZFS_LINUX_TEST_SRC([dir_inode_operations_wrapper_rename2], [
#include <linux/fs.h>
static int rename2_fn(struct inode *sip, struct dentry *sdp,
int rename2_fn(struct inode *sip, struct dentry *sdp,
struct inode *tip, struct dentry *tdp,
unsigned int flags) { return 0; }
@@ -62,7 +62,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_RENAME], [
dnl #
ZFS_LINUX_TEST_SRC([inode_operations_rename_userns], [
#include <linux/fs.h>
static int rename_fn(struct user_namespace *user_ns, struct inode *sip,
int rename_fn(struct user_namespace *user_ns, struct inode *sip,
struct dentry *sdp, struct inode *tip, struct dentry *tdp,
unsigned int flags) { return 0; }
@@ -77,7 +77,7 @@ AC_DEFUN([ZFS_AC_KERNEL_SRC_RENAME], [
dnl #
ZFS_LINUX_TEST_SRC([inode_operations_rename_mnt_idmap], [
#include <linux/fs.h>
static int rename_fn(struct mnt_idmap *idmap, struct inode *sip,
int rename_fn(struct mnt_idmap *idmap, struct inode *sip,
struct dentry *sdp, struct inode *tip, struct dentry *tdp,
unsigned int flags) { return 0; }

Some files were not shown because too many files have changed in this diff.