Devices were only being created at module load time or when a
dataset was created; similarly, devices were not always being
removed at all the correct times. This patch updates all the
places where devices should either be created or removed. I'm
reasonably sure I got them all, but if there's a case I missed
we can catch it with a follow-up patch. The updated points are:
module load/unload
zfs create/destroy
zpool import/export
zpool destroy
This patch also adds a simple regression test to zconfig.sh
to ensure zpool import/export works properly. The test
specifically checks that devices are created properly, removed
after export, recreated after import, and removed again as a
consequence of a zpool destroy.
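A minimal sketch of the checks involved (the pool name and device
path below are hypothetical, not necessarily what zconfig.sh uses):

    # Sketch: verify the device node tracks the pool's lifecycle.
    POOL=tank                    # hypothetical pool containing a zvol
    DEV=/dev/tank/vol0           # hypothetical udev-created device node

    zpool export $POOL
    [ ! -e $DEV ] || { echo "device still present after export"; exit 1; }

    zpool import $POOL
    [ -e $DEV ]   || { echo "device missing after import"; exit 1; }

    zpool destroy $POOL
    [ ! -e $DEV ] || { echo "device still present after destroy"; exit 1; }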
Due to a now-resolved bug in the SPL you previously needed to
explicitly import your zpools after module load. That is no longer
the case: if a cache file is found your pool will be automatically
loaded and available, so I'm removing the explicit imports from the
test case.
After much contemplation I can't see a clean way to use udev entirely
in-tree for testing. This patch removes a horrible, horrible hack
which copied the needed udev bits into place on your system to make
things work. That is simply not acceptable; nothing run in-tree
should ever, ever install anything on your system.
Since I could not come up with a clean way to use udev in-tree, the
fix is to simply parse the zdev config file and create the needed
symlinks in a sub-directory of your working tree. This is not as
clean as using udev but it works perfectly well for in-tree testing.
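In rough terms the in-tree approach boils down to something like the
following (the config format and paths shown are assumptions for
illustration):

    # Sketch: build a local tree of device symlinks from a zdev-style
    # config instead of installing udev rules on the host.
    CONFIG=zdev.conf             # assumed format: <name> <by-path id> per line
    DEVDIR=./dev/disk/zpool      # symlinks live under the working tree

    mkdir -p "$DEVDIR"
    grep -v '^#' "$CONFIG" | while read NAME BYPATH; do
        [ -n "$NAME" ] || continue
        ln -sf "/dev/disk/by-path/$BYPATH" "$DEVDIR/$NAME"
    done

The test scripts can then point the zpool commands at this private
directory without touching anything outside the tree.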
While I completely agree that udev is the lesser of many possible
evils when solving the device naming issue... it is still evil. After
attempting to craft a single rule which works with the various
versions of udev shipped by various distros, I've come to the
conclusion that the only maintainable way to solve this issue is to
split the rule from any particular configuration.
This commit provides a generic 60-zpool.rules file which uses a
small helper utility, 'zpool_id', to parse a configuration file
located by default at /etc/zfs/zdev.conf. The helper maps a
by-path udev name to a friendlier <channel><rank> name for
large configurations.
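Conceptually the helper is just a table lookup over zdev.conf; a
hedged sketch follows (the entries shown are made up, and the real
zpool_id may differ in its details):

    # Sketch: print the <channel><rank> alias for a given by-path name.
    # Example zdev.conf entries (hypothetical):
    #   a1  pci-0000:00:1f.2-scsi-0:0:0:0
    #   a2  pci-0000:00:1f.2-scsi-0:0:1:0
    CONFIG=/etc/zfs/zdev.conf
    BYPATH="$1"

    NAME=$(grep -v '^#' "$CONFIG" | awk -v p="$BYPATH" '$2 == p { print $1 }')
    [ -n "$NAME" ] && echo "$NAME"

The 60-zpool.rules file then only has to invoke the helper and add a
symlink based on its output, which keeps the distro- and
hardware-specific details out of the rule itself.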
As part of this change all of the support scripts which rely on
this udev naming convention have been updated as needed. Example
zdev.conf files have also been added for 3 different systems, but
you will always need to write one for your exact hardware.
Finally, included in these changes are the proper tweaks to the
build system to ensure everything still gets packaged properly
in the rpms and can run in or out of tree.
Moving forward, 'udevadm trigger' and 'udevadm settle' replace
udevtrigger/udevsettle as the correct interfaces to use. However,
since we need to work in both environments for testing, check
whether udevadm is available. If it is, use it; if it is not, fall
back to the legacy interface.
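In the scripts this check looks roughly like the following (the
wrapper function name is mine):

    # Sketch: prefer udevadm, fall back to the legacy helpers when absent.
    udev_trigger() {
        if command -v udevadm >/dev/null 2>&1; then
            udevadm trigger
            udevadm settle
        else
            udevtrigger
            udevsettle
        fi
    }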
The script has been updated to download the latest documentation
packages for Solaris and extract the needed ZFS man pages. These
will still need a little markup to handle changes between the
Solaris and Linux versions of ZFS. However, the changes should be
pretty minor since I've tried hard to keep the interface the same.
In addition to the script update, the zdb, zfs, and zpool man
pages have been added to the repo.
This script was added to provide a simple way to check that zpool
layers correctly on all the standard Linux block device types.
It's still a little fragile if there's a hiccup in, say, the md or
lvm tool chain, but aside from that it works well.
The 'make check' target now also calls this script in a safe mode
which only operates on files and loopback devices. To check other
block device types it must be explicitly run by hand, because it
will overwrite various block devices.
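The safe mode amounts to building a pool on a file-backed loopback
device only, roughly like this (file name, size, and pool name are
illustrative):

    # Sketch: layer a zpool on a file-backed loopback device, then tear it down.
    FILE=/tmp/zpool-vdev0.img
    dd if=/dev/zero of="$FILE" bs=1M count=0 seek=256    # 256M sparse file
    LODEV=$(losetup -f)                                  # first unused loop device
    losetup "$LODEV" "$FILE"

    zpool create -f tank-loop "$LODEV"
    zpool destroy tank-loop

    losetup -d "$LODEV"
    rm -f "$FILE"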
For the sake of completeness we need to validate that everything
works well not just on IDE or SCSI drives, but also when a zpool is
configured on top of the Linux virtual block devices. These scripts
simplify that testing process, and they have shown that while
everything is good on top of a ram disk, right now the code base
panics the kernel when layered on top of either an md or dm style
device. For the moment, don't do that.
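For reference, the ram disk case that currently works is roughly the
following (the brd module parameters shown are illustrative):

    # Sketch: layer a zpool on the brd ram disk driver.
    modprobe brd rd_nr=1 rd_size=262144    # one 256M ram disk at /dev/ram0
    zpool create -f tank-ram /dev/ram0
    zpool destroy tank-ram
    rmmod brd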