Add two additional basic sanity tests to confirm zvol snapshots
and clones work. The snapshot test is basically the same as the
example provided in the wiki. The clone test goes one step farther
and clones the snapshot, then modifies it to match the original,
modified volume. It then compares the two to ensure everything was
modified as expected.
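For illustration, a minimal sketch of the clone test's shape; the
pool, volume, and device names here are hypothetical, not the ones
zconfig.sh actually uses:

    # Create a volume, snapshot it, then modify the original volume.
    zfs create -V 100M tank/vol
    zfs snapshot tank/vol@snap
    dd if=/dev/zero of=/dev/zvol/tank/vol bs=1M count=1 conv=notrunc

    # Clone the snapshot and apply the same modification to the clone.
    zfs clone tank/vol@snap tank/clone
    dd if=/dev/zero of=/dev/zvol/tank/clone bs=1M count=1 conv=notrunc

    # Both block devices should now contain identical data.
    cmp /dev/zvol/tank/vol /dev/zvol/tank/clone && echo "clone test passed"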
These are just meant to be sanity tests to catch obvious breakage
before tagging a release. They are still not a substitute for a
full regression test suite.
While the zfs utilities do block until the expected device appears,
they can only do this for full devices, not partitions. This means
that once a device appears it may still take a little time
before the kernel rescans the partition table, updates sysfs, udev
is notified, and the partition devices are created. The test case
itself could block briefly waiting for the partitions because it knows
what to expect. But for now the simpler thing to do is just delay.
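A sketch of both approaches; the one-second delay, the '-part1'
partition naming, and the timeout are all illustrative:

    # What the test does for now: a short fixed delay after the
    # volume is created, giving udev time to create partition nodes.
    zfs create -V 100M tank/vol
    sleep 1

    # The alternative described above: poll for the expected node.
    i=0
    while [ ! -b /dev/zvol/tank/vol-part1 ] && [ $i -lt 10 ]; do
        sleep 1
        i=$((i + 1))
    done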
See the previous commit for details. But the gist is that with the
removal of the zvol path component the regression tests must be
updated to use the correct path name.
Several folks have now remarked that when the regression tests
fail they leave a mess behind. This was done intentionally at
the time to facilitate debugging the wreckage.
However, this also means that you may need to do some manual
cleanup, such as removing the loopback devices, before re-running
the tests. To simplify this procedure I've added the '-c'
option to zconfig.sh which will attempt to clean up the mess
from a previous test before starting.
This is somewhat dangerous because it must guess which
loopback devices you were using. But this risk is fairly minimal
because devices which are currently still in use cannot be
cleaned up, and because only devices with 'zpool' in the name
are considered for removal. That said, if you're running parallel
copies of, say, zconfig.sh this may cause you some trouble.
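A rough sketch of how such a cleanup pass might look; the option
parsing and the 'zpool' name filter follow the description above,
but the exact details in zconfig.sh may differ:

    while getopts 'c' opt; do
        case $opt in
            c) CLEANUP=1 ;;
        esac
    done

    if [ "$CLEANUP" = "1" ]; then
        # Detach leftover loopback devices whose backing file has
        # 'zpool' in its name; devices still in use will refuse to
        # detach, which limits the damage a bad guess can do.
        for dev in $(losetup -a | grep zpool | cut -d: -f1); do
            losetup -d $dev
        done
    fi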
Update the zconfig.sh test script to verify not only that volumes,
snapshots, and clones are created and removed properly, but also
that the partition information for each of these types of
devices is properly enumerated by the kernel.
Tests 4 and 5 now also create two partitions on the original volume
and these partitions are expected to also exist on the snapshot and
the clone. Correctness is verified after import/export, module
load/unload, dataset creation, and pool destruction.
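The kind of check this amounts to, as a hedged sketch; the device
names and the '-part' suffix naming are assumptions on my part:

    # Verify both partitions are visible on the volume, its
    # snapshot, and the clone.
    for dev in /dev/zvol/tank/vol /dev/zvol/tank/vol@snap /dev/zvol/tank/clone; do
        test -b ${dev}-part1 || exit 1
        test -b ${dev}-part2 || exit 1
    done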
Additionally, the code to create a partition table was refactored
into a small helper function to simplify the test cases. And
finally, all of the function variables were flagged 'local' to ensure
their scope is limited. This should have been done a while ago.
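The helper might look something like this sketch; the function name
and the choice of partitioning tool are my assumptions, not what
zconfig.sh necessarily uses:

    # Label the device and split it into two primary partitions;
    # working variables are 'local' to keep their scope limited.
    zconfig_partition() {
        local device=$1

        parted --script $device mklabel msdos
        parted --script $device mkpart primary 0% 50%
        parted --script $device mkpart primary 50% 100%
        sleep 1  # give udev time to create the partition nodes
    }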
Using sparse files for the test configurations has at least three
significant advantages.
1) Actually test sparse files to ensure they work.
2) Drastically reduce the disk space required by the regression test
   suite. This turns out to be fairly important when running the
   test suite in a virtualized environment.
3) Significantly speed up the test suite. Run time of zconfig.sh
   dropped from 2m:56s to 1m:00s on my test system, and zpios-sanity.sh
   now runs in only 0m:26s.
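For reference, a sparse vdev file can be created either way below;
the size and paths are illustrative:

    # Either command creates a 2G file that occupies no blocks
    # until the tests actually write to it.
    truncate -s 2G /tmp/zpool-vdev0
    dd if=/dev/zero of=/tmp/zpool-vdev1 bs=1 count=0 seek=2G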
This change updates zconfig.sh to reference /dev/zvol/ instead
of simply /dev/. It also extends the tests to verify correct
minor device creation for import/export and module load/unload.
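In other words, with a hypothetical volume name just to show the
path change:

    zfs create -V 100M tank/vol
    test -b /dev/zvol/tank/vol    # previously this was /dev/tank/vol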
This test was accidentally re-added to the linux-kernel-disk
topic branch. It is being reverted so it can be reapplied, with
a few minor tweaks, in the right place.
The splat module is only needed for the spl regression tests.
But if we add it to MODULES then 'zfs.sh -u' will be able to
unload it if needed. The downside is that 'zfs.sh' will always
load it, but its overhead is minimal, and in a production
setting you'll always be doing a 'modprobe zfs' anyway, so
this is really just for testing.
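Conceptually something like the line below; the module names and
variable layout are my guess at the zfs.sh internals, not a
verified listing:

    # With splat in the list, 'zfs.sh -u' can unload it when
    # present; plain 'zfs.sh' pays the small cost of loading it.
    MODULES="spl splat zavl znvpair zunicode zcommon zfs"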
The common.sh script assumed that it was either being run from
in-tree or was installed under /usr/libexec/zfs. If this was
not the case, because of, say, the default --prefix=/usr/local,
then the paths would be wrong. To fix this, common.sh is now
generated from common.sh.in with the correct path information
provided at configure time.
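The general pattern, sketched; the variable names are illustrative
rather than the actual common.sh.in contents:

    # common.sh.in: configure fills in the installed paths.
    prefix=@prefix@
    exec_prefix=@exec_prefix@
    libexecdir=@libexecdir@
    SCRIPTDIR=${libexecdir}/zfs

    # configure.ac lists the file so it is generated at configure
    # time:
    # AC_CONFIG_FILES([scripts/common.sh])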
Devices were only being created at module load time or when a
dataset was created. Similarly, devices were not always being
removed at all the correct times. This patch updates all the
places where devices should either be created or removed. I'm
reasonably sure I got them all, but if there's a case I missed
we can catch it with a follow-up patch.
module load/unload
zfs create/remove
zpool import/export
zpool destroy
This patch also adds a simple regression test to zconfig.sh
to ensure zpool import/export is basically working properly.
This test specifically checks that devices are created
properly, removed after export, created after import, and
removed as a consequence of a zpool destroy.
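The shape of that check, roughly; names are hypothetical, and at
the time of this commit the device path may not have included the
zvol component:

    zfs create -V 100M tank/vol
    test -b /dev/zvol/tank/vol   || exit 1   # created with the dataset
    zpool export tank
    test ! -e /dev/zvol/tank/vol || exit 1   # removed on export
    zpool import tank
    test -b /dev/zvol/tank/vol   || exit 1   # recreated on import
    zpool destroy tank
    test ! -e /dev/zvol/tank/vol || exit 1   # removed on destroy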