Add two additional basic sanity tests to confirm zvol snapshots
and clones work. The snapshot test is basically the same as the
example provided in the wiki. The clone test goes one step further
and clones the snapshot, then modifies the clone to match the
original modified volume. It then compares the two to ensure
everything was modified as expected.
These are just meant to be sanity tests to catch obvious breakage
before tagging a release. They are still not a substitute for a
full regression test suite.
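For reference, the clone check amounts to something like the sketch
below; the pool and volume names, sizes, and the /dev/zvol device
path are illustrative assumptions, not the exact code in zconfig.sh:

    # Sketch only: snapshot a volume, clone it, modify the clone to
    # match the modified original, then compare the block devices.
    zfs create -V 100M tank/vol
    zfs snapshot tank/vol@snap
    zfs clone tank/vol@snap tank/vol-clone
    dd if=/dev/urandom of=/dev/zvol/tank/vol bs=1M count=1 conv=notrunc
    dd if=/dev/zvol/tank/vol of=/dev/zvol/tank/vol-clone \
        bs=1M count=1 conv=notrunc
    cmp /dev/zvol/tank/vol /dev/zvol/tank/vol-clone
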
While the zfs utilities do block until the expected device appears
they can only do this for full devices, not partitions. This means
that once a device appears it may still take a little time before
the kernel rescans the partition table, sysfs is updated, udev is
notified, and the partition devices are created. The test case
itself could block briefly waiting for the partition because it
knows what to expect. But for now the simpler thing to do is just
delay.
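If the fixed delay ever proves too fragile, a bounded poll along the
following lines could replace it; the helper name, the timeout, and
the -part1 node name are assumptions for illustration only:

    # Sketch: wait up to ~10 seconds for a partition node to appear.
    wait_for_partition() {
            local dev=$1
            local i
            for i in $(seq 1 10); do
                    [ -b "${dev}" ] && return 0
                    sleep 1
            done
            return 1
    }

    wait_for_partition /dev/zvol/tank/vol-part1 || exit 1
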
See the previous commit for details, but the gist is that with the
removal of the zvol path component the regression tests must be
updated to use the correct path names.
Several folks have now remarked that when the regression tests
fail they leave a mess behind. This was done intentionally at
the time to facilitate debugging the wreckage.
However, this also means that you may need to do some manual
cleanup such as removing the loopback devices before re-running
the tests. To simplify this procedure I've added the '-c'
option to zconfig.sh which will attempt to clean up the mess
from a previous test before starting.
This is somewhat dangerous because it must guess which loopback
devices you were using. But the risk is fairly minimal because
devices which are still in use cannot be cleaned up, and because
only devices with 'zpool' in the name are considered for removal.
That said, if you're running parallel copies of zconfig.sh this
may cause you some trouble.
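The cleanup itself is roughly of this shape; the losetup parsing
below is a sketch, not necessarily the exact code in zconfig.sh:

    # Sketch: detach loopback devices whose backing file mentions 'zpool'.
    cleanup_loop_devices() {
            local dev
            for dev in $(losetup -a | grep zpool | cut -d: -f1); do
                    losetup -d "${dev}" 2>/dev/null
            done
    }
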
Update the zconfig.sh test script to verify not only that volumes,
snapshots, and clones are created and removed properly, but also
that the partition information for each of these types of devices
is properly enumerated by the kernel.
Tests 4 and 5 now also create two partitions on the original volume
and these partitions are expected to also exist on the snapshot and
the clone. Correctness is verified after import/export, module
load/unload, dataset creation, and pool destruction.
Additionally, the code to create a partition table was refactored
into a small helper function to simplify the test cases. And
finally, all of the function variables were flagged 'local' to
ensure their scope is limited. This should have been done a while
ago.
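The helper is along these lines; the function name and the sfdisk
input (which varies by sfdisk version) are illustrative rather than
the actual code:

    # Sketch: label a zvol with two primary partitions.
    partition_vol() {
            local dev=$1
            # First partition takes roughly half the device, the
            # second takes the remainder; units depend on the
            # sfdisk version in use.
            printf ',50M\n,\n' | sfdisk -q "${dev}"
    }
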
This change updates zconfig.sh to reference /dev/zvol/ instead
of simply /dev/. It also extends the tests to verify correct
minor device creation for import/export and module load/unload.
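The added checks are roughly of the following form; the pool and
volume names and the fail helper are illustrative assumptions:

    # Sketch: minor device nodes should track the pool state.
    zfs create -V 100M tank/vol
    test -b /dev/zvol/tank/vol || fail "missing /dev/zvol/tank/vol"
    zpool export tank
    test ! -b /dev/zvol/tank/vol || fail "stale node after export"
    zpool import tank
    test -b /dev/zvol/tank/vol || fail "missing node after import"
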
The common.sh script assumed that it was either being run from
in-tree or was installed under /usr/libexec/zfs. If this was
not the case, because of, say, the default --prefix=/usr/local,
then the paths would be wrong. To fix this common.sh is now
generated from common.sh.in with the correct path information
provided at configure time.
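This follows the usual autoconf substitution pattern; the variable
names below are a sketch of the idea, not a listing of the actual
common.sh.in:

    # configure.ac
    AC_CONFIG_FILES([scripts/common.sh])

    # scripts/common.sh.in (excerpt, illustrative)
    prefix=@prefix@
    exec_prefix=@exec_prefix@
    libexecdir=@libexecdir@
    ZFS_SCRIPT_DIR=${libexecdir}/zfs
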
Due to a now-resolved bug in the SPL you previously needed to
explicitly import your zpools after module load. That is no longer
the case. If a cache file is found your pool will be automatically
loaded and available, so I'm removing the explicit imports from the
test case.
Pass an alternate location via module option for the zpool.cache file
used by the kernel. This allows us to write in-tree tests which do
not modify any out-of-tree files we do not own. This is just standard
good behavior for any test suite.
Additionally, refine the existing test case to explicitly use the
cache file when looking for pools to import. And add a second test
case which is forced to probe the disks for available pools to
import.
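Concretely, that looks something like the following; treat the
spa_config_path option name, the paths, and the pool name as
assumptions and check the module source for the authoritative
spelling:

    # Sketch: point the module at an in-tree cache file, then
    # exercise both import paths for an existing pool.
    modprobe zfs spa_config_path="$PWD/scripts/zpool.cache"
    zpool import -c "$PWD/scripts/zpool.cache" tank  # use the cache file
    zpool export tank
    zpool import -d /dev tank                        # probe the devices
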
This is an initial script for validation of zfs/zpool configuration.
For now there is only one test here to ensure that /etc/zfs/zpool.cache
is being updated properly from the kernel module. Additional tests
should be added; I believe Richardo said there was an existing test
suite out there which validated the behavior of many zpool/zfs commands.
It would be nice to add that as appropriate.
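The single check boils down to something like this; the loopback
vdev and the fail helper are illustrative:

    # Sketch: creating a pool should update /etc/zfs/zpool.cache.
    zpool create tank /dev/loop0
    test -f /etc/zfs/zpool.cache || fail "zpool.cache was not updated"
    zpool destroy tank
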