These changes do not belong on linux-kernel-module since they
are tweaks to user space utilities. I'm reverting them from
this topic branch and will be moving them to a new topic branch
which can be used for just this sort of thing.
Interestingly this has only been a problem on a clean RHEL6
install so I suspect the include was removed from one of the
standard system include headers. We should be including it
explicitly anyway since it's used in both of these .c files.
At last a useful user space interface for the Linux ZFS port arrives.
With the addition of the ZVOL, real ZFS based block devices are available
and can be compared head to head with Linux's MD and LVM block drivers.
The Linux ZVOL has not yet had any performance work done but from a user
perspective it should be functionally complete and behave like any other
Linux block device.
The ZVOL has so far been tested using zconfig.sh on the following x86_64
based platforms: FC11, CHAOS4, RHEL5, RHEL6, and SLES11. However, more
testing is required to ensure everything is working as designed.
What follows is a somewhat detailed list of the changes included in
this commit to make ZVOLs possible. A few other issues were addressed
in the context of these changes and are also mentioned below.
* zvol_create_link_common() simplified to issue the ioctl to create
the device and then wait up to 10 seconds for it to appear. The
device will be created within a few milliseconds by udev under
/dev/<pool>/<volume>. Note this naming convention is slightly
different than on Solaris but I feel it is more Linuxy. A sketch of
the wait logic follows this list.
* Removed support for dump vdevs. This concept is specific to Solaris
and does not map cleanly to Linux. Under Linux generating system
cores is preferably done over the network via netdump, or alternately
to a block device via O_DIRECT, as in the second sketch below.
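The wait-for-device logic described in the first item can be pictured
with the following user space sketch. This is only a rough
illustration, not the actual zvol_create_link_common() code; the
wait_for_device() helper, the 100ms poll interval, and the timeout
parameter are assumptions made for the example.

    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* Hypothetical helper: wait up to 'timeout_ms' milliseconds for
     * the udev-created node (e.g. /dev/<pool>/<volume>) to appear. */
    static int
    wait_for_device(const char *path, int timeout_ms)
    {
            struct stat st;
            int waited_ms = 0;

            while (waited_ms < timeout_ms) {
                    if (stat(path, &st) == 0)
                            return (0);     /* device node exists */

                    usleep(100 * 1000);     /* poll every 100ms */
                    waited_ms += 100;
            }

            return (-1);                    /* timed out */
    }

The caller would pass 10000 for timeout_ms to get the 10 second limit
mentioned above.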
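For the O_DIRECT alternative mentioned in the second item, a core
could in principle be streamed to a block device with direct I/O,
roughly as below. This is an assumption-laden sketch: the 4096-byte
alignment and the requirement that 'len' be a multiple of the device
block size illustrate O_DIRECT's rules and are not taken from any
actual dump code.

    #define _GNU_SOURCE             /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical example: write one aligned buffer to a block
     * device with O_DIRECT, bypassing the page cache.  'len' must
     * be a multiple of the device's logical block size. */
    static int
    write_direct(const char *dev, const void *data, size_t len)
    {
            void *buf;
            int fd, rc = -1;

            /* O_DIRECT requires a suitably aligned buffer. */
            if (posix_memalign(&buf, 4096, len) != 0)
                    return (-1);

            memcpy(buf, data, len);

            fd = open(dev, O_WRONLY | O_DIRECT);
            if (fd >= 0) {
                    if (write(fd, buf, len) == (ssize_t)len)
                            rc = 0;
                    (void) close(fd);
            }

            free(buf);
            return (rc);
    }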
Because the local 'index' variable shadows the index() function
it was replaced by 'i'. Unfortunately when I made this change
I accidentally replaced one instance with 'j' resulting in the
short decimal values being printed incorrectly.
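For context, the shadowing issue is of the following general form.
The print_values() function, the format string, and everything other
than the 'index' to 'i' rename are hypothetical, used only to show
why the rename must be applied to every instance.

    #include <stdio.h>
    #include <strings.h>    /* declares index() */

    /* Hypothetical example: print short decimal values using a loop
     * counter named 'i' rather than 'index', which would shadow the
     * libc index() function.  The rename must be done consistently;
     * a stray 'j' here would print the wrong elements. */
    static void
    print_values(const short *values, int count)
    {
            int i;

            for (i = 0; i < count; i++)
                    printf("%hd ", values[i]);
            printf("\n");
    }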
It's still not clear to me why the default value here is large
enough on Solaris. I hit this limit again when setting up 120 SATA
drives configured as 15 raidz2 groups each containing 8 drives.
We expect to go bigger so we may just want to spend a little
time and figure out how to make this all dynamic.
For the moment I have added an error message to the failure path to
make it clear what happened. I have also changed the zdb ASSERT to
a VERIFY so we always catch the failure. For now we will just always
ensure the module stack is loaded; longer term we need something a
little more flexible.
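For reference, the ASSERT versus VERIFY distinction relied on above
is that VERIFY() is evaluated in every build while ASSERT() compiles
away when debugging is disabled. The macros below are simplified
stand-ins for illustration, not the project's actual definitions.

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-ins: VERIFY() always checks its condition,
     * ASSERT() disappears entirely in non-debug builds. */
    #define VERIFY(cond)                                            \
            do {                                                    \
                    if (!(cond)) {                                  \
                            (void) fprintf(stderr,                  \
                                "VERIFY(%s) failed\n", #cond);      \
                            abort();                                \
                    }                                               \
            } while (0)

    #ifdef DEBUG
    #define ASSERT(cond)    VERIFY(cond)
    #else
    #define ASSERT(cond)    ((void) 0)  /* compiled out */
    #endif

This is why converting the zdb check from an ASSERT to a VERIFY
guarantees the failure is caught even in production builds.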
Most of these fixes appear to be harmless and should never occur.
However, there were a few cases in this patch which do concern me;
I doubt we're seeing them but they look possible... mainly in the
user tools.