In check_disk() we should only check the entire device if it is
not a whole disk. If it is a whole disk with an EFI label on it,
it is possible that libblkid will misidentify the device as a
filesystem. I had a case yesterday where 2 bytes in the EFI
GUID happened to be set to just the right values such that
libblkid decided there was a minix filesystem there. If it's a
whole device we look for an EFI label instead, roughly as
sketched below.
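A minimal sketch of that logic, assuming the efi_alloc_and_read()
and efi_free() interfaces from libefi; the libblkid fallback and
most error handling are omitted for brevity:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/efi_partition.h>

    /* Sketch only: returns -1 if the disk appears to be in use. */
    static int
    check_disk(const char *path, int force, int wholedisk)
    {
        struct dk_gpt *vtoc;
        int fd, err;

        /* Not a whole disk: defer to the libblkid check (not shown). */
        if (!wholedisk)
            return (0);

        /*
         * Whole disk: look for an EFI label directly, since libblkid
         * may misread stray GUID bytes as a filesystem signature.
         */
        if ((fd = open(path, O_RDONLY)) < 0)
            return (-1);

        err = efi_alloc_and_read(fd, &vtoc);
        if (err >= 0)
            efi_free(vtoc);
        (void) close(fd);

        /* A readable EFI label means the disk is in use. */
        return ((err >= 0 && !force) ? -1 : 0);
    }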
If we are able to read the backup EFI label from a device but
the primary is corrupt, then don't bother trying to stat the
partitions in /dev/; the kernel will not create devices using
the backup label when the primary is damaged.
Add code to determine if we have a udev path instead of a
normal device path. In this case use the -part# partition
naming scheme instead of the /dev/disk# scheme, as sketched
below. This is important because we always want to access
devices using the full path provided at configuration time.
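A hypothetical helper illustrating the two conventions; the
function name and layout are mine, not the actual code:

    #include <stdio.h>
    #include <string.h>

    /*
     * Build the partition node name for a device path: udev paths
     * under /dev/disk/ get a -part# suffix, plain /dev/ paths just
     * have the partition number appended.
     */
    static void
    partition_path(const char *path, int part, char *buf, size_t len)
    {
        if (strncmp(path, "/dev/disk/", 10) == 0)
            (void) snprintf(buf, len, "%s-part%d", path, part);
        else
            (void) snprintf(buf, len, "%s%d", path, part);
    }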
Re-added support for zpool_relabel_disk(); now that we have the
full libefi library in place we have access to this
functionality.
Lots of additional paranoia to ensure EFI labels are written
correctly. These changes include:
1) Removing the O_NDELAY flag when opening a file descriptor
for libefi. This flag should really only be used when you
do not intend to do any file IO. Under Solaris only ioctl()s
were performed; under Linux we also perform reads and writes.
2) Use O_DIRECT to ensure any caching is bypassed while
writing or reading the EFI labels. This change forces the
use of sector aligned memory buffers which are allocated
using posix_memalign() (see the sketch after this list).
3) Add additional efi_debug error messages to efi_ioctl().
4) While doing an fsync is good to ensure the EFI label is on
disk, we can and should go one step further by issuing the
BLKFLSBUF ioctl(), also shown in the sketch below. This signals
the kernel to instruct the drive to flush its on-disk cache.
5) Because of some initial strangeness I observed in testing
with some flaky drives, be extra paranoid in zpool_label_disk().
After we've written the label without error, flushed the drive
caches, and correctly detected the new partitions created by the
kernel, additionally read back the EFI label from user space to
make sure it is intact and correct. I don't think we can ever
be too careful here.
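Items 2 and 4 combine into roughly the following write path.
This is a simplified sketch of the idea, not the actual libefi
code; the function name and label handling are illustrative:

    #define _GNU_SOURCE         /* for O_DIRECT */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>       /* BLKFLSBUF */

    /* Write one label sector with O_DIRECT, then flush the device. */
    static int
    write_label_sector(const char *path, const void *label, size_t lb_size)
    {
        void *buf;
        int fd, err = -1;

        if ((fd = open(path, O_RDWR | O_DIRECT)) < 0)
            return (-1);

        /* O_DIRECT requires sector aligned buffers. */
        if (posix_memalign(&buf, lb_size, lb_size) != 0)
            goto out;

        memcpy(buf, label, lb_size);
        if (write(fd, buf, lb_size) != (ssize_t)lb_size)
            goto out_free;

        /* Push dirty data to the device ... */
        if (fsync(fd) != 0)
            goto out_free;

        /* ... then have the kernel flush the block device. */
        err = ioctl(fd, BLKFLSBUF, 0);

    out_free:
        free(buf);
    out:
        (void) close(fd);
        return (err);
    }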
NOTE: There was recently some concern expressed that writing EFI
labels from user space on Linux was not the right way to do this,
and that instead two kernel ioctl()s should be used to create and
remove partitions. After some investigation it's clear to me that
using those ioctl()s would be a bad idea. They in fact don't
actually write partition tables to the disk, they only create
the partition devices in the kernel. So what you really want
to do is write the label out from user space, then prompt the
kernel to re-read the partition table from disk to create the
partitions. This is in fact exactly what newer versions of
parted do.
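That prompt is just the BLKRRPART ioctl(). A sketch, assuming
the label has already been written and flushed:

    #include <sys/ioctl.h>
    #include <linux/fs.h>       /* BLKRRPART */

    /*
     * Ask the kernel to re-read the on-disk partition table and
     * (re)create the partition devices.
     */
    static int
    reread_partition_table(int fd)
    {
        return (ioctl(fd, BLKRRPART, 0));
    }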
When creating partition tables we always need to wait until not
only the /dev/<disk><part> device appears but, just as
importantly, if we were originally given a udev path, until the
/dev/disk/*/<name>-part<part> symlink is created. However,
since the partition naming convention differs between /dev/ and
/dev/disk we determine based on the path which convention to
expect and then wait (for a few seconds) for the device to be
created. Based on my experience with udev on my test nodes it
takes about 300ms for the devices to be created after being
prompted by the kernel. This time will vary somewhat based
on how complicated your udev rules are, so for safety I threw
in a factor of 10. We wait 3 seconds for the devices to appear
before erroring out with a failure.
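A sketch of that settle loop. The 100ms poll interval is my
choice for illustration; only the 3 second total comes from the
reasoning above:

    #include <unistd.h>
    #include <sys/stat.h>

    /*
     * Poll for the expected device node or udev symlink for up to
     * 3 seconds (30 polls, 100ms apart) before giving up.
     */
    static int
    wait_for_partition(const char *partpath)
    {
        struct stat st;
        int i;

        for (i = 0; i < 30; i++) {
            /*
             * stat() follows symlinks, so the link and its target
             * device node must both exist.
             */
            if (stat(partpath, &st) == 0)
                return (0);

            (void) usleep(100000);
        }

        return (-1);    /* device never appeared */
    }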
An additional minor fix includes checking the force flag in the
EFI_GPT_PRIMARY_CORRUPT case. This allows you to force the
update even in the corrupt partition case.
Finally, since these are Linux-only changes I've dropped the
devid code entirely here because I still can't think of why we
would need or want it on a Linux system.
To simplify creation and management of test configurations the
dragon and x4550 configs have been integrated with udev. Our
current best guess as to how we'll actually manage the disks in
these systems is with a udev mapping scheme. The current leading
scheme is to map each drive to a simple <CHANNEL><RANK> id. In
this mapping each CHANNEL is represented by the letters a-z, and
the RANK is represented by the numbers 1-n. A CHANNEL should
identify a group of RANKS which are all attached to a single
controller, and each RANK represents a disk. This provides a
nice mechanism to locate a specific drive given a known hardware
configuration. Various hardware vendors use a similar scheme.
A nice side effect of these changes is that it allowed me to
make the raid0/raid10/raidz/raidz2 setup functions generic.
This makes adding new test configs easy; you just need to create
a udev rules file for your test config which conforms to the
naming scheme. A hypothetical example follows.
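Such a rules file might look like the following. The file name
and the ID_PATH values are made up for illustration; the real
values depend entirely on the hardware:

    # /etc/udev/rules.d/99-zpool.rules (hypothetical example)
    # Map drives to <CHANNEL><RANK> names under /dev/disk/zpool/,
    # channel 'a' being one controller and ranks 1-n its disks.
    KERNEL=="sd*[!0-9]", ENV{ID_PATH}=="pci-0000:04:00.0-scsi-0:0:0:0", SYMLINK+="disk/zpool/a1"
    KERNEL=="sd*[!0-9]", ENV{ID_PATH}=="pci-0000:04:00.0-scsi-0:0:1:0", SYMLINK+="disk/zpool/a2"
    KERNEL=="sd*[!0-9]", ENV{ID_PATH}=="pci-0000:05:00.0-scsi-0:0:0:0", SYMLINK+="disk/zpool/b1"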
After spending considerable time thinking about this I've come to
the conclusion that on Linux systems we don't need Solaris style
devid support. Instead we can simply use udev if we are careful;
there are even some advantages.
The Solaris style devids are designed to provide a mechanism by
which a device can be opened reliably regardless of its location
in the system. This is exactly what udev provides us on Linux: a
flexible mechanism for consistently identifying the same devices
regardless of probing order. We just need to be careful to always
open the device by the path provided at creation time, and this
path must be stored in ZPOOL_CONFIG_PATH, roughly as sketched
below. This in fact has certain advantages.
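In code terms the rule is simply the following; a minimal sketch
with the error handling trimmed, where open_vdev() is a
hypothetical name:

    #include <fcntl.h>
    #include <libnvpair.h>
    #include <sys/fs/zfs.h>     /* ZPOOL_CONFIG_PATH */

    /*
     * Always open a vdev by the exact path stored in its config
     * at creation time; udev keeps that path stable regardless of
     * probing order.
     */
    static int
    open_vdev(nvlist_t *nv)
    {
        char *path;

        if (nvlist_lookup_string(nv, ZPOOL_CONFIG_PATH, &path) != 0)
            return (-1);

        return (open(path, O_RDWR));
    }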
For example, if in your system you always want the zpool to be
able to locate the disk regardless of physical location you can
create the pool using /dev/disk/by-id/. This is perhaps what
you'd want on a desktop system where the exact location is not
that important; it's more critical that all the disks can be
found.
However, in an enterprise setup there's a good chance that the
physical location of each drive is important. You have likely
set things up such that your raid groups span multiple host
adapters, so that you can lose an adapter without downtime. In
this case you would want to use the /dev/disk/by-path/ path to
ensure the path information is preserved and you always open the
disks at the right physical locations. This would ensure your
system never gets accidentally misconfigured and still just
works because the zpool was still able to locate the disk.
Finally, if you want to get really fancy you can always create
your own udev rules. This way you could implement whatever lookup
scheme you wanted in user space for your drives. This includes
nice cosmetic things like being able to control the device names
shown in tools like zpool status, since those names are just
based on the device names.
I've yet to come up with a good reason to implement devid support
on Linux since we have udev. But I've still just commented it out
for now because somebody might come up with a really good reason
I forgot.