24f3d6e49e
In check_disk() we should only check the entire device if it is not a whole disk. If it is a whole disk with an EFI label on it, it is possible that libblkid will misidentify the device as a filesystem. I had a case yesterday where 2 bytes in the EFI GUID happened to be set to just the right values such that libblkid decided there was a minix filesystem there. If it is a whole device we look for an EFI label instead.

If we are able to read the backup EFI label from a device but the primary is corrupt, then don't bother trying to stat the partitions in /dev/; the kernel will not create devices using the backup label when the primary is damaged.

Add code to determine if we have a udev path instead of a normal device path. In this case use the -part# partition naming scheme instead of the /dev/disk# scheme. This is important because we always want to access devices using the full path provided at configuration time.

Re-added support for zpool_relabel_disk(); now that we have the full libefi library in place we do have access to this functionality.

Lots of additional paranoia to ensure EFI labels are written correctly. These changes include:

1) Remove the O_NDELAY flag when opening a file descriptor for libefi. This flag should really only be used when you do not intend to do any file IO. Under Solaris only ioctl()s were performed; under Linux we do perform reads and writes.

2) Use O_DIRECT to ensure any caching is bypassed while writing or reading the EFI labels. This change forces the use of sector-aligned memory buffers, which are allocated using posix_memalign() (see the first sketch below).

3) Add additional efi_debug error messages to efi_ioctl().

4) While doing an fsync is good to ensure the EFI label is on disk, we can, and should, go one step further by issuing the BLKFLSBUF ioctl(). This signals the kernel to instruct the drive to flush its on-disk cache.

5) Because of some initial strangeness I observed in testing with some flaky drives, be extra paranoid in zpool_label_disk(). After we've written the device without error, flushed the drive caches, and correctly detected the new partitions created by the kernel, additionally read back the EFI label from user space to make sure it is intact and correct. I don't think we can ever be too careful here.

NOTE: There was recently some concern expressed that writing EFI labels from user space on Linux was not the right way to do this, and that instead two kernel ioctl()s should be used to create and remove partitions. After some investigation it's clear to me that using those ioctl()s would be a bad idea. They in fact don't actually write partition tables to the disk; they only create the partition devices in the kernel. So what you really want to do is write the label out from user space, then prompt the kernel to re-read the partition table from disk to create the partitions (see the second sketch below). This is in fact exactly what newer versions of parted do.
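To illustrate points 2) and 5) above, here is a minimal, hypothetical sketch of reading back the primary EFI label through O_DIRECT with a sector-aligned buffer. It is not the actual libefi code: the helper name read_primary_label(), the SECTOR_SIZE constant, and the 34-sector label size (protective MBR + GPT header + partition entry array) are illustrative assumptions.

    #define _GNU_SOURCE             /* O_DIRECT on glibc */
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>

    #define SECTOR_SIZE   512       /* assumed logical sector size */
    #define LABEL_SECTORS 34        /* assumed primary GPT label size */

    /*
     * Read the primary EFI (GPT) label region from a whole-disk device
     * using O_DIRECT.  Because O_DIRECT bypasses the page cache, the
     * buffer must be sector aligned, hence posix_memalign().  Note that
     * O_NDELAY is intentionally not used; we intend to do real reads,
     * not just ioctl()s.
     */
    static int
    read_primary_label(const char *path, void **bufp, size_t *lenp)
    {
        size_t len = SECTOR_SIZE * LABEL_SECTORS;
        void *buf;
        int fd;

        if (posix_memalign(&buf, SECTOR_SIZE, len) != 0)
            return (-1);

        if ((fd = open(path, O_RDONLY | O_DIRECT)) < 0) {
            free(buf);
            return (-1);
        }

        if (pread(fd, buf, len, 0) != (ssize_t)len) {
            (void) close(fd);
            free(buf);
            return (-1);
        }

        (void) close(fd);
        *bufp = buf;
        *lenp = len;
        return (0);
    }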
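And a companion sketch for point 4) and the NOTE: after the label has been written from user space, fsync() the whole-disk file descriptor, flush the block device with the BLKFLSBUF ioctl(), and then prompt the kernel to re-read the partition table with BLKRRPART so the new partition devices show up under /dev/. The helper name flush_and_reread_partitions() is illustrative and the error handling is minimal; it is a sketch of the idea, not the zpool_label_disk() implementation.

    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>       /* BLKFLSBUF, BLKRRPART */

    /*
     * After writing an EFI label from user space: make sure it is on
     * disk (fsync + BLKFLSBUF), then ask the kernel to re-read the
     * partition table (BLKRRPART) so the partition devices are created.
     */
    static int
    flush_and_reread_partitions(int fd)
    {
        if (fsync(fd) != 0)
            return (-1);

        if (ioctl(fd, BLKFLSBUF) != 0)
            return (-1);

        if (ioctl(fd, BLKRRPART) != 0)
            return (-1);

        return (0);
    }

Once BLKRRPART succeeds, the caller can stat the expected partition nodes and, for extra paranoia, read the label back (for example with something like read_primary_label() above) before declaring success.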
README
============================ ZFS KERNEL BUILD ============================

1) Build the SPL (Solaris Porting Layer) module, which is designed to provide many Solaris APIs in the Linux kernel which are needed by ZFS. To build the SPL:

    tar -xzf spl-x.y.z.tgz
    cd spl-x.y.z
    ./configure --with-linux=<kernel src>
    make
    make check <as root>

2) Build ZFS. This port is based on the build specified by the ZFS.RELEASE file. You will need to have both the kernel and SPL source available. To build ZFS for use as a Linux kernel module:

    tar -xzf zfs-x.y.z.tgz
    cd zfs-x.y.z
    ./configure --with-linux=<kernel src> \
                --with-spl=<spl src>
    make
    make check <as root>

============================ ZPIOS TEST SUITE ============================

3) Provided is an in-kernel test application called zpios which can be used to simulate a parallel IO load. It may be used as a stress or performance test for your configuration. To simplify testing, scripts are provided in the scripts/ directory which offer a few pre-built zpool configurations and zpios test cases. By default 'make check' as root will run a simple test against several small loopback devices created in /tmp/.

    cd scripts
    ./zfs.sh                                # Load the ZFS/SPL modules
    ./zpios.sh -c lo-raid0.sh -t tiny -v    # Tiny zpios loopback test
    ./zfs.sh -u                             # Unload the ZFS/SPL modules

Enjoy,
Brian Behlendorf <behlendorf1@llnl.gov>