Merge commit 'refs/top-bases/linux-kernel-disk' into linux-kernel-disk

Brian Behlendorf 2009-11-02 13:09:08 -08:00
commit 6101f4eff7
3 changed files with 129 additions and 16 deletions

ChangeLog

@@ -1,3 +1,84 @@
2009-11-02 Brian Behlendorf <behlendorf1@llnl.gov>
* : Tag zfs-0.4.6 - Use 'git log --no-merges' for full change log.
* Rebased to ZFS b121 from OpenSolaris.
* module/zfs/vdev_disk.c: Finally a feature complete implementation:
- Handle dynamic bio merge_bdev limitations when constructing the
bio set associated with a dio. This previously prevented us from
layering cleanly on the md and dm virtual devices.
- Removed hard coded 512 byte sector size.
- Correctly determine the device size when using a partition.
- Hold an extra dio reference when submitting bios using
submit_bio() to prevent a completion race (a sketch of this
pattern follows this entry).
* lib/libefi/*: Added a fully functional libefi library from Solaris.
This allows us to properly create and access GPT style partition
tables which are used when a whole device is added to a zpool.
* cmd/zpool/zpool_vdev.c: Fully integrated zpool with the Linux
libblkid library. This allows zpool to identify existing filesystems
and other signatures on a device so in-use devices are not accidentally
reused. When given a whole device with a GPT partition table all
partitions will be checked for existing filesystems (a small libblkid
example follows this entry). At the moment MBR style partition tables
cannot be checked and the force option must be used.
* cmd/zpool/zpool_vdev.c: Solaris devid support has been removed in
favor of Linux's udev. This means that a zpool device will always be
opened using the path provided at configuration time. This may
initially seem limiting but it has certain advantages:
- When creating a zpool where the physical location of the device
is NOT important, simply create the pool using the /dev/disk/by-id paths.
This will ensure that regardless of physical location the device
will be properly located.
- When creating a zpool where the physical location of the device
is important use the /dev/disk/by-path paths. This will ensure that
devices are never accidentally detected and used in an incorrect
location which would compromise the redundancy of the system.
- Even better, you can use your own udev rules file to set up
any mapping and naming convention you desire. One example of a
custom rule is to map physical device locations onto a grid that
uses letters and numbers for coordinates. Each letter might represent
a specific bus/channel and each number a specific device. For large
configurations this provides an easy way to identify devices.
* module/zpios/zpios.c: Update to use kobject_set_name() for
increased portability.
* module/*/*: Update module init/exit access points to use the
spl_module_{init,exit}() macro API. This ensures the cwd is
immediately set to '/' and may be leveraged later for any
additional module setup/cleanup which is required.
* cmd/ztest/ztest.c: Check the ftruncate() return code to prevent
warnings when the --fortify-source option is used in rpm builds
(a trivial example follows this entry).
* config/Rules.am: Set DEBUG/NDEBUG globally when building user
space components.
* scripts/zconfig.sh: Initial hook for running additional sanity
tests as part of 'make check'. Currently, there are only two
tests which do some basic configuration checking but they should
be extended as much as possible to prevent regressions. Tests
should also all be written so they run entirely in-tree.
* scripts/zpios-sanity.sh: Initial hook for validating real IO
using all block devices and all raid configurations. Supported
device types include scsi, ide, md, dm, ram, loop, and file.
Supported raid types include raid0, raid10, raidz, and raidz2.
* scripts/zpool-config/*: Update dragon and x4550 configs to use
custom udev rules file with <A-Z><1-N> naming convention. Add
configs for md, dm, and ram block devices to verify functionality.
* zfs-test.spec.in: Added zfs-test package which extends the existing
in-tree test infrastructure such that it can be run as part of an
installed package. This simplifies the testing of tagged releases.
* zfs-modules.spec.in: Various spec file tweaks for the supported
distros: RHEL5, RHEL6, SLES10, SLES11, Chaos4, Fedora 11.
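
The vdev_disk.c item above holds an extra dio reference across submit_bio()
to avoid a completion race. Below is a minimal sketch of that pattern, not
the actual module/zfs/vdev_disk.c code: the structure and function names are
hypothetical and a 2.6-era block API is assumed (submit_bio(rw, bio) and a
two-argument bi_end_io callback).

    /*
     * Illustrative sketch only.  The submitter takes its own reference on
     * the aggregate I/O (dio) before issuing any bios, so even if every
     * bio completes before the submission loop finishes the dio cannot be
     * freed out from under it.
     */
    #include <linux/bio.h>
    #include <linux/slab.h>

    struct dio_request {                    /* hypothetical aggregate I/O */
        atomic_t dr_ref;                    /* outstanding references */
        int dr_error;                       /* first error observed */
        int dr_bio_count;                   /* number of attached bios */
        struct bio *dr_bio[0];              /* attached bios */
    };

    static void
    dio_put(struct dio_request *dr)
    {
        /* Dropping the final reference means the whole dio is complete. */
        if (atomic_dec_and_test(&dr->dr_ref))
            kfree(dr);
    }

    static void
    dio_end_io(struct bio *bio, int error)
    {
        struct dio_request *dr = bio->bi_private;

        if (error)
            dr->dr_error = error;

        bio_put(bio);
        dio_put(dr);                        /* drop this bio's reference */
    }

    static void
    dio_submit(struct dio_request *dr, int rw)
    {
        int i;

        /* Extra reference held by the submitter across the loop below. */
        atomic_inc(&dr->dr_ref);

        for (i = 0; i < dr->dr_bio_count; i++) {
            atomic_inc(&dr->dr_ref);        /* one reference per bio */
            dr->dr_bio[i]->bi_end_io = dio_end_io;
            dr->dr_bio[i]->bi_private = dr;
            submit_bio(rw, dr->dr_bio[i]);
        }

        dio_put(dr);                        /* drop the submitter's reference */
    }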
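
The libblkid integration above is what lets zpool recognize devices that
already contain something. The following is a small userspace illustration,
not the actual cmd/zpool/zpool_vdev.c code: check_device() and its message
are hypothetical, only the libblkid cache calls are real library API.

    /* Illustrative only; compile with: gcc -o check check.c -lblkid */
    #include <stdio.h>
    #include <stdlib.h>
    #include <blkid/blkid.h>

    static int
    check_device(const char *path)
    {
        blkid_cache cache;
        char *type;

        if (blkid_get_cache(&cache, NULL) < 0)
            return (-1);

        /* Returns the detected signature (e.g. "ext3", "swap") or NULL. */
        type = blkid_get_tag_value(cache, "TYPE", path);
        if (type != NULL) {
            fprintf(stderr, "%s appears to contain an existing %s "
                "signature, use the force option to override\n", path, type);
            free(type);
            blkid_put_cache(cache);
            return (1);
        }

        blkid_put_cache(cache);
        return (0);
    }

    int
    main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <device>\n", argv[0]);
            return (2);
        }

        return (check_device(argv[1]));
    }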
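
The ztest ftruncate() item boils down to checking a return value that glibc
marks warn_unused_result when _FORTIFY_SOURCE is enabled. A trivial
standalone example, not the actual cmd/ztest/ztest.c change:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd = open("testfile", O_RDWR | O_CREAT, 0644);

        if (fd < 0) {
            perror("open");
            exit(1);
        }

        /* Checking the result silences the warn_unused_result warning. */
        if (ftruncate(fd, 1 << 20) != 0) {
            perror("ftruncate");
            close(fd);
            exit(1);
        }

        close(fd);
        return (0);
    }
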
2009-08-04 Brian Behlendorf <behlendorf1@llnl.gov>
* : Tag zfs-0.4.5 - Use 'git log --no-merges' for full change log.
@@ -117,7 +198,7 @@
The atomic support is not 100% fully implemented but it's a good
first step towards cleanly supporting the architecture.
- Added powerpc ISA type.
-- Explictly use signed char for portability. On x86/x86_64
+- Explicitly use signed char for portability. On x86/x86_64
systems the default char type is signed, on ppc/ppc64 systems
the default char type is unsigned.
- Core target arch support for conditional compilation of SUBDIRs.

META

@@ -1,6 +1,6 @@
Meta: 1
Name: zfs
Branch: 1.0
-Version: 0.4.5
+Version: 0.4.6
Release: 1
Release-Tags: relext

TODO

@@ -1,19 +1,51 @@
-SUMMARY OF MAJOR KNOWN PROBLEMS IN v0.4.5 (Development Release)
-
-- Implement something similar to the Solaris devid support to ensure
-  ZFS properly locates your disks even when moved. The current port
-  is dependent on using something like udev to ensure this can never
-  happen but this is not a viable long term solution.
-
-- Implement something like Solaris's sysevent support to detect when
-  drives appear or are removed. There is no easy analog to my knowledge
-  for linux and we will need to come up with something.
-
-- Get the ZVOL working as a vehicle for further stress testing under
-  Linux, and to provide one useful user space interface for access to
-  the DMU.
-
-- Get the ZPL working minimal support will be needed for lustre.
-
-- Integrate the FUSE port in to this code base, or rebase it as its
-  own zfs-fuse package which is built against the zfs-devel package.
+SUMMARY OF MAJOR KNOWN PROBLEMS IN v0.4.6 (Development Release)
+
+* Fault Management (FM) and sysevent support / analog.
+  bugzilla 14866, 15645
+
+This is probably the biggest remaining chunk of work.  Linux has no
+direct equivalent of the Solaris Fault Management Architecture (FMA)
+and we need one.  All fault information is currently ignored and no
+disk errors are even logged.  We need to settle on a design for this
+but minimally it needs to log the events to syslog.
+
+* Implement the ZVOL.
+  bugzilla xxxxx
+
+This should be pretty straight forward now that the DMU is fully
+implemented and solid.  It just needs to be done.
+
+* Implement the ZPL.
+  bugzilla xxxxx
+
+Getting basic ZPL support should be pretty straight forward.  Moving
+beyond that to fully integrate with the VFS for things like mmap and
+file locking will be trickier.
+
+* Integrate the ZFS-FUSE port into this code base.
+  bugzilla xxxxx
+
+Merging the zfs-fuse code base in with this project would be nice from a
+code maintenance standpoint.  This code base is quite a bit newer than
+zfs-fuse and it already provides a libzpool library for zfs-fuse to link
+against.  This should be a pretty straight forward addition.
+
+* Emulate kthreads with pthreads in userspace.
+  bugzilla xxxxx
+
+There is a patch available for this but each time I've integrated it
+I've observed SIGSEGVs in ztest.  Once this patch is in place ztest
+can be used to exercise the kthread API which brings us one step closer
+to being able to run it in the kernel as an additional sanity check.
+(A rough pthread-based sketch of such a shim appears at the end of
+this file.)
+
+* DMU Performance
+  bugzilla 13566
+
+While performance is currently not bad it is not where it needs to be
+for production use.  The latest test results which can be found in the
+docs directly show that on hardware which is capable of 8GB/s we only
+see a few GB/s when running through the DMU.  To address this we need
+to finish getting the code working with the kernel lock profiler and
+look for some hot locks.  Additionally, it would be interesting to run
+the same tests on Solaris (once we have a ZVOL/ZPL) and compare the
+performance.  It's not at all clear to me Solaris currently does better.
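
The "Emulate kthreads with pthreads in userspace" item above could look
roughly like the shim below. This is only an illustration of the idea, not
the patch referred to; the kthread_t layout and function names are
hypothetical.

    /* Illustrative only; build with: gcc -pthread -c kthread_shim.c */
    #include <pthread.h>
    #include <stdlib.h>

    typedef struct kthread {
        pthread_t t_tid;                    /* underlying pthread */
        void (*t_func)(void *);             /* kthread-style entry point */
        void *t_arg;
    } kthread_t;

    static void *
    kthread_trampoline(void *arg)
    {
        kthread_t *kt = arg;

        kt->t_func(kt->t_arg);              /* run the thread function */
        return (NULL);
    }

    /* Userspace stand-in for a kernel-style thread create call. */
    kthread_t *
    kthread_create(void (*func)(void *), void *arg)
    {
        kthread_t *kt = calloc(1, sizeof (*kt));

        if (kt == NULL)
            return (NULL);

        kt->t_func = func;
        kt->t_arg = arg;

        if (pthread_create(&kt->t_tid, NULL, kthread_trampoline, kt) != 0) {
            free(kt);
            return (NULL);
        }

        return (kt);
    }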