As exposed by the fc11 debug kernel, we need to hold a reference over all
calls to submit_bio(). Otherwise it is possible for all the completion
callbacks to run before we exit __vdev_disk_physio(), and we end up with
a GPF. This was quickly exposed when slab poisoning was enabled. I
have added helper functions to cleanly track the reference counts. In
addition, dr->dr_ref was converted from an integer to an atomic type,
which removes the need for the spinlock. As a nice side effect of
these changes the code is now slightly cleaner and clearer.
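Roughly, the helper pattern looks like the sketch below; the structure
layout and function names are illustrative rather than the exact symbols
in the patch. The submitting thread holds its own reference until every
bio has been issued, and each completion callback drops one, so the
request cannot be freed out from under __vdev_disk_physio().

    #include <linux/slab.h>
    #include <asm/atomic.h>

    /* Illustrative sketch; field and helper names are assumptions. */
    typedef struct dio_request {
            atomic_t        dr_ref;         /* outstanding references */
            int             dr_error;       /* first error observed */
            /* per-request state for the in-flight bios ... */
    } dio_request_t;

    static void
    vdev_disk_dio_get(dio_request_t *dr)
    {
            atomic_inc(&dr->dr_ref);
    }

    static void
    vdev_disk_dio_put(dio_request_t *dr)
    {
            /* Free the request only when the last reference drops. */
            if (atomic_dec_and_test(&dr->dr_ref))
                    kfree(dr);
    }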
With this patch applied I get the following failure 100% of the time.
I'd prefer to debug it and keep moving forward, but I do not have the
time right now, so I'm reverting the patch to the version which worked.
Ricardo, please fix.
(gdb) bt
#0  ztest_dmu_write_parallel (za=0x2aaaac898960) at
    ../../cmd/ztest/ztest.c:2566
#1  0x0000000000405a79 in ztest_thread (arg=<value optimized out>)
    at ../../cmd/ztest/ztest.c:3862
#2  0x00002b2e6a7a841d in zk_thread_helper (arg=<value optimized out>)
    at ../../lib/libzpool/kernel.c:131
#3  0x000000379be06367 in start_thread (arg=<value optimized out>)
    at pthread_create.c:297
#4  0x000000379b2d30ad in clone () from /lib64/libc.so.6
This resolves previous scalability concerns about the cost of calling
curthread, which previously required a list walk. The kthread address
is now tracked as thread-specific data which can be quickly returned.
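The lookup now amounts to a single pthread_getspecific() call; a minimal
sketch, assuming a pthread key created at library initialization (the
exact names in libzpool may differ):

    #include <pthread.h>

    /* Key created once at library init; each thread's wrapper stores
     * its kthread_t address here before running the start routine. */
    static pthread_key_t kthread_key;

    typedef struct kthread kthread_t;

    /* curthread: an O(1) TSD lookup instead of a global list walk. */
    static inline kthread_t *
    zk_thread_current(void)
    {
            return (pthread_getspecific(kthread_key));
    }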
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
There is concern that READA may do more than simply reorder the queue.
There may be an increased chance that a request marked READA will
fail because the elevator considers it optional. For this reason, all
read requests, even speculative ones, have been converted back to READ.
The 2.6.30 kernel build system sets -Wframe-larger-than=2048, which causes
a warning to be generated when an individual stack frame exceeds 2048 bytes.
This caught the spa_history_log() and dmu_objset_snapshot() functions,
which declared a data structure on the stack containing a char
array of MAXPATHLEN bytes. This is defined to be 4096 in the Linux kernel
and I imagine it is quite large under Solaris as well. Regardless, the
offending data structures were moved to the heap to correctly keep the
stack depth to a minimum. We might consider setting this value even
lower to catch additional offenders because we are expecting deep stacks.
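The fix follows the usual pattern of trading a large stack object for a
heap allocation that lives only for the duration of the call. The
function below is a made-up example of that pattern using the
Solaris-style kmem interfaces, not the actual spa_history_log() change:

    #include <sys/kmem.h>
    #include <sys/param.h>          /* MAXPATHLEN */

    static int
    example_format_name(void)
    {
            /* Previously: char name[MAXPATHLEN]; -- 4096 bytes of stack. */
            char *name = kmem_alloc(MAXPATHLEN, KM_SLEEP);

            /* ... fill and use 'name' exactly as the stack array was ... */

            kmem_free(name, MAXPATHLEN);
            return (0);
    }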
Tested under CHAOS4.2, RHEL5, SLES11, and FC11 (all x86_64)
Features:
Honor spa_mode() when opening the block device. Previously this
was ignored and devices were always opened read/write.
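In rough terms the open path now translates the pool's mode into the
block-device open flags instead of hard-coding read/write. A sketch
against the 2.6.28+ open_bdev_exclusive() interface, with an assumed
helper name and the Solaris-style FREAD/FWRITE flags:

    #include <linux/fs.h>           /* fmode_t, FMODE_READ, FMODE_WRITE */
    #include <sys/file.h>           /* FREAD, FWRITE (Solaris-style) */

    /* Assumed helper: map spa_mode() flags onto an fmode_t. */
    static fmode_t
    vdev_bdev_mode(int smode)
    {
            fmode_t mode = 0;

            if (smode & FREAD)
                    mode |= FMODE_READ;
            if (smode & FWRITE)
                    mode |= FMODE_WRITE;

            return (mode);
    }

The result is then handed to the kernel's exclusive block device open,
so a pool imported read-only no longer opens its devices for writing.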
Integrated the DKIOCFLUSHWRITECACHE zio operation with the Linux
WRITE_BARRIER for kernels post 2.6.24, where empty bio requests are
supported. For earlier kernels ENOTSUP is returned and no barriers are
performed. If RHEL5 based kernels are intended to be supported long
term we may need to make use of the old awkward API.
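On the newer kernels the flush amounts to submitting a zero-length bio
flagged as a barrier. The sketch below illustrates the idea, with names
that approximate but are not necessarily identical to those in the patch:

    #include <linux/bio.h>
    #include <linux/blkdev.h>
    #include <linux/errno.h>

    /* Completion callback using the 2.6.24+ bi_end_io signature. */
    static void
    vdev_disk_flush_completion(struct bio *bio, int error)
    {
            /* ... record 'error' in the owning zio and signal it done ... */
            bio_put(bio);
    }

    /* Sketch: issue DKIOCFLUSHWRITECACHE as an empty WRITE_BARRIER bio. */
    static int
    vdev_disk_io_flush(struct block_device *bdev, void *zio)
    {
            struct bio *bio;

            bio = bio_alloc(GFP_KERNEL, 0);
            if (bio == NULL)
                    return (ENOMEM);

            bio->bi_end_io = vdev_disk_flush_completion;
            bio->bi_private = zio;
            bio->bi_bdev = bdev;
            submit_bio(WRITE_BARRIER, bio);

            return (0);
    }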
With the addition of WRITE_BARRIER support all writes which were
WRITE_SYNC can now be safely made WRITE bios. They will now take
advantage of aggregation in the elevator and improved write performance
is likely.
Notice the ZIO_FLAG_SPECULATIVE flag and pass the hint along to the
elevator by using READA instead of READ. This gives the elevator
the ability to prioritize the real READs ahead of the speculative IO
if needed.
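The mapping is a one-liner at submit time; a sketch, with the submit
helper name being an assumption:

    #include <linux/bio.h>
    #include <linux/fs.h>           /* READ, READA */
    #include <sys/zio.h>            /* zio_t, ZIO_FLAG_SPECULATIVE */

    /* Sketch: forward the speculative hint to the elevator. */
    static void
    vdev_disk_submit_read(zio_t *zio, struct bio *bio)
    {
            int rw = (zio->io_flags & ZIO_FLAG_SPECULATIVE) ? READA : READ;

            submit_bio(rw, bio);
    }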
Implement an initial version of vdev_disk_io_done() which, in the case
of an EIO error, triggers a media change check. If it determines a
media change has occurred we fail the device and remove it from the
config. I'm sure this logic can be improved further, but for now it
is an improvement over the VERIFY() that no error will ever happen.
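In outline the handler looks something like the sketch below, which
leans on the kernel's check_disk_change() and ZFS's asynchronous remove
request; treat it as an approximation of the new code rather than the
exact implementation:

    #include <linux/fs.h>           /* check_disk_change() */
    #include <sys/zio.h>
    #include <sys/vdev_impl.h>
    #include <sys/spa.h>
    /* vdev_disk_t (with its vd_bdev member) is the Linux-specific
     * per-vdev state; the header name is omitted here. */

    static void
    vdev_disk_io_done(zio_t *zio)
    {
            /* On EIO, see whether the media changed; if so, fail the
             * vdev and ask the SPA to remove it from the config. */
            if (zio->io_error == EIO) {
                    vdev_t *v = zio->io_vd;
                    vdev_disk_t *vd = v->vdev_tsd;

                    if (check_disk_change(vd->vd_bdev)) {
                            v->vdev_remove_wanted = B_TRUE;
                            spa_async_request(zio->io_spa, SPA_ASYNC_REMOVE);
                    }
            }
    }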
APIs:
2.6.22 API change
Unused destroy_dirty_buffers arg removed from the invalidate_bdev() prototype.
2.6.24 API change
Empty write barriers are now supported and we should use them.
2.6.24 API change
Size argument dropped from bio_endio and bi_end_io, because
bi_end_io is now called only once, when the request is complete.
There is no longer any need for a size argument. This also means
that partial IOs are no longer possible and the end_io callback
should not check bi->bi_size. Finally, the return type was updated
to void.
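Side by side, the two callback shapes look roughly like this; the
HAVE_2ARGS_BIO_END_IO_T guard stands in for whatever configure-time
check selects between them, and the function is purely illustrative:

    #include <linux/bio.h>
    #include <linux/completion.h>

    #ifdef HAVE_2ARGS_BIO_END_IO_T
    /* 2.6.24+: void return, called exactly once when the bio completes. */
    static void
    example_end_io(struct bio *bio, int error)
    {
            /* Partial completions are impossible; bi_size is not checked. */
            complete(bio->bi_private);
    }
    #else
    /* Pre-2.6.24: may be invoked for partial completions. */
    static int
    example_end_io(struct bio *bio, unsigned int size, int error)
    {
            if (bio->bi_size)
                    return (1);     /* more of the request still in flight */

            complete(bio->bi_private);
            return (0);
    }
    #endif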
2.6.28 API change
open/close_bdev_excl() renamed to open/close_bdev_exclusive().
2.6.29 API change
BIO_RW_SYNC renamed to BIO_RW_SYNCIO.
Use the legacy BIO_RW_FAILFAST flag if it exists. If it is missing it
means we are running against a kernel with the newer API. We should
be able to enable some fairly smart behavior once we integrate with the
new API, but until I get around to writing that code just remove the
flag entirely. It's not critical for correctness.
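Conceptually the flag handling reduces to something like the sketch
below, where HAVE_BIO_RW_FAILFAST stands in for a configure-time check
for the legacy flag:

    #include <linux/bio.h>

    /* Request fail-fast behavior only when the legacy flag exists;
     * on newer kernels the hint is simply dropped for now. */
    static int
    bio_set_flags_failfast(int rw)
    {
    #ifdef HAVE_BIO_RW_FAILFAST
            rw |= (1 << BIO_RW_FAILFAST);
    #endif
            return (rw);
    }

A read would then be submitted as, for example,
submit_bio(bio_set_flags_failfast(READ), bio).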
Kernel commit 6712ecf8f648118c3363c142196418f89a510b90, which removes the
size argument from bio_endio and bi_end_io, also removes the need to
handle partial IOs in the handler.
- Linux-specific character device registration calls replaced with
  the SPL version for maximum portability between Linux kernels.
- Added ZPIOS_NAME macro.
A compat ioctl handler for zpios was added which simply passes the
ioctl on to the usual handler. The IOWR macros correctly handle
this. Additionally, replace the use of 'struct timespec', which uses
longs internally and is therefore a different size in 32-bit vs 64-bit
objects, with a custom zpios_timespec_t. This structure uses
uint32_t types internally and is safe to pass through an ioctl. The
helper functions for this new type were also moved to a common place
so they may be used safely by user or kernel code.
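The layout is roughly as follows (a sketch from a user-space point of
view; the conversion helper name is illustrative):

    #include <stdint.h>
    #include <time.h>

    /* Fixed-width timespec that is the same size in 32-bit and 64-bit
     * objects, so it can safely cross the ioctl boundary. */
    typedef struct zpios_timespec {
            uint32_t        ts_sec;
            uint32_t        ts_nsec;
    } zpios_timespec_t;

    /* Illustrative helper converting a native timespec. */
    static inline zpios_timespec_t
    zpios_timespec_convert(const struct timespec *ts)
    {
            zpios_timespec_t zts;

            zts.ts_sec  = (uint32_t)ts->tv_sec;
            zts.ts_nsec = (uint32_t)ts->tv_nsec;

            return (zts);
    }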
The intent here is to fully remove the previous Solaris thread
implementation so we don't need to simulate both the Solaris kernel
and user space thread APIs. The few user space consumers of the
thread API have been updated to use the kthread API. In order
to support this we needed to more fully support the kthread API,
and that means not doing crazy things like casting a thread id
to a pointer and using that, as was done before. This first
implementation is not efficient but it does provide all the
correct semantics. If/when performance becomes an issue we
can and should just natively adopt pthreads, which is portable.
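A rough sketch of the shape this takes, assuming a pthread key created
at library init; zk_thread_helper() is the wrapper seen in the libzpool
backtrace earlier, but the other names and fields here are illustrative:

    #include <pthread.h>
    #include <stdlib.h>

    typedef struct kthread {
            pthread_t       t_tid;
            void            (*t_func)(void *);
            void            *t_arg;
    } kthread_t;

    extern pthread_key_t kthread_key;       /* created once at init */

    /* Each thread stashes its kthread_t in TSD before running the
     * caller's start routine, so curthread resolves to real per-thread
     * state rather than a thread id cast to a pointer. */
    static void *
    zk_thread_helper(void *arg)
    {
            kthread_t *kt = arg;

            pthread_setspecific(kthread_key, kt);
            kt->t_func(kt->t_arg);

            return (NULL);
    }

    static kthread_t *
    zk_thread_create(void (*func)(void *), void *arg)
    {
            kthread_t *kt = calloc(1, sizeof (kthread_t));

            kt->t_func = func;
            kt->t_arg = arg;
            (void) pthread_create(&kt->t_tid, NULL, zk_thread_helper, kt);

            return (kt);
    }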
Let me finish by saying I'm not proud of any of this and I would
love to see it improved. However, this slow implementation does
at least provide all the correct kthread API semantics whereas
the previous method of casting the thread ID to a pointer was
dodgy at best.