While ztest does run in user space, we run it with the same stack
restrictions it would have in kernel space. This ensures that any
stack-related issues which would be hit in the kernel can be caught
and debugged in user space instead.
This patch is a first pass to limit the stack usage of every ztest
function to 1024 bytes. Subsequent updates can further reduce this
limit.
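One standard way to enforce this kind of per-function cap at build time
is gcc's -Wframe-larger-than warning; whether this series relied on that
flag is an assumption, but it illustrates the limit being targeted:

#include <string.h>

/*
 * Illustrative only: compiled with -Wframe-larger-than=1024, a frame
 * like this one is flagged at build time instead of silently blowing
 * a small kernel stack at run time.
 */
static char
ztest_example_frame(void)	/* hypothetical function */
{
	char big_buffer[4096];	/* gcc: frame size is larger than 1024 bytes */

	memset(big_buffer, 0, sizeof (big_buffer));
	return (big_buffer[42]);
}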
Due to limited stack space, recursive functions are frowned upon in
the Linux kernel. However, they often are the most elegant solution
to a problem. The following code preserves the recursive function
traverse_visitbp() but moves the local variables AND function
arguments to the heap to minimize the stack frame size. Enough
space is initially allocated on the stack for 20 levels of recursion.
This change does ugly up the code, but it reduces the worst case
usage from roughly 4160 bytes to 960 bytes on x86_64 archs.
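A hedged sketch of the pattern (the names, fields, and bookkeeping here
are illustrative, not the real traverse_visitbp() signature):

#include <stdint.h>

#define	MAX_RECURSE_LEVELS	20

typedef struct visit_frame {
	/* former function arguments */
	void		*vf_arg;
	uint64_t	vf_blkid;
	/* former local variables */
	int		vf_err;
} visit_frame_t;

static int
visit_impl(visit_frame_t *frames, int level)
{
	visit_frame_t *vf = &frames[level];

	if (level == MAX_RECURSE_LEVELS - 1)
		return (-1);	/* depth exhausted; real code bottoms out sooner */

	/* ... per-level work reads and writes vf->... fields ... */

	/* arguments for the next level live in the same preallocated block */
	frames[level + 1] = *vf;
	return (visit_impl(frames, level + 1));
}

static int
visit(void *arg)
{
	visit_frame_t frames[MAX_RECURSE_LEVELS];	/* one up-front block */

	frames[0].vf_arg = arg;
	frames[0].vf_blkid = 0;
	frames[0].vf_err = 0;
	return (visit_impl(frames, 0));
}

Each recursive call now pushes only a pointer, an int, and a return
address rather than a full set of arguments and locals.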
Move dsl_dataset_t local variable from the stack to the heap.
This reduces the stack usage of this function from 2048 bytes
to 176 bytes for x86_64 arches.
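A minimal sketch of the pattern, assuming the usual kmem_alloc() and
kmem_free() interfaces; the surrounding function is hypothetical:

static void
example_caller(void)			/* hypothetical context */
{
	dsl_dataset_t *ds;		/* only a pointer remains on the stack */

	ds = kmem_alloc(sizeof (dsl_dataset_t), KM_SLEEP);
	/* ... use *ds exactly as the old on-stack copy was used ... */
	kmem_free(ds, sizeof (dsl_dataset_t));
}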
There are 3 fixes in this commit. First, update ztest_run() to store
the thread id and not the address of the kthread_t. This will be freed
on thread exit and is not safe to use. This is pretty close to how
things were done in the original ztest code before I got there.
Second, for extra paranoia update thread_exit() to return a special
TS_MAGIC value via pthread_exit(). This value is then verified in
pthread_join() to ensure the thread exited cleanly. This can be
done cleanly because the kthread doesn't provide a return code
mechanism we need to worry about.
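A hedged sketch of the idea (the sentinel value and the join wrapper
shown here are illustrative):

#include <assert.h>
#include <pthread.h>

#define	TS_MAGIC	((void *)0x2151984)	/* illustrative sentinel */

void
thread_exit(void)
{
	/* no kthread return value to carry, so hand back the sentinel */
	pthread_exit(TS_MAGIC);
}

void
thread_join(pthread_t tid)		/* wrapper name is illustrative */
{
	void *ret;

	(void) pthread_join(tid, &ret);
	assert(ret == TS_MAGIC);	/* thread left through thread_exit() */
}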
Third, replace the ztest deadman thread with a signal handler. We
cannot use the previous approach because the correct behavior for
pthreads is to wait for all threads to exit before terminating the
process. Since the deadman thread won't call exit by design we
end up hanging in kernel_exit(). To avoid this we just set up a
SIGALRM signal handler and register a deadman alarm. IMHO this
is simpler and cleaner anyway.
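A hedged sketch of the replacement deadman (handler body simplified):

#include <signal.h>
#include <unistd.h>

static void
ztest_deadman_alarm(int sig)		/* handler name is illustrative */
{
	(void) sig;
	/* only async-signal-safe calls belong in a real handler */
	(void) write(STDERR_FILENO, "ztest: deadman fired\n", 21);
	_exit(1);
}

static void
setup_deadman(unsigned int seconds)
{
	(void) signal(SIGALRM, ztest_deadman_alarm);
	(void) alarm(seconds);		/* one-shot deadman timer */
}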
There was previous discussion of a race with joinable threads but to
be honest I can neither exactly remember the race nor recreate the
issue. I believe it may have had to do with pthread_create() returning
without having set kt->tid since this was done in the created thread.
If that was the race then I've 'fixed' it by ensuring the thread id
is set in the thread AND as the first pthread_create() argument. Why
this wasn't done originally I'm not sure, with luck Ricardo remembers.
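A hedged sketch of what that looks like (the kthread_t layout here is
simplified and error handling is omitted):

#include <pthread.h>
#include <stdlib.h>

typedef struct kthread {
	pthread_t	t_tid;
	void		*(*t_func)(void *);
	void		*t_arg;
} kthread_t;				/* simplified layout */

static void *
thread_boot(void *arg)
{
	kthread_t *kt = arg;

	kt->t_tid = pthread_self();	/* set again from inside the thread */
	return (kt->t_func(kt->t_arg));
}

kthread_t *
thread_create_sketch(void *(*func)(void *), void *arg)
{
	kthread_t *kt = calloc(1, sizeof (kthread_t));

	kt->t_func = func;
	kt->t_arg = arg;
	/* parent side: pthread_create() fills in kt->t_tid before returning */
	(void) pthread_create(&kt->t_tid, NULL, thread_boot, kt);
	return (kt);
}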
Additionally, explicitly set a PAGESIZE guard frame at the end of the
stack to aid in detecting stack overflow. And add some conditional
logic to set STACK_SIZE correctly for Solaris.
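A hedged sketch of the stack setup (the STACK_SIZE values and the
conditional are illustrative):

#include <limits.h>
#include <pthread.h>
#include <unistd.h>

#ifdef __sun__
#define	STACK_SIZE	24576		/* illustrative Solaris value */
#else
#define	STACK_SIZE	8192		/* illustrative kernel-like limit */
#endif

static void
stack_attr_init(pthread_attr_t *attr)
{
	size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);
	size_t size = STACK_SIZE;

	if (size < PTHREAD_STACK_MIN)
		size = PTHREAD_STACK_MIN;	/* libc minimum wins */

	(void) pthread_attr_init(attr);
	(void) pthread_attr_setstacksize(attr, size + pagesize);
	(void) pthread_attr_setguardsize(attr, pagesize);	/* PAGESIZE guard */
}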
There are cases where under Linux it is not safe to sleep in
taskq_dispatch(). Rather than adding Linux specific code to
detect these cases I opted to keep it simple and just never
allow a sleep here. The impact of this should be minimal.
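A hedged sketch of the key piece of the dispatch path (the queue entry
type and return value are simplified stand-ins):

typedef struct taskq_ent_sketch {	/* simplified internal entry */
	void	(*te_func)(void *);
	void	*te_arg;
} taskq_ent_sketch_t;

taskqid_t
taskq_dispatch(taskq_t *tq, task_func_t func, void *arg, uint_t flags)
{
	taskq_ent_sketch_t *t;

	(void) flags;	/* TQ_SLEEP is intentionally ignored on Linux */

	/* never a sleeping allocation; sleeping here is not always safe */
	t = kmem_alloc(sizeof (*t), KM_NOSLEEP);
	if (t == NULL)
		return ((taskqid_t)0);	/* caller must handle the failure */

	t->te_func = func;
	t->te_arg = arg;
	/* ... queue 't' on tq and wake a worker (omitted) ... */
	return ((taskqid_t)1);		/* illustrative id */
}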
I missed an instance of removing the & operator when reducing the
stack usage in this function. This unfortunately doesn't cause a
compile warning but it does cause ztest failures. Anyway,
update the topic branch to correct this mistake.
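Illustrative fragment only (the real call site differs): once the local
becomes a pointer to heap memory, a leftover '&' hands a void * consumer
the address of the pointer itself, which compiles cleanly but scribbles
on the stack.

	zap_cursor_t *zc = kmem_alloc(sizeof (zap_cursor_t), KM_SLEEP);

	bzero(&zc, sizeof (zap_cursor_t));	/* wrong: wipes the pointer and beyond */
	bzero(zc, sizeof (zap_cursor_t));	/* right: wipes the heap buffer */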
Certain functions must never be automatically inlined by gcc because
they are stack heavy or called recursively. This patch flags all
such functions I have found as 'noinline' to prevent gcc from making
the optimization.
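A hedged sketch of the annotation (where the macro actually lives in
the tree is not shown here):

#ifndef	noinline
#define	noinline	__attribute__((noinline))	/* gcc attribute */
#endif

/* stack heavy and recursive, so it must keep its own frame */
static noinline int
visit_recursive(void *arg)		/* illustrative function */
{
	/* ... */
	return (arg == NULL ? 0 : 1);
}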
Reduce stack usage in dsl_deleg_get, gcc flagged it as consuming a
whopping 1040 bytes or potentially 1/4 of a 4K stack. This patch
moves all the large structures and the buffer off the stack and on to
the heap. This includes 2 zap_cursor_t structs each 52 bytes in
size, 2 zap_attribute_t structs each 280 bytes in size, and 1
256 byte char array. The total savings on the stack is 880 bytes
after you account for the 5 new pointers added.
Also the source buffer length has been increased from MAXNAMELEN
to MAXNAMELEN+strlen(MOS_DIR_NAME)+1 as described by the comment in
dsl_dir_name(). A buffer overrun may have been possible with the
slightly smaller buffer.
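A hedged sketch of the sizing (the variable name is illustrative;
MAXNAMELEN and MOS_DIR_NAME are the existing macros):

	int source_len = MAXNAMELEN + strlen(MOS_DIR_NAME) + 1;
	char *source = kmem_alloc(source_len, KM_SLEEP);

	/* ... a name built by dsl_dir_name() now always fits ... */
	kmem_free(source, source_len);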
Reduce kernel stack usage in lzjb_compress() by moving the uint16
array off the stack and on to the heap. I have not measured the exact
performance implications of this, but we absolutely need to keep
stack usage to a minimum. If/when this becomes an issue we can
optimize.
Move xuio stat structures from a header to the dmu.c source as is
done with all the other kstat interfaces. This information is local
to dmu.c, which registers the xuio kstat, and should stay that way.
If you're only going to allow one allocator to be used, and it is
defined at compile time, there is no point including the others in
the build. This patch could/should be refined for Linux to make the
metaslab allocator configurable at run time. That might be a bit
tricky however since you would need to quiesce all IO. Short of
that, making it configurable as a module load option would be a
reasonable compromise.
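A hedged sketch of that compromise (the parameter name and default
string are hypothetical):

#include <linux/module.h>

static char *zfs_metaslab_allocator = "default";	/* hypothetical */
module_param(zfs_metaslab_allocator, charp, 0444);
MODULE_PARM_DESC(zfs_metaslab_allocator,
	"Metaslab block allocator to use, selected at module load time");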
In the linux kernel 'current' is defined to mean the current process
and can never be used as a local variable in a function. Simply
replace all usage of 'current' with 'curr' in this function.
This is a portability change which removes the dependence on the
Solaris thread library. All locations where the Solaris thread API
was used before have been replaced with equivalent Solaris kernel
style thread calls. In user space the kernel style threading API is
implemented in terms of the portable pthreads library. This includes
all threads, mutexes, condition variables, reader/writer locks, and
taskqs.
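A hedged sketch of the mapping for one primitive (mutexes; the real
shim also covers threads, condition variables, rwlocks, and taskqs,
and carries more state than shown here):

#include <pthread.h>

typedef struct kmutex {
	pthread_mutex_t	m_lock;
} kmutex_t;		/* simplified: the real type tracks the owner too */

void
mutex_init(kmutex_t *mp, char *name, int type, void *arg)
{
	(void) name; (void) type; (void) arg;	/* ignored in user space */
	(void) pthread_mutex_init(&mp->m_lock, NULL);
}

void
mutex_enter(kmutex_t *mp)
{
	(void) pthread_mutex_lock(&mp->m_lock);
}

void
mutex_exit(kmutex_t *mp)
{
	(void) pthread_mutex_unlock(&mp->m_lock);
}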
Updated the fix to detect if we are in an interrupt and only sleep
if it is safe to do so. I guess it must be safe to sleep under
Solaris; this must be handled in a soft interrupt handler there.