Use default slab types

We should not override the default memory type of the kmem cache.  This
was previously done to force certain objects which were slightly over
the object size cutoff into KMC_KMEM caches for better performance.

The zfsonlinux/spl#356 patch slightly increases the default cutoff
from 511 bytes to 1024 bytes for x86_64.  This means there is no longer
a need to override the default for these caches.  And since the default
values are now being used, the new spl_kmem_cache_slab_limit and
spl_kmem_cache_kmem_limit tunables will apply to all kmem caches.
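
For illustration, the helper below is a simplified sketch (not the
actual SPL source) of how a default memory type can be derived from
the object size when no KMC_* type flag is passed.  The tunable name
and its PAGE_SIZE/4 = 1024 byte x86_64 default come from spl#356;
everything else here is paraphrased:

    #include <stddef.h>

    #define KMC_KMEM  0x01  /* kmalloc()-backed slabs */
    #define KMC_VMEM  0x02  /* vmalloc()-backed slabs */

    /* Assumed x86_64 default: PAGE_SIZE / 4 = 1024 bytes. */
    static size_t spl_kmem_cache_kmem_limit = 4096 / 4;

    static int
    default_cache_type(size_t size)
    {
            /* Objects at or below the cutoff stay kmem backed;
             * larger objects fall back to vmalloc()-backed slabs. */
            return (size <= spl_kmem_cache_kmem_limit) ?
                KMC_KMEM : KMC_VMEM;
    }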

The following is a list of caches that will be impacted:

cache             | object size   | forced type   | default type
----------------- | ------------- | ------------- | -------------
dnode_t           | 936 bytes     | KMC_KMEM      | KMC_KMEM
zio_cache         | 1104 bytes    | *KMC_KMEM     | *KMC_VMEM
zio_link_cache    | 48 bytes      | KMC_KMEM      | KMC_KMEM
zio_vdev_cache    | 131088 bytes  | KMC_VMEM      | KMC_VMEM
zio_buf_512       | 512 bytes     | KMC_KMEM      | KMC_KMEM
zio_data_buf_512  | 512 bytes     | KMC_KMEM      | KMC_KMEM
zio_buf_1024      | 1024 bytes    | KMC_KMEM      | KMC_KMEM
zio_data_buf_1024 | 1024 bytes    | +KMC_VMEM     | +KMC_KMEM

* Cache memory type will change from KMC_KMEM to KMC_VMEM.
+ Cache memory type will change from KMC_VMEM to KMC_KMEM.
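
Plugging the table's object sizes into the sketch above reproduces the
default type column, e.g. (assert.h assumed):

    #include <assert.h>

    int
    main(void)
    {
            assert(default_cache_type(936)    == KMC_KMEM); /* dnode_t        */
            assert(default_cache_type(1104)   == KMC_VMEM); /* zio_cache      */
            assert(default_cache_type(1024)   == KMC_KMEM); /* zio_buf_1024   */
            assert(default_cache_type(131088) == KMC_VMEM); /* zio_vdev_cache */
            return (0);
    }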

This patch removes another slight point of divergence between ZoL
and Illumos.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Closes #2337

module/zfs/dnode.c

@@ -179,7 +179,7 @@ dnode_init(void)
 {
 	ASSERT(dnode_cache == NULL);
 	dnode_cache = kmem_cache_create("dnode_t", sizeof (dnode_t),
-	    0, dnode_cons, dnode_dest, NULL, NULL, NULL, KMC_KMEM);
+	    0, dnode_cons, dnode_dest, NULL, NULL, NULL, 0);
 	kmem_cache_set_move(dnode_cache, dnode_move);
 }

module/zfs/zio.c

@@ -129,11 +129,11 @@ zio_init(void)
 	vmem_t *data_alloc_arena = NULL;
 
 	zio_cache = kmem_cache_create("zio_cache", sizeof (zio_t), 0,
-	    zio_cons, zio_dest, NULL, NULL, NULL, KMC_KMEM);
+	    zio_cons, zio_dest, NULL, NULL, NULL, 0);
 	zio_link_cache = kmem_cache_create("zio_link_cache",
-	    sizeof (zio_link_t), 0, NULL, NULL, NULL, NULL, NULL, KMC_KMEM);
+	    sizeof (zio_link_t), 0, NULL, NULL, NULL, NULL, NULL, 0);
 	zio_vdev_cache = kmem_cache_create("zio_vdev_cache", sizeof (vdev_io_t),
-	    PAGESIZE, NULL, NULL, NULL, NULL, NULL, KMC_VMEM);
+	    PAGESIZE, NULL, NULL, NULL, NULL, NULL, 0);
 
 	/*
 	 * For small buffers, we want a cache for each multiple of
@@ -171,17 +171,6 @@ zio_init(void)
 		char name[36];
 		int flags = zio_bulk_flags;
 
-		/*
-		 * The smallest buffers (512b) are heavily used and
-		 * experience a lot of churn.  The slabs allocated
-		 * for them are also relatively small (32K).  Thus
-		 * in order to avoid expensive calls to vmalloc() we
-		 * make an exception to the usual slab allocation
-		 * policy and force these buffers to be kmem backed.
-		 */
-		if (size == (1 << SPA_MINBLOCKSHIFT))
-			flags |= KMC_KMEM;
-
 		(void) sprintf(name, "zio_buf_%lu", (ulong_t)size);
 		zio_buf_cache[c] = kmem_cache_create(name, size,
 		    align, NULL, NULL, NULL, NULL, NULL, flags);