Fix crash when using ZFS on Ceph rbd
When __get_free_pages() is used to allocate high-order memory, only the first page's _count is set to 1; the tail pages remain at 0. When one of those internal pages is handed to rbd, it eventually reaches tcp_sendpage(). There it goes through a get_page()/put_page() pair, and when its _count drops back to 0 the page is freed erroneously while still in use.

The fix is to allocate a compound page instead. All pages in a high-order compound page share a single _count, so the get_page()/put_page() pair in tcp_sendpage() can no longer drive _count to 0.

Signed-off-by: Chunwei Chen <tuxoko@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #251
parent d6e6e4a98e
commit ae16ed992b
@@ -864,7 +864,8 @@ kv_alloc(spl_kmem_cache_t *skc, int size, int flags)
 	ASSERT(ISP2(size));
 
 	if (skc->skc_flags & KMC_KMEM)
-		ptr = (void *)__get_free_pages(flags, get_order(size));
+		ptr = (void *)__get_free_pages(flags | __GFP_COMP,
+		    get_order(size));
 	else
 		ptr = __vmalloc(size, flags | __GFP_HIGHMEM, PAGE_KERNEL);
 
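To make the mechanism concrete, below is a minimal, hypothetical test-module sketch (not part of this commit; the module and function names are invented) showing the allocation pattern the patch switches to: a high-order allocation with __GFP_COMP, followed by a get_page()/put_page() pair on a tail page, as tcp_sendpage() would perform. The exact refcount internals have shifted across kernel versions (_count was later renamed _refcount), so treat the comments as describing the behavior at the time of this commit.

/*
 * gfp_comp_demo.c - hypothetical illustration of why kv_alloc() needs
 * __GFP_COMP when its pages may later be pinned individually.
 */
#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static int __init gfp_comp_demo_init(void)
{
	unsigned int order = 2;		/* 4 contiguous pages */
	unsigned long addr;
	struct page *tail;

	/*
	 * Without __GFP_COMP, only the head page of this block would have
	 * _count == 1; the tail pages would stay at 0, and a
	 * get_page()/put_page() pair on a tail page (as done inside
	 * tcp_sendpage()) would drop it back to 0 and free it prematurely.
	 *
	 * With __GFP_COMP the pages form a compound page, so references
	 * taken on any tail page land on the head page's shared _count.
	 */
	addr = __get_free_pages(GFP_KERNEL | __GFP_COMP, order);
	if (!addr)
		return -ENOMEM;

	tail = virt_to_page(addr + PAGE_SIZE);	/* second page of the block */
	get_page(tail);		/* pins the compound head, not the tail alone */
	put_page(tail);		/* safe: the shared _count never reaches 0 here */

	free_pages(addr, order);
	return 0;
}

static void __exit gfp_comp_demo_exit(void)
{
}

module_init(gfp_comp_demo_init);
module_exit(gfp_comp_demo_exit);
MODULE_LICENSE("GPL");

This also suggests why the hunk above touches only the KMC_KMEM branch: __vmalloc() builds its mapping from individual order-0 pages, each with its own _count of 1, so no under-referenced tail pages exist on that path.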