
Commit 8ba00bb

JoonsooKim authored and torvalds committed
slub: consider pfmemalloc_match() in get_partial_node()
get_partial() currently does not check pfmemalloc_match(), meaning that it is possible for pfmemalloc pages to leak to non-pfmemalloc users. This is a problem in the following situation. Assume that there is a request from a normal allocation, there are no objects in the per-cpu cache, and there is no node-partial slab.

In this case, slab_alloc() enters the slow path and new_slab_objects() is called, which may return a PFMEMALLOC page. As the current user is not allowed to access the PFMEMALLOC page, deactivate_slab() is called ([5091b74: mm: slub: optimise the SLUB fast path to avoid pfmemalloc checks]) and an object from the PFMEMALLOC page is returned anyway.

Next time, when we get another request from a normal allocation, slab_alloc() enters the slow path and calls new_slab_objects(). In new_slab_objects(), we call get_partial() and get a partial slab which was just deactivated but is a pfmemalloc page. We extract one object from it and re-deactivate. "deactivate -> re-get in get_partial() -> re-deactivate" occurs repeatedly. As a result, access to the PFMEMALLOC page is not properly restricted and the frequent deactivation can cause a performance degradation.

This patch changes get_partial_node() to take pfmemalloc_match() into account and prevents the "deactivate -> re-get in get_partial()" scenario. Instead, new_slab() is called.

Signed-off-by: Joonsoo Kim <js1304@gmail.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: David Miller <davem@davemloft.net>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent d014dc2 commit 8ba00bb
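
For context, pfmemalloc_match() is the existing predicate that decides whether a given allocation is entitled to objects from a pfmemalloc slab; the patch forward-declares it so that get_partial_node() can reuse it. A minimal sketch of that helper, reconstructed here purely for illustration (its actual body lives elsewhere in mm/slub.c and is not part of this diff):

static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags)
{
	/*
	 * Slab pages carved out of the pfmemalloc reserves may only serve
	 * callers that are themselves allowed to dip into those reserves.
	 */
	if (unlikely(PageSlabPfmemalloc(page)))
		return gfp_pfmemalloc_allowed(gfpflags);

	return true;
}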

1 file changed, +10 -5 lines changed

mm/slub.c

@@ -1524,12 +1524,13 @@ static inline void *acquire_slab(struct kmem_cache *s,
 }
 
 static int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain);
+static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags);
 
 /*
  * Try to allocate a partial slab from a specific node.
  */
-static void *get_partial_node(struct kmem_cache *s,
-		struct kmem_cache_node *n, struct kmem_cache_cpu *c)
+static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
+				struct kmem_cache_cpu *c, gfp_t flags)
 {
 	struct page *page, *page2;
 	void *object = NULL;
@@ -1545,9 +1546,13 @@ static void *get_partial_node(struct kmem_cache *s,
 
 	spin_lock(&n->list_lock);
 	list_for_each_entry_safe(page, page2, &n->partial, lru) {
-		void *t = acquire_slab(s, n, page, object == NULL);
+		void *t;
 		int available;
 
+		if (!pfmemalloc_match(page, flags))
+			continue;
+
+		t = acquire_slab(s, n, page, object == NULL);
 		if (!t)
 			break;
 
@@ -1614,7 +1619,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 
 		if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
 				n->nr_partial > s->min_partial) {
-			object = get_partial_node(s, n, c);
+			object = get_partial_node(s, n, c, flags);
 			if (object) {
 				/*
 				 * Return the object even if
@@ -1643,7 +1648,7 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
 	void *object;
 	int searchnode = (node == NUMA_NO_NODE) ? numa_node_id() : node;
 
-	object = get_partial_node(s, get_node(s, searchnode), c);
+	object = get_partial_node(s, get_node(s, searchnode), c, flags);
 	if (object || node != NUMA_NO_NODE)
 		return object;
 
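To see why skipping pfmemalloc slabs in get_partial_node() is enough to break the deactivate/re-get loop, it helps to look at the caller. Below is a condensed, illustrative sketch of the new_slab_objects() slow path from roughly the same era of mm/slub.c; details are simplified and it is not part of this diff:

static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
					int node, struct kmem_cache_cpu **pc)
{
	void *freelist;
	struct kmem_cache_cpu *c = *pc;
	struct page *page;

	/*
	 * With this patch, a normal request no longer re-acquires a
	 * deactivated pfmemalloc slab here ...
	 */
	freelist = get_partial(s, flags, node, c);
	if (freelist)
		return freelist;

	/*
	 * ... so it falls through to new_slab() and is served from a
	 * freshly allocated slab instead.
	 */
	page = new_slab(s, flags, node);
	if (page) {
		c = __this_cpu_ptr(s->cpu_slab);
		if (c->page)
			flush_slab(s, c);

		/* Take the new slab's freelist and install it as the cpu slab. */
		freelist = page->freelist;
		page->freelist = NULL;

		stat(s, ALLOC_SLAB);
		c->page = page;
		*pc = c;
	} else
		freelist = NULL;

	return freelist;
}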