Commit 1a5a990

aagit authored and torvalds committed
mm: thp: fix pmd_bad() triggering in code paths holding mmap_sem read mode
In some cases it may happen that pmd_none_or_clear_bad() is called with
the mmap_sem held in read mode. In those cases the huge page faults can
allocate hugepmds under pmd_none_or_clear_bad() and that can trigger a
false positive from pmd_bad(), which does not expect to see a pmd
materializing as trans huge.

It's not khugepaged causing the problem: khugepaged holds the mmap_sem
in write mode (and all those sites must hold the mmap_sem in read mode
to prevent pagetables from going away from under them; during code
review it seems vm86 mode on 32-bit kernels requires that too, unless
it's restricted to 1 thread per process or UP builds). The race is only
with the huge page faults that can convert a pmd_none() into a
pmd_trans_huge().

Effectively all these pmd_none_or_clear_bad() sites running with
mmap_sem in read mode are somewhat speculative with respect to the page
faults, and the result is always undefined when they run simultaneously.
This is probably why it wasn't common to run into this. For example, if
madvise(MADV_DONTNEED) runs zap_page_range() shortly before the page
fault, the hugepage will not be zapped; if the page fault runs first, it
will be zapped.

Altering pmd_bad() not to error out if it finds hugepmds won't be enough
to fix this, because zap_pmd_range would then proceed to call
zap_pte_range (which would be incorrect if the pmd became a
pmd_trans_huge()).

The simplest way to fix this is to read the pmd into a local on the
stack (regardless of what we read; no actual CPU barriers are needed,
only a compiler barrier), and be sure it is not changing under the code
that computes its value. Even if the real pmd is changing under the
value we hold on the stack, we don't care. If we actually end up in
zap_pte_range it means the pmd was not none already and it was not huge,
and it can't become huge from under us (khugepaged locking explained
above).

All we need is to enforce that there is no way anymore that, in a code
path like the one below, pmd_trans_huge can be false but
pmd_none_or_clear_bad can still run into a hugepmd. The overhead of a
barrier() is just a compiler tweak and should not be measurable (I only
added it for THP builds). I don't exclude that different compiler
versions may have prevented the race too, by caching the value of *pmd
on the stack (that hasn't been verified, but it wouldn't be impossible
considering pmd_none_or_clear_bad, pmd_bad, pmd_trans_huge and pmd_none
are all inlines and there's no external function called in between
pmd_trans_huge and pmd_none_or_clear_bad).

	if (pmd_trans_huge(*pmd)) {
		if (next-addr != HPAGE_PMD_SIZE) {
			VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem));
			split_huge_page_pmd(vma->vm_mm, pmd);
		} else if (zap_huge_pmd(tlb, vma, pmd, addr))
			continue;
		/* fall through */
	}
	if (pmd_none_or_clear_bad(pmd))

Because this race condition could be exercised without special
privileges, it was reported as CVE-2012-1179.

The race was identified and fully explained by Ulrich, who debugged it.
I'm quoting his accurate explanation below, for reference.
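Reduced to its essence, the fix is: take one snapshot of the shared
entry, then compute every check on that snapshot. Below is a minimal
standalone userspace sketch of the idea (illustrative only: "pmd" is a
plain shared word, the PMD_HUGE bit is made up, and C11 atomics stand in
for the plain load plus barrier() the kernel uses):

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	#define PMD_NONE 0ULL
	#define PMD_HUGE (1ULL << 7)	/* hypothetical "trans huge" bit */

	static _Atomic uint64_t pmd;	/* mutated by the "fault" thread */

	static int none_or_trans_huge(void)
	{
		/* One load; both checks below see the same value. */
		uint64_t pmdval = atomic_load_explicit(&pmd,
						       memory_order_relaxed);

		if (pmdval == PMD_NONE)
			return 1;	/* behave as if the entry is empty */
		if (pmdval & PMD_HUGE)
			return 1;	/* huge: caller must not walk ptes */
		return 0;		/* stable regular entry: safe to walk */
	}

	static void *fault_thread(void *arg)
	{
		(void)arg;
		/* Flip the entry none <-> huge, like a concurrent THP fault. */
		for (int i = 0; i < 1000000; i++)
			atomic_store_explicit(&pmd,
					      (i & 1) ? (PMD_HUGE | 0x1000)
						      : PMD_NONE,
					      memory_order_relaxed);
		return NULL;
	}

	int main(void)
	{
		pthread_t t;

		pthread_create(&t, NULL, fault_thread, NULL);
		for (int i = 0; i < 1000000; i++)
			(void)none_or_trans_huge();	/* decision never torn */
		pthread_join(t, NULL);
		puts("done");
		return 0;
	}

The real helper, pmd_none_or_trans_huge_or_clear_bad() added to
include/asm-generic/pgtable.h below, applies the same single-snapshot
rule to the actual pmd.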
====== start quote =======

mapcount 0 page_mapcount 1
kernel BUG at mm/huge_memory.c:1384!

At some point prior to the panic, a "bad pmd ..." message similar to the
following is logged on the console:

	mm/memory.c:145: bad pmd ffff8800376e1f98(80000000314000e7).

The "bad pmd ..." message is logged by pmd_clear_bad() before it clears
the page's PMD table entry:

	    143 void pmd_clear_bad(pmd_t *pmd)
	    144 {
	->  145         pmd_ERROR(*pmd);
	    146         pmd_clear(pmd);
	    147 }

After the PMD table entry has been cleared, there is an inconsistency
between the actual number of PMD table entries that are mapping the page
and the page's map count (the _mapcount field in struct page). When the
page is subsequently reclaimed, __split_huge_page() detects this
inconsistency:

	   1381         if (mapcount != page_mapcount(page))
	   1382                 printk(KERN_ERR "mapcount %d page_mapcount %d\n",
	   1383                        mapcount, page_mapcount(page));
	-> 1384         BUG_ON(mapcount != page_mapcount(page));

The root cause of the problem is a race of two threads in a
multithreaded process. Thread B incurs a page fault on a virtual address
that has never been accessed (PMD entry is zero) while Thread A is
executing an madvise() system call on a virtual address within the same
2 MB (huge page) range.

	           virtual address space
	          .---------------------.
	          |                     |
	          |                     |
	        .-|---------------------|
	        | |                     |
	        | |                     |<-- B(fault)
	        | |                     |
	  2 MB  | |/////////////////////|-.
	  huge <  |/////////////////////|  > A(range)
	  page  | |/////////////////////|-'
	        | |                     |
	        | |                     |
	        '-|---------------------|
	          |                     |
	          |                     |
	          '---------------------'

- Thread A is executing an madvise(..., MADV_DONTNEED) system call
  on the virtual address range "A(range)" shown in the picture.

	sys_madvise
	  // Acquire the semaphore in shared mode.
	  down_read(&current->mm->mmap_sem)
	  ...
	  madvise_vma
	    switch (behavior)
	    case MADV_DONTNEED:
	         madvise_dontneed
	           zap_page_range
	             unmap_vmas
	               unmap_page_range
	                 zap_pud_range
	                   zap_pmd_range
	                     //
	                     // Assume that this huge page has never been accessed.
	                     // I.e. content of the PMD entry is zero (not mapped).
	                     //
	                     if (pmd_trans_huge(*pmd)) {
	                         // We don't get here due to the above assumption.
	                     }
	                     //
	                     // Assume that Thread B incurred a page fault and
	         .---------> // sneaks in here as shown below.
	         |           //
	         |           if (pmd_none_or_clear_bad(pmd))
	         |               {
	         |                 if (unlikely(pmd_bad(*pmd)))
	         |                     pmd_clear_bad
	         |                     {
	         |                       pmd_ERROR
	         |                         // Log "bad pmd ..." message here.
	         |                       pmd_clear
	         |                         // Clear the page's PMD entry.
	         |                         // Thread B incremented the map count
	         |                         // in page_add_new_anon_rmap(), but
	         |                         // now the page is no longer mapped
	         |                         // by a PMD entry (-> inconsistency).
	         |                     }
	         |               }
	         |
	         v

- Thread B is handling a page fault on virtual address "B(fault)"
  shown in the picture.

	...
	do_page_fault
	  __do_page_fault
	    // Acquire the semaphore in shared mode.
	    down_read_trylock(&mm->mmap_sem)
	    ...
	    handle_mm_fault
	      if (pmd_none(*pmd) && transparent_hugepage_enabled(vma))
	          // We get here due to the above assumption (PMD entry is zero).
	          do_huge_pmd_anonymous_page
	            alloc_hugepage_vma
	              // Allocate a new transparent huge page here.
	            ...
	            __do_huge_pmd_anonymous_page
	              ...
	              spin_lock(&mm->page_table_lock)
	              ...
	              page_add_new_anon_rmap
	                // Here we increment the page's map count (starts at -1).
	                atomic_set(&page->_mapcount, 0)
	              set_pmd_at
	                // Here we set the page's PMD entry which will be cleared
	                // when Thread A calls pmd_clear_bad().
	              ...
	              spin_unlock(&mm->page_table_lock)

The mmap_sem does not prevent the race because both threads are
acquiring it in shared mode (down_read). Thread B holds the
page_table_lock while the page's map count and PMD table entry are
updated. However, Thread A does not synchronize on that lock.
====== end quote =======

[akpm@linux-foundation.org: checkpatch fixes]
Reported-by: Ulrich Obergfell <uobergfe@redhat.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Jones <davej@redhat.com>
Acked-by: Larry Woodman <lwoodman@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: <stable@vger.kernel.org>		[2.6.38+]
Cc: Mark Salter <msalter@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent 31f6765 commit 1a5a990

9 files changed: +92 −10 lines


arch/x86/kernel/vm86_32.c (+2)

@@ -172,6 +172,7 @@ static void mark_screen_rdonly(struct mm_struct *mm)
 	spinlock_t *ptl;
 	int i;
 
+	down_write(&mm->mmap_sem);
 	pgd = pgd_offset(mm, 0xA0000);
 	if (pgd_none_or_clear_bad(pgd))
 		goto out;
@@ -190,6 +191,7 @@ static void mark_screen_rdonly(struct mm_struct *mm)
 	}
 	pte_unmap_unlock(pte, ptl);
 out:
+	up_write(&mm->mmap_sem);
 	flush_tlb();
 }
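Note that mark_screen_rdonly() is fixed differently from the pte-walking
sites below: taking the mmap_sem in write mode excludes the huge page
faults outright (the fault path only takes the semaphore in read mode,
as shown in the quoted trace above), so this path needs no pmd re-check
once the lock is held.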

fs/proc/task_mmu.c (+9)

@@ -409,6 +409,9 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	} else {
 		spin_unlock(&walk->mm->page_table_lock);
 	}
+
+	if (pmd_trans_unstable(pmd))
+		return 0;
 	/*
 	 * The mmap_sem held all the way back in m_start() is what
 	 * keeps khugepaged out of here and from collapsing things
@@ -507,6 +510,8 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 	struct page *page;
 
 	split_huge_page_pmd(walk->mm, pmd);
+	if (pmd_trans_unstable(pmd))
+		return 0;
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
@@ -670,6 +675,8 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	int err = 0;
 
 	split_huge_page_pmd(walk->mm, pmd);
+	if (pmd_trans_unstable(pmd))
+		return 0;
 
 	/* find the first VMA at or above 'addr' */
 	vma = find_vma(walk->mm, addr);
@@ -961,6 +968,8 @@ static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
 		spin_unlock(&walk->mm->page_table_lock);
 	}
 
+	if (pmd_trans_unstable(pmd))
+		return 0;
 	orig_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	do {
 		struct page *page = can_gather_numa_stats(*pte, md->vma, addr);
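The same two-line guard recurs in every pte-walking callback this patch
touches (here and again in mm/memcontrol.c below). Condensed, the shape
is as follows (a sketch with the surrounding locals elided, not a
complete function):

	split_huge_page_pmd(walk->mm, pmd);	/* may leave the pmd none... */
	if (pmd_trans_unstable(pmd))		/* ...and a fault may refill it huge */
		return 0;			/* treat as unmapped and bail out */
	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	/* here the pmd is stable: not none and not huge, so the pte walk is safe */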

include/asm-generic/pgtable.h (+61)

@@ -425,6 +425,8 @@ extern void untrack_pfn_vma(struct vm_area_struct *vma, unsigned long pfn,
 				unsigned long size);
 #endif
 
+#ifdef CONFIG_MMU
+
 #ifndef CONFIG_TRANSPARENT_HUGEPAGE
 static inline int pmd_trans_huge(pmd_t pmd)
 {
@@ -441,7 +443,66 @@ static inline int pmd_write(pmd_t pmd)
 	return 0;
 }
 #endif /* __HAVE_ARCH_PMD_WRITE */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+/*
+ * This function is meant to be used by sites walking pagetables with
+ * the mmap_sem hold in read mode to protect against MADV_DONTNEED and
+ * transhuge page faults. MADV_DONTNEED can convert a transhuge pmd
+ * into a null pmd and the transhuge page fault can convert a null pmd
+ * into an hugepmd or into a regular pmd (if the hugepage allocation
+ * fails). While holding the mmap_sem in read mode the pmd becomes
+ * stable and stops changing under us only if it's not null and not a
+ * transhuge pmd. When those races occurs and this function makes a
+ * difference vs the standard pmd_none_or_clear_bad, the result is
+ * undefined so behaving like if the pmd was none is safe (because it
+ * can return none anyway). The compiler level barrier() is critically
+ * important to compute the two checks atomically on the same pmdval.
+ */
+static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
+{
+	/* depend on compiler for an atomic pmd read */
+	pmd_t pmdval = *pmd;
+	/*
+	 * The barrier will stabilize the pmdval in a register or on
+	 * the stack so that it will stop changing under the code.
+	 */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	barrier();
+#endif
+	if (pmd_none(pmdval))
+		return 1;
+	if (unlikely(pmd_bad(pmdval))) {
+		if (!pmd_trans_huge(pmdval))
+			pmd_clear_bad(pmd);
+		return 1;
+	}
+	return 0;
+}
+
+/*
+ * This is a noop if Transparent Hugepage Support is not built into
+ * the kernel. Otherwise it is equivalent to
+ * pmd_none_or_trans_huge_or_clear_bad(), and shall only be called in
+ * places that already verified the pmd is not none and they want to
+ * walk ptes while holding the mmap sem in read mode (write mode don't
+ * need this). If THP is not enabled, the pmd can't go away under the
+ * code even if MADV_DONTNEED runs, but if THP is enabled we need to
+ * run a pmd_trans_unstable before walking the ptes after
+ * split_huge_page_pmd returns (because it may have run when the pmd
+ * become null, but then a page fault can map in a THP and not a
+ * regular page).
+ */
+static inline int pmd_trans_unstable(pmd_t *pmd)
+{
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	return pmd_none_or_trans_huge_or_clear_bad(pmd);
+#else
+	return 0;
 #endif
+}
+
+#endif /* CONFIG_MMU */
 
 #endif /* !__ASSEMBLY__ */
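Why the barrier() in pmd_none_or_trans_huge_or_clear_bad() is
load-bearing: pmd_none(), pmd_bad() and pmd_trans_huge() are all
inlines, so without it nothing stops the compiler from rematerializing
the load and testing *pmd twice. A sketch of the failure mode it rules
out (possible compiler behaviour, not a guaranteed miscompilation):

	pmd_t pmdval = *pmd;
	/* without barrier()... */
	if (pmd_none(pmdval))		/* ...may compile as pmd_none(*pmd) */
		return 1;
	if (unlikely(pmd_bad(pmdval)))	/* ...and pmd_bad(*pmd): a second load */

Two loads can observe two different values if a THP fault runs between
them -- none at the first check, a freshly installed hugepmd at the
second -- which is exactly how pmd_clear_bad() could fire on a valid
hugepmd. barrier() pins pmdval to the single load above.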

mm/memcontrol.c (+4)

@@ -5230,6 +5230,8 @@ static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
 	spinlock_t *ptl;
 
 	split_huge_page_pmd(walk->mm, pmd);
+	if (pmd_trans_unstable(pmd))
+		return 0;
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE)
@@ -5390,6 +5392,8 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
 	spinlock_t *ptl;
 
 	split_huge_page_pmd(walk->mm, pmd);
+	if (pmd_trans_unstable(pmd))
+		return 0;
 retry:
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
 	for (; addr != end; addr += PAGE_SIZE) {

mm/memory.c (+12 −4)

@@ -1247,16 +1247,24 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 	do {
 		next = pmd_addr_end(addr, end);
 		if (pmd_trans_huge(*pmd)) {
-			if (next-addr != HPAGE_PMD_SIZE) {
+			if (next - addr != HPAGE_PMD_SIZE) {
 				VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem));
 				split_huge_page_pmd(vma->vm_mm, pmd);
 			} else if (zap_huge_pmd(tlb, vma, pmd, addr))
-				continue;
+				goto next;
 			/* fall through */
 		}
-		if (pmd_none_or_clear_bad(pmd))
-			continue;
+		/*
+		 * Here there can be other concurrent MADV_DONTNEED or
+		 * trans huge page faults running, and if the pmd is
+		 * none or trans huge it can change under us. This is
+		 * because MADV_DONTNEED holds the mmap_sem in read
+		 * mode.
+		 */
+		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
+			goto next;
 		next = zap_pte_range(tlb, vma, pmd, addr, next, details);
+next:
 		cond_resched();
 	} while (pmd++, addr = next, addr != end);
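Note the two continue statements become goto next. In a do/while loop,
continue jumps straight to the loop-condition expression and would skip
cond_resched(); the label keeps cond_resched() running on every
iteration, including the skipped pmds, which appears to be the point of
the restructuring.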

mm/mempolicy.c (+1 −1)

@@ -512,7 +512,7 @@ static inline int check_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 	do {
 		next = pmd_addr_end(addr, end);
 		split_huge_page_pmd(vma->vm_mm, pmd);
-		if (pmd_none_or_clear_bad(pmd))
+		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
 			continue;
 		if (check_pte_range(vma, pmd, addr, next, nodes,
 				    flags, private))

mm/mincore.c (+1 −1)

@@ -164,7 +164,7 @@ static void mincore_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 		}
 		/* fall through */
 	}
-	if (pmd_none_or_clear_bad(pmd))
+	if (pmd_none_or_trans_huge_or_clear_bad(pmd))
 		mincore_unmapped_range(vma, addr, next, vec);
 	else
 		mincore_pte_range(vma, pmd, addr, next, vec);

mm/pagewalk.c (+1 −1)

@@ -59,7 +59,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 			continue;
 
 		split_huge_page_pmd(walk->mm, pmd);
-		if (pmd_none_or_clear_bad(pmd))
+		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
 			goto again;
 		err = walk_pte_range(pmd, addr, next, walk);
 		if (err)

mm/swapfile.c (+1 −3)

@@ -932,9 +932,7 @@ static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		if (unlikely(pmd_trans_huge(*pmd)))
-			continue;
-		if (pmd_none_or_clear_bad(pmd))
+		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
 			continue;
 		ret = unuse_pte_range(vma, pmd, addr, next, entry, page);
 		if (ret)
