mm: mincore: use pte_batch_hint() to batch process large folios

When I tested the mincore() syscall, I observed that it takes longer with
64K mTHP enabled on my Arm64 server.  The reason is that
mincore_pte_range() still checks each PTE individually, even when the PTEs
are contiguous, which is inefficient.

Thus we can use pte_batch_hint() to get the number of contiguous present
PTEs in a batch, which improves performance.  I tested the mincore()
syscall on 1G of anonymous memory populated with 64K mTHP, and observed an
obvious performance improvement:

w/o patch		w/ patch		changes
6022us			549us			+91%

Moreover, I also tested mincore() with mTHP/THP disabled, and did not see
any obvious regression for base pages.
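
A minimal user-space sketch of this kind of measurement is shown below.  It
is illustrative only, not the exact benchmark used here: the 1G size, the
populate-by-memset step and the sysfs knob in the comment are assumptions
(the hugepages-64kB knob requires a kernel with 64K mTHP support), and
error handling is kept to a minimum.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>
  #include <unistd.h>
  #include <sys/mman.h>

  #define LEN	(1UL << 30)	/* 1G anonymous region */

  int main(void)
  {
  	size_t nr_pages = LEN / sysconf(_SC_PAGESIZE);
  	unsigned char *vec = malloc(nr_pages);
  	struct timespec t0, t1;
  	void *buf;

  	/* e.g. echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled */
  	buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
  		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  	if (!vec || buf == MAP_FAILED)
  		return 1;

  	memset(buf, 1, LEN);		/* fault in the whole range */

  	clock_gettime(CLOCK_MONOTONIC, &t0);
  	if (mincore(buf, LEN, vec))
  		return 1;
  	clock_gettime(CLOCK_MONOTONIC, &t1);

  	printf("mincore() took %ld us\n",
  	       (t1.tv_sec - t0.tv_sec) * 1000000L +
  	       (t1.tv_nsec - t0.tv_nsec) / 1000L);
  	return 0;
  }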

Link: https://lkml.kernel.org/r/99cb00ee626ceb6e788102ca36821815cd832237.1746697240.git.baolin.wang@linux.alibaba.com
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Barry Song <baohua@kernel.org>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -21,6 +21,7 @@
 
 #include <linux/uaccess.h>
 #include "swap.h"
+#include "internal.h"
 
 static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
 			unsigned long end, struct mm_walk *walk)
@@ -105,6 +106,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	pte_t *ptep;
 	unsigned char *vec = walk->private;
 	int nr = (end - addr) >> PAGE_SHIFT;
+	int step, i;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
@@ -118,16 +120,26 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		walk->action = ACTION_AGAIN;
 		return 0;
 	}
-	for (; addr != end; ptep++, addr += PAGE_SIZE) {
+	for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
 		pte_t pte = ptep_get(ptep);
 
+		step = 1;
 		/* We need to do cache lookup too for pte markers */
 		if (pte_none_mostly(pte))
 			__mincore_unmapped_range(addr, addr + PAGE_SIZE,
 						 vma, vec);
-		else if (pte_present(pte))
-			*vec = 1;
-		else { /* pte is a swap entry */
+		else if (pte_present(pte)) {
+			unsigned int batch = pte_batch_hint(ptep, pte);
+
+			if (batch > 1) {
+				unsigned int max_nr = (end - addr) >> PAGE_SHIFT;
+
+				step = min_t(unsigned int, batch, max_nr);
+			}
+
+			for (i = 0; i < step; i++)
+				vec[i] = 1;
+		} else { /* pte is a swap entry */
 			swp_entry_t entry = pte_to_swp_entry(pte);
 
 			if (non_swap_entry(entry)) {
@@ -146,7 +158,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 #endif
 			}
 		}
-		vec++;
+		vec += step;
 	}
 	pte_unmap_unlock(ptep - 1, ptl);
 out:
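
For context: if I read the generic pte_batch_hint() fallback in
include/linux/pgtable.h correctly, it simply reports a batch of one when an
architecture does not provide its own hint, so on such architectures step
stays 1 and the loop above degrades gracefully to the old per-PTE
behaviour.  A simplified form of that fallback, reproduced here only for
illustration and not part of this patch:

  /* Generic fallback: no contiguity information, so no batching. */
  #ifndef pte_batch_hint
  static inline unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
  {
  	return 1;
  }
  #endif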