mm/gup: revert "mm: gup: fix infinite loop within __get_longterm_locked"

[ Upstream commit 517f496e1e ]

After commit 1aaf8c1229 ("mm: gup: fix infinite loop within
__get_longterm_locked") we are able to longterm pin folios that are not
supposed to get longterm pinned, simply because they temporarily have the
LRU flag cleared (especially while they are temporarily isolated).

For example, two __get_longterm_locked() callers can race, or
__get_longterm_locked() can race with anything else that temporarily
isolates folios.
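
Concretely, with the logic introduced by that commit (a condensed sketch of
the pre-revert check_and_migrate_movable_pages(), see the diff below), a
folio that merely failed isolation never shows up on the list and the
longterm pin is wrongly allowed:

	collect_longterm_unpinnable_pages(&movable_page_list, nr_pages, pages);
	if (list_empty(&movable_page_list))
		/*
		 * Wrong conclusion if an unpinnable folio simply failed
		 * folio_isolate_lru() because something else temporarily
		 * cleared its LRU flag: nothing was collected, yet the
		 * folio must not get longterm pinned.
		 */
		return 0;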

The introducing commit mentions the use case of a driver that uses
vm_ops->fault to insert pages allocated through cma_alloc() into the page
tables, assuming they can later get longterm pinned.  These pages/folios
would never have the LRU flag set and consequently cannot get isolated.
There is no known in-tree user making use of that so far, fortunately.
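
For illustration, such a driver's fault handler might look roughly like the
sketch below.  The device structure, its CMA area and all tracking/freeing
are made up for the example; only cma_alloc() and vmf_insert_page() are real
APIs:

	static vm_fault_t example_fault(struct vm_fault *vmf)
	{
		struct example_dev *dev = vmf->vma->vm_private_data;
		struct page *page;

		/*
		 * Pages handed out by cma_alloc() are never put on the LRU,
		 * so they can never be isolated (and thus never migrated).
		 */
		page = cma_alloc(dev->cma, 1, 0, false);
		if (!page)
			return VM_FAULT_OOM;

		/* Freeing/tracking of the page is omitted in this sketch. */
		return vmf_insert_page(vmf->vma, vmf->address, page);
	}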

To handle that in the future -- and avoid retrying forever to
isolate/migrate them -- we will need a different mechanism for the CMA
area *owner* to indicate that it actually already allocated the page and
is fine with longterm pinning it.  The LRU flag is not suitable for that.

Probably we can look up the relevant CMA area and query the bitmap; we would
only have to care about some races.  If the page was already allocated by the
owner, we could just allow longterm pinning.
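
A very rough sketch of such an owner-side check could look like the below;
cma_area_covering() and cma_page_is_allocated() are hypothetical helpers
standing in for "find the CMA area and test its allocation bitmap" and do
not exist in-tree today:

	/* Hypothetical sketch only -- none of these helpers exist in-tree. */
	static bool folio_is_cma_owner_allocated(struct folio *folio)
	{
		struct cma *cma = cma_area_covering(&folio->page);

		/* Would have to test the bitmap under the CMA lock. */
		return cma && cma_page_is_allocated(cma, &folio->page,
						    folio_nr_pages(folio));
	}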

Anyhow, let's fix the "must not be longterm pinned" problem first by
reverting the original commit.

Link: https://lkml.kernel.org/r/20250611131314.594529-1-david@redhat.com
Fixes: 1aaf8c1229 ("mm: gup: fix infinite loop within __get_longterm_locked")
Signed-off-by: David Hildenbrand <david@redhat.com>
Closes: https://lore.kernel.org/all/20250522092755.GA3277597@tiffany/
Reported-by: Hyesoo Yu <hyesoo.yu@samsung.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Peter Xu <peterx@redhat.com>
Cc: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Cc: Aijun Sun <aijun.sun@unisoc.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Revert v6.1.129 commit c986a5fb15 ]
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1961,14 +1961,14 @@ struct page *get_dump_page(unsigned long addr)
 /*
  * Returns the number of collected pages. Return value is always >= 0.
  */
-static void collect_longterm_unpinnable_pages(
+static unsigned long collect_longterm_unpinnable_pages(
 					struct list_head *movable_page_list,
 					unsigned long nr_pages,
 					struct page **pages)
 {
+	unsigned long i, collected = 0;
 	struct folio *prev_folio = NULL;
 	bool drain_allow = true;
-	unsigned long i;
 
 	for (i = 0; i < nr_pages; i++) {
 		struct folio *folio = page_folio(pages[i]);
@@ -1980,6 +1980,8 @@ static void collect_longterm_unpinnable_pages(
 		if (folio_is_longterm_pinnable(folio))
 			continue;
 
+		collected++;
+
 		if (folio_is_device_coherent(folio))
 			continue;
 
@@ -2001,6 +2003,8 @@
 				    NR_ISOLATED_ANON + folio_is_file_lru(folio),
 				    folio_nr_pages(folio));
 	}
+
+	return collected;
 }
 
 /*
@@ -2093,10 +2097,12 @@ err:
 static long check_and_migrate_movable_pages(unsigned long nr_pages,
 					    struct page **pages)
 {
+	unsigned long collected;
 	LIST_HEAD(movable_page_list);
 
-	collect_longterm_unpinnable_pages(&movable_page_list, nr_pages, pages);
-	if (list_empty(&movable_page_list))
+	collected = collect_longterm_unpinnable_pages(&movable_page_list,
+						      nr_pages, pages);
+	if (!collected)
 		return 0;
 
 	return migrate_longterm_unpinnable_pages(&movable_page_list, nr_pages,