mirror of
git://git.yoctoproject.org/linux-yocto.git
synced 2025-07-05 13:25:20 +02:00

Conceptually, we want the memory mappings to always be up to date and represent whatever is in the TLB. To ensure that, we need to sync them over in the userspace case, and for the kernel we need to process the mappings.

The kernel will call flush_tlb_* if page table entries that were valid before become invalid. Unfortunately, this is not the case if entries are added. As such, change both flush_tlb_* and set_ptes to track the memory range that has to be synchronized. For the kernel, we need to execute a flush_tlb_kern_* immediately, but we can wait for the first page fault in case of set_ptes. For userspace, in contrast, we only store that a range of memory needs to be synced and do so whenever we switch to that process.

Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
Link: https://patch.msgid.link/20240703134536.1161108-13-benjamin@sipsolutions.net
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
60 lines
1.9 KiB
C
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2002 - 2007 Jeff Dike (jdike@{addtoit,linux.intel}.com)
 */

#ifndef __UM_TLBFLUSH_H
#define __UM_TLBFLUSH_H

#include <linux/mm.h>

/*
 * In UML, we need to sync the TLB over by using mmap/munmap/mprotect syscalls
 * from the process handling the MM (which can be the kernel itself).
 *
 * To track updates, we can hook into set_ptes and flush_tlb_*. With set_ptes
 * we catch all PTE transitions where memory that was unusable becomes usable.
 * With flush_tlb_* we can track any memory that becomes unusable, even if a
 * higher layer of the page table was modified.
 *
 * So, we simply track updates using both methods and mark the memory area to
 * be synced later on. The only special case is that flush_tlb_kern_* needs to
 * be executed immediately as there is no good synchronization point in that
 * case. In contrast, in the set_ptes case we can wait for the next kernel
 * segfault before we do the synchronization.
 *
 *  - flush_tlb_all() flushes all processes' TLBs
 *  - flush_tlb_mm(mm) flushes the specified mm context TLBs
 *  - flush_tlb_page(vma, vmaddr) flushes one page
 *  - flush_tlb_range(vma, start, end) flushes a range of pages
 *  - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
 */

extern int um_tlb_sync(struct mm_struct *mm);

extern void flush_tlb_all(void);
extern void flush_tlb_mm(struct mm_struct *mm);

static inline void flush_tlb_page(struct vm_area_struct *vma,
				  unsigned long address)
{
	um_tlb_mark_sync(vma->vm_mm, address, address + PAGE_SIZE);
}

static inline void flush_tlb_range(struct vm_area_struct *vma,
				   unsigned long start, unsigned long end)
{
	um_tlb_mark_sync(vma->vm_mm, start, end);
}

static inline void flush_tlb_kernel_range(unsigned long start,
					  unsigned long end)
{
	um_tlb_mark_sync(&init_mm, start, end);

	/* Kernel needs to be synced immediately */
	um_tlb_sync(&init_mm);
}

#endif