Mirror of https://github.com/torvalds/linux.git, synced 2025-04-09 14:45:27 +00:00

Patch series "hugetlb/CMA improvements for large systems", v5. On large systems, we observed some issues with hugetlb and CMA: 1) When specifying a large number of hugetlb boot pages (hugepages= on the commandline), the kernel may run out of memory before it even gets to HVO. For example, if you have a 3072G system, and want to use 3024 1G hugetlb pages for VMs, that should leave you plenty of space for the hypervisor, provided you have the hugetlb vmemmap optimization (HVO) enabled. However, since the vmemmap pages are always allocated first, and then later in boot freed, you will actually run yourself out of memory before you can do HVO. This means not getting all the hugetlb pages you want, and worse, failure to boot if there is an allocation failure in the system from which it can't recover. 2) There is a system setup where you might want to use hugetlb_cma with a large value (say, again, 3024 out of 3072G like above), and then lower that if system usage allows it, to make room for non-hugetlb processes. For this, a variation of the problem above applies: the kernel runs out of unmovable space to allocate from before you finish boot, since your CMA area takes up all the space. 3) CMA wants to use one big contiguous area for allocations. Which fails if you have the aforementioned 3T system with a gap in the middle of physical memory (like the < 40bits BIOS DMA area seen on some AMD systems). You then won't be able to set up a CMA area for one of the NUMA nodes, leading to loss of half of your hugetlb CMA area. 4) Under the scenario mentioned in 2), when trying to grow the number of hugetlb pages after dropping it for a while, new CMA allocations may fail occasionally. This is not unexpected, some transient references on pages may prevent cma_alloc from succeeding under memory pressure. However, the hugetlb code then falls back to a normal contiguous alloc, which may end up succeeding. This is not always desired behavior. If you have a large CMA area, then the kernel has a restricted amount of memory it can do unmovable allocations from (a well known issue). A normal contiguous alloc may eat further in to this space. To resolve these issues, do the following: * Add hooks to the section init code to do custom initialization of memmap pages. Hugetlb bootmem (memblock) allocated pages can then be pre-HVOed. This avoids allocating a large number of vmemmap pages early in boot, only to have them be freed again later, and also avoids running out of memory as described under 1). Using these hooks for hugetlb is optional. It requires moving hugetlb bootmem allocation to an earlier spot by the architecture. This has been enabled on x86. * hugetlb_cma doesn't care about the CMA area it uses being one large contiguous range. Multiple smaller ranges are fine. The only requirements are that the areas should be on one NUMA node, and individual gigantic pages should be allocatable from them. So, implement multi-range support for CMA, avoiding issue 3). * Introduce a hugetlb_cma_only option on the commandline. This only allows allocations from CMA for gigantic pages, if hugetlb_cma= is also specified. * With hugetlb_cma_only active, it also makes sense to be able to pre-allocate gigantic hugetlb pages at boot time from the CMA area(s). Add a rudimentary early CMA allocation interface, that just grabs a piece of memblock-allocated space from the CMA area, which gets marked as allocated in the CMA bitmap when the CMA area is initialized. 
With this, hugepages= can be supported with hugetlb_cma=, making scenario 2) work.

Additionally, fix some minor bugs, with one worth mentioning: since hugetlb gigantic bootmem pages are allocated by memblock, they may span multiple zones, as memblock doesn't (and mostly can't) know about zones. This can cause problems. A hugetlb page spanning multiple zones is bad, and it's worse with HVO, when the de-HVO step effectively sneakily re-assigns pages to a different zone than originally configured, since the tail pages all inherit the zone from the first 60 tail pages. This condition is not common, but can be easily reproduced using ZONE_MOVABLE. To fix this, add checks to see if gigantic bootmem pages intersect with multiple zones, and do not use them if they do, giving them back to the page allocator instead.

The first patch is kind of along for the ride, except that maintaining an available_count for a CMA area is convenient for the multiple-range support.

This patch (of 27):

In addition to the number of allocations and releases, system management software may like to be aware of the size of CMA areas and how many pages are available in them. This information is currently not available, so export it in the total_pages and available_pages sysfs attributes, respectively.

The name 'available_pages' was picked over 'free_pages' because 'free' implies that the pages are unused. But they might not be; they just haven't been used by cma_alloc().

The number of available pages is tracked regardless of CONFIG_CMA_SYSFS, allowing for a few minor shortcuts in the code, avoiding bitmap operations.

Link: https://lkml.kernel.org/r/20250228182928.2645936-2-fvdl@google.com
Signed-off-by: Frank van der Linden <fvdl@google.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Joao Martins <joao.m.martins@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin (Cruise) <roman.gushchin@linux.dev>
Cc: Usama Arif <usamaarif642@gmail.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Dan Carpenter <dan.carpenter@linaro.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
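To make the "avoiding bitmap operations" point concrete, below is a minimal sketch, not the actual mm/cma.c hunk from this patch; cma_track_range() is a hypothetical helper invented for illustration. It shows how an available_count field can be kept in sync with the CMA bitmap under cma->lock, so that reporting the number of free pages is a plain field read instead of a bitmap scan. The sysfs side, mm/cma_sysfs.c as of this patch, is shown in full further below.

/*
 * Illustrative sketch only -- cma_track_range() does not exist in mm/cma.c.
 * The idea: every place that sets or clears a range in cma->bitmap also
 * adjusts cma->available_count under cma->lock, so readers never need to
 * count free bits in the bitmap.
 */
static void cma_track_range(struct cma *cma, unsigned long bitmap_no,
			    unsigned long bitmap_count, unsigned long count,
			    bool allocating)
{
	spin_lock_irq(&cma->lock);
	if (allocating) {
		bitmap_set(cma->bitmap, bitmap_no, bitmap_count);
		cma->available_count -= count;	/* pages handed out by cma_alloc() */
	} else {
		bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
		cma->available_count += count;	/* pages given back by cma_release() */
	}
	spin_unlock_irq(&cma->lock);
}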
// SPDX-License-Identifier: GPL-2.0
/*
 * CMA SysFS Interface
 *
 * Copyright (c) 2021 Minchan Kim <minchan@kernel.org>
 */

#include <linux/cma.h>
#include <linux/kernel.h>
#include <linux/slab.h>

#include "cma.h"

#define CMA_ATTR_RO(_name) \
	static struct kobj_attribute _name##_attr = __ATTR_RO(_name)

void cma_sysfs_account_success_pages(struct cma *cma, unsigned long nr_pages)
{
	atomic64_add(nr_pages, &cma->nr_pages_succeeded);
}

void cma_sysfs_account_fail_pages(struct cma *cma, unsigned long nr_pages)
{
	atomic64_add(nr_pages, &cma->nr_pages_failed);
}

void cma_sysfs_account_release_pages(struct cma *cma, unsigned long nr_pages)
{
	atomic64_add(nr_pages, &cma->nr_pages_released);
}

static inline struct cma *cma_from_kobj(struct kobject *kobj)
{
	return container_of(kobj, struct cma_kobject, kobj)->cma;
}

static ssize_t alloc_pages_success_show(struct kobject *kobj,
					struct kobj_attribute *attr, char *buf)
{
	struct cma *cma = cma_from_kobj(kobj);

	return sysfs_emit(buf, "%llu\n",
			  atomic64_read(&cma->nr_pages_succeeded));
}
CMA_ATTR_RO(alloc_pages_success);

static ssize_t alloc_pages_fail_show(struct kobject *kobj,
				     struct kobj_attribute *attr, char *buf)
{
	struct cma *cma = cma_from_kobj(kobj);

	return sysfs_emit(buf, "%llu\n", atomic64_read(&cma->nr_pages_failed));
}
CMA_ATTR_RO(alloc_pages_fail);

static ssize_t release_pages_success_show(struct kobject *kobj,
					  struct kobj_attribute *attr, char *buf)
{
	struct cma *cma = cma_from_kobj(kobj);

	return sysfs_emit(buf, "%llu\n", atomic64_read(&cma->nr_pages_released));
}
CMA_ATTR_RO(release_pages_success);

static ssize_t total_pages_show(struct kobject *kobj,
				struct kobj_attribute *attr, char *buf)
{
	struct cma *cma = cma_from_kobj(kobj);

	return sysfs_emit(buf, "%lu\n", cma->count);
}
CMA_ATTR_RO(total_pages);

static ssize_t available_pages_show(struct kobject *kobj,
				    struct kobj_attribute *attr, char *buf)
{
	struct cma *cma = cma_from_kobj(kobj);

	return sysfs_emit(buf, "%lu\n", cma->available_count);
}
CMA_ATTR_RO(available_pages);

static void cma_kobj_release(struct kobject *kobj)
{
	struct cma *cma = cma_from_kobj(kobj);
	struct cma_kobject *cma_kobj = cma->cma_kobj;

	kfree(cma_kobj);
	cma->cma_kobj = NULL;
}

static struct attribute *cma_attrs[] = {
	&alloc_pages_success_attr.attr,
	&alloc_pages_fail_attr.attr,
	&release_pages_success_attr.attr,
	&total_pages_attr.attr,
	&available_pages_attr.attr,
	NULL,
};
ATTRIBUTE_GROUPS(cma);

static const struct kobj_type cma_ktype = {
	.release = cma_kobj_release,
	.sysfs_ops = &kobj_sysfs_ops,
	.default_groups = cma_groups,
};

static int __init cma_sysfs_init(void)
{
	struct kobject *cma_kobj_root;
	struct cma_kobject *cma_kobj;
	struct cma *cma;
	int i, err;

	cma_kobj_root = kobject_create_and_add("cma", mm_kobj);
	if (!cma_kobj_root)
		return -ENOMEM;

	for (i = 0; i < cma_area_count; i++) {
		cma_kobj = kzalloc(sizeof(*cma_kobj), GFP_KERNEL);
		if (!cma_kobj) {
			err = -ENOMEM;
			goto out;
		}

		cma = &cma_areas[i];
		cma->cma_kobj = cma_kobj;
		cma_kobj->cma = cma;
		err = kobject_init_and_add(&cma_kobj->kobj, &cma_ktype,
					   cma_kobj_root, "%s", cma->name);
		if (err) {
			kobject_put(&cma_kobj->kobj);
			goto out;
		}
	}

	return 0;

out:
	while (--i >= 0) {
		cma = &cma_areas[i];
		kobject_put(&cma->cma_kobj->kobj);
	}
	kobject_put(cma_kobj_root);

	return err;
}
subsys_initcall(cma_sysfs_init);
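Once cma_sysfs_init() has run, each CMA area gets a directory under /sys/kernel/mm/cma/<area name>/ (cma_kobj_root is created under mm_kobj, i.e. /sys/kernel/mm), holding the attributes registered above, including the new total_pages and available_pages. A small userspace sketch for reading the two new files follows; the area name "hugetlb0" is only an example, and actual names depend on how the CMA areas were set up on a given system:

/* Illustrative userspace sketch: print total and available pages of one CMA area. */
#include <stdio.h>

static long read_cma_value(const char *area, const char *attr)
{
	char path[256];
	long val = -1;
	FILE *f;

	/* Directory layout follows cma_sysfs_init(): /sys/kernel/mm/cma/<name>/<attr> */
	snprintf(path, sizeof(path), "/sys/kernel/mm/cma/%s/%s", area, attr);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	/* "hugetlb0" is a hypothetical area name; list /sys/kernel/mm/cma/ to find yours. */
	printf("total_pages:     %ld\n", read_cma_value("hugetlb0", "total_pages"));
	printf("available_pages: %ld\n", read_cma_value("hugetlb0", "available_pages"));
	return 0;
}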