From patchwork Thu Aug 14 13:31:13 2025
X-Patchwork-Submitter: William Hunt
X-Patchwork-Id: 118343
From: William Hunt <william.hunt@arm.com>
To: libc-alpha@sourceware.org
Cc: William Hunt <william.hunt@arm.com>
Subject: [PATCH v2] malloc: Mark pages as MADV_DONTNEED in realloc to shrink/grow
Date: Thu, 14 Aug 2025 14:31:13 +0100
Message-ID: <20250814133113.281886-1-william.hunt@arm.com>

When reallocating mmap()ed chunks, use madvise() when shrinking to mark
the unused pages as MADV_DONTNEED, falling back to mremap() only if that
fails.  Later calls to realloc() that stay under the original size are
allowed to grow back into the MADV_DONTNEED pages.  This improves the
efficiency of shrinking large mmap()ed chunks, as madvise() is
significantly faster than mremap().
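To illustrate the mechanism outside of glibc, here is a minimal
standalone sketch (illustrative only; the mapping size, split point and
error handling are arbitrary and not part of this patch).  Marking the
tail of an anonymous mapping as MADV_DONTNEED releases the backing
frames while the mapping itself stays in place, and the reclaimed pages
read back as zeroes on the next access:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int
main (void)
{
  size_t page = (size_t) sysconf (_SC_PAGESIZE);
  size_t len = 16 * page;

  /* Stand-in for a large mmap()ed chunk.  */
  char *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    return 1;
  memset (p, 0xaa, len);

  /* "Shrink": free the physical frames backing the tail pages while
     keeping the mapping, so the address space is not fragmented and a
     later grow back into this range needs no new mapping.  */
  size_t keep = 4 * page;
  if (madvise (p + keep, len - keep, MADV_DONTNEED) != 0)
    return 1;

  /* The reclaimed pages are zero-filled on the next access.  */
  printf ("%d\n", p[keep]);   /* prints 0 */

  munmap (p, len);
  return 0;
}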
The change also improves robustness: if mremap() fails when shrinking,
another madvise() without the threshold check is attempted, and if that
also fails the original pointer is returned, so as to avoid a potential
malloc+memcpy+free failure.  mremap() usually shrinks in place, which
unmaps the tail of the region and fragments the VAS; madvise() keeps
the VAS intact while freeing the physical frames backing the unused
pages, so that they read back as zero-filled pages on the next access.
To keep the process' VAS from being exhausted, introduce a threshold
for the maximum relative size of an mmap()ed chunk that can be marked
MADV_DONTNEED.

Place the logic for reallocating mmap()ed chunks into a new
_int_realloc_mmapped function to increase modularity.

Add a tst-realloc-madvise.c test to verify that the relative threshold
works as intended.  Add a bench-realloc-shrink.c benchtest, which shows
a 210% increase in reallocs/sec when shrinking up to an arbitrary
per-process limit, verifying that realloc handles shrinking large
mmap()ed chunks more efficiently with madvise() than with mremap().
Update malloc-check.c to call _int_realloc_mmapped directly when
testing realloc if the chunk is mmap()ed.

Changes from v1:
- Used the correct tst-realloc-madvise.c; v1 had an older, incorrect
  version.

Passed regression testing, OK for commit?
---
 benchtests/Makefile               |   3 +
 benchtests/bench-realloc-shrink.c | 180 +++++++++++++++++++++++++
 malloc/Makefile                   |   6 +-
 malloc/malloc-check.c             |  24 +---
 malloc/malloc.c                   | 111 +++++++++++-----
 malloc/tst-realloc-madvise.c      | 212 ++++++++++++++++++++++++++++++
 6 files changed, 481 insertions(+), 55 deletions(-)
 create mode 100644 benchtests/bench-realloc-shrink.c
 create mode 100644 malloc/tst-realloc-madvise.c

diff --git a/benchtests/Makefile b/benchtests/Makefile
index 53f84bfeb9..5aa171a0f7 100644
--- a/benchtests/Makefile
+++ b/benchtests/Makefile
@@ -343,10 +343,12 @@ bench-malloc := \
   malloc-simple \
   malloc-tcache \
   malloc-thread \
+  realloc-shrink \
 # bench-malloc
 else
 bench-malloc := $(filter malloc-%,${BENCHSET})
 bench-malloc += $(filter calloc-%,${BENCHSET})
+bench-malloc += $(filter realloc-%,${BENCHSET})
 endif
 
 ifeq (${STATIC-BENCHTESTS},yes)
@@ -473,6 +475,7 @@ VALIDBENCHSETNAMES := \
   malloc-tcache \
   malloc-thread \
   math-benchset \
+  realloc-shrink \
   stdio-benchset \
   stdio-common-benchset \
   stdlib-benchset \
diff --git a/benchtests/bench-realloc-shrink.c b/benchtests/bench-realloc-shrink.c
new file mode 100644
index 0000000000..8dedc28afd
--- /dev/null
+++ b/benchtests/bench-realloc-shrink.c
@@ -0,0 +1,180 @@
+/* Measure shrinking mmap()'ed chunks with madvise.
+   Copyright (C) 2025 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.
*/ + +#include +#include +#include +#include +#include +#include +#include +#include +#include "bench-timing.h" +#include "json-lib.h" + +#define PAGESIZE getpagesize() +#define MMAP_RELATIVE_DONTNEED (1.0 / 4.0) +#define MAX_MMAP_DONTNEED_MEM (size_t) 4194304 +#define MAX_SYSTEM_DONTNEED_MEM (size_t) 134217728 +#define START_SIZE (size_t) (MAX_MMAP_DONTNEED_MEM + PAGESIZE) +#define NUM_ALLOCS (size_t) (MAX_SYSTEM_DONTNEED_MEM / MAX_MMAP_DONTNEED_MEM) +#define NUM_ITERS 10 +#define MAX_PAGES_DIFF (size_t) (((START_SIZE - PAGESIZE) * (1 - MMAP_RELATIVE_DONTNEED)) / PAGESIZE) +#define TEST_NAME "realloc-shrink" + +static size_t num_pages_diff = 1; +#define get_num_shrinks() (size_t) (MAX_PAGES_DIFF / num_pages_diff) +static size_t num_shrinks = 1; +static size_t num_realloc_iters = 0; +static void *ps[NUM_ALLOCS]; + +typedef struct +{ + size_t num_iters; + double mmap_relative_dontneed; + size_t max_mmap_dontneed_mem; + timing_t elapsed; +} realloc_bench_args; + +static realloc_bench_args args; + +void +alloc_pointers (void) +{ + for (size_t i = 0; i < NUM_ALLOCS; ++i) + ps[i] = malloc (START_SIZE - PAGESIZE / 2); +} + +void +free_pointers (void) +{ + for (size_t i = 0; i < NUM_ALLOCS; ++i) + { + free (ps[i]); + ps[i] = NULL; + } +} + +void +shrink_max_times (void) +{ + for (size_t i = 0; i < NUM_ALLOCS; ++i) + { + size_t shrink_size = PAGESIZE * num_pages_diff; + for (size_t j = 0; j < num_shrinks; ++j) + { + ++num_realloc_iters; + ps[i] = realloc (ps[i], (size_t) (START_SIZE - (PAGESIZE / 2) - shrink_size)); + shrink_size += (PAGESIZE * num_pages_diff); + assert (ps[i] != NULL); + } + } +} + +void +grow_max_times (void) +{ + for (size_t i = 0; i < NUM_ALLOCS; ++i) + { + size_t shrink_size = PAGESIZE * num_pages_diff * (num_shrinks - 1); + for (size_t j = 0; j < num_shrinks; ++j) + { + ++num_realloc_iters; + ps[i] = realloc (ps[i], (size_t) (START_SIZE - (PAGESIZE / 2) - shrink_size)); + shrink_size -= (PAGESIZE * num_pages_diff); + assert (ps[i] != NULL); + } + } +} + +static void +do_benchmark (realloc_bench_args *args) +{ + timing_t start, stop; + size_t num_iters = args->num_iters; + double mmap_relative_dontneed = args->mmap_relative_dontneed; + size_t max_mmap_dontneed_mem = args->max_mmap_dontneed_mem; + + const size_t max_diff = (size_t) ((max_mmap_dontneed_mem * mmap_relative_dontneed) / PAGESIZE); + assert ((max_diff & (max_diff - 1)) == 0); + alloc_pointers (); + + TIMING_NOW (start); + + for (size_t i = 0; i < num_iters; ++i) + { + for (size_t i = 0; i <= __builtin_ctzll (MAX_PAGES_DIFF); ++i) + { + num_pages_diff = (2 << i) / 2; + num_shrinks = get_num_shrinks (); + shrink_max_times (); + grow_max_times (); + } + } + + TIMING_NOW (stop); + TIMING_DIFF (args->elapsed, start, stop); +} + +void +bench (void) +{ + args.num_iters = NUM_ITERS; + args.mmap_relative_dontneed = MMAP_RELATIVE_DONTNEED; + args.max_mmap_dontneed_mem = MAX_MMAP_DONTNEED_MEM; + + do_benchmark (&args); + + free_pointers (); + + json_ctx_t json_ctx; + json_init (&json_ctx, 0, stdout); + json_document_begin (&json_ctx); + json_attr_string (&json_ctx, "timing_type", TIMING_TYPE); + json_attr_object_begin (&json_ctx, "functions"); + json_attr_object_begin (&json_ctx, TEST_NAME); + + json_attr_uint (&json_ctx, "start_size", START_SIZE); + + struct rusage usage; + getrusage (RUSAGE_SELF, &usage); + json_attr_uint (&json_ctx, "max_rss", usage.ru_maxrss); + json_attr_double (&json_ctx, "reallocs/sec", num_realloc_iters / (args.elapsed / 1e9f)); + + json_attr_object_end (&json_ctx); + json_attr_object_end (&json_ctx); 
+ + json_document_end (&json_ctx); +} + +static void usage (const char *name) +{ + fprintf (stderr, "%s\n", name); + exit (1); +} + +int +main (int argc, char **argv) +{ + if (argc != 1) + usage (argv[0]); + + bench (); + + return 0; +} \ No newline at end of file diff --git a/malloc/Makefile b/malloc/Makefile index 83f6c873e8..97b696eab8 100644 --- a/malloc/Makefile +++ b/malloc/Makefile @@ -61,6 +61,7 @@ tests := \ tst-pvalloc \ tst-pvalloc-fortify \ tst-realloc \ + tst-realloc-madvise \ tst-reallocarray \ tst-safe-linking \ tst-tcfree1 tst-tcfree2 tst-tcfree3 tst-tcfree4 \ @@ -113,6 +114,7 @@ tests-exclude-malloc-check = \ tst-memalign-2 \ tst-memalign-3 \ tst-mxfast \ + tst-realloc-madvise \ tst-safe-linking \ # tests-exclude-malloc-check @@ -141,7 +143,8 @@ tests-exclude-hugetlb1 = \ # overlapping region. tests-exclude-hugetlb2 = \ $(tests-exclude-hugetlb1) \ - tst-free-errno + tst-free-errno \ + tst-realloc-madvise tests-malloc-hugetlb1 = \ $(filter-out $(tests-exclude-hugetlb1), $(tests)) tests-malloc-hugetlb2 = \ @@ -187,6 +190,7 @@ tests-exclude-mcheck = \ tst-memalign-2 \ tst-memalign-3 \ tst-mxfast \ + tst-realloc-madvise \ tst-safe-linking \ # tests-exclude-mcheck diff --git a/malloc/malloc-check.c b/malloc/malloc-check.c index 9532316a29..09e1483a12 100644 --- a/malloc/malloc-check.c +++ b/malloc/malloc-check.c @@ -286,28 +286,8 @@ realloc_check (void *oldmem, size_t bytes) if (chunk_is_mmapped (oldp)) { -#if HAVE_MREMAP - mchunkptr newp = mremap_chunk (oldp, chnb); - if (newp) - newmem = chunk2mem_tag (newp); - else -#endif - { - /* Note the extra SIZE_SZ overhead. */ - if (oldsize - SIZE_SZ >= chnb) - newmem = oldmem; /* do nothing */ - else - { - /* Must alloc, copy, free. */ - top_check (); - newmem = _int_malloc (&main_arena, rb); - if (newmem) - { - memcpy (newmem, oldmem, oldsize - CHUNK_HDR_SZ); - munmap_chunk (oldp); - } - } - } + top_check (); + newmem = _int_realloc_mmapped (oldmem, bytes); } else { diff --git a/malloc/malloc.c b/malloc/malloc.c index e08873cad5..ca9ad2a324 100644 --- a/malloc/malloc.c +++ b/malloc/malloc.c @@ -1102,6 +1102,7 @@ static INTERNAL_SIZE_T _int_free_create_chunk (mstate, static void _int_free_maybe_consolidate (mstate, INTERNAL_SIZE_T); static void* _int_realloc(mstate, mchunkptr, INTERNAL_SIZE_T, INTERNAL_SIZE_T); +static void* _int_realloc_mmapped (void *oldmem, size_t bytes); static void* _int_memalign(mstate, size_t, size_t); #if IS_IN (libc) static void* _mid_memalign(size_t, size_t); @@ -3544,9 +3545,8 @@ __libc_realloc (void *oldmem, size_t bytes) if (bytes <= usable) { size_t difference = usable - bytes; - if ((unsigned long) difference < 2 * sizeof (INTERNAL_SIZE_T) - || (chunk_is_mmapped (oldp) && difference <= GLRO (dl_pagesize))) - return oldmem; + if ((unsigned long) difference < 2 * sizeof (INTERNAL_SIZE_T)) + return oldmem; } /* its size */ @@ -3573,35 +3573,7 @@ __libc_realloc (void *oldmem, size_t bytes) nb = checked_request2size (bytes); if (chunk_is_mmapped (oldp)) - { - void *newmem; - -#if HAVE_MREMAP - newp = mremap_chunk (oldp, nb); - if (newp) - { - void *newmem = chunk2mem_tag (newp); - /* Give the new block a different tag. This helps to ensure - that stale handles to the previous mapping are not - reused. There's a performance hit for both us and the - caller for doing this, so we might want to - reconsider. */ - return tag_new_usable (newmem); - } -#endif - /* Note the extra SIZE_SZ overhead. */ - if (oldsize - SIZE_SZ >= nb) - return oldmem; /* do nothing */ - - /* Must alloc, copy, free. 
*/ - newmem = __libc_malloc (bytes); - if (newmem == NULL) - return NULL; /* propagate failure */ - - memcpy (newmem, oldmem, oldsize - CHUNK_HDR_SZ); - munmap_chunk (oldp); - return newmem; - } + return _int_realloc_mmapped (oldmem, bytes); if (SINGLE_THREAD_P) { @@ -5075,6 +5047,81 @@ _int_realloc (mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize, return tag_new_usable (chunk2mem (newp)); } +/* Equivalent to a relative threshold of 1/4 for mmap()'ed chunks. */ +static __always_inline size_t +max_madvise (size_t madvise_sz) +{ + return madvise_sz - (madvise_sz >> 2); +} + +/* Shrink the region by marking pages as MADV_DONTNEED. + The physical frames are released, but madvise() preserves the VAS. + This prevents fragmenting the address space like mremap() would do. */ +static __always_inline bool +_int_realloc_madvise (void *oldmem, mchunkptr oldp, size_t difference) +{ + return __madvise ((char *)oldp + chunksize (oldp) - difference, + difference, MADV_DONTNEED) != -1; +} + +static void * +_int_realloc_mmapped (void *oldmem, size_t bytes) +{ + const mchunkptr oldp = mem2chunk (oldmem); + const INTERNAL_SIZE_T oldsize = chunksize (oldp); + size_t nb = checked_request2size (bytes); + size_t usable = musable (oldmem); + size_t difference = usable - bytes; + + /* For mmap()'ed chunks where the size is within the region's VAS. */ + if (bytes <= usable) + { + if (difference < GLRO (dl_pagesize)) + return oldmem; + + difference = ALIGN_DOWN (difference, GLRO (dl_pagesize)); + /* Don't shrink more than the relative threshold for the chunk. */ + if (difference <= max_madvise (chunksize (oldp)) && + _int_realloc_madvise (oldmem, oldp, difference)) + return oldmem; + } + + void *newmem; + +#if HAVE_MREMAP + void *newp = mremap_chunk (oldp, nb); + if (newp) + { + newmem = chunk2mem_tag (newp); + /* Give the new block a different tag. This helps to ensure + that stale handles to the previous mapping are not + reused. There's a performance hit for both us and the + caller for doing this, so we might want to + reconsider. */ + return tag_new_usable (newmem); + } +#endif + + /* Last attempt to prevent malloc+memcpy+free when shrinking the region. */ + if (bytes <= usable) + { + /* Attempt another madvise() call without the previous threshold check. + Even if the call to madvise() failed, since we are shrinking it is + safer to just return, rather than risk a malloc+memcpy+free error. */ + _int_realloc_madvise (oldmem, oldp, difference); + return oldmem; + } + + /* Must alloc, copy, free if the region grows as a last resort. */ + newmem = __libc_malloc (bytes); + if (newmem == NULL) + return NULL; /* propagate failure */ + + memcpy (newmem, oldmem, oldsize - CHUNK_HDR_SZ); + munmap_chunk (oldp); + return newmem; +} + /* ------------------------------ memalign ------------------------------ */ diff --git a/malloc/tst-realloc-madvise.c b/malloc/tst-realloc-madvise.c new file mode 100644 index 0000000000..374f45f224 --- /dev/null +++ b/malloc/tst-realloc-madvise.c @@ -0,0 +1,212 @@ +/* Test for realloc madvise use in shrinking and growing mmap()'ed chunks. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. 
+ + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include +#include +#include +#include + +#include "tst-malloc-aux.h" + +static int pagesize; +static size_t system_dontneed_mem = 0; +static struct mallinfo2 original_mi; +static size_t original_size; + +static __always_inline size_t +unusable (void *p) +{ + return (uintptr_t) p - ALIGN_DOWN ((uintptr_t) p, pagesize); +} + +static __always_inline void * +checked_malloc (void) +{ + /* Take off half a page to safely handle architectural alignment. + The region will be rounded up to the nearest page size anyways. */ + void *p = malloc (original_size - pagesize / 2); + if (p == NULL && errno == ENOMEM) + FAIL_UNSUPPORTED ("Not enough memory for the minimum mmap threshold"); + TEST_VERIFY (p != NULL); + return p; +} + +static void * +shrink_page (void *p, size_t old_size, bool expect_same) +{ + TEST_VERIFY (p != NULL); + const size_t new_size = old_size - pagesize - pagesize / 2; + + void *oldp = p; + p = realloc (p, new_size); + /* If only marking pages as MADV_DONTNEED, the new pointer will not change. + But mremap may not shrink in-place due to alignment or fragmentation. */ + if (expect_same) + TEST_VERIFY (p == oldp); + + size_t mmap_dontneed_mem; + size_t chunksize = malloc_usable_size (p) + unusable (p); + + if (original_size == chunksize) + { + TEST_VERIFY (expect_same); + mmap_dontneed_mem = ALIGN_DOWN (original_size - new_size - unusable (p), + pagesize); + system_dontneed_mem += pagesize; + TEST_VERIFY (system_dontneed_mem == mmap_dontneed_mem); + } + else + { + TEST_VERIFY (!expect_same); + mmap_dontneed_mem = 0; + } + + struct mallinfo2 new_mi = mallinfo2 (); + size_t min_used_size = original_size >> 2; + /* If there are no more MADV_DONTNEED pages, ignore the threshold. */ + if (mmap_dontneed_mem && mmap_dontneed_mem <= chunksize - min_used_size) + { + TEST_VERIFY (expect_same); + TEST_VERIFY (new_mi.hblkhd - original_mi.hblkhd == original_size); + return p; + } + TEST_VERIFY (!expect_same); + + TEST_VERIFY (new_mi.hblkhd - original_mi.hblkhd == + ALIGN_DOWN (min_used_size - 1, pagesize)); + + return p; +} + +static void * +grow_page (void *p, size_t old_size, size_t old_dontneed, bool expect_same) +{ + TEST_VERIFY (p != NULL); + const size_t usable = original_size - unusable (p); + size_t new_size = old_size + pagesize / 2; + + void *oldp = p; + p = realloc (p, new_size); + /* If the region was not extended the returned pointer will not change. + * If it did extend it may have done so in-place, this cannot be tested. */ + if (expect_same) + TEST_VERIFY (p == oldp); + + new_size = ALIGN_UP (new_size, pagesize); + + struct mallinfo2 new_mi = mallinfo2 (); + /* If growing within the MADV_DONTNEED pages. */ + if (new_size - unusable (p) <= usable && new_size > usable - old_dontneed) + { + TEST_VERIFY (expect_same); + TEST_VERIFY (new_mi.hblkhd - original_mi.hblkhd == original_size); + } + /* Otherwise the allocation should have grown by a page. 
*/ + else + { + TEST_VERIFY (!expect_same); + TEST_VERIFY (new_mi.hblkhd - original_mi.hblkhd == + original_size + pagesize); + } + + return p; +} + +static void +test_allocation_size_threshold (void) +{ + void *p = checked_malloc (); + + struct mallinfo2 new_mi = mallinfo2 (); + /* All previous allocations should be done within the main arena. */ + TEST_VERIFY (new_mi.hblkhd == original_size); + + /* Shrink down to the relative size threshold. */ + size_t size = original_size; + const size_t min_used_size = original_size >> 2; + while (size > min_used_size) + { + p = shrink_page (p, size, true); + size -= pagesize; + } + /* This should exceed the relative size threshold. */ + p = shrink_page (p, size, false); + /* Get a fresh region for testing growing within a region. */ + free (p); + p = checked_malloc (); + system_dontneed_mem = 0; + + /* Reallocate down to the relative threshold. */ + size = min_used_size; + p = realloc(p, size - unusable (p)); + new_mi = mallinfo2 (); + TEST_VERIFY (new_mi.hblkhd - original_mi.hblkhd == original_size); + /* Grow back up to the original size. */ + while (size < original_size) + { + p = grow_page (p, size, original_size - size, + true); + size += pagesize; + } + /* This should exceed the size of the allocation. */ + p = grow_page (p, size, original_size - size, + false); + + /* New pointer to test shrinking immediately past the relative threshold. */ + free (p); + p = checked_malloc (); + system_dontneed_mem = 0; + + /* Shrink a page past the size threshold, this should call mremap(). */ + /* Must take into account the usable size of the original allocation. */ + p = realloc (p, min_used_size - pagesize - pagesize / 2); + new_mi = mallinfo2 (); + TEST_VERIFY (new_mi.hblkhd - original_mi.hblkhd == + min_used_size - pagesize); + + /* Grow a page to the threshold, this should call mremap(). */ + p = realloc (p, min_used_size - pagesize / 2); + new_mi = mallinfo2 (); + TEST_VERIFY (new_mi.hblkhd - original_mi.hblkhd == min_used_size); + + /* Free so the pointer doesn't effect mallinfo for later tests. */ + free (p); + system_dontneed_mem = 0; +} + +static int +do_test (void) +{ + pagesize = getpagesize (); + /* Prevent MORECORE from being used for large regions. */ + mallopt (M_MMAP_THRESHOLD, 131072); + /* The mmap minimum threshold has been set to 128K via tunables to prevent + unexpected calls, so there are no data between comparisons using mmap. */ + original_mi = mallinfo2(); + + /* To ensure that the mmap()'ed region's relative size threshold is tested, + make the original allocation size far below max_mmap_dontneed_mem. */ + original_size = 4 * 1024 * 1024; + test_allocation_size_threshold (); + + return 0; +} + +#include \ No newline at end of file