From patchwork Thu May 18 08:28:43 2023
X-Patchwork-Submitter: stsp
X-Patchwork-Id: 69573
From: Stas Sergeev
To: libc-alpha@sourceware.org
Cc: Stas Sergeev
Subject: [PATCH 03/14] rework maphole
Date: Thu, 18 May 2023 13:28:43 +0500
Message-Id: <20230518082854.3903342-4-stsp2@yandex.ru>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230518082854.3903342-1-stsp2@yandex.ru>
References: <20230518082854.3903342-1-stsp2@yandex.ru>

Remove the "has_holes" argument that was used to mprotect the entire
initial mapping as PROT_NONE. Instead, apply PROT_NONE to each
individual hole. This is needed to make it possible to split the
protection stage from the mmap stage.

The test suite was run on x86_64/64 and showed no regressions.
Signed-off-by: Stas Sergeev
---
 elf/dl-load.c         |  8 +++----
 elf/dl-load.h         |  3 +--
 elf/dl-map-segments.h | 52 ++++++++++++++++++++++++++-----------------
 3 files changed, 36 insertions(+), 27 deletions(-)

diff --git a/elf/dl-load.c b/elf/dl-load.c
index 39c63ff1b3..4007a4aae3 100644
--- a/elf/dl-load.c
+++ b/elf/dl-load.c
@@ -1089,7 +1089,6 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
   /* Scan the program header table, collecting its load commands.  */
   struct loadcmd loadcmds[l->l_phnum];
   size_t nloadcmds = 0;
-  bool has_holes = false;
   bool empty_dynamic = false;
   ElfW(Addr) p_align_max = 0;
 
@@ -1141,6 +1140,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
          if (powerof2 (ph->p_align) && ph->p_align > p_align_max)
            p_align_max = ph->p_align;
          c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
+         c->maphole = 0;
 
          DIAG_PUSH_NEEDS_COMMENT;
 
@@ -1153,8 +1153,8 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
 #endif
          /* Determine whether there is a gap between the last segment
             and this one.  */
-         if (nloadcmds > 1 && c[-1].mapend != c->mapstart)
-           has_holes = true;
+         if (nloadcmds > 1 && c[-1].mapend < c->mapstart)
+           c[-1].maphole = c->mapstart - c[-1].mapend;
          DIAG_POP_NEEDS_COMMENT;
 
          /* Optimize a common case.  */
@@ -1256,7 +1256,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
      l_map_start, l_map_end, l_addr, l_contiguous, l_text_end, l_phdr
    */
   errstring = _dl_map_segments (l, fd, header, type, loadcmds, nloadcmds,
-                                maplength, has_holes, loader);
+                                maplength, loader);
   if (__glibc_unlikely (errstring != NULL))
     {
       /* Mappings can be in an inconsistent state: avoid unmap.  */
diff --git a/elf/dl-load.h b/elf/dl-load.h
index ecf6910c68..029181e8c8 100644
--- a/elf/dl-load.h
+++ b/elf/dl-load.h
@@ -75,7 +75,7 @@ ELF_PREFERRED_ADDRESS_DATA;
    Its details have been expanded out and converted.  */
 struct loadcmd
 {
-  ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign;
+  ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign, maphole;
   ElfW(Off) mapoff;
   int prot;                             /* PROT_* bits.  */
 };
@@ -118,7 +118,6 @@ static const char *_dl_map_segments (struct link_map *l, int fd,
                                      const struct loadcmd loadcmds[],
                                      size_t nloadcmds,
                                      const size_t maplength,
-                                     bool has_holes,
                                      struct link_map *loader);
 
 /* All the error message strings _dl_map_segments might return are
diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
index 6a6127f773..080199b76e 100644
--- a/elf/dl-map-segments.h
+++ b/elf/dl-map-segments.h
@@ -77,7 +77,7 @@ static __always_inline const char *
 _dl_map_segments (struct link_map *l, int fd,
                   const ElfW(Ehdr) *header, int type,
                   const struct loadcmd loadcmds[], size_t nloadcmds,
-                  const size_t maplength, bool has_holes,
+                  const size_t maplength,
                   struct link_map *loader)
 {
   const struct loadcmd *c = loadcmds;
@@ -106,25 +106,6 @@ _dl_map_segments (struct link_map *l, int fd,
 
       l->l_map_end = l->l_map_start + maplength;
       l->l_addr = l->l_map_start - c->mapstart;
-
-      if (has_holes)
-        {
-          /* Change protection on the excess portion to disallow all access;
-             the portions we do not remap later will be inaccessible as if
-             unallocated.  Then jump into the normal segment-mapping loop to
-             handle the portion of the segment past the end of the file
-             mapping.  */
-          if (__glibc_unlikely (loadcmds[nloadcmds - 1].mapstart <
-                                c->mapend))
-            return N_("ELF load command address/offset not page-aligned");
-          if (__glibc_unlikely
-              (__mprotect ((caddr_t) (l->l_addr + c->mapend),
-                           loadcmds[nloadcmds - 1].mapstart - c->mapend,
-                           PROT_NONE) < 0))
-            return DL_MAP_SEGMENTS_ERROR_MPROTECT;
-        }
-
-      l->l_contiguous = 1;
     }
   else
     {
@@ -136,11 +117,14 @@ _dl_map_segments (struct link_map *l, int fd,
       if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
         return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
       l->l_map_end = l->l_map_start + maplength;
-      l->l_contiguous = !has_holes;
     }
 
+  /* Reset to 0 later if hole found. */
+  l->l_contiguous = 1;
   while (c < &loadcmds[nloadcmds])
     {
+      ElfW(Addr) hole_start, hole_size;
+
       if (c->mapend > c->mapstart
           /* Map the segment contents from the file.  */
           && (__mmap ((void *) (l->l_addr + c->mapstart),
@@ -157,11 +141,15 @@ _dl_map_segments (struct link_map *l, int fd,
           /* Extra zero pages should appear at the end of this segment,
              after the data mapped from the file.  */
           ElfW(Addr) zero, zeroend, zeropage;
+          ElfW(Off) hole_off;
 
           zero = l->l_addr + c->dataend;
           zeroend = l->l_addr + c->allocend;
          zeropage = ((zero + GLRO(dl_pagesize) - 1)
                      & ~(GLRO(dl_pagesize) - 1));
+          hole_start = ALIGN_UP (c->allocend, GLRO(dl_pagesize));
+          hole_off = hole_start - c->mapend;
+          hole_size = c->maphole - hole_off;
 
           if (zeroend < zeropage)
             /* All the extra data is in the last page of the segment.
@@ -194,6 +182,28 @@ _dl_map_segments (struct link_map *l, int fd,
                    return DL_MAP_SEGMENTS_ERROR_MPROTECT;
            }
        }
+      else
+        {
+          hole_start = c->mapend;
+          hole_size = c->maphole;
+        }
+
+      if (__glibc_unlikely (c->maphole))
+        {
+          if (__glibc_likely (type == ET_DYN))
+            {
+              if (hole_size)
+                {
+                  if (__mprotect ((caddr_t) (l->l_addr + hole_start),
+                                  hole_size, PROT_NONE) < 0)
+                    return DL_MAP_SEGMENTS_ERROR_MPROTECT;
+                }
+            }
+          else if (l->l_contiguous)
+            {
+              l->l_contiguous = 0;
+            }
+        }
 
       ++c;
     }
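
As a standalone illustration of the approach (not part of the patch and not glibc
code), the sketch below applies PROT_NONE to each hole between consecutive
page-aligned segments individually, similar in spirit to what the new per-loadcmd
"maphole" field enables in _dl_map_segments. The "struct segment" type, the
"protect_holes" helper and the example layout are hypothetical names used only
for this sketch.

/* Standalone sketch: protect each inter-segment hole with PROT_NONE,
   one mprotect per hole, instead of one big mprotect over the whole
   span between the first segment's mapend and the last mapstart.  */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

struct segment                  /* hypothetical stand-in for struct loadcmd */
{
  uintptr_t mapstart;           /* page-aligned start, relative to the base */
  uintptr_t mapend;             /* page-aligned end of the file-backed part */
  uintptr_t maphole;            /* gap up to the next mapstart, 0 if none */
};

/* Apply PROT_NONE to every hole individually; the caller would map a
   failure to an error string such as DL_MAP_SEGMENTS_ERROR_MPROTECT.  */
static int
protect_holes (char *base, const struct segment *segs, size_t nsegs)
{
  for (size_t i = 0; i < nsegs; i++)
    if (segs[i].maphole != 0
        && mprotect (base + segs[i].mapend, segs[i].maphole, PROT_NONE) < 0)
      return -1;
  return 0;
}

int
main (void)
{
  size_t ps = (size_t) sysconf (_SC_PAGESIZE);

  /* Reserve one contiguous region covering both segments and the hole,
     like the initial mmap of the whole maplength.  */
  char *base = mmap (NULL, 4 * ps, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (base == MAP_FAILED)
    return 1;

  /* Two example segments with a one-page hole between them.  */
  struct segment segs[] = {
    { .mapstart = 0,      .mapend = ps,     .maphole = ps },
    { .mapstart = 2 * ps, .mapend = 4 * ps, .maphole = 0  },
  };

  if (protect_holes (base, segs, 2) < 0)
    return 1;

  /* The segments stay usable; only the hole page is now inaccessible.  */
  memset (base, 0, ps);
  memset (base + 2 * ps, 0, 2 * ps);
  printf ("hole [%p, %p) is now PROT_NONE\n",
          (void *) (base + ps), (void *) (base + 2 * ps));

  munmap (base, 4 * ps);
  return 0;
}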