From patchwork Thu Dec 7 10:32:30 2023
X-Patchwork-Submitter: Florian Weimer
X-Patchwork-Id: 81650
From: Florian Weimer
To: libc-alpha@sourceware.org
Subject: [PATCH v3 25/32] elf: Move most of the _dl_find_object data to the protected heap
Date: Thu, 07 Dec 2023 11:32:30 +0100
List-Id: Libc-alpha mailing list
The heap is mostly read-only by design, so allocation padding is no
longer required.  The protected heap is not visible to malloc, so it's
not necessary to deallocate the allocations during __libc_freeres
anymore.
---
 elf/dl-find_object.c  | 94 ++++++++-----------------------------------
 elf/dl-find_object.h  |  3 --
 elf/dl-libc_freeres.c |  2 -
 3 files changed, 16 insertions(+), 83 deletions(-)

diff --git a/elf/dl-find_object.c b/elf/dl-find_object.c
index f81351b0ef..82f493d817 100644
--- a/elf/dl-find_object.c
+++ b/elf/dl-find_object.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -91,8 +92,9 @@ static struct dl_find_object_internal *_dlfo_nodelete_mappings
    to avoid data races.

    The memory allocations are never deallocated, but slots used for
-   objects that have been dlclose'd can be reused by dlopen.  The
-   memory can live in the regular C malloc heap.
+   objects that have been dlclose'd can be reused by dlopen.
+   Allocations come from the protected memory heap.  This makes it
+   harder to inject DWARF data.

    The segments are populated from the start of the list, with the
    mappings with the highest address.  Only if this segment is full,
@@ -111,9 +113,6 @@ struct dlfo_mappings_segment
      initialization; read in the TM region.  */
   struct dlfo_mappings_segment *previous;

-  /* Used by __libc_freeres to deallocate malloc'ed memory.  */
-  void *to_free;
-
   /* Count of array elements in use and allocated.  */
   size_t size;                  /* Read in the TM region.  */
   size_t allocated;
@@ -154,44 +153,15 @@ _dlfo_mappings_segment_count_allocated (struct dlfo_mappings_segment *seg)
 /* This is essentially an arbitrary value.  dlopen allocates plenty
    of memory anyway, so over-allocated a bit does not hurt.  Not having
-   many small-ish segments helps to avoid many small binary searches.
-   Not using a power of 2 means that we do not waste an extra page
-   just for the malloc header if a mapped allocation is used in the
-   glibc allocator.  */
-enum { dlfo_mappings_initial_segment_size = 63 };
-
-/* Allocate an empty segment.  This used for the first ever
-   allocation.  */
-static struct dlfo_mappings_segment *
-_dlfo_mappings_segment_allocate_unpadded (size_t size)
-{
-  if (size < dlfo_mappings_initial_segment_size)
-    size = dlfo_mappings_initial_segment_size;
-  /* No overflow checks here because the size is a mapping count, and
-     struct link_map_private is larger than what we allocate here.  */
-  enum
-    {
-      element_size = sizeof ((struct dlfo_mappings_segment) {}.objects[0])
-    };
-  size_t to_allocate = (sizeof (struct dlfo_mappings_segment)
-                        + size * element_size);
-  struct dlfo_mappings_segment *result = malloc (to_allocate);
-  if (result != NULL)
-    {
-      result->previous = NULL;
-      result->to_free = NULL; /* Minimal malloc memory cannot be freed.  */
-      result->size = 0;
-      result->allocated = size;
-    }
-  return result;
-}
+   many small-ish segments helps to avoid many small binary searches.  */
+enum { dlfo_mappings_initial_segment_size = 64 };

 /* Allocate an empty segment that is at least SIZE large.  PREVIOUS
    points to the chain of previously allocated segments and can be
    NULL.  */
 static struct dlfo_mappings_segment *
 _dlfo_mappings_segment_allocate (size_t size,
-                                 struct dlfo_mappings_segment * previous)
+                                 struct dlfo_mappings_segment *previous)
 {
   /* Exponential sizing policies, so that lookup approximates a binary
      search.  */
@@ -200,11 +170,10 @@ _dlfo_mappings_segment_allocate (size_t size,
       if (previous == NULL)
         minimum_growth = dlfo_mappings_initial_segment_size;
       else
-        minimum_growth = 2* previous->allocated;
+        minimum_growth = 2 * previous->allocated;
       if (size < minimum_growth)
         size = minimum_growth;
     }
-  enum { cache_line_size_estimate = 128 };
   /* No overflow checks here because the size is a mapping count, and
      struct link_map_private is larger than what we allocate here.  */
   enum
@@ -212,28 +181,13 @@ _dlfo_mappings_segment_allocate (size_t size,
       element_size = sizeof ((struct dlfo_mappings_segment) {}.objects[0])
     };
   size_t to_allocate = (sizeof (struct dlfo_mappings_segment)
-                        + size * element_size
-                        + 2 * cache_line_size_estimate);
-  char *ptr = malloc (to_allocate);
-  if (ptr == NULL)
+                        + size * element_size);
+  struct dlfo_mappings_segment *result = _dl_protmem_allocate (to_allocate);
+  if (result == NULL)
     return NULL;
-  char *original_ptr = ptr;
-  /* Start and end at a (conservative) 128-byte cache line boundary.
-     Do not use memalign for compatibility with partially interposing
-     malloc implementations.  */
-  char *end = PTR_ALIGN_DOWN (ptr + to_allocate, cache_line_size_estimate);
-  ptr = PTR_ALIGN_UP (ptr, cache_line_size_estimate);
-  struct dlfo_mappings_segment *result
-    = (struct dlfo_mappings_segment *) ptr;
   result->previous = previous;
-  result->to_free = original_ptr;
   result->size = 0;
-  /* We may have obtained slightly more space if malloc happened
-     to provide an over-aligned pointer.  */
-  result->allocated = (((uintptr_t) (end - ptr)
-                        - sizeof (struct dlfo_mappings_segment))
-                       / element_size);
-  assert (result->allocated >= size);
+  result->allocated = size;
   return result;
 }

@@ -577,11 +531,12 @@ _dl_find_object_init (void)
   /* Allocate the data structures.  */
   size_t loaded_size = _dlfo_process_initial ();

-  _dlfo_nodelete_mappings = malloc (_dlfo_nodelete_mappings_size
-                                    * sizeof (*_dlfo_nodelete_mappings));
+  _dlfo_nodelete_mappings
+    = _dl_protmem_allocate (_dlfo_nodelete_mappings_size
+                            * sizeof (*_dlfo_nodelete_mappings));
   if (loaded_size > 0)
     _dlfo_loaded_mappings[0]
-      = _dlfo_mappings_segment_allocate_unpadded (loaded_size);
+      = _dlfo_mappings_segment_allocate (loaded_size, NULL);
   if (_dlfo_nodelete_mappings == NULL
       || (loaded_size > 0 && _dlfo_loaded_mappings[0] == NULL))
     _dl_fatal_printf ("\
@@ -838,20 +793,3 @@ _dl_find_object_dlclose (struct link_map_private *map)
         return;
       }
   }
-
-void
-_dl_find_object_freeres (void)
-{
-  for (int idx = 0; idx < 2; ++idx)
-    {
-      for (struct dlfo_mappings_segment *seg = _dlfo_loaded_mappings[idx];
-           seg != NULL; )
-        {
-          struct dlfo_mappings_segment *previous = seg->previous;
-          free (seg->to_free);
-          seg = previous;
-        }
-      /* Stop searching in shared objects.  */
-      _dlfo_loaded_mappings[idx] = 0;
-    }
-}
diff --git a/elf/dl-find_object.h b/elf/dl-find_object.h
index edcc0a7755..54601e7d00 100644
--- a/elf/dl-find_object.h
+++ b/elf/dl-find_object.h
@@ -135,7 +135,4 @@ bool _dl_find_object_update (struct link_map_private *new_l) attribute_hidden;
    data structures.  Needs to be protected by loader write lock.  */
 void _dl_find_object_dlclose (struct link_map_private *l) attribute_hidden;

-/* Called from __libc_freeres to deallocate malloc'ed memory.  */
-void _dl_find_object_freeres (void) attribute_hidden;
-
 #endif /* _DL_FIND_OBJECT_H */
diff --git a/elf/dl-libc_freeres.c b/elf/dl-libc_freeres.c
index 88c0e444b8..066629639c 100644
--- a/elf/dl-libc_freeres.c
+++ b/elf/dl-libc_freeres.c
@@ -128,6 +128,4 @@ __rtld_libc_freeres (void)
   void *scope_free_list = GL(dl_scope_free_list);
   GL(dl_scope_free_list) = NULL;
   free (scope_free_list);
-
-  _dl_find_object_freeres ();
 }