From patchwork Sat May 13 04:02:17 2023
X-Patchwork-Id: 69283
From: Malte Skarupke <malteskarupke@fastmail.fm>
To: libc-alpha@sourceware.org
Cc: Malte Skarupke
Subject: [PATCH v5 2/9] nptl: Update comments and indentation for new condvar implementation
Date: Sat, 13 May 2023 00:02:17 -0400
Message-Id: <20230513040224.8057-3-malteskarupke@fastmail.fm>
In-Reply-To: <20230513040224.8057-1-malteskarupke@fastmail.fm>
References: <20230513040224.8057-1-malteskarupke@fastmail.fm>

Some comments were wrong after the most
recent commit.  This fixes that.  It also fixes indentation where spaces
were used instead of tabs.

Signed-off-by: Malte Skarupke <malteskarupke@fastmail.fm>
---
 nptl/pthread_cond_common.c |  5 +++--
 nptl/pthread_cond_wait.c   | 39 +++++++++++++++++++--------------------
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c
index a55eee3e6b..350a16fab2 100644
--- a/nptl/pthread_cond_common.c
+++ b/nptl/pthread_cond_common.c
@@ -221,8 +221,9 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq,
      * New waiters arriving concurrently with the group switching will all go
        into G2 until we atomically make the switch.  Waiters existing in G2
        are not affected.
-     * Waiters in G1 will be closed out immediately by the advancing of
-       __g_signals to the next "lowseq" (low 31 bits of the new g1_start),
+     * Waiters in G1 have already received a signal and been woken.  If they
+       haven't woken yet, they will be closed out immediately by the advancing
+       of __g_signals to the next "lowseq" (low 31 bits of the new g1_start),
        which will prevent waiters from blocking using a futex on
        __g_signals since it provides enough signals for all possible
        remaining waiters.  As a result, they can each consume a signal
diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c
index 1cb3dbf7b0..cee1968756 100644
--- a/nptl/pthread_cond_wait.c
+++ b/nptl/pthread_cond_wait.c
@@ -249,7 +249,7 @@ __condvar_cleanup_waiting (void *arg)
    figure out whether they are in a group that has already been completely
    signaled (i.e., if the current G1 starts at a later position that the
    waiter's position).  Waiters cannot determine whether they are currently
-   in G2 or G1 -- but they do not have too because all they are interested in
+   in G2 or G1 -- but they do not have to because all they are interested in
    is whether there are available signals, and they always start in G2 (whose
    group slot they know because of the bit in the waiter sequence.  Signalers
    will simply fill the right group until it is completely signaled and can
@@ -412,7 +412,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
 	}

       /* Now wait until a signal is available in our group or it is closed.
-	 Acquire MO so that if we observe a value of zero written after group
+	 Acquire MO so that if we observe (signals == lowseq) after group
 	 switching in __condvar_quiesce_and_switch_g1, we synchronize with that
 	 store and will see the prior update of __g1_start done while switching
 	 groups too.  */
@@ -422,8 +422,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
     {
       while (1)
 	{
-          uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
-          unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
+	  uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
+	  unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;

 	  /* Spin-wait first.
 	     Note that spinning first without checking whether a timeout
@@ -447,21 +447,21 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
 	      /* Reload signals.  See above for MO.  */
 	      signals = atomic_load_acquire (cond->__data.__g_signals + g);
-              g1_start = __condvar_load_g1_start_relaxed (cond);
-              lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
+	      g1_start = __condvar_load_g1_start_relaxed (cond);
+	      lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
 	      spin--;
 	    }

-	if (seq < (g1_start >> 1))
+	  if (seq < (g1_start >> 1))
 	    {
-	    /* If the group is closed already,
+	      /* If the group is closed already,
 	       then this waiter originally had enough extra signals to
 	       consume, up until the time its group was closed.  */
 	      goto done;
-	  }
+	    }

 	  /* If there is an available signal, don't block.
-	    If __g1_start has advanced at all, then we must be in G1
+	     If __g1_start has advanced at all, then we must be in G1
 	     by now, perhaps in the process of switching back to an older
 	     G2, but in either case we're allowed to consume the available
 	     signal and should not block anymore.  */
@@ -483,22 +483,23 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
 	     sequence.  */
 	  atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2);
 	  signals = atomic_load_acquire (cond->__data.__g_signals + g);
-          g1_start = __condvar_load_g1_start_relaxed (cond);
-          lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
+	  g1_start = __condvar_load_g1_start_relaxed (cond);
+	  lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;

-	  if (seq < (g1_start >> 1))
+	  if (seq < (g1_start >> 1))
 	    {
-	      /* group is closed already, so don't block */
+	      /* group is closed already, so don't block */
 	      __condvar_dec_grefs (cond, g, private);
 	      goto done;
 	    }

 	  if ((int)(signals - lowseq) >= 2)
 	    {
-	      /* a signal showed up or G1/G2 switched after we grabbed the refcount */
+	      /* a signal showed up or G1/G2 switched after we grabbed the
+		 refcount */
 	      __condvar_dec_grefs (cond, g, private);
 	      break;
-	  }
+	    }

 	  // Now block.
 	  struct _pthread_cleanup_buffer buffer;
@@ -536,10 +537,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
 	  if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1))
 	    goto done;
 	}

-  /* Try to grab a signal.  Use acquire MO so that we see an up-to-date value
-     of __g1_start below (see spinning above for a similar case).  In
-     particular, if we steal from a more recent group, we will also see a
-     more recent __g1_start below.  */
+  /* Try to grab a signal.  See above for MO.  (if we do another loop
+     iteration we need to see the correct value of g1_start)  */
   while (!atomic_compare_exchange_weak_acquire (cond->__data.__g_signals + g,
 						&signals, signals - 2));
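
For readers following the hunks above, here is a minimal, self-contained C sketch of the
two checks the updated comments keep referring to: a waiter may stop blocking either
because its group has been closed (its sequence number is below g1_start >> 1), or
because at least one signal beyond the group's "lowseq" is available.  This is not the
glibc implementation; the names condvar_view and waiter_may_stop_blocking are invented
for this example, and the fields are simplified stand-ins for the pthread_cond_t
internals.

/* Illustrative sketch only -- simplified stand-ins for the real
   pthread_cond_t fields.  */
#include <stdbool.h>
#include <stdint.h>

struct condvar_view
{
  uint64_t g1_start;           /* Shifted start of G1; LSB is the index of G2.  */
  unsigned int g_signals[2];   /* Per-group signal counters, counted in steps of 2.  */
};

static bool
waiter_may_stop_blocking (const struct condvar_view *c, uint64_t seq,
			  unsigned int g)
{
  /* Group closed: the current G1 starts after this waiter's position,
     so the waiter has already been accounted for and must not block.  */
  if (seq < (c->g1_start >> 1))
    return true;

  /* "lowseq" as in the patch: if the parity bit of g1_start equals the
     waiter's group index, that group is still G2 and the baseline is the
     current signal count; otherwise it is g1_start with the parity bit
     cleared (truncated to 32 bits, matching the unsigned int variable).  */
  unsigned int signals = c->g_signals[g];
  unsigned int lowseq = ((c->g1_start & 1) == g)
			? signals : (unsigned int) (c->g1_start & ~1U);

  /* Signals advance in steps of 2, so a difference of at least 2 means a
     signal is available and the waiter may consume it instead of blocking.  */
  return (int) (signals - lowseq) >= 2;
}

The real code additionally manages the group reference count (__g_refs) and the futex
wait around these checks; the sketch only isolates the arithmetic that the comments in
this patch describe.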