From patchwork Fri Dec 22 02:29:10 2023
X-Patchwork-Submitter: James Tirta Halim <tirtajames45@gmail.com>
X-Patchwork-Id: 82737
From: James Tirta Halim <tirtajames45@gmail.com>
To: libc-alpha@sourceware.org
Cc: James Tirta Halim <tirtajames45@gmail.com>
Subject: [PATCH v2] sysdeps/x86_64/multiarch/memmem-avx2.c: add memmem-avx2.c
Date: Fri, 22 Dec 2023 09:29:10 +0700
Message-ID: <20231222022910.1210826-1-tirtajames45@gmail.com>
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0

Changes in v2:
1. Add AVX512 support with a generic header file.
2. Use __memcmpeq instead of memcmp.
3. Remove the scalar loop.
4. Fix an unsafe unaligned load.

Average timings (Core i3-1115G4):

__memmem_avx512  __memmem_avx2  basic_memmem  twoway_memmem   memmem
         842.43        1284.78         25569        4124.97  2927.43

Passes test-memmem (the AVX2 version uses __memcmpeq; presumably
__memcmpeq_avx2 works as well).
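As background for the diff below: the implementation filters candidate
positions with a (rare byte, following byte) anchor pair before running a
full comparison. A minimal scalar sketch of that strategy, for illustration
only (the function name and the offset-0 shortcut are mine, not part of the
patch, which picks the rarest needle byte from a frequency table):

#include <stddef.h>
#include <string.h>

/* Scalar model of the filtering done by __memmem_avx2/__memmem_avx512:
   check a two-byte anchor first, verify the whole needle only on anchor
   hits.  */
static void *
memmem_scalar_model (const void *hs, size_t hs_len,
                     const void *ne, size_t ne_len)
{
  const unsigned char *h = hs;
  const unsigned char *n = ne;
  if (ne_len == 0)
    return (void *) h;
  if (hs_len < ne_len)
    return NULL;
  if (ne_len == 1)
    return memchr (h, *n, hs_len);
  /* The patch chooses the statistically rarest needle byte as the anchor;
     offset 0 is used here for brevity.  */
  const size_t shift = 0;
  for (size_t i = 0; i + ne_len <= hs_len; i++)
    if (h[i + shift] == n[shift] && h[i + shift + 1] == n[shift + 1]
        && memcmp (h + i, n, ne_len) == 0)
      return (void *) (h + i);
  return NULL;
}

The vectorized code below implements the same filter, but tests VEC_SIZE
anchor pairs per iteration with CMPEQ8_MASK and verifies hits with
__memcmpeq.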
---
 sysdeps/x86_64/multiarch/memmem-avx2.c        |   4 +
 sysdeps/x86_64/multiarch/memmem-avx512.c      |  18 ++
 .../x86_64/multiarch/memmem-vectorized-avx.h  | 226 ++++++++++++++++++
 3 files changed, 248 insertions(+)
 create mode 100644 sysdeps/x86_64/multiarch/memmem-avx2.c
 create mode 100644 sysdeps/x86_64/multiarch/memmem-avx512.c
 create mode 100644 sysdeps/x86_64/multiarch/memmem-vectorized-avx.h

diff --git a/sysdeps/x86_64/multiarch/memmem-avx2.c b/sysdeps/x86_64/multiarch/memmem-avx2.c
new file mode 100644
index 0000000000..ee78546f90
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memmem-avx2.c
@@ -0,0 +1,4 @@
+#define MEMCMPEQ __memcmpeq_avx2
+#define FUNC_NAME __memmem_avx2
+
+#include "memmem-vectorized-avx.h"
diff --git a/sysdeps/x86_64/multiarch/memmem-avx512.c b/sysdeps/x86_64/multiarch/memmem-avx512.c
new file mode 100644
index 0000000000..6a6da9e69c
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memmem-avx512.c
@@ -0,0 +1,18 @@
+#define VEC __m512i
+#define MASK uint64_t
+#define LOAD(x) _mm512_load_si512 (x)
+#define LOADU(x) _mm512_loadu_si512 (x)
+#define STORE(dst, src) _mm512_store_si512 (dst, src)
+#define STOREU(dst, src) _mm512_storeu_si512 (dst, src)
+#define CMPEQ8_MASK(x, y) _mm512_cmpeq_epi8_mask (x, y)
+#define SETZERO(x) _mm512_setzero_si512 (x)
+#define SETONE8(x) _mm512_set1_epi8 (x)
+#define POPCNT(x) _mm_popcnt_u64 (x)
+#define TZCNT(x) _tzcnt_u64 (x)
+#define BLSR(x) _blsr_u64 (x)
+#define LZCNT(x) _lzcnt_u64 (x)
+#define ONES ((MASK) -1)
+
+#define FUNC_NAME __memmem_avx512
+
+#include "memmem-vectorized-avx.h"
diff --git a/sysdeps/x86_64/multiarch/memmem-vectorized-avx.h b/sysdeps/x86_64/multiarch/memmem-vectorized-avx.h
new file mode 100644
index 0000000000..8810b3c118
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memmem-vectorized-avx.h
@@ -0,0 +1,231 @@
+#include <immintrin.h>
+#include <inttypes.h>
+#include <string.h>
+#include <libc-pointer-arith.h>
+
+#ifndef FUNC_NAME
+# define FUNC_NAME __memmem_avx2
+#endif
+#ifndef VEC
+# define VEC __m256i
+#endif
+#ifndef VEC_SIZE
+# define VEC_SIZE sizeof (VEC)
+#endif
+#ifndef MASK
+# define MASK uint32_t
+#endif
+#ifndef MASK_SIZE
+# define MASK_SIZE sizeof (MASK)
+#endif
+#ifndef LOAD
+# define LOAD(x) _mm256_load_si256 (x)
+#endif
+#ifndef LOADU
+# define LOADU(x) _mm256_loadu_si256 (x)
+#endif
+#ifndef STORE
+# define STORE(dst, src) _mm256_store_si256 (dst, src)
+#endif
+#ifndef STOREU
+# define STOREU(dst, src) _mm256_storeu_si256 (dst, src)
+#endif
+#ifndef CMPEQ8_MASK
+# define CMPEQ8_MASK(x, y) _mm256_movemask_epi8 (_mm256_cmpeq_epi8 (x, y))
+#endif
+#ifndef SETZERO
+# define SETZERO(x) _mm256_setzero_si256 (x)
+#endif
+#ifndef SETONE8
+# define SETONE8(x) _mm256_set1_epi8 (x)
+#endif
+#ifndef POPCNT
+# define POPCNT(x) _mm_popcnt_u32 (x)
+#endif
+#ifndef TZCNT
+# define TZCNT(x) _tzcnt_u32 (x)
+#endif
+#ifndef BLSR
+# define BLSR(x) _blsr_u32 (x)
+#endif
+#ifndef LZCNT
+# define LZCNT(x) _lzcnt_u32 (x)
+#endif
+#ifndef ONES
+# define ONES ((MASK) -1)
+#endif
+
+#ifndef MEMCMPEQ
+# define MEMCMPEQ __memcmpeq
+#endif
+#ifndef MEMCPY
+# define MEMCPY memcpy
+#endif
+#ifndef MEMCHR
+# define MEMCHR memchr
+#endif
+#ifndef PAGE_SIZE
+# define PAGE_SIZE 4096
+#endif
+#define MIN(x, y) (((x) < (y)) ? (x) : (y))
+
+static inline void *
+find_rarest_byte (const void *ne, size_t n)
+{
+  /* Lower is rarer.  The table is based on the *.c and *.h files in
+     glibc.  */
+  static const unsigned char rarebyte_table[256]
+    = { 0,   1,   13,  56,  59,  60,  61,  62,  63,  232, 248, 2,   158, 4,
+	5,   6,   7,   8,   9,   10,  14,  20,  26,  29,  37,  46,  52,  53,
+	54,  55,  57,  58,  255, 172, 242, 193, 162, 174, 178, 182, 218, 219,
+	212, 180, 249, 197, 221, 210, 253, 231, 230, 224, 225, 226, 227, 223,
+	222, 220, 176, 213, 184, 229, 188, 164, 159, 209, 181, 203, 189, 216,
+	196, 192, 185, 205, 161, 168, 215, 187, 211, 194, 195, 165, 206, 204,
+	214, 198, 173, 179, 175, 183, 167, 202, 239, 201, 160, 241, 163, 246,
+	233, 238, 240, 254, 237, 208, 234, 250, 169, 186, 236, 217, 245, 243,
+	228, 170, 247, 244, 251, 235, 199, 200, 252, 207, 177, 191, 171, 190,
+	166, 3,   140, 134, 124, 126, 86,  128, 95,  117, 114, 93,  81,  87,
+	132, 96,  112, 97,  103, 82,  139, 89,  98,  88,  119, 74,  156, 115,
+	104, 75,  120, 106, 76,  155, 90,  122, 107, 125, 152, 145, 136, 137,
+	101, 116, 102, 108, 99,  141, 77,  78,  118, 79,  109, 100, 150, 73,
+	94,  72,  121, 151, 113, 135, 110, 105, 83,  91,  11,  12,  64,  149,
+	146, 111, 65,  69,  66,  15,  16,  17,  18,  19,  130, 92,  144, 123,
+	21,  22,  23,  24,  131, 133, 127, 142, 25,  70,  129, 27,  28,  67,
+	153, 84,  143, 138, 147, 157, 148, 68,  71,  30,  31,  32,  33,  34,
+	35,  36,  154, 38,  39,  40,  41,  42,  80,  43,  44,  45,  47,  48,
+	85,  49,  50,  51 };
+  const unsigned char *rare = (const unsigned char *) ne;
+  const unsigned char *p = (const unsigned char *) ne;
+  int c_rare = rarebyte_table[*rare];
+  int c;
+  for (; n--; ++p)
+    {
+      c = rarebyte_table[*p];
+      if (c < c_rare)
+	{
+	  rare = p;
+	  c_rare = c;
+	}
+    }
+  return (void *) rare;
+}
+
+void *
+FUNC_NAME (const void *hs, size_t hs_len, const void *ne, size_t ne_len)
+{
+  if (ne_len == 1)
+    return (void *) MEMCHR (hs, *(unsigned char *) ne, hs_len);
+  if (__glibc_unlikely (ne_len == 0))
+    return (void *) hs;
+  if (__glibc_unlikely (hs_len < ne_len))
+    return NULL;
+  VEC hv0, hv1, hv, nv;
+  MASK i, hm0, hm1, m, cmpm;
+  /* When the needle is shorter than a vector, shifting the comparison
+     mask by matchsh discards the bits for bytes past the needle.  */
+  const unsigned int matchsh = ne_len < VEC_SIZE ? VEC_SIZE - ne_len : 0;
+  const MASK matchm = ONES << matchsh;
+  const unsigned char *h = (const unsigned char *) hs;
+  const unsigned char *const end = h + hs_len - ne_len;
+  const unsigned char *hp;
+  /* Use the rarest needle byte as the scan anchor; nv1 holds the byte
+     after it, so the anchor pair must fit inside the needle.  */
+  size_t shift = PTR_DIFF (find_rarest_byte (ne, ne_len), ne);
+  if (shift == ne_len - 1)
+    --shift;
+  const VEC nv0 = SETONE8 (*((char *) ne + shift));
+  const VEC nv1 = SETONE8 (*((char *) ne + shift + 1));
+  h += shift;
+  /* Load the first vector of the needle, avoiding a page-crossing read.  */
+  if (PTR_DIFF (PTR_ALIGN_UP (ne, PAGE_SIZE), ne) >= VEC_SIZE
+      || PTR_IS_ALIGNED (ne, PAGE_SIZE) || ne_len >= VEC_SIZE)
+    nv = LOADU ((VEC *) ne);
+  else
+    MEMCPY (&nv, ne, MIN (VEC_SIZE, ne_len));
+  const unsigned int off = PTR_DIFF (h, PTR_ALIGN_DOWN (h, VEC_SIZE));
+  unsigned int off2 = (PTR_DIFF (end, (h - shift)) < VEC_SIZE)
+			  ? VEC_SIZE - (unsigned int) (end - (h - shift)) - 1
+			  : 0;
+  /* Handle the first, possibly unaligned, vector of the haystack.  */
+  h -= off;
+  hv0 = LOAD ((const VEC *) h);
+  hm0 = (MASK) CMPEQ8_MASK (hv0, nv0);
+  hm1 = (MASK) CMPEQ8_MASK (hv0, nv1) >> 1;
+  /* Clear matched bits that are out of bounds.  */
+  m = (((hm0 & hm1) >> off) << off2) >> off2;
+  while (m)
+    {
+      i = TZCNT (m);
+      m = BLSR (m);
+      hp = h + off + i - shift;
+      if (PTR_DIFF (PTR_ALIGN_UP (hp, PAGE_SIZE), hp) >= VEC_SIZE
+	  || PTR_IS_ALIGNED (hp, PAGE_SIZE))
+	{
+	  hv = LOADU ((VEC *) hp);
+	  cmpm = (MASK) CMPEQ8_MASK (hv, nv) << matchsh;
+	  if (cmpm == matchm)
+	    if (ne_len <= VEC_SIZE
+		|| !MEMCMPEQ (hp + VEC_SIZE, (const char *) ne + VEC_SIZE,
+			      ne_len - VEC_SIZE))
+	      return (void *) hp;
+	}
+      else
+	{
+	  if (!MEMCMPEQ (hp, ne, ne_len))
+	    return (void *) hp;
+	}
+    }
+  /* Main loop: test VEC_SIZE anchor-pair candidates per iteration.  */
+  h += VEC_SIZE - 1;
+  for (; h - shift + VEC_SIZE <= end; h += VEC_SIZE)
+    {
+      hv0 = LOADU ((const VEC *) h);
+      hv1 = LOAD ((const VEC *) (h + 1));
+      hm1 = (MASK) CMPEQ8_MASK (hv1, nv1);
+      hm0 = (MASK) CMPEQ8_MASK (hv0, nv0);
+      m = hm0 & hm1;
+      while (m)
+	{
+	match:
+	  i = TZCNT (m);
+	  m = BLSR (m);
+	  hp = h + i - shift;
+	  if (PTR_DIFF (PTR_ALIGN_UP (hp, PAGE_SIZE), hp) >= VEC_SIZE
+	      || PTR_IS_ALIGNED (hp, PAGE_SIZE))
+	    {
+	      hv = LOADU ((VEC *) hp);
+	      cmpm = (MASK) CMPEQ8_MASK (hv, nv) << matchsh;
+	      if (cmpm == matchm)
+		if (ne_len <= VEC_SIZE
+		    || !MEMCMPEQ (hp + VEC_SIZE, (const char *) ne + VEC_SIZE,
+				  ne_len - VEC_SIZE))
+		  return (void *) hp;
+	    }
+	  else
+	    {
+	      if (!MEMCMPEQ (hp, ne, ne_len))
+		return (void *) hp;
+	    }
+	}
+    }
+  /* Handle the tail that does not fill a whole vector.  */
+  if (h - shift <= end)
+    {
+      off2 = VEC_SIZE - (unsigned int) (end - (h - shift)) - 1;
+      hv1 = LOAD ((const VEC *) (h + 1));
+      if (PTR_DIFF (PTR_ALIGN_UP (h, PAGE_SIZE), h) >= VEC_SIZE
+	  || PTR_IS_ALIGNED (h, PAGE_SIZE))
+	{
+	  hv0 = LOADU ((const VEC *) h);
+	  hm1 = (MASK) CMPEQ8_MASK (hv1, nv1);
+	  hm0 = (MASK) CMPEQ8_MASK (hv0, nv0);
+	}
+      else
+	{
+	  hm1 = (MASK) CMPEQ8_MASK (hv1, nv1);
+	  hm0 = 1 | (MASK) CMPEQ8_MASK (hv1, nv0) << 1;
+	}
+      /* Clear matched bits that are out of bounds.  */
+      m = ((hm0 & hm1) << off2) >> off2;
+      if (m)
+	goto match;
+    }
+  return NULL;
+}
-- 
2.43.0
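For local experimentation, a standalone harness along these lines
(hypothetical; not part of the patch or of glibc's test-memmem) exercises
the edge cases that FUNC_NAME special-cases at the top, assuming the
system memmem dispatches to the new code:

#define _GNU_SOURCE
#include <assert.h>
#include <string.h>

int
main (void)
{
  const char hs[] = "abcabcabdabc";
  /* An empty needle matches at the start of the haystack.  */
  assert (memmem (hs, sizeof hs - 1, "", 0) == hs);
  /* A needle longer than the haystack cannot match.  */
  assert (memmem ("ab", 2, "abc", 3) == NULL);
  /* A match at the very end of the haystack is found.  */
  assert (memmem (hs, sizeof hs - 1, "dabc", 4) == hs + 8);
  /* The first of several candidate positions is returned.  */
  assert (memmem (hs, sizeof hs - 1, "abc", 3) == hs);
  return 0;
}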