From patchwork Thu Jan 23 13:43:00 2025
X-Patchwork-Submitter: Aleksandar Rakic
X-Patchwork-Id: 105298
From: Aleksandar Rakic
To: libc-alpha@sourceware.org
Cc: aleksandar.rakic@htecgroup.com, djordje.todorovic@htecgroup.com,
 cfu@mips.com, Faraz Shahbazker
Subject: [PATCH 04/11] Add C implementation of memcpy/memset
Date: Thu, 23 Jan 2025 14:43:00 +0100
Message-Id: <20250123134308.1785777-6-aleksandar.rakic@htecgroup.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250123134308.1785777-1-aleksandar.rakic@htecgroup.com>
References: <20250123134308.1785777-1-aleksandar.rakic@htecgroup.com>
MIME-Version: 1.0

Add improved C implementation of memcpy/memset and remove corresponding
.S files.

Cherry-picked 6b74133706246af94b71e4154e4ca09482828c9f
from https://github.com/MIPS/glibc

Signed-off-by: Faraz Shahbazker
Signed-off-by: Aleksandar Rakic
---
 sysdeps/mips/memcpy.S | 886 ------------------------------------------
 sysdeps/mips/memcpy.c | 415 ++++++++++++++++++++
 sysdeps/mips/memset.S | 430 --------------------
 sysdeps/mips/memset.c | 187 +++++++++
 4 files changed, 602 insertions(+), 1316 deletions(-)
 delete mode 100644 sysdeps/mips/memcpy.S
 create mode 100644 sysdeps/mips/memcpy.c
 delete mode 100644 sysdeps/mips/memset.S
 create mode 100644 sysdeps/mips/memset.c

diff --git a/sysdeps/mips/memcpy.S b/sysdeps/mips/memcpy.S
deleted file mode 100644
index 96d1c92d89..0000000000
--- a/sysdeps/mips/memcpy.S
+++ /dev/null
@@ -1,886 +0,0 @@
-/* Copyright (C) 2012-2024 Free Software Foundation, Inc.
-   This file is part of the GNU C Library.
-
-   The GNU C Library is free software; you can redistribute it and/or
-   modify it under the terms of the GNU Lesser General Public
-   License as published by the Free Software Foundation; either
-   version 2.1 of the License, or (at your option) any later version.
- - The GNU C Library is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - Lesser General Public License for more details. - - You should have received a copy of the GNU Lesser General Public - License along with the GNU C Library. If not, see - . */ - -#ifdef ANDROID_CHANGES -# include "machine/asm.h" -# include "machine/regdef.h" -# define USE_MEMMOVE_FOR_OVERLAP -# define PREFETCH_LOAD_HINT PREFETCH_HINT_LOAD_STREAMED -# define PREFETCH_STORE_HINT PREFETCH_HINT_PREPAREFORSTORE -#elif _LIBC -# include -# include -# include -# define PREFETCH_LOAD_HINT PREFETCH_HINT_LOAD_STREAMED -# define PREFETCH_STORE_HINT PREFETCH_HINT_PREPAREFORSTORE -#elif defined _COMPILING_NEWLIB -# include "machine/asm.h" -# include "machine/regdef.h" -# define PREFETCH_LOAD_HINT PREFETCH_HINT_LOAD_STREAMED -# define PREFETCH_STORE_HINT PREFETCH_HINT_PREPAREFORSTORE -#else -# include -# include -#endif - -#if (_MIPS_ISA == _MIPS_ISA_MIPS4) || (_MIPS_ISA == _MIPS_ISA_MIPS5) || \ - (_MIPS_ISA == _MIPS_ISA_MIPS32) || (_MIPS_ISA == _MIPS_ISA_MIPS64) -# ifndef DISABLE_PREFETCH -# define USE_PREFETCH -# endif -#endif - -#if defined(_MIPS_SIM) && ((_MIPS_SIM == _ABI64) || (_MIPS_SIM == _ABIN32)) -# ifndef DISABLE_DOUBLE -# define USE_DOUBLE -# endif -#endif - -/* Some asm.h files do not have the L macro definition. */ -#ifndef L -# if _MIPS_SIM == _ABIO32 -# define L(label) $L ## label -# else -# define L(label) .L ## label -# endif -#endif - -/* Some asm.h files do not have the PTR_ADDIU macro definition. */ -#ifndef PTR_ADDIU -# ifdef USE_DOUBLE -# define PTR_ADDIU daddiu -# else -# define PTR_ADDIU addiu -# endif -#endif - -/* Some asm.h files do not have the PTR_SRA macro definition. */ -#ifndef PTR_SRA -# ifdef USE_DOUBLE -# define PTR_SRA dsra -# else -# define PTR_SRA sra -# endif -#endif - -/* New R6 instructions that may not be in asm.h. */ -#ifndef PTR_LSA -# if _MIPS_SIM == _ABI64 -# define PTR_LSA dlsa -# else -# define PTR_LSA lsa -# endif -#endif - -#if __mips_isa_rev > 5 && defined (__mips_micromips) -# define PTR_BC bc16 -#else -# define PTR_BC bc -#endif - -/* - * Using PREFETCH_HINT_LOAD_STREAMED instead of PREFETCH_LOAD on load - * prefetches appear to offer a slight performance advantage. - * - * Using PREFETCH_HINT_PREPAREFORSTORE instead of PREFETCH_STORE - * or PREFETCH_STORE_STREAMED offers a large performance advantage - * but PREPAREFORSTORE has some special restrictions to consider. - * - * Prefetch with the 'prepare for store' hint does not copy a memory - * location into the cache, it just allocates a cache line and zeros - * it out. This means that if you do not write to the entire cache - * line before writing it out to memory some data will get zero'ed out - * when the cache line is written back to memory and data will be lost. - * - * Also if you are using this memcpy to copy overlapping buffers it may - * not behave correctly when using the 'prepare for store' hint. If you - * use the 'prepare for store' prefetch on a memory area that is in the - * memcpy source (as well as the memcpy destination), then you will get - * some data zero'ed out before you have a chance to read it and data will - * be lost. - * - * If you are going to use this memcpy routine with the 'prepare for store' - * prefetch you may want to set USE_MEMMOVE_FOR_OVERLAP in order to avoid - * the problem of running memcpy on overlapping buffers. 
- * - * There are ifdef'ed sections of this memcpy to make sure that it does not - * do prefetches on cache lines that are not going to be completely written. - * This code is only needed and only used when PREFETCH_STORE_HINT is set to - * PREFETCH_HINT_PREPAREFORSTORE. This code assumes that cache lines are - * 32 bytes and if the cache line is larger it will not work correctly. - */ - -#ifdef USE_PREFETCH -# define PREFETCH_HINT_LOAD 0 -# define PREFETCH_HINT_STORE 1 -# define PREFETCH_HINT_LOAD_STREAMED 4 -# define PREFETCH_HINT_STORE_STREAMED 5 -# define PREFETCH_HINT_LOAD_RETAINED 6 -# define PREFETCH_HINT_STORE_RETAINED 7 -# define PREFETCH_HINT_WRITEBACK_INVAL 25 -# define PREFETCH_HINT_PREPAREFORSTORE 30 - -/* - * If we have not picked out what hints to use at this point use the - * standard load and store prefetch hints. - */ -# ifndef PREFETCH_STORE_HINT -# define PREFETCH_STORE_HINT PREFETCH_HINT_STORE -# endif -# ifndef PREFETCH_LOAD_HINT -# define PREFETCH_LOAD_HINT PREFETCH_HINT_LOAD -# endif - -/* - * We double everything when USE_DOUBLE is true so we do 2 prefetches to - * get 64 bytes in that case. The assumption is that each individual - * prefetch brings in 32 bytes. - */ - -# ifdef USE_DOUBLE -# define PREFETCH_CHUNK 64 -# define PREFETCH_FOR_LOAD(chunk, reg) \ - pref PREFETCH_LOAD_HINT, (chunk)*64(reg); \ - pref PREFETCH_LOAD_HINT, ((chunk)*64)+32(reg) -# define PREFETCH_FOR_STORE(chunk, reg) \ - pref PREFETCH_STORE_HINT, (chunk)*64(reg); \ - pref PREFETCH_STORE_HINT, ((chunk)*64)+32(reg) -# else -# define PREFETCH_CHUNK 32 -# define PREFETCH_FOR_LOAD(chunk, reg) \ - pref PREFETCH_LOAD_HINT, (chunk)*32(reg) -# define PREFETCH_FOR_STORE(chunk, reg) \ - pref PREFETCH_STORE_HINT, (chunk)*32(reg) -# endif -/* MAX_PREFETCH_SIZE is the maximum size of a prefetch, it must not be less - * than PREFETCH_CHUNK, the assumed size of each prefetch. If the real size - * of a prefetch is greater than MAX_PREFETCH_SIZE and the PREPAREFORSTORE - * hint is used, the code will not work correctly. If PREPAREFORSTORE is not - * used then MAX_PREFETCH_SIZE does not matter. */ -# define MAX_PREFETCH_SIZE 128 -/* PREFETCH_LIMIT is set based on the fact that we never use an offset greater - * than 5 on a STORE prefetch and that a single prefetch can never be larger - * than MAX_PREFETCH_SIZE. We add the extra 32 when USE_DOUBLE is set because - * we actually do two prefetches in that case, one 32 bytes after the other. */ -# ifdef USE_DOUBLE -# define PREFETCH_LIMIT (5 * PREFETCH_CHUNK) + 32 + MAX_PREFETCH_SIZE -# else -# define PREFETCH_LIMIT (5 * PREFETCH_CHUNK) + MAX_PREFETCH_SIZE -# endif -# if (PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE) \ - && ((PREFETCH_CHUNK * 4) < MAX_PREFETCH_SIZE) -/* We cannot handle this because the initial prefetches may fetch bytes that - * are before the buffer being copied. We start copies with an offset - * of 4 so avoid this situation when using PREPAREFORSTORE. */ -#error "PREFETCH_CHUNK is too large and/or MAX_PREFETCH_SIZE is too small." -# endif -#else /* USE_PREFETCH not defined */ -# define PREFETCH_FOR_LOAD(offset, reg) -# define PREFETCH_FOR_STORE(offset, reg) -#endif - -#if __mips_isa_rev > 5 -# if (PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE) -# undef PREFETCH_STORE_HINT -# define PREFETCH_STORE_HINT PREFETCH_HINT_STORE_STREAMED -# endif -# define R6_CODE -#endif - -/* Allow the routine to be named something else if desired. 
*/ -#ifndef MEMCPY_NAME -# define MEMCPY_NAME memcpy -#endif - -/* We use these 32/64 bit registers as temporaries to do the copying. */ -#define REG0 t0 -#define REG1 t1 -#define REG2 t2 -#define REG3 t3 -#if defined(_MIPS_SIM) && ((_MIPS_SIM == _ABIO32) || (_MIPS_SIM == _ABIO64)) -# define REG4 t4 -# define REG5 t5 -# define REG6 t6 -# define REG7 t7 -#else -# define REG4 ta0 -# define REG5 ta1 -# define REG6 ta2 -# define REG7 ta3 -#endif - -/* We load/store 64 bits at a time when USE_DOUBLE is true. - * The C_ prefix stands for CHUNK and is used to avoid macro name - * conflicts with system header files. */ - -#ifdef USE_DOUBLE -# define C_ST sd -# define C_LD ld -# ifdef __MIPSEB -# define C_LDHI ldl /* high part is left in big-endian */ -# define C_STHI sdl /* high part is left in big-endian */ -# define C_LDLO ldr /* low part is right in big-endian */ -# define C_STLO sdr /* low part is right in big-endian */ -# else -# define C_LDHI ldr /* high part is right in little-endian */ -# define C_STHI sdr /* high part is right in little-endian */ -# define C_LDLO ldl /* low part is left in little-endian */ -# define C_STLO sdl /* low part is left in little-endian */ -# endif -# define C_ALIGN dalign /* r6 align instruction */ -#else -# define C_ST sw -# define C_LD lw -# ifdef __MIPSEB -# define C_LDHI lwl /* high part is left in big-endian */ -# define C_STHI swl /* high part is left in big-endian */ -# define C_LDLO lwr /* low part is right in big-endian */ -# define C_STLO swr /* low part is right in big-endian */ -# else -# define C_LDHI lwr /* high part is right in little-endian */ -# define C_STHI swr /* high part is right in little-endian */ -# define C_LDLO lwl /* low part is left in little-endian */ -# define C_STLO swl /* low part is left in little-endian */ -# endif -# define C_ALIGN align /* r6 align instruction */ -#endif - -/* Bookkeeping values for 32 vs. 64 bit mode. */ -#ifdef USE_DOUBLE -# define NSIZE 8 -# define NSIZEMASK 0x3f -# define NSIZEDMASK 0x7f -#else -# define NSIZE 4 -# define NSIZEMASK 0x1f -# define NSIZEDMASK 0x3f -#endif -#define UNIT(unit) ((unit)*NSIZE) -#define UNITM1(unit) (((unit)*NSIZE)-1) - -#ifdef ANDROID_CHANGES -LEAF(MEMCPY_NAME, 0) -#else -LEAF(MEMCPY_NAME) -#endif - .set nomips16 -/* - * Below we handle the case where memcpy is called with overlapping src and dst. - * Although memcpy is not required to handle this case, some parts of Android - * like Skia rely on such usage. We call memmove to handle such cases. - */ -#ifdef USE_MEMMOVE_FOR_OVERLAP - PTR_SUBU t0,a0,a1 - PTR_SRA t2,t0,31 - xor t1,t0,t2 - PTR_SUBU t0,t1,t2 - sltu t2,t0,a2 - la t9,memmove - beq t2,zero,L(memcpy) - jr t9 -L(memcpy): -#endif -/* - * If the size is less than 2*NSIZE (8 or 16), go to L(lastb). Regardless of - * size, copy dst pointer to v0 for the return value. - */ - slti t2,a2,(2 * NSIZE) -#if defined(RETURN_FIRST_PREFETCH) || defined(RETURN_LAST_PREFETCH) - move v0,zero -#else - move v0,a0 -#endif - bne t2,zero,L(lasts) - -#ifndef R6_CODE - -/* - * If src and dst have different alignments, go to L(unaligned), if they - * have the same alignment (but are not actually aligned) do a partial - * load/store to make them aligned. If they are both already aligned - * we can start copying at L(aligned). 
- */ - xor t8,a1,a0 - andi t8,t8,(NSIZE-1) /* t8 is a0/a1 word-displacement */ - PTR_SUBU a3, zero, a0 - bne t8,zero,L(unaligned) - - andi a3,a3,(NSIZE-1) /* copy a3 bytes to align a0/a1 */ - PTR_SUBU a2,a2,a3 /* a2 is the remining bytes count */ - beq a3,zero,L(aligned) /* if a3=0, it is already aligned */ - - C_LDHI t8,0(a1) - PTR_ADDU a1,a1,a3 - C_STHI t8,0(a0) - PTR_ADDU a0,a0,a3 - -#else /* R6_CODE */ - -/* - * Align the destination and hope that the source gets aligned too. If it - * doesn't we jump to L(r6_unaligned*) to do unaligned copies using the r6 - * align instruction. - */ - andi t8,a0,7 -#ifdef __mips_micromips - auipc t9,%pcrel_hi(L(atable)) - addiu t9,t9,%pcrel_lo(L(atable)+4) - PTR_LSA t9,t8,t9,1 -#else - lapc t9,L(atable) - PTR_LSA t9,t8,t9,2 -#endif - jrc t9 -L(atable): - PTR_BC L(lb0) - PTR_BC L(lb7) - PTR_BC L(lb6) - PTR_BC L(lb5) - PTR_BC L(lb4) - PTR_BC L(lb3) - PTR_BC L(lb2) - PTR_BC L(lb1) -L(lb7): - lb a3, 6(a1) - sb a3, 6(a0) -L(lb6): - lb a3, 5(a1) - sb a3, 5(a0) -L(lb5): - lb a3, 4(a1) - sb a3, 4(a0) -L(lb4): - lb a3, 3(a1) - sb a3, 3(a0) -L(lb3): - lb a3, 2(a1) - sb a3, 2(a0) -L(lb2): - lb a3, 1(a1) - sb a3, 1(a0) -L(lb1): - lb a3, 0(a1) - sb a3, 0(a0) - - li t9,8 - subu t8,t9,t8 - PTR_SUBU a2,a2,t8 - PTR_ADDU a0,a0,t8 - PTR_ADDU a1,a1,t8 -L(lb0): - - andi t8,a1,(NSIZE-1) -#ifdef __mips_micromips - auipc t9,%pcrel_hi(L(jtable)) - addiu t9,t9,%pcrel_lo(L(jtable)+4) - PTR_LSA t9,t8,t9,1 -#else - lapc t9,L(jtable) - PTR_LSA t9,t8,t9,2 -#endif - jrc t9 -L(jtable): - PTR_BC L(aligned) - PTR_BC L(r6_unaligned1) - PTR_BC L(r6_unaligned2) - PTR_BC L(r6_unaligned3) -#ifdef USE_DOUBLE - PTR_BC L(r6_unaligned4) - PTR_BC L(r6_unaligned5) - PTR_BC L(r6_unaligned6) - PTR_BC L(r6_unaligned7) -#endif -#endif /* R6_CODE */ - -L(aligned): - -/* - * Now dst/src are both aligned to (word or double word) aligned addresses - * Set a2 to count how many bytes we have to copy after all the 64/128 byte - * chunks are copied and a3 to the dst pointer after all the 64/128 byte - * chunks have been copied. We will loop, incrementing a0 and a1 until a0 - * equals a3. - */ - - andi t8,a2,NSIZEDMASK /* any whole 64-byte/128-byte chunks? */ - PTR_SUBU a3,a2,t8 /* subtract from a2 the reminder */ - beq a2,t8,L(chkw) /* if a2==t8, no 64-byte/128-byte chunks */ - PTR_ADDU a3,a0,a3 /* Now a3 is the final dst after loop */ - -/* When in the loop we may prefetch with the 'prepare to store' hint, - * in this case the a0+x should not be past the "t0-32" address. This - * means: for x=128 the last "safe" a0 address is "t0-160". Alternatively, - * for x=64 the last "safe" a0 address is "t0-96" In the current version we - * will use "prefetch hint,128(a0)", so "t0-160" is the limit. 
- */ -#if defined(USE_PREFETCH) && (PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE) - PTR_ADDU t0,a0,a2 /* t0 is the "past the end" address */ - PTR_SUBU t9,t0,PREFETCH_LIMIT /* t9 is the "last safe pref" address */ -#endif - PREFETCH_FOR_LOAD (0, a1) - PREFETCH_FOR_LOAD (1, a1) - PREFETCH_FOR_LOAD (2, a1) - PREFETCH_FOR_LOAD (3, a1) -#if defined(USE_PREFETCH) && (PREFETCH_STORE_HINT != PREFETCH_HINT_PREPAREFORSTORE) - PREFETCH_FOR_STORE (1, a0) - PREFETCH_FOR_STORE (2, a0) - PREFETCH_FOR_STORE (3, a0) -#endif -#if defined(RETURN_FIRST_PREFETCH) && defined(USE_PREFETCH) -# if PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE - sltu v1,t9,a0 - bgtz v1,L(skip_set) - PTR_ADDIU v0,a0,(PREFETCH_CHUNK*4) -L(skip_set): -# else - PTR_ADDIU v0,a0,(PREFETCH_CHUNK*1) -# endif -#endif -#if defined(RETURN_LAST_PREFETCH) && defined(USE_PREFETCH) \ - && (PREFETCH_STORE_HINT != PREFETCH_HINT_PREPAREFORSTORE) - PTR_ADDIU v0,a0,(PREFETCH_CHUNK*3) -# ifdef USE_DOUBLE - PTR_ADDIU v0,v0,32 -# endif -#endif -L(loop16w): - C_LD t0,UNIT(0)(a1) -/* We need to separate out the C_LD instruction here so that it will work - both when it is used by itself and when it is used with the branch - instruction. */ -#if defined(USE_PREFETCH) && (PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE) - sltu v1,t9,a0 /* If a0 > t9 don't use next prefetch */ - C_LD t1,UNIT(1)(a1) - bgtz v1,L(skip_pref) -#else - C_LD t1,UNIT(1)(a1) -#endif -#ifdef R6_CODE - PREFETCH_FOR_STORE (2, a0) -#else - PREFETCH_FOR_STORE (4, a0) - PREFETCH_FOR_STORE (5, a0) -#endif -#if defined(RETURN_LAST_PREFETCH) && defined(USE_PREFETCH) - PTR_ADDIU v0,a0,(PREFETCH_CHUNK*5) -# ifdef USE_DOUBLE - PTR_ADDIU v0,v0,32 -# endif -#endif -L(skip_pref): - C_LD REG2,UNIT(2)(a1) - C_LD REG3,UNIT(3)(a1) - C_LD REG4,UNIT(4)(a1) - C_LD REG5,UNIT(5)(a1) - C_LD REG6,UNIT(6)(a1) - C_LD REG7,UNIT(7)(a1) -#ifdef R6_CODE - PREFETCH_FOR_LOAD (3, a1) -#else - PREFETCH_FOR_LOAD (4, a1) -#endif - C_ST t0,UNIT(0)(a0) - C_ST t1,UNIT(1)(a0) - C_ST REG2,UNIT(2)(a0) - C_ST REG3,UNIT(3)(a0) - C_ST REG4,UNIT(4)(a0) - C_ST REG5,UNIT(5)(a0) - C_ST REG6,UNIT(6)(a0) - C_ST REG7,UNIT(7)(a0) - - C_LD t0,UNIT(8)(a1) - C_LD t1,UNIT(9)(a1) - C_LD REG2,UNIT(10)(a1) - C_LD REG3,UNIT(11)(a1) - C_LD REG4,UNIT(12)(a1) - C_LD REG5,UNIT(13)(a1) - C_LD REG6,UNIT(14)(a1) - C_LD REG7,UNIT(15)(a1) -#ifndef R6_CODE - PREFETCH_FOR_LOAD (5, a1) -#endif - C_ST t0,UNIT(8)(a0) - C_ST t1,UNIT(9)(a0) - C_ST REG2,UNIT(10)(a0) - C_ST REG3,UNIT(11)(a0) - C_ST REG4,UNIT(12)(a0) - C_ST REG5,UNIT(13)(a0) - C_ST REG6,UNIT(14)(a0) - C_ST REG7,UNIT(15)(a0) - PTR_ADDIU a0,a0,UNIT(16) /* adding 64/128 to dest */ - PTR_ADDIU a1,a1,UNIT(16) /* adding 64/128 to src */ - bne a0,a3,L(loop16w) - move a2,t8 - -/* Here we have src and dest word-aligned but less than 64-bytes or - * 128 bytes to go. Check for a 32(64) byte chunk and copy if there - * is one. Otherwise jump down to L(chk1w) to handle the tail end of - * the copy. - */ - -L(chkw): - PREFETCH_FOR_LOAD (0, a1) - andi t8,a2,NSIZEMASK /* Is there a 32-byte/64-byte chunk. 
*/ - /* The t8 is the reminder count past 32-bytes */ - beq a2,t8,L(chk1w) /* When a2=t8, no 32-byte chunk */ - C_LD t0,UNIT(0)(a1) - C_LD t1,UNIT(1)(a1) - C_LD REG2,UNIT(2)(a1) - C_LD REG3,UNIT(3)(a1) - C_LD REG4,UNIT(4)(a1) - C_LD REG5,UNIT(5)(a1) - C_LD REG6,UNIT(6)(a1) - C_LD REG7,UNIT(7)(a1) - PTR_ADDIU a1,a1,UNIT(8) - C_ST t0,UNIT(0)(a0) - C_ST t1,UNIT(1)(a0) - C_ST REG2,UNIT(2)(a0) - C_ST REG3,UNIT(3)(a0) - C_ST REG4,UNIT(4)(a0) - C_ST REG5,UNIT(5)(a0) - C_ST REG6,UNIT(6)(a0) - C_ST REG7,UNIT(7)(a0) - PTR_ADDIU a0,a0,UNIT(8) - -/* - * Here we have less than 32(64) bytes to copy. Set up for a loop to - * copy one word (or double word) at a time. Set a2 to count how many - * bytes we have to copy after all the word (or double word) chunks are - * copied and a3 to the dst pointer after all the (d)word chunks have - * been copied. We will loop, incrementing a0 and a1 until a0 equals a3. - */ -L(chk1w): - andi a2,t8,(NSIZE-1) /* a2 is the reminder past one (d)word chunks */ - PTR_SUBU a3,t8,a2 /* a3 is count of bytes in one (d)word chunks */ - beq a2,t8,L(lastw) - PTR_ADDU a3,a0,a3 /* a3 is the dst address after loop */ - -/* copying in words (4-byte or 8-byte chunks) */ -L(wordCopy_loop): - C_LD REG3,UNIT(0)(a1) - PTR_ADDIU a0,a0,UNIT(1) - PTR_ADDIU a1,a1,UNIT(1) - C_ST REG3,UNIT(-1)(a0) - bne a0,a3,L(wordCopy_loop) - -/* If we have been copying double words, see if we can copy a single word - before doing byte copies. We can have, at most, one word to copy. */ - -L(lastw): -#ifdef USE_DOUBLE - andi t8,a2,3 /* a2 is the remainder past 4 byte chunks. */ - beq t8,a2,L(lastb) - move a2,t8 - lw REG3,0(a1) - sw REG3,0(a0) - PTR_ADDIU a0,a0,4 - PTR_ADDIU a1,a1,4 -#endif - -/* Copy the last 8 (or 16) bytes */ -L(lastb): - PTR_ADDU a3,a0,a2 /* a3 is the last dst address */ - blez a2,L(leave) -L(lastbloop): - lb v1,0(a1) - PTR_ADDIU a0,a0,1 - PTR_ADDIU a1,a1,1 - sb v1,-1(a0) - bne a0,a3,L(lastbloop) -L(leave): - jr ra - -/* We jump here with a memcpy of less than 8 or 16 bytes, depending on - whether or not USE_DOUBLE is defined. Instead of just doing byte - copies, check the alignment and size and use lw/sw if possible. - Otherwise, do byte copies. */ - -L(lasts): - andi t8,a2,3 - beq t8,a2,L(lastb) - - andi t9,a0,3 - bne t9,zero,L(lastb) - andi t9,a1,3 - bne t9,zero,L(lastb) - - PTR_SUBU a3,a2,t8 - PTR_ADDU a3,a0,a3 - -L(wcopy_loop): - lw REG3,0(a1) - PTR_ADDIU a0,a0,4 - PTR_ADDIU a1,a1,4 - bne a0,a3,L(wcopy_loop) - sw REG3,-4(a0) - - b L(lastb) - move a2,t8 - -#ifndef R6_CODE -/* - * UNALIGNED case, got here with a3 = "negu a0" - * This code is nearly identical to the aligned code above - * but only the destination (not the source) gets aligned - * so we need to do partial loads of the source followed - * by normal stores to the destination (once we have aligned - * the destination). - */ - -L(unaligned): - andi a3,a3,(NSIZE-1) /* copy a3 bytes to align a0/a1 */ - PTR_SUBU a2,a2,a3 /* a2 is the remining bytes count */ - beqz a3,L(ua_chk16w) /* if a3=0, it is already aligned */ - - C_LDHI v1,UNIT(0)(a1) - C_LDLO v1,UNITM1(1)(a1) - PTR_ADDU a1,a1,a3 - C_STHI v1,UNIT(0)(a0) - PTR_ADDU a0,a0,a3 - -/* - * Now the destination (but not the source) is aligned - * Set a2 to count how many bytes we have to copy after all the 64/128 byte - * chunks are copied and a3 to the dst pointer after all the 64/128 byte - * chunks have been copied. We will loop, incrementing a0 and a1 until a0 - * equals a3. - */ - -L(ua_chk16w): - andi t8,a2,NSIZEDMASK /* any whole 64-byte/128-byte chunks? 
*/ - PTR_SUBU a3,a2,t8 /* subtract from a2 the reminder */ - beq a2,t8,L(ua_chkw) /* if a2==t8, no 64-byte/128-byte chunks */ - PTR_ADDU a3,a0,a3 /* Now a3 is the final dst after loop */ - -# if defined(USE_PREFETCH) && (PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE) - PTR_ADDU t0,a0,a2 /* t0 is the "past the end" address */ - PTR_SUBU t9,t0,PREFETCH_LIMIT /* t9 is the "last safe pref" address */ -# endif - PREFETCH_FOR_LOAD (0, a1) - PREFETCH_FOR_LOAD (1, a1) - PREFETCH_FOR_LOAD (2, a1) -# if defined(USE_PREFETCH) && (PREFETCH_STORE_HINT != PREFETCH_HINT_PREPAREFORSTORE) - PREFETCH_FOR_STORE (1, a0) - PREFETCH_FOR_STORE (2, a0) - PREFETCH_FOR_STORE (3, a0) -# endif -# if defined(RETURN_FIRST_PREFETCH) && defined(USE_PREFETCH) -# if (PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE) - sltu v1,t9,a0 - bgtz v1,L(ua_skip_set) - PTR_ADDIU v0,a0,(PREFETCH_CHUNK*4) -L(ua_skip_set): -# else - PTR_ADDIU v0,a0,(PREFETCH_CHUNK*1) -# endif -# endif -L(ua_loop16w): - PREFETCH_FOR_LOAD (3, a1) - C_LDHI t0,UNIT(0)(a1) - C_LDHI t1,UNIT(1)(a1) - C_LDHI REG2,UNIT(2)(a1) -/* We need to separate out the C_LDHI instruction here so that it will work - both when it is used by itself and when it is used with the branch - instruction. */ -# if defined(USE_PREFETCH) && (PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE) - sltu v1,t9,a0 - C_LDHI REG3,UNIT(3)(a1) - bgtz v1,L(ua_skip_pref) -# else - C_LDHI REG3,UNIT(3)(a1) -# endif - PREFETCH_FOR_STORE (4, a0) - PREFETCH_FOR_STORE (5, a0) -L(ua_skip_pref): - C_LDHI REG4,UNIT(4)(a1) - C_LDHI REG5,UNIT(5)(a1) - C_LDHI REG6,UNIT(6)(a1) - C_LDHI REG7,UNIT(7)(a1) - C_LDLO t0,UNITM1(1)(a1) - C_LDLO t1,UNITM1(2)(a1) - C_LDLO REG2,UNITM1(3)(a1) - C_LDLO REG3,UNITM1(4)(a1) - C_LDLO REG4,UNITM1(5)(a1) - C_LDLO REG5,UNITM1(6)(a1) - C_LDLO REG6,UNITM1(7)(a1) - C_LDLO REG7,UNITM1(8)(a1) - PREFETCH_FOR_LOAD (4, a1) - C_ST t0,UNIT(0)(a0) - C_ST t1,UNIT(1)(a0) - C_ST REG2,UNIT(2)(a0) - C_ST REG3,UNIT(3)(a0) - C_ST REG4,UNIT(4)(a0) - C_ST REG5,UNIT(5)(a0) - C_ST REG6,UNIT(6)(a0) - C_ST REG7,UNIT(7)(a0) - C_LDHI t0,UNIT(8)(a1) - C_LDHI t1,UNIT(9)(a1) - C_LDHI REG2,UNIT(10)(a1) - C_LDHI REG3,UNIT(11)(a1) - C_LDHI REG4,UNIT(12)(a1) - C_LDHI REG5,UNIT(13)(a1) - C_LDHI REG6,UNIT(14)(a1) - C_LDHI REG7,UNIT(15)(a1) - C_LDLO t0,UNITM1(9)(a1) - C_LDLO t1,UNITM1(10)(a1) - C_LDLO REG2,UNITM1(11)(a1) - C_LDLO REG3,UNITM1(12)(a1) - C_LDLO REG4,UNITM1(13)(a1) - C_LDLO REG5,UNITM1(14)(a1) - C_LDLO REG6,UNITM1(15)(a1) - C_LDLO REG7,UNITM1(16)(a1) - PREFETCH_FOR_LOAD (5, a1) - C_ST t0,UNIT(8)(a0) - C_ST t1,UNIT(9)(a0) - C_ST REG2,UNIT(10)(a0) - C_ST REG3,UNIT(11)(a0) - C_ST REG4,UNIT(12)(a0) - C_ST REG5,UNIT(13)(a0) - C_ST REG6,UNIT(14)(a0) - C_ST REG7,UNIT(15)(a0) - PTR_ADDIU a0,a0,UNIT(16) /* adding 64/128 to dest */ - PTR_ADDIU a1,a1,UNIT(16) /* adding 64/128 to src */ - bne a0,a3,L(ua_loop16w) - move a2,t8 - -/* Here we have src and dest word-aligned but less than 64-bytes or - * 128 bytes to go. Check for a 32(64) byte chunk and copy if there - * is one. Otherwise jump down to L(ua_chk1w) to handle the tail end of - * the copy. */ - -L(ua_chkw): - PREFETCH_FOR_LOAD (0, a1) - andi t8,a2,NSIZEMASK /* Is there a 32-byte/64-byte chunk. 
*/ - /* t8 is the reminder count past 32-bytes */ - beq a2,t8,L(ua_chk1w) /* When a2=t8, no 32-byte chunk */ - C_LDHI t0,UNIT(0)(a1) - C_LDHI t1,UNIT(1)(a1) - C_LDHI REG2,UNIT(2)(a1) - C_LDHI REG3,UNIT(3)(a1) - C_LDHI REG4,UNIT(4)(a1) - C_LDHI REG5,UNIT(5)(a1) - C_LDHI REG6,UNIT(6)(a1) - C_LDHI REG7,UNIT(7)(a1) - C_LDLO t0,UNITM1(1)(a1) - C_LDLO t1,UNITM1(2)(a1) - C_LDLO REG2,UNITM1(3)(a1) - C_LDLO REG3,UNITM1(4)(a1) - C_LDLO REG4,UNITM1(5)(a1) - C_LDLO REG5,UNITM1(6)(a1) - C_LDLO REG6,UNITM1(7)(a1) - C_LDLO REG7,UNITM1(8)(a1) - PTR_ADDIU a1,a1,UNIT(8) - C_ST t0,UNIT(0)(a0) - C_ST t1,UNIT(1)(a0) - C_ST REG2,UNIT(2)(a0) - C_ST REG3,UNIT(3)(a0) - C_ST REG4,UNIT(4)(a0) - C_ST REG5,UNIT(5)(a0) - C_ST REG6,UNIT(6)(a0) - C_ST REG7,UNIT(7)(a0) - PTR_ADDIU a0,a0,UNIT(8) -/* - * Here we have less than 32(64) bytes to copy. Set up for a loop to - * copy one word (or double word) at a time. - */ -L(ua_chk1w): - andi a2,t8,(NSIZE-1) /* a2 is the reminder past one (d)word chunks */ - PTR_SUBU a3,t8,a2 /* a3 is count of bytes in one (d)word chunks */ - beq a2,t8,L(ua_smallCopy) - PTR_ADDU a3,a0,a3 /* a3 is the dst address after loop */ - -/* copying in words (4-byte or 8-byte chunks) */ -L(ua_wordCopy_loop): - C_LDHI v1,UNIT(0)(a1) - C_LDLO v1,UNITM1(1)(a1) - PTR_ADDIU a0,a0,UNIT(1) - PTR_ADDIU a1,a1,UNIT(1) - C_ST v1,UNIT(-1)(a0) - bne a0,a3,L(ua_wordCopy_loop) - -/* Copy the last 8 (or 16) bytes */ -L(ua_smallCopy): - PTR_ADDU a3,a0,a2 /* a3 is the last dst address */ - beqz a2,L(leave) -L(ua_smallCopy_loop): - lb v1,0(a1) - PTR_ADDIU a0,a0,1 - PTR_ADDIU a1,a1,1 - sb v1,-1(a0) - bne a0,a3,L(ua_smallCopy_loop) - - jr ra - -#else /* R6_CODE */ - -# ifdef __MIPSEB -# define SWAP_REGS(X,Y) X, Y -# define ALIGN_OFFSET(N) (N) -# else -# define SWAP_REGS(X,Y) Y, X -# define ALIGN_OFFSET(N) (NSIZE-N) -# endif -# define R6_UNALIGNED_WORD_COPY(BYTEOFFSET) \ - andi REG7, a2, (NSIZE-1);/* REG7 is # of bytes to by bytes. */ \ - PTR_SUBU a3, a2, REG7; /* a3 is number of bytes to be copied in */ \ - /* (d)word chunks. */ \ - beq REG7, a2, L(lastb); /* Check for bytes to copy by word */ \ - move a2, REG7; /* a2 is # of bytes to copy byte by byte */ \ - /* after word loop is finished. */ \ - PTR_ADDU REG6, a0, a3; /* REG6 is the dst address after loop. */ \ - PTR_SUBU REG2, a1, t8; /* REG2 is the aligned src address. */ \ - PTR_ADDU a1, a1, a3; /* a1 is addr of source after word loop. */ \ - C_LD t0, UNIT(0)(REG2); /* Load first part of source. */ \ -L(r6_ua_wordcopy##BYTEOFFSET): \ - C_LD t1, UNIT(1)(REG2); /* Load second part of source. */ \ - C_ALIGN REG3, SWAP_REGS(t1,t0), ALIGN_OFFSET(BYTEOFFSET); \ - PTR_ADDIU a0, a0, UNIT(1); /* Increment destination pointer. */ \ - PTR_ADDIU REG2, REG2, UNIT(1); /* Increment aligned source pointer.*/ \ - move t0, t1; /* Move second part of source to first. */ \ - C_ST REG3, UNIT(-1)(a0); \ - bne a0, REG6,L(r6_ua_wordcopy##BYTEOFFSET); \ - j L(lastb); \ - - /* We are generating R6 code, the destination is 4 byte aligned and - the source is not 4 byte aligned. t8 is 1, 2, or 3 depending on the - alignment of the source. 
*/ - -L(r6_unaligned1): - R6_UNALIGNED_WORD_COPY(1) -L(r6_unaligned2): - R6_UNALIGNED_WORD_COPY(2) -L(r6_unaligned3): - R6_UNALIGNED_WORD_COPY(3) -# ifdef USE_DOUBLE -L(r6_unaligned4): - R6_UNALIGNED_WORD_COPY(4) -L(r6_unaligned5): - R6_UNALIGNED_WORD_COPY(5) -L(r6_unaligned6): - R6_UNALIGNED_WORD_COPY(6) -L(r6_unaligned7): - R6_UNALIGNED_WORD_COPY(7) -# endif -#endif /* R6_CODE */ - - .set at -END(MEMCPY_NAME) -#ifndef ANDROID_CHANGES -# ifdef _LIBC -libc_hidden_builtin_def (MEMCPY_NAME) -# endif -#endif diff --git a/sysdeps/mips/memcpy.c b/sysdeps/mips/memcpy.c new file mode 100644 index 0000000000..8c3aec7b36 --- /dev/null +++ b/sysdeps/mips/memcpy.c @@ -0,0 +1,415 @@ +/* + * Copyright (C) 2024 MIPS Tech, LLC + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions are met: + * + * 1. Redistributions of source code must retain the above copyright notice, + * this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright notice, + * this list of conditions and the following disclaimer in the documentation + * and/or other materials provided with the distribution. + * 3. Neither the name of the copyright holder nor the names of its + * contributors may be used to endorse or promote products derived from this + * software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + * POSSIBILITY OF SUCH DAMAGE. +*/ + +#ifdef __GNUC__ + +#undef memcpy + +/* Typical observed latency in cycles in fetching from DRAM. */ +#define LATENCY_CYCLES 63 + +/* Pre-fetch performance is subject to accurate prefetch ahead, + which in turn depends on both the cache-line size and the amount + of look-ahead. Since cache-line size is not nominally fixed in + a typically library built for multiple platforms, we make conservative + assumptions in the default case. This code will typically operate + on such conservative assumptions, but if compiled with the correct + -mtune=xx options, will perform even better on those specific + platforms. 
*/ +#if defined(_MIPS_TUNE_OCTEON2) || defined(_MIPS_TUNE_OCTEON3) + #define CACHE_LINE 128 + #define BLOCK_CYCLES 30 + #undef LATENCY_CYCLES + #define LATENCY_CYCLES 150 +#elif defined(_MIPS_TUNE_I6400) || defined(_MIPS_TUNE_I6500) + #define CACHE_LINE 64 + #define BLOCK_CYCLES 16 +#elif defined(_MIPS_TUNE_P6600) + #define CACHE_LINE 32 + #define BLOCK_CYCLES 12 +#elif defined(_MIPS_TUNE_INTERAPTIV) || defined(_MIPS_TUNE_INTERAPTIV_MR2) + #define CACHE_LINE 32 + #define BLOCK_CYCLES 30 +#else + #define CACHE_LINE 32 + #define BLOCK_CYCLES 11 +#endif + +/* Pre-fetch look ahead = ceil (latency / block-cycles) */ +#define PREF_AHEAD (LATENCY_CYCLES / BLOCK_CYCLES \ + + ((LATENCY_CYCLES % BLOCK_CYCLES) == 0 ? 0 : 1)) + +/* Unroll-factor, controls how many words at a time in the core loop. */ +#define BLOCK (CACHE_LINE == 128 ? 16 : 8) + +#define __overloadable +#if !defined(UNALIGNED_INSTR_SUPPORT) +/* does target have unaligned lw/ld/ualw/uald instructions? */ + #define UNALIGNED_INSTR_SUPPORT 0 +#if (__mips_isa_rev < 6 && !defined(__mips1)) + #undef UNALIGNED_INSTR_SUPPORT + #define UNALIGNED_INSTR_SUPPORT 1 + #endif +#endif +#if !defined(HW_UNALIGNED_SUPPORT) +/* Does target have hardware support for unaligned accesses? */ + #define HW_UNALIGNED_SUPPORT 0 + #if __mips_isa_rev >= 6 + #undef HW_UNALIGNED_SUPPORT + #define HW_UNALIGNED_SUPPORT 1 + #endif +#endif +#define ENABLE_PREFETCH 1 +#if ENABLE_PREFETCH + #define PREFETCH(addr) __builtin_prefetch (addr, 0, 0) +#else + #define PREFETCH(addr) +#endif + +#include + +#ifdef __mips64 +typedef unsigned long long reg_t; +typedef struct +{ + reg_t B0:8, B1:8, B2:8, B3:8, B4:8, B5:8, B6:8, B7:8; +} bits_t; +#else +typedef unsigned long reg_t; +typedef struct +{ + reg_t B0:8, B1:8, B2:8, B3:8; +} bits_t; +#endif + +#define CACHE_LINES_PER_BLOCK ((BLOCK * sizeof (reg_t) > CACHE_LINE) ? \ + (BLOCK * sizeof (reg_t) / CACHE_LINE) \ + : 1) + +typedef union +{ + reg_t v; + bits_t b; +} bitfields_t; + +#define DO_BYTE(a, i) \ + a[i] = bw.b.B##i; \ + len--; \ + if(!len) return ret; \ + +/* This code is called when aligning a pointer, there are remaining bytes + after doing word compares, or architecture does not have some form + of unaligned support. */ +static inline void * __attribute__ ((always_inline)) +do_bytes (void *a, const void *b, unsigned long len, void *ret) +{ + unsigned char *x = (unsigned char *) a; + unsigned char *y = (unsigned char *) b; + unsigned long i; + /* 'len' might be zero here, so preloading the first two values + before the loop may access unallocated memory. */ + for (i = 0; i < len; i++) + { + *x = *y; + x++; + y++; + } + return ret; +} + +/* This code is called to copy only remaining bytes within word or doubleword */ +static inline void * __attribute__ ((always_inline)) +do_bytes_remaining (void *a, const void *b, unsigned long len, void *ret) +{ + unsigned char *x = (unsigned char *) a; + bitfields_t bw; + if(len > 0) + { + bw.v = *(reg_t *)b; + DO_BYTE(x, 0); + DO_BYTE(x, 1); + DO_BYTE(x, 2); +#ifdef __mips64 + DO_BYTE(x, 3); + DO_BYTE(x, 4); + DO_BYTE(x, 5); + DO_BYTE(x, 6); +#endif + } + return ret; +} + +static inline void * __attribute__ ((always_inline)) +do_words_remaining (reg_t *a, const reg_t *b, unsigned long words, + unsigned long bytes, void *ret) +{ + /* Use a set-back so that load/stores have incremented addresses in + order to promote bonding. 
*/ + int off = (BLOCK - words); + a -= off; + b -= off; + switch (off) + { + case 1: a[1] = b[1]; // Fall through + case 2: a[2] = b[2]; // Fall through + case 3: a[3] = b[3]; // Fall through + case 4: a[4] = b[4]; // Fall through + case 5: a[5] = b[5]; // Fall through + case 6: a[6] = b[6]; // Fall through + case 7: a[7] = b[7]; // Fall through +#if BLOCK==16 + case 8: a[8] = b[8]; // Fall through + case 9: a[9] = b[9]; // Fall through + case 10: a[10] = b[10]; // Fall through + case 11: a[11] = b[11]; // Fall through + case 12: a[12] = b[12]; // Fall through + case 13: a[13] = b[13]; // Fall through + case 14: a[14] = b[14]; // Fall through + case 15: a[15] = b[15]; +#endif + } + return do_bytes_remaining (a + BLOCK, b + BLOCK, bytes, ret); +} + +#if !HW_UNALIGNED_SUPPORT +#if UNALIGNED_INSTR_SUPPORT +/* For MIPS GCC, there are no unaligned builtins - so this struct forces + the compiler to treat the pointer access as unaligned. */ +struct ulw +{ + reg_t uli; +} __attribute__ ((packed)); +static inline void * __attribute__ ((always_inline)) +do_uwords_remaining (struct ulw *a, const reg_t *b, unsigned long words, + unsigned long bytes, void *ret) +{ + /* Use a set-back so that load/stores have incremented addresses in + order to promote bonding. */ + int off = (BLOCK - words); + a -= off; + b -= off; + switch (off) + { + case 1: a[1].uli = b[1]; // Fall through + case 2: a[2].uli = b[2]; // Fall through + case 3: a[3].uli = b[3]; // Fall through + case 4: a[4].uli = b[4]; // Fall through + case 5: a[5].uli = b[5]; // Fall through + case 6: a[6].uli = b[6]; // Fall through + case 7: a[7].uli = b[7]; // Fall through +#if BLOCK==16 + case 8: a[8].uli = b[8]; // Fall through + case 9: a[9].uli = b[9]; // Fall through + case 10: a[10].uli = b[10]; // Fall through + case 11: a[11].uli = b[11]; // Fall through + case 12: a[12].uli = b[12]; // Fall through + case 13: a[13].uli = b[13]; // Fall through + case 14: a[14].uli = b[14]; // Fall through + case 15: a[15].uli = b[15]; +#endif + } + return do_bytes_remaining (a + BLOCK, b + BLOCK, bytes, ret); +} + +/* The first pointer is not aligned while second pointer is. */ +static void * +unaligned_words (struct ulw *a, const reg_t * b, + unsigned long words, unsigned long bytes, void *ret) +{ + unsigned long i, words_by_block, words_by_1; + words_by_1 = words % BLOCK; + words_by_block = words / BLOCK; + for (; words_by_block > 0; words_by_block--) + { + if (words_by_block >= PREF_AHEAD - CACHE_LINES_PER_BLOCK) + for (i = 0; i < CACHE_LINES_PER_BLOCK; i++) + PREFETCH (b + (BLOCK / CACHE_LINES_PER_BLOCK) * (PREF_AHEAD + i)); + + reg_t y0 = b[0], y1 = b[1], y2 = b[2], y3 = b[3]; + reg_t y4 = b[4], y5 = b[5], y6 = b[6], y7 = b[7]; + a[0].uli = y0; + a[1].uli = y1; + a[2].uli = y2; + a[3].uli = y3; + a[4].uli = y4; + a[5].uli = y5; + a[6].uli = y6; + a[7].uli = y7; +#if BLOCK==16 + y0 = b[8], y1 = b[9], y2 = b[10], y3 = b[11]; + y4 = b[12], y5 = b[13], y6 = b[14], y7 = b[15]; + a[8].uli = y0; + a[9].uli = y1; + a[10].uli = y2; + a[11].uli = y3; + a[12].uli = y4; + a[13].uli = y5; + a[14].uli = y6; + a[15].uli = y7; +#endif + a += BLOCK; + b += BLOCK; + } + + /* Mop up any remaining bytes. */ + return do_uwords_remaining (a, b, words_by_1, bytes, ret); +} + +#else + +/* No HW support or unaligned lw/ld/ualw/uald instructions. 
*/ +static void * +unaligned_words (reg_t * a, const reg_t * b, + unsigned long words, unsigned long bytes, void *ret) +{ + unsigned long i; + unsigned char *x; + for (i = 0; i < words; i++) + { + bitfields_t bw; + bw.v = *((reg_t*) b); + x = (unsigned char *) a; + x[0] = bw.b.B0; + x[1] = bw.b.B1; + x[2] = bw.b.B2; + x[3] = bw.b.B3; +#ifdef __mips64 + x[4] = bw.b.B4; + x[5] = bw.b.B5; + x[6] = bw.b.B6; + x[7] = bw.b.B7; +#endif + a += 1; + b += 1; + } + /* Mop up any remaining bytes. */ + return do_bytes_remaining (a, b, bytes, ret); +} + +#endif /* UNALIGNED_INSTR_SUPPORT */ +#endif /* HW_UNALIGNED_SUPPORT */ + +/* both pointers are aligned, or first isn't and HW support for unaligned. */ +static void * +aligned_words (reg_t * a, const reg_t * b, + unsigned long words, unsigned long bytes, void *ret) +{ + unsigned long i, words_by_block, words_by_1; + words_by_1 = words % BLOCK; + words_by_block = words / BLOCK; + for (; words_by_block > 0; words_by_block--) + { + if(words_by_block >= PREF_AHEAD - CACHE_LINES_PER_BLOCK) + for (i = 0; i < CACHE_LINES_PER_BLOCK; i++) + PREFETCH (b + ((BLOCK / CACHE_LINES_PER_BLOCK) * (PREF_AHEAD + i))); + + reg_t x0 = b[0], x1 = b[1], x2 = b[2], x3 = b[3]; + reg_t x4 = b[4], x5 = b[5], x6 = b[6], x7 = b[7]; + a[0] = x0; + a[1] = x1; + a[2] = x2; + a[3] = x3; + a[4] = x4; + a[5] = x5; + a[6] = x6; + a[7] = x7; +#if BLOCK==16 + x0 = b[8], x1 = b[9], x2 = b[10], x3 = b[11]; + x4 = b[12], x5 = b[13], x6 = b[14], x7 = b[15]; + a[8] = x0; + a[9] = x1; + a[10] = x2; + a[11] = x3; + a[12] = x4; + a[13] = x5; + a[14] = x6; + a[15] = x7; +#endif + a += BLOCK; + b += BLOCK; + } + + /* mop up any remaining bytes. */ + return do_words_remaining (a, b, words_by_1, bytes, ret); +} + +void * +memcpy (void *a, const void *b, size_t len) __overloadable +{ + unsigned long bytes, words, i; + void *ret = a; + /* shouldn't hit that often. */ + if (len <= 8) + return do_bytes (a, b, len, a); + + /* Start pre-fetches ahead of time. */ + if (len > CACHE_LINE * (PREF_AHEAD - 1)) + for (i = 1; i < PREF_AHEAD - 1; i++) + PREFETCH ((char *)b + CACHE_LINE * i); + else + for (i = 1; i < len / CACHE_LINE; i++) + PREFETCH ((char *)b + CACHE_LINE * i); + + /* Align the second pointer to word/dword alignment. + Note that the pointer is only 32-bits for o32/n32 ABIs. For + n32, loads are done as 64-bit while address remains 32-bit. */ + bytes = ((unsigned long) b) % (sizeof (reg_t)); + + if (bytes) + { + bytes = (sizeof (reg_t)) - bytes; + if (bytes > len) + bytes = len; + do_bytes (a, b, bytes, ret); + if (len == bytes) + return ret; + len -= bytes; + a = (void *) (((unsigned char *) a) + bytes); + b = (const void *) (((unsigned char *) b) + bytes); + } + + /* Second pointer now aligned. */ + words = len / sizeof (reg_t); + bytes = len % sizeof (reg_t); + +#if HW_UNALIGNED_SUPPORT + /* treat possible unaligned first pointer as aligned. */ + return aligned_words (a, b, words, bytes, ret); +#else + if (((unsigned long) a) % sizeof (reg_t) == 0) + return aligned_words (a, b, words, bytes, ret); + /* need to use unaligned instructions on first pointer. */ + return unaligned_words (a, b, words, bytes, ret); +#endif +} + +libc_hidden_builtin_def (memcpy) + +#else +#include +#endif diff --git a/sysdeps/mips/memset.S b/sysdeps/mips/memset.S deleted file mode 100644 index 0c8375c9f5..0000000000 --- a/sysdeps/mips/memset.S +++ /dev/null @@ -1,430 +0,0 @@ -/* Copyright (C) 2013-2024 Free Software Foundation, Inc. - This file is part of the GNU C Library. 
- - The GNU C Library is free software; you can redistribute it and/or - modify it under the terms of the GNU Lesser General Public - License as published by the Free Software Foundation; either - version 2.1 of the License, or (at your option) any later version. - - The GNU C Library is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - Lesser General Public License for more details. - - You should have received a copy of the GNU Lesser General Public - License along with the GNU C Library. If not, see - . */ - -#ifdef ANDROID_CHANGES -# include "machine/asm.h" -# include "machine/regdef.h" -# define PREFETCH_STORE_HINT PREFETCH_HINT_PREPAREFORSTORE -#elif _LIBC -# include -# include -# include -# define PREFETCH_STORE_HINT PREFETCH_HINT_PREPAREFORSTORE -#elif defined _COMPILING_NEWLIB -# include "machine/asm.h" -# include "machine/regdef.h" -# define PREFETCH_STORE_HINT PREFETCH_HINT_PREPAREFORSTORE -#else -# include -# include -#endif - -/* Check to see if the MIPS architecture we are compiling for supports - prefetching. */ - -#if (__mips == 4) || (__mips == 5) || (__mips == 32) || (__mips == 64) -# ifndef DISABLE_PREFETCH -# define USE_PREFETCH -# endif -#endif - -#if defined(_MIPS_SIM) && ((_MIPS_SIM == _ABI64) || (_MIPS_SIM == _ABIN32)) -# ifndef DISABLE_DOUBLE -# define USE_DOUBLE -# endif -#endif - -#ifndef USE_DOUBLE -# ifndef DISABLE_DOUBLE_ALIGN -# define DOUBLE_ALIGN -# endif -#endif - - -/* Some asm.h files do not have the L macro definition. */ -#ifndef L -# if _MIPS_SIM == _ABIO32 -# define L(label) $L ## label -# else -# define L(label) .L ## label -# endif -#endif - -/* Some asm.h files do not have the PTR_ADDIU macro definition. */ -#ifndef PTR_ADDIU -# ifdef USE_DOUBLE -# define PTR_ADDIU daddiu -# else -# define PTR_ADDIU addiu -# endif -#endif - -/* New R6 instructions that may not be in asm.h. */ -#ifndef PTR_LSA -# if _MIPS_SIM == _ABI64 -# define PTR_LSA dlsa -# else -# define PTR_LSA lsa -# endif -#endif - -#if __mips_isa_rev > 5 && defined (__mips_micromips) -# define PTR_BC bc16 -#else -# define PTR_BC bc -#endif - -/* Using PREFETCH_HINT_PREPAREFORSTORE instead of PREFETCH_STORE - or PREFETCH_STORE_STREAMED offers a large performance advantage - but PREPAREFORSTORE has some special restrictions to consider. - - Prefetch with the 'prepare for store' hint does not copy a memory - location into the cache, it just allocates a cache line and zeros - it out. This means that if you do not write to the entire cache - line before writing it out to memory some data will get zero'ed out - when the cache line is written back to memory and data will be lost. - - There are ifdef'ed sections of this memcpy to make sure that it does not - do prefetches on cache lines that are not going to be completely written. - This code is only needed and only used when PREFETCH_STORE_HINT is set to - PREFETCH_HINT_PREPAREFORSTORE. This code assumes that cache lines are - less than MAX_PREFETCH_SIZE bytes and if the cache line is larger it will - not work correctly. */ - -#ifdef USE_PREFETCH -# define PREFETCH_HINT_STORE 1 -# define PREFETCH_HINT_STORE_STREAMED 5 -# define PREFETCH_HINT_STORE_RETAINED 7 -# define PREFETCH_HINT_PREPAREFORSTORE 30 - -/* If we have not picked out what hints to use at this point use the - standard load and store prefetch hints. 
*/ -# ifndef PREFETCH_STORE_HINT -# define PREFETCH_STORE_HINT PREFETCH_HINT_STORE -# endif - -/* We double everything when USE_DOUBLE is true so we do 2 prefetches to - get 64 bytes in that case. The assumption is that each individual - prefetch brings in 32 bytes. */ -# ifdef USE_DOUBLE -# define PREFETCH_CHUNK 64 -# define PREFETCH_FOR_STORE(chunk, reg) \ - pref PREFETCH_STORE_HINT, (chunk)*64(reg); \ - pref PREFETCH_STORE_HINT, ((chunk)*64)+32(reg) -# else -# define PREFETCH_CHUNK 32 -# define PREFETCH_FOR_STORE(chunk, reg) \ - pref PREFETCH_STORE_HINT, (chunk)*32(reg) -# endif - -/* MAX_PREFETCH_SIZE is the maximum size of a prefetch, it must not be less - than PREFETCH_CHUNK, the assumed size of each prefetch. If the real size - of a prefetch is greater than MAX_PREFETCH_SIZE and the PREPAREFORSTORE - hint is used, the code will not work correctly. If PREPAREFORSTORE is not - used than MAX_PREFETCH_SIZE does not matter. */ -# define MAX_PREFETCH_SIZE 128 -/* PREFETCH_LIMIT is set based on the fact that we never use an offset greater - than 5 on a STORE prefetch and that a single prefetch can never be larger - than MAX_PREFETCH_SIZE. We add the extra 32 when USE_DOUBLE is set because - we actually do two prefetches in that case, one 32 bytes after the other. */ -# ifdef USE_DOUBLE -# define PREFETCH_LIMIT (5 * PREFETCH_CHUNK) + 32 + MAX_PREFETCH_SIZE -# else -# define PREFETCH_LIMIT (5 * PREFETCH_CHUNK) + MAX_PREFETCH_SIZE -# endif - -# if (PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE) \ - && ((PREFETCH_CHUNK * 4) < MAX_PREFETCH_SIZE) -/* We cannot handle this because the initial prefetches may fetch bytes that - are before the buffer being copied. We start copies with an offset - of 4 so avoid this situation when using PREPAREFORSTORE. */ -# error "PREFETCH_CHUNK is too large and/or MAX_PREFETCH_SIZE is too small." -# endif -#else /* USE_PREFETCH not defined */ -# define PREFETCH_FOR_STORE(offset, reg) -#endif - -#if __mips_isa_rev > 5 -# if (PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE) -# undef PREFETCH_STORE_HINT -# define PREFETCH_STORE_HINT PREFETCH_HINT_STORE_STREAMED -# endif -# define R6_CODE -#endif - -/* Allow the routine to be named something else if desired. */ -#ifndef MEMSET_NAME -# define MEMSET_NAME memset -#endif - -/* We load/store 64 bits at a time when USE_DOUBLE is true. - The C_ prefix stands for CHUNK and is used to avoid macro name - conflicts with system header files. */ - -#ifdef USE_DOUBLE -# define C_ST sd -# ifdef __MIPSEB -# define C_STHI sdl /* high part is left in big-endian */ -# else -# define C_STHI sdr /* high part is right in little-endian */ -# endif -#else -# define C_ST sw -# ifdef __MIPSEB -# define C_STHI swl /* high part is left in big-endian */ -# else -# define C_STHI swr /* high part is right in little-endian */ -# endif -#endif - -/* Bookkeeping values for 32 vs. 64 bit mode. */ -#ifdef USE_DOUBLE -# define NSIZE 8 -# define NSIZEMASK 0x3f -# define NSIZEDMASK 0x7f -#else -# define NSIZE 4 -# define NSIZEMASK 0x1f -# define NSIZEDMASK 0x3f -#endif -#define UNIT(unit) ((unit)*NSIZE) -#define UNITM1(unit) (((unit)*NSIZE)-1) - -#ifdef ANDROID_CHANGES -LEAF(MEMSET_NAME,0) -#else -LEAF(MEMSET_NAME) -#endif - - .set nomips16 -/* If the size is less than 4*NSIZE (16 or 32), go to L(lastb). Regardless of - size, copy dst pointer to v0 for the return value. */ - slti t2,a2,(4 * NSIZE) - move v0,a0 - bne t2,zero,L(lastb) - -/* If memset value is not zero, we copy it to all the bytes in a 32 or 64 - bit word. 
*/ - PTR_SUBU a3,zero,a0 - beq a1,zero,L(set0) /* If memset value is zero no smear */ - nop - - /* smear byte into 32 or 64 bit word */ -#if ((__mips == 64) || (__mips == 32)) && (__mips_isa_rev >= 2) -# ifdef USE_DOUBLE - dins a1, a1, 8, 8 /* Replicate fill byte into half-word. */ - dins a1, a1, 16, 16 /* Replicate fill byte into word. */ - dins a1, a1, 32, 32 /* Replicate fill byte into dbl word. */ -# else - ins a1, a1, 8, 8 /* Replicate fill byte into half-word. */ - ins a1, a1, 16, 16 /* Replicate fill byte into word. */ -# endif -#else -# ifdef USE_DOUBLE - and a1,0xff - dsll t2,a1,8 - or a1,t2 - dsll t2,a1,16 - or a1,t2 - dsll t2,a1,32 - or a1,t2 -# else - and a1,0xff - sll t2,a1,8 - or a1,t2 - sll t2,a1,16 - or a1,t2 -# endif -#endif - -/* If the destination address is not aligned do a partial store to get it - aligned. If it is already aligned just jump to L(aligned). */ -L(set0): -#ifndef R6_CODE - andi t2,a3,(NSIZE-1) /* word-unaligned address? */ - PTR_SUBU a2,a2,t2 - beq t2,zero,L(aligned) /* t2 is the unalignment count */ - C_STHI a1,0(a0) - PTR_ADDU a0,a0,t2 -#else /* R6_CODE */ - andi t2,a0,7 -# ifdef __mips_micromips - auipc t9,%pcrel_hi(L(atable)) - addiu t9,t9,%pcrel_lo(L(atable)+4) - PTR_LSA t9,t2,t9,1 -# else - lapc t9,L(atable) - PTR_LSA t9,t2,t9,2 -# endif - jrc t9 -L(atable): - PTR_BC L(aligned) - PTR_BC L(lb7) - PTR_BC L(lb6) - PTR_BC L(lb5) - PTR_BC L(lb4) - PTR_BC L(lb3) - PTR_BC L(lb2) - PTR_BC L(lb1) -L(lb7): - sb a1,6(a0) -L(lb6): - sb a1,5(a0) -L(lb5): - sb a1,4(a0) -L(lb4): - sb a1,3(a0) -L(lb3): - sb a1,2(a0) -L(lb2): - sb a1,1(a0) -L(lb1): - sb a1,0(a0) - - li t9,NSIZE - subu t2,t9,t2 - PTR_SUBU a2,a2,t2 - PTR_ADDU a0,a0,t2 -#endif /* R6_CODE */ - -L(aligned): -/* If USE_DOUBLE is not set we may still want to align the data on a 16 - byte boundary instead of an 8 byte boundary to maximize the opportunity - of proAptiv chips to do memory bonding (combining two sequential 4 - byte stores into one 8 byte store). We know there are at least 4 bytes - left to store or we would have jumped to L(lastb) earlier in the code. */ -#ifdef DOUBLE_ALIGN - andi t2,a3,4 - PTR_SUBU a2,a2,t2 - beq t2,zero,L(double_aligned) - sw a1,0(a0) - PTR_ADDU a0,a0,t2 -L(double_aligned): -#endif - -/* Now the destination is aligned to (word or double word) aligned address - Set a2 to count how many bytes we have to copy after all the 64/128 byte - chunks are copied and a3 to the dest pointer after all the 64/128 byte - chunks have been copied. We will loop, incrementing a0 until it equals - a3. */ - andi t8,a2,NSIZEDMASK /* any whole 64-byte/128-byte chunks? */ - PTR_SUBU a3,a2,t8 /* subtract from a2 the reminder */ - beq a2,t8,L(chkw) /* if a2==t8, no 64-byte/128-byte chunks */ - PTR_ADDU a3,a0,a3 /* Now a3 is the final dst after loop */ - -/* When in the loop we may prefetch with the 'prepare to store' hint, - in this case the a0+x should not be past the "t0-32" address. This - means: for x=128 the last "safe" a0 address is "t0-160". Alternatively, - for x=64 the last "safe" a0 address is "t0-96" In the current version we - will use "prefetch hint,128(a0)", so "t0-160" is the limit. 
*/ -#if defined(USE_PREFETCH) \ - && (PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE) - PTR_ADDU t0,a0,a2 /* t0 is the "past the end" address */ - PTR_SUBU t9,t0,PREFETCH_LIMIT /* t9 is the "last safe pref" address */ -#endif -#if defined(USE_PREFETCH) \ - && (PREFETCH_STORE_HINT != PREFETCH_HINT_PREPAREFORSTORE) - PREFETCH_FOR_STORE (1, a0) - PREFETCH_FOR_STORE (2, a0) - PREFETCH_FOR_STORE (3, a0) -#endif - -L(loop16w): -#if defined(USE_PREFETCH) \ - && (PREFETCH_STORE_HINT == PREFETCH_HINT_PREPAREFORSTORE) - sltu v1,t9,a0 /* If a0 > t9 don't use next prefetch */ - bgtz v1,L(skip_pref) -#endif -#ifdef R6_CODE - PREFETCH_FOR_STORE (2, a0) -#else - PREFETCH_FOR_STORE (4, a0) - PREFETCH_FOR_STORE (5, a0) -#endif -L(skip_pref): - C_ST a1,UNIT(0)(a0) - C_ST a1,UNIT(1)(a0) - C_ST a1,UNIT(2)(a0) - C_ST a1,UNIT(3)(a0) - C_ST a1,UNIT(4)(a0) - C_ST a1,UNIT(5)(a0) - C_ST a1,UNIT(6)(a0) - C_ST a1,UNIT(7)(a0) - C_ST a1,UNIT(8)(a0) - C_ST a1,UNIT(9)(a0) - C_ST a1,UNIT(10)(a0) - C_ST a1,UNIT(11)(a0) - C_ST a1,UNIT(12)(a0) - C_ST a1,UNIT(13)(a0) - C_ST a1,UNIT(14)(a0) - C_ST a1,UNIT(15)(a0) - PTR_ADDIU a0,a0,UNIT(16) /* adding 64/128 to dest */ - bne a0,a3,L(loop16w) - move a2,t8 - -/* Here we have dest word-aligned but less than 64-bytes or 128 bytes to go. - Check for a 32(64) byte chunk and copy if there is one. Otherwise - jump down to L(chk1w) to handle the tail end of the copy. */ -L(chkw): - andi t8,a2,NSIZEMASK /* is there a 32-byte/64-byte chunk. */ - /* the t8 is the reminder count past 32-bytes */ - beq a2,t8,L(chk1w)/* when a2==t8, no 32-byte chunk */ - C_ST a1,UNIT(0)(a0) - C_ST a1,UNIT(1)(a0) - C_ST a1,UNIT(2)(a0) - C_ST a1,UNIT(3)(a0) - C_ST a1,UNIT(4)(a0) - C_ST a1,UNIT(5)(a0) - C_ST a1,UNIT(6)(a0) - C_ST a1,UNIT(7)(a0) - PTR_ADDIU a0,a0,UNIT(8) - -/* Here we have less than 32(64) bytes to set. Set up for a loop to - copy one word (or double word) at a time. Set a2 to count how many - bytes we have to copy after all the word (or double word) chunks are - copied and a3 to the dest pointer after all the (d)word chunks have - been copied. We will loop, incrementing a0 until a0 equals a3. */ -L(chk1w): - andi a2,t8,(NSIZE-1) /* a2 is the reminder past one (d)word chunks */ - PTR_SUBU a3,t8,a2 /* a3 is count of bytes in one (d)word chunks */ - beq a2,t8,L(lastb) - PTR_ADDU a3,a0,a3 /* a3 is the dst address after loop */ - -/* copying in words (4-byte or 8 byte chunks) */ -L(wordCopy_loop): - PTR_ADDIU a0,a0,UNIT(1) - C_ST a1,UNIT(-1)(a0) - bne a0,a3,L(wordCopy_loop) - -/* Copy the last 8 (or 16) bytes */ -L(lastb): - PTR_ADDU a3,a0,a2 /* a3 is the last dst address */ - blez a2,L(leave) -L(lastbloop): - PTR_ADDIU a0,a0,1 - sb a1,-1(a0) - bne a0,a3,L(lastbloop) -L(leave): - jr ra - - .set at -END(MEMSET_NAME) -#ifndef ANDROID_CHANGES -# ifdef _LIBC -libc_hidden_builtin_def (MEMSET_NAME) -# endif -#endif diff --git a/sysdeps/mips/memset.c b/sysdeps/mips/memset.c new file mode 100644 index 0000000000..813b3bc0e6 --- /dev/null +++ b/sysdeps/mips/memset.c @@ -0,0 +1,187 @@ +/* + * Copyright (C) 2024 MIPS Tech, LLC + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions are met: + * + * 1. Redistributions of source code must retain the above copyright notice, + * this list of conditions and the following disclaimer. + * 2. 
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. Neither the name of the copyright holder nor the names of its
+ * contributors may be used to endorse or promote products derived from this
+ * software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifdef __GNUC__
+
+#undef memset
+
+#include
+
+#if _MIPS_SIM == _ABIO32
+#define SIZEOF_reg_t 4
+typedef unsigned long reg_t;
+#else
+#define SIZEOF_reg_t 8
+typedef unsigned long long reg_t;
+#endif
+
+typedef struct bits8
+{
+  reg_t B0:8, B1:8, B2:8, B3:8;
+#if SIZEOF_reg_t == 8
+  reg_t B4:8, B5:8, B6:8, B7:8;
+#endif
+} bits8_t;
+typedef struct bits16
+{
+  reg_t B0:16, B1:16;
+#if SIZEOF_reg_t == 8
+  reg_t B2:16, B3:16;
+#endif
+} bits16_t;
+typedef struct bits32
+{
+  reg_t B0:32;
+#if SIZEOF_reg_t == 8
+  reg_t B1:32;
+#endif
+} bits32_t;
+
+/* This union assumes that small structures can be in registers. If
+   not, then memory accesses will be done - not optimal, but ok. */
+typedef union
+{
+  reg_t v;
+  bits8_t b8;
+  bits16_t b16;
+  bits32_t b32;
+} bitfields_t;
+
+/* This code is called when aligning a pointer or there are remaining bytes
+   after doing word sets. */
+static inline void * __attribute__ ((always_inline))
+do_bytes (void *a, void *retval, unsigned char fill, const unsigned long len)
+{
+  unsigned char *x = ((unsigned char *) a);
+  unsigned long i;
+
+  for (i = 0; i < len; i++)
+    *x++ = fill;
+
+  return retval;
+}
+
+/* Pointer is aligned. */
+static void *
+do_aligned_words (reg_t * a, void * retval, reg_t fill,
+                  unsigned long words, unsigned long bytes)
+{
+  unsigned long i, words_by_1, words_by_16;
+
+  words_by_1 = words % 16;
+  words_by_16 = words / 16;
+
+  /*
+   * Note: prefetching the store memory is not beneficial on most
+   * cores since the ls/st unit has store buffers that will be filled
+   * before the cache line is actually needed.
+   *
+   * Also, using prepare-for-store cache op is problematic since we
+   * don't know the implementation-defined cache line length and we
+   * don't want to touch unintended memory.
+   */
+  for (i = 0; i < words_by_16; i++)
+    {
+      a[0] = fill;
+      a[1] = fill;
+      a[2] = fill;
+      a[3] = fill;
+      a[4] = fill;
+      a[5] = fill;
+      a[6] = fill;
+      a[7] = fill;
+      a[8] = fill;
+      a[9] = fill;
+      a[10] = fill;
+      a[11] = fill;
+      a[12] = fill;
+      a[13] = fill;
+      a[14] = fill;
+      a[15] = fill;
+      a += 16;
+    }
+
+  /* do remaining words. */
+  for (i = 0; i < words_by_1; i++)
+    *a++ = fill;
+
+  /* mop up any remaining bytes. */
+  return do_bytes (a, retval, fill, bytes);
+}
+
+void *
+memset (void *a, int ifill, size_t len)
+{
+  unsigned long bytes, words;
+  bitfields_t fill;
+  void *retval = (void *) a;
+
+  /* shouldn't hit that often. */
+  if (len < 16)
+    return do_bytes (a, retval, ifill, len);
+
+  /* Align the pointer to word/dword alignment.
+     Note that the pointer is only 32-bits for o32/n32 ABIs. For
+     n32, loads are done as 64-bit while address remains 32-bit. */
+  bytes = ((unsigned long) a) % (sizeof (reg_t) * 2);
+  if (bytes)
+    {
+      bytes = (sizeof (reg_t) * 2 - bytes);
+      if (bytes > len)
+        bytes = len;
+      do_bytes (a, retval, ifill, bytes);
+      if (len == bytes)
+        return retval;
+      len -= bytes;
+      a = (void *) (((unsigned char *) a) + bytes);
+    }
+
+  /* Create correct fill value for reg_t sized variable. */
+  if (ifill != 0)
+    {
+      fill.b8.B0 = (unsigned char) ifill;
+      fill.b8.B1 = fill.b8.B0;
+      fill.b16.B1 = fill.b16.B0;
+#if SIZEOF_reg_t == 8
+      fill.b32.B1 = fill.b32.B0;
+#endif
+    }
+  else
+    fill.v = 0;
+
+  words = len / sizeof (reg_t);
+  bytes = len % sizeof (reg_t);
+  return do_aligned_words (a, retval, fill.v, words, bytes);
+}
+
+
+libc_hidden_builtin_def (memset)
+
+#else
+#include
+#endif
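
The removed assembly and the new C file both start by replicating the fill byte
across a full register before doing wide stores: the .S code "smears" it with
dins or sll/or pairs, while memset.c builds the same value through the
bitfields_t union. Below is a minimal standalone sketch of that replication
step (not part of the patch; the names are illustrative), written as an
explicit shift/or chain instead of the union:

#include <stdint.h>
#include <stdio.h>

/* Double the fill byte up through half-word, word and double word,
   mirroring the dins/dsll+or sequences in the old memset.S and the
   b8/b16/b32 copies done through bitfields_t in the new memset.c.  */
static uint64_t
smear_byte (int c)
{
  uint64_t v = (uint64_t) (unsigned char) c;
  v |= v << 8;   /* byte      -> half-word */
  v |= v << 16;  /* half-word -> word      */
  v |= v << 32;  /* word      -> double word (64-bit fill only) */
  return v;
}

int
main (void)
{
  /* Prints abababababababab.  */
  printf ("%016llx\n", (unsigned long long) smear_byte (0xab));
  return 0;
}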
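
The driver in memset.c splits each request into three parts: head bytes that
bring the destination up to 2 * sizeof (reg_t) alignment, whole reg_t words
(written sixteen at a time in do_aligned_words), and a byte tail. A small
worked example of that arithmetic, assuming an o32 build where reg_t is 4
bytes; the address and length are made up for illustration:

#include <stdio.h>

int
main (void)
{
  unsigned long addr = 0x1003;  /* hypothetical unaligned destination */
  unsigned long len = 100;      /* hypothetical length */
  unsigned long regsz = 4;      /* sizeof (reg_t) under o32 */

  unsigned long head = addr % (regsz * 2);
  head = head ? (regsz * 2) - head : 0;          /* 5 bytes to reach 8-byte alignment */
  unsigned long words = (len - head) / regsz;    /* 23 full words */
  unsigned long tail = (len - head) % regsz;     /* 3 trailing bytes */

  /* Prints: head=5 words=23 by16=1 by1=7 tail=3 */
  printf ("head=%lu words=%lu by16=%lu by1=%lu tail=%lu\n",
          head, words, words / 16, words % 16, tail);
  return 0;
}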
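
As a quick sanity check of an implementation like this, one might sweep
misaligned start offsets and lengths around the 16-byte cutoff and the
word-chunk boundaries and compare the result against a plain byte loop. The
harness below is purely illustrative and not part of the patch; it exercises
whatever memset the program links against:

#include <assert.h>
#include <string.h>

int
main (void)
{
  unsigned char buf[256], ref[256];
  size_t off, len, i;

  for (off = 0; off < 16; off++)
    for (len = 0; off + len <= sizeof buf; len++)
      {
        memset (buf, 0x5a, sizeof buf);  /* known background */
        memset (ref, 0x5a, sizeof ref);
        memset (buf + off, 0xc3, len);   /* routine under test */
        for (i = 0; i < len; i++)        /* byte-loop reference */
          ref[off + i] = 0xc3;
        assert (memcmp (buf, ref, sizeof buf) == 0);
      }
  return 0;
}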