From patchwork Mon Dec 18 08:31:16 2023
X-Patchwork-Submitter: Amrita H S
X-Patchwork-Id: 82374
From: Amrita H S
To: libc-alpha@sourceware.org
Cc: Amrita H S
Subject: [PATCH V1] powerpc: Optimized strncmp for power10
Date: Mon, 18 Dec 2023 03:31:16 -0500
Message-ID: <20231218083116.1174590-1-amritahs@linux.vnet.ibm.com>
X-Mailer: git-send-email 2.41.0

This patch is based on __strcmp_power10.

Improvements over __strncmp_power9:

   1. Uses new POWER10 instructions
      - This code uses lxvp to decrease contention on load
        by loading 32 bytes per instruction.
   2. Performance implication
      - This version has around 38% better performance on average.
      - A minor performance regression is seen for a few small sizes
        and specific combinations of alignments.

Signed-off-by: Amrita H S
---
 .../powerpc/powerpc64/le/power10/strncmp.S    | 277 ++++++++++++++++++
 sysdeps/powerpc/powerpc64/multiarch/Makefile  |   2 +-
 .../powerpc64/multiarch/ifunc-impl-list.c     |   3 +
 .../powerpc64/multiarch/strncmp-power10.S     |  25 ++
 sysdeps/powerpc/powerpc64/multiarch/strncmp.c |   4 +
 5 files changed, 310 insertions(+), 1 deletion(-)
 create mode 100644 sysdeps/powerpc/powerpc64/le/power10/strncmp.S
 create mode 100644 sysdeps/powerpc/powerpc64/multiarch/strncmp-power10.S

diff --git a/sysdeps/powerpc/powerpc64/le/power10/strncmp.S b/sysdeps/powerpc/powerpc64/le/power10/strncmp.S
new file mode 100644
index 0000000000..41541322df
--- /dev/null
+++ b/sysdeps/powerpc/powerpc64/le/power10/strncmp.S
@@ -0,0 +1,277 @@
+/* Optimized strncmp implementation for PowerPC64/POWER10.
+   Copyright (C) 2021-2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+
+/* Implements the function
+
+   int [r3] strncmp (const char *s1 [r3], const char *s2 [r4], size_t [r5] n)
+
+   The implementation uses unaligned vector access to avoid specialized
+   code paths depending on data alignment for the first 32 bytes and uses
+   vectorised loops after that.  */
+
+#ifndef STRNCMP
+# define STRNCMP strncmp
+#endif
+
+/* TODO: Change this to actual instructions when minimum binutils is upgraded
+   to 2.27.  Macros are defined below for these newer instructions in order
+   to maintain compatibility.  */
+
+#define LXVP(xtp,dq,ra) \
+	.long(((6)<<(32-6)) \
+	| ((((xtp)-32)>>1)<<(32-10)) \
+	| ((1)<<(32-11)) \
+	| ((ra)<<(32-16)) \
+	| dq)
+
+#define COMPARE_16(vreg1,vreg2,offset) \
+	lxv	vreg1+32,offset(r3); \
+	lxv	vreg2+32,offset(r4); \
+	vcmpnezb.	v7,vreg1,vreg2; \
+	bne	cr6,L(different); \
+	cmpldi	cr7,r5,16; \
+	ble	cr7,L(ret0); \
+	addi	r5,r5,-16;
+
+#define COMPARE_32(vreg1,vreg2,offset,label1,label2) \
+	LXVP(vreg1+32,offset,r3); \
+	LXVP(vreg2+32,offset,r4); \
+	vcmpnezb.	v7,vreg1+1,vreg2+1; \
+	bne	cr6,L(label1); \
+	vcmpnezb.	v7,vreg1,vreg2; \
+	bne	cr6,L(label2); \
+	cmpldi	cr7,r5,32; \
+	ble	cr7,L(ret0); \
+	addi	r5,r5,-32;
+
+#define TAIL_FIRST_16B(vreg1,vreg2) \
+	vctzlsbb	r6,v7; \
+	cmpld	cr7,r5,r6; \
+	ble	cr7,L(ret0); \
+	vextubrx	r5,r6,vreg1; \
+	vextubrx	r4,r6,vreg2; \
+	subf	r3,r4,r5; \
+	blr;
+
+#define TAIL_SECOND_16B(vreg1,vreg2) \
+	vctzlsbb	r6,v7; \
+	addi	r0,r6,16; \
+	cmplw	cr7,r5,r0; \
+	ble	cr7,L(ret0); \
+	vextubrx	r5,r6,vreg1; \
+	vextubrx	r4,r6,vreg2; \
+	subf	r3,r4,r5; \
+	blr;
+
+#define CHECK_N_BYTES(reg1,reg2,len_reg) \
+	sldi	r6,len_reg,56; \
+	lxvl	32+v4,reg1,r6; \
+	lxvl	32+v5,reg2,r6; \
+	add	reg1,reg1,len_reg; \
+	add	reg2,reg2,len_reg; \
+	vcmpnezb	v7,v4,v5; \
+	vctzlsbb	r6,v7; \
+	cmpld	cr7,r6,len_reg; \
+	blt	cr7,L(different); \
+	cmpld	cr7,r5,len_reg; \
+	ble	cr7,L(ret0); \
+	sub	r5,r5,len_reg;
+
+	/* TODO: change this to .machine power10 when the minimum required
+	   binutils allows it.  */
+	.machine power9
+ENTRY_TOCLESS (STRNCMP, 4)
+	/* Check if size is 0.  */
+	cmpdi	cr0,r5,0
+	beq	cr0,L(ret0)
+	andi.	r7,r3,4095
+	andi.	r8,r4,4095
+	cmpldi	cr0,r7,4096-16
+	cmpldi	cr1,r8,4096-16
+	bgt	cr0,L(crosses)
+	bgt	cr1,L(crosses)
+	COMPARE_16(v4,v5,0)
+	addi	r3,r3,16
+	addi	r4,r4,16
+
+L(crosses):
+	andi.	r7,r3,15
+	subfic	r7,r7,16	/* r7(nalign1) = 16 - (str1 & 15).  */
+	andi.	r9,r4,15
+	subfic	r8,r9,16	/* r8(nalign2) = 16 - (str2 & 15).  */
+	cmpld	cr7,r7,r8
+	beq	cr7,L(same_aligned)
+	blt	cr7,L(nalign1_min)
+
+	/* nalign2 is minimum and s2 pointer is aligned.  */
+	CHECK_N_BYTES(r3,r4,r8)
+	/* Are we on the 64B hunk which crosses a page?  */
+	andi.	r10,r3,63	/* Determine offset into 64B hunk.  */
+	andi.	r8,r3,15	/* The offset into the 16B hunk.  */
+	neg	r7,r3
+	andi.	r9,r7,15	/* Number of bytes after a 16B cross.  */
+	rlwinm.	r7,r7,26,0x3F	/* ((r3-4096))>>6&63.  */
+	beq	L(compare_64_pagecross)
+	mtctr	r7
+	b	L(compare_64B_unaligned)
+
+	/* nalign1 is minimum and s1 pointer is aligned.  */
+L(nalign1_min):
+	CHECK_N_BYTES(r3,r4,r7)
+	/* Are we on the 64B hunk which crosses a page?  */
+	andi.	r10,r4,63	/* Determine offset into 64B hunk.  */
+	andi.	r8,r4,15	/* The offset into the 16B hunk.  */
+	neg	r7,r4
+	andi.	r9,r7,15	/* Number of bytes after a 16B cross.  */
+	rlwinm.	r7,r7,26,0x3F	/* ((r4-4096))>>6&63.  */
+	beq	L(compare_64_pagecross)
+	mtctr	r7
+
+	.p2align 5
+L(compare_64B_unaligned):
+	COMPARE_16(v4,v5,0)
+	COMPARE_16(v4,v5,16)
+	COMPARE_16(v4,v5,32)
+	COMPARE_16(v4,v5,48)
+	addi	r3,r3,64
+	addi	r4,r4,64
+	bdnz	L(compare_64B_unaligned)
+
+	/* Cross the page boundary of s2, carefully.  Only for first
+	   iteration we have to get the count of 64B blocks to be checked.
+	   From second iteration and beyond, loop counter is always 63.  */
+L(compare_64_pagecross):
+	li	r11, 63
+	mtctr	r11
+	cmpldi	r10,16
+	ble	L(cross_4)
+	cmpldi	r10,32
+	ble	L(cross_3)
+	cmpldi	r10,48
+	ble	L(cross_2)
+L(cross_1):
+	CHECK_N_BYTES(r3,r4,r9)
+	CHECK_N_BYTES(r3,r4,r8)
+	COMPARE_16(v4,v5,0)
+	COMPARE_16(v4,v5,16)
+	COMPARE_16(v4,v5,32)
+	addi	r3,r3,48
+	addi	r4,r4,48
+	b	L(compare_64B_unaligned)
+L(cross_2):
+	COMPARE_16(v4,v5,0)
+	addi	r3,r3,16
+	addi	r4,r4,16
+	CHECK_N_BYTES(r3,r4,r9)
+	CHECK_N_BYTES(r3,r4,r8)
+	COMPARE_16(v4,v5,0)
+	COMPARE_16(v4,v5,16)
+	addi	r3,r3,32
+	addi	r4,r4,32
+	b	L(compare_64B_unaligned)
+L(cross_3):
+	COMPARE_16(v4,v5,0)
+	COMPARE_16(v4,v5,16)
+	addi	r3,r3,32
+	addi	r4,r4,32
+	CHECK_N_BYTES(r3,r4,r9)
+	CHECK_N_BYTES(r3,r4,r8)
+	COMPARE_16(v4,v5,0)
+	addi	r3,r3,16
+	addi	r4,r4,16
+	b	L(compare_64B_unaligned)
+L(cross_4):
+	COMPARE_16(v4,v5,0)
+	COMPARE_16(v4,v5,16)
+	COMPARE_16(v4,v5,32)
+	addi	r3,r3,48
+	addi	r4,r4,48
+	CHECK_N_BYTES(r3,r4,r9)
+	CHECK_N_BYTES(r3,r4,r8)
+	b	L(compare_64B_unaligned)
+
+L(same_aligned):
+	CHECK_N_BYTES(r3,r4,r7)
+	/* Align s1 to 32B and adjust s2 address.
+	   Use lxvp only if both s1 and s2 are 32B aligned.  */
+	COMPARE_16(v4,v5,0)
+	COMPARE_16(v4,v5,16)
+	COMPARE_16(v4,v5,32)
+	COMPARE_16(v4,v5,48)
+	addi	r3,r3,64
+	addi	r4,r4,64
+	COMPARE_16(v4,v5,0)
+	COMPARE_16(v4,v5,16)
+	addi	r5,r5,32
+
+	clrldi	r6,r3,59
+	subfic	r7,r6,32
+	add	r3,r3,r7
+	add	r4,r4,r7
+	subf	r5,r7,r5
+	andi.	r7,r4,0x1F
+	beq	cr0,L(32B_aligned_loop)
+
+	.p2align 5
+L(16B_aligned_loop):
+	COMPARE_16(v4,v5,0)
+	COMPARE_16(v4,v5,16)
+	COMPARE_16(v4,v5,32)
+	COMPARE_16(v4,v5,48)
+	addi	r3,r3,64
+	addi	r4,r4,64
+	b	L(16B_aligned_loop)
+	/* Calculate and return the difference.  */
+L(different):
+	TAIL_FIRST_16B(v4,v5)
+	/*vctzlsbb r6,v7
+	cmpld cr7,r5,r6
+	ble cr7,L(ret0)
+	vextubrx r5,r6,v4
+	vextubrx r4,r6,v5
+	subf r3,r4,r5
+	blr*/
+
+	.p2align 5
+L(32B_aligned_loop):
+	COMPARE_32(v14,v16,0,tail1,tail2)
+	COMPARE_32(v18,v20,32,tail3,tail4)
+	COMPARE_32(v22,v24,64,tail5,tail6)
+	COMPARE_32(v26,v28,96,tail7,tail8)
+	addi	r3,r3,128
+	addi	r4,r4,128
+	b	L(32B_aligned_loop)
+
+L(tail1): TAIL_FIRST_16B(v15,v17)
+L(tail2): TAIL_SECOND_16B(v14,v16)
+L(tail3): TAIL_FIRST_16B(v19,v21)
+L(tail4): TAIL_SECOND_16B(v18,v20)
+L(tail5): TAIL_FIRST_16B(v23,v25)
+L(tail6): TAIL_SECOND_16B(v22,v24)
+L(tail7): TAIL_FIRST_16B(v27,v29)
+L(tail8): TAIL_SECOND_16B(v26,v28)
+
+	.p2align 5
+L(ret0):
+	li	r3,0
+	blr
+
+END(STRNCMP)
+libc_hidden_builtin_def(strncmp)
diff --git a/sysdeps/powerpc/powerpc64/multiarch/Makefile b/sysdeps/powerpc/powerpc64/multiarch/Makefile
index d7824a922b..e557ce1884 100644
--- a/sysdeps/powerpc/powerpc64/multiarch/Makefile
+++ b/sysdeps/powerpc/powerpc64/multiarch/Makefile
@@ -33,7 +33,7 @@ sysdep_routines += memcpy-power8-cached memcpy-power7 memcpy-a2 memcpy-power6 \
 ifneq (,$(filter %le,$(config-machine)))
 sysdep_routines += memcmp-power10 memcpy-power10 memmove-power10 memset-power10 \
 		   rawmemchr-power9 rawmemchr-power10 \
-		   strcmp-power9 strcmp-power10 strncmp-power9 \
+		   strcmp-power9 strcmp-power10 strncmp-power9 strncmp-power10 \
 		   strcpy-power9 stpcpy-power9 \
 		   strlen-power9 strncpy-power9 stpncpy-power9 strlen-power10
 endif
diff --git a/sysdeps/powerpc/powerpc64/multiarch/ifunc-impl-list.c b/sysdeps/powerpc/powerpc64/multiarch/ifunc-impl-list.c
index 965dd17786..b2e930abdb 100644
--- a/sysdeps/powerpc/powerpc64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/powerpc/powerpc64/multiarch/ifunc-impl-list.c
@@ -164,6 +164,9 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/powerpc/powerpc64/multiarch/strncmp.c.  */
   IFUNC_IMPL (i, name, strncmp,
 #ifdef __LITTLE_ENDIAN__
+	      IFUNC_IMPL_ADD (array, i, strncmp, hwcap2 & PPC_FEATURE2_ARCH_3_1
+			      && hwcap & PPC_FEATURE_HAS_VSX,
+			      __strncmp_power10)
 	      IFUNC_IMPL_ADD (array, i, strncmp, hwcap2 & PPC_FEATURE2_ARCH_3_00
 			      && hwcap & PPC_FEATURE_HAS_ALTIVEC,
 			      __strncmp_power9)
diff --git a/sysdeps/powerpc/powerpc64/multiarch/strncmp-power10.S b/sysdeps/powerpc/powerpc64/multiarch/strncmp-power10.S
new file mode 100644
index 0000000000..c309d3caf9
--- /dev/null
+++ b/sysdeps/powerpc/powerpc64/multiarch/strncmp-power10.S
@@ -0,0 +1,25 @@
+/* Copyright (C) 2016-2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#if defined __LITTLE_ENDIAN__ && IS_IN (libc)
+#define STRNCMP __strncmp_power10
+
+#undef libc_hidden_builtin_def
+#define libc_hidden_builtin_def(name)
+
+#include <sysdeps/powerpc/powerpc64/le/power10/strncmp.S>
+#endif
diff --git a/sysdeps/powerpc/powerpc64/multiarch/strncmp.c b/sysdeps/powerpc/powerpc64/multiarch/strncmp.c
index e8bab8e23d..6f430d710d 100644
--- a/sysdeps/powerpc/powerpc64/multiarch/strncmp.c
+++ b/sysdeps/powerpc/powerpc64/multiarch/strncmp.c
@@ -29,6 +29,7 @@ extern __typeof (strncmp) __strncmp_ppc attribute_hidden;
 extern __typeof (strncmp) __strncmp_power8 attribute_hidden;
 # ifdef __LITTLE_ENDIAN__
 extern __typeof (strncmp) __strncmp_power9 attribute_hidden;
+extern __typeof (strncmp) __strncmp_power10 attribute_hidden;
 # endif

 # undef strncmp
@@ -36,6 +37,9 @@ extern __typeof (strncmp) __strncmp_power9 attribute_hidden;
    ifunc symbol properly.  */
 libc_ifunc_redirected (__redirect_strncmp, strncmp,
 # ifdef __LITTLE_ENDIAN__
+			(hwcap2 & PPC_FEATURE2_ARCH_3_1
+			 && hwcap & PPC_FEATURE_HAS_VSX)
+			? __strncmp_power10 :
 			  (hwcap2 & PPC_FEATURE2_ARCH_3_00
 			 && hwcap & PPC_FEATURE_HAS_ALTIVEC)
 			? __strncmp_power9 :
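
For reference, the control flow of the new implementation can be summarized in plain C. The sketch below is only an illustration under stated assumptions: the function name strncmp_sketch and the helper chunk_diff_pos are invented for this example, and the real code uses lxv/lxvl/lxvp together with vcmpnezb. and vctzlsbb rather than a byte loop. It models comparing in chunks of at most 16 bytes, clamping the chunk near a 4 KiB page boundary (the situation the L(crosses)/CHECK_N_BYTES paths handle) and stopping at the first mismatch or NUL within n.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for lxvl + vcmpnezb. + vctzlsbb: return the first
   index (< len) where the bytes differ or where s1 holds a NUL, or -1 if
   neither happens inside the chunk.  */
static int
chunk_diff_pos (const unsigned char *a, const unsigned char *b, size_t len)
{
  for (size_t i = 0; i < len; i++)
    if (a[i] != b[i] || a[i] == '\0')
      return (int) i;
  return -1;
}

/* Rough C model of the POWER10 strncmp flow.  */
int
strncmp_sketch (const char *s1, const char *s2, size_t n)
{
  const unsigned char *a = (const unsigned char *) s1;
  const unsigned char *b = (const unsigned char *) s2;

  while (n > 0)
    {
      size_t chunk = n < 16 ? n : 16;
      /* Shrink the chunk so no access would cross a 4 KiB page boundary;
         in the byte-wise model this is not needed for correctness, it only
         mirrors the concern the assembly has with wide loads.  */
      size_t left_a = 4096 - ((uintptr_t) a & 4095);
      size_t left_b = 4096 - ((uintptr_t) b & 4095);
      if (chunk > left_a)
        chunk = left_a;
      if (chunk > left_b)
        chunk = left_b;

      int pos = chunk_diff_pos (a, b, chunk);
      if (pos >= 0)
        return a[pos] - b[pos];   /* Mismatch, or both strings ended.  */

      a += chunk;
      b += chunk;
      n -= chunk;
    }
  return 0;
}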
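
Because the main risk with 16- and 32-byte loads is touching an unmapped page past the end of a short string, one simple way to exercise the page-crossing paths is to place a string immediately before a PROT_NONE page and compare across it. The harness below is only a sketch and is not part of the patch or of the glibc test suite; it uses ordinary POSIX calls and the public strncmp interface.

#include <assert.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int
main (void)
{
  long page = sysconf (_SC_PAGESIZE);

  /* Map two pages and make the second one inaccessible.  */
  char *buf = mmap (NULL, 2 * page, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  assert (buf != MAP_FAILED);
  assert (mprotect (buf + page, page, PROT_NONE) == 0);

  /* Place a short string so its NUL byte is the last readable byte.  */
  char *s1 = buf + page - 8;
  memcpy (s1, "abcdefg", 8);

  /* A correct strncmp must not fault and must honor the terminator.  */
  assert (strncmp (s1, "abcdefgh", 16) < 0);   /* s1 is a proper prefix.  */
  assert (strncmp (s1, "abcdefg", 16) == 0);

  munmap (buf, 2 * page);
  return 0;
}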
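
Finally, the ifunc resolver in strncmp.c only selects __strncmp_power10 when the ISA 3.1 bit is present in AT_HWCAP2 and the VSX bit in AT_HWCAP. A standalone program can check the same condition with getauxval; note this is only an illustration, the feature-bit values below are assumed to match the Linux uapi definitions of PPC_FEATURE2_ARCH_3_1 and PPC_FEATURE_HAS_VSX, and the resolver itself uses glibc's internal hwcap/hwcap2 variables rather than getauxval.

#include <stdio.h>
#include <sys/auxv.h>

/* Assumed to match the Linux uapi <asm/cputable.h> values.  */
#define PPC_FEATURE_HAS_VSX    0x00000080u   /* AT_HWCAP   */
#define PPC_FEATURE2_ARCH_3_1  0x00040000u   /* AT_HWCAP2  */

int
main (void)
{
  unsigned long hwcap  = getauxval (AT_HWCAP);
  unsigned long hwcap2 = getauxval (AT_HWCAP2);

  if ((hwcap2 & PPC_FEATURE2_ARCH_3_1) && (hwcap & PPC_FEATURE_HAS_VSX))
    puts ("__strncmp_power10 would be eligible on this machine");
  else
    puts ("an older strncmp variant would be selected");
  return 0;
}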