[llvm][test] Fix filecheck annotation typos [1/n] #93673

Open

klensy wants to merge 3 commits into main
Conversation

klensy (Contributor) commented May 29, 2024

Moved fixes for llvm from #91854, plus a few additional ones.

Not all tests pass CI, so some help is required to fix them.

See also #93674 for a list of some unfixed tests.
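
For context, a minimal hypothetical test (not taken from this patch) showing why these typos matter: FileCheck only honors an annotation when its prefix and suffix are spelled exactly (CHECK, CHECK-NEXT, CHECK-LABEL, CHECK-DAG, ...), so a misspelled line such as CHECK-NETX: or CHECK-LABAL: is treated as an ordinary comment and the check it was meant to express silently never runs.

; Hypothetical test, for illustration only -- not part of this patch.
; RUN: llc -mtriple=aarch64 < %s | FileCheck %s

define i32 @add(i32 %a, i32 %b) {
; CHECK-LABEL: add:
; The next line is misspelled, so FileCheck silently ignores it even though its pattern is bogus:
; CHECK-NETX: add w5, w5, w5
; The correctly spelled directive below is actually enforced against the llc output:
; CHECK: add w0, w0, w1
  %sum = add i32 %a, %b
  ret i32 %sum
}

Correcting the spelling re-enables the affected checks, which is why some of the updated tests now fail in CI and need further fixes.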

@llvmbot (Member) commented May 29, 2024

@llvm/pr-subscribers-backend-hexagon
@llvm/pr-subscribers-backend-m68k
@llvm/pr-subscribers-backend-nvptx
@llvm/pr-subscribers-llvm-analysis
@llvm/pr-subscribers-backend-powerpc
@llvm/pr-subscribers-backend-amdgpu
@llvm/pr-subscribers-backend-x86
@llvm/pr-subscribers-backend-loongarch

@llvm/pr-subscribers-backend-spir-v

Author: klensy (klensy)

Changes

Moved fixes for llvm from #91854, plus a few additional ones.


Patch is 153.63 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/93673.diff

99 Files Affected:

  • (modified) llvm/test/Analysis/CostModel/AArch64/sve-shuffle-broadcast.ll (+1-1)
  • (modified) llvm/test/Analysis/UniformityAnalysis/AMDGPU/irreducible/diverged-entry-headers.ll (+1-1)
  • (modified) llvm/test/Assembler/bfloat.ll (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/arm64_32-atomics.ll (+10-10)
  • (modified) llvm/test/CodeGen/AArch64/arm64ec-entry-thunks.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/fpimm.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/machine-outliner-retaddr-sign-diff-scope-same-key.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/speculation-hardening-sls.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/stp-opt-with-renaming-undef-assert.mir (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/addrspacecast.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/dpp_combine_gfx11.mir (+5-5)
  • (modified) llvm/test/CodeGen/ARM/dsp-loop-indexing.ll (+1-1)
  • (modified) llvm/test/CodeGen/ARM/shifter_operand.ll (-1)
  • (modified) llvm/test/CodeGen/ARM/speculation-hardening-sls.ll (+1-1)
  • (modified) llvm/test/CodeGen/ARM/sxt_rot.ll (-1)
  • (modified) llvm/test/CodeGen/Mips/optimizeAndPlusShift.ll (+9-9)
  • (modified) llvm/test/CodeGen/NVPTX/idioms.ll (+5-5)
  • (modified) llvm/test/CodeGen/SPARC/inlineasm.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_usm_storage_classes/intel-usm-addrspaces.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/prefetch-04.ll (+1-1)
  • (modified) llvm/test/CodeGen/Thumb2/LowOverheadLoops/branch-targets.ll (+3-3)
  • (modified) llvm/test/CodeGen/X86/global-sections.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/tailregccpic.ll (+2-2)
  • (modified) llvm/test/DebugInfo/MIR/InstrRef/livedebugvalues_illegal_locs.mir (+3-3)
  • (modified) llvm/test/DebugInfo/MIR/InstrRef/single-assign-propagation.mir (+3-3)
  • (modified) llvm/test/MC/AArch64/SME/feature.s (+1-1)
  • (modified) llvm/test/MC/AArch64/armv8.7a-xs.s (+79-79)
  • (modified) llvm/test/MC/AArch64/basic-a64-diagnostics.s (+57-57)
  • (modified) llvm/test/MC/AMDGPU/hsa-diag-v4.s (+1-1)
  • (modified) llvm/test/MC/ARM/coff-relocations.s (+1-1)
  • (modified) llvm/test/MC/ARM/neon-complex.s (+20-20)
  • (modified) llvm/test/MC/AsmParser/labels.s (+2-2)
  • (modified) llvm/test/MC/COFF/cv-inline-linetable.s (+3-3)
  • (modified) llvm/test/MC/Disassembler/AArch64/armv8.6a-bf16.txt (+8-8)
  • (modified) llvm/test/MC/Disassembler/AArch64/armv8.7a-xs.txt (+80-80)
  • (modified) llvm/test/MC/Disassembler/AArch64/tme.txt (+3-3)
  • (modified) llvm/test/MC/Disassembler/AMDGPU/gfx10-wave32.txt (+6-6)
  • (modified) llvm/test/MC/Disassembler/AMDGPU/gfx12_dasm_ds.txt (+1-1)
  • (modified) llvm/test/MC/Disassembler/ARM/arm-tests.txt (+1-1)
  • (modified) llvm/test/MC/Disassembler/Mips/mips32r6/valid-mips32r6.txt (+3-3)
  • (modified) llvm/test/MC/Disassembler/Mips/mips64r6/valid-mips64r6.txt (+3-3)
  • (modified) llvm/test/MC/Disassembler/PowerPC/ppc64-encoding-dfp.txt (+1-1)
  • (modified) llvm/test/MC/Disassembler/PowerPC/ppc64-encoding.txt (+1-1)
  • (modified) llvm/test/MC/Disassembler/PowerPC/ppc64le-encoding.txt (+1-1)
  • (modified) llvm/test/MC/Disassembler/X86/x86-16.txt (+16-16)
  • (modified) llvm/test/MC/LoongArch/Relocations/relax-align.s (+1-1)
  • (modified) llvm/test/MC/MachO/lto-set-conditional.s (+2-2)
  • (modified) llvm/test/MC/Mips/expansion-jal-sym-pic.s (+8-8)
  • (modified) llvm/test/MC/Mips/macro-rem.s (+1-1)
  • (modified) llvm/test/MC/Mips/micromips-dsp/invalid.s (+4-4)
  • (modified) llvm/test/MC/Mips/micromips/valid.s (+2-2)
  • (modified) llvm/test/MC/Mips/mips-pdr-bad.s (+2-2)
  • (modified) llvm/test/MC/Mips/mips32r6/invalid.s (+8-8)
  • (modified) llvm/test/MC/Mips/mips64r6/invalid.s (+8-8)
  • (modified) llvm/test/MC/PowerPC/ppc64-encoding-ISA31.s (+13-13)
  • (modified) llvm/test/MC/PowerPC/ppc64-encoding-vmx.s (+2-2)
  • (modified) llvm/test/MC/RISCV/compress-rv64i.s (+2-2)
  • (modified) llvm/test/MC/RISCV/csr-aliases.s (+10-10)
  • (modified) llvm/test/MC/RISCV/relocations.s (+4-4)
  • (modified) llvm/test/MC/RISCV/zicfiss-valid.s (+6-6)
  • (modified) llvm/test/MC/WebAssembly/globals.s (+1-1)
  • (modified) llvm/test/MC/X86/apx/evex-format-intel.s (+2-2)
  • (modified) llvm/test/MC/Xtensa/Relocations/relocations.s (+31-31)
  • (modified) llvm/test/Other/constant-fold-gep-address-spaces.ll (+47-47)
  • (modified) llvm/test/Other/new-pm-thinlto-postlink-defaults.ll (+1-1)
  • (modified) llvm/test/Transforms/Attributor/cb_liveness_disabled.ll (+2-2)
  • (modified) llvm/test/Transforms/Attributor/cb_liveness_enabled.ll (+2-2)
  • (modified) llvm/test/Transforms/Attributor/returned.ll (+1-1)
  • (modified) llvm/test/Transforms/CallSiteSplitting/callsite-split.ll (+1-1)
  • (modified) llvm/test/Transforms/Coroutines/coro-debug-coro-frame.ll (+1-1)
  • (modified) llvm/test/Transforms/FunctionAttrs/nonnull.ll (+1-1)
  • (modified) llvm/test/Transforms/GVNSink/sink-common-code.ll (+6-6)
  • (modified) llvm/test/Transforms/Inline/update_invoke_prof.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/lifetime-sanitizer.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopUnroll/peel-loop2.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/nontemporal-load-store.ll (+11-11)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/strict-fadd.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/branch-weights.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/memdep.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVersioning/wrapping-pointer-non-integral-addrspace.ll (+1-1)
  • (modified) llvm/test/Transforms/ObjCARC/rv.ll (+3-3)
  • (modified) llvm/test/Transforms/PGOProfile/counter_promo_exit_catchswitch.ll (+1-1)
  • (modified) llvm/test/Transforms/PGOProfile/icp_covariant_invoke_return.ll (+4-4)
  • (modified) llvm/test/Transforms/PartiallyInlineLibCalls/X86/good-prototype.ll (+1-1)
  • (modified) llvm/test/Transforms/SampleProfile/pseudo-probe-selectionDAG.ll (+2-2)
  • (modified) llvm/test/Transforms/SimplifyCFG/branch-fold-threshold.ll (+2-2)
  • (modified) llvm/test/Verifier/AMDGPU/intrinsic-immarg.ll (+1-1)
  • (modified) llvm/test/tools/gold/X86/global_with_section.ll (+1-1)
  • (modified) llvm/test/tools/llvm-ar/replace-update.test (+1-1)
  • (modified) llvm/test/tools/llvm-cov/Inputs/binary-formats.canonical.json (+1-1)
  • (modified) llvm/test/tools/llvm-cov/coverage_watermark.test (+5-5)
  • (modified) llvm/test/tools/llvm-cov/zeroFunctionFile.c (+1-1)
  • (modified) llvm/test/tools/llvm-objcopy/ELF/update-section.test (+1-1)
  • (modified) llvm/test/tools/llvm-objdump/ELF/ARM/v5te-subarch.s (+1-1)
  • (modified) llvm/test/tools/llvm-objdump/X86/start-stop-address.test (+1-1)
  • (modified) llvm/test/tools/llvm-profgen/filter-ambiguous-profile.test (+5-5)
  • (modified) llvm/test/tools/llvm-reduce/mir/reduce-register-defs.mir (+1-1)
  • (modified) llvm/test/tools/llvm-reduce/skip-delta-passes.ll (+1-1)
  • (modified) llvm/test/tools/lto/discard-value-names.ll (+1-1)
diff --git a/llvm/test/Analysis/CostModel/AArch64/sve-shuffle-broadcast.ll b/llvm/test/Analysis/CostModel/AArch64/sve-shuffle-broadcast.ll
index a2526d9f5591a..c2aab35194831 100644
--- a/llvm/test/Analysis/CostModel/AArch64/sve-shuffle-broadcast.ll
+++ b/llvm/test/Analysis/CostModel/AArch64/sve-shuffle-broadcast.ll
@@ -31,7 +31,7 @@ define void  @broadcast() #0{
 ; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction:   %22 = shufflevector <vscale x 8 x i1> undef, <vscale x 8 x i1> undef, <vscale x 8 x i32> zeroinitializer
 ; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction:   %23 = shufflevector <vscale x 4 x i1> undef, <vscale x 4 x i1> undef, <vscale x 4 x i32> zeroinitializer
 ; CHECK-NEXT: Cost Model: Found an estimated cost of 1 for instruction:   %24 = shufflevector <vscale x 2 x i1> undef, <vscale x 2 x i1> undef, <vscale x 2 x i32> zeroinitializer
-; CHECK-NETX: Cost Model: Found an estimated cost of 0 for instruction:   ret void
+; CHECK-NEXT: Cost Model: Found an estimated cost of 0 for instruction:   ret void
 
   %zero = shufflevector <vscale x 16 x i8> undef, <vscale x 16 x i8> undef, <vscale x 16 x i32> zeroinitializer
   %1 = shufflevector <vscale x 32 x i8> undef, <vscale x 32 x i8> undef, <vscale x 32 x i32> zeroinitializer
diff --git a/llvm/test/Analysis/UniformityAnalysis/AMDGPU/irreducible/diverged-entry-headers.ll b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/irreducible/diverged-entry-headers.ll
index 335026dc9b62b..efad77b684a75 100644
--- a/llvm/test/Analysis/UniformityAnalysis/AMDGPU/irreducible/diverged-entry-headers.ll
+++ b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/irreducible/diverged-entry-headers.ll
@@ -90,7 +90,7 @@ S:
   br i1 %cond.uni, label %exit, label %T
 
 T:
-; CHECK-NIT:   DIVERGENT:   %tt.phi = phi i32
+; CHECK-NOT:   DIVERGENT:   %tt.phi = phi i32
   %tt.phi = phi i32 [ %ss, %S ], [ %a, %entry ]
   %tt = add i32 %b, 1
   br label %P
diff --git a/llvm/test/Assembler/bfloat.ll b/llvm/test/Assembler/bfloat.ll
index 3a3b4c2b277db..6f935c5dac154 100644
--- a/llvm/test/Assembler/bfloat.ll
+++ b/llvm/test/Assembler/bfloat.ll
@@ -37,25 +37,25 @@ define float @check_bfloat_convert() {
   ret float %tmp
 }
 
-; ASSEM-DISASS-LABEL @snan_bfloat
+; ASSEM-DISASS-LABEL: @snan_bfloat
 define bfloat @snan_bfloat() {
 ; ASSEM-DISASS: ret bfloat 0xR7F81
     ret bfloat 0xR7F81
 }
 
-; ASSEM-DISASS-LABEL @qnan_bfloat
+; ASSEM-DISASS-LABEL: @qnan_bfloat
 define bfloat @qnan_bfloat() {
 ; ASSEM-DISASS: ret bfloat 0xR7FC0
     ret bfloat 0xR7FC0
 }
 
-; ASSEM-DISASS-LABEL @pos_inf_bfloat
+; ASSEM-DISASS-LABEL: @pos_inf_bfloat
 define bfloat @pos_inf_bfloat() {
 ; ASSEM-DISASS: ret bfloat 0xR7F80
     ret bfloat 0xR7F80
 }
 
-; ASSEM-DISASS-LABEL @neg_inf_bfloat
+; ASSEM-DISASS-LABEL: @neg_inf_bfloat
 define bfloat @neg_inf_bfloat() {
 ; ASSEM-DISASS: ret bfloat 0xRFF80
     ret bfloat 0xRFF80
diff --git a/llvm/test/CodeGen/AArch64/arm64_32-atomics.ll b/llvm/test/CodeGen/AArch64/arm64_32-atomics.ll
index 0000262e833da..19b9205dc1786 100644
--- a/llvm/test/CodeGen/AArch64/arm64_32-atomics.ll
+++ b/llvm/test/CodeGen/AArch64/arm64_32-atomics.ll
@@ -2,70 +2,70 @@
 ; RUN: llc -mtriple=arm64_32-apple-ios7.0 -mattr=+outline-atomics -o - %s | FileCheck %s -check-prefix=OUTLINE-ATOMICS
 
 define i8 @test_load_8(ptr %addr) {
-; CHECK-LABAL: test_load_8:
+; CHECK-LABEL: test_load_8:
 ; CHECK: ldarb w0, [x0]
   %val = load atomic i8, ptr %addr seq_cst, align 1
   ret i8 %val
 }
 
 define i16 @test_load_16(ptr %addr) {
-; CHECK-LABAL: test_load_16:
+; CHECK-LABEL: test_load_16:
 ; CHECK: ldarh w0, [x0]
   %val = load atomic i16, ptr %addr acquire, align 2
   ret i16 %val
 }
 
 define i32 @test_load_32(ptr %addr) {
-; CHECK-LABAL: test_load_32:
+; CHECK-LABEL: test_load_32:
 ; CHECK: ldar w0, [x0]
   %val = load atomic i32, ptr %addr seq_cst, align 4
   ret i32 %val
 }
 
 define i64 @test_load_64(ptr %addr) {
-; CHECK-LABAL: test_load_64:
+; CHECK-LABEL: test_load_64:
 ; CHECK: ldar x0, [x0]
   %val = load atomic i64, ptr %addr seq_cst, align 8
   ret i64 %val
 }
 
 define ptr @test_load_ptr(ptr %addr) {
-; CHECK-LABAL: test_load_ptr:
+; CHECK-LABEL: test_load_ptr:
 ; CHECK: ldar w0, [x0]
   %val = load atomic ptr, ptr %addr seq_cst, align 8
   ret ptr %val
 }
 
 define void @test_store_8(ptr %addr) {
-; CHECK-LABAL: test_store_8:
+; CHECK-LABEL: test_store_8:
 ; CHECK: stlrb wzr, [x0]
   store atomic i8 0, ptr %addr seq_cst, align 1
   ret void
 }
 
 define void @test_store_16(ptr %addr) {
-; CHECK-LABAL: test_store_16:
+; CHECK-LABEL: test_store_16:
 ; CHECK: stlrh wzr, [x0]
   store atomic i16 0, ptr %addr seq_cst, align 2
   ret void
 }
 
 define void @test_store_32(ptr %addr) {
-; CHECK-LABAL: test_store_32:
+; CHECK-LABEL: test_store_32:
 ; CHECK: stlr wzr, [x0]
   store atomic i32 0, ptr %addr seq_cst, align 4
   ret void
 }
 
 define void @test_store_64(ptr %addr) {
-; CHECK-LABAL: test_store_64:
+; CHECK-LABEL: test_store_64:
 ; CHECK: stlr xzr, [x0]
   store atomic i64 0, ptr %addr seq_cst, align 8
   ret void
 }
 
 define void @test_store_ptr(ptr %addr) {
-; CHECK-LABAL: test_store_ptr:
+; CHECK-LABEL: test_store_ptr:
 ; CHECK: stlr wzr, [x0]
   store atomic ptr null, ptr %addr seq_cst, align 8
   ret void
diff --git a/llvm/test/CodeGen/AArch64/arm64ec-entry-thunks.ll b/llvm/test/CodeGen/AArch64/arm64ec-entry-thunks.ll
index e9556b9d5cbee..e93fcec822846 100644
--- a/llvm/test/CodeGen/AArch64/arm64ec-entry-thunks.ll
+++ b/llvm/test/CodeGen/AArch64/arm64ec-entry-thunks.ll
@@ -1,7 +1,7 @@
 ; RUN: llc -mtriple=arm64ec-pc-windows-msvc < %s | FileCheck %s
 
 define void @no_op() nounwind {
-; CHECK-LABEL     .def    $ientry_thunk$cdecl$v$v;
+; CHECK-LABEL:     .def    $ientry_thunk$cdecl$v$v;
 ; CHECK:          .section        .wowthk$aa,"xr",discard,$ientry_thunk$cdecl$v$v
 ; CHECK:          // %bb.0:
 ; CHECK-NEXT:     stp     q6, q7, [sp, #-176]!            // 32-byte Folded Spill
diff --git a/llvm/test/CodeGen/AArch64/fpimm.ll b/llvm/test/CodeGen/AArch64/fpimm.ll
index b92bb4245c7f3..e2944243338f5 100644
--- a/llvm/test/CodeGen/AArch64/fpimm.ll
+++ b/llvm/test/CodeGen/AArch64/fpimm.ll
@@ -38,7 +38,7 @@ define void @check_double() {
 ; 64-bit ORR followed by MOVK.
 ; CHECK-DAG: mov  [[XFP0:x[0-9]+]], #1082331758844
 ; CHECK-DAG: movk [[XFP0]], #64764, lsl #16
-; CHECk-DAG: fmov {{d[0-9]+}}, [[XFP0]]
+; CHECK-DAG: fmov {{d[0-9]+}}, [[XFP0]]
   %newval3 = fadd double %val, 0xFCFCFC00FC
   store volatile double %newval3, ptr @varf64
 
diff --git a/llvm/test/CodeGen/AArch64/machine-outliner-retaddr-sign-diff-scope-same-key.ll b/llvm/test/CodeGen/AArch64/machine-outliner-retaddr-sign-diff-scope-same-key.ll
index a5757a70843a9..fa63df35ac857 100644
--- a/llvm/test/CodeGen/AArch64/machine-outliner-retaddr-sign-diff-scope-same-key.ll
+++ b/llvm/test/CodeGen/AArch64/machine-outliner-retaddr-sign-diff-scope-same-key.ll
@@ -28,7 +28,7 @@ define void @a() "sign-return-address"="all" {
 }
 
 define void @b() "sign-return-address"="non-leaf" {
-; CHECK-LABE:      b:                                     // @b
+; CHECK-LABEL:     b:                                     // @b
 ; V8A-NOT:         hint #25
 ; V83A-NOT:        paciasp
 ; CHECK-NOT:       .cfi_negate_ra_state
diff --git a/llvm/test/CodeGen/AArch64/speculation-hardening-sls.ll b/llvm/test/CodeGen/AArch64/speculation-hardening-sls.ll
index f380b2d05d863..fe08fa5642574 100644
--- a/llvm/test/CodeGen/AArch64/speculation-hardening-sls.ll
+++ b/llvm/test/CodeGen/AArch64/speculation-hardening-sls.ll
@@ -192,7 +192,7 @@ entry:
 ; CHECK: .Lfunc_end
 }
 
-; HARDEN-label: __llvm_slsblr_thunk_x0:
+; HARDEN-LABEL: __llvm_slsblr_thunk_x0:
 ; HARDEN:    mov x16, x0
 ; HARDEN:    br x16
 ; ISBDSB-NEXT: dsb sy
@@ -208,7 +208,7 @@ entry:
 ; HARDEN-COMDAT-OFF-NOT:  .hidden __llvm_slsblr_thunk_x19
 ; HARDEN-COMDAT-OFF-NOT:  .weak __llvm_slsblr_thunk_x19
 ; HARDEN-COMDAT-OFF:      .type __llvm_slsblr_thunk_x19,@function
-; HARDEN-label: __llvm_slsblr_thunk_x19:
+; HARDEN-LABEL: __llvm_slsblr_thunk_x19:
 ; HARDEN:    mov x16, x19
 ; HARDEN:    br x16
 ; ISBDSB-NEXT: dsb sy
diff --git a/llvm/test/CodeGen/AArch64/stp-opt-with-renaming-undef-assert.mir b/llvm/test/CodeGen/AArch64/stp-opt-with-renaming-undef-assert.mir
index 66d2067b531a3..bfdb1763776b4 100644
--- a/llvm/test/CodeGen/AArch64/stp-opt-with-renaming-undef-assert.mir
+++ b/llvm/test/CodeGen/AArch64/stp-opt-with-renaming-undef-assert.mir
@@ -12,7 +12,7 @@
 
 # This test also checks that pairwise store STP is generated.
 
-# CHECK-LABLE: test
+# CHECK-LABEL: test
 # CHECK: bb.0:
 # CHECK-NEXT: liveins: $x0, $x17, $x18
 # CHECK: renamable $q13_q14_q15 = LD3Threev16b undef renamable $x17 :: (load (s384) from `ptr undef`, align 64)
diff --git a/llvm/test/CodeGen/AMDGPU/addrspacecast.ll b/llvm/test/CodeGen/AMDGPU/addrspacecast.ll
index 50423c59eabe9..526d5c946ec7f 100644
--- a/llvm/test/CodeGen/AMDGPU/addrspacecast.ll
+++ b/llvm/test/CodeGen/AMDGPU/addrspacecast.ll
@@ -108,7 +108,7 @@ define amdgpu_kernel void @use_global_to_flat_addrspacecast(ptr addrspace(1) %pt
 }
 
 ; no-op
-; HSA-LABEl: {{^}}use_constant_to_flat_addrspacecast:
+; HSA-LABEL: {{^}}use_constant_to_flat_addrspacecast:
 ; HSA: s_load_dwordx2 s[[[PTRLO:[0-9]+]]:[[PTRHI:[0-9]+]]]
 ; HSA-DAG: v_mov_b32_e32 v[[VPTRLO:[0-9]+]], s[[PTRLO]]
 ; HSA-DAG: v_mov_b32_e32 v[[VPTRHI:[0-9]+]], s[[PTRHI]]
@@ -119,7 +119,7 @@ define amdgpu_kernel void @use_constant_to_flat_addrspacecast(ptr addrspace(4) %
   ret void
 }
 
-; HSA-LABEl: {{^}}use_constant_to_global_addrspacecast:
+; HSA-LABEL: {{^}}use_constant_to_global_addrspacecast:
 ; HSA: s_load_dwordx2 s[[[PTRLO:[0-9]+]]:[[PTRHI:[0-9]+]]]
 ; CI-DAG: v_mov_b32_e32 v[[VPTRLO:[0-9]+]], s[[PTRLO]]
 ; CI-DAG: v_mov_b32_e32 v[[VPTRHI:[0-9]+]], s[[PTRHI]]
diff --git a/llvm/test/CodeGen/AMDGPU/dpp_combine_gfx11.mir b/llvm/test/CodeGen/AMDGPU/dpp_combine_gfx11.mir
index 29621a0477418..1151bde02ef62 100644
--- a/llvm/test/CodeGen/AMDGPU/dpp_combine_gfx11.mir
+++ b/llvm/test/CodeGen/AMDGPU/dpp_combine_gfx11.mir
@@ -4,7 +4,7 @@
 
 ---
 
-# GCN-label: name: vop3
+# GCN-LABEL: name: vop3
 # GCN: %6:vgpr_32, %7:sreg_32_xm0_xexec = V_SUBBREV_U32_e64_dpp %3, %0, %1, %5, 1, 1, 15, 15, 1, implicit $exec
 # GCN: %8:vgpr_32 = V_CVT_PK_U8_F32_e64_dpp %3, 4, %0, 2, %2, 2, %1, 1, 1, 15, 15, 1, implicit $mode, implicit $exec
 # GCN: %10:vgpr_32 = V_MED3_F32_e64 0, %9, 0, %0, 0, 12345678, 0, 0, implicit $mode, implicit $exec
@@ -37,7 +37,7 @@ body:             |
 ...
 ---
 
-# GCN-label: name: vop3_sgpr_src1
+# GCN-LABEL: name: vop3_sgpr_src1
 # GCN: %6:vgpr_32 = V_MED3_F32_e64_dpp %4, 0, %0, 0, %1, 0, %2, 0, 0, 1, 15, 15, 1, implicit $mode, implicit $exec
 # GFX1100: %8:vgpr_32 = V_MED3_F32_e64 0, %7, 0, %2, 0, %1, 0, 0, implicit $mode, implicit $exec
 # GFX1150: %8:vgpr_32 = V_MED3_F32_e64_dpp %4, 0, %0, 0, %2, 0, %1, 0, 0, 1, 15, 15, 1, implicit $mode, implicit $exec
@@ -81,7 +81,7 @@ body:             |
 ---
 
 # Regression test for src_modifiers on base u16 opcode
-# GCN-label: name: vop3_u16
+# GCN-LABEL: name: vop3_u16
 # GCN: %5:vgpr_32 = V_ADD_NC_U16_e64_dpp %3, 0, %1, 0, %3, 0, 0, 1, 15, 15, 1, implicit $exec
 # GCN: %7:vgpr_32 = V_ADD_NC_U16_e64_dpp %3, 1, %5, 2, %5, 0, 0, 1, 15, 15, 1, implicit $exec
 # GCN: %9:vgpr_32 = V_ADD_NC_U16_e64 4, %8, 8, %7, 0, 0, implicit $exec
@@ -205,7 +205,7 @@ body:             |
 ...
 
 # do not combine, dpp arg used twice
-# GCN-label: name: dpp_arg_twice
+# GCN-LABEL: name: dpp_arg_twice
 # GCN: %4:vgpr_32 = V_FMA_F32_e64 1, %1, 2, %3, 2, %3, 1, 2, implicit $mode, implicit $exec
 # GCN: %6:vgpr_32 = V_FMA_F32_e64 2, %5, 2, %1, 2, %5, 1, 2, implicit $mode, implicit $exec
 # GCN: %8:vgpr_32 = V_FMA_F32_e64 2, %7, 2, %7, 2, %1, 1, 2, implicit $mode, implicit $exec
@@ -231,7 +231,7 @@ body:             |
 ...
 
 # when the dpp source isn't a src0 operand the operation should be commuted if possible
-# GCN-label: name: dpp_commute_e64
+# GCN-LABEL: name: dpp_commute_e64
 # GCN: %4:vgpr_32  = V_MUL_U32_U24_e64_dpp %1, %0, %1, 1, 1, 14, 15, 0, implicit $exec
 # GCN: %7:vgpr_32 = V_FMA_F32_e64_dpp %5, 2, %0, 1, %1, 2, %1, 1, 2, 1, 15, 15, 1, implicit $mode, implicit $exec
 # GCN: %10:vgpr_32 = V_SUBREV_U32_e64_dpp %1, %0, %1, 1, 1, 14, 15, 0, implicit $exec
diff --git a/llvm/test/CodeGen/ARM/dsp-loop-indexing.ll b/llvm/test/CodeGen/ARM/dsp-loop-indexing.ll
index 9fb64471e9881..892e66aed4e5f 100644
--- a/llvm/test/CodeGen/ARM/dsp-loop-indexing.ll
+++ b/llvm/test/CodeGen/ARM/dsp-loop-indexing.ll
@@ -22,7 +22,7 @@
 ; CHECK-DEFAULT: ldr{{.*}}, #4]
 ; CHECK-DEFAULT: str{{.*}}, #4]
 ; CHECK-DEFAULT: ldr{{.*}}, #8]!
-; CHECK-DEAFULT: ldr{{.*}}, #8]!
+; CHECK-DEFAULT: ldr{{.*}}, #8]!
 ; CHECK-DEFAULT: str{{.*}}, #8]!
 
 ; CHECK-COMPLEX: ldr{{.*}}, #8]!
diff --git a/llvm/test/CodeGen/ARM/shifter_operand.ll b/llvm/test/CodeGen/ARM/shifter_operand.ll
index bf2e8aa911c64..00922b1bf2492 100644
--- a/llvm/test/CodeGen/ARM/shifter_operand.ll
+++ b/llvm/test/CodeGen/ARM/shifter_operand.ll
@@ -121,7 +121,6 @@ define i32 @test_orr_extract_from_mul_1(i32 %x, i32 %y) {
 ; CHECK-THUMB-NEXT:    orrs r0, r1
 ; CHECK-THUMB-NEXT:    bx lr
 entry:
-; CHECk-THUMB: orrs r0, r1
   %mul = mul i32 %y, 63767
   %or = or i32 %mul, %x
   ret i32 %or
diff --git a/llvm/test/CodeGen/ARM/speculation-hardening-sls.ll b/llvm/test/CodeGen/ARM/speculation-hardening-sls.ll
index f25d73a12246f..1f60f120dc86a 100644
--- a/llvm/test/CodeGen/ARM/speculation-hardening-sls.ll
+++ b/llvm/test/CodeGen/ARM/speculation-hardening-sls.ll
@@ -248,7 +248,7 @@ entry:
 ; HARDEN-COMDAT-OFF-NOT:  .hidden {{__llvm_slsblr_thunk_(arm|thumb)_r5}}
 ; HARDEN-COMDAT-OFF-NOT:  .weak {{__llvm_slsblr_thunk_(arm|thumb)_r5}}
 ; HARDEN-COMDAT-OFF:      .type {{__llvm_slsblr_thunk_(arm|thumb)_r5}},%function
-; HARDEN-label: {{__llvm_slsblr_thunk_(arm|thumb)_r5}}:
+; HARDEN-LABEL: {{__llvm_slsblr_thunk_(arm|thumb)_r5}}:
 ; HARDEN:    bx r5
 ; ISBDSB-NEXT: dsb sy
 ; ISBDSB-NEXT: isb
diff --git a/llvm/test/CodeGen/ARM/sxt_rot.ll b/llvm/test/CodeGen/ARM/sxt_rot.ll
index e9649c7a7fd9a..775e45201105c 100644
--- a/llvm/test/CodeGen/ARM/sxt_rot.ll
+++ b/llvm/test/CodeGen/ARM/sxt_rot.ll
@@ -22,7 +22,6 @@ define signext i8 @test1(i32 %A) {
 ; CHECK-V7:       @ %bb.0:
 ; CHECK-V7-NEXT:    sbfx r0, r0, #8, #8
 ; CHECK-V7-NEXT:    bx lr
-; CHECk-V7: sbfx r0, r0, #8, #8
   %B = lshr i32 %A, 8
   %C = shl i32 %A, 24
   %D = or i32 %B, %C
diff --git a/llvm/test/CodeGen/Mips/optimizeAndPlusShift.ll b/llvm/test/CodeGen/Mips/optimizeAndPlusShift.ll
index bf69adf6702f0..58920483e24bf 100644
--- a/llvm/test/CodeGen/Mips/optimizeAndPlusShift.ll
+++ b/llvm/test/CodeGen/Mips/optimizeAndPlusShift.ll
@@ -3,11 +3,11 @@
 ; RUN: llc < %s -mtriple=mips64el-unknown-linux-gnuabi64 | FileCheck %s --check-prefixes=MIPS64
 
 define i32 @shl_32(i32 %a, i32 %b) {
-; MIPS32-LABLE:   shl_32:
+; MIPS32-LABEL:   shl_32:
 ; MIPS32:	  # %bb.0:
 ; MIPS32-NEXT:    jr	$ra
 ; MIPS32-NEXT:    sllv	$2, $4, $5
-; MIPS64-LABLE:   shl_32:
+; MIPS64-LABEL:   shl_32:
 ; MIPS64:	  # %bb.0:
 ; MIPS64-NEXT:    sll   $1, $5, 0
 ; MIPS64-NEXT:    sll   $2, $4, 0
@@ -19,11 +19,11 @@ define i32 @shl_32(i32 %a, i32 %b) {
 }
 
 define i32 @lshr_32(i32 %a, i32 %b) {
-; MIPS32-LABLE:   lshr_32:
+; MIPS32-LABEL:   lshr_32:
 ; MIPS32:	  # %bb.0:
 ; MIPS32-NEXT:    jr	$ra
 ; MIPS32-NEXT:    srlv	$2, $4, $5
-; MIPS64-LABLE:   lshr_32:
+; MIPS64-LABEL:   lshr_32:
 ; MIPS64:	  # %bb.0:
 ; MIPS64-NEXT:    sll   $1, $5, 0
 ; MIPS64-NEXT:    sll   $2, $4, 0
@@ -35,11 +35,11 @@ define i32 @lshr_32(i32 %a, i32 %b) {
 }
 
 define i32 @ashr_32(i32 %a, i32 %b) {
-; MIPS32-LABLE:   ashr_32:
+; MIPS32-LABEL:   ashr_32:
 ; MIPS32:	  # %bb.0:
 ; MIPS32-NEXT:    jr	$ra
 ; MIPS32-NEXT:    srav	$2, $4, $5
-; MIPS64-LABLE:   ashr_32:
+; MIPS64-LABEL:   ashr_32:
 ; MIPS64:	  # %bb.0:
 ; MIPS64-NEXT:    sll   $1, $5, 0
 ; MIPS64-NEXT:    sll   $2, $4, 0
@@ -51,7 +51,7 @@ define i32 @ashr_32(i32 %a, i32 %b) {
 }
 
 define i64 @shl_64(i64 %a, i64 %b) {
-; MIPS64-LABLE:   shl_64:
+; MIPS64-LABEL:   shl_64:
 ; MIPS64:	  # %bb.0:
 ; MIPS64-NEXT:    sll   $1, $5, 0
 ; MIPS64-NEXT:    jr	$ra
@@ -62,7 +62,7 @@ define i64 @shl_64(i64 %a, i64 %b) {
 }
 
 define i64 @lshr_64(i64 %a, i64 %b) {
-; MIPS64-LABLE:   lshr_64:
+; MIPS64-LABEL:   lshr_64:
 ; MIPS64:	  # %bb.0:
 ; MIPS64-NEXT:    sll   $1, $5, 0
 ; MIPS64-NEXT:    jr	$ra
@@ -73,7 +73,7 @@ define i64 @lshr_64(i64 %a, i64 %b) {
 }
 
 define i64 @ashr_64(i64 %a, i64 %b) {
-; MIPS64-LABLE:   ashr_64:
+; MIPS64-LABEL:   ashr_64:
 ; MIPS64:	  # %bb.0:
 ; MIPS64-NEXT:    sll   $1, $5, 0
 ; MIPS64-NEXT:    jr	$ra
diff --git a/llvm/test/CodeGen/NVPTX/idioms.ll b/llvm/test/CodeGen/NVPTX/idioms.ll
index e8fe47c303f92..0669d2a3717cb 100644
--- a/llvm/test/CodeGen/NVPTX/idioms.ll
+++ b/llvm/test/CodeGen/NVPTX/idioms.ll
@@ -42,7 +42,7 @@ define %struct.S16 @i32_to_2xi16(i32 noundef %in) {
   %high = trunc i32 %high32 to i16
 ; CHECK:       ld.param.u32  %[[R32:r[0-9]+]], [i32_to_2xi16_param_0];
 ; CHECK-DAG:   cvt.u16.u32   %rs{{[0-9+]}}, %[[R32]];
-; CHECK-DAG    mov.b32       {tmp, %rs{{[0-9+]}}}, %[[R32]];
+; CHECK-DAG:    mov.b32       {tmp, %rs{{[0-9+]}}}, %[[R32]];
   %s1 = insertvalue %struct.S16 poison, i16 %low, 0
   %s = insertvalue %struct.S16 %s1, i16 %high, 1
   ret %struct.S16 %s
@@ -56,7 +56,7 @@ define %struct.S16 @i32_to_2xi16_lh(i32 noundef %in) {
   %low = trunc i32 %in to i16
 ; CHECK:       ld.param.u32  %[[R32:r[0-9]+]], [i32_to_2xi16_lh_param_0];
 ; CHECK-DAG:   cvt.u16.u32   %rs{{[0-9+]}}, %[[R32]];
-; CHECK-DAG    mov.b32       {tmp, %rs{{[0-9+]}}}, %[[R32]];
+; CHECK-DAG:    mov.b32       {tmp, %rs{{[0-9+]}}}, %[[R32]];
   %s1 = insertvalue %struct.S16 poison, i16 %low, 0
   %s = insertvalue %struct.S16 %s1, i16 %high, 1
   ret %struct.S16 %s
@@ -84,7 +84,7 @@ define %struct.S32 @i64_to_2xi32(i64 noundef %in) {
   %high = trunc i64 %high64 to i32
 ; CHECK:       ld.param.u64  %[[R64:rd[0-9]+]], [i64_to_2xi32_param_0];
 ; CHECK-DAG:   cvt.u32.u64   %r{{[0-9+]}}, %[[R64]];
-; CHECK-DAG    mov.b64       {tmp, %r{{[0-9+]}}}, %[[R64]];
+; CHECK-DAG:    mov.b64       {tmp, %r{{[0-9+]}}}, %[[R64]];
   %s1 = insertvalue %struct.S32 poison, i32 %low, 0
   %s = insertvalue %struct.S32 %s1, i32 %high, 1
   ret %struct.S32 %s
@@ -114,8 +114,8 @@ define %struct.S16 @i32_to_2xi16_shr(i32 noundef %i){
   %h = trunc i32 %h32 to i16
 ; CHECK:      ld.param.u32    %[[R32:r[0-9]+]], [i32_to_2xi16_shr_param_0];
 ; CHECK:      shr.s32         %[[R32H:r[0-9]+]], %[[R32]], 16;
-; CHECK-DAG    mov.b32       {tmp, %rs{{[0-9+]}}}, %[[R32]];
-; CHECK-DAG    mov.b32       {tmp, %rs{{[0-9+]}}}, %[[R32H]];
+; CHECK-DAG:    mov.b32       {tmp, %rs{{[0-9+]}}}, %[[R32]];
+; CHECK-DAG:    mov.b32       {tmp, %rs{{[0-9+]}}}, %[[R32H]];
   %s0 = insertvalue %struct.S16 poison, i16 %l, 0
   %s1 = insertvalue %struct.S16 %s0, i16 %h, 1
   ret %struct.S16 %s1
diff --git a/llvm/test/CodeGen/SPARC/inlineasm.ll b/llvm/test/CodeGen/SPARC/inlineasm.ll
index 9817d7c6971f5..786e9f3eb1e13 100644
--- a/llvm/test/CodeGen/SPARC/inlineasm.ll
+++ b/llvm/test/CodeGen/SPARC/inlineasm.ll
@@ -144,7 +144,7 @@ entry:
   ret void
 }
 
-; CHECK-label:test_twinword
+; CHECK-LABEL:test_twinword
 ; CHECK: rd  %asr5, %i1
 ; CHECK: srlx %i1, 32, %i0
 
diff --git a/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_usm_storage_classes/intel-usm-addrspaces.ll b/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_usm_storage_classes/intel-usm-addrspaces.ll
index b5df462bd8fa9..f5f1382a35fca 100644
--- a/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_usm_storage_classes/intel-usm-addrspaces.ll
+++ b/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_usm_storage_classes/intel-usm-addrspaces.ll
@@ -6,7 +6,7 @@
 ; TODO: %if spirv-tools %{ llc -O0 -mtriple=spirv64-unknown-unknown %s -o - -filetype=obj | spirv-val %}
 
 ; CHECK-: Capability USMStorageClassesINTEL
-; CHECK-SPIRV-WITHOUT-NO: Capability USMStorageClassesINTEL
+; CHECK-SPIRV-WITHOUT-NOT: Capability USMStorageClassesINTEL
 ; CHECK-SPIRV-EXT-DAG: %[[DevTy:[0-9]+]] = OpTypePointer DeviceOnlyINTEL %[[#]]
 ; CHECK-SPIRV-EXT-DAG: %[[HostTy:[0-9]+]] = OpTypePointer HostOnlyINTEL %[[#]]
 ; CHECK-SPIRV-DAG: %[[CrsWrkTy:[0-9]+]] = OpTypePointer CrossWorkgroup %[[#]]
diff --git a/llvm/test/CodeGen/SystemZ/prefetch-04.ll b/llvm/test/CodeGen/SystemZ/prefetch-04.ll
index 61a2a1460c583..10755bdb66eb5 100644
--- a/llvm/test/CodeGen/SystemZ/prefetch-04.ll
+++ b/llvm/test/CodeGen/SystemZ/prefetch-04.ll
@@ -6,7 +6,7 @@
 ;
 ; CHECK-LABEL: for.body
 ; CHECK: call void @llvm.prefetch.p0(ptr %...
[truncated]

@llvmbot (Member) commented May 29, 2024

@llvm/pr-subscribers-debuginfo


@llvmbot (Member) commented May 29, 2024

@llvm/pr-subscribers-backend-webassembly

@llvmbot
Copy link
Member

llvmbot commented May 29, 2024

@llvm/pr-subscribers-backend-systemz

@llvmbot
Copy link
Member

llvmbot commented May 29, 2024

@llvm/pr-subscribers-pgo

Author: klensy (klensy)

Changes

Moved fixes for llvm from #91854, plus few additional.


Patch is 153.82 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/93673.diff

@llvmbot
Member

llvmbot commented May 29, 2024

@llvm/pr-subscribers-mc

Author: klensy (klensy)

Changes

Moved fixes for llvm from #91854, plus few additional.


Patch is 153.63 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/93673.diff

@klensy
Contributor Author

klensy commented May 29, 2024

See also #93674 for list of some unfixed tests

@klensy klensy changed the title from [llvm][test] Fix filecheck annotation typos to [llvm][test] Fix filecheck annotation typos [1/n] on May 29, 2024
@klensy klensy force-pushed the filecheck-llvm-f branch from 3336bb3 to 57cb2fc on May 29, 2024 12:44
@jayfoad
Contributor

jayfoad commented May 29, 2024

I took the liberty of committing all the AMDGPU fixes as fbe98da. Thanks!

@klensy
Contributor Author

klensy commented May 29, 2024

I took the liberty of committing all the AMDGPU fixes as fbe98da. Thanks!

This is the best possible way to commit those independent fixes, thanks.

Collaborator

@pogo59 pogo59 left a comment


This kind of audit is long overdue.

Did you verify that all of the COM: annotations are still needed?

I saw one questionable change; the rest LGTM.

@klensy klensy force-pushed the filecheck-llvm-f branch 2 times, most recently from a0bd60b to 948f53b on June 1, 2024 12:00
RKSimon added a commit that referenced this pull request Jun 3, 2024
Noticed while triaging the failures on #93673 - the attributor pass doesn't emit any range metadata in these tests
RKSimon added a commit that referenced this pull request Jun 3, 2024
Fix typos in AGGRESIVE-->AGGRESSIVE + WAYAGGRESIVE->WAYAGGRESSIVE

This also exposed an issue that the WAYAGGRESSIVE run removed a block entirely, so the LABEL check was silently failing.

Noticed while triaging the failures on #93673
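
Context on why such checks fail silently: FileCheck only treats a comment as a directive when it begins with an enabled check prefix plus a recognized suffix, so a misspelling such as CHECK-LABAL: is skipped as ordinary text rather than reported as an error, and the intended check is quietly disabled. A minimal hypothetical test sketch (not taken from this patch; the function name and triple are illustrative) showing the failure mode:

; RUN: llc -mtriple=x86_64-- < %s | FileCheck %s

; The next directive is misspelled ("LABAL"); FileCheck treats the whole line
; as an ordinary comment, so this check is never applied.
; CHECK-LABAL: foo:

; The corrected spelling is a real directive and is actually enforced.
; CHECK-LABEL: foo:
; CHECK: ret
define void @foo() {
entry:
  ret void
}

Because unknown directive spellings are not diagnosed, such checks tend to surface only through audits like this one or when the behavior they were meant to guard regresses unnoticed.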
RKSimon added a commit that referenced this pull request Jun 3, 2024
…complete codegen line

Noticed while triaging the failures on #93673
@klensy klensy force-pushed the filecheck-llvm-f branch from 948f53b to 2091edc on June 5, 2024 12:09
klensy pushed a commit to klensy/llvm-project that referenced this pull request Jun 8, 2024
@klensy
Contributor Author

klensy commented Jun 8, 2024

Copied passing tests to #94857 so they can be merged faster.

@klensy klensy force-pushed the filecheck-llvm-f branch from 2091edc to 2129c06 on June 8, 2024 14:22
klensy pushed a commit to klensy/llvm-project that referenced this pull request Jun 10, 2024
@RKSimon
Collaborator

RKSimon commented Jun 10, 2024

@klensy The last few test failures are messy - I think you're going to have to trawl through git blame, and reach out to the relevant people (not just the last person to touch them as they might have just done a cleanup).

@klensy
Contributor Author

klensy commented Jun 10, 2024

@klensy The last few test failures are messy - I think you're going to have to trawl through git blame, and reach out to the relevant people (not just the last person to touch them as they might have just done a cleanup).

That's why I split the working part into a separate PR, to stop rebasing files around.

klensy pushed a commit to klensy/llvm-project that referenced this pull request Feb 3, 2025
@klensy
Contributor Author

klensy commented Feb 8, 2025

Rebased; the first 2 commits are from the almost-merged #94857, let's see what's left.
