
Commit fd9bf59

Auto merge of rust-lang#111999 - scottmcm:codegen-less-memcpy, r=compiler-errors
Use `load`+`store` instead of `memcpy` for small integer arrays.

I was inspired by rust-lang#98892 to see whether, rather than making `mem::swap` do something smart in the library, we could update MIR assignments like `*_1 = *_2` to do something smarter than `memcpy` for sufficiently small types, where doing the copy inline is going to beat a `memcpy` call in assembly anyway. After all, special code may help `mem::swap`, but if the "obvious" MIR can just produce the right thing, that helps everything: other code like `mem::replace`, people writing copies manually, and passing by value in general. It also makes MIR inlining happier, since the inliner sees a couple of assignments instead of all the complicated library code.

LLVM will turn short, known-length `memcpy`s into direct instructions in the backend, but that's too late for it to be able to remove `alloca`s. In general, replacing `memcpy`s with typed instructions is hard in the middle-end -- even for `memcpy.inline`, where it knows it won't be a function call -- [due to poison propagation issues](https://rust-lang.zulipchat.com/#narrow/stream/187780-t-compiler.2Fwg-llvm/topic/memcpy.20vs.20load-store.20for.20MIR.20assignments/near/360376712). Because rustc knows more about the type invariants -- these are typed copies -- it can emit something more specific, allowing LLVM to `mem2reg` away the `alloca`s in some situations. rust-lang#52051 previously did something like this in the library for `mem::swap`, but it ended up regressing when MIR inlining was enabled (rust-lang@cbbf06b), so this has been suboptimal on stable for roughly five releases now.

The code in this PR is narrowly targeted at just integer arrays in LLVM, but works via a new method on the [`LayoutTypeMethods`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/trait.LayoutTypeMethods.html) trait, so backends based on cg_ssa can enable this for more situations over time, as we find them. I don't want to try to bite off too much in this PR, though. (Transparent newtypes and simple things like the 3×usize `String` would be obvious candidates for a follow-up.)

Codegen demonstrations: <https://llvm.godbolt.org/z/fK8hT9aqv>

Before:

```llvm
define void @swap_rgb48_old(ptr noalias nocapture noundef align 2 dereferenceable(6) %x, ptr noalias nocapture noundef align 2 dereferenceable(6) %y) unnamed_addr #1 {
  %a.i = alloca [3 x i16], align 2
  call void @llvm.lifetime.start.p0(i64 6, ptr nonnull %a.i)
  call void @llvm.memcpy.p0.p0.i64(ptr noundef nonnull align 2 dereferenceable(6) %a.i, ptr noundef nonnull align 2 dereferenceable(6) %x, i64 6, i1 false)
  tail call void @llvm.memcpy.p0.p0.i64(ptr noundef nonnull align 2 dereferenceable(6) %x, ptr noundef nonnull align 2 dereferenceable(6) %y, i64 6, i1 false)
  call void @llvm.memcpy.p0.p0.i64(ptr noundef nonnull align 2 dereferenceable(6) %y, ptr noundef nonnull align 2 dereferenceable(6) %a.i, i64 6, i1 false)
  call void @llvm.lifetime.end.p0(i64 6, ptr nonnull %a.i)
  ret void
}
```

Note it going to the stack:

```nasm
swap_rgb48_old:                         # @swap_rgb48_old
        movzx   eax, word ptr [rdi + 4]
        mov     word ptr [rsp - 4], ax
        mov     eax, dword ptr [rdi]
        mov     dword ptr [rsp - 8], eax
        movzx   eax, word ptr [rsi + 4]
        mov     word ptr [rdi + 4], ax
        mov     eax, dword ptr [rsi]
        mov     dword ptr [rdi], eax
        movzx   eax, word ptr [rsp - 4]
        mov     word ptr [rsi + 4], ax
        mov     eax, dword ptr [rsp - 8]
        mov     dword ptr [rsi], eax
        ret
```

Now:

```llvm
define void @swap_rgb48(ptr noalias nocapture noundef align 2 dereferenceable(6) %x, ptr noalias nocapture noundef align 2 dereferenceable(6) %y) unnamed_addr #0 {
start:
  %0 = load <3 x i16>, ptr %x, align 2
  %1 = load <3 x i16>, ptr %y, align 2
  store <3 x i16> %1, ptr %x, align 2
  store <3 x i16> %0, ptr %y, align 2
  ret void
}
```

That still lowers to `dword`+`word` operations, but has no stack traffic:

```nasm
swap_rgb48:                             # @swap_rgb48
        mov     eax, dword ptr [rdi]
        movzx   ecx, word ptr [rdi + 4]
        movzx   edx, word ptr [rsi + 4]
        mov     r8d, dword ptr [rsi]
        mov     dword ptr [rdi], r8d
        mov     word ptr [rdi + 4], dx
        mov     word ptr [rsi + 4], cx
        mov     dword ptr [rsi], eax
        ret
```

And as a demonstration that this isn't just `mem::swap`: a `mem::replace` on a small array (`replace` hasn't used `swap` since rust-lang#83022), which used to be `memcpy`s in LLVM, changes in IR to

```llvm
define void @replace_short_array(ptr noalias nocapture noundef sret([3 x i32]) dereferenceable(12) %0, ptr noalias noundef align 4 dereferenceable(12) %r, ptr noalias nocapture noundef readonly dereferenceable(12) %v) unnamed_addr #0 {
start:
  %1 = load <3 x i32>, ptr %r, align 4
  store <3 x i32> %1, ptr %0, align 4
  %2 = load <3 x i32>, ptr %v, align 4
  store <3 x i32> %2, ptr %r, align 4
  ret void
}
```

but still lowers to reasonable `dword`+`qword` instructions:

```nasm
replace_short_array:                    # @replace_short_array
        mov     rax, rdi
        mov     rcx, qword ptr [rsi]
        mov     edi, dword ptr [rsi + 8]
        mov     dword ptr [rax + 8], edi
        mov     qword ptr [rax], rcx
        mov     rcx, qword ptr [rdx]
        mov     edx, dword ptr [rdx + 8]
        mov     dword ptr [rsi + 8], edx
        mov     qword ptr [rsi], rcx
        ret
```
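As a runnable illustration (plain stable Rust, not part of this diff) of the user code whose copies this change targets -- `mem::swap` and `mem::replace` on a small integer array -- consider:

```rust
use std::mem;

// A 48-bit pixel: three u16 channels, matching the RGB48 type used in the
// codegen tests. Copies of this type are exactly the `*_1 = *_2` MIR
// assignments that this commit lowers to a typed load+store.
type Rgb48 = [u16; 3];

fn demo() -> (Rgb48, Rgb48, Rgb48) {
    let mut x: Rgb48 = [1, 2, 3];
    let mut y: Rgb48 = [4, 5, 6];
    mem::swap(&mut x, &mut y); // x = [4,5,6], y = [1,2,3]
    let old = mem::replace(&mut x, [7, 8, 9]); // old = [4,5,6]
    (x, y, old)
}

fn main() {
    let (x, y, old) = demo();
    assert_eq!(x, [7, 8, 9]);
    assert_eq!(y, [1, 2, 3]);
    assert_eq!(old, [4, 5, 6]);
    println!("{:?} {:?} {:?}", x, y, old);
}
```

Nothing in this snippet is special: the point of the PR is that the ordinary assignments inside `swap` and `replace` now codegen well without library tricks.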
2 parents adc719d + e1b020d commit fd9bf59

File tree

8 files changed, +146 −6 lines changed

compiler/rustc_codegen_llvm/src/type_.rs

+3
```diff
@@ -288,6 +288,9 @@ impl<'ll, 'tcx> LayoutTypeMethods<'tcx> for CodegenCx<'ll, 'tcx> {
     fn reg_backend_type(&self, ty: &Reg) -> &'ll Type {
         ty.llvm_type(self)
     }
+    fn scalar_copy_backend_type(&self, layout: TyAndLayout<'tcx>) -> Option<Self::Type> {
+        layout.scalar_copy_llvm_type(self)
+    }
 }
 
 impl<'ll, 'tcx> TypeMembershipMethods<'tcx> for CodegenCx<'ll, 'tcx> {
```

compiler/rustc_codegen_llvm/src/type_of.rs

+33
```diff
@@ -6,6 +6,7 @@ use rustc_middle::bug;
 use rustc_middle::ty::layout::{FnAbiOf, LayoutOf, TyAndLayout};
 use rustc_middle::ty::print::{with_no_trimmed_paths, with_no_visible_paths};
 use rustc_middle::ty::{self, Ty, TypeVisitableExt};
+use rustc_target::abi::HasDataLayout;
 use rustc_target::abi::{Abi, Align, FieldsShape};
 use rustc_target::abi::{Int, Pointer, F32, F64};
 use rustc_target::abi::{PointeeInfo, Scalar, Size, TyAbiInterface, Variants};
@@ -192,6 +193,7 @@ pub trait LayoutLlvmExt<'tcx> {
     ) -> &'a Type;
     fn llvm_field_index<'a>(&self, cx: &CodegenCx<'a, 'tcx>, index: usize) -> u64;
     fn pointee_info_at<'a>(&self, cx: &CodegenCx<'a, 'tcx>, offset: Size) -> Option<PointeeInfo>;
+    fn scalar_copy_llvm_type<'a>(&self, cx: &CodegenCx<'a, 'tcx>) -> Option<&'a Type>;
 }
 
 impl<'tcx> LayoutLlvmExt<'tcx> for TyAndLayout<'tcx> {
@@ -414,4 +416,35 @@ impl<'tcx> LayoutLlvmExt<'tcx> for TyAndLayout<'tcx> {
         cx.pointee_infos.borrow_mut().insert((self.ty, offset), result);
         result
     }
+
+    fn scalar_copy_llvm_type<'a>(&self, cx: &CodegenCx<'a, 'tcx>) -> Option<&'a Type> {
+        debug_assert!(self.is_sized());
+
+        // FIXME: this is a fairly arbitrary choice, but 128 bits on WASM
+        // (matching the 128-bit SIMD types proposal) and 256 bits on x64
+        // (like AVX2 registers) seems at least like a tolerable starting point.
+        let threshold = cx.data_layout().pointer_size * 4;
+        if self.layout.size() > threshold {
+            return None;
+        }
+
+        // Vectors, even for non-power-of-two sizes, have the same layout as
+        // arrays but don't count as aggregate types
+        if let FieldsShape::Array { count, .. } = self.layout.fields()
+            && let element = self.field(cx, 0)
+            && element.ty.is_integral()
+        {
+            // `cx.type_ix(bits)` is tempting here, but while that works great
+            // for things that *stay* as memory-to-memory copies, it also ends
+            // up suppressing vectorization as it introduces shifts when it
+            // extracts all the individual values.
+
+            let ety = element.llvm_type(cx);
+            return Some(cx.type_vector(ety, *count));
+        }
+
+        // FIXME: The above only handled integer arrays; surely more things
+        // would also be possible. Be careful about provenance, though!
+        None
+    }
 }
```
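The size gate in `scalar_copy_llvm_type` can be restated independently of LLVM. A minimal sketch (the helper name and the `pointer_size_bytes` parameter are illustrative stand-ins for `cx.data_layout().pointer_size`, not rustc's API):

```rust
// Standalone restatement of the size gate above: a type is eligible for a
// typed load+store copy only if it fits in four pointers' worth of bytes.
fn fits_scalar_copy(type_size_bytes: u64, pointer_size_bytes: u64) -> bool {
    // Four pointers' worth: 256 bits on x86-64, 128 bits on wasm32,
    // matching the rationale in the FIXME comment in the patch.
    type_size_bytes <= pointer_size_bytes * 4
}

fn main() {
    assert!(fits_scalar_copy(6, 8)); // [u16; 3] on x86-64: 6 <= 32
    assert!(fits_scalar_copy(32, 8)); // exactly at the threshold still qualifies
    assert!(!fits_scalar_copy(33, 8)); // larger copies keep using memcpy
    println!("threshold checks passed");
}
```

Note the comparison is strict-greater in the patch (`size > threshold` returns `None`), so a type exactly at the threshold still gets the typed copy.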

compiler/rustc_codegen_ssa/src/base.rs

+13-1
```diff
@@ -380,7 +380,19 @@ pub fn memcpy_ty<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>(
         return;
     }
 
-    bx.memcpy(dst, dst_align, src, src_align, bx.cx().const_usize(size), flags);
+    if flags == MemFlags::empty()
+        && let Some(bty) = bx.cx().scalar_copy_backend_type(layout)
+    {
+        // I look forward to only supporting opaque pointers
+        let pty = bx.type_ptr_to(bty);
+        let src = bx.pointercast(src, pty);
+        let dst = bx.pointercast(dst, pty);
+
+        let temp = bx.load(bty, src, src_align);
+        bx.store(temp, dst, dst_align);
+    } else {
+        bx.memcpy(dst, dst_align, src, src_align, bx.cx().const_usize(size), flags);
+    }
 }
 
 pub fn codegen_instance<'a, 'tcx: 'a, Bx: BuilderMethods<'a, 'tcx>>(
```

compiler/rustc_codegen_ssa/src/traits/type_.rs

+22
```diff
@@ -126,6 +126,28 @@ pub trait LayoutTypeMethods<'tcx>: Backend<'tcx> {
         index: usize,
         immediate: bool,
     ) -> Self::Type;
+
+    /// A type that can be used in a [`super::BuilderMethods::load`] +
+    /// [`super::BuilderMethods::store`] pair to implement a *typed* copy,
+    /// such as a MIR `*_0 = *_1`.
+    ///
+    /// It's always legal to return `None` here, as the provided impl does,
+    /// in which case callers should use [`super::BuilderMethods::memcpy`]
+    /// instead of the `load`+`store` pair.
+    ///
+    /// This can be helpful for things like arrays, where the LLVM backend type
+    /// `[3 x i16]` optimizes to three separate loads and stores, but it can
+    /// instead be copied via an `i48` that stays as the single `load`+`store`.
+    /// (As of 2023-05 LLVM cannot necessarily optimize away a `memcpy` in these
+    /// cases, due to `poison` handling, but in codegen we have more information
+    /// about the type invariants, so can emit something better instead.)
+    ///
+    /// This *should* return `None` for particularly-large types, where leaving
+    /// the `memcpy` may well be important to avoid code size explosion.
+    fn scalar_copy_backend_type(&self, layout: TyAndLayout<'tcx>) -> Option<Self::Type> {
+        let _ = layout;
+        None
+    }
 }
 
 // For backends that support CFI using type membership (i.e., testing whether a given pointer is
```
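The provided-method pattern used here (conservative default, backends opt in) can be sketched in isolation. This is a simplified model, not the cg_ssa trait itself; the names and the `u64` type handle are stand-ins:

```rust
// Simplified model of the scalar_copy_backend_type design: a trait method
// whose default returns None (callers then use memcpy), which a specific
// backend may override to offer a typed-copy type.
trait ScalarCopyType {
    type Ty;

    // Provided impl: no typed-copy type available.
    fn scalar_copy_type(&self, size_bytes: u64) -> Option<Self::Ty> {
        let _ = size_bytes;
        None
    }
}

struct PlainBackend;
impl ScalarCopyType for PlainBackend {
    type Ty = u64; // inherits the default None
}

struct VectorBackend {
    pointer_size_bytes: u64,
}
impl ScalarCopyType for VectorBackend {
    type Ty = u64; // stand-in for a backend vector-type handle

    fn scalar_copy_type(&self, size_bytes: u64) -> Option<u64> {
        // Only offer a typed copy up to four pointers' worth of bytes;
        // report the copy width in bits as the "type".
        (size_bytes <= self.pointer_size_bytes * 4).then(|| size_bytes * 8)
    }
}

fn main() {
    assert_eq!(PlainBackend.scalar_copy_type(6), None);
    let b = VectorBackend { pointer_size_bytes: 8 };
    assert_eq!(b.scalar_copy_type(6), Some(48)); // [u16; 3] → 48-bit copy
    assert_eq!(b.scalar_copy_type(64), None); // too large, keep memcpy
    println!("trait-default checks passed");
}
```

This mirrors why the real method lives on `LayoutTypeMethods`: cg_ssa callers stay backend-agnostic, and only cg_llvm currently overrides it.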

tests/codegen/array-codegen.rs

+35
```diff
@@ -0,0 +1,35 @@
+// compile-flags: -O -C no-prepopulate-passes
+// min-llvm-version: 15.0 (for opaque pointers)
+
+#![crate_type = "lib"]
+
+// CHECK-LABEL: @array_load
+#[no_mangle]
+pub fn array_load(a: &[u8; 4]) -> [u8; 4] {
+    // CHECK: %0 = alloca [4 x i8], align 1
+    // CHECK: %[[TEMP1:.+]] = load <4 x i8>, ptr %a, align 1
+    // CHECK: store <4 x i8> %[[TEMP1]], ptr %0, align 1
+    // CHECK: %[[TEMP2:.+]] = load i32, ptr %0, align 1
+    // CHECK: ret i32 %[[TEMP2]]
+    *a
+}
+
+// CHECK-LABEL: @array_store
+#[no_mangle]
+pub fn array_store(a: [u8; 4], p: &mut [u8; 4]) {
+    // CHECK: %a = alloca [4 x i8]
+    // CHECK: %[[TEMP:.+]] = load <4 x i8>, ptr %a, align 1
+    // CHECK-NEXT: store <4 x i8> %[[TEMP]], ptr %p, align 1
+    *p = a;
+}
+
+// CHECK-LABEL: @array_copy
+#[no_mangle]
+pub fn array_copy(a: &[u8; 4], p: &mut [u8; 4]) {
+    // CHECK: %[[LOCAL:.+]] = alloca [4 x i8], align 1
+    // CHECK: %[[TEMP1:.+]] = load <4 x i8>, ptr %a, align 1
+    // CHECK: store <4 x i8> %[[TEMP1]], ptr %[[LOCAL]], align 1
+    // CHECK: %[[TEMP2:.+]] = load <4 x i8>, ptr %[[LOCAL]], align 1
+    // CHECK: store <4 x i8> %[[TEMP2]], ptr %p, align 1
+    *p = *a;
+}
```

tests/codegen/mem-replace-simple-type.rs

+11
```diff
@@ -32,3 +32,14 @@ pub fn replace_ref_str<'a>(r: &mut &'a str, v: &'a str) -> &'a str {
     // CHECK: ret { ptr, i64 } %[[P2]]
     std::mem::replace(r, v)
 }
+
+#[no_mangle]
+// CHECK-LABEL: @replace_short_array(
+pub fn replace_short_array(r: &mut [u32; 3], v: [u32; 3]) -> [u32; 3] {
+    // CHECK-NOT: alloca
+    // CHECK: %[[R:.+]] = load <3 x i32>, ptr %r, align 4
+    // CHECK: store <3 x i32> %[[R]], ptr %0
+    // CHECK: %[[V:.+]] = load <3 x i32>, ptr %v, align 4
+    // CHECK: store <3 x i32> %[[V]], ptr %r
+    std::mem::replace(r, v)
+}
```

tests/codegen/swap-simd-types.rs

+9
```diff
@@ -30,3 +30,12 @@ pub fn swap_m256_slice(x: &mut [__m256], y: &mut [__m256]) {
         x.swap_with_slice(y);
     }
 }
+
+// CHECK-LABEL: @swap_bytes32
+#[no_mangle]
+pub fn swap_bytes32(x: &mut [u8; 32], y: &mut [u8; 32]) {
+    // CHECK-NOT: alloca
+    // CHECK: load <32 x i8>{{.+}}align 1
+    // CHECK: store <32 x i8>{{.+}}align 1
+    swap(x, y)
+}
```

tests/codegen/swap-small-types.rs

+20-5
```diff
@@ -1,4 +1,4 @@
-// compile-flags: -O
+// compile-flags: -O -Z merge-functions=disabled
 // only-x86_64
 // ignore-debug: the debug assertions get in the way
 
@@ -8,13 +8,28 @@ use std::mem::swap;
 
 type RGB48 = [u16; 3];
 
+// CHECK-LABEL: @swap_rgb48_manually(
+#[no_mangle]
+pub fn swap_rgb48_manually(x: &mut RGB48, y: &mut RGB48) {
+    // CHECK-NOT: alloca
+    // CHECK: %[[TEMP0:.+]] = load <3 x i16>, ptr %x, align 2
+    // CHECK: %[[TEMP1:.+]] = load <3 x i16>, ptr %y, align 2
+    // CHECK: store <3 x i16> %[[TEMP1]], ptr %x, align 2
+    // CHECK: store <3 x i16> %[[TEMP0]], ptr %y, align 2
+
+    let temp = *x;
+    *x = *y;
+    *y = temp;
+}
+
 // CHECK-LABEL: @swap_rgb48
 #[no_mangle]
 pub fn swap_rgb48(x: &mut RGB48, y: &mut RGB48) {
-    // FIXME MIR inlining messes up LLVM optimizations.
-    // WOULD-CHECK-NOT: alloca
-    // WOULD-CHECK: load i48
-    // WOULD-CHECK: store i48
+    // CHECK-NOT: alloca
+    // CHECK: load <3 x i16>
+    // CHECK: load <3 x i16>
+    // CHECK: store <3 x i16>
+    // CHECK: store <3 x i16>
     swap(x, y)
 }
```
2035
