Replacing use of iree-hal-target-backends in most tests. #20295

Merged
merged 5 commits into from Mar 21, 2025
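As context for the diff below: the change swaps the legacy `--iree-hal-target-backends=` flag for the newer device/backend flag pair in test configurations, docs, and scripts. A minimal sketch of the substitution, based only on the commands changed in this diff (the `input.mlir` file name is a placeholder):

```bash
# Legacy style: one flag picks both the device and the executable backend.
iree-compile --iree-hal-target-backends=llvm-cpu -o /dev/null input.mlir

# New style: pick the HAL target device, then the backend(s) for that device.
iree-compile \
  --iree-hal-target-device=local \
  --iree-hal-local-target-device-backends=llvm-cpu \
  -o /dev/null input.mlir

# GPU tests follow the same pattern, e.g. ROCm/HIP:
#   --iree-hal-target-backends=rocm  ->  --iree-hal-target-device=hip
iree-compile --iree-hal-target-device=hip --iree-hip-target=gfx1100 -o /dev/null input.mlir
```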
12 changes: 6 additions & 6 deletions .github/workflows/pkgci_test_onnx.yml
@@ -32,9 +32,9 @@ jobs:
runs-on: ubuntu-24.04

# AMD GPU
- name: amdgpu_rocm_rdna3
- name: amdgpu_hip_rdna3
numprocesses: 1
config-file: onnx_ops_gpu_rocm_rdna3.json
config-file: onnx_ops_gpu_hip_rdna3.json
runs-on: nodai-amdgpu-w7900-x86-64
- name: amdgpu_vulkan
numprocesses: 4
@@ -90,7 +90,7 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: iree-org/iree-test-suites
ref: fb8ebeea324dccce51af8e725008689cab745600
ref: 0c8c3acbd7c49b001af8a58fffccb5c3503d8a87
path: iree-test-suites
- name: Install ONNX ops test suite requirements
run: |
@@ -138,8 +138,8 @@ jobs:
- X64

# AMD GPU
- name: amdgpu_rocm_rdna3
config-file: onnx_models_gpu_rocm_rdna3.json
- name: amdgpu_hip_rdna3
config-file: onnx_models_gpu_hip_rdna3.json
runs-on: nodai-amdgpu-w7900-x86-64
- name: amdgpu_vulkan
config-file: onnx_models_gpu_vulkan.json
@@ -173,7 +173,7 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: iree-org/iree-test-suites
ref: fb8ebeea324dccce51af8e725008689cab745600
ref: 0c8c3acbd7c49b001af8a58fffccb5c3503d8a87
path: iree-test-suites
- name: Install ONNX models test suite requirements
run: |
2 changes: 1 addition & 1 deletion .github/workflows/pkgci_test_sharktank.yml
@@ -62,7 +62,7 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
repository: iree-org/iree-test-suites
ref: fb8ebeea324dccce51af8e725008689cab745600
ref: 0c8c3acbd7c49b001af8a58fffccb5c3503d8a87
path: iree-test-suites
lfs: true
- name: Install Sharktank models test suite requirements
20 changes: 10 additions & 10 deletions build_tools/pkgci/bisect/README.md
@@ -1,7 +1,7 @@
# Package bisect scripting

This scripting connects the `git bisect` tool
(https://git-scm.com/docs/git-bisect) with IREE's package builds, allowing
(<https://git-scm.com/docs/git-bisect>) with IREE's package builds, allowing
developers to run tests through commit history efficiently. For example, this
can be used to spot at which commit an `iree-compile` command started failing.

@@ -20,8 +20,8 @@ commit.
Requirement | Details
----------- | -------
Linux | (at least until IREE builds packages for other systems at each commit)
`git` | https://git-scm.com/
`gh` CLI | https://cli.github.com/
`git` | <https://git-scm.com/>
`gh` CLI | <https://cli.github.com/>
iree-org/iree repository read access | Needed to [download workflow artifacts](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/downloading-workflow-artifacts). See also [obtaining commit access](https://iree.dev/developers/general/contributing/#obtaining-commit-access).
`python3.11` with `venv` support | (Version must match what PkgCI builds) `sudo apt install python3.11 python3.11-dev python3.11-venv`

@@ -42,7 +42,7 @@ serialized `.mlir` files not being stable, etc.).
### Example

Let's try to find the culprit commit for issue
https://github.com/iree-org/iree/issues/18879. Thanks to the detailed issue
<https://github.com/iree-org/iree/issues/18879>. Thanks to the detailed issue
description, we have all the data we need to run a bisect already.

To run the bisect tool:
@@ -70,7 +70,7 @@ To run the bisect tool:
# instead of spending all the time to serialize an output `.vmfb` file.
# https://iree.dev/developers/general/developer-tips/#compiling-phase-by-phase

iree-compile --iree-hal-target-backends=llvm-cpu -o /dev/null /tmp/issue_18879.mlir
iree-compile --iree-hal-target-device=local --iree-hal-local-target-device-backends=llvm-cpu -o /dev/null /tmp/issue_18879.mlir
```

If the test command spans multiple lines, you can put it in an executable
@@ -90,13 +90,13 @@ To run the bisect tool:
./bisect_packages.py \
--good-ref=f9fa934c649749b30fc4be05d9cef78eb043f0e9 \
--bad-ref=05bbcf1385146d075829cd940a52bf06961614d0 \
--test-command="iree-compile --iree-hal-target-backends=llvm-cpu -o /dev/null /tmp/issue_18879.mlir"
--test-command="iree-compile --iree-hal-target-device=local --iree-hal-local-target-device-backends=llvm-cpu -o /dev/null /tmp/issue_18879.mlir"

# 206b60ca59c9dbbca5769694df4714c38cecaced is the first bad commit
```

As expected, the bisect agrees with the culprit mentioned on the issue:
https://github.com/iree-org/iree/issues/18879#issuecomment-2435531655.
<https://github.com/iree-org/iree/issues/18879#issuecomment-2435531655>.

Note that any git ref can be used, so we can use tags too:

@@ -180,7 +180,7 @@ set +e
############ ORIGINAL SCRIPT ############
#########################################

iree-compile --iree-hal-target-backends=llvm-cpu -o /dev/null /home/nod/.iree/bisect/issue_18879.mlir
iree-compile --iree-hal-target-device=local --iree-hal-local-target-device-backends=llvm-cpu -o /dev/null /home/nod/.iree/bisect/issue_18879.mlir

#########################################
##### BISECT RELEASE SCRIPT CLEANUP #####
@@ -194,7 +194,7 @@ fi

### Example annotated logs

Raw logs here: https://gist.github.com/ScottTodd/cff468a50df63b65e5c5f449fabab6af
Raw logs here: <https://gist.github.com/ScottTodd/cff468a50df63b65e5c5f449fabab6af>

```bash
$ ./bisect_packages.py \
@@ -261,7 +261,7 @@ sympy==1.13.3
# Here we run the test script
# -----------------------------------------------------
+ set +e
+ iree-compile --iree-hal-target-backends=llvm-cpu -o /dev/null /home/nod/.iree/bisect/issue_18879.mlir
+ iree-compile --iree-hal-target-device=local --iree-hal-local-target-device-backends=llvm-cpu -o /dev/null /home/nod/.iree/bisect/issue_18879.mlir
/home/nod/.iree/bisect/issue_18879.mlir:17:11: error: operand #0 does not dominate this use
%21 = torch.operator "onnx.Resize"(%20, %none, %1) {torch.onnx.coordinate_transformation_mode = "asymmetric", torch.onnx.cubic_coeff_a = -7.500000e-01 : f32, torch.onnx.mode = "nearest", torch.onnx.nearest_mode = "floor"} : (!torch.vtensor<[1,18,14,14],f32>, !torch.none, !torch.vtensor<[4],f32>) -> !torch.vtensor<[1,18,56,56],f32>
^
2 changes: 1 addition & 1 deletion build_tools/pkgci/bisect/bisect_packages.py
@@ -27,7 +27,7 @@
bisect_packages.py \
--good-ref=iree-3.0.0 \
--bad-ref=main \
--test-command="iree-compile --iree-hal-target-backends=llvm-cpu -o /dev/null /tmp/repro.mlir"
--test-command="iree-compile --iree-hal-target-device=local --iree-hal-local-target-device-backends=llvm-cpu -o /dev/null /tmp/repro.mlir"
"""


3 changes: 2 additions & 1 deletion build_tools/scripts/ir_to_markdown.py
@@ -11,7 +11,8 @@
# Get a dump of IR from a compiler tool:
$ iree-opt \
--iree-transformation-pipeline \
--iree-hal-target-backends=vmvx \
--iree-hal-target-device=local \
--iree-hal-local-target-device-backends=vmvx \
--mlir-disable-threading \
--mlir-print-ir-after-all \
--mlir-print-ir-after-change \
3 changes: 2 additions & 1 deletion compiler/bindings/python/test/api/api_test.py
@@ -210,7 +210,8 @@ def testExecuteStdPipeline(self):

def testExecuteStdPipeline(self):
session = Session()
session.set_flags("--iree-hal-target-backends=vmvx")
session.set_flags("--iree-hal-target-device=local")
session.set_flags("--iree-hal-local-target-device-backends=vmvx")
inv = session.invocation()
source = Source.wrap_buffer(
session,
4 changes: 4 additions & 0 deletions compiler/plugins/iree_compiler_plugin.cmake
@@ -24,6 +24,10 @@ if(IREE_TARGET_BACKEND_LLVM_CPU)
add_subdirectory(${CMAKE_CURRENT_LIST_DIR}/target/LLVMCPU target/LLVMCPU)
endif()

# NOTE: the local device target is always added as it is needed by any in- or
# out-of-tree plugins that run on the local device.
add_subdirectory(${CMAKE_CURRENT_LIST_DIR}/target/Local target/Local)

if(IREE_TARGET_BACKEND_METAL_SPIRV)
add_subdirectory(${CMAKE_CURRENT_LIST_DIR}/target/MetalSPIRV target/MetalSPIRV)
endif()
15 changes: 1 addition & 14 deletions compiler/plugins/target/LLVMCPU/LLVMCPUTarget.cpp
@@ -145,7 +145,7 @@ class LLVMCPUTargetBackend final : public TargetBackend {
explicit LLVMCPUTargetBackend(LLVMTargetOptions options)
: defaultOptions_(std::move(options)) {}

std::string getLegacyDefaultDeviceID() const override { return "llvm-cpu"; }
std::string getLegacyDefaultDeviceID() const override { return "local"; }

void getDefaultExecutableTargets(
MLIRContext *context, StringRef deviceID, DictionaryAttr deviceConfigAttr,
@@ -844,19 +844,6 @@ class LLVMCPUTargetBackend final : public TargetBackend {
struct LLVMCPUSession
: public PluginSession<LLVMCPUSession, LLVMCPUTargetCLOptions,
PluginActivationPolicy::DefaultActivated> {
void populateHALTargetDevices(IREE::HAL::TargetDeviceList &targets) {
// TODO(multi-device): move local device registration out.
// This exists here for backwards compat with the old
// iree-hal-target-backends flag that needs to look up the device by backend
// name.
// #hal.device.target<"llvm-cpu", ...
targets.add("llvm-cpu", [=]() {
LocalDevice::Options localDeviceOptions;
localDeviceOptions.defaultTargetBackends.push_back("llvm-cpu");
localDeviceOptions.defaultHostBackends.push_back("llvm-cpu");
return std::make_shared<LocalDevice>(localDeviceOptions);
});
}
void populateHALTargetBackends(IREE::HAL::TargetBackendList &targets) {
// #hal.executable.target<"llvm-cpu", ...
targets.add("llvm-cpu", [=]() {
@@ -3,7 +3,7 @@

// TODO: Expand the test for more CLI configurations, e.g. different target triples

// RUN: iree-compile --compile-to=preprocessing --iree-hal-target-backends=llvm-cpu --iree-llvmcpu-target-triple=x86_64-linux-gnu %s \
// RUN: iree-compile --compile-to=preprocessing --iree-hal-target-device=local --iree-hal-local-target-device-backends=llvm-cpu --iree-llvmcpu-target-triple=x86_64-linux-gnu %s \
// RUN: | FileCheck %s --check-prefix=CHECK-X86-DEFAULT
//
// CHECK-X86-DEFAULT: module attributes {stream.affinity.default = #hal.device.affinity<@__device_0>} {
@@ -14,7 +14,7 @@
// CHECK-X86-DEFAULT-SAME: target_triple = "x86_64-unknown-unknown-eabi-elf"
// CHECK-X86-DEFAULT-SAME: }>]> : !hal.device

// RUN: iree-compile --compile-to=preprocessing --iree-hal-target-backends=llvm-cpu --iree-llvmcpu-target-triple=x86_64-linux-gnu %s \
// RUN: iree-compile --compile-to=preprocessing --iree-hal-target-device=local --iree-hal-local-target-device-backends=llvm-cpu --iree-llvmcpu-target-triple=x86_64-linux-gnu %s \
// RUN: --iree-llvmcpu-stack-allocation-limit=65536 \
// RUN: | FileCheck %s --check-prefix=CHECK-STACK-VALUE
//
@@ -26,7 +26,7 @@
//
// CHECK-STACK-VALUE-SAME: }>]> : !hal.device

// RUN: not iree-compile --compile-to=preprocessing --iree-hal-target-backends=llvm-cpu --iree-llvmcpu-target-triple=x86_64-linux-gnu %s \
// RUN: not iree-compile --compile-to=preprocessing --iree-hal-target-device=local --iree-hal-local-target-device-backends=llvm-cpu --iree-llvmcpu-target-triple=x86_64-linux-gnu %s \
// RUN: --iree-llvmcpu-stack-allocation-limit=64266 \
// RUN: 2>&1 | FileCheck %s --check-prefix=CHECK-INCORRECT-OPT-STACK-VALUE
//
32 changes: 32 additions & 0 deletions compiler/plugins/target/Local/BUILD.bazel
@@ -0,0 +1,32 @@
# Copyright 2025 The IREE Authors
#
# Licensed under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

load("//build_tools/bazel:build_defs.oss.bzl", "iree_compiler_cc_library", "iree_compiler_register_plugin")

package(
default_visibility = ["//visibility:public"],
features = ["layering_check"],
licenses = ["notice"], # Apache 2.0
)

iree_compiler_register_plugin(
plugin_id = "hal_target_local",
target = ":Local",
)

iree_compiler_cc_library(
name = "Local",
srcs = [
"LocalTarget.cpp",
],
deps = [
"//compiler/src/iree/compiler/Dialect/HAL/Target",
"//compiler/src/iree/compiler/Dialect/HAL/Target/Devices",
"//compiler/src/iree/compiler/PluginAPI",
"@llvm-project//llvm:Support",
"@llvm-project//mlir:Support",
],
)
34 changes: 34 additions & 0 deletions compiler/plugins/target/Local/CMakeLists.txt
@@ -0,0 +1,34 @@
################################################################################
# Autogenerated by build_tools/bazel_to_cmake/bazel_to_cmake.py from #
# compiler/plugins/target/Local/BUILD.bazel #
# #
# Use iree_cmake_extra_content from iree/build_defs.oss.bzl to add arbitrary #
# CMake-only content. #
# #
# To disable autogeneration for this file entirely, delete this header. #
################################################################################

iree_add_all_subdirs()

iree_compiler_register_plugin(
PLUGIN_ID
hal_target_local
TARGET
::Local
)

iree_cc_library(
NAME
Local
SRCS
"LocalTarget.cpp"
DEPS
LLVMSupport
MLIRSupport
iree::compiler::Dialect::HAL::Target
iree::compiler::Dialect::HAL::Target::Devices
iree::compiler::PluginAPI
PUBLIC
)

### BAZEL_TO_CMAKE_PRESERVES_ALL_CONTENT_BELOW_THIS_LINE ###
39 changes: 39 additions & 0 deletions compiler/plugins/target/Local/LocalTarget.cpp
@@ -0,0 +1,39 @@
// Copyright 2025 The IREE Authors
//
// Licensed under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

#include "iree/compiler/Dialect/HAL/Target/Devices/LocalDevice.h"
#include "iree/compiler/Dialect/HAL/Target/TargetRegistry.h"
#include "iree/compiler/PluginAPI/Client.h"
#include "llvm/Support/CommandLine.h"
#include "mlir/Support/LogicalResult.h"

namespace mlir::iree_compiler::IREE::HAL {

namespace {
struct LocalSession
: public PluginSession<LocalSession, IREE::HAL::LocalDevice::Options,
PluginActivationPolicy::DefaultActivated> {
void populateHALTargetDevices(IREE::HAL::TargetDeviceList &targets) {
// #hal.device.target<"local", ...
targets.add("local", [=]() {
return std::make_shared<IREE::HAL::LocalDevice>(options);
});
}
};

} // namespace

} // namespace mlir::iree_compiler::IREE::HAL

extern "C" bool iree_register_compiler_plugin_hal_target_local(
mlir::iree_compiler::PluginRegistrar *registrar) {
registrar->registerPlugin<mlir::iree_compiler::IREE::HAL::LocalSession>(
"hal_target_local");
return true;
}

IREE_DEFINE_COMPILER_OPTION_FLAGS(
mlir::iree_compiler::IREE::HAL::LocalDevice::Options);
@@ -1,4 +1,4 @@
// RUN: iree-compile --split-input-file --iree-hal-target-backends=rocm --iree-hip-enable-ukernels=all --iree-hip-target=gfx1100 --compile-to=executable-targets %s | FileCheck %s
// RUN: iree-compile --split-input-file --iree-hal-target-device=hip --iree-hip-enable-ukernels=all --iree-hip-target=gfx1100 --compile-to=executable-targets %s | FileCheck %s

// We want to check that uKernel is indeed generated from e2e workflow.
