BGV/CKKS: support scale management #1459

Open
wants to merge 1 commit into main from ZenithalHourlyRate:bgv-scaled

Conversation

@ZenithalHourlyRate (Collaborator) commented Feb 24, 2025

See #1169

Fixes #1169 Fixes #1364 Fixes #785

I am afraid we can only do this for the Lattigo backend, as OpenFHE does not expose an explicit API for setting the scale. The policy implemented here is in line with OpenFHE's own implementation, which applies it automatically.

The detailed rationale/implementation of the scale management is described in the design doc included in this PR.

There are a few changes to support scale:

  • mgmt.level_reduce and mgmt.adjust_scale ops to support the corresponding operations
  • Modified secret-insert-mgmt-bgv to use these ops to handle cross-level ops, with adjust_scale as a placeholder
  • --validate-noise will generate parameters aware of these management ops
  • --populate-scale (better name wanted) to concretely fill in the scale based on the parameters

TODO

Cc @AlexanderViand-Intel: a comment on #1295 (comment) is that the two backends we have can safely Add(ct0, ct1) with ciphertexts of different scales, because internally, when they find the scales mismatched, they adjust the scale themselves. So the mixed-degree option for optimize-relinearization can be turned on without affecting correctness, though the noise differs. Merging this PR does not fix the scale-mismatch problem possibly induced by optimize-relinearization for our current two backends, but it does pave the way for our own polynomial backend, which must be scale-aware.

Example

The input MLIR:

func.func @cross_level_add(%base: tensor<4xi16> {secret.secret}, %add: tensor<4xi16> {secret.secret}) -> tensor<4xi16> {
  // same level
  %base0 = arith.addi %base, %add : tensor<4xi16>
  // increase one level
  %mul1 = arith.muli %base0, %base0 : tensor<4xi16>
  // cross level add
  %base1 = arith.addi %mul1, %add : tensor<4xi16>
  // increase one level
  %mul2 = arith.muli %base1, %base1 : tensor<4xi16>
  // cross level add
  %base2 = arith.addi %mul2, %add : tensor<4xi16>
  // increase one level
  %mul3 = arith.muli %base2, %base2 : tensor<4xi16>
  // cross level add
  %base3 = arith.addi %mul3, %add : tensor<4xi16>
  return %base3 : tensor<4xi16>
}
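
For reference, here is a plain Python transcription of what this function computes (an illustration of the example, not code from the PR); each squaring consumes one level, while %add stays at the original top level:

def cross_level_add(base, add):
    base0 = base + add    # same level
    mul1 = base0 * base0  # consumes one level
    base1 = mul1 + add    # cross-level add: %add is still at a higher level
    mul2 = base1 * base1
    base2 = mul2 + add
    mul3 = base2 * base2
    base3 = mul3 + add    # three levels consumed in total
    return base3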

After secret-insert-mgmt-bgv, we get:

      %1 = arith.addi %input0, %input1 {mgmt.mgmt = #mgmt.mgmt<level = 3>} : tensor<4xi16>
      %2 = arith.muli %1, %1 {mgmt.mgmt = #mgmt.mgmt<level = 3, dimension = 3>} : tensor<4xi16>
      %3 = mgmt.relinearize %2 {mgmt.mgmt = #mgmt.mgmt<level = 3>} : tensor<4xi16>
      %4 = arith.addi %3, %input1 {mgmt.mgmt = #mgmt.mgmt<level = 3>} : tensor<4xi16>
      %5 = mgmt.modreduce %4 {mgmt.mgmt = #mgmt.mgmt<level = 2>} : tensor<4xi16>
      %6 = arith.muli %5, %5 {mgmt.mgmt = #mgmt.mgmt<level = 2, dimension = 3>} : tensor<4xi16>
      %7 = mgmt.relinearize %6 {mgmt.mgmt = #mgmt.mgmt<level = 2>} : tensor<4xi16>
      %8 = mgmt.adjust_scale %input1 {mgmt.mgmt = #mgmt.mgmt<level = 3>} : tensor<4xi16>
      %9 = mgmt.modreduce %8 {mgmt.mgmt = #mgmt.mgmt<level = 2>} : tensor<4xi16>
      %10 = arith.addi %7, %9 {mgmt.mgmt = #mgmt.mgmt<level = 2>} : tensor<4xi16>
      %11 = mgmt.modreduce %10 {mgmt.mgmt = #mgmt.mgmt<level = 1>} : tensor<4xi16>
      %12 = arith.muli %11, %11 {mgmt.mgmt = #mgmt.mgmt<level = 1, dimension = 3>} : tensor<4xi16>
      %13 = mgmt.relinearize %12 {mgmt.mgmt = #mgmt.mgmt<level = 1>} : tensor<4xi16>
      %14 = mgmt.level_reduce %input1 {mgmt.mgmt = #mgmt.mgmt<level = 2>} : tensor<4xi16>
      %15 = mgmt.adjust_scale %14 {mgmt.mgmt = #mgmt.mgmt<level = 2>} : tensor<4xi16>
      %16 = mgmt.modreduce %15 {mgmt.mgmt = #mgmt.mgmt<level = 1>} : tensor<4xi16>
      %17 = arith.addi %13, %16 {mgmt.mgmt = #mgmt.mgmt<level = 1>} : tensor<4xi16>
      %18 = mgmt.modreduce %17 {mgmt.mgmt = #mgmt.mgmt<level = 0>} : tensor<4xi16>

where adjust_scale has no concrete scale parameter yet.

After --validate-noise and --populate-scale, we get the per-level scales and the value to fill in for each adjust_scale:

PopulateScale: scale = [57802, 46604, 21845, 1, ]
PopulateScale: scaleBig = [60481, 36636, 29128, 1, ]
PopulateScale: adjustScale = [1, 2528, 13431, 21845, ]
Propagate ScaleState(1) to <block argument> of type 'tensor<4xi16>' at index: 0
Propagate ScaleState(1) to <block argument> of type 'tensor<4xi16>' at index: 1
Propagate ScaleState(1) to %1 = arith.addi %input0, %input1 {mgmt.mgmt = #mgmt.mgmt<level = 3>} : tensor<4xi16>
Propagate ScaleState(1) to %2 = arith.muli %1, %1 {mgmt.mgmt = #mgmt.mgmt<level = 3, dimension = 3>} : tensor<4xi16>
Propagate ScaleState(1) to %3 = mgmt.relinearize %2 {mgmt.mgmt = #mgmt.mgmt<level = 3>} : tensor<4xi16>
Propagate ScaleState(1) to %4 = arith.addi %3, %input1 {mgmt.mgmt = #mgmt.mgmt<level = 3>} : tensor<4xi16>
Propagate ScaleState(21845) to %5 = mgmt.modreduce %4 {mgmt.mgmt = #mgmt.mgmt<level = 2>} : tensor<4xi16>
Propagate ScaleState(29128) to %6 = arith.muli %5, %5 {mgmt.mgmt = #mgmt.mgmt<level = 2, dimension = 3>} : tensor<4xi16>
Propagate ScaleState(29128) to %7 = mgmt.relinearize %6 {mgmt.mgmt = #mgmt.mgmt<level = 2>} : tensor<4xi16>
Propagate ScaleState(21845) to %8 = mgmt.adjust_scale %input1 {mgmt.mgmt = #mgmt.mgmt<level = 3>, scale = 21845 : i64} : tensor<4xi16>
Propagate ScaleState(29128) to %9 = mgmt.modreduce %8 {mgmt.mgmt = #mgmt.mgmt<level = 2>} : tensor<4xi16>
Propagate ScaleState(29128) to %10 = arith.addi %7, %9 {mgmt.mgmt = #mgmt.mgmt<level = 2>} : tensor<4xi16>
Propagate ScaleState(46604) to %11 = mgmt.modreduce %10 {mgmt.mgmt = #mgmt.mgmt<level = 1>} : tensor<4xi16>
Propagate ScaleState(36636) to %12 = arith.muli %11, %11 {mgmt.mgmt = #mgmt.mgmt<level = 1, dimension = 3>} : tensor<4xi16>
Propagate ScaleState(36636) to %13 = mgmt.relinearize %12 {mgmt.mgmt = #mgmt.mgmt<level = 1>} : tensor<4xi16>
Propagate ScaleState(1) to %14 = mgmt.level_reduce %input1 {mgmt.mgmt = #mgmt.mgmt<level = 2>} : tensor<4xi16>
Propagate ScaleState(13431) to %15 = mgmt.adjust_scale %14 {mgmt.mgmt = #mgmt.mgmt<level = 2>, scale = 13431 : i64} : tensor<4xi16>
Propagate ScaleState(36636) to %16 = mgmt.modreduce %15 {mgmt.mgmt = #mgmt.mgmt<level = 1>} : tensor<4xi16>
Propagate ScaleState(36636) to %17 = arith.addi %13, %16 {mgmt.mgmt = #mgmt.mgmt<level = 1>} : tensor<4xi16>
Propagate ScaleState(57802) to %18 = mgmt.modreduce %17 {mgmt.mgmt = #mgmt.mgmt<level = 0>} : tensor<4xi16>

The first three lines are computed purely from the BGV scheme parameters; the rest is the analysis validating that the scales match.

The initial scaling factor is chosen to be 1 for both include-first-mul={true,false}; for include-first-mul=false the scaling factors at the last level must match, and 1 * 1 = 1 satisfies this.
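
To make the arithmetic concrete: in BGV, a ciphertext-ciphertext mul multiplies the scales mod the plaintext modulus t, and a modulus switch from level l to l-1 multiplies the scale by q_l^{-1} mod t. A small Python sketch (my illustration, not part of the PR) checking the invariants implied by the log above, assuming t = 65537 as in the ring_Z65537 example later in this thread:

t = 65537
scale    = [57802, 46604, 21845, 1]  # per-level scale, level 0..3
scaleBig = [60481, 36636, 29128, 1]  # scale right after a ct-ct mul

# A ciphertext-ciphertext mul multiplies the scales mod t:
for l in range(4):
    assert scaleBig[l] == scale[l] * scale[l] % t

# The log shows the modreduce from level 3 to 2 maps scale 1 to 21845,
# so q_3^{-1} mod t = 21845; adjustScale[3] = 21845 is then exactly what
# makes %input1 land on scaleBig at level 2 after its own modreduce:
assert 21845 * 21845 % t == scaleBig[2] == 29128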

@ZenithalHourlyRate (Collaborator, Author) commented

It has been quite messy supporting scale, as we have to change all of the following:

  • mgmt insertion policy for both BGV and CKKS
    • insert rescale after mult, before mult, or before mult including the first mult
    • note that the original cross-level policy for CKKS was wrong
  • populate scale with regard to all three rescale insertion policies
  • the LWE type, where adding a new field means quite a bunch of test files need to change
  • the two backends

My idea is to skip the LWE type support and use an attribute to pass the information temporarily, and to skip OpenFHE since it supports that itself anyway.

The pipeline now works for Lattigo; see the example:

func.func @cross_level_add(%base: tensor<4xi16> {secret.secret}, %add: tensor<4xi16> {secret.secret}) -> tensor<4xi16> {
  // increase one level
  %mul1 = arith.muli %base, %add : tensor<4xi16>
  // cross level add
  %base1 = arith.addi %mul1, %add : tensor<4xi16>
  return %base1 : tensor<4xi16>
}

After proper management and scale calculation, we get:

      %1 = mgmt.modreduce %input0 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 4>} : tensor<4xi16>
      %2 = mgmt.modreduce %input1 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 4>} : tensor<4xi16>
      %3 = arith.muli %1, %2 {mgmt.mgmt = #mgmt.mgmt<level = 1, dimension = 3, scale = 16>} : tensor<4xi16>
      %4 = mgmt.relinearize %3 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 16>} : tensor<4xi16>
      // need to adjust the scale by mul_const delta_scale
      %5 = mgmt.adjust_scale %input1 {delta_scale = 4 : i64, mgmt.mgmt = #mgmt.mgmt<level = 2, scale = 4>, scale = 4 : i64} : tensor<4xi16>
      %6 = mgmt.modreduce %5 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 16>} : tensor<4xi16>
      %7 = arith.addi %4, %6 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 16>} : tensor<4xi16>
      %8 = mgmt.modreduce %7 {mgmt.mgmt = #mgmt.mgmt<level = 0, scale = 65505>} : tensor<4xi16>

adjust_scale is materialized as follows:

    %cst = arith.constant dense<1> : tensor<4xi16>
    %pt = lwe.rlwe_encode %cst {encoding = #full_crt_packing_encoding, lwe.scale = 4 : i64, ring = #ring_Z65537_i64_1_x4_} : tensor<4xi16> -> !pt
    %ct_5 = bgv.mul_plain %ct_0, %pt : (!ct_L2_, !pt) -> !ct_L2_
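
Why this is only a metadata change: the plaintext encodes the constant 1, so the multiplication changes the scale but not the message. A minimal Python model (my sketch, assuming a BGV-style encoding of m as m * scale mod t):

t = 65537

def encode(m, scale):
    return m * scale % t

def decode(x, scale):
    return x * pow(scale, -1, t) % t

m = 42
ct = encode(m, 1)           # ciphertext message at scale 1
pt_one = encode(1, 4)       # the dense<1> constant encoded at scale 4
prod = ct * pt_one % t      # mul_plain: scales multiply
assert decode(prod, 1 * 4) == m   # message is unchanged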

When emitted to Lattigo with the debug handler, we can observe exactly the same scale changes:

Input
  Scale:  1
Input
  Scale:  1
lattigo.bgv.rescale_new
  Scale:  4
lattigo.bgv.rescale_new
  Scale:  4
lattigo.bgv.mul_new
  Scale:  16
lattigo.bgv.relinearize_new
  Scale:  16
// this is adjust_scale
lattigo.bgv.mul_new
  Scale:  4
lattigo.bgv.rescale_new
  Scale:  16
lattigo.bgv.add_new
  Scale:  16
lattigo.bgv.rescale_new
  Scale:  65505
Result [4 9 16 25]
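
The trace can be replayed numerically; a sketch (mine, not from the PR) with t = 65537, where the rescale factors 4 and 65535 (the q_l^{-1} mod t values) are inferred from the printed scales rather than taken from the PR:

t = 65537
s = 1
s = s * 4 % t          # rescale_new: 1 -> 4
s = s * s % t          # mul_new: 4 * 4 -> 16
adj = 1 * 4 % t        # adjust_scale: mul_plain by 1 encoded at scale 4
adj = adj * 4 % t      # rescale_new: 4 -> 16
assert adj == s == 16  # add_new requires matching scales
s = s * 65535 % t      # rescale_new: 16 -> 65505 (65535 = -2 mod t)
assert s == 65505

# The printed result is consistent with hypothetical inputs
# base = [1, 2, 3, 4] and add = [2, 3, 4, 5] (not shown in the PR):
assert [b * a + a for b, a in zip([1, 2, 3, 4], [2, 3, 4, 5])] == [4, 9, 16, 25]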

@j2kun (Collaborator) commented Mar 3, 2025

Talking about this in office hours. Some ideas:

  • Have one integer scaling-factor attribute that specifies its own bitwidth, to support large-bitwidth scaling factors (e.g., for CKKS). Then the lowerings to a particular backend (e.g., C++/OpenFHE) would need to pick an appropriate type (perhaps long double) to represent that scaling factor in the target language.
  • Make the scaling-factor attribute optional on the LWE type to avoid having to update the entire codebase, and raise errors or handle the default case when the scaling factor is not present. We could also have some backlog work to update the rest of the codebase so that scaling factors are present everywhere, and later remove the optionality.

@ZenithalHourlyRate ZenithalHourlyRate force-pushed the bgv-scaled branch 2 times, most recently from 50174dd to 180b6c9 Compare March 5, 2025 08:06
@ZenithalHourlyRate ZenithalHourlyRate marked this pull request as ready for review March 5, 2025 17:21
@ZenithalHourlyRate ZenithalHourlyRate changed the title BGV: support scale management BGV/CKKS: support scale management Mar 5, 2025
@ZenithalHourlyRate (Collaborator, Author) commented

99 files changed so far... it would be insane if more changes were introduced. Asking for review now, because many technical changes need discussion/decisions. Docs/cleanup are not done yet.

Loop problem

The hard part of supporting scale management is making the scales match everywhere. The current state of the PR breaks loop support.

The intrinsic problem with loop support is that we need to make it sufficiently FHE-aware. This is the same problem as LevelAnalysis in #1181, where we want to know which invariants the loop keeps. We used to think only about keeping the level/dimension the same; now we additionally need to keep the scale the same.

The following example shows that the current matmul code cannot survive the scale analysis:

affine.for ... iter_args(%result_tensor) { // assume scale 2^45 initially
  // %a and %const are both at scale 2^45
  %0 = mul_plain %a, %const              // result scale 2^90
  %1 = add %0, %result_tensor            // scale 2^90
  tensor.insert %1 into %result_tensor   // scale mismatch: 2^90 vs 2^45!
}

This certainly needs some insight into the loop.

We cannot even deal with the unrolled version, because we need some kind of back-propagation:

%result = tensor.empty // scale unknown when we first encounter it
tensor.insert %sth into %result // only now do we know the scale

Current status

  • Changed secret-insert-mgmt to offer three different ways of inserting mgmt ops; note that these ways have different implications for the initial scaling factor.
    • For before-mul-include-first-mul, we need to encode at double degree, like 2^90.
  • In secret-insert-mgmt, cross-level ops are now adjusted to the same level with level_reduce + adjust_scale + mod_reduce.
    • At this point adjust_scale is inserted as adjust_scale { scale = -1 }, as we do not know the scale yet.
  • annotate-mgmt now also annotates the MgmtAttr for plaintexts, because we need to know the scale of each plaintext. As multiple arith.constant ops would otherwise be canonicalized away, mgmt.no_op is introduced as a placeholder carrying the mgmt attr.
  • generate-param generates parameters as usual.
  • populate-scale knows the parameters, so it can determine the scale of each ciphertext; it fills each adjust_scale with a concrete scale via a back-propagation-style heuristic, and adjust_scale is then lowered to mul_plain %ct, 1 where the arith.constant 1 carries a mgmt attr with scale = N. Note that this is a metadata change, not a message change.
  • secret-to-<scheme> carries the scale into the LWE type.
  • Refactored the code structure of the LWE-type-related dialect, with a verifier on scale.
  • Made the backends scale-aware (Lattigo can set pt.Scale = NewScale(Pow(2, 90)); for OpenFHE we cannot).

Problem with backend

  • OpenFHE does the automation itself, so the mul_plain %ct, 1 there has no metadata effect. On the contrary, it introduces noise, and then OpenFHE does the adjustment itself anyway, introducing more noise. We might want to just turn off adjust_scale for the OpenFHE backend, but then the scale-matching problem surfaces in the LWE type system, where we might need an lwe.opaque_scale_cast op indicating that the backend is doing the job itself.

  • For Lattigo BGV, our adjustment to the scale is exact and Lattigo accepts it. For CKKS this is not the case: there will be a tiny scale mismatch (2^44.9999999 != 2^45), so Lattigo automatically rescales somewhere, assumes an extra level, and fails at execution when it finds no more levels to consume.


@ZenithalHourlyRate (Collaborator, Author) commented

99 files changed so far... it would be insane if more changes were introduced.

The hard part of supporting scale management is making the scales match everywhere. The current state of the PR breaks loop support.

I think I am going to put this PR on hold, as the loop part is important. I intend to open another PR that adds a pass that takes a loop and analyzes whether it is expressible in the FHE world, at least for LevelAnalysis; then we can make ScaleAnalysis happy.

In the meantime, some non-critical parts of this have been split out into PRs like #1540.

@j2kun (Collaborator) commented Mar 11, 2025

I am still digesting many of these details, but this one stuck out for me:

We cannot even deal with the unrolled version, because we need some kind of back-propagation

We had the same problem with layout management, a tensor.empty has no intrinsic layout when we visit it. We added the assign_layout op to give it a default placeholder layout, and then a back-propagation pass would support merging downstream layout changes into this op to "override" it with a better layout discovered later. Could you use the same idea for scale? I.e., set some default assign_scale in a forward-propagation pass, then in a backward propagation pass you could hoist adjust_scale ops backward through the IR, and if you encounter assign_scale you can replace it with the desired adjusted scale and then use that to set the scale of the initial empty tensor. In the forward pass you could insert adjust_scale ops whenever, e.g., something with the wrong scale tries to insert into a tensor with a different scale.

@asraa if this makes sense for scale, this is also making me think: how generalizable is our Fhelipe forward-backward propagation pass to concepts beyond layouts? Could we have an agnostic sort of "compatibility optimizer" pass that allows one to plug in the ops for "assign" and "convert" and fit to a cost model interface? Or am I just wallowing in abstraction for its own sake 🤷‍♂️
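
As a toy illustration of the forward/backward idea sketched above (my sketch; the op names assign_scale and scale_preserving here are hypothetical, not HEIR's actual implementation):

class Op:
    def __init__(self, name, operand=None, scale=None):
        self.name, self.operand, self.scale = name, operand, scale

def fold_adjust_into_assign(adjust):
    # Walk from an adjust_scale toward its producer through ops that
    # preserve scale; if the chain ends at an assign_scale placeholder,
    # override the placeholder's scale and drop the adjustment.
    producer = adjust.operand
    while producer is not None and producer.name == "scale_preserving":
        producer = producer.operand
    if producer is not None and producer.name == "assign_scale":
        producer.scale = adjust.scale  # the better scale discovered later
        adjust.name = "no_op"          # adjustment no longer needed
        return True
    return False

# A tensor.empty gets a placeholder scale, later overridden to 16:
empty = Op("assign_scale", scale=1)
chain = Op("scale_preserving", operand=empty)
adj = Op("adjust_scale", operand=chain, scale=16)
assert fold_adjust_into_assign(adj) and empty.scale == 16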

@Copilot (Copilot AI) left a comment

Pull Request Overview

This PR adds design documentation for managing ciphertext scale in BGV and CKKS schemes, explaining the required operations and strategies. Key changes include:

  • A detailed description of ciphertext management operations.
  • An explanation of modulus switching, relinearization, and scale management in BGV.
  • An overview of how the design applies similarly to CKKS.

@ZenithalHourlyRate (Collaborator, Author) commented

After the dependent PRs #1540 #1586 #1611 were merged, this PR is now much cleaner; still, it is big enough (85 files changed, with 3k additional lines).

Notable changes

  • Loop support is alive again: we can skip the management passes entirely by specifying backend=openfhe, and additionally providing a scheme parameter skips all the analyses, so infinite loops like #1364 (AnnotateMgmt for programs with loops fails: LevelAnalysis yields an infinite loop) won't happen.
  • Only the Lattigo pipeline has scale management, and all the messy code (the back-propagation mentioned above) is now handled by the backward analysis support in the MLIR framework, i.e. ScaleAnalysisBackward. A similar LevelAnalysisBackward is also introduced. The backward analyses are responsible for handling plaintexts and adjust_scale, which naturally require back-propagation from ciphertexts.
  • For plaintext operands, mgmt.no_op is inserted as a holder for the mgmt attribute, in case the canonicalizer merges arith.constant ops that have different annotations.
  • adjust_scale is materialized to mul_plain %ct, %c1, with %c1 encoded at a specific scaling factor, in populate-scale-<scheme>.
  • Test cases and e2e tests for the different modulus-switching and scale-management policies are added.

Other parts are described in the management.md design doc.

@ZenithalHourlyRate ZenithalHourlyRate force-pushed the bgv-scaled branch 2 times, most recently from b8523c0 to 31b8770 Compare April 1, 2025 00:04
@j2kun (Collaborator) left a comment

I still have to review lib/Transforms, but the Scale analysis looks good so far, and I think the direction is good. I am very excited about the documentation writeup here. I apologize for all the suggested edits, but I hope the result will be well polished.

@ZenithalHourlyRate (Collaborator, Author) left a comment

Sorry for not replying to every review comment and instead directly marking them as resolved; otherwise people would receive too many emails. These are all handled in the code accordingly.

(One annoying part of GitHub review is that once I push a new version and comments become outdated, I cannot submit reply comments in batch.)

As for the questions, I put explanations in code comments, as I believe others will have similar questions.

@ZenithalHourlyRate (Collaborator, Author) commented

Fixes #1169 #1364 #785

@j2kun (Collaborator) left a comment

One minor conflict to resolve and then we are good to go!

@ZenithalHourlyRate (Collaborator, Author) left a comment

Rebased to resolve merge conflict

Comment on lines -88 to +89

- def CKKS_MulPlainOp : CKKS_CiphertextPlaintextOp<"mul_plain", [InferTypeOpAdaptor, AllCiphertextTypesMatch, Commutative]> {
+ // MulPlain op result ciphertext type could be different from the input
+ def CKKS_MulPlainOp : CKKS_CiphertextPlaintextOp<"mul_plain", [InferTypeOpAdaptor, Commutative]> {
@ZenithalHourlyRate (Collaborator, Author) commented

Cc @asraa on #1620: BGV/CKKS MulPlainOp does not require the result and input ciphertext types to be the same, since they may have different scales. The verification that result scale = ct scale + pt scale is done via InferTypeOpAdaptor in

// verify plaintext space matches
auto ctPlaintext = ct.getPlaintextSpace();
auto ptPlaintext = pt.getPlaintextSpace();
auto outPlaintext = out.getPlaintextSpace();
if (outPlaintext != inferMulOpPlaintextSpaceAttr(op->getContext(),
                                                 ctPlaintext, ptPlaintext)) {
  return op->emitOpError() << "output plaintext space does not match";
}
return success();

@j2kun added the pull_ready label Apr 3, 2025
@ZenithalHourlyRate (Collaborator, Author) commented

Rebased to resolve merge conflict

copybara-service bot pushed a commit that referenced this pull request Apr 4, 2025
--
39aef37 by Zenithal <i@zenithal.me>:

BGV/CKKS: support scale management

COPYBARA_INTEGRATE_REVIEW=#1459 from ZenithalHourlyRate:bgv-scaled 39aef37
PiperOrigin-RevId: 743968761