BGV/CKKS: support scale management #1459
Conversation
(force-pushed from ac97379 to 8253471)
It has been quite messy supporting scale, as we have to change the things listed below.

My idea is to skip LWE type support for now and use an attribute to pass the information temporarily, and to skip OpenFHE since it supports this automatically anyway. The pipeline now works for Lattigo; see the example:

```mlir
func.func @cross_level_add(%base: tensor<4xi16> {secret.secret}, %add: tensor<4xi16> {secret.secret}) -> tensor<4xi16> {
  // increase one level
  %mul1 = arith.muli %base, %add : tensor<4xi16>
  // cross level add
  %base1 = arith.addi %mul1, %add : tensor<4xi16>
  return %base1 : tensor<4xi16>
}
```

After proper management and calculation of the scale, we get

```mlir
%1 = mgmt.modreduce %input0 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 4>} : tensor<4xi16>
%2 = mgmt.modreduce %input1 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 4>} : tensor<4xi16>
%3 = arith.muli %1, %2 {mgmt.mgmt = #mgmt.mgmt<level = 1, dimension = 3, scale = 16>} : tensor<4xi16>
%4 = mgmt.relinearize %3 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 16>} : tensor<4xi16>
// need to adjust the scale by mul_const delta_scale
%5 = mgmt.adjust_scale %input1 {delta_scale = 4 : i64, mgmt.mgmt = #mgmt.mgmt<level = 2, scale = 4>, scale = 4 : i64} : tensor<4xi16>
%6 = mgmt.modreduce %5 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 16>} : tensor<4xi16>
%7 = arith.addi %4, %6 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 16>} : tensor<4xi16>
%8 = mgmt.modreduce %7 {mgmt.mgmt = #mgmt.mgmt<level = 0, scale = 65505>} : tensor<4xi16>
```

and for the plaintext operand:

```mlir
%cst = arith.constant dense<1> : tensor<4xi16>
%pt = lwe.rlwe_encode %cst {encoding = #full_crt_packing_encoding, lwe.scale = 4 : i64, ring = #ring_Z65537_i64_1_x4_} : tensor<4xi16> -> !pt
%ct_5 = bgv.mul_plain %ct_0, %pt : (!ct_L2_, !pt) -> !ct_L2_
```

When emitted to Lattigo with the debug handler, we can observe exactly the same scale changes.
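To make the scale arithmetic above concrete, here is a small bookkeeping sketch (not HEIR code). The per-level values of `q_i^-1 mod t` are assumptions chosen so the numbers reproduce the annotated scales in the IR above; real values come from parameter generation.

```python
t = 65537  # plaintext modulus, matching ring_Z65537 in the IR above

# Hypothetical q_i^-1 mod t for the RNS prime dropped when leaving each
# level, picked to match the annotated scales (not real parameters).
inv_q = {2: 4, 1: 65535}

def mul(s1, s2):
    # multiplication multiplies the scales of its operands (mod t)
    return s1 * s2 % t

def modreduce(s, level):
    # dropping the prime for `level` divides the scale by q_level (mod t)
    return s * inv_q[level] % t

def adjust_scale(s, delta):
    # placeholder op: multiply the scale by a chosen delta_scale
    return s * delta % t

s_input = 1                    # inputs start at scale 1 on level 2
s1 = modreduce(s_input, 2)     # %1: scale 4
s2 = modreduce(s_input, 2)     # %2: scale 4
s3 = mul(s1, s2)               # %3 / %4: scale 16
s5 = adjust_scale(s_input, 4)  # %5: delta_scale = 4 -> scale 4
s6 = modreduce(s5, 2)          # %6: scale 16, now matching %4 for the add
s7 = s3                        # %7: addition keeps the common scale 16
s8 = modreduce(s7, 1)          # %8: scale 65505
```

Under these assumed primes, both operands of the `arith.addi` end up at scale 16 and the final modreduce lands on 65505, matching the IR annotations.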
Talking about this in office hours. Some ideas:
(force-pushed from 50174dd to 180b6c9)
Until now, 99 files have changed... it would be insane if more changes were introduced. I am asking for review now because many technical changes need discussion/decision. Docs/cleanup are not done yet.

**Loop problem**

The hard part of supporting scale management is making the scales match everywhere. The current state of the PR will break loop support. The intrinsic problem with loop support is that we need to make it FHE-aware enough. This is the same problem as LevelAnalysis in #1181, where we want to know which invariants are kept by the loop. We used to think only about keeping level/dimension the same; now we also need to keep the scale the same. The following example shows that the current matmul code cannot survive the scale analysis:

```mlir
affine.for %result_tensor               // assume 2^45 scale initially
  %a, %const                            // scale 2^45
  %0 = mul_plain %a, %const             // result scale 2^90
  %1 = add %0, %result                  // scale 2^90
  tensor.insert %1 into %result_tensor  // scale mismatch!
```

This certainly needs some insight into the loop. We cannot even deal with the unrolled version, because we need some kind of back-propagation:

```mlir
%result = tensor.empty            // how do we know its scale when we encounter it?
tensor.insert %sth into %result   // only now do we know
```

**Current status**
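To illustrate the loop problem above, here is a sketch of the scale analysis applied to one loop iteration (assumed semantics, not HEIR code; scales are tracked as log2 exponents, so 2^45 becomes 45, and the mismatch rule is illustrative).

```python
# One iteration of the matmul-style loop body above: the accumulator
# tensor starts at scale 2^45, but mul_plain doubles the log-scale.
def analyze_iteration(tensor_scale, a_scale, const_scale):
    s0 = a_scale + const_scale  # mul_plain multiplies scales: 45 + 45 = 90
    s1 = s0                     # add keeps the scale
    if s1 != tensor_scale:      # tensor.insert requires matching scales
        raise ValueError(
            f"scale mismatch: inserting 2^{s1} into a 2^{tensor_scale} tensor")
    return s1

try:
    analyze_iteration(tensor_scale=45, a_scale=45, const_scale=45)
    ok = True
except ValueError as err:
    ok = False
    msg = str(err)
```

A modreduce or adjust_scale inside the body would have to restore the 2^45 invariant before the insert; inferring that per-iteration fixup is exactly what the analysis cannot do yet.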
Problem with backend
(force-pushed from 4cb5285 to c404362)
I think I am going to put this PR on hold, as the loop part is important. I intend to open another PR that possibly adds a pass taking a loop and analysing whether it is expressible in the FHE world, at least for LevelAnalysis; then we can make ScaleAnalysis happy. In the meantime, some non-critical parts of this are being split out into PRs like #1540 and further PRs.
I am still digesting many of these details, but this one stuck out for me:

We had the same problem with layout management. @asraa, if this makes sense for scale, it also makes me think: how generalizable is our Fhelipe forward-backward propagation pass to concepts beyond layouts? Could we have an agnostic sort of "compatibility optimizer" pass that allows one to plug in the ops for "assign" and "convert" and fit to a cost model interface? Or am I just wallowing in abstraction for its own sake 🤷♂️
(force-pushed from d24ca65 to bdb2625)
(force-pushed from bdb2625 to aa4e0c1)
(force-pushed from 471e107 to 6e057b5)
Pull Request Overview
This PR adds design documentation for managing ciphertext scale in BGV and CKKS schemes, explaining the required operations and strategies. Key changes include:
- A detailed description of ciphertext management operations.
- An explanation of modulus switching, relinearization, and scale management in BGV.
- An overview of how the design applies similarly to CKKS.
After the dependent PRs #1540 #1586 #1611 merged, this PR is now much cleaner, but it is still big enough (85 files changed with 3k additional lines).

Notable changes

Other parts are described in the design doc.
(force-pushed from b8523c0 to 31b8770)
I still have to review lib/Transforms, but the Scale analysis looks good so far and I think the direction is good. I am very excited about the documentation writeup here. I apologize for all the suggested edits, but I hope the result will be well polished.
Sorry for not replying to every review comment and instead directly marking them as resolved; otherwise people would receive too many emails. These are handled in the code accordingly.

(One annoying part of GitHub review is that once I push a new version and comments become outdated, I cannot submit reply comments in batch.)

As for those questions, I put explanations in code comments, as I believe anyone else will have similar questions.
(force-pushed from 31b8770 to b1edcc2)
One minor conflict to resolve and then we are good to go!
(force-pushed from b1edcc2 to 39aef37)
Rebased to resolve merge conflict
```diff
-def CKKS_MulPlainOp : CKKS_CiphertextPlaintextOp<"mul_plain", [InferTypeOpAdaptor, AllCiphertextTypesMatch, Commutative]> {
+// MulPlain op result ciphertext type could be different from the input
+def CKKS_MulPlainOp : CKKS_CiphertextPlaintextOp<"mul_plain", [InferTypeOpAdaptor, Commutative]> {
```
Cc @asraa on #1620: BGV/CKKS MulPlainOp does not require the result and input ciphertext types to be the same, since they may have different scales. The verification that result scale = ct scale + pt scale is done by InferTypeOpAdaptor in heir/lib/Dialect/LWE/IR/LWEOps.h (lines 69 to 78 at 55d883d):

```cpp
// verify plaintext space matches
auto ctPlaintext = ct.getPlaintextSpace();
auto ptPlaintext = pt.getPlaintextSpace();
auto outPlaintext = out.getPlaintextSpace();
if (outPlaintext != inferMulOpPlaintextSpaceAttr(op->getContext(),
                                                 ctPlaintext, ptPlaintext)) {
  return op->emitOpError() << "output plaintext space does not match";
}
return success();
```
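Reduced to just the scale component, that verifier check can be sketched as follows (assumptions: BGV-style scales that multiply mod t, consistent with the mul_plain example earlier in the thread; the real verifier compares full plaintext-space attributes, of which scale is only one part).

```python
t = 65537  # plaintext modulus, as in the earlier example

def infer_mul_plain_scale(ct_scale, pt_scale):
    # the inferred result scale of mul_plain is the product of
    # the operand scales (mod t)
    return ct_scale * pt_scale % t

def verify_mul_plain(ct_scale, pt_scale, out_scale):
    # mirror of the verifier above: reject result types whose scale
    # does not match the inferred one
    if out_scale != infer_mul_plain_scale(ct_scale, pt_scale):
        return "output plaintext space does not match"
    return "success"

# a ct at scale 4 times a pt encoded with lwe.scale = 4 must yield scale 16
ok = verify_mul_plain(4, 4, 16)
bad = verify_mul_plain(4, 4, 4)
```

This is why the `AllCiphertextTypesMatch` constraint was dropped from the op definition: the result type legitimately differs from the input ciphertext type whenever the plaintext scale is not 1.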
(force-pushed from 39aef37 to 659d4b6)
Rebased to resolve merge conflict
(force-pushed from 659d4b6 to a049849)
See #1169
Fixes #1169 Fixes #1364 Fixes #785
I am afraid we can only do this for the Lattigo backend, as OpenFHE does not expose an explicit API for setting the scale. The policy implemented here is in line with OpenFHE's implementation, though, and OpenFHE does this automatically.

The detailed rationale/implementation of the scale management is in the design doc within this PR.
There are a few changes to support scale:

- Add `mgmt.level_reduce` and `mgmt.adjust_scale` ops to support the corresponding operations
- Change `secret-insert-mgmt-bgv` to use these ops to handle cross-level ops, where `adjust_scale` is a placeholder
- `--validate-noise` will generate parameters aware of these management ops
- Add `--populate-scale` (better name wanted) to concretely fill in the scale based on the parameters

TODO

- `include-first-mul` option
- `LWE` dialect scale support
- `--mlir-to-<scheme>` #1420

Cc @AlexanderViand-Intel: a comment on #1295 (comment) is that the two backends we have can safely `Add(ct0, ct1)` with ciphertexts of different scales, because internally, when they find the scales mismatching, they just adjust the scale themselves. So the mixed-degree option for `optimize-relinearization` can be turned on without affecting correctness, though the noise is different. Merging this PR does not fix the scale-mismatch problem possibly induced by `optimize-relinearization` for our current two backends, but it does pave the way for our own `poly` backend, which must be scale-aware.

Example
The input mlir:

After `secret-insert-mgmt-bgv`, we get … where `adjust_scale` has no concrete scale parameter.

After `--validate-noise` and `--populate-scale`, we get the per-level scale and the value to fill in for each `adjust_scale`, where the first three lines are purely calculated from the BGV scheme parameters and the latter is the analysis validating whether the scales match.
The initial scaling factor is chosen to be 1 for both `include-first-mul={true,false}`; for `include-first-mul=false`, the scaling factor at the last level must be the same, so we have `1 * 1 = 1`.
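A minimal sketch of why an initial scaling factor of 1 is self-consistent (assumed semantics, not HEIR code; the plaintext modulus is the hypothetical one used throughout this thread).

```python
t = 65537  # hypothetical plaintext modulus

def mul_scale(s1, s2):
    # BGV multiplication multiplies the scaling factors (mod t)
    return s1 * s2 % t

# For include-first-mul=false, both operands of the multiplication at the
# last level must carry the same scaling factor s; with s = 1 the product
# stays at 1, so no adjust_scale is needed there.
initial_scale = 1
final = mul_scale(initial_scale, initial_scale)  # 1 * 1 = 1
```

Any other choice of initial scale s would leave the last level at s * s, forcing an extra scale adjustment against the s expected elsewhere.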