option_data_loss_protect: concretely define my_current_per_commitment_point
#550
Conversation
@Roasbeef did you want to add some clarification to this?
@@ -1154,6 +1154,8 @@ The sending node:
  - MUST set `next_remote_revocation_number` to the commitment number of the
    next `revoke_and_ack` message it expects to receive.
  - if it supports `option_data_loss_protect`:
    - MUST set `my_current_per_commitment_point` to its commitment point for
I think this would be more clear as:

> to its current unrevoked commitment point (sent in the last `revoke_and_ack` message) used by its channel peer to create the last commitment it received from them

As this is the point that the party who lost data will need to use once the surviving party broadcasts their commitment on chain. In our codebase, we call it `LocalUnrevokedCommitPoint` for this reason. Main thing that IMO makes it easier to grasp is that this is the current unrevoked point for that party.
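To make the comment above concrete, here is a hypothetical sketch (the class and field names are made up for illustration; they are not from any real implementation) of which point a node would report as `my_current_per_commitment_point` during reestablish:

```python
# Hypothetical sketch: tracking which of our per-commitment points is the
# current unrevoked one. We generate these points ourselves and announce
# them (in funding_locked / revoke_and_ack); the peer uses them to build
# our commitment transactions.

class ChannelState:
    def __init__(self):
        # Number of our current local commitment: the last one for which
        # we received a valid commitment_signed and have not yet revoked.
        self.local_commitment_number = 0
        # Our per-commitment points, keyed by commitment number.
        self.per_commitment_points = {}

    def local_unrevoked_commit_point(self):
        """The point the peer used to build the last commitment tx we
        received from them -- the point a data-losing counterparty needs
        once we broadcast that commitment on chain."""
        return self.per_commitment_points[self.local_commitment_number]

state = ChannelState()
state.per_commitment_points = {0: "point_0", 1: "point_1"}
state.local_commitment_number = 1
assert state.local_unrevoked_commit_point() == "point_1"
```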
How about we append:

> (the `commitment_point` corresponding to the commitment transaction the sender would use to unilaterally close)
Amended to include the clarification that it is the commitment transaction the sender would use to unilaterally close.
> commitment transaction sender would use to unilaterally close

Still seems like this is ambiguous when we have two unrevoked commitments. IINM, lnd and c-lightning will play different commitments when force closing in this scenario.
I'm being a bit dense, but can you elaborate on the scenario where you'd have two unrevoked commitments?
Since A committed to 1, it's safe for B to consider it's at state 1
The HTLC sent to B isn't fully locked in at this point, it shouldn't be forwarded until B commits to state 1 and A sends a revocation for state 0 to B. The new HTLC is in sort of a limbo state until the full dance is completed.
And it's economically more viable, because B received one more HTLC (compared to state 0). It could also be a settle/fail, in which case state 1 has one fewer HTLC. I could be wrong, but wouldn't it be fairer and economically more interesting to take the highest?
Idk if there's an objective answer to this question. When there are multiple unrevoked commitments, which one is more economically viable depends on the state of the commitments and how the differing states impact the party that holds them.
Game theory would say B should play whichever nets them the most money.
If HTLCs are only added between 0 and 1 and:
- B is certain they won't learn the preimage (bc it was never locked in and couldn't forward it, or it was a probe to an exit hop), there's no reason to manifest that HTLC on chain. Commitment 0 in that case has fewer HTLCs and so we burn less to fees
- B already knows the preimage to the new HTLC (if they're the exit hop), then they should play 1 because they can pull that money (assuming it nets more than the fees to add it). Otherwise they should play 0 and take the loss
All of this is dependent on whether A or B funded the channel, since the initiator will pay the fees. It becomes more complex if there are both additions and removals, and whether the removals are settles or fails.
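The game-theoretic choice sketched in the bullets above can be illustrated with a toy model. Everything here is hypothetical (function name, parameters, and thresholds are invented for illustration); a real decision would also account for dust limits, multiple HTLCs, and which side funded the channel:

```python
# Toy model of which unrevoked commitment B might prefer to broadcast
# when both state 0 (no new HTLC) and state 1 (one new HTLC) are valid.
# Illustrative only -- not from any implementation.

def preferred_state(htlc_amount_sat, knows_preimage, extra_fee_sat):
    """Return 1 if manifesting the new HTLC on chain nets B more than
    the extra fee to include it, else 0 (fewer HTLCs, less burned to fees)."""
    if knows_preimage and htlc_amount_sat > extra_fee_sat:
        return 1  # B can pull the HTLC; worth paying to include it
    return 0      # HTLC is unclaimable or a net loss; play the cheaper state

assert preferred_state(50_000, knows_preimage=True, extra_fee_sat=1_000) == 1
assert preferred_state(50_000, knows_preimage=False, extra_fee_sat=1_000) == 0
assert preferred_state(500, knows_preimage=True, extra_fee_sat=1_000) == 0
```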
Anyway, don't wanna get too far away from the conversation at hand. The question we need to answer is: should the act of receiving a commitment inherently revoke your prior, or is it the act of the receiver persisting the revocation locally. If we go with the latter, at least it is consistent whether or not saving the commitment and revoking the prior are implemented atomically. Thoughts?
Got it, it's indeed hard to decide whether it's more interesting for B to be at state 0 or 1, it depends on too many factors.
I'm not 100% sure what Eclair does in that case, I'll let @pm47 correct me if I'm wrong, but I think that we choose state 1. Alice committed to it, so if we want to play nice we should resume the dance here and send our `revoke_and_ack`.
Eclair would send state 1. The storing of the latest `commit_sig` and the newly generated `revoke_and_ack` happens in one atomic step, so either we are at state 0, or we are at state 1 and the revocation has been generated and sent (and will be re-sent if needed). But aren't we all doing that?
In short I agree with @Roasbeef, we should send the latest unrevoked commitment point.
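The atomic transition described here can be sketched as follows. The in-memory store and all names are hypothetical; a real node would use a single database transaction with durable writes:

```python
# Sketch of the atomic state transition: persist the latest commit_sig
# and the newly generated revoke_and_ack together, so the node is either
# at state N, or at state N+1 with the revocation already generated.

class AtomicStore:
    def __init__(self):
        self.state = 0
        self.commit_sig = None
        self.revocation = None

    def advance(self, new_sig, revocation):
        # In a real node this is one db transaction / fsync'd write;
        # here a single tuple assignment stands in for atomicity.
        self.state, self.commit_sig, self.revocation = (
            self.state + 1, new_sig, revocation)

def on_commitment_signed(store, commit_sig):
    # Generate the revocation for the prior state, then persist both
    # the new commitment and the revocation in one step.
    revocation = f"revoke_and_ack(state={store.state})"
    store.advance(commit_sig, revocation)
    return revocation  # sent now, and re-sent on reconnect if needed

store = AtomicStore()
raa = on_commitment_signed(store, "commit_sig(state=1)")
assert store.state == 1 and raa == "revoke_and_ack(state=0)"
```

With this layout there is never a persisted state where the new commitment is stored but the matching revocation cannot be reproduced, which is what makes the "latest unrevoked commitment point" well defined on reconnect.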
> The storing of the latest `commit_sig` and newly generated `revoke_and_ack` happens in one atomic step, so either we are at state 0, or we are at state 1 and the revocation has been generated and sent (and will be re-sent if needed).
This is also how lnd handles the transition.
I wonder if something here could explain the DLP errors we've been seeing?
Currently implementing this on the rust-lightning side, we went for the latest unrevoked commitment point, and we handle the transition in one atomic step too.

IMO, if you have both valid state 0 and state 1, when you get `commitment_signed` for 1, that's your current per_commitment_point. Before receiving this message, state 1 is just a non-enforceable per_commitment_point previously announced via `funding_locked` or `revoke_and_ack`. When we reply with the RAA we can't be sure that the other peer is going to see it in case of disconnection, but since we received the CS we have at least an agreement with our remote peer on what our latest commitment point is.

> i.e. the `commitment_point` corresponding to the commitment transaction the sender would use to unilaterally close

That doesn't make it clearer: as previously said, you may have two valid commitment transactions, and you can't enforce that the remote uses the latest commitment transaction to unilaterally close.
Force-pushed from d019238 to cd2a016.
ACK cd2a016
`my_current_per_commitment_point`: Make it more obvious what the expected value of `my_current_per_commitment_point` is.
Force-pushed from cd2a016 to fd191fa.
Merging as per action item at last spec meeting.
This was the way DataLossProtect was originally written, however it didn't match other implementations at the time during testing. It turns out, other implementations didn't agree with each other anyway (depending on the exact timeline), so the spec was clarified somewhat in lightning/bolts#550 . This updates us to be in line with the new guidance and appears to solve out-of-sync issues in testing.