
Reduce congestion via Batching ProveCommits #49

Closed
nicola opened this issue Dec 15, 2020 · 3 comments
nicola commented Dec 15, 2020

Problem

ProveCommits and PreCommits are creating network congestion, leading to a high base fee and therefore high costs for SubmitWindowPoSt and PublishStorageDeal.

Proposed solution

Processing multiple ProveCommits at the same time can drastically reduce the gas used per sector. We propose to add a new method: "ProveCommitBatched".

There are two parts that can be amortized:

  • State operations: several state reads and writes can be batched - done once per ProveCommitBatched instead of once per sector (similar to the improvements in Add a batched PreCommitSectors method #25)
  • Batch verification: we currently batch verify the 10 SNARKs within a single ProveCommit; here we propose to batch verify all the proofs in a ProveCommitBatched message.

From now on, I will call the factor of gas saved by doing this the "batching saving factor".

This change should be done in conjunction with #25.

Outline

  • Implement ProveCommitBatched, which accepts from 1 to MaxBatchedProofs proofs and takes advantage of batching of state operations and verification.
  • Disable ProveCommit.

With this mechanism, miners will prefer to batch multiple proofs together, since doing so would substantially reduce their costs.

Discussion

This issue is intended for discussion. There are a number of details to work out before drafting a FIP.

Batch verification parameters

Benchmarks

The following table describes the proof size and the batch verification times.

#proofs  #snarks  size           verification time  efficiency*  savings**
1        10       1,920 bytes    3ms                1            1x
10       100      19,200 bytes   10ms               3.3          3x
20       200      38,400 bytes   15ms               5            4x
30       300      57,600 bytes   23ms               7.66         3.91x
50       500      96,000 bytes   40ms               13.3         3.75x
100      1000     192,000 bytes  53ms               17.76        5.6x
  • efficiency*: how many ProveCommit VerifySeal calls fit into the time of one ProveCommitBatched VerifySeal.
  • savings**: how much gas we save by doing a ProveCommitBatched instead of individual ProveCommits.
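The efficiency and savings columns follow directly from the verification times, under the simplifying assumption that VerifySeal gas is proportional to wall-clock verification time. A quick sketch (illustrative only, not the actual gas model):

```python
# Back-of-the-envelope model for the benchmark table above.
# Assumption (not the real gas schedule): VerifySeal gas scales with
# verification time.
SINGLE_PROOF_MS = 3  # verification time of one ProveCommit (10 SNARKs)

def efficiency(batch_ms):
    # How many single-proof verifications "fit" into one batched verification.
    return batch_ms / SINGLE_PROOF_MS

def savings(n_proofs, batch_ms):
    # Gas saved vs. submitting n_proofs individual ProveCommits.
    return n_proofs * SINGLE_PROOF_MS / batch_ms

for n, ms in [(10, 10), (20, 15), (30, 23), (50, 40), (100, 53)]:
    print(f"{n:>3} proofs: efficiency {efficiency(ms):.2f}, "
          f"savings {savings(n, ms):.2f}x")
```

For example, verifying 100 proofs in 53ms instead of 100 × 3ms = 300ms individually gives the roughly 5.6x savings in the last row.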

Tradeoffs

Here are some back-of-the-envelope calculations to understand the advantages and disadvantages of this proposal:

  • Aggregating more ProveCommits has an advantage in verification time: e.g. a 100-ProveCommitBatched would cost as much as ~17 ProveCommits, which is about a ~5.6x cost reduction.
  • Proof size is not reduced, meaning that:
    • Practical tx size limitation: a 100-ProveCommitBatched will still be 192kB, and it may not be practical to post transactions this large
    • Gas is paid per tx size (about 26k per SNARK)
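The size-related cost is unaffected by batching and can be estimated from the figures above (192 bytes and ~26k gas per SNARK, 10 SNARKs per proof); this is a rough sketch, not the exact gas schedule:

```python
# Size-cost tradeoff sketch, using the figures quoted above.
GAS_PER_SNARK_SIZE = 26_000  # approximate gas paid per SNARK of message size
SNARKS_PER_PROOF = 10        # SNARKs in one ProveCommit proof
BYTES_PER_SNARK = 192        # 1 proof = 1,920 bytes / 10 SNARKs

def batch_size_bytes(n_proofs):
    # Message size grows linearly: batching does not shrink the proofs.
    return n_proofs * SNARKS_PER_PROOF * BYTES_PER_SNARK

def size_gas(n_proofs):
    # Gas paid just for carrying the proof bytes, batched or not.
    return n_proofs * SNARKS_PER_PROOF * GAS_PER_SNARK_SIZE

print(batch_size_bytes(100))  # 192,000 bytes, matching the table
print(size_gas(100))          # ~26M gas for size alone
```

So while verification gas amortizes, the per-byte cost of a 100-proof batch stays at roughly 26M gas, which bounds the overall savings.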

Risks

  • Miner throughput may grow faster than the batch saving factor, leaving gas fees high. In other words, even if gas used per ProveCommit is now, say, 5x less, miners could try to onboard 5x more proofs, so congestion may remain.
  • Small miners may not be able to take advantage of large batches if the timing of their PoReps does not line up.

Implementation details (TODO)

More work needs to go into this, but preliminary:

  • Implementation of ProveCommitBatched in actors
  • Implementation of "batching" of ProveCommits in lotus that takes advantage of large batches without risking the miners' PreCommitDeposit (e.g. the storage miner waits to fill the batch, but the ProveCommit deadline has passed).
  • Increase the max tx size ByteArrayMaxLen
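The lotus-side batching policy mentioned above has to balance batch size against the ProveCommit deadline. A hypothetical sketch of that decision (all names and parameters here are illustrative, not the actual lotus API or network parameters):

```python
# Hypothetical batching policy: flush the pending batch when it is full,
# or when the earliest sector's ProveCommit deadline is close enough that
# waiting longer would risk losing the PreCommitDeposit.
MAX_BATCHED_PROOFS = 100    # assumed value of the MaxBatchedProofs parameter
DEADLINE_SLACK_EPOCHS = 10  # illustrative safety margin before the deadline

def should_flush(pending, current_epoch):
    """pending: list of (sector_id, prove_commit_deadline_epoch) tuples."""
    if not pending:
        return False
    if len(pending) >= MAX_BATCHED_PROOFS:
        return True  # batch is full: maximum amortization reached
    earliest_deadline = min(deadline for _, deadline in pending)
    # Flush early rather than risk missing the deadline while waiting
    # for a bigger (cheaper per-sector) batch.
    return current_epoch >= earliest_deadline - DEADLINE_SLACK_EPOCHS
```

The tradeoff is exactly the small-miner risk noted above: a miner with few sectors in flight ends up flushing small, less efficient batches to stay ahead of its deadlines.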

Open questions

  • What is the largest number of proofs that we can aggregate and still have an OK tx size?
  • What is the range of possible optimizations in state operations and how much are we expecting to get?

nicola commented Dec 15, 2020

Personal opinion:


anorth commented May 18, 2021

@nicola I think we can close this, as we're pursuing aggregation instead of batching.

@kaitlin-beegle

Marking closed, per @anorth's comment. This topic no longer seems relevant to the community.
