Commit 03b7e0e

MirandaWood and ludamad authored
feat: blobs. (#9302)
## The Blobbening

It's happening and I can only apologise.

![image](https://github.com/user-attachments/assets/5592b2ad-55a6-459d-a838-4084b310ee93)

Follows #8955.

### Intro

More detailed stuff below, but the major changes are:
- Publish DA through blobs rather than calldata. This means:
  - No more txs effects hashing
  - Proving that our effects are included in a blob (see below for details) using base -> root rollup circuits
  - Accumulating tx effects in a blob sponge (`SpongeBlob`)
  - Building blocks by first passing a hint of how many tx effects the block will have
  - Updating forge to handle blobs
  - Tests for all the above

### Major Issues

Things that we should resolve before merging:
- Run times massively increased:
  - This is largely because the `nr` code for `blob`s is written with the BigNum lib, which uses a lot of unconstrained code then a small amount of constrained code to verify results. Unfortunately this means we cannot simulate circuits containing blobs (currently `block-root`) using wasm or set `nr` tests to `unconstrained`, because that causes a `stack overflow` in brillig.
  - To avoid straight up failures, I've created nr tests which are not `unconstrained` (meaning `rollup-lib` tests take 10mins or so to run) and I've forced circuit simulation to run in native ACVM rather than wasm (adding around 1min to any tests that simulate `block-root`).
  - Yes, there's more! All the above is happening while we only create _one blob per block_. This is definitely not enough space (we aim for 3 per block), but I imagine tripling the blob `nr` code would only cause more runtime issues.
- ~Data retrieval~ The below will be done in #9101, and for now we use calldata just to keep the archiver working:
  - The current (interim) solution is to still publish the same block body calldata as before, just so the archiver actually runs. This calldata is no longer verified with the txs effects hash, but is checked (in ts) against the known blob hash, so a mismatch will still throw.
  - The actual blob contents will look different to the body calldata, since we will be tightly packing effects and adding length markers before each section (like how log lengths work). I've added to/from methods in `data-retrieval` to aid conversion (an illustrative sketch of this style of packing follows the Description below).
- ~Blob verification precompile gas~ Batching blob KZG proofs is being thought about (see #8955 for progression):
  - The current approach to verify that the published blob matches the tx effects coming from the rollup is to call the point evaluation precompile _for each blob_. This costs 50k gas each time, so is not sustainable.
  - We think it's possible to accumulate the KZG proofs used to validate blobs into one. Mike is thinking about this and whether it's doable using `nr`, so we can call the precompile once per epoch rather than 3 times per block.

### General TODOs

Things I'm working on:
- Moving from 1 to 3 blobs per block
  - This will slow everything down massively, so I'd prefer to solve the runtime issues before tackling this.
  - It's also going to be relatively complex, because the base rollup will need code to fill from one of three SpongeBlob instances and will need to know how to 'jump' from one full blob to the next at any possible index. Hopefully this does not lead to a jump in gates.

### Description

The general maths in nr and replicated across `foundation/blob` is described [here](https://github.com/AztecProtocol/engineering-designs/blob/3362f6ddf62cba5eda605ab4203069b2b77a777c/in-progress/8403-blobbity-boo.md#implementation).
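To make the "tightly packed effects with length markers" idea concrete, here is a minimal TypeScript sketch of that style of encoding. It is illustrative only: the `TxEffects` shape, the `encodeTxEffects` helper and the exact layout are assumptions rather than the actual `data-retrieval` conversion methods; only the prefix constants (`TX_START_PREFIX`, `NOTES_PREFIX`, `NULLIFIERS_PREFIX`) are taken from the `ConstantsGen.sol` changes in this commit.

```ts
// Illustrative only: a hypothetical tightly-packed encoding of one tx's effects
// into blob fields, with a prefix + length marker before each section.
// Prefix values mirror ConstantsGen.sol in this commit; the layout itself is assumed.
const TX_START_PREFIX = 8392562855083340404n;
const NOTES_PREFIX = 3n;
const NULLIFIERS_PREFIX = 4n;

interface TxEffects {
  noteHashes: bigint[];
  nullifiers: bigint[];
}

// Each entry in the returned array stands in for one blob field element.
function encodeTxEffects(tx: TxEffects): bigint[] {
  const fields: bigint[] = [];
  fields.push(TX_START_PREFIX);
  fields.push(0n); // placeholder, patched to the tx's total field count below
  const sections: [bigint, bigint[]][] = [
    [NOTES_PREFIX, tx.noteHashes],
    [NULLIFIERS_PREFIX, tx.nullifiers],
  ];
  for (const [prefix, items] of sections) {
    if (items.length === 0) continue; // empty sections cost nothing
    fields.push(prefix, BigInt(items.length), ...items);
  }
  fields[1] = BigInt(fields.length); // length marker lets a reader skip whole txs
  return fields;
}
```

For example, `encodeTxEffects({ noteHashes: [1n, 2n], nullifiers: [3n] })` yields `[TX_START_PREFIX, 9n, 3n, 2n, 1n, 2n, 4n, 1n, 3n]`: the per-section length markers are what let the real decoder walk a tightly packed blob without fixed-size sections.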
#### Old DA Flow

From the base rollup to L1, the previous flow for publishing DA was:

Nr:
- In the `base` rollup, take in all tx effects we wish to publish and `sha256` hash them to a single value: `tx_effects_hash`
- This value is propagated up the rollup to the next `merge` (or `block-root`) circuit
- Each `merge` or `block-root` circuit simply `sha256` hashes each 'child' `tx_effects_hash` from its left and right inputs
- Eventually, at `block-root`, we have one value: `txs_effects_hash`, which becomes part of the header's content commitment

Ts:
- The `txs_effects_hash` is checked and propagated through the orchestrator and becomes part of the ts class `L2Block` in the header
- The actual tx effects to publish become the `L2Block`'s `.body`
- The `publisher` sends the serialised block `body` and `header` to the L1 block `propose` function

Sol:
- In `propose`, we decode the block `body` and `header`
- The `body` is deconstructed per tx into its tx effects and then hashed using `sha256`, until we have `N` `tx_effects_hash`es (mimicking the calculation in the `base` rollup)
- Each `tx_effects_hash` is then input as a leaf to a wonky tree and hashed up to the root (mimicking the calculation from `base` to `block-root`), forming the final `txs_effects_hash` (see the illustrative sketch below)
- This final value is checked to be equal to the one in the header's content commitment, then stored to be checked against for validating data availability
- Later, when verifying a rollup proof, we use the above header values as public inputs. If they do not match what came from the circuit, the verification fails.

*NB: With batch rollups, I've lost touch with what currently happens at verification and how we ensure the `txs_effects_hash` matches the one calculated in the rollup, so this might not be accurate.*
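For context, a minimal TypeScript sketch of the recomputation the old Sol flow performed, assuming simple pairwise hashing for the tree fold (the actual contract built a "wonky" unbalanced tree shaped by the number of txs, so the pairing and padding differ):

```ts
import { createHash } from 'crypto';

const sha256 = (data: Buffer): Buffer => createHash('sha256').update(data).digest();

// Illustrative recomputation of the old txs_effects_hash: hash each tx's serialised
// effects (as the base rollup did), then fold the per-tx hashes pairwise up to a
// single root (as merge/block-root did).
function computeTxsEffectsHash(txEffectsPerTx: Buffer[]): Buffer {
  if (txEffectsPerTx.length === 0) throw new Error('expected at least one tx');
  let layer = txEffectsPerTx.map(sha256); // one tx_effects_hash per tx
  while (layer.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < layer.length; i += 2) {
      // An odd leftover node is carried up unchanged in this simplified version.
      next.push(
        i + 1 < layer.length ? sha256(Buffer.concat([layer[i], layer[i + 1]])) : layer[i],
      );
    }
    layer = next;
  }
  return layer[0];
}
```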
#### New DA Flow

The new flow for publishing DA is:

Nr:
- In the `base` rollup, we treat tx effects as we treat `PartialStateReference`s - injecting a hint to the `start` and `end` state we expect from processing this `base`'s transaction
- We take all the tx effects to publish and `absorb` them into the given `start` `SpongeBlob` state. We then check the result is the same as the given `end` state
- Like with `PartialStateReference`s, each `merge` or `block-root` checks that the left input's `end` blob state is equal to the right input's `start` blob state
- Eventually, at `block-root`, we check the above _and_ that the left's `start` blob state was empty. Now we have a sponge which has absorbed, as a flat array, all the tx effects in the block we wish to publish
- We inject the flat array of effects as a private input, along with the ts calculated blob commitment, and pass them and the sponge to the blob function
- The blob function:
  - Poseidon hashes the flat array of effects, and checks this matches the accumulated sponge when squeezed (this confirms that the flat array is indeed the same array of tx effects propagated from each `base`)
  - Computes the challenge `z` by hashing this ^ hash with the blob commitment
  - Evaluates the blob polynomial at `z` using the flat array of effects in the barycentric formula (more details on the engineering design link above; an illustrative sketch follows at the end of this description), to return `y`
- The `block-root` adds this triple (`z`, `y`, and commitment `C`) to a new array of `BlobPublicInputs`
- Like how we handle `fees`, each `block-merge` and `root` merges the left and right input arrays, so we end up with an array of each block's blob info

*NB: this will likely change to accumulating to a single set of values, rather than one per block, and is being worked on by Mike. The above also describes what happens for one blob per block for simplicity (it will actually be 3).*

Ts:
- The `BlobPublicInputs` are checked against the ts calculated blob for each block in the orchestrator
- They form part of a serialised array of bytes called `blobInput` (plus the expected L1 `blobHash` and a ts generated KZG proof) sent to L1's `propose` function
- The `propose` transaction is now a special 'blob transaction' where all the tx effects (the same flat array as dealt with in the rollup) are sent as a sidecar
- We also send the serialised block `body`, so the archiver can still read the data back until #9101

*NB: this will change once we can read the blobs themselves from the beacon chain/some web2 client.*

Sol:
- In `propose`, instead of recalculating the `txs_effects_hash`, we send the `blobInput` to a new `validateBlob` function. This function:
  - Gathers the real `blobHash` from the EVM and checks it against the one in `blobInput`
  - Calls the [point evaluation precompile](https://eips.ethereum.org/EIPS/eip-4844#point-evaluation-precompile) and checks that our `z`, `y`, and `C` indeed correspond to the blob we claim
- We now have a verified link between the published blob and our `blobInput`, but still need to link this to our rollup circuit:
  - Each set of `BlobPublicInputs` is extracted from the bytes array and stored against its block number
  - When the `root` proof is verified, we reconstruct the array of `BlobPublicInputs` from the above stored values and use them in proof verification
  - If any of the `BlobPublicInputs` are incorrect (equivalently, if any of the published blobs were incorrect), the proof verification will fail
- To aid users/the archiver in checking their blob data matches a certain block, the EVM `blobHash` has been added to `BlockLog` once it has been verified by the precompile

*NB: As above, we will eventually call the precompile just once for many blobs with one set of `BlobPublicInputs`. This will still be used in verifying the `root` proof to ensure the tx effects match those from each `base`.*

---------

Co-authored-by: ludamad <adam.domurad@gmail.com>
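Below is a minimal TypeScript sketch of the challenge-and-evaluation maths described above. It is a sketch under assumptions, not the actual implementation in the circuits or `foundation/blob`: the hash function, the challenge layout, the value of `omega`, and the ordering of the roots of unity are placeholders; only the overall shape (hash the flat effects, derive `z` from that hash and the commitment `C`, evaluate the blob polynomial barycentrically at `z` to get `y`) follows the description.

```ts
// BLS12-381 scalar field modulus; blob "fields" are elements of this field (EIP-4844).
const BLS_MODULUS =
  52435875175126190479447740508185965837690552500527637822603658699938581184513n;
const FIELDS_PER_BLOB = 4096n;

const mod = (x: bigint): bigint => ((x % BLS_MODULUS) + BLS_MODULUS) % BLS_MODULUS;

// Modular exponentiation; inversion via Fermat's little theorem.
function powMod(base: bigint, exp: bigint): bigint {
  let result = 1n;
  base = mod(base);
  while (exp > 0n) {
    if (exp & 1n) result = mod(result * base);
    base = mod(base * base);
    exp >>= 1n;
  }
  return result;
}
const invMod = (x: bigint): bigint => powMod(x, BLS_MODULUS - 2n);

// Barycentric evaluation of the blob polynomial at z, given its evaluations
// (the flat, zero-padded array of tx effects) at the roots of unity w^i:
//   p(z) = (z^N - 1) / N * sum_i effects[i] * w^i / (z - w^i)
// Assumes z is not itself a root of unity. `omega` (a primitive 4096th root of
// unity) and the ordering of the roots are assumptions here; EIP-4844 stores
// the evaluations in bit-reversed order.
function evaluateBlobAt(effects: bigint[], z: bigint, omega: bigint): bigint {
  let sum = 0n;
  let root = 1n; // w^0
  for (let i = 0; i < Number(FIELDS_PER_BLOB); i++) {
    const d = effects[i] ?? 0n; // zero-pad unused blob fields
    sum = mod(sum + mod(mod(d * root) * invMod(mod(z - root))));
    root = mod(root * omega);
  }
  const factor = mod((powMod(z, FIELDS_PER_BLOB) - 1n) * invMod(FIELDS_PER_BLOB));
  return mod(factor * sum);
}

// Challenge derivation shaped as described above: hash the flat effects array,
// then hash that digest together with the commitment C to get z. The hash
// function is passed in as a placeholder for whatever the circuits actually use.
function blobPublicInputs(
  effects: bigint[],
  commitmentC: bigint[],
  omega: bigint,
  hashFn: (inputs: bigint[]) => bigint,
): { z: bigint; y: bigint } {
  const effectsHash = hashFn(effects);
  const z = mod(hashFn([effectsHash, ...commitmentC]));
  const y = evaluateBlobAt(effects, z, omega);
  return { z, y }; // alongside C, these are what L1 checks via the precompile
}
```

For reference, the point evaluation precompile takes the 192-byte concatenation versioned_hash ‖ z ‖ y ‖ commitment ‖ proof and costs 50,000 gas per call, which is why the description above aims to batch it down to one call per epoch.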
1 parent 69bdf4f commit 03b7e0e

File tree: 128 files changed (17443 additions, 11034 deletions)


.github/workflows/ci.yml

1 addition, 1 deletion

@@ -157,7 +157,7 @@ jobs:
       concurrency_key: build-x86
     # prepare images locally, tagged by commit hash
     - name: "Build E2E Image"
-      timeout-minutes: 40
+      timeout-minutes: 90
       if: (needs.configure.outputs.non-docs == 'true' && needs.configure.outputs.non-barretenberg-cpp == 'true') || github.ref_name == 'master'
       run: |
         earthly-ci ./yarn-project+export-e2e-test-images

l1-contracts/src/core/Rollup.sol

216 additions, 78 deletions
Large diffs are not rendered by default.

l1-contracts/src/core/interfaces/IRollup.sol

6 additions, 9 deletions

@@ -36,20 +36,20 @@ interface IRollup {
   function propose(
     ProposeArgs calldata _args,
     SignatureLib.Signature[] memory _signatures,
-    bytes calldata _body
+    bytes calldata _body,
+    bytes calldata _blobInput
   ) external;
 
   function proposeAndClaim(
     ProposeArgs calldata _args,
     SignatureLib.Signature[] memory _signatures,
     bytes calldata _body,
+    bytes calldata _blobInput,
     EpochProofQuoteLib.SignedEpochProofQuote calldata _quote
   ) external;
 
   function submitEpochRootProof(
-    uint256 _epochSize,
-    bytes32[7] calldata _args,
-    bytes32[] calldata _fees,
+    DataStructures.SubmitProofArgs calldata _submitArgs,
     bytes calldata _aggregationObject,
     bytes calldata _proof
   ) external;
@@ -61,7 +61,7 @@ interface IRollup {
     SignatureLib.Signature[] memory _signatures,
     bytes32 _digest,
     Timestamp _currentTime,
-    bytes32 _txsEffecstHash,
+    bytes32 _blobsHash,
     DataStructures.ExecutionFlags memory _flags
   ) external view;
 
@@ -105,10 +105,7 @@ interface IRollup {
   ) external view;
   function getEpochForBlock(uint256 _blockNumber) external view returns (Epoch);
   function getEpochProofPublicInputs(
-    uint256 _epochSize,
-    bytes32[7] calldata _args,
-    bytes32[] calldata _fees,
+    DataStructures.SubmitProofArgs calldata _submitArgs,
     bytes calldata _aggregationObject
   ) external view returns (bytes32[] memory);
-  function computeTxsEffectsHash(bytes calldata _body) external pure returns (bytes32);
 }

l1-contracts/src/core/libraries/ConstantsGen.sol

20 additions, 4 deletions

@@ -92,8 +92,9 @@ library Constants {
   uint256 internal constant FUNCTION_SELECTOR_NUM_BYTES = 4;
   uint256 internal constant INITIALIZATION_SLOT_SEPARATOR = 1000000000;
   uint256 internal constant INITIAL_L2_BLOCK_NUM = 1;
+  uint256 internal constant FIELDS_PER_BLOB = 4096;
+  uint256 internal constant BLOBS_PER_BLOCK = 3;
   uint256 internal constant PRIVATE_LOG_SIZE_IN_BYTES = 576;
-  uint256 internal constant BLOB_SIZE_IN_BYTES = 126976;
   uint256 internal constant AZTEC_MAX_EPOCH_DURATION = 32;
   uint256 internal constant GENESIS_ARCHIVE_ROOT =
     19007378675971183768036762391356802220352606103602592933942074152320327194720;
@@ -170,6 +171,9 @@ library Constants {
   uint256 internal constant FUNCTION_LEAF_PREIMAGE_LENGTH = 5;
   uint256 internal constant GLOBAL_VARIABLES_LENGTH = 9;
   uint256 internal constant APPEND_ONLY_TREE_SNAPSHOT_LENGTH = 2;
+  uint256 internal constant APPEND_ONLY_TREE_SNAPSHOT_LENGTH_BYTES = 36;
+  uint256 internal constant SPONGE_BLOB_LENGTH = 11;
+  uint256 internal constant BLOB_PUBLIC_INPUTS = 6;
   uint256 internal constant L1_TO_L2_MESSAGE_LENGTH = 6;
   uint256 internal constant L2_TO_L1_MESSAGE_LENGTH = 3;
   uint256 internal constant SCOPED_L2_TO_L1_MESSAGE_LENGTH = 4;
@@ -202,6 +206,7 @@ library Constants {
   uint256 internal constant TX_REQUEST_LENGTH = 12;
   uint256 internal constant TOTAL_FEES_LENGTH = 1;
   uint256 internal constant HEADER_LENGTH = 24;
+  uint256 internal constant HEADER_LENGTH_BYTES = 616;
   uint256 internal constant PRIVATE_CIRCUIT_PUBLIC_INPUTS_LENGTH = 490;
   uint256 internal constant PUBLIC_CIRCUIT_PUBLIC_INPUTS_LENGTH = 866;
   uint256 internal constant PRIVATE_CONTEXT_INPUTS_LENGTH = 37;
@@ -221,9 +226,9 @@ library Constants {
   uint256 internal constant PRIVATE_TO_PUBLIC_KERNEL_CIRCUIT_PUBLIC_INPUTS_LENGTH = 1140;
   uint256 internal constant KERNEL_CIRCUIT_PUBLIC_INPUTS_LENGTH = 605;
   uint256 internal constant CONSTANT_ROLLUP_DATA_LENGTH = 13;
-  uint256 internal constant BASE_OR_MERGE_PUBLIC_INPUTS_LENGTH = 30;
-  uint256 internal constant BLOCK_ROOT_OR_BLOCK_MERGE_PUBLIC_INPUTS_LENGTH = 90;
-  uint256 internal constant ROOT_ROLLUP_PUBLIC_INPUTS_LENGTH = 76;
+  uint256 internal constant BASE_OR_MERGE_PUBLIC_INPUTS_LENGTH = 51;
+  uint256 internal constant BLOCK_ROOT_OR_BLOCK_MERGE_PUBLIC_INPUTS_LENGTH = 666;
+  uint256 internal constant ROOT_ROLLUP_PUBLIC_INPUTS_LENGTH = 652;
   uint256 internal constant GET_NOTES_ORACLE_RETURN_LENGTH = 674;
   uint256 internal constant NOTE_HASHES_NUM_BYTES_PER_BASE_ROLLUP = 2048;
   uint256 internal constant NULLIFIERS_NUM_BYTES_PER_BASE_ROLLUP = 2048;
@@ -272,6 +277,17 @@ library Constants {
   uint256 internal constant START_EMIT_NULLIFIER_WRITE_OFFSET = 208;
   uint256 internal constant START_EMIT_L2_TO_L1_MSG_WRITE_OFFSET = 224;
   uint256 internal constant START_EMIT_UNENCRYPTED_LOG_WRITE_OFFSET = 226;
+  uint256 internal constant TX_START_PREFIX = 8392562855083340404;
+  uint256 internal constant REVERT_CODE_PREFIX = 1;
+  uint256 internal constant TX_FEE_PREFIX = 2;
+  uint256 internal constant NOTES_PREFIX = 3;
+  uint256 internal constant NULLIFIERS_PREFIX = 4;
+  uint256 internal constant L2_L1_MSGS_PREFIX = 5;
+  uint256 internal constant PUBLIC_DATA_UPDATE_REQUESTS_PREFIX = 6;
+  uint256 internal constant NOTE_ENCRYPTED_LOGS_PREFIX = 7;
+  uint256 internal constant ENCRYPTED_LOGS_PREFIX = 8;
+  uint256 internal constant UNENCRYPTED_LOGS_PREFIX = 9;
+  uint256 internal constant CONTRACT_CLASS_LOGS_PREFIX = 10;
   uint256 internal constant PROOF_TYPE_PLONK = 0;
   uint256 internal constant PROOF_TYPE_HONK = 1;
   uint256 internal constant PROOF_TYPE_OINK = 2;

l1-contracts/src/core/libraries/DataStructures.sol

14 additions

@@ -92,4 +92,18 @@ library DataStructures {
     address bondProvider;
     address proposerClaimant;
   }
+
+  /**
+   * @notice Struct for submitting the Epoch Proof
+   * @param epochSize - The size of the epoch (to be promoted to a constant)
+   * @param args - Array of public inputs to the proof (previousArchive, endArchive, previousBlockHash, endBlockHash, endTimestamp, outHash, proverId)
+   * @param fees - Array of recipient-value pairs with fees to be distributed for the epoch
+   * @param blobPublicInputs- The blob PIs for the proof
+   */
+  struct SubmitProofArgs {
+    uint256 epochSize;
+    bytes32[7] args;
+    bytes32[] fees;
+    bytes blobPublicInputs;
+  }
 }

l1-contracts/src/core/libraries/Errors.sol

3 additions, 4 deletions

@@ -60,6 +60,9 @@ library Errors {
   error Rollup__InvalidProposedArchive(bytes32 expected, bytes32 actual); // 0x32532e73
   error Rollup__InvalidTimestamp(Timestamp expected, Timestamp actual); // 0x3132e895
   error Rollup__InvalidVersion(uint256 expected, uint256 actual); // 0x9ef30794
+  error Rollup__InvalidBlobHash(bytes32 blobHash); // 0xc4a168c6
+  error Rollup__InvalidBlobProof(bytes32 blobHash); // 0x5ca17bef
+  error Rollup__InvalidBlobPublicInputsHash(bytes32 expected, bytes32 actual); // 0xfe6b4994
   error Rollup__NoEpochToProve(); // 0xcbaa3951
   error Rollup__NonSequentialProving(); // 0x1e5be132
   error Rollup__NotClaimingCorrectEpoch(Epoch expected, Epoch actual); // 0xf0e0744d
@@ -76,10 +79,6 @@ library Errors {
   error Rollup__NonZeroL2Fee(); // 0x7e728abc
   error Rollup__InvalidBasisPointFee(uint256 basisPointFee); // 0x4292d136
 
-  //TxsDecoder
-  error TxsDecoder__InvalidLogsLength(uint256 expected, uint256 actual); // 0x829ca981
-  error TxsDecoder__TxsTooLarge(uint256 expected, uint256 actual); // 0xc7d44a62
-
   // HeaderLib
   error HeaderLib__InvalidHeaderSize(uint256 expected, uint256 actual); // 0xf3ccb247
   error HeaderLib__InvalidSlotNumber(Slot expected, Slot actual); // 0x09ba91ff

l1-contracts/src/core/libraries/HeaderLib.sol

4 additions, 78 deletions

@@ -24,7 +24,7 @@ import {Errors} from "@aztec/core/libraries/Errors.sol";
  * | 0x0020 | 0x04 | lastArchive.nextAvailableLeafIndex
  * |        |      | ContentCommitment {
  * | 0x0024 | 0x20 | numTxs
- * | 0x0044 | 0x20 | txsEffectsHash
+ * | 0x0044 | 0x20 | blobsHash
  * | 0x0064 | 0x20 | inHash
  * | 0x0084 | 0x20 | outHash
  * |        |      | StateReference {
@@ -91,7 +91,7 @@ library HeaderLib {
 
   struct ContentCommitment {
     uint256 numTxs;
-    bytes32 txsEffectsHash;
+    bytes32 blobsHash;
     bytes32 inHash;
     bytes32 outHash;
   }
@@ -104,7 +104,7 @@ library HeaderLib {
     uint256 totalFees;
   }
 
-  uint256 private constant HEADER_LENGTH = 0x268; // Header byte length
+  uint256 private constant HEADER_LENGTH = Constants.HEADER_LENGTH_BYTES; // Header byte length
 
   /**
    * @notice Decodes the header
@@ -126,7 +126,7 @@ library HeaderLib {
 
     // Reading ContentCommitment
     header.contentCommitment.numTxs = uint256(bytes32(_header[0x0024:0x0044]));
-    header.contentCommitment.txsEffectsHash = bytes32(_header[0x0044:0x0064]);
+    header.contentCommitment.blobsHash = bytes32(_header[0x0044:0x0064]);
     header.contentCommitment.inHash = bytes32(_header[0x0064:0x0084]);
     header.contentCommitment.outHash = bytes32(_header[0x0084:0x00a4]);
 
@@ -160,78 +160,4 @@ library HeaderLib {
 
     return header;
   }
-
-  function toFields(Header memory _header) internal pure returns (bytes32[] memory) {
-    bytes32[] memory fields = new bytes32[](24);
-
-    // must match the order in the Header.getFields
-    fields[0] = _header.lastArchive.root;
-    fields[1] = bytes32(uint256(_header.lastArchive.nextAvailableLeafIndex));
-    fields[2] = bytes32(_header.contentCommitment.numTxs);
-    fields[3] = _header.contentCommitment.txsEffectsHash;
-    fields[4] = _header.contentCommitment.inHash;
-    fields[5] = _header.contentCommitment.outHash;
-    fields[6] = _header.stateReference.l1ToL2MessageTree.root;
-    fields[7] = bytes32(uint256(_header.stateReference.l1ToL2MessageTree.nextAvailableLeafIndex));
-    fields[8] = _header.stateReference.partialStateReference.noteHashTree.root;
-    fields[9] = bytes32(
-      uint256(_header.stateReference.partialStateReference.noteHashTree.nextAvailableLeafIndex)
-    );
-    fields[10] = _header.stateReference.partialStateReference.nullifierTree.root;
-    fields[11] = bytes32(
-      uint256(_header.stateReference.partialStateReference.nullifierTree.nextAvailableLeafIndex)
-    );
-    fields[12] = _header.stateReference.partialStateReference.publicDataTree.root;
-    fields[13] = bytes32(
-      uint256(_header.stateReference.partialStateReference.publicDataTree.nextAvailableLeafIndex)
-    );
-    fields[14] = bytes32(_header.globalVariables.chainId);
-    fields[15] = bytes32(_header.globalVariables.version);
-    fields[16] = bytes32(_header.globalVariables.blockNumber);
-    fields[17] = bytes32(_header.globalVariables.slotNumber);
-    fields[18] = bytes32(_header.globalVariables.timestamp);
-    fields[19] = bytes32(uint256(uint160(_header.globalVariables.coinbase)));
-    fields[20] = bytes32(_header.globalVariables.feeRecipient);
-    fields[21] = bytes32(_header.globalVariables.gasFees.feePerDaGas);
-    fields[22] = bytes32(_header.globalVariables.gasFees.feePerL2Gas);
-    fields[23] = bytes32(_header.totalFees);
-
-    // fail if the header structure has changed without updating this function
-    require(
-      fields.length == Constants.HEADER_LENGTH,
-      Errors.HeaderLib__InvalidHeaderSize(Constants.HEADER_LENGTH, fields.length)
-    );
-
-    return fields;
-  }
-
-  // TODO(#7346): Currently using the below to verify block root proofs until batch rollups fully integrated.
-  // Once integrated, remove the below fn (not used anywhere else).
-  function toFields(GlobalVariables memory _globalVariables)
-    internal
-    pure
-    returns (bytes32[] memory)
-  {
-    bytes32[] memory fields = new bytes32[](Constants.GLOBAL_VARIABLES_LENGTH);
-
-    fields[0] = bytes32(_globalVariables.chainId);
-    fields[1] = bytes32(_globalVariables.version);
-    fields[2] = bytes32(_globalVariables.blockNumber);
-    fields[3] = bytes32(_globalVariables.slotNumber);
-    fields[4] = bytes32(_globalVariables.timestamp);
-    fields[5] = bytes32(uint256(uint160(_globalVariables.coinbase)));
-    fields[6] = bytes32(_globalVariables.feeRecipient);
-    fields[7] = bytes32(_globalVariables.gasFees.feePerDaGas);
-    fields[8] = bytes32(_globalVariables.gasFees.feePerL2Gas);
-
-    // fail if the header structure has changed without updating this function
-    // TODO(Miranda): Temporarily using this method and below error while block-root proofs are verified
-    // When we verify root proofs, this method can be removed => no need for separate named error
-    require(
-      fields.length == Constants.GLOBAL_VARIABLES_LENGTH,
-      Errors.HeaderLib__InvalidHeaderSize(Constants.HEADER_LENGTH, fields.length)
-    );
-
-    return fields;
-  }
 }
