
Releases: ArweaveTeam/arweave

Release 2.9.5-alpha1

09 Mar 13:23
Pre-release

This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

This release includes several bug fixes. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you believe one of the listed bug fixes will improve your mining experience.

verify Tool Improvements

This release contains several improvements to the verify tool. Several miners have reported block failures due to invalid or missing chunks. The hope is that the verify tool improvements in this release will either allow those errors to be healed, or provide more information about the issue.

New verify modes

The verify tool can now be launched in log or purge modes. In log mode the tool will log errors but will not flag the chunks for healing. In purge mode all bad chunks will be marked as invalid and flagged to be resynced and repacked.

To launch in log mode specify the verify log flag. To launch in purge mode specify the verify purge flag. Note: verify true is no longer valid and will print an error on launch.
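As a sketch of how the new modes might be launched (the data_dir path and storage_module spec below are placeholders, and the exact flag placement is assumed to match the verify launch command shown in the 2.8.1 notes):

```shell
# Launch the verify tool in log mode: errors are logged, but no chunks
# are flagged for healing. Paths and module specs are placeholders.
./bin/start verify log data_dir /opt/data \
    storage_module 10,addr.replica.2.9

# If the log-mode results look as expected, relaunch in purge mode to
# flag bad chunks for resync and repack.
./bin/start verify purge data_dir /opt/data \
    storage_module 10,addr.replica.2.9
```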

Chunk sampling

The verify tool will now sample 1,000 chunks and perform a full unpack and validation of each sampled chunk. This sampling mode is intended to give a statistical measure of how much data might be corrupt. To change the number of chunks sampled, use the verify_samples option. E.g. verify_samples 500 will have the node sample 500 chunks.

More invalid scenarios tested

This latest version of the verify tool detects several new types of bad data. The first time you run the verify tool we recommend launching it in log mode and running it on a single partition. This should avoid any surprises due to the more aggressive detection logic. If the results are as you expect, then you can relaunch in purge mode to clean up any bad data. In particular, if you've misnamed your storage_module the verify tool will invalidate all chunks and force a full repack - running in log mode first will allow you to catch this error and rename your storage_module before purging all data.

Bug Fixes

  • Fix several issues which could cause a node to "desync". Desyncing occurs when a node gets stuck at one block height and stops advancing.
  • Reduce the volume of unnecessary network traffic due to a flood of 404 requests when trying to sync chunks from a node which only serves replica.2.9 data. Note: the benefit of this change will only be seen when most of the nodes in the network upgrade.
  • Improve HTTP handling, which should also boost performance more generally.
  • Add TX polling so that a node will pull missing transactions in addition to receiving them via gossip.

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • AraAraTime
  • BerryCZ
  • bigbang
  • BloodHunter
  • Butcher_
  • dlmx
  • dzeto
  • edzo
  • EvM
  • Fox Malder
  • Iba Shinu
  • JF
  • jimmyjoe7768
  • lawso2517
  • MaSTeRMinD
  • MCB
  • Methistos
  • Michael | Artifact
  • qq87237850
  • Qwinn
  • RedMOoN
  • smash
  • sumimi
  • T777
  • Thaseus
  • Vidiot
  • Wednesday
  • wybiacx

What's Changed

Full Changelog: N.2.9.4...N.2.9.5-alpha1

Release 2.9.4

09 Feb 22:20

This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

This release includes several bug fixes. We recommend upgrading, but it's not required. All releases 2.9.1 and higher implement the consensus rule changes for the 2.9 hard fork and should be sufficient to participate in the network.

Note: this release fixes a packing bug that affects any storage module that does not start on a partition boundary. If you have previously packed replica.2.9 data in a storage module that does not start on a partition boundary, we recommend discarding the previously packed data and repacking the storage module with the 2.9.4 release. This applies only to storage modules that do not start on a partition boundary; all other storage modules are not impacted.

Example of an impacted storage module:

  • storage_module 3,1800000000000,addr.replica.2.9

Example of storage modules that are not impacted:

  • storage_module 10,addr.replica.2.9
  • storage_module 2,1800000000000,addr.replica.2.9
  • storage_module 0,3400000000000,addr.replica.2.9
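The examples above can be checked with a little arithmetic, assuming the standard 3.6 TB partition size (3,600,000,000,000 bytes): a storage module's start offset is its index multiplied by its size, and it sits on a partition boundary only when that offset divides evenly by the partition size.

```shell
# Hypothetical boundary check for the storage_module examples above.
PARTITION=3600000000000

# storage_module 3,1800000000000,... starts at 5.4 TB -- a non-zero
# remainder means it does NOT start on a partition boundary (impacted):
echo $((3 * 1800000000000 % PARTITION))   # 1800000000000

# storage_module 2,1800000000000,... starts at 3.6 TB -- a remainder of
# zero means it DOES start on a partition boundary (not impacted):
echo $((2 * 1800000000000 % PARTITION))   # 0
```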

Other bug fixes and improvements:

  • Fix a regression that caused GET /tx/id/data to fail
  • Fix a regression that could cause a node to get stuck on a single peer while syncing (both sync_from_local_peers_only and syncing from the network)
  • Limit the resources used to sync the tip data. This may address some memory issues reported by miners.
  • Limit the resources used to gossip new transactions. This may address some memory issues reported by miners.
  • Allow the node to heal itself after encountering a not_prepared_yet error. The error has also been downgraded to a warning.

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • AraAraTime
  • bigbang
  • BloodHunter
  • Butcher_
  • dlmx
  • dzeto
  • Iba Shinu
  • JF
  • jimmyjoe7768
  • lawso2517
  • MaSTeRMinD
  • MCB
  • Methistos
  • qq87237850
  • Qwinn
  • RedMOoN
  • sam
  • T777
  • U genius
  • Vidiot
  • Wednesday

What's Changed

Full Changelog: N.2.9.3...N.2.9.4

Release 2.9.3

04 Feb 01:26

This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

It fixes a few bugs:

  • sync and pack stalling
  • ready_for_work error when sync_jobs = 0
  • unnecessary entropy generated on storage modules that are smaller than 3.6TB
  • remove some overly verbose error logs

What's Changed

Full Changelog: N.2.9.2...N.2.9.3

Release 2.9.2

31 Jan 16:38

Arweave 2.9.2

This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

Bug Fixes / Improvements

  • Fix a bug where the node would not sync new data to disks that were 95-99% full
  • Fix a bug causing an error message like [error] ar_chunk_copy:do_ready_for_work/2:135 event: worker_not_found, module: ar_chunk_copy, call: ready_for_work, store_id: default
  • Fix a bug preventing the node from launching on some old Xeon processors
  • Improve the efficiency of sharing newly uploaded data between peers
  • Small performance improvement when preparing entropy
  • Small performance improvement when syncing from peers
  • Add two more checks to the verify tool. These checks will identify some scenarios which resulted in a partition having data packed to two formats. In those cases running the verify tool should flag the incorrectly packed chunks as invalid so that they can be synced and repacked.

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • AraAraTime
  • Butcher_
  • dlmx
  • dzeto
  • JF
  • jimmyjoe7768
  • lawso2517
  • MaSTeRMinD
  • Methistos
  • Michael | Artifact
  • qq87237850
  • Qwinn
  • RedMOoN
  • sam
  • some1else
  • sumimi
  • T777
  • U genius
  • Vidiot
  • Wednesday

What's Changed

Full Changelog: N.2.9.1...N.2.9.2

Release 2.9.1

27 Jan 21:26

Arweave 2.9.1

This Arweave node implementation proposes a hard fork that activates at height 1602350, approximately 2025-02-03 14:00 UTC. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem. Additionally, this release was audited by NCC Group.

Note: with 2.9.1 when enabling the randomx_large_pages option you will need to configure 5,000 HugePages rather than the 3,500 required for earlier releases.
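One common way to configure HugePages on Linux is via sysctl (run as root); this is a minimal sketch, and your distribution may manage sysctl settings differently:

```shell
# Allocate the 5,000 HugePages needed for 2.9.1 with randomx_large_pages.
sysctl -w vm.nr_hugepages=5000

# Persist the setting across reboots:
echo "vm.nr_hugepages=5000" >> /etc/sysctl.conf

# Verify the allocation:
grep HugePages_Total /proc/meminfo
```

If the full allocation fails on a long-running machine, memory may be too fragmented; a reboot usually helps (see the similar note in the 2.8.0 release below).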

Replica 2.9 Format

The primary focus of this release is to complete the implementation, validation, and testing of the Replica 2.9 Format introduced in the previous "early adopter" release: 2.9.0-early-adopter. Those release notes are still a good source of information about the Replica 2.9 Format.

With this 2.9.1 release the Replica 2.9 Format is ready for production use. New and existing miners should consider packing or repacking to the replica.2.9 format.

Note: If you have replica.2.9 data that was previously packed with the 2.9.0-early-adopter release, please delete it before running 2.9.1. There are changes in 2.9.1 which render it incompatible with previously packed replica.2.9 data. spora_2_6 and composite data is unaffected.

Benefits of the Replica 2.9 Format

Arweave 2.9’s format enables:

  • A 90% reduction in required read rate compared to Arweave 2.8, and 97.5% compared to Arweave 2.7.x: miners can read from their drives at a rate of 5 MiB/s (the equivalent of difficulty 10 in Arweave 2.8) without adversely affecting the security of the network. This allows miners to use the most cost-efficient drives to participate in the network, while also lowering pressure on disk I/O during mining operations.
  • A ~96.9% decrease in the compute necessary to pack Arweave data when compared to 2.8 composite.1, and SPoRA_2.6. This decrease also scales approximately linearly for higher packing difficulties. For example, for miners that would have packed with Arweave 2.8 to the difficulty necessary to reach a 5 MB/s read speed (composite.10), Arweave 2.9 will require ~99.56% less energy and time. This represents an efficiency improvement of 32x against 2.7.x and 2.8 composite.1, and ~229x for composite.10.

Packing Performance

Arweave packing consists of two phases:

  1. Entropy generation
  2. Chunk enciphering

In prior packing formats (e.g. spora_2_6 and composite) those phases were merged: for each chunk a small bit of entropy was generated and then the chunk was enciphered. Historically the entropy generation has been the bottleneck and main driver of CPU usage.

With replica.2.9 the phases are separated. Entropy is generated for many chunks, and then that entropy is read and many chunks are enciphered. The entropy generation phase is many times faster than it was for spora_2_6 and composite - in our benchmarks a single node is able to generate entropy for the full weave in ~3 days. The CPU requirements for the enciphering phase are also quite low as enciphering is now a lightweight XOR operation. The end result is that now disk IO is the main bottleneck when packing to replica.2.9.

We have updated the docs to provide guidance on how to approach repacking to replica.2.9: Syncing and Packing Guide

We are working on a follow-up release which will attempt to further optimize the disk IO phase of the packing process.

Changes from the 2.9.0-early-adopter release

  • Previously there was a limitation which degraded packing performance for non-contiguous storage modules. This has been addressed. You can now pack singular and non-contiguous storage modules with no impact on packing performance.
  • All modes of packing to replica.2.9 are supported, i.e. "sync and pack", "cross-module repack", and "repack-in-place". However, you are not yet able to repack from replica.2.9 to any other format.
  • Overall packing performance has improved. Further work is needed to streamline the disk IO during the packing process.
  • The packing_rate flag is now deprecated and will have no impact. It's been replaced by the packing_workers flag which allows you to set how many concurrent worker threads are used while packing. The default is the number of logical cores in the system.
  • The replica_2_9_workers flag controls how many storage modules the node will generate entropy for at once. Only one storage module per physical device will have entropy generated at a time. The default is 8, but the optimal value will vary from system to system.
  • We've updated the Metrics Guide with a new Syncing and Packing Grafana dashboard to better visualize the replica.2.9 packing process.
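A hypothetical launch combining the new flags might look like this (paths, addresses, and the specific values are placeholders; the optimal worker counts will vary by system):

```shell
# 16 concurrent packing worker threads; entropy generated for up to
# 4 storage modules at a time (one per physical device).
./bin/start packing_workers 16 replica_2_9_workers 4 \
    data_dir /opt/data \
    storage_module 0,addr.replica.2.9 \
    storage_module 1,addr.replica.2.9
```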

Support for ECDSA Keys

This release introduces support for ECDSA signing keys. Blocks and transactions now support ECDSA signatures and can be signed with ECDSA keys. RSA keys continue to be supported and remain the default key type.

An upcoming arweave-js release will provide more guidance on using ECDSA keys with the Arweave network.

ECDSA support will activate at the 2.9 hard fork (block height 1602350).

Composite Packing Format Deprecated

The new packing format was discovered as a result of researching an issue (not endangering data, tokens, or consensus) that affects higher difficulty packs of the 2.8 composite scheme. Given this, and the availability of the significantly improved 2.9 packing format, as of block height 1642850 (roughly 2025-04-04 14:00 UTC), data packed to any level of the composite packing format will not produce valid block solutions.

What's Changed

Full Changelog: N.2.8.3...N.2.9.1

Release 2.9.0-early-adopter

13 Dec 00:38

Arweave 2.9.0-Early-Adopter Release Notes

This Arweave node implementation proposes a hard fork that activates at height 1602350, approximately 2025-02-03 14:00 UTC. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem. Additionally, this release was audited by NCC Group.

This 2.9.0 release is an early adopter release. If you do not plan to benchmark and test the new data format, you do not need to upgrade for the 2.9 hard fork yet.

Note: with 2.9.0 when enabling the randomx_large_pages option you will need to configure 5,000 HugePages rather than the 3,500 required for earlier releases.

Replica 2.9 Packing Format

The Arweave 2.9.0-early-adopter release introduces a new data preparation (‘packing’) format. Starting with this release you can begin to test out this new format. This format brings significant improvements to all of the core metrics of data preparation.

To understand the details, please read the full paper here: https://github.com/ArweaveTeam/arweave/blob/release/N.2.9.0-early-adopter/papers/Arweave2_9.pdf

Additionally, an audit of this mechanism was performed by NCC group and is available to read here (the comments highlighted in this audit have since been remediated): https://github.com/ArweaveTeam/arweave/blob/release/N.2.9.0-early-adopter/papers/NCC_Group_ForwardResearch_E020578_Report_2024-12-06_v1.0.pdf

Arweave 2.9’s format enables:

  • A 90% reduction in required read rate compared to Arweave 2.8, and 97.5% compared to Arweave 2.7.x: miners can read from their drives at a rate of 5 MiB/s (the equivalent of difficulty 10 in Arweave 2.8) without adversely affecting the security of the network. This allows miners to use the most cost-efficient drives to participate in the network, while also lowering pressure on disk I/O during mining operations.
  • A ~96.9% decrease in the compute necessary to pack Arweave data when compared to 2.8 composite.1, and SPoRA_2.6. This decrease also scales approximately linearly for higher packing difficulties. For example, for miners that would have packed with Arweave 2.8 to the difficulty necessary to reach a 5 MB/s read speed (composite.10), Arweave 2.9 will require ~99.56% less energy and time. This represents an efficiency improvement of 32x against 2.7.x and 2.8 composite.1, and ~229x for composite.10.

Replica 2.9 Benchmark Tool

If you'd like to benchmark the performance of the new Replica 2.9 packing format on your own machine you can use the new ./bin/benchmark-2.9 tool. It has 2 modes:

  • Entropy generation which generates and then discards entropy. This allows you to benchmark the time it takes for your CPU to perform the work component of packing, ignoring any IO-related effects.
    • To use the entropy generation benchmark run the tool without using any dir flags.
  • Packing which generates entropy, packs some random data, and then writes it to disk. This provides a more complete benchmark of the time it might take your server to pack data. Note: This benchmark does not include unpacking or reading data (and associated disk seek times).
    • To use the packing benchmark mode specify one or more output directories using the multi-use dir flag.
Usage: benchmark-2.9 [format replica_2_9|composite|spora_2_6] [threads N] [mib N] [dir path1 dir path2 dir path3 ...]

format: format to pack. replica_2_9, composite.1, composite.10, or spora_2_6. Default: replica_2_9.
threads: number of threads to run. Default: 1.
mib: total amount of data to pack in MiB. Default: 1024.
     Will be divided evenly between threads, so the final number may be
     lower than specified to ensure balanced threads.
dir: directories to pack data to. If left off, benchmark will just simulate
     entropy generation without writing to disk.
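For example, invocations of the two modes might look like this (thread counts, sizes, and mount points are illustrative placeholders):

```shell
# Entropy-generation benchmark only (no dir flags): 8 threads, 2 GiB total.
./bin/benchmark-2.9 threads 8 mib 2048

# Full packing benchmark writing random packed data to two disks:
./bin/benchmark-2.9 format replica_2_9 threads 8 mib 2048 \
    dir /mnt/disk1 dir /mnt/disk2
```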

Repacking to Replica 2.9

As well as allowing you to run benchmarks, the 2.9.0-early-adopter release also allows you to pack data for the 2.9 format. It has not, however, been fully optimized and tuned for the new entropy distribution scheme. It is included in this build for validation purposes. In our tests, we have observed consistent >=75% reductions in computation requirements (>4x faster packing speeds), but future releases will continue to improve this towards the performance of the benchmarking tool.

To test this functionality run a node with storage modules configured to use the <address>.replica.2.9 packing format. repack_in_place is not yet supported.

Composite Packing Format Deprecated

The new packing format was discovered as a result of researching an issue (not endangering data, tokens, or consensus) that affects higher difficulty packs of the 2.8 composite scheme. Given this, and the availability of the significantly improved 2.9 packing format, as of block height 1642850 (roughly 2025-04-04 14:00 UTC), data packed to any level of the composite packing format will not produce valid block solutions.

Note: This is an "Early Adopter" release. It implements significant new protocol improvements, but is still in validation. This release is intended for members of the community to try out and benchmark the new data preparation mechanism. You will not need to update your node for 2.9 unless you are interested in testing these features, until shortly before the hard fork height at 1602350 – approximately Feb 3, 2025. As this release is intended for validation purposes, please be aware that there is a possibility that data encoded using its new preparation scheme may need to be repacked before 2.9 activates. The first ‘mainline’ releases for Arweave 2.9 will follow in the coming weeks after community validation has been completed.

Full Changelog: N.2.8.3...N.2.9.0-early-adopter

Release 2.8.3

30 Nov 01:49

This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

Bug fixes

  • Fix a performance issue which could cause very low read rates when multiple storage modules were stored on a single disk. The bug had a significant impact on SATA read speeds and hash rates, and a noticeable, but smaller, impact on SAS disks.
  • Fix a bug which caused the Mining Performance Report to report incorrectly for some miners. Notably: 0s in the Ideal and Data Size columns.
  • Fix a bug which could cause the verify tool to get stuck when encountering an invalid_iterator error.
  • Fix a bug which caused the verify tool to fail to launch with the error reward_history_not_found.
  • Fix a performance issue which could cause a node to get backed up during periods of high network transaction volume.
  • Add the packing_difficulty of a storage module to the /metrics endpoint.

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • bigbang
  • BloodHunter
  • Butcher_
  • dzeto
  • edzo
  • foozoolsanjj
  • heavyarms1912
  • JF
  • MCB
  • Methistos
  • Mastermind
  • Qwinn
  • Thaseus
  • Vidiot
  • a8_ar
  • jimmyjoe7768
  • lawso2517
  • qq87237850
  • smash
  • sumimi
  • T777
  • tashilo
  • thekitty
  • wybiacx

What's Changed

Full Changelog: N.2.8.2...N.2.8.3

Release 2.8.2

13 Nov 20:17

Fixes issue with peer history validation upon re-joining the network.

Full Changelog: N.2.8.1...N.2.8.2

Release 2.8.1

11 Nov 13:38

This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

Bug Fix: OOM when setting mining_server_chunk_cache_size_limit

2.8.1 deprecates the mining_server_chunk_cache_size_limit flag and replaces it with the mining_cache_size_mb flag. Miners who wish to increase or decrease the amount of memory allocated to the mining cache can specify the target cache size (in MiB) using the mining_cache_size_mb NUM flag.
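For example, a launch allocating a 4 GiB mining cache might look like this (the data_dir path is a placeholder, and the value is in MiB):

```shell
./bin/start mining_cache_size_mb 4096 data_dir /opt/data
```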

Feature: verify mode

This release includes a new verify mode. When set, the node will run a series of checks on all listed storage_modules. If the node discovers any inconsistencies (e.g. missing proofs, inconsistent indices) it will flag the chunks so that they can be resynced and repacked later. Once the verification completes, you can restart the node in normal mode and it should re-sync and re-pack any flagged chunks.

Note: When running in verify mode several flags will be forced on and several flags are disallowed. See the node output for details.

An example launch command:

./bin/start verify data_dir /opt/data storage_module 10,unpacked storage_module 20,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.1

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • BloodHunter
  • Butcher_
  • JF
  • MCB
  • Mastermind
  • Qwinn
  • Thaseus
  • Vidiot
  • a8_ar
  • jimmyjoe7768
  • lawso2517
  • smash
  • thekitty

What's Changed

Full Changelog: N.2.8.0...N.2.8.1

Release 2.8.0

17 Oct 15:27

This Arweave node implementation proposes a hard fork that activates at height 1547120, approximately 2024-11-13 14:00 UTC. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

Note: with 2.8.0 when enabling the randomx_large_pages option you will need to configure 3,500 HugePages rather than the 1,000 required for earlier releases. More information below.

Composite Packing

The biggest change in 2.8.0 is the introduction of a new packing format referred to as "composite". Composite packing allows miners in the Arweave network to have slower access to the dataset over time (and thus, mine on larger hard drives at the same bandwidth). The packing format used from version 2.6.0 through 2.7.4 will be referred to as spora_2_6 going forward. spora_2_6 will continue to be supported by the software without change for roughly 4 years.

The composite packing format allows node operators to select a difficulty setting from 1 to 32. Higher difficulties take longer to pack data, but have proportionately lower read requirements while mining. For example, the read speeds for a variety of difficulties are as follows:

| Packing Format | Example storage module configuration | Example storage_modules directory name | Time to pack (benchmarked to spora_2_6) | Disk read rate per partition when mining against a full replica |
| --- | --- | --- | --- | --- |
| spora_2_6 | 12,addr | storage_module_12_addr | 1x | 200 MiB/s |
| composite.1 | 12,addr.1 | storage_module_12_addr.1 | 1x | 50 MiB/s |
| composite.2 | 12,addr.2 | storage_module_12_addr.2 | 2x | 25 MiB/s |
| composite.3 | 12,addr.3 | storage_module_12_addr.3 | 3x | 16.6667 MiB/s |
| composite.4 | 12,addr.4 | storage_module_12_addr.4 | 4x | 12.5 MiB/s |
| ... | ... | ... | ... | ... |
| composite.32 | 12,addr.32 | storage_module_12_addr.32 | 32x | 1.5625 MiB/s |
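The read rates in the table follow a simple pattern derived from the values shown: composite difficulty d reads at 50/d MiB/s per partition (one quarter of spora_2_6's 200 MiB/s at difficulty 1). A quick sketch:

```shell
# Per-partition read rate for composite difficulty d is 50/d MiB/s.
awk 'BEGIN { printf "%.4f MiB/s\n", 50 / 2 }'    # composite.2
awk 'BEGIN { printf "%.4f MiB/s\n", 50 / 32 }'   # composite.32
```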

The effective hashrate for a full replica packed to any of the supported packing formats is the same. A miner who has packed a full replica to spora_2_6, composite.1, or composite.32 can expect to find the same number of blocks on average, with higher-difficulty miners reading fewer chunks from their storage per second. This allows the miner to use larger hard drives in their setup without increasing the necessary bandwidth between disk and CPU.

Each composite-packed chunk is divided into 32 sub-chunks and then packed with increasing rounds of the RandomX packing function. Each sub-chunk at difficulty 1 is packed with 10 RandomX rounds. This value was selected to roughly match the time it takes to pack a chunk using spora_2_6. At difficulty 2 each sub-chunk is packed with 20 RandomX rounds - this will take roughly twice as long to pack a chunk as it does with difficulty 1 or spora_2_6. At difficulty 3, 30 rounds, and so on.

Composite packing also uses a slightly different version of the RandomX packing function with further improvements to ASIC resistance properties. As a result when running Arweave 2.8 with the randomx_large_pages option you will need to allocate 3,500 HugePages rather than the 1,000 needed for earlier node implementations. If you're unable to immediately increase your HugePages value we recommend restarting your server and trying again. If your node has been running for a while the memory space may simply be too fragmented to allocate the needed HugePages. A reboot should alleviate this issue.

When mining, all storage modules within the same replica must be packed to the same packing format and difficulty level. For example, a single miner will not be able to build a solution involving chunks from storage_module_1_addr.1 and storage_module_2_addr.2 even if the packing address is the same.

To use composite packing miners can modify their storage_module configuration. E.g. if previously you used storage_module 12,addr and had a storage module directory named storage_module_12_addr now you use storage_module 12,addr.1 and create a directory named storage_module_12_addr.1. Syncing, packing, repacking, and repacking in place are handled the same as before just with the addition of the new packing formats.

While you can begin packing data to the composite format immediately, you will not be able to mine the data until the 2.8 hard fork activates at block height 1547120.

Implications of Composite Packing

By enabling lower read rates the new packing format provides greater flexibility when selecting hard drives. For example, it is now possible to mine 4 partitions off a single 16TB hard drive. Whether you need to pack to composite difficulty 1 or 2 in order to optimally mine 4 partitions on a 16TB drive will depend on the specific performance characteristics of your setup.

CPU and RAM requirements while mining will be lower for composite packing than for spora_2_6, and will continue to fall as the packing difficulty increases. The degree of these efficiency gains has yet to be confirmed through extensive benchmarking, but the lower read rate means a lower volume of data needs to be hashed (CPU) and a lower volume needs to be held in memory (RAM).

Block Header Format

The following block header fields have been added or changed:

  • packing_difficulty: the packing difficulty of the chunks used in the block solution. Both reward_address and packing_difficulty together are needed to unpack and validate the solution chunk. packing_difficulty is 0 for spora_2_6 chunks
  • poa1->chunk and poa2->chunk: under spora_2_6 the full packed chunk is provided. Under composite only a packed sub-chunk is included. A sub-chunk is 1/32 of a packed chunk.
  • poa1->unpacked_chunk and poa2->unpacked_chunk: this field is omitted for spora_2_6, and includes the complete unpacked chunk for all composite blocks.
  • unpacked_chunk_hash and unpacked_chunk_hash2: these fields are omitted under spora_2_6 and contain the hash of the full unpacked_chunks for composite blocks

Other Fixes and Improvements

  • Protocol change: The current protocol (implemented prior to the 2.8 Hard Fork) will begin transitioning the upload pricing to a trustless oracle at block height 1551470. 2.8 introduces a slight change: 3 months of blockchain history rather than 1 month will be used to calculate the upload price.
  • Bug fix: several updates to the RocksDB handling have been made which should reduce the frequency of RocksDB corruption - particularly corruption that may have previously occurred during a hard node shutdown.
    • Note: with these changes the repair_rocksdb option has been removed.
  • Optimization: Blockchain syncing (e.g. block and transaction headers) has been optimized to reduce the time it takes to sync the full blockchain
  • Bug fix: GET /data_sync_record no longer reports chunks that have been purged from the disk pool

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs and providing guidance on performance tuning!

Discord users (alphabetical order):

  • BloodHunter
  • Butcher_
  • dzeto
  • edzo
  • heavyarms1912
  • lawso2517
  • ldp
  • MaSTeRMinD
  • MCB
  • Methistos
  • qq87237850
  • Qwinn
  • sk
  • smash
  • sumimi
  • tashilo
  • Thaseus
  • thekitty
  • Vidiot
  • Wednesday

Code Changes

New Contributors

Full Changelog: N.2.7.4...N.2.8.0