v0.45.0 - Sapsido
Unleashing the true scaling potential of NeoFS, this release is the first one to break the old ~320-node limit. It also brings a new powerful search API and distributed settings management commands. The write cache received some care and improved its efficiency. A number of other optimizations and fixes are included as well.
Added
- Initial support for meta-on-chain for objects (#2877)
- First split-object part into the CLI output (#3064)
- `neofs-cli control notary` with `list`, `request` and `sign` commands (#3059)
- IR `fschain.consensus.keep_only_latest_state` and `fschain.consensus.remove_untraceable_blocks` config options (#3093)
- `logger.timestamp` config option (#3105)
- Container system attributes verification on IR side (#3107)
- IR `fschain.consensus.rpc.max_websocket_clients` and `fschain.consensus.rpc.session_pool_size` config options (#3126; see the config sketch after this list)
- `ObjectService.SearchV2` SN API (#3111)
- `neofs-cli object searchv2` command (#3119)
- Retries to update node state on new epoch (#3158)
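The new configuration options from #3093, #3105 and #3126 are plain config keys. The fragment below is a minimal, hypothetical sketch of where they might sit, assuming their dotted names map directly onto YAML nesting; the values are illustrative and are not release defaults.

```yaml
# Hypothetical fragment: option names come from the changelog entries above,
# nesting is assumed from their dotted paths, values are examples only.
logger:
  timestamp: true                     # assumed boolean toggle (#3105)

fschain:
  consensus:
    keep_only_latest_state: true      # assumed boolean toggle (#3093)
    remove_untraceable_blocks: true   # assumed boolean toggle (#3093)
    rpc:
      max_websocket_clients: 128      # example value (#3126)
      session_pool_size: 64           # example value (#3126)
```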
Fixed
- `neofs-cli object delete` command output (#3056)
- More clear error message when validating IR configuration (#3072)
- Panic during shutdown if N3 client connection is lost (#3073)
- The first object of the split object is not deleted (#3089)
- The parent of the split object is not removed from the metabase (#3089)
- A split expired object is not deleted after the lock is removed or expired (#3089)
- `neofs_node_engine_list_containers_time_bucket` and `neofs_node_engine_exists_time_bucket` metrics (#3014)
- `neofs_node_engine_list_objects_time_bucket` metric (#3120)
- The correct role parameter to invocation (#3127)
- nil pointer error for `storage sanity` command (#3151)
- Process designation event of the mainnet RoleManagement contract (#3134)
- Storage nodes running out of GAS because `putContainerSize` was not paid for by proxy (#3167)
- Write-cache flushing loop to drop objects (#3169)
- FS chain client pool overflows and event ordering (#3163)
Changed
- Number of concurrently handled notifications from the chain was increased from 10 to 300 for IR (#3068)
- Write-cache size estimations (#3106)
- New network map support solving the limit of ~320 nodes per network (#3088)
- Calculation of VUB for zero hash (#3134)
- More efficient block header subscription is now used instead of the block-based one (#3163)
- `ObjectService.GetRange(Hash)` ops now handle zero ranges as full payload (#3071)
- Add some GAS to system fee of notary transactions (#3176)
- IR now applies sane limits to containers' storage policies recently added to the protocol (#3075)
- Objects in write-cache are now flushed in parallel (#3179)
Removed
Updated
Updating from v0.44.2
Using public keys as a rule target in eACL tables was deprecated, and since this release creating new eACL tables with keys is not supported; use addresses instead. For more information, call `neofs-cli acl extended create -h`.
`small_object_size`, `workers_number`, `max_batch_size` and `max_batch_delay` parameters are removed from the `writecache` config. These parameters are related to the BoltDB part of the write-cache, which is dropped from the code. Because of this, an automatic migration from BoltDB is performed by flushing objects to the main storage and removing the database file. A before/after sketch of the config section is given below.
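The following is a hedged before/after sketch of a `writecache` section, assuming the remaining keys keep their previous names; only the four removed parameters need to be deleted, and all values shown are examples rather than recommendations.

```yaml
# Before (v0.44.x): example section containing the now-removed BoltDB parameters.
writecache:
  enabled: true
  path: /srv/neofs/writecache   # example path
  small_object_size: 32k        # removed in v0.45.0
  workers_number: 20            # removed in v0.45.0
  max_batch_size: 1000          # removed in v0.45.0
  max_batch_delay: 10ms         # removed in v0.45.0

# After (v0.45.0): the same section with those keys deleted.
writecache:
  enabled: true
  path: /srv/neofs/writecache
```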
This version maintains two network map lists: the old one is used by default, and the new one will be used once the "UseNodeV2" network-wide setting is set to a non-zero value. Storage nodes add their records to both lists by default, so IR nodes must be updated first, otherwise SNs will fail to bootstrap. Monitor candidates with neofs-adm and make the switch once all nodes are properly migrated to the new list.
IR nodes using embedded CN require chain resynchronization (drop the DB to do that) for this release because of NeoGo updates.