
Save LN peer metadata to indexedDB #359

Merged: benthecarman merged 4 commits from save-some-gossip into master on Apr 14, 2023
Conversation

@benthecarman (Collaborator) commented on Apr 13, 2023:

I also moved the peer connection info to indexedDB.

TODO:

  • Add tests
  • Test myself to make sure this works
  • Bring back per-node connections
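
For context, here is a minimal sketch, under assumptions, of the record shape this change implies: one indexedDB entry per peer, keyed by the peer's node id. The struct name, the optional types, and the `nodes` field (used for the per-node filtering discussed in the review below) are assumptions; the other fields mirror what the updated list_peers code in this PR reads back.

```rust
use serde::{Deserialize, Serialize};

/// Hypothetical shape of the peer metadata saved to indexedDB.
/// connection_string/alias/color/label mirror the fields read back by the
/// list_peers diff in this PR; `nodes` is an assumed field recording which
/// of our own nodes have saved this peer, for per-node filtering.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PeerMetadata {
    pub connection_string: Option<String>,
    pub alias: Option<String>,
    pub color: Option<String>,
    pub label: Option<String>,
    pub nodes: Vec<String>,
}
```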

@benthecarman force-pushed the save-some-gossip branch 5 times, most recently from 23495cf to 908e12e, on April 13, 2023 at 23:57
@AnthonyRonning (Contributor) commented:

needs rebase

@AnthonyRonning (Contributor) left a review comment:

Some of the global peer stuff combined with the individual peer connections/lists still seems to be intertwined, unless I'm wrong. I wish there were an easier way to test and ensure we have multi-node support.

src/node.rs (Outdated)

@@ -385,24 +392,31 @@ impl Node {
                 continue;
             }

-            let peer_connections = connect_persister.list_peer_connection_info();
+            let peer_connections = get_all_peers().await.unwrap_or_default();
@AnthonyRonning (Contributor) commented on this diff:

Isn't this going to pull in peers that all nodes have saved?

@benthecarman (Collaborator, Author) replied:

Yes, but on line 401 we filter out the ones that our node isn't included in.
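
To make that concrete, here is a minimal sketch of the kind of filter being described, building on the assumed PeerMetadata shape sketched in the description above; `peers_for_node` and the `nodes` field are hypothetical names, not the actual code at line 401:

```rust
/// Sketch of the per-node filter described above: get_all_peers() returns
/// every peer saved by any of our nodes, so each node keeps only the entries
/// whose metadata lists its own pubkey. Names here are assumptions.
fn peers_for_node(
    all_peers: Vec<(Vec<u8>, PeerMetadata)>,
    my_node_pubkey: &str,
) -> Vec<(Vec<u8>, PeerMetadata)> {
    all_peers
        .into_iter()
        .filter(|(_, metadata)| metadata.nodes.iter().any(|n| n.as_str() == my_node_pubkey))
        .collect()
}
```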

Comment on lines 955 to 971

     pub async fn list_peers(&self) -> Result<Vec<MutinyPeer>, MutinyError> {
-        let nodes = self.nodes.lock().await;
+        let peer_data = gossip::get_all_peers().await?;

         // get peers saved in storage
-        let mut storage_peers: Vec<MutinyPeer> = nodes
+        let mut storage_peers: Vec<MutinyPeer> = peer_data
             .iter()
-            .flat_map(|(_, n)| n.persister.list_peer_connection_info())
-            .map(|(pubkey, connection_string)| MutinyPeer {
-                pubkey,
-                connection_string,
+            .map(|(node_id, metadata)| MutinyPeer {
+                // node id should be safe here
+                pubkey: secp256k1::PublicKey::from_slice(node_id.as_slice())
+                    .expect("Invalid pubkey"),
+                connection_string: metadata.connection_string.clone(),
+                alias: metadata.alias.clone(),
+                color: metadata.color.clone(),
+                label: metadata.label.clone(),
                 is_connected: false,
             })
             .collect();
@AnthonyRonning (Contributor) commented on this diff:

Wait, this could just be a side effect of a bug (or unintended behavior) in our existing API. We're just doing a normal list_peers, but perhaps we should change the API to indicate a specific node and the peers that node should be connected to?
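
For illustration, a node-scoped variant along those lines might look like the sketch below. The MutinyPeer mapping is lifted from the diff above; the method name, the node_pubkey parameter, and the `nodes` filtering field are assumptions, not part of this PR:

```rust
/// Hypothetical node-scoped variant of list_peers: the caller names one of
/// our nodes and gets back only the peers saved for that node.
pub async fn list_peers_for_node(
    &self,
    node_pubkey: &secp256k1::PublicKey,
) -> Result<Vec<MutinyPeer>, MutinyError> {
    let wanted = node_pubkey.to_string();
    let peer_data = gossip::get_all_peers().await?;

    Ok(peer_data
        .iter()
        // keep only peers whose stored metadata lists this node (assumed field)
        .filter(|(_, metadata)| metadata.nodes.iter().any(|n| n == &wanted))
        .map(|(node_id, metadata)| MutinyPeer {
            pubkey: secp256k1::PublicKey::from_slice(node_id.as_slice())
                .expect("Invalid pubkey"),
            connection_string: metadata.connection_string.clone(),
            alias: metadata.alias.clone(),
            color: metadata.color.clone(),
            label: metadata.label.clone(),
            is_connected: false,
        })
        .collect())
}
```

Whether peers should be split per node like this is the open question picked up in the review below.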

@AnthonyRonning (Contributor) left a review comment:

Separating node and general peers is difficult, and I'm unsure whether we should be doing that anyway, but it's existing behavior, so it's not something for this PR to solve or get hung up on. Nice work, looks good to me!

@benthecarman merged commit e88d508 into master on Apr 14, 2023
@benthecarman deleted the save-some-gossip branch on April 15, 2023 at 00:00