feat: implement Beaver triple generation #335
Conversation
```diff
@@ -19,6 +19,7 @@ clap = { version = "4.2", features = ["derive", "env"] }
 hex = "0.4.3"
 k256 = { version = "0.13.1", features = ["sha256", "ecdsa", "serde"] }
 local-ip-address = "0.5.4"
+rand = "0.8"
```
Hmm, why do we have both `rand` 0.7 and `rand` 0.8 in regular MPC?
I don't think we need both versions anymore, but in the past there was some issue with one of the dependencies relying on `rand` being exactly 0.7.
```rust
pub fn take(&mut self, id: TripleId) -> Option<TripleGenerationOutput<Secp256k1>> {
    match self.triples.entry(id) {
        Entry::Vacant(_) => None,
        Entry::Occupied(entry) => Some(entry.remove()),
    }
}
```
why not just:
```diff
-pub fn take(&mut self, id: TripleId) -> Option<TripleGenerationOutput<Secp256k1>> {
-    match self.triples.entry(id) {
-        Entry::Vacant(_) => None,
-        Entry::Occupied(entry) => Some(entry.remove()),
-    }
-}
+pub fn take(&mut self, id: TripleId) -> Option<TripleGenerationOutput<Secp256k1>> {
+    self.triples.remove(&id)
+}
```
lol yea just had a brainfart
```rust
Action::Wait => {
    tracing::debug!("waiting");
    // Retain protocol until we are finished
    return true;
```
I didn't notice the `loop` at first, so I thought it was weird that the other match arms didn't return. How about we change all of these to `break`, just to make it clearer that it's a loop?
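A hedged sketch of that change, reusing the `retain` loop from the `poke` shown below; `handle_action`, `completed`, and `messages` are hypothetical placeholders, and the `Action::Return` arm assumes cait-sith's usual shape:

```rust
// Sketch only: `break` makes the control flow of the `loop` explicit.
self.generators.retain(|id, protocol| loop {
    match protocol.poke().unwrap() {
        Action::Wait => {
            tracing::debug!("waiting");
            // Retain the protocol until we are finished.
            break true;
        }
        Action::Return(output) => {
            // Generation finished: record the triple, drop the generator.
            completed.push((*id, output));
            break false;
        }
        // Hypothetical catch-all for the message-sending actions:
        other => handle_action(&mut messages, *id, other),
    }
});
```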
```rust
pub fn poke(&mut self) -> Vec<(Participant, TripleMessage)> {
    let mut messages = Vec::new();
    self.generators.retain(|id, protocol| loop {
        let action = protocol.poke().unwrap();
```
These `unwrap`s scare me a little in crypto-math-related code. Let's have some explicit errors, or document the guarantees that make the `unwrap` safe.
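One hedged way to make the failure explicit inside the `retain` loop above, dropping a failed generator instead of panicking (a sketch, not code from the PR):

```rust
let action = match protocol.poke() {
    Ok(action) => action,
    Err(err) => {
        // Don't panic the whole node on a failed protocol; log the error
        // and drop this generator so the triple can be retried later.
        tracing::error!(?id, ?err, "triple generation protocol failed");
        break false;
    }
};
```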
```diff
@@ -35,6 +35,7 @@ impl NearPublicKeyExt for near_crypto::PublicKey {

 pub trait AffinePointExt {
     fn into_near_public_key(self) -> near_crypto::PublicKey;
+    fn into_base58(self) -> String;
```
I don't see why we need to take ownership here. Better to just borrow, since it seems like we're only serializing anyway.
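For comparison, a borrowing version of the trait might look like this (a sketch, not the code in the PR; renamed to `to_*` per Rust's convention that `into_*` consumes `self`):

```rust
pub trait AffinePointExt {
    // Borrow instead of consume: serializing doesn't need ownership.
    fn to_near_public_key(&self) -> near_crypto::PublicKey;
    fn to_base58(&self) -> String;
}
```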
```rust
#[tracing::instrument(level = "debug", skip_all)]
async fn state(Extension(state): Extension<Arc<AxumState>>) -> (StatusCode, Json<StateView>) {
    tracing::debug!("fetching state");
    let protocol_state = state.protocol_state.read().await;
```
Now that I see this, why do we have `Arc<RwLock<ProtocolState>>`? It never seems to get written to via a `write()` call.
Can we have `Arc<RwLock<ProtocolState>>` in the one place where we write and `Arc<ProtocolState>` in the other place where we only read, though?
Just to clarify, it is shared with `MpcSignProtocol` inside `protocol/mod.rs`.
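In other words, the same handle is cloned into both places: the protocol loop takes write locks while the web server only ever reads. A minimal sketch of that sharing pattern, assuming tokio's `RwLock` (the `demo` scaffolding is illustrative):

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

#[derive(Default)]
struct ProtocolState { /* fields elided */ }

async fn demo() {
    // One handle, cloned into each task; the lock mediates all access.
    let state = Arc::new(RwLock::new(ProtocolState::default()));

    // Writer side (e.g. the MpcSignProtocol loop) takes write locks:
    let writer = state.clone();
    tokio::spawn(async move {
        let _guard = writer.write().await;
        // ... advance the protocol and mutate the state ...
    });

    // Reader side (e.g. the `state` endpoint) only ever takes read locks:
    let _snapshot = state.read().await;
}
```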
```rust
for (p, msg) in self.triple_manager.poke() {
    let url = self.participants.get(&p).unwrap();
    http_client::message(ctx.http_client(), url.clone(), MpcMessage::Triple(msg)).await?;
}
```
Is it correct that we should be doing all the messaging after the protocol completes? What if all nodes are waiting on messages and we end up stuck in a deadlock?
No, the protocol is interactive, and occasionally you need to wait for other messages to arrive before you can progress. Ideally there should be a timeout and restart mechanism, but that hasn't been implemented yet.
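A rough sketch of what such a timeout could look like, remembering when each generator last made progress (all names here are hypothetical; none of this is in the PR):

```rust
use std::time::{Duration, Instant};

// Hypothetical upper bound on how long a protocol may sit in Action::Wait.
const MAX_WAIT: Duration = Duration::from_secs(60);

struct TimedGenerator<P> {
    protocol: P,
    last_progress: Instant,
}

impl<P> TimedGenerator<P> {
    /// `false` once the protocol has stalled long enough that the node
    /// should drop it and restart triple generation from scratch.
    fn still_alive(&self) -> bool {
        self.last_progress.elapsed() < MAX_WAIT
    }
}
```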
```rust
/// 1) Already generated in which case returns `None`, or
/// 2) Is currently being generated by `protocol` in which case returns `Some(protocol)`, or
/// 3) Has never been seen by the manager in which case start a new protocol and returns `Some(protocol)`
// TODO: What if the triple completed generation and is already spent?
```
We probably need a cache just to check whether a triple has already been generated, so that the same triple never gets generated again.
Triples are stored in RAM, which means there is no guarantee that such a cache wouldn't be wiped out by a reboot. And it would be nice to avoid a DB.
Yeah, a DB will come down the line; it is one of the tickets I have not scoped out yet.
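In the meantime, an in-memory version of that cache could be as small as a set of finished ids that the manager refuses to restart (hypothetical field and function names, and still subject to the wipe-on-reboot caveat above):

```rust
use std::collections::HashSet;

type TripleId = u64; // as defined by the manager below

// Hypothetically kept as an extra field on the manager:
// generated: HashSet<TripleId>,

fn should_start(generated: &HashSet<TripleId>, id: TripleId) -> bool {
    // Never re-run generation for an id that already completed, even if
    // the resulting triple has since been taken and spent.
    !generated.contains(&id)
}
```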
```rust
/// Unique number used to identify a specific ongoing triple generation protocol.
/// Without `TripleId` it would be unclear where to route incoming cait-sith triple generation
/// messages.
pub type TripleId = u64;
```
Would this id also be used to track the uniqueness of the triples being used?
Yes. It is not the end of the world if two of them happen to have the same id (although it shouldn't happen; randomly picking two identical 64-bit numbers is very improbable). It would just mean that generation of this specific triple fails and the nodes have to give up on it.
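For reference, drawing the id from the `rand` 0.8 dependency added above is enough; by the birthday bound, the chance of any collision among n ids is roughly n²/2⁶⁵. A sketch (`new_triple_id` is a hypothetical helper):

```rust
use rand::Rng;

type TripleId = u64;

/// Draw a fresh random id for a new triple generation protocol.
/// A collision between two 64-bit ids is vanishingly unlikely, and at
/// worst it only fails the generation of that one triple.
fn new_triple_id() -> TripleId {
    rand::thread_rng().gen()
}
```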
```rust
if self.triple_manager.potential_len() < 2 {
    self.triple_manager.generate();
}
```
So we're generating more triples on the fly when we run low? Won't this be very computationally expensive, or is it fine since we're only generating one? Even then, wouldn't it still impose a good amount of latency, given all the messaging the triple generation protocol requires?
So this is pretty much just a placeholder. We need to implement a `TripleStockpile` that operates on top of `TripleManager` and actively tries to initiate generation when it can (the ticket is in the epic, but hasn't been scoped out yet).
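A rough sketch of what that stockpile could look like, topping the manager up to a target count via the `potential_len` seen above (entirely hypothetical; the real design hasn't been scoped yet):

```rust
// Hypothetical stockpile keeping a minimum number of triples in flight.
struct TripleStockpile {
    manager: TripleManager,
    target: usize,
}

impl TripleStockpile {
    fn top_up(&mut self) {
        // potential_len() counts finished triples plus in-flight protocols,
        // so this never starts more generations than needed.
        while self.manager.potential_len() < self.target {
            self.manager.generate();
        }
    }
}
```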
@ChaoticTempest sorry, somehow I completely missed your comments! Addressed them in #348
Fixes #328

Introduces a separate entity, `TripleManager`, that tries to progress all ongoing triple generation protocols and saves their results. Triples are identified by a randomly generated id on the proposing node's side.
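Pieced together from the snippets quoted in the review, the manager's shape is roughly the following (the `TripleProtocol` alias and anything beyond the quoted `generators`, `triples`, and `TripleId` names are assumptions):

```rust
use std::collections::HashMap;

use cait_sith::protocol::Protocol;
use cait_sith::triples::TripleGenerationOutput;
use k256::Secp256k1;

pub type TripleId = u64;

/// Assumed alias for the boxed cait-sith triple generation protocol.
type TripleProtocol =
    Box<dyn Protocol<Output = TripleGenerationOutput<Secp256k1>> + Send + Sync>;

/// Progresses all ongoing triple generation protocols and saves results.
pub struct TripleManager {
    /// Ongoing protocols, keyed by the id chosen by the proposing node.
    generators: HashMap<TripleId, TripleProtocol>,
    /// Finished triples, removed by `take()` once they are used.
    triples: HashMap<TripleId, TripleGenerationOutput<Secp256k1>>,
}
```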