Merge issue43 hotfix #55
base: master
Conversation
I gave this a first pass. It really feels like this should be solvable in an easier way.

Going the async route means we need to access async functions from `chain::Listen` (as you already mentioned in #43), which means the design has to change significantly. On top of that, it also means accessing the `Carrier` in an async way, but the `Carrier` interacts with `bitcoind` synchronously, so things become pretty hacky.

I need a deeper understanding of Rust async and lifetimes to judge how to properly fix this. Given that this fixes a pretty rare edge case, I'll leave it on hold for now.

PS: I've created a branch trying to simplify this, but didn't really go further before realizing what I stated in the review: https://github.com/talaia-labs/rust-teos/tree/55-tokio-notify. I will revisit this once the cln plugin is merged.
```rust
pub(crate) async fn send_transaction(&self, tx: &Transaction) -> ConfirmationStatus {
    let mut continue_looping = true;
    let mut receipt: Option<ConfirmationStatus> = None;
    while continue_looping {
```
This feels really hacky; I think we'd be better off using `async_recursion`.
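For reference, here is a minimal, self-contained sketch of what the `async_recursion` approach could look like. It is not the PR's actual code: `try_send` and `wait_until_reachable` are hypothetical stand-ins for the `Carrier` internals, and the retry simply recurses instead of using a `continue_looping` flag.

```rust
use async_recursion::async_recursion;
use std::sync::atomic::{AtomicU32, Ordering};
use std::time::Duration;

struct Carrier {
    attempts_left: AtomicU32,
}

impl Carrier {
    // Stand-in for the real RPC call: the first couple of attempts fail,
    // as if bitcoind were unreachable.
    async fn try_send(&self) -> Result<(), ()> {
        if self.attempts_left.fetch_sub(1, Ordering::SeqCst) > 1 {
            Err(())
        } else {
            Ok(())
        }
    }

    // Stand-in for blocking on a `tokio::sync::Notify` until bitcoind is back.
    async fn wait_until_reachable(&self) {
        tokio::time::sleep(Duration::from_millis(10)).await;
    }

    // Recursive retry: the macro boxes the future so a recursive `async fn`
    // type-checks.
    #[async_recursion]
    async fn send_transaction(&self) -> &'static str {
        match self.try_send().await {
            Ok(()) => "in mempool",
            Err(()) => {
                self.wait_until_reachable().await;
                self.send_transaction().await
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let carrier = Carrier {
        attempts_left: AtomicU32::new(3),
    };
    assert_eq!(carrier.send_transaction().await, "in mempool");
}
```

The trade-off versus the explicit loop is one heap allocation per retry (the macro boxes each recursive call) in exchange for not having to carry state like `receipt` across iterations.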
I'm certainly open to `async_recursion` here, but I usually tend towards non-recursive solutions to save the stack.
Again, this is just a side effect of the `async` change made in `Watcher`.
```rust
ConfirmationStatus::InMempoolSince(self.block_height)
```

```rust
pub(crate) async fn send_transaction(&self, tx: &Transaction) -> ConfirmationStatus {
    let mut continue_looping = true;
    let mut receipt: Option<ConfirmationStatus> = None;
```
This shouldn't be needed. I think it comes from the `continue_looping` approach, so in the case where you need to loop again you have no value for `receipt`. I think recursion may make more sense.
```diff
@@ -439,12 +478,13 @@ impl Watcher {
     ///
     /// If the appointment is rejected by the [Responder] (i.e. for being invalid), the data is wiped
     /// from the database but the slot is not freed.
     fn store_triggered_appointment(
         &self,
```
Converting this into a class method is one of the things I was trying to avoid, to be honest, because it means having to redesign how the data structures of the `Watcher` are accessed (`Arc<Mutex<>>` now). I'm guessing you ended up going this route because you hit an issue regarding lifetimes in the `Watcher` when trying to call `store_triggered_appointment` (it needing to be `'static`). That's the exact issue I didn't know how to fix.
Yeah, this didn't feel right to me either, but based on my understanding of Rust it is inevitable, because you cannot use `self` within an `async` call as it violates the `'static` lifetime requirement.
Yeah, I agree this fix is a bit meaty for the problem it addresses. Regarding lifetimes and Rust, I'm not sure you'll find a clean solution combining [...]. In my opinion, if you'd like to use [...]. Maybe we can chat when you're ready and converge on a solution you'd be happy with.
@carterian8 I chatted about this recently with @TheBlueMatt since it goes above my current understanding of Rust. He mentioned that Rust needs to make sure that the object pointed by [...].

I may try to make this work in a simpler scenario, like the watchtower-plugin retrier, to convince myself how this may work (i.e. convert it into an object, given it is currently spawning async tasks).
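To illustrate the `'static` issue being discussed (my reading of it, not code from this PR): `tokio::spawn` requires the spawned future to be `'static`, so it cannot borrow `&self`; the usual workaround is to keep the shared state behind an `Arc` (plus a `Mutex` if it is mutated) and move a clone into the task. The `Watcher` below is a stripped-down stand-in, not the real struct.

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

struct Watcher {
    appointments: Arc<Mutex<Vec<String>>>,
}

impl Watcher {
    fn store_triggered_appointment(&self, uuid: String) {
        // This would NOT compile: the future borrows `self`, which is not `'static`.
        // tokio::spawn(async move { self.appointments.lock().await.push(uuid) });

        // This compiles: the task owns an `Arc` clone, so no borrow of `self` escapes.
        let appointments = Arc::clone(&self.appointments);
        tokio::spawn(async move {
            appointments.lock().await.push(uuid);
        });
    }
}

#[tokio::main]
async fn main() {
    let watcher = Watcher {
        appointments: Arc::new(Mutex::new(Vec::new())),
    };
    watcher.store_triggered_appointment("uuid_1".into());

    // Give the spawned task a chance to run in this toy example.
    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
    assert_eq!(watcher.appointments.lock().await.len(), 1);
}
```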
```rust
join_handle_option = Some(tokio::spawn(async move {
    let mut appointments_to_delete =
        HashSet::from_iter(invalid_breaches.into_keys());
    let mut delivered_appointments = HashSet::new();
```
Note: this would make the method exit early, and a task will be spawned on tokio to do the job. This could potentially make the responder's/gatekeeper's own `block_connected` method run before, or interleave with, this task.
Closes #43.

`Watcher::add_appointment` spawns an `async` `tokio::task` to handle cases when `Watcher::store_triggered_appointment` runs into `bitcoind` connectivity issues.

Introducing this `async` change caused a substantial ripple effect through the code, the most notable part being an update to the `Carrier` to use `tokio::sync::Notify` to listen for changes to the status of `bitcoind_reachable` (please see #43 for justification). Other changes just further support the constraints introduced by incorporating more `async` operations.

All unit tests were updated to verify these changes, and an additional test, `Watcher::test_add_appointment_bitcoind_unreachable`, was added to verify the correctness of the specific corner case presented in #43.
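For context, here is a minimal sketch of the `tokio::sync::Notify` pattern described above, using assumed names rather than the real `Carrier` fields: a shared `bitcoind_reachable` flag plus a `Notify` that wakes up any task waiting for connectivity to come back before retrying.

```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};
use std::time::Duration;
use tokio::sync::Notify;

#[derive(Clone)]
struct BitcoindStatus {
    reachable: Arc<AtomicBool>,
    notify: Arc<Notify>,
}

impl BitcoindStatus {
    async fn wait_until_reachable(&self) {
        loop {
            // Create the Notified future before checking the flag: futures created
            // before a `notify_waiters` call are woken by it, so a notification
            // landing between the check and the await is not lost.
            let notified = self.notify.notified();
            if self.reachable.load(Ordering::SeqCst) {
                return;
            }
            notified.await;
        }
    }

    fn set_reachable(&self, reachable: bool) {
        self.reachable.store(reachable, Ordering::SeqCst);
        if reachable {
            // Wake every task currently parked in `wait_until_reachable`.
            self.notify.notify_waiters();
        }
    }
}

#[tokio::main]
async fn main() {
    let status = BitcoindStatus {
        reachable: Arc::new(AtomicBool::new(false)),
        notify: Arc::new(Notify::new()),
    };

    let waiter = status.clone();
    let handle = tokio::spawn(async move {
        waiter.wait_until_reachable().await;
        "bitcoind is back, resend the transaction"
    });

    // Simulate bitcoind coming back after a short outage.
    tokio::time::sleep(Duration::from_millis(10)).await;
    status.set_reachable(true);
    println!("{}", handle.await.unwrap());
}
```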