lnwallet: aux signer batching fixes #9074
Conversation
Force-pushed from ee97c4a to def4704
Updated to sort in the correct spot; we lose the index info when processing the job, so this sort should be earlier. Migrating to slices.SortFunc; and in this case, see https://pkg.go.dev/slices#example-SortFunc-CaseInsensitive. Will add more changes to this PR following this sketch.
lnwallet/channel.go (outdated)

```diff
@@ -4648,7 +4648,12 @@ func (lc *LightningChannel) SignNextCommitment() (*NewCommitState, error) {
 		newCommitView.txn,
 	)
 	if err != nil {
-		close(cancelChan)
+		select {
```
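The hunk is cut off right after the select statement. A hedged guess at what the replacement looked like (an idempotent-close guard; this is an assumption, not the verbatim diff):

```go
// Assumed shape of the truncated hunk: only close cancelChan if it
// isn't already closed, to avoid a "close of closed channel" panic.
select {
case <-cancelChan:
	// Already closed elsewhere; nothing to do.
default:
	close(cancelChan)
}
```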
I took another look at how exactly the cancelChan is used here and in tapd. And I think the original idea was that it would only ever be closed in SignNextCommitment(), and only directly before a return, so it was guaranteed to only happen once.
And in any other place than SignNextCommitment, we would only ever select on the channel to abort (since that would be in a goroutine).
Therefore, I don't think we need to change anything here, really. We just need to follow the same pattern in tapd and never close() the channel ourselves, but instead only select on it whenever we block somewhere (e.g. when writing to sigJob.Resp).
I think the description of the Cancel channel is what got me to implement this incorrectly in tapd:
Line 53 in 4960ead:

```go
// Cancel is a channel that should be closed if the caller wishes to
// abandon all pending sign jobs part of a single batch.
```
This should read something like this:
```go
// Cancel is a channel that is closed by the caller if they wish to
// abandon all pending sign jobs part of a single batch. This should
// never be closed by the validator.
```
And maybe we should make it <-chan struct{} to disallow it being closed in the first place by the verify job?
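A minimal sketch of that suggestion (the struct here is illustrative, not lnd's actual sig job definition): with a receive-only channel type, the compiler rejects any close() from the consumer side.

```go
// sigJob is a stand-in for the real job type; Cancel is declared
// receive-only, so only the producer that created the underlying
// bidirectional channel can ever close it.
type sigJob struct {
	Cancel <-chan struct{}
}

func verify(job sigJob) {
	select {
	case <-job.Cancel:
		// Abort signal received; stop validating.
		return
	default:
	}
	// close(job.Cancel) // compile error: cannot close receive-only channel
}
```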
Okay, I decided to go with the "read only channel" approach in PART3: 8a2a097.
This makes it clear that the purpose of the Cancel channel is purely to listen on for the abort signal.
So the main work will need to be done in tapd, where we need to make sure that we always also listen on this channel wherever we try to send to the response channel or otherwise have a blocking operation.
That should make sure we never run into a "close of closed channel" panic, as we prevent closure in the verifier at compile time.
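The send-side pattern described here, sketched in miniature (names are illustrative, mirroring the Cancel/Resp discussion above, not tapd's actual code):

```go
// deliverResult sends a signature to the response channel, but also
// selects on Cancel so a reader that has already aborted can never
// wedge the verifier goroutine.
func deliverResult(resp chan<- []byte, cancel <-chan struct{}, sig []byte) {
	select {
	case resp <- sig:
	case <-cancel:
		// Batch abandoned; drop the result instead of blocking.
	}
}
```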
Those changes on the tapd side have since been merged with lightninglabs/taproot-assets#1118.
Force-pushed from def4704 to d52b456
Not sure what's up with that linter error, but I observe the same error locally. Seems like imports are already properly ordered.
That's a false positive with the linter.
LGTM 🎉
```diff
@@ -4622,6 +4622,17 @@ func (lc *LightningChannel) SignNextCommitment() (*NewCommitState, error) {
 	if err != nil {
 		return nil, err
 	}
+
+	// We'll need to send over the signatures to the remote party in the
```
Took this commit over into part3, added this commit message:
To make sure we attempt to read the results of the sig batches in the
same order they're processed, we sort them _before_ submitting them to
the batch processor.
Otherwise it might happen that we try to read on a result channel that
was never sent on because we aborted due to an error.
We also use slices.SortFunc now which doesn't use reflection and might
be slightly faster.
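The migration mentioned in that last line, sketched under the assumption that jobs are sorted by their OutputIndex field (as in the sort.Slice snippet quoted later in this thread):

```go
import (
	"cmp"
	"slices"
)

// sigJob is a stand-in carrying only the field the sort needs.
type sigJob struct {
	OutputIndex int32
}

// sortSigBatch orders jobs by output index without the reflection
// overhead of sort.Slice.
func sortSigBatch(batch []sigJob) {
	slices.SortFunc(batch, func(a, b sigJob) int {
		return cmp.Compare(a.OutputIndex, b.OutputIndex)
	})
}
```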
Awesome!
I'm wondering now how we should handle these multiple lnd branches, if we want to cut a release with this included before 0-19-staging gets fully reviewed and pulled into master.
Basically we're only keeping 0-19-staging in order for us to be able to cut a release before the rebased versions land in master. So as long as we don't have everything in master, we'll keep adding bugfixes here and then reference this branch in litd, using the experimental suffix.
Once everything has landed in master, we'll cut lnd v0.18.4-beta and a non-experimental litd release.
Ah ok, so the key insight here seems to be:

> Otherwise it might happen that we try to read on a result channel that was never sent on because we aborted due to an error.

If that's the case though, why isn't the solution to start to select on the error/cancel chan here:

Lines 3920 to 3925 in 750770e:

```go
// With the jobs sorted, we'll now iterate through all the responses to
// gather each of the signatures in order.
htlcSigs = make([]lnwire.Sig, 0, len(sigBatch))
for _, htlcSigJob := range sigBatch {
	jobResp := <-htlcSigJob.Resp
```
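A hedged sketch of what that suggestion might look like (types, error values, and helper name are assumptions, not the PR's actual diff):

```go
import "errors"

// Simplified stand-ins for lnd's sig job types.
type sigJobResp struct {
	Sig []byte
	Err error
}

type sigJob struct {
	Resp chan sigJobResp
}

// collectSigs gathers one response per job, but bails out as soon as
// the batch is cancelled or the caller signals quit, instead of
// blocking forever on a channel that may never be sent on.
func collectSigs(batch []sigJob, cancel, quit <-chan struct{}) ([][]byte, error) {
	sigs := make([][]byte, 0, len(batch))
	for _, job := range batch {
		select {
		case resp := <-job.Resp:
			if resp.Err != nil {
				return nil, resp.Err
			}
			sigs = append(sigs, resp.Sig)
		case <-cancel:
			return nil, errors.New("sig batch cancelled")
		case <-quit:
			return nil, errors.New("link shutting down")
		}
	}
	return sigs, nil
}
```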
Though I still don't see why this change is needed at all.
Note that in master, we don't sort anything here:

Lines 3867 to 3874 in 750770e:

```go
sigBatch, cancelChan, err := genRemoteHtlcSigJobs(
	keyRing, lc.channelState, leaseExpiry, newCommitView,
	lc.leafStore,
)
if err != nil {
	return nil, err
}
lc.sigPool.SubmitSignBatch(sigBatch)
```

This seems to have been added only in the side branch.
We don't need to sort anything here, as things are already in the correct order: we iterate over a slice which is ordered according to the HTLC ID:

Lines 3105 to 3111 in 750770e:

```go
// For each outgoing and incoming HTLC, if the HTLC isn't considered a
// dust output after taking into account second-level HTLC fees, then a
// sigJob will be generated and appended to the current batch.
for _, htlc := range remoteCommitView.incomingHTLCs {
	if HtlcIsDust(
		chanType, true, lntypes.Remote, feePerKw,
		htlc.Amount.ToSatoshis(), dustLimit,
```

When we go to validate, we iterate over the TxOut, then use an index to locate the HTLC that maps to a given output index:

Lines 4523 to 4528 in 750770e:

```go
// If this output index is found within the incoming HTLC
// index, then this means that we need to generate an HTLC
// success transaction in order to validate the signature.
case localCommitmentView.incomingHTLCIndex[outputIndex] != nil:
	htlc := localCommitmentView.incomingHTLCIndex[outputIndex]
```

So unless we have a failing unit test to show this actually fixes something, I think we should drop it from the diff.
Ah no, I'm wrong re the above, we sort here, right before we read the response:

Lines 3913 to 3918 in 750770e:

```go
// We'll need to send over the signatures to the remote party in the
// order as they appear on the commitment transaction after BIP 69
// sorting.
sort.Slice(sigBatch, func(i, j int) bool {
	return sigBatch[i].OutputIndex < sigBatch[j].OutputIndex
})
```
With that said, I think if we're going to make this change, then we need a test to demonstrate the concrete impact. Also based on the commit message at the top of this comment thread, it seems the correct solution would be to thread through either a context or use the quit channel.
> With that said, I think if we're going to make this change, then we need a test to demonstrate the concrete impact.

Fair enough; seems like the TestMaxAcceptedHTLCs test could be forked to add a fallible aux signer that causes the issue I'm describing. Working on that now.

> Also based on the commit message at the top of this comment thread, it seems the correct solution would be to thread through either a context or use the quit channel.

I think that would be a helpful change, though still separate from the sorting issue I've described. AFAICT the correct quit channel to propagate would be channelLink.quit:

Line 386 in fb66bd2:

```go
quit chan struct{}
```

Which is the only non-test caller of SignNextCommitment. I'll add that to this PR.
A test to demonstrate the issue and validate the fix is now in the second commit. To test:

```shell
git checkout jharveyb/aux_signer_batching_fixes
git checkout HEAD~3
# This test run should time out, and show that SignNextCommitment was
# stuck on reading the sig job response.
make unit pkg=lnwallet case=TestAuxSignerShutdown timeout=10s
git checkout jharveyb/aux_signer_batching_fixes
# This should pass.
make unit pkg=lnwallet case=TestAuxSignerShutdown timeout=10s
```
I also added the propagation of the quit channel in here as the last commit, though I'm not sure that's the best way to actually add that parameter.
I did test in the playground that passing a nil channel in the tests is safe, and that arg is not nil outside of tests. It seems like setting that quit channel when constructing a LightningChannel would be better, but that's also a more invasive change. Open to doing that a different way.
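For reference, a minimal sketch of why a nil quit channel is safe inside a select (standard Go semantics, not PR code): a receive from a nil channel blocks forever, so that case is simply never chosen.

```go
// waitForResp returns the response, or false if quit fires first.
// With quit == nil, the quit case can never be selected.
func waitForResp(resp <-chan int, quit <-chan struct{}) (int, bool) {
	select {
	case r := <-resp:
		return r, true
	case <-quit:
		return 0, false
	}
}
```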
Force-pushed from d52b456 to c383799
Investigating if a rogue quit signal is occurring.
Nice unit test reproduction! Looks very close to me, just a couple suggestions.
Force-pushed from c383799 to 195652a
Updated to address feedback + added a test to verify that the quit signal will be handled correctly. To verify:

```shell
make unit pkg=lnwallet case=TestQuitDuringSignNextCommitment timeout=20ms
```

Should cause that test to fail, since the quit signal will not have been sent yet. The exact timeout needed to verify may need to be a bit longer than 20 ms depending on the machine.
Nice work with the extra tests! Just two final comments.
lnwallet/channel.go (outdated)

```diff
@@ -4800,7 +4820,7 @@ func (lc *LightningChannel) resignMusigCommit(commitTx *wire.MsgTx,
 // previous commitment txn. This allows the link to clear its mailbox of those
 // circuits in case they are still in memory, and ensure the switch's circuit
 // map has been updated by deleting the closed circuits.
-func (lc *LightningChannel) ProcessChanSyncMsg(
+func (lc *LightningChannel) ProcessChanSyncMsg(quit <-chan struct{},
```
Can we make this a context instead? That way the main coordination goroutine or RPC calls can pass the context in, and even create a timeout, etc.
That sounds reasonable, but I don't see a context from ChannelLink that we could actually inherit; only the quit channel.
We could build a Context from that quit channel with a background goroutine that cancels the context once the channel is closed:
```go
func cancelOnQuit(cancel context.CancelFunc, quit <-chan struct{}) {
	// Block until the quit channel is closed, then cancel the
	// derived context.
	<-quit
	cancel()
}
```
Working example (run in local WASM mode, not server mode):
https://goplay.tools/snippet/ZToR1fg1ROe
That does feel a bit hacky IMO, and I'm not sure if we can neatly 'combine' two contexts. We could pass both the 'quit' channel and a context.
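A hypothetical wiring of that helper (linkQuit is an assumed name for the link's quit channel):

```go
// Derive a context that is cancelled when the link's quit channel
// closes, so downstream calls can take a context.Context.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
go cancelOnQuit(cancel, linkQuit)
```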
Alternatively, we can pass the new quit channel/context in via the main constructor; it has functional options set. This way, we don't need to change all call sites with the new input argument. Then in the link, we signal the channel state machine to exit.
If we go this path, then we'll need to verify some assumptions re state/pointer sharing amongst the switch, cnct, and peer.
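A sketch of the functional-option shape being suggested (all names here are hypothetical, not lnd's actual API):

```go
type channelOpts struct {
	quit <-chan struct{}
}

// ChannelOpt is a functional option for the channel constructor.
type ChannelOpt func(*channelOpts)

// WithQuitChan threads a quit channel in at construction time, so
// individual methods don't need a new parameter.
func WithQuitChan(quit <-chan struct{}) ChannelOpt {
	return func(o *channelOpts) { o.quit = quit }
}
```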
Also, you can get a channel to listen on for done via the Done method: https://pkg.go.dev/context#Context.

> That sounds reasonable, but I don't see a context from ChannelLink that we could actually inherit; only the quit channel.

Yeah, so along the way we'd need to add a context to the link.
Re my suggestion above: is it correct that we only want things to bail out once the peer/link does? Or does a custom chans caller need to be able to cancel things earlier?
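For illustration, a context's Done() channel drops straight into the same select pattern as a quit channel (a generic sketch, not lnd code; assumes `import "context"`):

```go
// recvWithCtx blocks on resp until a value arrives or ctx is done.
func recvWithCtx[T any](ctx context.Context, resp <-chan T) (T, error) {
	var zero T
	select {
	case v := <-resp:
		return v, nil
	case <-ctx.Done():
		return zero, ctx.Err()
	}
}
```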
> Re my suggestion above: is it correct that we only want things to bail out once the peer/link does?

I think this is correct / the problem this PR should be fixing.

> Or does a custom chans caller need to be able to cancel things earlier?

Not sure, but it does add another spot for a blocking read where we do want to listen on this quit signal.
WIP commit for replacing the quit channel with a cancellable context: https://github.com/jharveyb/lnd/tree/aux_signer_batching_fixes_wip. Required replacing some other quit channels, since the Done() channel of a Context is always receive-only.
Force-pushed from 195652a to b1dcfb3
Cleaned up the changes for replacing the quit channel with a context into one new commit.
Fixing some lint issues. Wrt the context issue, I read the referenced blog post: https://go.dev/blog/context-and-structs
This is a requirement for replacing the quit channel with a Context. The Done() channel of a Context is always recv-only, so all users of that channel must not expect a bidirectional channel.
In this commit, we make sig job handling when signing a next commitment non-blocking, by allowing the shutdown of a channel link to prevent further waiting on sig jobs by the channel state machine. This addresses possible cases where the aux signer may be shut down via a separate quit signal, in which case the state machine could block indefinitely on receiving an update on a sig job.
Force-pushed from b1dcfb3 to 41e491c
AFAICT the remaining lint failures are incorrect / may be fixed on the final target branch. Similar re: release notes, I figure the target branch will update that. Ready for final approval.
Can confirm these are known linter issues (false positives in the sort order) that were fixed on the target branch.
LGTM 🛖
```diff
@@ -94,7 +94,7 @@ type mailBoxConfig struct {
 	// forwardPackets send a varidic number of htlcPackets to the switch to
 	// be routed. A quit channel should be provided so that the call can
 	// properly exit during shutdown.
-	forwardPackets func(chan struct{}, ...*htlcPacket) error
+	forwardPackets func(<-chan struct{}, ...*htlcPacket) error
```
👍
```go
quitCtx, quitFunc := context.WithCancel(context.Background())

// Initialize the Done channel for our quit context.
_ = quitCtx.Done()
```
TIL it needs to be initialized: https://cs.opensource.google/go/go/+/refs/tags/go1.23.2:src/context/context.go;l=438-451
I think it doesn't really have to be, but if we wanted to save that channel and then use it later (call Done() only once), we'd want to explicitly initialize it as here.
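A small sketch of that nuance (based on the linked context.go source, where Done() lazily creates and caches the channel on first call):

```go
quitCtx, quitFunc := context.WithCancel(context.Background())
defer quitFunc()

// The first call creates the done channel; saving it lets later
// selects reuse the same channel without repeated Done() calls.
done := quitCtx.Done()
select {
case <-done:
	// Context already cancelled.
default:
}
```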
Merged 5272db3 into lightningnetwork:0-19-staging
Change Description

Addresses lightninglabs/taproot-assets#1114. Improves the safety of cancel channel usage, as the cancel signal can now be sent by tapd as well as lnd.
Steps to Test

Unclear; not sure exactly which events led to the original user issue that caused the stack trace from lightninglabs/taproot-assets#1114.