htlcswitch: use fn.GoroutineManager #9140

Open
wants to merge 4 commits into base: master

Conversation

@starius (Collaborator) commented Sep 27, 2024

Change Description

Replaced the use of s.quit and s.wg with s.gm (GoroutineManager). A WaitGroup is still needed to wait for handleLocalResponse: if it were switched to s.gm, it might be skipped entirely, which has unclear consequences. Once handleLocalResponse is changed to run without a goroutine, the WaitGroup can be removed completely.

This fixes a race condition between s.wg.Add(1) and s.wg.Wait().
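The race being fixed can be sketched as a minimal, self-contained Go program; the type and method names below are illustrative only, not the actual lnd code:

```go
package main

import (
	"fmt"
	"sync"
)

// oldSwitch mimics the old htlcswitch shape: a WaitGroup plus a quit
// channel, with no synchronization between Add and Wait.
type oldSwitch struct {
	wg   sync.WaitGroup
	quit chan struct{}
}

func (s *oldSwitch) getAttemptResult() {
	// Nothing orders this Add against Stop's Wait. If both run while
	// the WaitGroup counter is zero, the outcome depends on timing:
	// stop may return without waiting for this goroutine.
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		<-s.quit // stand-in for waiting on a result or shutdown
	}()
}

func (s *oldSwitch) stop() {
	close(s.quit)
	s.wg.Wait()
}

func main() {
	s := &oldSwitch{quit: make(chan struct{})}

	// Sequential here, so this run is fine. Call getAttemptResult and
	// stop from two goroutines instead (as the stress test does) and
	// `go test -race` reports a race between wg.Add and wg.Wait.
	s.getAttemptResult()
	s.stop()
	fmt.Println("stopped")
}
```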

Steps to Test

I added a test which used to fail under -race before this commit.

$ cd htlcswitch

$ go test -race -run TestSwitchGetAttemptResultStress

This test fails with a data race if the changes to the switch implementation are reverted.

Pull Request Checklist

Testing

  • Your PR passes all CI checks.
  • Tests covering the positive and negative (error paths) are included.
  • Bug fixes contain tests triggering the bug to prevent regressions.

Code Style and Documentation

coderabbitai bot (Contributor) commented Sep 27, 2024

Important

Review skipped

Auto reviews are limited to specific labels.

🏷️ Labels to auto review (1)
  • llm-review

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@starius starius mentioned this pull request Sep 27, 2024
8 tasks
@starius starius force-pushed the goroutines branch 2 times, most recently from 8810118 to 88fbc4b on October 3, 2024 15:27
@starius starius force-pushed the goroutines branch 2 times, most recently from 8395cca to e001027 on October 7, 2024 19:00
@starius starius changed the title [WIP] htlcswitch: use fn.GoroutineManager htlcswitch: use fn.GoroutineManager Oct 11, 2024
@starius starius marked this pull request as ready for review October 11, 2024 15:50
@saubyk saubyk requested review from Crypt-iQ and ellemouton October 15, 2024 16:54
@saubyk saubyk added this to the v0.19.0 milestone Oct 15, 2024
@ellemouton (Collaborator)

@starius - I think these unit test failures are related to this PR - maybe take a look at fixing those up first & then re-ping reviewers when ready?

@Crypt-iQ (Collaborator) left a comment

I couldn't reproduce the race condition with the attached test, do you have an error trace of it?

})

// The switch shutting down is signaled by closing the channel.
if err != nil {
Collaborator

I think the switch shutting down and an error from the goroutine manager are different?

Collaborator Author

GoroutineManager can only return an error if it is stopping. I added a check just in case:

        // The switch shutting down is signaled by closing the channel.
        if errors.Is(err, fn.ErrStopping) {
                close(resultChan)
        } else if err != nil {
                return nil, fmt.Errorf("got an unexpected error from "+
                        "GoroutineManager: %w", err)
        }

Collaborator

Yeah, I think this is an API design flaw. My latest review adds a suggestion. Basically: I don't think the caller should need to know that the only error the goroutine manager can return is ErrStopping. I also don't think that is actually an error - more just a state we want to handle. See the latest review for more details.

}()
})
if err != nil {
return
Collaborator

don't think this should return?

Collaborator Author

Fixed, added a comment. Now this section looks like this:

                // When this time ticks, then it indicates that we should
                // collect all the forwarding events since the last interval,
                // and write them out to our log.
                case <-s.cfg.FwdEventTicker.Ticks():
                        // The error of Go is ignored: if it is shutting down,
                        // the loop will terminate on the next iteration, in
                        // s.gm.Done case.
                        _ = s.gm.Go(func(ctx context.Context) {
                                err := s.FlushForwardingEvents()
                                if err != nil {
                                        log.Errorf("unable to flush "+
                                                "forwarding events: %v", err)
                                }
                        })

@starius (Collaborator, Author) commented Oct 23, 2024

@Crypt-iQ

I couldn't reproduce the race condition with the attached test, do you have an error trace of it?

I pushed branch reproduce-race to my fork.

In that branch:

htlcswitch$ go test -race -run TestSwitchGetAttemptResultStress
==================
WARNING: DATA RACE
Read at 0x00c0001d4118 by goroutine 21:
  runtime.raceread()
      <autogenerated>:1 +0x1e
  github.com/lightningnetwork/lnd/htlcswitch.(*Switch).GetAttemptResult()
      /home/user/lnd/htlcswitch/switch.go:496 +0x1c4
  github.com/lightningnetwork/lnd/htlcswitch.TestSwitchGetAttemptResultStress.func1()
      /home/user/lnd/htlcswitch/switch_test.go:3211 +0x168

Previous write at 0x00c0001d4118 by goroutine 22:
  runtime.racewrite()
      <autogenerated>:1 +0x1e
  github.com/lightningnetwork/lnd/htlcswitch.(*Switch).Stop()
      /home/user/lnd/htlcswitch/switch.go:1995 +0x1e9
  github.com/lightningnetwork/lnd/htlcswitch.TestSwitchGetAttemptResultStress.func2()
      /home/user/lnd/htlcswitch/switch_test.go:3232 +0xae

Goroutine 21 (running) created at:
  github.com/lightningnetwork/lnd/htlcswitch.TestSwitchGetAttemptResultStress()
      /home/user/lnd/htlcswitch/switch_test.go:3203 +0x356
  testing.tRunner()
      /home/user/.goroot/src/testing/testing.go:1690 +0x226
  testing.(*T).Run.gowrap1()
      /home/user/.goroot/src/testing/testing.go:1743 +0x44

Goroutine 22 (finished) created at:
  github.com/lightningnetwork/lnd/htlcswitch.TestSwitchGetAttemptResultStress()
      /home/user/lnd/htlcswitch/switch_test.go:3222 +0x45c
  testing.tRunner()
      /home/user/.goroot/src/testing/testing.go:1690 +0x226
  testing.(*T).Run.gowrap1()
      /home/user/.goroot/src/testing/testing.go:1743 +0x44
==================
--- FAIL: TestSwitchGetAttemptResultStress (0.08s)
    testing.go:1399: race detected during execution of test
FAIL
exit status 1
FAIL    github.com/lightningnetwork/lnd/htlcswitch      0.380s

@starius starius force-pushed the goroutines branch 2 times, most recently from 7cb95ef to 662c47b on October 24, 2024 00:30
@starius (Collaborator, Author) commented Oct 24, 2024

@starius - I think these unit test failures are related to this PR - maybe take a look at fixing those up first & then re-ping reviewers when ready?

The test failure was caused by an extra call to s.Stop in a defer. I removed it.

@ellemouton (Collaborator) left a comment

Thanks for the updates @starius!

Logic looks good, but I have some opinions about the API of the fn.Go call that I think are worth discussing before we merge. Would love to hear what @yyforyongyu & @ProofOfKeags think too.

Comment on lines 531 to 534
})
// The switch shutting down is signaled by closing the channel.
if errors.Is(err, fn.ErrStopping) {
close(resultChan)
} else if err != nil {
return nil, fmt.Errorf("got an unexpected error from "+
"GoroutineManager: %w", err)
}
Collaborator

Related to my comment above: from what I can tell, the only error that gm.Go(..) will ever return is fn.ErrStopping or nil. So to me, returning this is not actually an error, but more just a "state transition" we should be aware of. Which I think is a point towards handling this explicitly in the actual callback passed to Go, via a quit channel, as mentioned above.
If we do want some idea, from outside the callback, of whether the goroutine manager did its thing (because it could also be that the callback never gets called), then I think a simple bool could do the trick, since it basically just communicates "handled / not handled due to shutdown".

@ProofOfKeags ProofOfKeags self-requested a review October 29, 2024 15:39
@ProofOfKeags (Collaborator)

What's the prio on this? I want to review but I need to balance with other stuff.

@saubyk (Collaborator) commented Oct 30, 2024

What's the prio on this? I want to review but I need to balance with other stuff.

Not critical. You can focus on P0 stuff, before addressing this.

@yyforyongyu (Member) left a comment

Sorry a bit late in the game, but is there an issue page describing what the issue is?

I also don't understand the struct GoroutineManager - it looks like it's putting a mutex to guard the wait group operations?

My instinct is this is solving the wrong problem - we should always know when/where we call wg.Add and wg.Wait; if not, we should refactor our code so we always know when we call wg.Add and wg.Wait. I guess other people have run into this issue before too.

@@ -245,8 +246,14 @@ type Switch struct {
// This will be retrieved by the registered links atomically.
bestHeight uint32

wg sync.WaitGroup
quit chan struct{}
// TODO(yy): remove handleLocalResponseWG, once handleLocalResponse runs
Member

hmm why it's my TODO😂

Collaborator Author

handleLocalResponseWG was added because handleLocalResponse is called in a goroutine, which can't be tracked using GoroutineManager. There is an existing TODO(yy) to remove the goroutine running handleLocalResponse. I copy-pasted that TODO here, since if that TODO is fixed, then handleLocalResponseWG is not needed, so this TODO is also fixed :-)

starius added a commit to starius/lnd that referenced this pull request Nov 14, 2024
starius added a commit to starius/lnd that referenced this pull request Nov 14, 2024
Comment on lines 252 to 253
// unclear if it safe to skip handleLocalResponse.
handleLocalResponseWG sync.WaitGroup
Collaborator

I don't believe this is necessary as I believe that the composition of waitgroups is equivalent to the waitgroup of the composition of threads when the wait conditions are always called in conjunction with one another.

Collaborator Author

We discussed this offline.

The main reason for the special handling of handleLocalResponse was that, in the old version, it was launched unconditionally (even when the switch was stopping), and switching to GoroutineManager introduces a change in behavior. However, we need to ensure that the effects are idempotent to prevent inconsistent states in the event of power failures. If this requirement is met, the behavior change should not pose an issue.

Therefore, I removed the special treatment of handleLocalResponse, along with the associated WaitGroup and TODOs. It is now managed entirely by GoroutineManager.

@starius (Collaborator, Author) commented Nov 26, 2024

Sorry a bit late in the game, but is there an issue page describing what the issue is?

@yyforyongyu Thank you for the suggestion!
I opened #9308 to describe the original issue.

I also don't understand the struct GoroutineManager - it looks like it's putting a mutex to guard the wait group operations?

WaitGroup cannot be directly used to track goroutines in the scenario we encounter in htlcswitch. The issue arises when we have a long-lived object (the switch) that launches goroutines during its lifecycle (via the GetAttemptResult method, which calls wg.Add(1)) and a Stop() method, which cancels running goroutines (using context cancellation) and waits for them to complete (via wg.Wait()).

In this setup, wg.Add(1) and wg.Wait() can be called in parallel when the WaitGroup counter is at 0. At this point, WaitGroup cannot determine whether it should wait or not because the outcome depends on the timing and order of these calls. Essentially, this creates a situation where switch.Stop() doesn’t know whether to wait for a goroutine launched by GetAttemptResult if it was initiated at the same time Stop() was called. This results in a race condition.

GoroutineManager resolves this issue by introducing a Go method that is synchronized with the Stop method. This ensures that either a goroutine is successfully launched or the Go method returns false. This synchronization is achieved by using a mutex alongside the WaitGroup.
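That mechanism can be sketched as follows; this is an illustrative reimplementation of the idea, not the actual fn.GoroutineManager code:

```go
package main

import (
	"fmt"
	"sync"
)

// manager serializes Go and Stop with a mutex, so wg.Add(1) can never
// race with wg.Wait(): either the goroutine is launched (and counted)
// before Stop runs, or Go observes the stopped flag and returns false.
type manager struct {
	mu      sync.Mutex
	wg      sync.WaitGroup
	stopped bool
	quit    chan struct{}
}

func newManager() *manager {
	return &manager{quit: make(chan struct{})}
}

// Go launches f in a goroutine, or returns false if Stop has begun.
func (m *manager) Go(f func(quit <-chan struct{})) bool {
	m.mu.Lock()
	defer m.mu.Unlock()

	if m.stopped {
		return false
	}

	// Safe: Stop takes the same mutex before waiting, so Wait cannot
	// run concurrently with this Add.
	m.wg.Add(1)
	go func() {
		defer m.wg.Done()
		f(m.quit)
	}()

	return true
}

// Stop forbids new goroutines, signals shutdown, and waits for the rest.
func (m *manager) Stop() {
	m.mu.Lock()
	m.stopped = true
	close(m.quit)
	m.mu.Unlock()

	m.wg.Wait()
}

func main() {
	m := newManager()

	fmt.Println("before Stop:", m.Go(func(quit <-chan struct{}) {
		<-quit // wait for shutdown
	}))

	m.Stop()
	fmt.Println("after Stop:", m.Go(func(<-chan struct{}) {}))
}
```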

My instinct is this is solving the wrong problem - we should always know when/where we call wg.Add and wg.Wait, if not, we should refactor our code so we always know when we cal wg.Add and wg.Wait. I guess other people have run into this issue before too.

I agree that, ideally, the code should be refactored into an event-loop style, centralizing all goroutine launches and state changes within a single goroutine and using channels to transmit data to and from it. This approach aligns with the patterns we follow in other packages. However, implementing such a change would require significant time and extensive modifications to the package. What are your thoughts?

@ellemouton (Collaborator) left a comment

@starius - I think this still needs to be updated to point to the latest version of fn (#9270).

Also - I think you can go ahead and squash in that final commit

@starius (Collaborator, Author) commented Nov 28, 2024

I squashed the last commit (deeacc6), rebased and used GoroutineManager from fn v2. Fortunately fn v1 and fn v2 can be used simultaneously!

@ellemouton (Collaborator) left a comment

Thanks for the updates. I think things look good, but we should change the API of the goroutine manager a bit more. See my suggestion here.

@@ -836,7 +847,8 @@ func (s *Switch) logFwdErrs(num *int, wg *sync.WaitGroup, fwdChan chan error) {
log.Errorf("Unhandled error while reforwarding htlc "+
"settle/fail over htlcswitch: %v", err)
}
case <-s.quit:

case <-ctx.Done():
Collaborator

Something doesn't feel right here. It feels like we are mixing the use of the caller ctx and quit channels. Here, they mean the same thing: i.e., why can't we just listen on s.gm.Done() here (i.e., s.quit)? Because this ctx that is now being passed in here is not coming from the caller of ForwardPackets and is instead coming from the creator of the gm. I think the issue stems from the fact that we are passing a context to the constructor of the goroutine manager, which is an anti-pattern. I'm going to see if I can rework the goroutine manager a bit to avoid this anti-pattern.

Collaborator Author
The reason will be displayed to describe this comment to others. Learn more.

Thanks! I replaced ctx.Done() with s.gm.Done() here and also inside a goroutine launched by GetAttemptResult.

@@ -368,8 +370,11 @@ func New(cfg Config, currentHeight uint32) (*Switch, error) {
return nil, err
}

gm := fn2.NewGoroutineManager(context.Background())
Collaborator

It's an anti-pattern to pass a context into a constructor. I think we should try to avoid this as much as possible. I'll put up a suggested diff for the goroutine manager 👍

Collaborator Author

Thanks! I updated the fn dependency and used the new API!

Updated protofsm package for changed API of fn.GoroutineManager.
Replaced the use of s.quit and s.wg with s.gm (GoroutineManager).

This fixes a race condition between s.wg.Add(1) and s.wg.Wait().
Also added a test which used to fail under `-race` before this commit.
@starius starius requested a review from ellemouton December 13, 2024 04:07
@ellemouton (Collaborator) left a comment

let's hold off here until #9344 and #9342 are merged as those will make things easier here

@@ -85,6 +86,9 @@ var (
// fail payments if they increase our fee exposure. This is currently
// set to 500m msats.
DefaultMaxFeeExposure = lnwire.MilliSatoshi(500_000_000)

// background is a shortcut for context.Background.
background = context.Background()
Collaborator

I don't think we should do this. Rather use a context.TODO() where needed.

Collaborator

if you rebase on top of #9344, then we can also add a context guard here and then we only need a single context.TODO() in Start()

Comment on lines +29 to +30
// background is a shortcut for context.Background.
background = context.Background()
Collaborator

we should not do this.

consider rebasing on top of #9342 which handles the bump to the correct fn version and handles updating the statemachine to thread contexts through correctly

var n *networkResult
select {
case n = <-nChan:
case <-s.quit:
case <-s.gm.Done():
@ellemouton (Collaborator) Dec 13, 2024

I think it is not great to refer to s.gm from inside a callback that is called from s.gm (it screams "deadlock"). Rather just use the ctx provided to the callback, which will be cancelled when the gm is shut down (i.e., when gm.Done() would have returned anyway).

// The error of Go is ignored: if it is shutting down,
// the loop will terminate on the next iteration, in
// s.gm.Done case.
_ = s.gm.Go(background, func(ctx context.Context) {
Collaborator

Let htlcForwarder take a context, and pass a context in there from the goroutine which is starting it.

@@ -3020,8 +3042,12 @@ func (s *Switch) handlePacketSettle(packet *htlcPacket) error {
// NOTE: `closeCircuit` modifies the state of `packet`.
if localHTLC {
// TODO(yy): remove the goroutine and send back the error here.
s.wg.Add(1)
go s.handleLocalResponse(packet)
ok := s.gm.Go(background, func(ctx context.Context) {
Collaborator

Rather pass in a context to the calling func. Same for all the others.

@lightninglabs-deploy

@Crypt-iQ: review reminder
@ProofOfKeags: review reminder
@yyforyongyu: review reminder
@starius, remember to re-request review from reviewers when ready

Development

Successfully merging this pull request may close these issues.

[bug]: htlcswitch may crash upon shutdown because of a race in WaitGroup
7 participants