
Fix deadlock in messager and health streamer #17230

Open
wants to merge 5 commits into main

Conversation

GuptaManan100
Member

Description

This PR fixes the deadlocks in messager and health streamer as pointed out in #17229.

The fix for both deadlocks is to use two different mutexes: one that only serializes the setup functions (Open and Close), and another that actually protects the fields. This removes the deadlock because the callback that runs on a broadcast from the schema engine acquires a different mutex from the one held around the call to Close.
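A minimal, self-contained sketch of the pattern (illustrative only; the type name, the mutex names, and the helper unregisterNotifier are placeholders, not the actual messager or health streamer code):

package main

import "sync"

// Engine stands in for the messager engine / health streamer.
type Engine struct {
    openMu   sync.Mutex // serializes setup calls (Open and Close) only
    mu       sync.Mutex // protects the fields below
    isOpen   bool
    managers map[string]struct{}
}

func (e *Engine) Open() {
    e.openMu.Lock()
    defer e.openMu.Unlock()
    e.mu.Lock()
    defer e.mu.Unlock()
    e.isOpen = true
    e.managers = make(map[string]struct{})
}

func (e *Engine) Close() {
    e.openMu.Lock()
    defer e.openMu.Unlock()
    // Unregistering the notifier can block until an in-flight broadcast from
    // the schema engine finishes. Close does not hold e.mu at this point, so a
    // concurrent schemaChanged call can still complete and the lock cycle
    // from #17229 cannot form.
    unregisterNotifier()
    e.mu.Lock()
    defer e.mu.Unlock()
    e.isOpen = false
    e.managers = nil
}

// schemaChanged stands in for the callback invoked from the schema engine's
// broadcast; it only ever needs the field mutex, never the setup mutex.
func (e *Engine) schemaChanged(created []string) {
    e.mu.Lock()
    defer e.mu.Unlock()
    if !e.isOpen {
        return
    }
    for _, t := range created {
        e.managers[t] = struct{}{}
    }
}

// unregisterNotifier is a placeholder for schema.Engine.UnregisterNotifier.
func unregisterNotifier() {}

func main() {}

With a single mutex, Close would hold the same lock that schemaChanged needs while waiting on the schema engine's notifier registration lock, which is exactly the cycle visible in the goroutine dump further down this thread.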

Related Issue(s)

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • Did the new or modified tests pass consistently locally and on CI?
  • Documentation was added or is not required

Deployment Notes

Contributor

vitess-bot bot commented Nov 14, 2024

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes); new features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test; enhancements and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator.
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

@vitess-bot vitess-bot bot added the NeedsBackportReason, NeedsDescriptionUpdate, NeedsIssue, and NeedsWebsiteDocsUpdate labels Nov 14, 2024
@github-actions github-actions bot added this to the v22.0.0 milestone Nov 14, 2024
@GuptaManan100 GuptaManan100 removed the NeedsDescriptionUpdate, NeedsWebsiteDocsUpdate, NeedsIssue, and NeedsBackportReason labels Nov 14, 2024

codecov bot commented Nov 14, 2024

Codecov Report

Attention: Patch coverage is 89.18919% with 4 lines in your changes missing coverage. Please review.

Project coverage is 67.39%. Comparing base (c72bbdc) to head (3901fc2).
Report is 1 commit behind head on main.

Files with missing lines                          Patch %   Lines
go/vt/vttablet/tabletserver/health_streamer.go    82.60%    4 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main   #17230   +/-   ##
=======================================
  Coverage   67.39%   67.39%           
=======================================
  Files        1570     1570           
  Lines      252903   252912    +9     
=======================================
+ Hits       170438   170461   +23     
+ Misses      82465    82451   -14     

☔ View full report in Codecov by Sentry.

Contributor

@mattlord mattlord left a comment


Thanks, @GuptaManan100 ! ❤️


// TestDeadlockBwCloseAndSchemaChange tests the deadlock observed between Close and schemaChanged
// functions. More details can be found in the issue https://github.com/vitessio/vitess/issues/17229.
func TestDeadlockBwCloseAndSchemaChange(t *testing.T) {
Contributor


I ran this on main in a loop while reviewing the PR. I didn't get any failures:

while true; do
    go test -v -count 1 -timeout 5s -run ^TestDeadlockBwCloseAndSchemaChange$ vitess.io/vitess/go/vt/vttablet/tabletserver/messager
    if [[ $? -ne 0 ]]; then
        say "got one"
        break
    fi
done

But we already know that it's not easy to trigger and we can logically see how it can happen. Just noting that the test doesn't seem to be as useful as one would hope.

Member


This tracks. I commented on the issue that we haven't seen this in practice, at least that I'm aware of.
#17229 (comment)

Member Author


Really!? It fails for me:

while true; do
    go test -v -count 1 -timeout 5s -run ^TestDeadlockBwCloseAndSchemaChange$ vitess.io/vitess/go/vt/vttablet/tabletserver/messager
    if [[ $? -ne 0 ]]; then
        say "got one"
        break
    fi
done
=== RUN   TestDeadlockBwCloseAndSchemaChange
--- PASS: TestDeadlockBwCloseAndSchemaChange (0.00s)
PASS
ok      vitess.io/vitess/go/vt/vttablet/tabletserver/messager   0.579s
=== RUN   TestDeadlockBwCloseAndSchemaChange
panic: test timed out after 5s
        running tests:
                TestDeadlockBwCloseAndSchemaChange (5s)

goroutine 66 [running]:
testing.(*M).startAlarm.func1()
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/testing/testing.go:2373 +0x30c
created by time.goFunc
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive]:
testing.(*T).Run(0x140001be820, {0x1012cb3fe?, 0x1400059fb48?}, 0x101ab0970)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/testing/testing.go:1751 +0x328
testing.runTests.func1(0x140001be820)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/testing/testing.go:2168 +0x40
testing.tRunner(0x140001be820, 0x1400059fc68)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/testing/testing.go:1690 +0xe4
testing.runTests(0x140001c7d58, {0x1024c7e00, 0x19, 0x19}, {0xfc00000000000000?, 0xfcaacd9be1871533?, 0x102a6b640?})
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/testing/testing.go:2166 +0x3ac
testing.(*M).Run(0x14000342f00)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/testing/testing.go:2034 +0x588
main.main()
        _testmain.go:93 +0x90

goroutine 35 [select]:
github.com/golang/glog.(*fileSink).flushDaemon(0x102a6b6d8)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/github.com/golang/[email protected]/glog_file.go:352 +0xa4
created by github.com/golang/glog.init.1 in goroutine 1
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/github.com/golang/[email protected]/glog_file.go:167 +0x1c8

goroutine 10 [syscall]:
os/signal.signal_recv()
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/runtime/sigqueue.go:149 +0x2c
os/signal.loop()
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/os/signal/signal_unix.go:23 +0x1c
created by os/signal.Notify.func1.1 in goroutine 1
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/os/signal/signal.go:151 +0x28

goroutine 11 [chan receive]:
vitess.io/vitess/go/vt/dbconfigs.init.0.func1()
        /Users/manangupta/vitess/go/vt/dbconfigs/credentials.go:90 +0x34
created by vitess.io/vitess/go/vt/dbconfigs.init.0 in goroutine 1
        /Users/manangupta/vitess/go/vt/dbconfigs/credentials.go:89 +0x164

goroutine 12 [semacquire]:
sync.runtime_Semacquire(0x14000663340?)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/runtime/sema.go:71 +0x2c
sync.(*WaitGroup).Wait(0x140003b47d0)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/sync/waitgroup.go:118 +0x74
vitess.io/vitess/go/vt/vttablet/tabletserver/messager.TestDeadlockBwCloseAndSchemaChange(0x140001be9c0?)
        /Users/manangupta/vitess/go/vt/vttablet/tabletserver/messager/engine_test.go:201 +0x110
testing.tRunner(0x140001be9c0, 0x101ab0970)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/testing/testing.go:1690 +0xe4
created by testing.(*T).Run in goroutine 1
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/testing/testing.go:1743 +0x314

goroutine 13 [select]:
vitess.io/vitess/go/stats.(*Rates).track(0x14000465100)
        /Users/manangupta/vitess/go/stats/rates.go:110 +0xac
created by vitess.io/vitess/go/stats.NewRates in goroutine 12
        /Users/manangupta/vitess/go/stats/rates.go:97 +0x240

goroutine 14 [select]:
vitess.io/vitess/go/stats.(*Rates).track(0x14000465280)
        /Users/manangupta/vitess/go/stats/rates.go:110 +0xac
created by vitess.io/vitess/go/stats.NewRates in goroutine 12
        /Users/manangupta/vitess/go/stats/rates.go:97 +0x240

goroutine 15 [sync.Mutex.Lock]:
sync.runtime_SemacquireMutex(0x2?, 0x0?, 0x1012d7c7b?)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/runtime/sema.go:95 +0x28
sync.(*Mutex).lockSlow(0x14000410048)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/sync/mutex.go:173 +0x174
sync.(*Mutex).Lock(...)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/sync/mutex.go:92
vitess.io/vitess/go/vt/vttablet/tabletserver/schema.(*Engine).UnregisterNotifier(0x14000410000, {0x10129dacb, 0x8})
        /Users/manangupta/vitess/go/vt/vttablet/tabletserver/schema/engine.go:846 +0xc0
vitess.io/vitess/go/vt/vttablet/tabletserver/messager.(*Engine).Close(0x14000061c20)
        /Users/manangupta/vitess/go/vt/vttablet/tabletserver/messager/engine.go:108 +0x144
vitess.io/vitess/go/vt/vttablet/tabletserver/messager.TestDeadlockBwCloseAndSchemaChange.func1()
        /Users/manangupta/vitess/go/vt/vttablet/tabletserver/messager/engine_test.go:188 +0x5c
created by vitess.io/vitess/go/vt/vttablet/tabletserver/messager.TestDeadlockBwCloseAndSchemaChange in goroutine 12
        /Users/manangupta/vitess/go/vt/vttablet/tabletserver/messager/engine_test.go:185 +0xb8

goroutine 16 [sync.Mutex.Lock]:
sync.runtime_SemacquireMutex(0x1298a9bf0?, 0x50?, 0x1298a9bf0?)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/runtime/sema.go:95 +0x28
sync.(*Mutex).lockSlow(0x14000061c20)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/sync/mutex.go:173 +0x174
sync.(*Mutex).Lock(...)
        /Users/manangupta/.gvm/pkgsets/go1.23.0/global/pkg/mod/golang.org/[email protected]/src/sync/mutex.go:92
vitess.io/vitess/go/vt/vttablet/tabletserver/messager.(*Engine).schemaChanged(0x14000061c20, 0x140001b7dc8?, {0x0, 0x0, 0x0}, {0x0, 0x0, 0x1012570f4?}, {0x0, 0x0, ...}, ...)
        /Users/manangupta/vitess/go/vt/vttablet/tabletserver/messager/engine.go:152 +0xa8
vitess.io/vitess/go/vt/vttablet/tabletserver/schema.(*Engine).broadcast(0x14000410000, {0x0, 0x0, 0x0}, {0x0, 0x0, 0x0}, {0x0, 0x0, 0x0}, ...)
        /Users/manangupta/vitess/go/vt/vttablet/tabletserver/schema/engine.go:864 +0x264
vitess.io/vitess/go/vt/vttablet/tabletserver/schema.(*Engine).BroadcastForTesting(0x0?, {0x0?, 0x0?, 0x0?}, {0x0?, 0x0?, 0x0?}, {0x0?, 0x0?, 0x0?}, ...)
        /Users/manangupta/vitess/go/vt/vttablet/tabletserver/schema/engine.go:872 +0x114
vitess.io/vitess/go/vt/vttablet/tabletserver/messager.TestDeadlockBwCloseAndSchemaChange.func2()
        /Users/manangupta/vitess/go/vt/vttablet/tabletserver/messager/engine_test.go:196 +0x88
created by vitess.io/vitess/go/vt/vttablet/tabletserver/messager.TestDeadlockBwCloseAndSchemaChange in goroutine 12
        /Users/manangupta/vitess/go/vt/vttablet/tabletserver/messager/engine_test.go:193 +0x108
FAIL    vitess.io/vitess/go/vt/vttablet/tabletserver/messager   5.648s
FAIL

Member Author


I made sure the test was failing before I added any fixes. After the changes, the test doesn't fail across 1000 or so runs 😄

Member Author


You can try running this on commit e525c1a169. In this PR I added the test in one commit and fixed the issue in the next, so the failure should be reproducible for you too.

@@ -76,15 +83,12 @@ func NewEngine(tsv TabletService, se *schema.Engine, vs VStreamer) *Engine {
// Open starts the Engine service.
func (me *Engine) Open() {
me.mu.Lock()
defer me.mu.Unlock()
Member


👍

Comment on lines +109 to +110
me.managersMu.Lock()
defer me.managersMu.Unlock()
Member


👍

@derekperkins
Member

derekperkins commented Nov 14, 2024

For some historical context: when the original code was written, defer still carried a noticeable performance penalty, hence the lack of defer mu.Unlock(). That overhead was largely eliminated in upstream Go 1.14.

Development

Successfully merging this pull request may close these issues.

Bug Report: Deadlock in messager engine
4 participants