
itest: add optional 10k asset test #325

Merged: 10 commits, Jun 22, 2023
14 changes: 12 additions & 2 deletions tapdb/universe.go
@@ -8,6 +8,7 @@ import (
"encoding/hex"
"errors"
"fmt"
"sync"

"github.com/btcsuite/btcd/btcec/v2/schnorr"
"github.com/btcsuite/btcd/wire"
@@ -111,8 +112,8 @@ func NewBaseUniverseReadTx() BaseUniverseStoreOptions {
}
}

-// BasedUniverseTree is a wrapper around the base universe tree that allows us
-// perform batch queries with all the relevant query interfaces.
+// BatchedUniverseTree is a wrapper around the base universe tree that allows us
+// to perform batch queries with all the relevant query interfaces.
type BatchedUniverseTree interface {
BaseUniverseStore

@@ -129,6 +130,8 @@ type BaseUniverseTree struct {

id universe.Identifier

registrationMtx sync.Mutex

smtNamespace string
}

@@ -292,6 +295,13 @@ func (b *BaseUniverseTree) RegisterIssuance(ctx context.Context,
return nil, err
}

// Up to this point many writers can perform all required read
// operations to prepare and validate the keys. But since we're now
// actually starting to write, we want to limit this to a single writer
// at a time.
b.registrationMtx.Lock()
Member:

I think we should revert this change. We want to make sure that our db concurrency control can handle situations like this. For SQLite, there's only a single writer. For Postgres, the new serialization mode should detect conflicts, then abort so the transaction can be retried safely.
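
For reference, a minimal sketch of the detect-conflict-and-retry pattern described here (the executor, error sentinel, and parameters below are illustrative assumptions, not the repo's actual retry code):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errSerialization stands in for the error a Postgres transaction returns
// when the serializable isolation level detects a write conflict
// (SQLSTATE 40001). Illustrative only.
var errSerialization = errors.New("serialization failure")

// withRetry runs txBody and, whenever it fails with a serialization
// conflict, backs off and retries up to maxAttempts times.
func withRetry(maxAttempts int, backoff time.Duration, txBody func() error) error {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := txBody()
		if err == nil {
			return nil
		}
		if !errors.Is(err, errSerialization) {
			// Not a conflict, so retrying won't help.
			return err
		}

		// Sleep with a little jitter so the conflicting writers don't
		// immediately collide again, then double the backoff.
		time.Sleep(backoff + time.Duration(rand.Intn(50))*time.Millisecond)
		backoff *= 2
	}

	return fmt.Errorf("number of retries exhausted (%d attempts)", maxAttempts)
}

func main() {
	calls := 0
	err := withRetry(10, 50*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errSerialization // simulate two writers conflicting
		}
		return nil
	})
	fmt.Println("attempts:", calls, "err:", err)
}
```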

Member:

Without this change we ran into "number of retries exhausted" errors quite quickly, both on SQLite and Postgres. That's mainly because we constantly sync multiple leaves at a time (as many in parallel as there are CPUs).

Member:

Gotcha, perhaps we need to increase either the backoff timeout or the number of attempts (I just kind of chose 10 randomly). We want to tune these params, as otherwise we may end up hitting this in other areas that have more involved db transactions.
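
To make the trade-off concrete (purely illustrative numbers; the constant names and defaults below are assumptions, not the repo's configuration), the two knobs multiply into a worst-case wait:

```go
package sketch

import "time"

// Hypothetical retry knobs for the transaction executor discussed above.
const (
	numTxRetries   = 10                    // attempts before "retries exhausted"
	initialBackoff = 50 * time.Millisecond // doubled after every failed attempt
)

// worstCaseWait is the total sleep time across all retries when the
// backoff doubles each attempt: initialBackoff * (2^numTxRetries - 1),
// roughly 51s for the values above.
var worstCaseWait = initialBackoff * time.Duration((1<<numTxRetries)-1)
```

With exponential backoff, adding attempts grows that window exponentially while stretching the initial timeout only grows it linearly, which is worth keeping in mind if heavier transactions start hitting the same limit.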

Collaborator (Author):

Seems like batching these transactions would also work? E.g. commit batches of 5 issuance proofs instead of single proofs.

IIUC, in all the cases where we hit this issue we also don't need to write these proofs individually to recover from a restart.

Member:

Yeah, that'll work too. Right now we just do them one by one, but we can batch the insertion as well. We can easily batch when adding to our own universe (that for loop in the planter), and then for RPC we can make it a repeated field.
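
A rough sketch of what batched insertion could look like on the planter side (the table name, Leaf type, and helpers below are placeholders to show the shape of the idea, not the actual universe store API):

```go
package sketch

import (
	"context"
	"database/sql"
)

// Leaf is a stand-in for an issuance leaf (key plus proof blob); the real
// type lives in the universe package.
type Leaf struct {
	Key   []byte
	Proof []byte
}

// insertLeaf is a placeholder for the per-leaf write that RegisterIssuance
// performs today; the schema here is made up.
func insertLeaf(ctx context.Context, tx *sql.Tx, leaf Leaf) error {
	_, err := tx.ExecContext(ctx,
		`INSERT INTO universe_leaves (leaf_key, proof) VALUES ($1, $2)`,
		leaf.Key, leaf.Proof,
	)
	return err
}

// registerIssuanceBatch commits a whole batch of issuance proofs in a
// single database transaction instead of one transaction per proof, so
// the planter's for loop produces one writer per batch rather than one
// per leaf.
func registerIssuanceBatch(ctx context.Context, db *sql.DB, leaves []Leaf) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	// Roll back if we bail out early; harmless after a successful Commit.
	defer tx.Rollback()

	for _, leaf := range leaves {
		if err := insertLeaf(ctx, tx, leaf); err != nil {
			return err
		}
	}

	return tx.Commit()
}
```

On the RPC side, the same shape would just mean accepting a repeated field of proofs and feeding them into one such batched call.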

defer b.registrationMtx.Unlock()

var (
writeTx BaseUniverseStoreOptions
