Benchmarking mempool and TPS #1107

Open · therealdannzor opened this issue Jul 31, 2024 · 1 comment

@therealdannzor (Contributor)

Background

This is a continuation of the issue reported in #723, this time with varying numbers of transactions submitted to the swarm node, running five (5) node instances.

Replication Steps

  1. Start with a clean database and pre-generated transactions:
     cargo run --bin transaction_generator --release -- write -n 100000 -o transactions.bin --overwrite
  2. Initialise:
     cargo run --bin tari_swarm_daemon --release -- -b data/swarm init
  3. Run:
     cargo run --bin tari_swarm_daemon --release -- -b data/swarm start
  4. Submit txs to the node (a hypothetical driver that loops over this step is sketched after this list):
     cargo run --bin transaction_submitter -- stress-test -f -y -a127.0.0.1:<VALIDATOR_PORT> -n $TX_AMOUNT -k 0
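For repeated runs at different transaction counts, step 4 can be driven from a small wrapper. The sketch below is hypothetical and not part of the repo: it shells out to the same `transaction_submitter` invocation listed above, and the validator address/port and timing logic are illustrative only.

```rust
// Hypothetical benchmark driver: re-runs step 4 for a range of transaction
// counts and records the submitter's wall-clock time. Note that this measures
// submission time only, not the on-chain ΔT reported in the table below.
use std::process::Command;
use std::time::Instant;

fn main() {
    // Assumed validator address; substitute the real <VALIDATOR_PORT>.
    let validator = "127.0.0.1:18145";
    for tx_amount in [500u32, 1_000, 5_000, 10_000] {
        let started = Instant::now();
        let status = Command::new("cargo")
            .args(["run", "--bin", "transaction_submitter", "--", "stress-test", "-f", "-y"])
            .arg(format!("-a{validator}"))
            .arg("-n")
            .arg(tx_amount.to_string())
            .args(["-k", "0"])
            .status()
            .expect("failed to spawn transaction_submitter");
        assert!(status.success(), "submitter exited with an error");
        println!("{tx_amount} txs submitted in {:.1}s", started.elapsed().as_secs_f64());
    }
}
```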

Results

We define $\Delta T$ as the difference between the timestamps of the last and the first block containing transactions, i.e. the time it takes to finalise all submitted transactions on-chain. TPS is transactions per second.

In the last row, no $\Delta T$ is reported because the submitter was never able to finish and left transactions pending in the mempool indefinitely.

Note: every case was run twice. Where an interval is shown, it covers the two data points; where no interval is shown, the second observation matched the first to two decimal places.

| Transactions | $\Delta T$ (s) | TPS |
| --- | --- | --- |
| 500 | 4 | 125 |
| 1,000 | 6 | 166.67 |
| 5,000 | 65 - 66 | 74.63 - 75.76 |
| 10,000 | 234 - 271 | 36.90 - 42.74 |
| 12,500 | - | - |
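As a sanity check on the arithmetic, a minimal sketch of the $\Delta T$ / TPS calculation follows; the block timestamps are illustrative and would in practice come from the first and last blocks containing the submitted transactions.

```rust
// Illustrative only: TPS from the timestamps (in seconds) of the first and
// last blocks that contain the submitted transactions.
fn tps(tx_count: u64, first_block_ts: u64, last_block_ts: u64) -> f64 {
    let delta_t = (last_block_ts - first_block_ts) as f64;
    tx_count as f64 / delta_t
}

fn main() {
    // 500 transactions finalised over ΔT = 4 s  =>  125 TPS (first row of the table)
    println!("{:.2} TPS", tps(500, 1_722_400_000, 1_722_400_004));
}
```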

Future Work

These results are with a single shard group and the numbers will likely change when the chain supports pre-sharding with shard groups (see #1092).

Two possible optimisations to investigate further at some point would be:

  1. How to support more transactions submitted through the transaction_submitter, i.e. how can we unlock support for 12,500 sequenced transactions and beyond?
  2. How to increase throughput for the transaction volumes we already support? One could argue it is acceptable to periodically send 1,000 or 5,000 transactions in separate bundles rather than one large batch.

In addition, there are tweaks such as removing the pacemaker heartbeat, which temporarily increased the 5,000-transaction throughput to 91 TPS but failed to cope with 10,000 transactions, so it is likely a regression.

@therealdannzor (Contributor, Author)

A debug release profile for a validator_node with 5,000 transactions:

[Screenshot: profiling output, 2024-08-02 09:05]
