Roasbeef changed the title from "tapdb - [feature]: add new associative table to track asset burns" to "tapdb - [feature]: add new associative table to track asset burns ListBurns" on Oct 28, 2024
Is your feature request related to a problem? Please describe.
Today we have an easy way to burn assets, but no easy way to track all burns we've made for a given asset.
Describe the solution you'd like
With the way things work today, burns are just another transfer:
taproot-assets/rpcserver.go, lines 3258 to 3282 at commit 420f246
However, we don't have an easy/efficient way to scan the transfers table for all the burns we've done:
taproot-assets/tapdb/sqlc/migrations/000005_transfers.up.sql, lines 1 to 9 at commit 420f246
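For reference, the start of that migration defines the asset_transfers table roughly as follows. This is a paraphrased sketch, not the verbatim schema; the point is that nothing in it marks a transfer as a burn:

```sql
-- Paraphrased sketch of the existing asset_transfers table (column names
-- approximated, not copied verbatim from the migration). No column here
-- distinguishes a burn from an ordinary transfer, so finding burns today
-- means re-deriving that fact from the transfer outputs themselves.
CREATE TABLE IF NOT EXISTS asset_transfers (
    id INTEGER PRIMARY KEY,

    -- Block height hint used when (re)scanning for the anchor transaction.
    height_hint INTEGER NOT NULL,

    -- Reference to the on-chain anchor transaction for this transfer.
    anchor_txn_id BIGINT NOT NULL REFERENCES chain_txns(txn_id),

    -- Timestamp of when the transfer was created.
    transfer_time_unix TIMESTAMP NOT NULL
);
```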
We should add a new associative table (which can also store other metadata) that we insert into each time we burn an asset. We should then create a set of queries to easily look up burnt assets: all burns over all time, all burns for a given asset, and so on.
A draft table would look something like:
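The original draft didn't survive extraction, so the following is a minimal sketch of what such a table could look like. The table name asset_burns and all column names are illustrative assumptions, not the final schema:

```sql
-- Sketch of a new associative table recording one row per burn.
-- Names and column choices here are assumptions for illustration.
CREATE TABLE IF NOT EXISTS asset_burns (
    burn_id INTEGER PRIMARY KEY,

    -- Reference to the transfer that executed the burn.
    transfer_id BIGINT NOT NULL REFERENCES asset_transfers(id),

    -- The ID of the asset that was burnt.
    asset_id BLOB NOT NULL,

    -- The group key of the asset, if it belongs to a group.
    group_key BLOB,

    -- The amount of the asset that was burnt.
    amount BIGINT NOT NULL,

    -- Free-form note; can carry other burn-related metadata.
    note TEXT
);
```

With a table of this shape, the lookups described above become simple indexed queries rather than scans, e.g.:

```sql
-- All burns, all time.
SELECT * FROM asset_burns;

-- All burns for a given asset.
SELECT * FROM asset_burns WHERE asset_id = ?;

-- Total amount burnt for a given asset.
SELECT SUM(amount) FROM asset_burns WHERE asset_id = ?;
```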
The `note` field can be used to store other metadata related to a burn.

Describe alternatives you've considered
A user can scan all the proof files on disk to find those that end in a burn suffix, but this doesn't lend itself well to the creation of an automated system.