Please complete the following:

- Read this document and explore the other adapters (see `adapters/example`)
- See the schema that we are requesting based on your protocol type (currently supporting lending and perps)
- Submit a PR with your progress in a new folder containing your adapter (in `adapters/{protocol-name}`)
- Build your adapter
- Ensure your `package.json` has a `start` command that executes the proper code
- Follow the schema/output format, standard input file format, and standard function inputs
- Ensure the adapter runs and executes in the proper workflow (see: How to execute this project?)
- QA your data
- Notify our team when you are ready for a review!
- Call the standard function with the block number read from `hourly_blocks.csv`
- Build a pandas dataframe with the user level data at the given block
- Query the subgraph / make `eth_call`s (ensure they are historical) to get all of the data
  - Tip: use viem `multicall()` to save time and rpc compute
- Write the data to `outputData.csv`
Create your block file (`adapters/{protocol-name}/hourly_blocks.csv`) using comma-delimited csv.
Example:

```csv
number,timestamp
6916213,1719322991
```
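Parsing this file into the standard input type can be sketched as follows; the `readHourlyBlocks` helper name is illustrative, not part of the template.

```typescript
import * as fs from "fs";

interface BlockData {
  blockNumber: number;
  blockTimestamp: number;
}

// Parse hourly_blocks.csv (header row "number,timestamp")
// into one BlockData entry per line.
export function readHourlyBlocks(path: string): BlockData[] {
  const lines = fs.readFileSync(path, "utf8").trim().split("\n");
  return lines.slice(1).map((line) => {
    const [number, timestamp] = line.split(",");
    return {
      blockNumber: Number(number),
      blockTimestamp: Number(timestamp),
    };
  });
}
```

Each parsed entry can then be passed to the standard entry-point function described below.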
Run these commands:

```bash
cd adapters/{protocol-name}
npm install
tsc
npm run start # should execute node dist/index.js
```
Your adapter should write the data to a csv file named `adapters/{protocol-name}/outputData.csv`, following the proper schema below. Generally, each row corresponds to a user / market pair at a given block.

Please note the following:

- We will be querying hourly snapshots of user level data by asset/market
- Provide a standard function / entry point that inputs a block number and fetches all user level data at that block (see below)
- Accept the standard input defined in `hourly_blocks.csv` (see more details below)
- All values are in the underlying token amount (no lp or output tokens such as cTokens, aTokens, uni pools)
- Token amounts are normalized to the underlying token decimals (not raw values)
- All strings/addresses are lowercase with no spaces

Note: Expect multiple entries per user if the protocol has more than one token asset
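The two normalization rules above (decimal-adjusted amounts, lowercase addresses) can be sketched with small helpers; the helper names are illustrative.

```typescript
// Convert a raw on-chain amount (e.g. 1500000 for a 6-decimal token)
// into underlying token units (1.5).
export function normalizeAmount(raw: bigint, decimals: number): number {
  return Number(raw) / 10 ** decimals;
}

// Normalize an address string to the required lowercase, no-spaces form.
export function normalizeAddress(addr: string): string {
  return addr.trim().toLowerCase();
}
```

Applying these consistently before writing rows avoids mixed-case addresses and raw wei-style values slipping into the output.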
Lending Schema

| Data Field | Notes |
|---|---|
| protocol | Name of the protocol (no spaces, should match the folder name) |
| block_number | Number of the block for this snapshot |
| timestamp | Block timestamp |
| user_address | The address of the user whose data is being recorded |
| market | The smart contract address of the market |
| token_address | The smart contract address of the underlying token for this position |
| token_symbol | Symbol of the underlying token |
| supply_token | Balance of the supplied amount in this market from user_address |
| borrow_token | Balance of the borrowed amount in this market from user_address |
| etl_timestamp | Run timestamp of this row |
Perps Schema

| Data Field | Notes |
|---|---|
| protocol | Name of the protocol (no spaces, should match the folder name) |
| block_number | Number of the block for this snapshot |
| timestamp | Block timestamp |
| user_address | The address of the user whose data is being recorded |
| market | The smart contract address of the market |
| trade_pair_symbol | Symbol of the trade pair |
| daily_user_volume_usd | The cumulative volume of this user for this trade pair for the preceding 24h in USD |
| funding_rate | The funding rate for this trade pair (in percentage, ie: 63.4 = 63.4%) |
| supplied_amount_usd | The TVL or deposited amount of this user for this trade pair at this timestamp in USD (same as UI) |
| open_shorts_usd | Total notional value (in USD) of the shorts for this trade pair of the user |
| open_longs_usd | Total notional value (in USD) of the longs for this trade pair of the user |
| protocol_fees_usd | Revenue fees for the protocol in USD, generated from this user and trade pair (cumulative for the preceding 24h) |
| users_fees_usd | Revenue fees for the users (LP / takers) in USD, generated from this user and trade pair (cumulative for the preceding 24h) |
| etl_timestamp | Run timestamp of this row |
This is a general guideline to help begin your project. See the following examples for how to call this function and write to `outputData.csv`:

- Example
- Layerbank (compound v2 fork)
- Rhomarkets (compound v2 fork)
```typescript
export const getUserDataByBlock = async (blocks: BlockData) => {
  const { blockNumber, blockTimestamp } = blocks;
  // Retrieve data using block number and timestamp
  // YOUR LOGIC HERE
  return csvRows;
};
```
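Once `csvRows` is populated, it has to be serialized to `outputData.csv`. One possible sketch is below; the `toCsv` and `writeOutput` helpers are illustrative, not part of the template.

```typescript
import * as fs from "fs";

// Any row shape whose values are strings or numbers.
type CsvRow = Record<string, string | number>;

// Serialize rows to CSV, deriving the header from the first row's keys
// so the column order matches the schema type's field order.
export function toCsv(rows: CsvRow[]): string {
  if (rows.length === 0) return "";
  const header = Object.keys(rows[0]);
  const body = rows.map((r) => header.map((k) => String(r[k])).join(","));
  return [header.join(","), ...body].join("\n") + "\n";
}

export function writeOutput(rows: CsvRow[], path = "outputData.csv"): void {
  fs.writeFileSync(path, toCsv(rows));
}
```

A streaming CSV library would also work; a plain string join is enough here because every value in the schema is a primitive.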
It is good practice to provide typing of newly defined data structures in TypeScript. Let's take a look at how we can standardize these based on your protocol type.

Input Data Type

```typescript
interface BlockData {
  blockNumber: number;
  blockTimestamp: number;
}
```
Lending Schema Output

```typescript
const csvRows: OutputDataSchemaRow[] = [];

type OutputDataSchemaRow = {
  // User / Market data
  user_address: string;
  market: string;
  token_address: string;
  token_symbol: string;
  // Financial data
  supply_token: number;
  borrow_token: number;
  // Metadata
  block_number: number;
  timestamp: number;
  protocol: string;
  etl_timestamp: number;
};
```
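For illustration, a single lending row conforming to this type might look like the following. All values are hypothetical, and the type is repeated so the snippet stands alone.

```typescript
type OutputDataSchemaRow = {
  user_address: string;
  market: string;
  token_address: string;
  token_symbol: string;
  supply_token: number;
  borrow_token: number;
  block_number: number;
  timestamp: number;
  protocol: string;
  etl_timestamp: number;
};

// Hypothetical example row: addresses are lowercase placeholders and
// supply_token is already in underlying units (decimals applied).
const row: OutputDataSchemaRow = {
  user_address: "0x0000000000000000000000000000000000000001",
  market: "0x0000000000000000000000000000000000000002",
  token_address: "0x0000000000000000000000000000000000000003",
  token_symbol: "usdc",
  supply_token: 1.5,
  borrow_token: 0,
  block_number: 6916213,
  timestamp: 1719322991,
  protocol: "example-protocol",
  etl_timestamp: Math.floor(Date.now() / 1000),
};
```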
Perps Schema Output

```typescript
const csvRows: OutputDataSchemaRow[] = [];

type OutputDataSchemaRow = {
  // User / Market info
  user_address: string;
  market: string;
  trade_pair_symbol: string;
  funding_rate: number;
  // Financial data
  daily_user_volume_usd: number;
  supplied_amount_usd: number;
  open_shorts_usd: number;
  open_longs_usd: number;
  protocol_fees_usd: number;
  users_fees_usd: number;
  // Metadata
  protocol: string;
  block_number: number;
  timestamp: number;
  etl_timestamp: number;
};
```
- Set up a subquery indexer (e.g. Goldsky Subgraph)
  - Follow the docs here: https://docs.goldsky.com/guides/create-a-no-code-subgraph
  - General Steps:
    - Create an account at app.goldsky.com
    - Deploy a subgraph or migrate an existing subgraph: https://docs.goldsky.com/subgraphs/introduction
    - Use the slugs `scroll-testnet` and `scroll` when deploying the config
- Prepare Subquery query code according to the Schema Requirements section
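Querying a deployed subgraph at a historical block can be sketched as below. The endpoint URL, the `positions` entity, and its fields are placeholders that depend entirely on your own subgraph schema; only the `block: { number: ... }` argument is standard GraphQL behavior for The Graph-style subgraphs.

```typescript
// Placeholder endpoint -- substitute your deployed Goldsky subgraph URL.
const SUBGRAPH_URL =
  "https://api.goldsky.com/api/public/project_id/subgraphs/name/version/gn";

// Build a paginated query pinned to a historical block, so results
// match the hourly snapshot rather than the chain head.
export function buildPositionsQuery(
  blockNumber: number,
  first = 1000,
  skip = 0
): string {
  return `{
    positions(first: ${first}, skip: ${skip}, block: { number: ${blockNumber} }) {
      id
      user
      market
      balance
    }
  }`;
}

export async function queryPositions(blockNumber: number) {
  const res = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: buildPositionsQuery(blockNumber) }),
  });
  const json = (await res.json()) as { data?: { positions?: unknown[] } };
  return json.data?.positions ?? [];
}
```

For large position sets, loop with increasing `skip` (or id-based pagination) until a page comes back smaller than `first`.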
In this repo, there is an adapter example. This adapter aims to get position data from the subgraph and calculate the TVL by user. The main script generates the output as a CSV file.