📂 Collect Binance tick data over WebSocket and automatically store it in AWS S3.
- Install Docker 🐳
- Set the environment variables:
# timezone (ex. Asia/Seoul)
TZ=UTC
# market type: { SPOT or FUTURE }
market=SPOT
# one or more symbols, {base asset}{quote asset} (ex. BTCUSDT), comma-separated
symbols=BTCUSDT,ETHUSDT
# condition for when the current file should be closed and a new one started,
# given as a human-friendly string
# ex) "1 GB", "4 days", "10h", "monthly", "18:00", "sunday", "monday at 12:00"
rotation=00:00
# aws s3 settings
use_s3=true
aws_access_key=YOUR_AWS_ACCESS_KEY
aws_secret_key=YOUR_AWS_SECRET_KEY
s3_bucket=YOUR_S3_BUCKET_NAME
s3_bucket_path=data/
# telegram settings
use_telegram=false
telegram_token=YOUR_TELEGRAM_BOT_TOKEN
telegram_chat_id=YOUR_TELEGRAM_CHAT_ID
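As a rough illustration of how the `symbols` and `market` settings might drive the WebSocket subscription (this is a sketch, not the project's actual code): Binance combined streams use lowercase `<symbol>@aggTrade` names, and the futures host `fstream.binance.com` is an assumption here.

```python
# Sketch: build a Binance combined-stream URL from the env-style settings.
def stream_url(symbols_env: str, market: str = "SPOT") -> str:
    # SPOT and FUTURE markets use different hosts (futures host assumed).
    host = "stream.binance.com:9443" if market == "SPOT" else "fstream.binance.com"
    # Stream names are lowercase "<symbol>@aggTrade", joined with "/".
    streams = "/".join(f"{s.strip().lower()}@aggTrade" for s in symbols_env.split(","))
    return f"wss://{host}/stream?streams={streams}"

print(stream_url("BTCUSDT,ETHUSDT"))
# wss://stream.binance.com:9443/stream?streams=btcusdt@aggTrade/ethusdt@aggTrade
```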
- Edit `docker-compose.yaml`, line 20:
ofelia.job-exec.app.schedule: "0 5 0 * *"
With the default setting above, tick data is uploaded to S3 every day at 00:05:00 (five minutes after midnight).
The scheduling format is the same as the Go implementation of cron, e.g. @every 10s or 0 0 1 * * (every night at 1 AM). Note: the format starts with seconds instead of minutes.
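Reading the default schedule with the seconds-first format described above (field breakdown shown as comments; this is an interpretation of the note above, not taken from the ofelia docs):

```yaml
# "0 5 0 * *" read seconds-first:
#  0  → second
#  5  → minute
#  0  → hour (midnight)
#  *  → day of month (every day)
#  *  → month (every month)
ofelia.job-exec.app.schedule: "0 5 0 * *"
```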
$ docker-compose build
$ docker-compose up -d
- Tick data is stored as CSV files.
// The Aggregate Trade Streams push trade information that is aggregated for a single taker order.
{
"e": "aggTrade", // Event type
"E": 123456789, // Event time
"s": "BTCUSDT", // Symbol
"a": 5933014, // Aggregate trade ID
"p": "0.001", // Price
"q": "100", // Quantity
"f": 100, // First trade ID
"l": 105, // Last trade ID
"T": 123456785, // Trade time
"m": true // Is the buyer the market maker?
}
// Example rows from the resulting .csv file
aggTrade,1620744948060,BTCUSDT,476905218,55845.13,1.887,777675070,777675082,1620744948055,True
aggTrade,1620744948060,BTCUSDT,476905219,55844.48,0.191,777675083,777675083,1620744948055,True
...
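The CSV columns appear to follow the JSON field order `e,E,s,a,p,q,f,l,T,m` shown above (inferred from the sample rows, not confirmed by the source). A minimal sketch of flattening one aggTrade message into that row format:

```python
import csv
import io

# Column order inferred from the sample CSV rows: e,E,s,a,p,q,f,l,T,m
FIELDS = ["e", "E", "s", "a", "p", "q", "f", "l", "T", "m"]

def aggtrade_to_row(msg: dict) -> str:
    """Flatten one aggTrade message dict into a single CSV line (no newline)."""
    buf = io.StringIO()
    csv.writer(buf, lineterminator="").writerow([msg[k] for k in FIELDS])
    return buf.getvalue()

msg = {"e": "aggTrade", "E": 123456789, "s": "BTCUSDT", "a": 5933014,
       "p": "0.001", "q": "100", "f": 100, "l": 105, "T": 123456785, "m": True}
print(aggtrade_to_row(msg))
# aggTrade,123456789,BTCUSDT,5933014,0.001,100,100,105,123456785,True
```

Note that Python renders the boolean `m` as `True`, matching the capitalization in the sample rows above.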
- Binance docs for aggregate trade streams: Link