The dcrdata repository is a collection of golang packages and apps for Decred data collection, storage, and presentation.
```
../dcrdata          The dcrdata daemon.
├── blockdata       Package blockdata.
├── cmd
│   ├── rebuilddb   rebuilddb utility, for SQLite backend.
│   ├── rebuilddb2  rebuilddb2 utility, for PostgreSQL backend.
│   └── scanblocks  scanblocks utility.
├── dcrdataapi      Package dcrdataapi for golang API clients.
├── db
│   ├── dbtypes     Package dbtypes with common data types.
│   ├── dcrpg       Package dcrpg providing PostgreSQL backend.
│   └── dcrsqlite   Package dcrsqlite providing SQLite backend.
├── public          Public resources for block explorer (css, js, etc.).
├── explorer        Package explorer, powering the block explorer.
├── mempool         Package mempool.
├── rpcutils        Package rpcutils.
├── semver          Package semver.
├── stakedb         Package stakedb, for tracking tickets.
├── txhelpers       Package txhelpers.
└── views           HTML templates for block explorer.
```
- Go 1.9.x or 1.10.x.
- Running `dcrd` (>= 1.1.2) synchronized to the current best block on the network.
The following instructions assume a Unix-like shell (e.g. bash).
- Verify Go installation:

  ```
  go env GOROOT GOPATH
  ```

- Ensure `$GOPATH/bin` is on your `$PATH`.
- Install `dep`, the dependency management tool:

  ```
  go get -u -v github.com/golang/dep/cmd/dep
  ```
- Clone the dcrdata repository. It must be cloned into the following directory:

  ```
  git clone https://github.com/decred/dcrdata $GOPATH/src/github.com/decred/dcrdata
  ```
- Fetch dependencies, and build the `dcrdata` executable:

  ```
  cd $GOPATH/src/github.com/decred/dcrdata
  dep ensure
  # build dcrdata executable in workspace:
  go build
  ```
The SQLite driver uses cgo, which requires a C compiler (e.g. gcc) to compile the C sources. On Windows this is easily handled with MSYS2 (download and install the MinGW-w64 gcc packages).
Tip: If you receive other build errors, it may be due to "vendor" directories left by `dep` builds of dependencies such as dcrwallet. You may safely delete vendor folders and run `dep ensure` again.
Presently, the dcrdata executable, its config file, logs, data files, and web interface resources are all in the same folder. An option to specify the application data folder will be added in the future.
As with the config file, the "public" and "views" folders must be in the same folder as the `dcrdata` executable.
First, update the repository (assuming you have `master` checked out):

```
cd $GOPATH/src/github.com/decred/dcrdata
git pull origin master
dep ensure
go build
```
Look carefully for errors with `git pull`, and reset locally modified files if necessary.
Begin with the sample configuration file:

```
cp sample-dcrdata.conf dcrdata.conf
```
Then edit dcrdata.conf with your dcrd RPC settings. See the output of `dcrdata --help` for a list of all options and their default values.
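For orientation, a minimal `dcrdata.conf` might look something like the sketch below. The option names here are assumptions for illustration; consult `sample-dcrdata.conf` and `dcrdata --help` for the authoritative option names and defaults.

```ini
; Hypothetical example -- verify option names against sample-dcrdata.conf.
dcrduser=yourrpcuser
dcrdpass=yourrpcpassword
dcrdserv=127.0.0.1:9109
dcrdcert=/home/you/.dcrd/rpc.cert
```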
If dcrdata has not previously been run with the PostgreSQL database backend, it is necessary to perform a bulk import of blockchain data and generate table indexes. This will be done automatically by dcrdata, but the PostgreSQL tables may also be generated with the `rebuilddb2` command line tool:
- Create the dcrdata user and database in PostgreSQL (tables will be created automatically).
- Set your PostgreSQL credentials and host in both `./cmd/rebuilddb2/rebuilddb2.conf` and `./dcrdata.conf`.
- Run `rebuilddb2 -u` to bulk import and index.
- In case of errors, or schema changes, the tables may be dropped with `rebuilddb2 -D`.
Launch the dcrdata daemon and allow the databases to process new blocks. Both SQLite and PostgreSQL synchronization require about an hour the first time dcrdata is run, but they will be done concurrently. On subsequent launches, only blocks new to dcrdata are scanned.
```
./dcrdata    # don't forget to configure dcrdata.conf in this folder
```
The "public" and "views" folders must be in the same folder as the dcrdata
executable.
The root of the repository is the `main` package for the dcrdata app, which has several components including:
- Block explorer (web interface).
- Blockchain monitoring and data collection.
- Mempool monitoring and reporting.
- Data storage in a durable database (SQLite presently).
- RESTful JSON API over HTTP(S).
After dcrdata syncs with the blockchain server via RPC, by default it will begin listening for HTTP connections on `http://127.0.0.1:7777/`. This means it starts a web server listening on IPv4 localhost, port 7777. Both the interface and port are configurable. The block explorer and the JSON API are both provided by the server on this port. See JSON REST API for details.
Note that while dcrdata can be started with HTTPS support, it is recommended to employ a reverse proxy such as nginx. See sample-nginx.conf for an example nginx configuration.
A new database backend using PostgreSQL was introduced in v0.9.0 that provides expanded functionality. However, initial population of the database takes additional time and tens of gigabytes of disk storage space. To disable the PostgreSQL backend (and the expanded functionality), dcrdata may be started with the `--lite` (`-l` for short) command line flag.
The API serves JSON data over HTTP(S). All API endpoints are currently prefixed with `/api` (e.g. `http://localhost:7777/api/stake`), but this may be configurable in the future.
| Best block | |
| --- | --- |
| Summary | `/block/best` |
| Stake info | `/block/best/pos` |
| Header | `/block/best/header` |
| Hash | `/block/best/hash` |
| Height | `/block/best/height` |
| Size | `/block/best/size` |
| Transactions | `/block/best/tx` |
| Transactions Count | `/block/best/tx/count` |
| Verbose block result | `/block/best/verbose` |
| Block X (block index) | |
| --- | --- |
| Summary | `/block/X` |
| Stake info | `/block/X/pos` |
| Header | `/block/X/header` |
| Hash | `/block/X/hash` |
| Size | `/block/X/size` |
| Transactions | `/block/X/tx` |
| Transactions Count | `/block/X/tx/count` |
| Verbose block result | `/block/X/verbose` |
| Block H (block hash) | |
| --- | --- |
| Summary | `/block/hash/H` |
| Stake info | `/block/hash/H/pos` |
| Header | `/block/hash/H/header` |
| Height | `/block/hash/H/height` |
| Size | `/block/hash/H/size` |
| Transactions | `/block/hash/H/tx` |
| Transactions Count | `/block/hash/H/tx/count` |
| Verbose block result | `/block/hash/H/verbose` |
| Block range (X < Y) | |
| --- | --- |
| Summary array for blocks on [X,Y] | `/block/range/X/Y` |
| Summary array with block index step S | `/block/range/X/Y/S` |
| Size (bytes) array | `/block/range/X/Y/size` |
| Size array with step S | `/block/range/X/Y/S/size` |
| Transaction T (transaction id) | |
| --- | --- |
| Transaction Details | `/tx/T` |
| Inputs | `/tx/T/in` |
| Details for input at index X | `/tx/T/in/X` |
| Outputs | `/tx/T/out` |
| Details for output at index X | `/tx/T/out/X` |
| Address A | |
| --- | --- |
| Summary of last 10 transactions | `/address/A` |
| Verbose transaction result for last 10 transactions | `/address/A/raw` |
| Summary of last N transactions | `/address/A/count/N` |
| Verbose transaction result for last N transactions | `/address/A/count/N/raw` |
| Stake Difficulty (Ticket Price) | |
| --- | --- |
| Current sdiff and estimates | `/stake/diff` |
| Sdiff for block X | `/stake/diff/b/X` |
| Sdiff for block range [X,Y] (X <= Y) | `/stake/diff/r/X/Y` |
| Current sdiff separately | `/stake/diff/current` |
| Estimates separately | `/stake/diff/estimates` |
| Ticket Pool | |
| --- | --- |
| Current pool info (size, total value, and average price) | `/stake/pool` |
| Current ticket pool, in a JSON object with a "tickets" key holding an array of ticket hashes | `/stake/pool/full` |
| Pool info for block X | `/stake/pool/b/X` |
| Full ticket pool at block height or hash H | `/stake/pool/b/H/full` |
| Pool info for block range [X,Y] (X <= Y) | `/stake/pool/r/X/Y?arrays=[true\|false]`* |
The full ticket pool endpoints accept the URL query `?sort=[true|false]` for requesting the tickets array in lexicographical order. If a sorted list or a list with a deterministic order is not required, using `sort=false` will reduce server load and latency. However, be aware that the ticket order will be random, and will change each time the tickets are requested.
\*For the pool info block range endpoint that accepts the `arrays` URL query, a value of `true` will put all pool values and pool sizes into separate arrays, rather than having a single array of pool info JSON objects. This may make parsing more efficient for the client.
| Vote and Agenda Info | |
| --- | --- |
| The current agenda and its status | `/stake/vote/info` |
| Mempool | |
| --- | --- |
| Ticket fee rate summary | `/mempool/sstx` |
| Ticket fee rate list (all) | `/mempool/sstx/fees` |
| Ticket fee rate list (N highest) | `/mempool/sstx/fees/N` |
| Detailed ticket list (fee, hash, size, age, etc.) | `/mempool/sstx/details` |
| Detailed ticket list (N highest fee rates) | `/mempool/sstx/details/N` |
| Other | |
| --- | --- |
| Status | `/status` |
| Endpoint list (always indented) | `/list` |
| Directory | `/directory` |
All JSON endpoints accept the URL query `indent=[true|false]`. For example, `/stake/diff?indent=true`. By default, indentation is off. The characters to use for indentation may be specified with the `indentjson` string configuration option.
Although there is mempool data collection and serving, it is very important to keep in mind that the mempool in your node (dcrd) is not likely to be the same as other nodes' mempool. Also, your mempool is cleared out when you shut down dcrd. So, if you have recently started dcrd (e.g. after the start of the current ticket price window), your mempool will be missing transactions that other nodes have.
`rebuilddb` is a CLI app that performs a full blockchain scan that fills past block data into a SQLite database. This functionality is included in the startup of the dcrdata daemon, but may be called alone with `rebuilddb`.
`rebuilddb2` is a CLI app used for maintenance of dcrdata's `dcrpg` database (a.k.a. DB v2) that uses PostgreSQL to store a nearly complete record of the Decred blockchain data. See the README.md for `rebuilddb2` for important usage information.
`scanblocks` is a CLI app to scan the blockchain and save data into a JSON file. More details are in its own README. The repository also includes a shell script, jsonarray2csv.sh, to convert the result into a comma-separated value (CSV) file.
Package `dcrdataapi` defines the data types, with JSON tags, used by the JSON API. This facilitates authoring of robust golang clients of the API.
Package `dbtypes` defines the data types used by the DB backends to model the block, transaction, and related blockchain data structures. Functions for converting from standard Decred data types (e.g. `wire.MsgBlock`) are also provided.
Package `rpcutils` includes helper functions for interacting with a `rpcclient.Client`.
Package `stakedb` defines the `StakeDatabase` and `ChainMonitor` types for efficiently tracking live tickets, with the primary purpose of computing ticket pool value quickly. It uses the `database.DB` type from `github.com/decred/dcrd/database` with an ffldb storage backend from `github.com/decred/dcrd/database/ffldb`. It also makes use of the `stake.Node` type from `github.com/decred/dcrd/blockchain/stake`. The `ChainMonitor` type handles connecting new blocks and chain reorganization in response to notifications from dcrd.
Package `txhelpers` includes helper functions for working with the common types `dcrutil.Tx`, `dcrutil.Block`, `chainhash.Hash`, and others.
Packages `blockdata` and `dcrsqlite` are currently designed only for internal use by other dcrdata packages, but they may be of general value in the future.
`blockdata` defines:

- The `chainMonitor` type and its `BlockConnectedHandler()` method that handles block-connected notifications and triggers data collection and storage.
- The `BlockData` type and methods for converting to API types.
- The `blockDataCollector` type and its `Collect()` and `CollectHash()` methods that are called by the chain monitor when a new block is detected.
- The `BlockDataSaver` interface required by `chainMonitor` for storage of collected data.
`dcrpg` defines:

- The `ChainDB` type, which is the primary exported type from `dcrpg`, providing an interface for a PostgreSQL database.
- A large set of lower-level functions to perform a range of queries given a `*sql.DB` instance and various parameters.
- The internal package, which contains the raw SQL statements.
`dcrsqlite` defines:

- A `sql.DB` wrapper type (`DB`) with the necessary SQLite queries for storage and retrieval of block and stake data.
- The `wiredDB` type, intended to satisfy the `APIDataSource` interface used by the dcrdata app's API. The block header is not stored in the DB, so an RPC client is used by `wiredDB` to get it on demand. `wiredDB` also includes methods to resync the database file.
Package `mempool` defines a `mempoolMonitor` type that can monitor a node's mempool using the `OnTxAccepted` notification handler to send newly received transaction hashes via a designated channel. Ticket purchases (SSTx) are triggers for mempool data collection, which is handled by the `mempoolDataCollector` type, and data storage, which is handled by any number of objects implementing the `MempoolDataSaver` interface.
See the GitHub issue tracker and the project milestones.
Yes, please! See the CONTRIBUTING.md file for details, but here's the gist of it:

- Fork the repo.
- Create a branch for your work (`git checkout -b cool-stuff`).
- Code something great.
- Commit and push to your repo.
- Create a pull request.
Note that all dcrdata.org community and team members are expected to adhere to the code of conduct, described in the CODE_OF_CONDUCT file.
Also, come chat with us on Slack!
This project is licensed under the ISC License. See the LICENSE file for details.