Merge pull request #198 from vivian1912/reorganize-the-doc
update toolkit
ethan1844 authored Nov 14, 2023
2 parents 05ffb64 + 229df6d commit 4e20321
Showing 11 changed files with 453 additions and 116 deletions.
2 changes: 1 addition & 1 deletion docs/api/http.md
@@ -456,7 +456,7 @@ Return:Unsigned transaction
Description: Cancel all unstakings; all unstaked funds still in the waiting period will be re-staked, and all unstaked funds that have exceeded the 14-day waiting period will be automatically withdrawn to the owner’s account

```
curl -X POST http://127.0.0.1:8090/wallet/unfreezebalancev2 -d
curl -X POST http://127.0.0.1:8090/wallet/cancelallunfreezev2 -d
'{
"owner_address": "41e472f387585c2b58bc2c9bb4492bc1f17342cd1"
}'
17 changes: 17 additions & 0 deletions docs/api/rpc.md
@@ -511,3 +511,20 @@ Nodes: FullNode
rpc CancelAllUnfreezeV2 (CancelAllUnfreezeV2Contract) returns (TransactionExtention) {}
```
Nodes: FullNode

**82.  Get bandwidth unit price**
```protobuf
rpc GetBandwidthPrices (EmptyMessage) returns (PricesResponseMessage) {}
```
Nodes: FullNode

**83.  Get energy unit price**
```protobuf
rpc GetEnergyPrices (EmptyMessage) returns (PricesResponseMessage) {}
```
Nodes: FullNode

**84.  Get transaction memo fee**
```protobuf
rpc GetMemoFee (EmptyMessage) returns (PricesResponseMessage) {}
```
Nodes: FullNode
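
These three interfaces take no parameters. For illustration only, here is a minimal sketch of querying them through a FullNode's HTTP gateway, assuming the HTTP endpoint names mirror the gRPC method names and the default HTTP port 8090:

```shell
# assumed HTTP equivalents of the price-query RPCs; verify the endpoint names against your node's HTTP API
curl -X GET http://127.0.0.1:8090/wallet/getbandwidthprices
curl -X GET http://127.0.0.1:8090/wallet/getenergyprices
curl -X GET http://127.0.0.1:8090/wallet/getmemofee
```
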
24 changes: 12 additions & 12 deletions docs/developers/archive-manifest.md
@@ -21,14 +21,14 @@ For more design details, please refer to: [TIP298](https://github.com/tronprotoc

### How to get
- Build it yourself.
Under the java-tron directory, execute `./gradlew build`; you can get ArchiveManifest.jar under `build/libs/`.
Under the java-tron directory, execute `./gradlew build`; you can get Toolkit.jar under `build/libs/`.
- Download directly.
[Links](https://github.com/tronprotocol/java-tron/releases)

### Use Steps

- 1. Stop the FullNode service.
- 2. Execute the ArchiveManifest plugin.
- 2. Execute the Toolkit command.
- 3. Start the FullNode service.

> Note: ``Step 2`` is not required every time, but it is recommended to run it every time to keep the manifest optimized.
@@ -38,7 +38,7 @@ For more design details, please refer to: [TIP298](https://github.com/tronprotoc
After FullNode runs, the default database directory is `output-directory`; the optimization plugin works on the `output-directory/database` directory.
Developers can choose one of the following two ways according to the actual situation.

#### 1. Use it Independently
#### Use it Independently

##### 1. Stop the FullNode service

@@ -48,16 +48,16 @@ Query the pid: `ps -ef |grep FullNode.jar |grep -v grep |awk '{print $2}'`
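
For illustration, a minimal sketch of a graceful stop, assuming that shutting the node down with SIGTERM is acceptable in your deployment:

```shell
# find the FullNode pid and send SIGTERM so the node can flush its state and exit cleanly
kill -15 $(ps -ef | grep FullNode.jar | grep -v grep | awk '{print $2}')
```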



##### 2. Execute the ArchiveManifest plugin
##### 2. Execute the Toolkit command

```shell
# Full command
java -jar ArchiveManifest.jar [-b batchSize] [-d databaseDirectory] [-m manifestSize] [-h]
java -jar Toolkit.jar [-b batchSize] [-d databaseDirectory] [-m manifestSize] [-h]
# examples
java -jar ArchiveManifest.jar #1. use default settings
java -jar ArchiveManifest.jar -d /tmp/db/database #2. Specify the database directory as /tmp/db/database
java -jar ArchiveManifest.jar -b 64000 #3. Specify the batch size to 64000 when optimizing Manifest
java -jar ArchiveManifest.jar -m 128 #4. Specify optimization only when Manifest exceeds 128M
java -jar Toolkit.jar #1. use default settings
java -jar Toolkit.jar -d /tmp/db/database #2. Specify the database directory as /tmp/db/database
java -jar Toolkit.jar -b 64000 #3. Specify the batch size to 64000 when optimizing Manifest
java -jar Toolkit.jar -m 128 #4. Specify optimization only when Manifest exceeds 128M
```

After the command is executed, `archive.log` will be generated in the `./logs` directory, where you can see the result.
@@ -78,7 +78,7 @@ nohup java -Xmx24g -XX:+UseConcMarkSweepGC -jar FullNode.jar -c main_net_config.
nohup java -Xmx24g -XX:+UseConcMarkSweepGC -jar FullNode.jar -p private key --witness -c main_net_config.conf </dev/null &>/dev/null &
```

#### 2. Integrated startup script
#### Integrated startup script


```shell
@@ -112,7 +112,7 @@ rebuildManifest() {

buildManifest() {

ARCHIVE_JAR='ArchiveManifest.jar'
ARCHIVE_JAR='Toolkit.jar'

java -jar $ARCHIVE_JAR $ALL_OPT

@@ -246,7 +246,7 @@ stopService

checkPath

#2.Execute the ArchiveManifest plugin
#2.Execute the Toolkit plugin
if [[ 0 == $? ]] ; then
rebuildManifest
else
85 changes: 6 additions & 79 deletions docs/developers/litefullnode.md
@@ -1,85 +1,12 @@
# Lite FullNode
Lite FullNode runs the same code as FullNode; the difference is that Lite FullNode starts only from a state data snapshot, which contains all account state data and the historical data of the last 256 blocks. Moreover, while the node is running, only the data related to the state data snapshot is stored, and the historical data of blocks and transactions is not saved. Therefore, Lite FullNode has the advantages of occupying less disk space and starting up fast, but it does not provide historical block and transaction data queries, and only provides part of the HTTP API and GRPC API of FullNode. For APIs that are not supported by Lite FullNode, please refer to [HTTP](https://github.com/tronprotocol/java-tron/blob/develop/framework/src/main/java/org/tron/core/services/filter/LiteFnQueryHttpFilter.java), [GRPC](https://github.com/tronprotocol/java-tron/blob/develop/framework/src/main/java/org/tron/core/services/filter/LiteFnQueryGrpcInterceptor.java). These APIs can be forced open by configuring `openHistoryQueryWhenLiteFN = true` in the configuration file, but this is not recommended.
Lite FullNode runs the same code as FullNode; the difference is that Lite FullNode starts only from a state data snapshot, which contains all account state data and the historical data of the latest 65536 blocks. The state data snapshot is small, only about 3% of the FullNode data. Therefore, Lite FullNode has the advantages of occupying less disk space and starting up fast, but by default it does not provide historical block and transaction data queries, and only provides part of the HTTP API and GRPC API of FullNode. For APIs that are not supported by Lite FullNode, please refer to [HTTP](https://github.com/tronprotocol/java-tron/blob/develop/framework/src/main/java/org/tron/core/services/filter/LiteFnQueryHttpFilter.java), [GRPC](https://github.com/tronprotocol/java-tron/blob/develop/framework/src/main/java/org/tron/core/services/filter/LiteFnQueryGrpcInterceptor.java). These APIs can be opened by configuring `openHistoryQueryWhenLiteFN = true` in the configuration file: after startup, the data saved by a Lite FullNode is exactly the same as that of a FullNode, so once this item is turned on, the Lite FullNode supports querying the block data synchronized after the node started, but still does not support querying the block data from before the node started.

Therefore, if developers only need to use nodes for block synchronization and for processing and broadcasting transactions, then Lite FullNode will be a better choice.
Therefore, if developers only need to use a node for block synchronization and for processing and broadcasting transactions, or only need to query the blocks and transactions synchronized after the node starts up, then Lite FullNode will be a better choice.
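
For illustration, a minimal sketch of checking whether the item is set, assuming the node configuration file is named `main_net_config.conf` and the setting sits inside its `node` block (both are assumptions; verify against your own configuration):

```shell
# look for the history-query switch in the node configuration (assumed file name and block)
grep -n "openHistoryQueryWhenLiteFN" main_net_config.conf
# expected form inside the config file:
#   node { openHistoryQueryWhenLiteFN = true }
```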

The deployment steps of a Lite FullNode are the same as a FullNode's. The difference is that the Lite FullNode database needs to be obtained first: you can directly download the Lite FullNode data snapshot from the [public backup data](../../using_javatron/backup_restore/#lite-fullnode-data-snapshot) and use it directly, or you can use the Lite FullNode tool to convert a FullNode database into a Lite FullNode database. The use of this tool is described in detail below.
## Lite FullNode Deployment
The deployment steps and startup command of a Lite FullNode are the same as a FullNode's; please refer to [Deployment Instructions](../../using_javatron/installing_javatron/) to deploy a Lite FullNode. The only difference is the database: you need to obtain the Lite FullNode database. You can download the Lite FullNode data snapshot from the [public backup data](../../using_javatron/backup_restore/#lite-fullnode-data-snapshot) and use it directly, or use the [Lite FullNode data pruning tool](../../using_javatron/toolkit/#lite-fullnode-data-pruning) to convert a FullNode database into a Lite FullNode database.


# Lite FullNode Tool

Lite FullNode Tool is used to split the database of a FullNode into a `Snapshot dataset` and a `History dataset`.

- `Snapshot dataset`: the minimum dataset for quick startup of the Lite FullNode.
- `History dataset`: the archive dataset that is used for historical data queries.

Before using this tool for any operation, you need to stop the currently running FullNode process first. This tool splits the complete data into the two datasets according to the current `latest block height` (latest_block_number). A Lite FullNode launched from a snapshot dataset does not support querying historical data prior to this block height. The tool also provides the ability to merge a history dataset back into a snapshot dataset.

For more design details, please refer to: [TIP-128](https://github.com/tronprotocol/tips/issues/128).

### Obtain Lite Fullnode Tool
`LiteFullNodeTool.jar` can be obtained by compiling the java-tron source code; the steps are as follows:

1. Obtain java-tron source code
```
$ git clone https://github.com/tronprotocol/java-tron.git
$ git checkout -t origin/master
```
2. Compile
```
$ cd java-tron
$ ./gradlew clean build -x test
```

After compiling, `LiteFullNodeTool.jar` will be generated in the `java-tron/build/libs/` directory.



### Use Lite Fullnode tool

**Options**

This tool can split off a `Snapshot Dataset` or a `History Dataset` independently, and also provides a merge function.

- `--operation | -o`: [ split | merge ] specifies the operation as either to split or to merge
- `--type | -t`: [ snapshot | history ] is used only with `split` to specify the type of the dataset to be split; snapshot refers to Snapshot Dataset and history refers to History Dataset.
- `--fn-data-path`: FullNode database directory
- `--dataset-path`: dataset directory. When the operation is `split`, `dataset-path` is the path that stores the `Snapshot Dataset` or `History Dataset`; otherwise, `dataset-path` should be the `History Dataset` path.

**Examples**

Start a new FullNode using the default config, then an `output-directory` will be produced in the current directory.
`output-directory` contains a sub-directory named `database` which is the database to be split.

* **Split and get a `Snapshot Dataset`**

First, stop the FullNode and execute:
```
// for simplicity, put the snapshot dataset in the /tmp directory
$ java -jar LiteFullNodeTool.jar -o split -t snapshot --fn-data-path output-directory/database --dataset-path /tmp
```
then a `snapshot` directory will be generated in `/tmp`; pack this directory and copy it to the machine that will run the Lite FullNode.
Do not forget to rename the directory from `snapshot` to `database`
(the default value of `storage.db.directory` is `database`; make sure to rename the snapshot directory to the configured value).

* **Split and get a `History Dataset`**

If historical data queries are needed, a `History dataset` should be generated and merged into the Lite FullNode.
```
// for simplicity, put the history dataset in the /tmp directory
$ java -jar LiteFullNodeTool.jar -o split -t history --fn-data-path output-directory/database --dataset-path /tmp
```
A `history` directory will be generated in `/tmp`; pack this directory and copy it to a Lite FullNode.
A `History dataset` always takes up a large amount of storage, so make sure the disk has enough space to store it.

* **Merge `History Dataset` and `Snapshot Dataset`**

Both `History Dataset` and `Snapshot Dataset` have an info.properties file to identify the block height from which they are segmented.
Make sure that the `split_block_num` in `History Dataset` is not less than the corresponding value in the `Snapshot Dataset`.

After getting the `History dataset`, the Lite FullNode can merge the `History dataset` and become a real FullNode.
```
// for simplicity, assume the `History dataset` is located in /tmp
$ java -jar LiteFullNodeTool.jar -o merge --fn-data-path output-directory/database --dataset-path /tmp/history
```
## Lite FullNode Maintenance
Since a Lite FullNode saves the same data as a FullNode after startup, its data volume is very small at startup but then grows at the same rate as a FullNode's, so it may be necessary to prune the data periodically. Pruning Lite FullNode data also uses the [Lite FullNode data pruning tool](../../using_javatron/toolkit/#lite-fullnode-data-pruning) to split the Lite FullNode data into a snapshot dataset, which is the pruned Lite FullNode data.
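
For illustration, a pruning pass can reuse the split flow shown above; the sketch below follows the `LiteFullNodeTool.jar` flags documented on this page (the Toolkit-based pruning tool may use a different command form):

```shell
# stop the Lite FullNode first, then re-split its database into a fresh snapshot dataset (illustrative paths)
java -jar LiteFullNodeTool.jar -o split -t snapshot --fn-data-path output-directory/database --dataset-path /tmp
# replace the old database with /tmp/snapshot (renamed to `database`) and restart the node
```
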
2 changes: 1 addition & 1 deletion docs/mechanism-algorithm/resource.md
@@ -25,7 +25,7 @@ Such as if the number of bytes of a transaction is 200, so this transaction cons
### 1. How to Get Bandwidth Points

1. By staking TRX to get Bandwidth Points: Bandwidth Points = the amount of TRX self-staked / the total amount of TRX staked for Bandwidth Points in the network * 43_200_000_000, as illustrated in the worked example below
2. Every account has a fixed amount of free Bandwidth Points(1500) every day
2. Every account has a fixed amount of free Bandwidth Points(600) every day
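
A worked example with assumed figures (not real network data): if an account stakes 1,000,000 TRX for Bandwidth and the whole network has 10,000,000,000 TRX staked for Bandwidth Points, the account obtains 1,000,000 / 10,000,000,000 * 43_200_000_000 = 4,320,000 Bandwidth Points:

```shell
# illustrative figures only
echo $(( 1000000 * 43200000000 / 10000000000 ))   # prints 4320000
```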

### 2. Bandwidth Points Consumption
