diff --git a/docs/api/http.md b/docs/api/http.md
index d062f948..eba11afd 100644
--- a/docs/api/http.md
+++ b/docs/api/http.md
@@ -456,7 +456,7 @@ Return: Unsigned transaction
Description: Cancel unstakings, all unstaked funds still in the waiting period will be re-staked, all unstaked funds that exceeded the 14-day waiting period will be automatically withdrawn to the owner’s account
```
-curl -X POST http://127.0.0.1:8090/wallet/unfreezebalancev2 -d
+curl -X POST http://127.0.0.1:8090/wallet/cancelallunfreezev2 -d
'{
"owner_address": "41e472f387585c2b58bc2c9bb4492bc1f17342cd1"
}'
diff --git a/docs/api/rpc.md b/docs/api/rpc.md
index 4d3eb57f..3f800f15 100644
--- a/docs/api/rpc.md
+++ b/docs/api/rpc.md
@@ -511,3 +511,20 @@ Nodes: FullNode
rpc CancelAllUnfreezeV2 (CancelAllUnfreezeV2Contract) returns (TransactionExtention) {}
```
Nodes: FullNode
+
+**82.  Get bandwidth unit price**
+```protobuf
+rpc GetBandwidthPrices (EmptyMessage) returns (PricesResponseMessage) {}
+```
+Nodes: FullNode
+
+**83.  Get energy unit price**
+```protobuf
+rpc GetEnergyPrices (EmptyMessage) returns (PricesResponseMessage) {}
+```
+Nodes: FullNode
+
+**84.  Get transaction memo fee**
+```protobuf
+rpc GetMemoFee (EmptyMessage) returns (PricesResponseMessage) {}
+```
+Nodes: FullNode
\ No newline at end of file
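+
+A quick sketch of reading the same data over the matching HTTP endpoints (assuming a local FullNode with the default HTTP port 8090; response fields follow PricesResponseMessage):
+```shell
+curl http://127.0.0.1:8090/wallet/getbandwidthprices
+curl http://127.0.0.1:8090/wallet/getenergyprices
+curl http://127.0.0.1:8090/wallet/getmemofee
+```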
diff --git a/docs/developers/archive-manifest.md b/docs/developers/archive-manifest.md
index b0f77920..526252cd 100644
--- a/docs/developers/archive-manifest.md
+++ b/docs/developers/archive-manifest.md
@@ -21,14 +21,14 @@ For more design details, please refer to: [TIP298](https://github.com/tronprotoc
### How to get

- build by yourself.
- Under java-tron, execute ``. /gradlew build``, you can get ArchiveManifest.jar under `build/libs/`.
+ Under java-tron, execute ``./gradlew build``, you can get Toolkit.jar under `build/libs/`.
- Download directly.
[Links](https://github.com/tronprotocol/java-tron/releases)

### Use Steps

- 1. Stop the FullNode service.
-- 2. Execute the ArchiveManifest plugin.
+- 2. Execute the Toolkit command.
- 3. Start the FullNode service.

> Note: ``Step ii`` is not required every time, but it is recommended to run it every time to optimize the experience.

@@ -38,7 +38,7 @@ For more design details, please refer to: [TIP298](https://github.com/tronprotoc
After FullNode runs, the default database directory: `output-directory`, the optimization plugin will work with the `output-directory/database` directory.
Developers can choose one of the following two ways according to actual situation.

-#### 1. Use it Independently
+#### Use it Independently

##### 1.Stop the FullNode service

Query the pid: `ps -ef |grep FullNode.jar |grep -v grep |awk '{print $2}'`

-##### 2.Execute the ArchiveManifest plugin
+##### 2.Execute the Toolkit command

```shell
# Full command
-java -jar ArchiveManifest.jar [-b batchSize] [-d databaseDirectory] [-m manifestSize] [-h]
+java -jar Toolkit.jar [-b batchSize] [-d databaseDirectory] [-m manifestSize] [-h]
# examples
- java -jar ArchiveManifest.jar #1. use default settings
- java -jar ArchiveManifest.jar -d /tmp/db/database #2. Specify the database directory as /tmp/db/database
- java -jar ArchiveManifest.jar -b 64000 #3. Specify the batch size to 64000 when optimizing Manifest
- java -jar ArchiveManifest.jar -m 128 #4. Specify optimization only when Manifest exceeds 128M
+ java -jar Toolkit.jar #1. use default settings
+ java -jar Toolkit.jar -d /tmp/db/database #2. Specify the database directory as /tmp/db/database
+ java -jar Toolkit.jar -b 64000 #3. Specify the batch size to 64000 when optimizing Manifest
+ java -jar Toolkit.jar -m 128 #4. Specify optimization only when Manifest exceeds 128M
```

After the command is executed, `archive.log` will be generated in the `./logs` directory, you can see the result.

@@ -78,7 +78,7 @@ nohup java -Xmx24g -XX:+UseConcMarkSweepGC -jar FullNode.jar -c main_net_config.
nohup java -Xmx24g -XX:+UseConcMarkSweepGC -jar FullNode.jar -p private key --witness -c main_net_config.conf /dev/null &

-#### 2. Integrated startup script
+#### Integrated startup script

```shell
@@ -112,7 +112,7 @@ rebuildManifest() {
buildManifest() {
- ARCHIVE_JAR='ArchiveManifest.jar'
+ ARCHIVE_JAR='Toolkit.jar'
java -jar $ARCHIVE_JAR $ALL_OPT
@@ -246,7 +246,7 @@ stopService
checkPath
-#2.Execute the ArchiveManifest plugin
+#2.Execute the Toolkit plugin
if [[ 0 == $? ]] ; then
rebuildManifest
else
diff --git a/docs/developers/litefullnode.md b/docs/developers/litefullnode.md
index da667c7c..58766b87 100644
--- a/docs/developers/litefullnode.md
+++ b/docs/developers/litefullnode.md
@@ -1,85 +1,12 @@
# Lite FullNode

-Lite FullNode runs the same code with FullNode, the difference is that Lite FullNode only starts based on state data snapshot, which only contains all account state data and historical data of the last 256 blocks. Moreover, during the running of the node, only the data related to the state data snapshot is stored, and the historical data of blocks and transactions are not saved. Therefore, Lite Fullnode has the advantages of occupying less disk space and startting up fast, but it does not provide historical block and transaction data query, and only provides part of HTTP API and GRPC API of fullnode. For APIs that are not supported by Lite Fullnode, please refer to [HTTP]( https://github.com/tronprotocol/java-tron/blob/develop/framework/src/main/java/org/tron/core/services/filter/LiteFnQueryHttpFilter.java), [GRPC](https://github.com/tronprotocol/java-tron/blob/develop/framework/src/main/java/org/tron/core/services/filter/LiteFnQueryGrpcInterceptor.java). These APIs can be forced open by configuring `openHistoryQueryWhenLiteFN = true` in the configuration file, but this is not recommended.
+Lite FullNode runs the same code as FullNode; the difference is that a Lite FullNode starts from a state data snapshot, which contains all account state data plus the historical data of the latest 65536 blocks. The state data snapshot is small, only about 3% of the FullNode data. Therefore, a Lite FullNode has the advantages of occupying less disk space and starting up fast, but by default it does not provide historical block and transaction data queries, and it only provides part of the HTTP API and GRPC API of a FullNode. For the APIs that are not supported by a Lite FullNode, please refer to [HTTP]( https://github.com/tronprotocol/java-tron/blob/develop/framework/src/main/java/org/tron/core/services/filter/LiteFnQueryHttpFilter.java), [GRPC](https://github.com/tronprotocol/java-tron/blob/develop/framework/src/main/java/org/tron/core/services/filter/LiteFnQueryGrpcInterceptor.java). These APIs can be opened by configuring `openHistoryQueryWhenLiteFN = true` in the configuration file: after startup, a Lite FullNode saves exactly the same data as a FullNode, so once this item is turned on the node supports querying block data synchronized after it started, but still does not support querying block data from before it started.
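+
+A minimal configuration sketch for this (assuming the item sits under the `node` block, as in the public config templates; verify against your own config file):
+```
+node {
+  # allow history queries for data synchronized after this Lite FullNode started
+  openHistoryQueryWhenLiteFN = true
+}
+```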
-Therefore, if developers only need to use nodes for block synchronization, processing and broadcasting transactions, then Lite Fullnoe will be a better choice.
+Therefore, if developers only need a node for block synchronization, processing and broadcasting transactions, or only need to query the blocks and transactions synchronized after the node starts up, then a Lite FullNode is the better choice.

-The deployment steps of a Lite fullnode are the same as fullnode. The difference is that the light node database needs to be obtained. You can directly download the light node data snapshot from the [public backup data](../../using_javatron/backup_restore/#lite-fullnode-data-snapshot) and use it directly; you can also use the lite fulnode tool to convert the fullnode database to lite fullnode database. The use of the light node tool will be described in detail below.
+## Lite FullNode Deployment
+The deployment steps and startup command of a Lite FullNode are the same as a FullNode's; please refer to [Deployment Instructions](../../using_javatron/installing_javatron/) to deploy a Lite FullNode. The only difference is the database: you need to obtain a Lite FullNode database. You can download a Lite FullNode data snapshot from the [public backup data](../../using_javatron/backup_restore/#lite-fullnode-data-snapshot) and use it directly, or use the [Lite FullNode data pruning tool](../../using_javatron/toolkit/#lite-fullnode-data-pruning) to convert a FullNode database into a Lite FullNode database.

-# Lite FullNode Tool
-Lite FullNode Tool is used to split the database of a FullNode into a `Snapshot dataset` and a `History dataset`.
-
-- `Snapshot dataset`: the minimum dataset for quick startup of the Lite FullNode.
-- `History dataset`: the archive dataset that used for historical data queries.
-
-Before using this tool for any operation, you need to stop the currently running FullNode process first. This tool provides the function of splitting the complete data into two datasets according to the current `latest block height` (latest_block_number). Lite FullNode launched from snapshot datasets do not support querying historical data prior to this block height. The tool also provides the ability to merge historical datasets with snapshot datasets.
-
-For more design details, please refer to: [TIP-128](https://github.com/tronprotocol/tips/issues/128).
-
-### Obtain Lite Fullnode Tool
-LiteFullNodeTool.jar can be obtained by compiling the java-tron source code, the steps are as follows:
-
-1. Obtain java-tron source code
- ```
- $ git clone https://github.com/tronprotocol/java-tron.git
- $ git checkout -t origin/master
- ```
-2. Compile
- ```
- $ cd java-tron
- $ ./gradlew clean build -x test
- ```
-
- After compiling, `LiteFullNodeTool.jar` will be generated in the `java-tron/build/libs/` directory.
-
-
-
-### Use Lite Fullnode tool
-
-**Options**
-
-This tool provides independent cutting of `Snapshot Dataset` and `History Dataset` and a merge function.
-
-- `--operation | -o`: [ split | merge ] specifies the operation as either to split or to merge
-- `--type | -t`: [ snapshot | history ] is used only with `split` to specify the type of the dataset to be split; snapshot refers to Snapshot Dataset and history refers to History Dataset.
-- `--fn-data-path`: FullNode database directory
-- `--dataset-path`: dataset directory, when operation is `split`, `dataset-path` is the path that store the `Snapshot Dataset` or `History Dataset`,
-otherwise `dataset-path` should be the `History Dataset` path.
-
-**Examples**
-
-Start a new FullNode using the default config, then an `output-directory` will be produced in the current directory.
-`output-directory` contains a sub-directory named `database` which is the database to be split.
-
-* **Split and get a `Snapshot Dataset`**
-
- First, stop the FullNode and execute:
- ```
- // just for simplify, locate the snapshot into `/tmp` directory,
- $ java -jar LiteFullNodeTool.jar -o split -t snapshot --fn-data-path output-directory/database --dataset-path /tmp
- ```
- then a `snapshot` directory will be generated in `/tmp`, pack this directory and copy it to somewhere that is ready to run a Lite Fullnode.
- Do not forget rename the directory from `snapshot` to `database`.
- (the default value of the storage.db.directory is `database`, make sure rename the snapshot to the specified value)
-
-* **Split and get a `History Dataset`**
-
- If historical data query is needed, `History dataset` should be generated and merged into Lite FullNode.
- ```
- // just for simplify, locate the history into `/tmp` directory,
- $ java -jar LiteFullNodeTool.jar -o split -t history --fn-data-path output-directory/database --dataset-path /tmp
- ```
- A `history` directory will be generated in `/tmp`, pack this directory and copy it to a Lite Fullnode.
- `History dataset` always take a large storage, make sure the disk has enough volume to store the `History dataset`.
-
-* **Merge `History Dataset` and `Snapshot Dataset`**
-
- Both `History Dataset` and `Snapshot Dataset` have an info.properties file to identify the block height from which they are segmented.
- Make sure that the `split_block_num` in `History Dataset` is not less than the corresponding value in the `Snapshot Dataset`.
-
- After getting the `History dataset`, the Lite FullNode can merge the `History dataset` and become a real FullNode.
- ```
- // just for simplify, assume `History dataset` is locate in /tmp
- $ java -jar LiteFullNodeTool.jar -o merge --fn-data-path output-directory/database --dataset-path /tmp/history
- ```
\ No newline at end of file
+## Lite FullNode Maintenance
+Since a Lite FullNode saves the same data as a FullNode after startup, its data volume is very small at startup but then grows at the same rate as a FullNode's, so the data may need to be pruned periodically. Pruning likewise uses the [Lite FullNode data pruning tool](../../using_javatron/toolkit/#lite-fullnode-data-pruning) to split the Lite FullNode data into a snapshot dataset, which is the pruned Lite FullNode data.
diff --git a/docs/mechanism-algorithm/resource.md b/docs/mechanism-algorithm/resource.md
index eec444be..bd22b217 100644
--- a/docs/mechanism-algorithm/resource.md
+++ b/docs/mechanism-algorithm/resource.md
@@ -25,7 +25,7 @@ Such as if the number of bytes of a transaction is 200, so this transaction consumes 200 Bandwidth Points.

### 1. How to Get Bandwidth Points
1. By staking TRX to get Bandwidth Points, Bandwidth Points = the amount of TRX self-staked / the total amount of TRX staked for Bandwidth Points in the network * 43_200_000_000
-2. Every account has a fixed amount of free Bandwidth Points(1500) every day
+2. Every account has a fixed amount of free Bandwidth Points (600) every day
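+
+A hypothetical worked example of the staking formula above (all numbers invented for illustration):
+```
+# If 20_000_000_000 TRX in total is staked for Bandwidth in the network,
+# an account that self-stakes 2_000 TRX obtains:
+#   2_000 / 20_000_000_000 * 43_200_000_000 = 4_320 Bandwidth Points
+```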
### 2. Bandwidth Points Consumption
diff --git a/docs/releases/history.md b/docs/releases/history.md
index 8b4f5e7a..f05c08a9 100644
--- a/docs/releases/history.md
+++ b/docs/releases/history.md
@@ -2,6 +2,7 @@
| Code Name |Version | Released | Incl TIPs | Release Note | Specs |
| -------- | -------- | -------- | -------- | -------- | -------- |
+| Chilon | GreatVoyage-v4.7.3 | 2023-10-25 | [TIP-586](https://github.com/tronprotocol/tips/blob/master/tip-586.md)
[TIP-592](https://github.com/tronprotocol/tips/blob/master/tip-592.md) | [Release Note](https://github.com/tronprotocol/java-tron/releases/tag/GreatVoyage-v4.7.3) | [Specs](#greatvoyage-v473chilon) |
| Periander | GreatVoyage-v4.7.2 | 2023-7-1 | [TIP-541](https://github.com/tronprotocol/tips/issues/541)
[TIP-542](https://github.com/tronprotocol/tips/issues/542)
[TIP-543](https://github.com/tronprotocol/tips/issues/543)
[TIP-544](https://github.com/tronprotocol/tips/issues/544)
[TIP-555](https://github.com/tronprotocol/tips/issues/555)
[TIP-547](https://github.com/tronprotocol/tips/issues/547)
[TIP-548](https://github.com/tronprotocol/tips/issues/548)
[TIP-549](https://github.com/tronprotocol/tips/issues/549)
[TIP-550](https://github.com/tronprotocol/tips/issues/550) | [Release Note](https://github.com/tronprotocol/java-tron/releases/tag/GreatVoyage-v4.7.2) | [Specs](#greatvoyage-v472periander) |
| Pittacus | GreatVoyage-v4.7.1.1 | 2023-4-17 | [TIP-534](https://github.com/tronprotocol/tips/blob/master/tip-534.md) | [Release Note](https://github.com/tronprotocol/java-tron/releases/tag/GreatVoyage-v4.7.1.1) | [Specs](#greatvoyage-v4711-pittacus) |
| Sartre | GreatVoyage-v4.7.1 | 2023-2-27 | N/A | [Release Note](https://github.com/tronprotocol/java-tron/releases/tag/GreatVoyage-v4.7.1) | [Specs](#greatvoyage-v471sartre) |
@@ -71,6 +72,188 @@
| N/A | Odyssey-v1.0.3 | 2018-4-5 | N/A | [Release Note](https://github.com/tronprotocol/java-tron/releases/tag/Odyssey-v1.0.3) | N/A |
| N/A | Exodus-v1.0 | 2017-12-28 | N/A | [Release Note](https://github.com/tronprotocol/java-tron/releases/tag/Exodus-v1.0) | N/A |

+## GreatVoyage-v4.7.3(Chilon)
+
+Chilon is a non-mandatory upgrade that introduces multiple important updates. Richer gRPC interfaces and faster node startup bring users a friendlier development experience. An optimized disconnection strategy and synchronization process improve the stability of connections among nodes. Optimized transaction processing logic and database query performance raise transaction packaging efficiency and network throughput.
+
+Please find the details below.
+### Core
+
+#### 1. Add gRPC interfaces for resource price and transaction memo fee query
+
+Chilon adds three new gRPC interfaces. Users can obtain historical bandwidth unit prices through the `getBandwidthPrices` API, historical energy unit prices through the `getEnergyPrices` API, and the transaction memo fee through the `getMemoFee` API. These new gRPC APIs further improve the developer experience.
+
+TIP: [https://github.com/tronprotocol/tips/blob/master/tip-586.md](https://github.com/tronprotocol/tips/blob/master/tip-586.md)
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5412](https://github.com/tronprotocol/java-tron/pull/5412)
+#### 2. Supplement disconnect reasons
+
+When a node fails to process a message from a peer, it may proactively disconnect from the peer. However, in versions prior to Chilon, in some cases the node did not inform the peer of the reason for the disconnection, which hindered the peer's analysis and troubleshooting of the connection issue.
+
+The Chilon version supplements two disconnection reasons. The node now sends the reason to the peer before dropping the connection, facilitating efficient handling of node connection problems.
+
+
+TIP: [https://github.com/tronprotocol/tips/blob/master/tip-592.md](https://github.com/tronprotocol/tips/blob/master/tip-592.md)
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5392](https://github.com/tronprotocol/java-tron/pull/5392)
+#### 3. Discard transactions from bad peers instead of disconnected peers
+
+For a broadcast transaction, the node must determine whether to process it. In versions prior to Chilon, the judgment was based on whether the transaction came from a disconnected peer; if so, the transaction was discarded. However, whether to process a broadcast transaction should depend not on whether the connection to the peer is still alive, but on whether the peer is malicious.
+
+Therefore, the Chilon version optimizes the transaction processing logic and no longer discards transactions from disconnected peers. Instead, it only discards transactions broadcast by nodes that have sent illegal transactions. This change improves transaction broadcast and packaging efficiency.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5440](https://github.com/tronprotocol/java-tron/pull/5440)
+
+
+
+#### 4. Optimize Stake 2.0 codes and error messages
+
+The Chilon version standardizes Stake 2.0-related code and simplifies complex functions, improving the simplicity and readability of the code.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5426](https://github.com/tronprotocol/java-tron/pull/5426)
+
+
+#### 5. Accelerate bloomFilter initialization for transaction cache
+When a node starts, it loads the transactions of the latest 65536 blocks from the database to build a transaction cache bloomFilter, which is later used to detect duplicate transactions during transaction verification. In versions prior to Chilon, loading this transaction cache accounted for more than 70% of the node startup time. To accelerate transaction cache bloomFilter initialization, the Chilon version persists the transaction cache bloomFilter: when the node exits normally, the bloomFilter-related data is stored on disk, and when the node restarts it no longer needs to re-read the transactions of recent blocks but loads the bloomFilter data directly into memory, speeding up the initialization of the transaction cache bloomFilter and greatly improving node startup speed.
+This feature is disabled by default and can be enabled through the node configuration item `storage.txCache.initOptimization = true`.
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5394](https://github.com/tronprotocol/java-tron/pull/5394) [https://github.com/tronprotocol/java-tron/pull/5491](https://github.com/tronprotocol/java-tron/pull/5491) [https://github.com/tronprotocol/java-tron/pull/5505](https://github.com/tronprotocol/java-tron/pull/5505) [https://github.com/tronprotocol/java-tron/pull/5523](https://github.com/tronprotocol/java-tron/pull/5523) [https://github.com/tronprotocol/java-tron/pull/5543](https://github.com/tronprotocol/java-tron/pull/5543)
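+
+A sketch of enabling this in the node configuration file (HOCON syntax, using the configuration item named above):
+```
+storage {
+  # persist the transaction cache bloomFilter on normal shutdown (off by default)
+  txCache.initOptimization = true
+}
+```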
+#### 6. Fix concurrency issues when generating chain inventory
+
+In versions prior to Chilon, when node A requests to synchronize blocks from node B, it first sends its own chain summary to node B. After receiving it, node B generates node A's missing block list according to its local chain and returns the list to node A. The list generation process is: first, find the maximum common block height of the two nodes from node A's chain summary, then add the IDs of several blocks starting from that height to node A's missing block list. Since the generation of the missing block list and chain switching are executed concurrently, if chain switching occurs while the missing block list is being generated, it may happen that after the maximum common block height is obtained, the corresponding block ID cannot be obtained, so the generated missing block list does not match node A's chain summary, resulting in the connection being dropped.
+
+The Chilon version optimizes the generation logic of the missing block list. When the ID of the previously calculated highest common block cannot be obtained, the node retries to ensure that the returned list contains the highest common block information, which improves the stability of connections between nodes.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5393](https://github.com/tronprotocol/java-tron/pull/5393) [https://github.com/tronprotocol/java-tron/pull/5532](https://github.com/tronprotocol/java-tron/pull/5532)
+
+#### 7. Correct resource disorder closure behavior on kill -15
+
+In versions prior to Chilon, abnormal errors could occur when the service was shut down, due to the resource release order. The Chilon version optimizes the service shutdown logic: when the `kill -15` command is used to shut down the service, it ensures the correct release sequence of the various types of resources so that the node can exit normally.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5410](https://github.com/tronprotocol/java-tron/pull/5410) [https://github.com/tronprotocol/java-tron/pull/5425](https://github.com/tronprotocol/java-tron/pull/5425) [https://github.com/tronprotocol/java-tron/pull/5421](https://github.com/tronprotocol/java-tron/pull/5421) [https://github.com/tronprotocol/java-tron/pull/5429](https://github.com/tronprotocol/java-tron/pull/5429) [https://github.com/tronprotocol/java-tron/pull/5447](https://github.com/tronprotocol/java-tron/pull/5447)
+
+
+### API
+
+#### 1. Optimize HTTP interface monitoring
+
+Chilon optimizes HTTP interface monitoring: it no longer counts requests for APIs that are not supported by the node, making the statistics of successful and failed HTTP interface requests more accurate.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5332](https://github.com/tronprotocol/java-tron/pull/5332)
+
+#### 2. Provide uniform rate limitation configuration for all HTTP and gRPC APIs
+
+Java-tron supports interface rate limiting. The default qps (queries per second) of each interface is 1000, and node deployers can also limit the traffic of a particular interface. However, in versions prior to Chilon, the default qps could not be changed in one place: to set the default qps of every interface to 2000, for example, you had to configure the rate limit for each interface individually. The Chilon version adds a new default interface rate limit configuration `rate.limiter.global.api.qps`. With this configuration, users can change the rate limit of all interfaces at once, simplifying configuration.
+
+```
+rate.limiter.global.api.qps = 1000
+```
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5502](https://github.com/tronprotocol/java-tron/pull/5502)
+
+#### 3. Optimize HTTP interface parameter parsing
+
+In versions prior to Chilon, for interfaces involving reward queries, the node would throw an exception if a request passed in invalid parameters or non-JSON formatted parameters. The Chilon version optimizes the HTTP interface parameter parsing logic and returns a 0 value or an error message for requests with incorrectly formatted parameters.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5367](https://github.com/tronprotocol/java-tron/pull/5367) [https://github.com/tronprotocol/java-tron/pull/5483](https://github.com/tronprotocol/java-tron/pull/5483)
+
+#### 4. Add solidity query interfaces for resource unit prices
+
+Chilon supplements the resource unit price query interfaces on the solidity API: `/walletsolidity/getbandwidthprices` and `/walletsolidity/getenergyprices`.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5412](https://github.com/tronprotocol/java-tron/pull/5412) [https://github.com/tronprotocol/java-tron/pull/5451](https://github.com/tronprotocol/java-tron/pull/5451)
+[https://github.com/tronprotocol/java-tron/pull/5437](https://github.com/tronprotocol/java-tron/pull/5437)
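+
+A quick sketch of calling these endpoints (assuming a local node with the default solidity HTTP port 8091):
+```shell
+curl http://127.0.0.1:8091/walletsolidity/getbandwidthprices
+curl http://127.0.0.1:8091/walletsolidity/getenergyprices
+```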
+
+#### 5. Optimize the processing logic of some HTTP interfaces
+
+The Chilon version optimizes some HTTP interfaces so that GET and POST requests are processed consistently, including parameter checks and return values. The interfaces include `/wallet/getavailableunfreezecount`, `/wallet/getcanwithdrawunfreezeamount`, and `/wallet/getcandelegatedmaxsize`.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5408](https://github.com/tronprotocol/java-tron/pull/5408)
+
+
+### Other Changes
+#### 1. Add check for expired transactions when fetching transactions
+
+Chilon adds a check for expired transactions in the broadcast list it receives. For transactions in the list that have timed out, it no longer makes requests to the remote node, avoiding node connections being dropped due to transaction processing failures and improving node connection stability.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5460](https://github.com/tronprotocol/java-tron/pull/5460)
+
+#### 2. Fix concurrency issue of getHeadBlockId method
+
+During block synchronization, the node must obtain the `BlockId` of the latest block through the `getHeadBlockId` method. In versions prior to Chilon, the `BlockId` was derived from the block number and hash of the latest block. However, because the thread reading the latest block data and the thread updating it run concurrently, `getHeadBlockId` could start to build the `BlockId` before both the block number and the hash had been updated, so the method could return an abnormal `BlockId` value.
+
+Chilon optimizes the `BlockId` acquisition logic of the latest block: `getHeadBlockId` now obtains the `BlockId` through the hash of the latest block only, ensuring the correctness of the block ID acquisition.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5403](https://github.com/tronprotocol/java-tron/pull/5403)
+
+
+#### 3. Delete unused network configurations
+
+Chilon deleted four unused network parameters, including the three configuration items below, simplifying configuration for developers.
+
+```
+node.discovery.public.home.node
+node.discovery.ping.timeout
+node.p2p.pingInterval
+```
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5441](https://github.com/tronprotocol/java-tron/pull/5441)
+
+#### 4. Obtain external IP through Libp2p
+
+In versions prior to Chilon, the external IP address was obtained twice at node startup: Java-tron and libp2p each performed the lookup once. To improve node startup speed, Chilon optimizes the external IP acquisition logic: at startup the node obtains the external IP directly through the libp2p module and assigns it to libp2p, avoiding the duplicate lookup.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5407](https://github.com/tronprotocol/java-tron/pull/5407)
+
+#### 5. Add address parsing for stake-related transactions in event subscription
+
+Chilon optimizes the event subscription service and adds parsing of addresses in stake-related transactions, so that event subscribers can obtain address information in stake, resource delegation, and other transactions.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5419](https://github.com/tronprotocol/java-tron/pull/5419)
+
+#### 6. Adjust default number of CPU cores used in signature validation
+
+In versions prior to Chilon, nodes used half of the system CPU cores for parallel signature verification by default. To improve the performance of node synchronization and block processing, the Chilon version changes the default number of signature verification threads to the full number of CPU cores, maximizing signature verification performance. Node deployers can still adjust the number of signature verification threads through the `node.validateSignThreadNum` configuration item.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5396](https://github.com/tronprotocol/java-tron/pull/5396)
+
+#### 7. Migrate LiteFullNode tool related unit test cases to Plugins module
+
+In the previous version, the code of the LiteFullNode tool had already been integrated into the Toolkit in the plugins module. The Chilon version goes further and moves the test cases related to the LiteFullNode tool from the framework module to the plugins module. This not only makes the code structure clearer but also improves the execution efficiency of the test cases.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5475](https://github.com/tronprotocol/java-tron/pull/5475) [https://github.com/tronprotocol/java-tron/pull/5482](https://github.com/tronprotocol/java-tron/pull/5482)
+
+
+#### 8. Enhance query performance of properties DB
+
+During block processing, nodes access the `properties` database frequently, so better `properties` query performance improves block processing speed. Since the property data volume is small and updates are infrequent, Chilon optimizes the query performance of the `properties` database by loading all of its data into the first-level cache, maximizing data query performance and thereby improving transaction processing capability.
+
+
+Source Code: [https://github.com/tronprotocol/java-tron/pull/5378](https://github.com/tronprotocol/java-tron/pull/5378)
+
+
+
+---
+
+*Do not desire the impossible.*
+

---Chilon

+
## GreatVoyage-v4.7.2(Periander)
diff --git a/docs/using_javatron/backup_restore.md b/docs/using_javatron/backup_restore.md
index 7de1d732..f576e892 100644
--- a/docs/using_javatron/backup_restore.md
+++ b/docs/using_javatron/backup_restore.md
@@ -57,6 +57,7 @@ The following table shows the download address of Fullnode data snapshots. Pleas
| Data sources provided by TronGrid Community | https://backup.trongrid.io/ | LevelDB, include internal transactions (About 1603G on 13 Jun. 2023) |
+
**Note**: The data of LevelDB and RocksDB are not allowed to be mixed. The database can be specified in the config file of the full node, set db.engine to LEVELDB or ROCKSDB.
@@ -71,7 +72,8 @@ The Tron Public Chain has supported the type of the Lite FullNode since the vers
| -------- | -------- | -------- |
| Official data source (North America: Virginia) | http://3.219.199.168/ | LevelDB, About 31G on 13 Jun. 2023 |

-**Tips**: You can split the data from the whole data with the help of the [Lite FullNode Tool](../../developers/litefullnode/#lite-fullnode-tool).
+
+**Tips**: You can generate the Lite FullNode data from the full data with the help of the [Lite FullNode Data Pruning Tool](../../using_javatron/toolkit/#lite-fullnode-data-pruning).

### Use the data snapshot
diff --git a/docs/using_javatron/installing_javatron.md b/docs/using_javatron/installing_javatron.md
index e1d91e2d..a10c4571 100644
--- a/docs/using_javatron/installing_javatron.md
+++ b/docs/using_javatron/installing_javatron.md
@@ -2,7 +2,7 @@
Java-tron nodes support to be deployed on `Linux` or `MacOS` operating systems, and rely on `Oracle JDK 1.8`, other versions of JDK are not supported.

-The minimum hardware configuration required to run a Java-tron node is `8-core CPU`, `16G memory`, `1T SDD`, the recommended configuration is: `16-core CPU`, `32G memory`, `1.5T+ SDD`. The fullnode running by super representative to produce block recommends `32-core CPU` and `64G memory`.
+The minimum hardware configuration required to run a Java-tron node is `8-core CPU`, `16G memory`, `2T SSD`; the recommended configuration is `16-core CPU`, `32G memory`, `2.5T+ SSD`. A FullNode run by a super representative to produce blocks is recommended to have a `32-core CPU` and `64G memory`.

# Compile the Source Code
diff --git a/docs/using_javatron/private_network.md b/docs/using_javatron/private_network.md
index 57ef6adc..c5c1a1f0 100644
--- a/docs/using_javatron/private_network.md
+++ b/docs/using_javatron/private_network.md
@@ -83,7 +83,7 @@ The process of building a node on private chain is the same as that on mainnet. 
privateKey: 'c741f5c0224020d7ccaf4617a33cc099ac13240f150cf35f496db5bfc7d220dc'
})

- var unsignedProposal1Txn = await tronWeb.transactionBuilder.createProposal([{"key":9,"value":1},{"key":10,"value":1},{"key":11,"value":280},{"key":19,"value":90000000000},{"key":15,"value":1},{"key":18,"value":1},{"key":16,"value":1},{"key":20,"value":1},{"key":26,"value":1},{"key":30,"value":1},{"key":5,"value":16000000},{"key":31,"value":160000000},{"key":32,"value":1},{"key":39,"value":1},{"key":41,"value":1},{"key":3,"value":1000},{"key":47,"value":10000000000},{"key":49,"value":1},{"key":13,"value":80},{"key":7,"value":1000000},{"key":61,"value":1500},{"key":63,"value":1}],"41D0B69631440F0A494BB51F7EEE68FF5C593C00F0")
+ var unsignedProposal1Txn = await tronWeb.transactionBuilder.createProposal([{"key":9,"value":1},{"key":10,"value":1},{"key":11,"value":280},{"key":19,"value":90000000000},{"key":15,"value":1},{"key":18,"value":1},{"key":16,"value":1},{"key":20,"value":1},{"key":26,"value":1},{"key":30,"value":1},{"key":5,"value":16000000},{"key":31,"value":160000000},{"key":32,"value":1},{"key":39,"value":1},{"key":41,"value":1},{"key":3,"value":1000},{"key":47,"value":10000000000},{"key":49,"value":1},{"key":13,"value":80},{"key":7,"value":1000000},{"key":61,"value":600},{"key":63,"value":1}],"41D0B69631440F0A494BB51F7EEE68FF5C593C00F0")
 var signedProposal1Txn = await tronWeb.trx.sign(unsignedProposal1Txn, "c741f5c0224020d7ccaf4617a33cc099ac13240f150cf35f496db5bfc7d220dc");
 var receipt1 = await tronWeb.trx.sendRawTransaction(signedProposal1Txn);
diff --git a/docs/using_javatron/toolkit.md b/docs/using_javatron/toolkit.md
index 32ff5f3d..54f9969b 100644
--- a/docs/using_javatron/toolkit.md
+++ b/docs/using_javatron/toolkit.md
@@ -1,22 +1,55 @@
-# Database Partition Tool
+# Java-tron Node Maintenance Tool - Toolkit

-As the data on the chain continues to grow, the pressure on data storage will increase. At present, the FullNode data of the TRON public chain is close to 1T, and the daily data growth is about 1.2G. According to the current data growth rate, the annual growth rate is about 450G. A single disk capacity may be insufficient and need to be replaced by a larger disk. To solve it, a database storage partition tool has been introduced in `GreatVoyage-v4.5.2 (Aurelius)`. The tool can migrate some databases to other storage disks. When the user encounters insufficient disk space, he only needs to add another disk according to the capacity requirement and does not need to replace the original disk.
+The Toolkit integrates a series of java-tron tools, and more functions will be added in the future for the convenience of developers. Currently, Toolkit includes the following functions:

-## Compile
-Under the java-tron project directory, execute the command `./gradlew build -x test` to compile the tool, and the tool will be generated in `build/libs/Toolkit.jar`.
+* [Database Partition Tool](#database-partition-tool)
+* [Lite Fullnode Data Pruning](#lite-fullnode-data-pruning)
+* [Data Copy](#data-copy)
+* [Data Conversion](#data-conversion)
+* [LevelDB Startup Optimization](#leveldb-startup-optimization)
-
-## Options
+The following describes in detail how to obtain and use the Toolkit.

-This tool provides data migration and storage functions. The optional command parameters are as follows:
+## Obtain Toolkit.jar
+`Toolkit.jar` can be obtained directly from the [released versions](https://github.com/tronprotocol/java-tron/releases) or by compiling the java-tron source code.
-
-- `-c | --config`: [ string ] This option is used to specify the FullNode configuration file. If not specified, the default value will be `config.conf`.
-- `-d | --database-directory`: [ string ] This option is used to specify the FullNode database directory. If not specified, the default value will be `output-directory`.
-- `-h | --help`: [ bool ] This option is used to view help description, default value: false.
+Compile the source code:
+1. Obtain java-tron source code
+ ```
+ $ git clone https://github.com/tronprotocol/java-tron.git
+ $ git checkout -t origin/master
+ ```
+2. Compile
+ ```
+ $ cd java-tron
+ $ ./gradlew clean build -x test
+ ```
+ You will find `Toolkit.jar` under the `./java-tron/build/libs/` folder if the build is successful.

-## Usage Instructions
+## Database Partition Tool
+As the data on the chain continues to grow, the pressure on data storage will increase. At present, the FullNode data of the TRON public chain has reached 1T, and the daily data growth is about 1.2G. According to the current data growth rate, the annual growth rate is about 450G. A single disk capacity may be insufficient and need to be replaced by a larger disk. To this end, the Toolkit introduces the database storage partitioning tool. The tool can migrate some databases to other storage disks, so when a user encounters insufficient disk space, they only need to add another disk according to the capacity requirement rather than replace the original disk.
+
+### Commands and parameters
+Use the `db mv` command to invoke the data partition function provided by Toolkit:
+
+```
+# full command
+java -jar Toolkit.jar db mv [-h] [-c=<config-file>] [-d=<database-directory>]
+# examples
+java -jar Toolkit.jar db mv -c main_net_config.conf -d /data/tron/output-directory
+```
+
+Optional command parameters are as follows:
+
+- `-c | --config`: [ string ] This option is used to specify the FullNode configuration file. If not specified, the default value will be config.conf.
+- `-d | --database-directory`: [ string ] This option is used to specify the FullNode database directory. If not specified, the default value will be output-directory.
+- `-h | --help`: [ bool ] This option is used to view the help description, default value: false.
+
+
+
+### Usage Instructions
Follow the following steps to use the database partition tool:
1. [Stop FullNode service](#stop-fullnode-service)
@@ -24,18 +57,18 @@ Follow the following steps to use the database partition tool:
3. [Perform database migration](#perform-database-migration)
4. [Restart FullNode service](#restart-fullnode-service)

+#### Stop FullNode Service
-### Stop FullNode Service

-Use the command `kill -15 pid` to close FullNode.jar, below is the FullNode process pid lookup command:
+Use the command `kill -15 pid` to close FullNode.jar; below is the FullNode process pid lookup command:

```
$ ps -ef |grep FullNode.jar |grep -v grep |awk '{print $2}'
```

+#### Configure For Database Storage Migration
-### Configure For Database Storage Migration

+The configuration of database migration is in the [storage.properties](https://github.com/tronprotocol/tron-deployment/blob/master/main_net_config.conf#L36) field in the Java-tron node configuration file.
The following is an example of migrating only the `block` and `trans` databases to illustrate how to migrate some databases to other storage disks:
-The configuration of database migration is in the [storage.properties](https://github.com/tronprotocol/tron-deployment/blob/master/main_net_config.conf#L37) field in the Java-tron node configuration file. The following is an example of migrating only the `block` and `trans` databases to illustrate how to migrate some databases to other storage disks:

```conf
storage {
@@ -58,7 +91,7 @@ storage {
`name` is the database name which you want to migrate, and `path` is the destination directory for database migration. The tool will migrate the database specified by `name` to the directory specified by `path`, and then create a soft link under the original path pointing to the `path` directory. After `FullNode` starts, it will find the `path` directory according to the soft link.

-### Perform Database Migration
+#### Perform Database Migration

When executed, the current migration progress will be shown.

```
$ java -jar Toolkit.jar db mv -c main_net_config.conf -d /data/tron/output-directory
```

-### Restart FullNode Service
+#### Restart FullNode Service
After the migration is complete, restart the java-tron node.
```
# FullNode
$ nohup java -Xms9G -Xmx9G -XX:ReservedCodeCacheSize=256m \
@@ -80,7 +113,7 @@ $ nohup java -Xms9G -Xmx9G -XX:ReservedCodeCacheSize=256m \
-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 \
-jar FullNode.jar -c main_net_config.conf >> start.log 2>&1 &

-# Super representative's FullNode
+# Super representative's FullNode
$ nohup java -Xms9G -Xmx9G -XX:ReservedCodeCacheSize=256m \
-XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m \
-XX:MaxDirectMemorySize=1G -XX:+PrintGCDetails \
@@ -91,3 +124,178 @@ $ nohup java -Xms9G -Xmx9G -XX:ReservedCodeCacheSize=256m \
-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 \
-jar FullNode.jar --witness -c main_net_config.conf >> start.log 2>&1 &
```
+## Lite Fullnode Data Pruning
+Toolkit provides a data pruning tool, which is mainly used for generating or pruning Lite FullNode data.
+
+The data pruning tool can divide the complete FullNode data into a snapshot dataset or a history dataset according to the current `latest_block_number`. The snapshot dataset is used to start the Lite FullNode (that is, it is the Lite FullNode database), and the history dataset is used for historical data queries. A Lite FullNode started from a snapshot dataset does not support querying historical data prior to the latest block height at the time of pruning. The data pruning tool also provides the function of merging a history dataset with a snapshot dataset. The usage scenarios are as follows:
+
+
+* **Convert FullNode data into Lite Fullnode data**
+
+ A Lite FullNode starts only from a snapshot dataset; use the data pruning tool to convert the FullNode data into a snapshot dataset, which gives you the Lite FullNode data.
+
+* **Prune Lite Fullnode data regularly**
+
+ A Lite FullNode saves the same data as a FullNode after startup; although its data volume is very small at startup, it then grows at the same rate as a FullNode's, so the data may need to be pruned periodically.
Pruning likewise uses this tool to split the Lite FullNode data into a snapshot dataset; the result is the pruned Lite FullNode data.
+
+* **Convert Lite Fullnode data back to FullNode data**
+
+ Since a Lite FullNode does not support historical data queries, supporting them requires changing the Lite FullNode data into FullNode data, after which the node changes from a Lite FullNode into a FullNode. You can directly download a snapshot of the FullNode database, or use the data pruning tool: first convert the FullNode data into a history dataset, then merge the history dataset with the snapshot dataset of the Lite FullNode to obtain the FullNode data.
+
+Note: Before using this tool for any operation, you need to stop the currently running node first.
+
+
+### Command and parameters
+Use the `db lite` command to invoke the data pruning function provided by Toolkit:
+
+```
+# full command
+ java -jar Toolkit.jar db lite [-h] -ds=<dataset-path> -fn=<fn-data-path> [-o=<operation>] [-t=<type>]
+# examples
+ #split and get a snapshot dataset
+ java -jar Toolkit.jar db lite -o split -t snapshot --fn-data-path output-directory/database --dataset-path /tmp
+ #split and get a history dataset
+ java -jar Toolkit.jar db lite -o split -t history --fn-data-path output-directory/database --dataset-path /tmp
+ #merge history dataset and snapshot dataset
+ java -jar Toolkit.jar db lite -o merge --fn-data-path /tmp/snapshot --dataset-path /tmp/history
+```
+
+Optional command parameters are as follows:
+
+- `--operation | -o`: [ split | merge ], specifies the operation as either split or merge; default is split.
+- `--type | -t`: [ snapshot | history ], used only when the operation is `split`; `snapshot` means split out the snapshot dataset and `history` means split out the history dataset. Default is `snapshot`.
+- `--fn-data-path | -fn`: The database path to be split or merged. When the operation type is `split`, `fn-data-path` indicates the directory of the data to be pruned; when the operation type is `merge`, it indicates the database directory of the Lite FullNode or the directory of the snapshot dataset.
+- `--dataset-path | -ds`: When the operation is `split`, `dataset-path` is the path that stores the snapshot or history dataset; when the operation is `merge`, it is the history dataset path.
+
+
+### Usage Instructions
+The node database is stored in the `output-directory/database` directory by default. The examples in this chapter are explained with the default database directory.
+
+
+The following three examples illustrate how to use the data pruning tool:
+
+* **Split and get a `Snapshot Dataset`**
+
+ This function splits FullNode data into Lite FullNode data; it can also be used to regularly prune Lite FullNode data.
The steps are as follows:
+
+ First, stop the FullNode and execute:
+
+ ```shell
+ # for simplicity, save the snapshot into the /tmp directory
+ java -jar Toolkit.jar db lite -o split -t snapshot --fn-data-path output-directory/database --dataset-path /tmp
+ ```
+
+ * --fn-data-path: the data directory to be pruned, that is, the node data directory
+ * --dataset-path: the directory where the output snapshot dataset is stored
+
+ After the command is executed, a `snapshot` directory will be generated in `/tmp`; the data in this directory is the Lite FullNode data. Then rename the directory from `snapshot` to `database` (the default value of storage.db.directory is `database`; make sure to rename the snapshot directory to the configured value) and copy the `database` directory to the Lite FullNode database directory to finish the splitting. Finally, start the Lite FullNode.
+
+
+* **Split and get a `History Dataset`**
+
+ The command to split out the history dataset is as follows:
+
+ ```shell
+ # for simplicity, save the history into the /tmp directory
+ java -jar Toolkit.jar db lite -o split -t history --fn-data-path output-directory/database --dataset-path /tmp
+ ```
+
+ * --fn-data-path: FullNode data directory
+ * --dataset-path: the directory where the output history dataset is stored
+
+ After the command is executed, a `history` directory will be generated under the `/tmp` directory; the data in it is the history dataset.
+
+* **Merge `History Dataset` and `Snapshot Dataset`**
+
+ Both the `History Dataset` and the `Snapshot Dataset` have an `info.properties` file that identifies the block height at which they were split. Make sure that the `split_block_num` in the `History Dataset` is not less than the corresponding value in the `Snapshot Dataset`. After the history dataset is merged with the snapshot dataset through the merge operation, the Lite FullNode becomes a real FullNode.
+
+ The command to merge the history dataset and the snapshot dataset is as follows:
+
+ ```shell
+ # for simplicity, assume the history dataset is located in /tmp
+ java -jar Toolkit.jar db lite -o merge --fn-data-path /tmp/snapshot --dataset-path /tmp/history
+ ```
+
+ * --fn-data-path: snapshot dataset directory
+ * --dataset-path: history dataset directory
+
+
+ After the command is executed, the merged data overwrites the directory where the snapshot dataset is located, that is, the directory specified by `--fn-data-path`. Copy the merged data to the node database directory, and the Lite FullNode becomes a FullNode.
+
+
+## Data Copy
+The node database is large, and a database copy operation is time-consuming. Toolkit provides a fast database copy function, which can quickly copy a LevelDB or RocksDB database within the same file system by creating hard links.
+
+
+### Command and parameters
+Use the `db cp` command to invoke the data copy function provided by Toolkit:
+
+```
+# full command
+ java -jar Toolkit.jar db cp [-h] <src> <dest>
+# examples
+ java -jar Toolkit.jar db cp output-directory/database /tmp/database
+```
+
+Optional command parameters are as follows:
+
+- `<src>`: Source path for the database. Default: output-directory/database
+- `<dest>`: Output path for the database. Default: output-directory-cp/database
+- `-h | --help`: [ bool ] Provide the help info. Default: false
+
+Note: Before using this tool for any operation, you need to stop the currently running node first.
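+
+Since hard links share inodes with the source files, here is a quick sketch to confirm the copy did not duplicate the data (paths follow the example above; both must be on the same file system, as the tool requires). Matching inode numbers in the first column indicate hard links:
+
+```shell
+ls -li output-directory/database/block | head
+ls -li /tmp/database/block | head
+```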
+## Data Conversion
+Toolkit supports a database conversion function, which converts LevelDB data into RocksDB data.
+
+
+### Command and parameters
+Use the `db convert` command to invoke the data conversion function provided by Toolkit:
+
+```
+# full command
+ java -jar Toolkit.jar db convert [-h] [--safe] <src> <dest>
+# examples
+ java -jar Toolkit.jar db convert output-directory/database /tmp/database
+```
+
+Optional command parameters are as follows:
+
+- `<src>`: Input path for leveldb, default: output-directory/database.
+- `<dest>`: Output path for rocksdb, default: output-directory-dst/database.
+- `--safe`: In safe mode, read the data from leveldb and then put it into rocksdb, which is a very time-consuming procedure. If not set, the tool just changes engine.properties from leveldb to rocksdb; rocksdb is compatible with leveldb for the current version, but this may not be the case in the future. Default: false.
+- `-h | --help`: [ bool ] Provide the help info, default: false.
+
+Note: Before using this tool for any operation, you need to stop the currently running node first.
+
+## LevelDB Startup Optimization
+
+As LevelDB runs, its manifest file keeps growing. An oversized manifest file not only slows down node startup but may also cause the service to terminate abnormally because of continuously growing memory usage. To solve this issue, Toolkit provides the LevelDB startup optimization tool. The tool optimizes the manifest file size and the LevelDB startup process, reduces memory usage, and improves node startup speed.
+
+
+### Command and parameters
+Use the `db archive` command to invoke the LevelDB startup optimization function provided by Toolkit:
+
+```
+# full command
+ java -jar Toolkit.jar db archive [-h] [-b=<batch-size>] [-d=<database-directory>] [-m=<manifest-size>]
+# examples
+ #1. use default settings
+ java -jar Toolkit.jar db archive
+ #2. specify the database directory as /tmp/db/database
+ java -jar Toolkit.jar db archive -d /tmp/db/database
+ #3. specify the batch size to 64000 when optimizing manifest
+ java -jar Toolkit.jar db archive -b 64000
+ #4. specify optimization only when Manifest exceeds 128M
+ java -jar Toolkit.jar db archive -m 128
+```
+
+Optional command parameters are as follows:
+
+- `-b | --batch-size`: Specify the batch manifest size, default: 80000.
+- `-d | --database-directory`: Specify the database directory to be processed, default: output-directory/database.
+- `-m | --manifest-size`: Specify the minimum required manifest file size, unit: M, default: 0.
+- `-h | --help`: [ bool ] Provide the help info, default: false.
+
+Note: Before using this tool for any operation, you need to stop the currently running node first. For usage instructions, please refer to [LevelDB Startup Optimization](../../developers/archive-manifest/).
diff --git a/mkdocs.yml b/mkdocs.yml
index 0a28e823..7a882bbe 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -8,13 +8,13 @@ nav:
- Using Java-tron:
- Deploying Java-tron: using_javatron/installing_javatron.md
- Backup & Restore: using_javatron/backup_restore.md
- - Lite Fullnode: developers/litefullnode.md
+ - Lite FullNode: developers/litefullnode.md
- Private Network: using_javatron/private_network.md
- Event Subscription: architecture/event.md
- Database Configuration: architecture/database.md
- Network Configuration: using_javatron/connecting_to_tron.md
- Node Monitoring: using_javatron/metrics.md
- - Tools: using_javatron/toolkit.md
+ - Node Maintenance Tool: using_javatron/toolkit.md
- API:
- HTTP API: api/http.md
- gRPC API: api/rpc.md