Release v0.5.0 (#71)

## Purpose of Changes and their Description

Merge to main for `v0.5.0` release.

## Are these changes tested and documented?
Yes, in each ticket separately.
xmariachi authored Oct 23, 2024
2 parents bd5091a + d4a56fd commit 0fad4b0
Showing 24 changed files with 881 additions and 314 deletions.
22 changes: 21 additions & 1 deletion CHANGELOG.md
@@ -41,7 +41,27 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html) for all versions `v1.0.0` and beyond (still considered experimental prior to v1.0.0).

## [Unreleased]
## v0.5.0

### Added

* [#63](https://github.com/allora-network/allora-offchain-node/pull/63) Loss Function Library support.
* [#65](https://github.com/allora-network/allora-offchain-node/pull/65) Introduced different retry delays for account sequence.
* [#68](https://github.com/allora-network/allora-offchain-node/pull/68) Logging configuration.
* [#69](https://github.com/allora-network/allora-offchain-node/pull/69) Update to allora-chain v0.6.1 dependencies.

### Removed

### Fixed

* [#65](https://github.com/allora-network/allora-offchain-node/pull/65) Error handling (including ABCI errors).
* [#70](https://github.com/allora-network/allora-offchain-node/pull/70) Clean up and improve the README.

### Security

* [#62](https://github.com/allora-network/allora-offchain-node/pull/62) Fix security email.


## v0.4.0

### Added

179 changes: 131 additions & 48 deletions README.md
@@ -1,6 +1,6 @@
# allora-offchain-node

Allora off-chain nodes publish inferences, forecasts, and losses informed by a configurable ground truth to the Allora chain.
Allora off-chain nodes publish inferences, forecasts, and losses to the Allora chain, informed by a configurable ground truth and a configurable loss function.

## How to run with docker
1. Clone the repository
@@ -17,7 +17,7 @@ chmod +x init.config
./init.config
```

from the root diectory. This will:
from the root directory. This will:
- Load your config.json file into the environment. Depending on whether you provided your wallet details or not, it will also do the following:
- Automatically create allora keys for you. You will have to request some tokens from the faucet to be able to register your worker and stake your reputer. You can find your address in ./data/env_file
- Automatically export the needed variables from the created account for use by the offchain node, bundle them with your provided config.json, and pass them to the node as environment variables
@@ -77,56 +77,139 @@ There are several ways to configure the node. In order of preference, you can do

Each option completely overwrites the other options.


This is the entrypoint for the application that simply builds and runs the Go program.

It spins off a distinct process per role (worker, reputer) and per topic configured in `config.json`.

### Worker process

1. Spawn a go routine per topic
2. Get topic data from chain via RPC. Hold this in memory
3. Check if wallet registered in topic as worker
4. If wallet not registered in topic as worker then attempt to register
   1. Fail if failed to register
5. Every config.loop_seconds seconds...
   1. Get and set latest_open_worker_nonce_from_chain from the chain
   2. If latest_open_worker_nonce_from_chain does not exist or is nil then continue to next loop
      1. i.e. wait another config.loop_seconds
   3. Retry request_retries times with uniform backoff:
      1. Invoke configured `inferenceEntrypoint`, `forecastEntrypoint` for topic and get results
         1. Else, break this inner retry loop
      2. Attempt to commit inference and forecast bundle to the chain
         1. Log success/failures as usual
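
A minimal sketch of this per-topic loop is shown below. It is illustrative only: `workerConfig`, `chainClient`, and the entrypoint signatures are assumptions made for the sketch, not the node's actual types or API.

```go
package worker

import (
    "log"
    "time"
)

// Illustrative config; field names are assumptions for this sketch.
type workerConfig struct {
    TopicId        uint64
    LoopSeconds    int
    RequestRetries int
    RetryDelay     time.Duration
}

// chainClient is an assumed stand-in for the node's real chain client.
type chainClient interface {
    LatestOpenWorkerNonce(topicId uint64) (int64, error)
    SubmitWorkerBundle(topicId uint64, nonce int64, inference, forecast string) error
}

// runWorkerLoop sketches step 5 for one topic: poll for an open nonce, invoke the
// entrypoints with uniform backoff between retries, then commit the bundle.
func runWorkerLoop(cfg workerConfig, chain chainClient, infer, forecast func(nonce int64) (string, error)) {
    for {
        time.Sleep(time.Duration(cfg.LoopSeconds) * time.Second)

        nonce, err := chain.LatestOpenWorkerNonce(cfg.TopicId)
        if err != nil || nonce == 0 {
            continue // no open nonce yet: wait another loopSeconds
        }

        for attempt := 0; attempt < cfg.RequestRetries; attempt++ {
            inf, ierr := infer(nonce)
            fc, ferr := forecast(nonce)
            if ierr != nil || ferr != nil {
                time.Sleep(cfg.RetryDelay) // uniform backoff, then retry the entrypoints
                continue
            }
            if err := chain.SubmitWorkerBundle(cfg.TopicId, nonce, inf, fc); err != nil {
                log.Printf("topic %d: failed to submit worker bundle: %v", cfg.TopicId, err)
            } else {
                log.Printf("topic %d: submitted worker bundle for nonce %d", cfg.TopicId, nonce)
            }
            break // results obtained: leave the inner retry loop
        }
    }
}
```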

### Reputer process

1. Spawn a go routine per topic
2. Get topic data from chain via RPC. Hold this in memory
3. Check if wallet registered in topic as reputer
4. If wallet not registered in topic as reputer then attempt to register
   1. Fail if failed to register
5. Get current stake from reputer on topic (not including delegate stake)
   1. Fail if failed to get
6. If config.min_stake_to_repute > current_stake then attempt to add difference in stake (config.min_stake_to_repute - current_stake) to hit the configured minimum, using config.wallet
   1. Fail if failed to add stake
   2. If the stake was added successfully, or the minimum was already met, then continue with rest of loop
7. Every config.loop_seconds seconds...
   1. Get and set latest_open_reputer_nonce_from_chain from the chain
   2. If latest_open_reputer_nonce_from_chain does not exist or is nil then continue to next loop
      1. i.e. wait another config.loop_seconds
   3. Retry request_retries times with uniform backoff:
      1. Invoke configured `truthEntrypoint`, `lossEntrypoint` for topic and get results
         1. Else, break this inner retry loop
      2. Attempt to commit loss bundle to the chain
         1. Log success/failures as usual
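
The stake top-up in step 6 is the main difference from the worker flow; the polling loop in step 7 mirrors the worker loop sketched above. A minimal sketch of the top-up, with an assumed `stakeClient` interface standing in for the node's real chain client:

```go
package reputer

import "fmt"

// stakeClient is an assumed stand-in for the node's real chain client.
type stakeClient interface {
    // Current stake this reputer has placed on the topic, excluding delegate stake.
    GetReputerStake(topicId uint64) (int64, error)
    AddStake(topicId uint64, amount int64) error
}

// ensureMinStake sketches step 6: if the stake already placed on the topic is
// below the configured minimum, top it up by the difference before reputing.
func ensureMinStake(chain stakeClient, topicId uint64, minStake int64) error {
    current, err := chain.GetReputerStake(topicId)
    if err != nil {
        return fmt.Errorf("topic %d: failed to get current stake: %w", topicId, err)
    }
    if current >= minStake {
        return nil // already at or above the configured minimum
    }
    diff := minStake - current
    if err := chain.AddStake(topicId, diff); err != nil {
        return fmt.Errorf("topic %d: failed to add %d stake: %w", topicId, diff, err)
    }
    return nil
}
```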

## Future Work

* For now, we put adapters to generate or relay reputer/worker data in packages.
* Should use modules instead of packages
* Then in JSON one can specify which modules to use for which topics and automatically load them with a script that calls `go get ...`
* Make lambda function adapters => super cheap to continuously run for all those with AWS accounts
## Logging env vars

* LOG_LEVEL: Set the logging level. Valid values are `debug`, `info`, `warn`, `error`, `fatal`, `panic`. Defaults to `info`.
* LOG_TIME_FORMAT: Sets the format of the timestamp in the log. Valid values are `unix`, `unixms`, `unixmicro`, `iso8601`. Defaults to `iso8601`.
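
The accepted values mirror zerolog's level names and time-format constants, so a plausible reading of these variables is sketched below. This is an assumption about how the node could apply them, not its actual code; unset or invalid values fall back to the documented defaults (`info` and `iso8601`).

```go
package logging

import (
    "os"
    "time"

    "github.com/rs/zerolog"
)

// configureLogging applies LOG_LEVEL and LOG_TIME_FORMAT to a zerolog-based logger.
func configureLogging() {
    level, err := zerolog.ParseLevel(os.Getenv("LOG_LEVEL"))
    if err != nil || level == zerolog.NoLevel {
        level = zerolog.InfoLevel // default
    }
    zerolog.SetGlobalLevel(level)

    switch os.Getenv("LOG_TIME_FORMAT") {
    case "unix":
        zerolog.TimeFieldFormat = zerolog.TimeFormatUnix
    case "unixms":
        zerolog.TimeFieldFormat = zerolog.TimeFormatUnixMs
    case "unixmicro":
        zerolog.TimeFieldFormat = zerolog.TimeFormatUnixMicro
    default: // iso8601
        zerolog.TimeFieldFormat = time.RFC3339
    }
}
```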

## Configuration examples

A complete example is provided in `config.example.json`.
The excerpts below show configurations for different setups (some parts omitted for brevity):

### 1 worker as inferer

```json
{
"worker": [
{
"topicId": 1,
"inferenceEntrypointName": "api-worker-reputer",
"loopSeconds": 10,
"parameters": {
"InferenceEndpoint": "http://source:8000/inference/{Token}",
"Token": "ETH"
}
}
]
}
```

### 1 worker as forecaster
```json
{
"worker": [
{
"topicId": 1,
"forecastEntrypointName": "api-worker-reputer",
"loopSeconds": 10,
"parameters": {
"ForecastEndpoint": "http://source:8000/forecasts/{TopicId}/{BlockHeight}"
}
}
]
}

```

### 1 worker as inferer and forecaster

```json
{
"worker": [
{
"topicId": 1,
"inferenceEntrypointName": "api-worker-reputer",
"forecastEntrypointName": "api-worker-reputer",
"loopSeconds": 10,
"parameters": {
"InferenceEndpoint": "http://source:8000/inference/{Token}",
"ForecastEndpoint": "http://source:8000/forecasts/{TopicId}/{BlockHeight}",
"Token": "ETH"
}
}
]
}
```

### 1 reputer

```json
{
"reputer": [
{
"topicId": 1,
"groundTruthEntrypointName": "api-worker-reputer",
"lossFunctionEntrypointName": "api-worker-reputer",
"loopSeconds": 30,
"minStake": 100000,
"groundTruthParameters": {
"GroundTruthEndpoint": "http://localhost:8888/gt/{Token}/{BlockHeight}",
"Token": "ETHUSD"
},
"lossFunctionParameters": {
"LossFunctionService": "http://localhost:5000",
"LossMethodOptions": {
"loss_method": "sqe"
}
}
}
]
}
```
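
For illustration, the `{Token}` and `{BlockHeight}` placeholders in `GroundTruthEndpoint` are substituted at request time, so with the parameters above a request for a hypothetical block height of 12345 would target:

```
http://localhost:8888/gt/ETHUSD/12345
```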

### 1 worker as inferer and forecaster, and 1 reputer

```json
{
"worker": [
{
"topicId": 1,
"inferenceEntrypointName": "api-worker-reputer",
"forecastEntrypointName": "api-worker-reputer",
"loopSeconds": 10,
"parameters": {
"InferenceEndpoint": "http://source:8000/inference/{Token}",
"ForecastEndpoint": "http://source:8000/forecasts/{TopicId}/{BlockHeight}",
"Token": "ETH"
}
}
],
"reputer": [
{
"topicId": 1,
"groundTruthEntrypointName": "api-worker-reputer",
"lossFunctionEntrypointName": "api-worker-reputer",
"loopSeconds": 30,
"minStake": 100000,
"groundTruthParameters": {
"GroundTruthEndpoint": "http://localhost:8888/gt/{Token}/{BlockHeight}",
"Token": "ETHUSD"
},
"lossFunctionParameters": {
"LossFunctionService": "http://localhost:5000",
"LossMethodOptions": {
"loss_method": "sqe"
}
}
}
]
}
```

## License

14 changes: 14 additions & 0 deletions adapter/api/source/main.py
@@ -12,11 +12,13 @@ def __init__(self, worker, value):
def health():
return "Hello, World, I'm alive!"


@app.route('/inference/<token>', methods=['GET'])
def get_inference(token):
random_float = str(random.uniform(0.0, 100.0))
return random_float


@app.route('/forecast', methods=['GET'])
def get_forecast():
node_values = [
@@ -26,10 +28,22 @@ def get_forecast():
]
return jsonify([nv.__dict__ for nv in node_values])


@app.route('/truth/<token>/<blockheight>', methods=['GET'])
def get_truth(token, blockheight):
random_float = str(random.uniform(0.0, 100.0))
return random_float


@app.route('/is_never_negative', methods=['POST'])
def is_never_negative():
    return jsonify(True)  # wrap the bool as JSON; Flask cannot return a bare bool


@app.route('/calculate', methods=['POST'])
def calculate_loss():
return "1.0"


if __name__ == '__main__':
app.run(debug=True, host='0.0.0.0', port=8000)
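
A quick way to exercise this mock source once it is running (it listens on port 8000, per the `app.run` call above); the token and block height values are arbitrary:

```
curl http://localhost:8000/inference/ETH          # random dummy inference
curl http://localhost:8000/forecast               # dummy forecast node values
curl http://localhost:8000/truth/ETH/100          # random dummy ground truth
curl -X POST http://localhost:8000/calculate      # stub loss, always "1.0"
curl -X POST http://localhost:8000/is_never_negative
```
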
45 changes: 34 additions & 11 deletions adapter/api/worker-reputer/README.md
@@ -25,24 +25,44 @@ Worker: []lib.WorkerConfig{
},
```

Example as Reputer ("gt" in this context means "ground truth"):
```
{
    "topicId": 1,
    "groundTruthEntrypointName": "api-worker-reputer",
    "lossFunctionEntrypointName": "api-worker-reputer",
    "loopSeconds": 30,
    "minStake": 100000,
    "groundTruthParameters": {
        "GroundTruthEndpoint": "http://localhost:8888/gt/{Token}/{BlockHeight}",
        "Token": "ETHUSD"
    },
    "lossFunctionParameters": {
        "LossFunctionService": "http://localhost:5000",
        "LossMethodOptions": {
            "loss_method": "huber",
            "delta": "1.0"
        }
    }
}
```

## Parameters

The parameters section contains additional properties used to configure the URLs the adapter will hit.
In the case of the reputer, there are two parameters sections: one for the ground truth and one for the loss function.
In particular, `LossMethodOptions` is specific to the loss function and is passed unconverted to the loss function service.
It can be used to pass additional parameters to that service. For example, the `delta` parameter is passed to the Huber loss function like this (or as defined by the loss function service of choice):

```
"LossMethodOptions": {
"loss_method": "huber",
"delta": "1.0"
}
```


### Worker

@@ -59,7 +79,10 @@ InferenceEntrypoint: nil

### Reputer

`SourceOfTruthEndpoint` is required if `ReputerEntrypoint` is defined.
Two endpoints are required:
* `GroundTruthEndpoint`: provides the ground truth endpoint to hit. It does support template variables.
* `LossFunctionService`: the base URL of the loss function service, used both to calculate losses and to check whether the loss function is never negative. The paths `/calculate` and `/is_never_negative` are appended to it to form those two endpoints. It does not support template variables.
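
As an illustration of how the two routes are derived: with `"LossFunctionService": "http://localhost:5000"`, the node appends the fixed paths to that base URL. The requests below only sketch the derived routes against a locally running service; the JSON bodies are placeholders, as the node builds the real payloads itself (including the `LossMethodOptions` described above).

```
# LossFunctionService + /is_never_negative
curl -X POST http://localhost:5000/is_never_negative \
  -H 'Content-Type: application/json' \
  -d '{"loss_method": "sqe"}'

# LossFunctionService + /calculate (payload shape is a placeholder)
curl -X POST http://localhost:5000/calculate \
  -H 'Content-Type: application/json' \
  -d '{"loss_method": "sqe"}'
```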


### Additional Parameters
