Soroban Docs (#378)
* add manifest docs for soroban

* mapping docs for soroban

* Fix Stellar/Soroban docs

* Update to Stellar

---------

Co-authored-by: James Bayly <[email protected]>
guplersaxanoid and jamesbayly authored Aug 8, 2023
1 parent 954c800 commit 412c884
Showing 3 changed files with 277 additions and 3 deletions.
5 changes: 2 additions & 3 deletions docs/build/manifest/optimism.md
@@ -51,8 +51,7 @@ network:

dataSources:
- kind: ethereum/Runtime # We use ethereum runtime since Optimism is a layer-2 that is compatible
- startBlock: 100316590
- # startBlock: 9277162 # When the airdrop contract was deployed https://optimistic.etherscan.io/tx/0xdd10f016092f1584912a23e544a29a638610bdd6cb42a3e8b13030fd78334eba
+ startBlock: 100316590 # When the airdrop contract was deployed https://optimistic.etherscan.io/tx/0xdd10f016092f1584912a23e544a29a638610bdd6cb42a3e8b13030fd78334eba
options:
# Must be a key of assets
abi: airdrop
@@ -99,7 +98,7 @@ dataSources:

If you start your project by using the `subql init` command, you'll generally receive a starter project with the correct network settings. If you are changing the target chain of an existing project, you'll need to edit the [Network Spec](#network-spec) section of this manifest.

- The `chainId` is the network identifier of the blockchain. In Optimism it is `10`. See https://chainlist.org/chain/10
+ The `chainId` is the network identifier of the blockchain. In Optimism it is `10`. See https://chainlist.org/chain/10.

Additionally you will need to update the `endpoint`. This defines the (HTTP or WSS) endpoint of the blockchain to be indexed - **this must be a full archive node**. This property can be a string or an array of strings (e.g. `endpoint: ['rpc1.endpoint.com', 'rpc2.endpoint.com']`). We suggest providing an array of endpoints as it has the following benefits:

185 changes: 185 additions & 0 deletions docs/build/manifest/stellar.md
@@ -0,0 +1,185 @@
# Stellar & Soroban Manifest File [Beta]

::: warning Stellar and Soroban are in Beta
Stellar and Soroban support is still in beta and is not ready for production use. You can track progress of [Stellar support](https://github.com/subquery/subql-stellar/issues/2) and [Soroban support](https://github.com/subquery/subql-stellar/issues/3).
:::

The Manifest `project.yaml` file can be seen as the entry point of your project. It defines most of the details of how SubQuery will index and transform the chain data, clearly indicating where we are indexing data from and which on-chain events we are subscribing to.

The Manifest can be in either YAML or JSON format. In this document, we will use YAML in all the examples.

Below is a standard example of a basic Stellar & Soroban `project.yaml`.

```yml
specVersion: "1.0.0"

name: "soroban-subql-starter"
version: "0.0.1"
runner:
  node:
    name: "@subql/node-stellar"
    version: "*"
  query:
    name: "@subql/query"
    version: "*"
description: "This project can be used as a starting point for developing your new Stellar Soroban Future Network SubQuery project"
repository: "https://github.com/subquery/stellar-subql-starter"

schema:
  file: "./schema.graphql"

network:
  # Stellar and Soroban use the network passphrase as the chainId
  # 'Public Global Stellar Network ; September 2015' for mainnet
  # 'Test SDF Future Network ; October 2022' for Future Network
  chainId: "Test SDF Future Network ; October 2022"
  # This endpoint must be a public non-pruned archive node
  # We recommend providing more than one endpoint for improved reliability, performance, and uptime
  # Public nodes may be rate limited, which can affect indexing speed
  endpoint: ["https://rpc-futurenet.stellar.org:443"]
  # Recommended to provide the HTTP endpoint of a full chain dictionary to speed up processing
  # dictionary: "https://gx.api.subquery.network/sq/subquery/eth-dictionary"

dataSources:
  - kind: stellar/Runtime
    startBlock: 270000 # This is the start block from which you begin indexing
    mapping:
      file: "./dist/index.js"
      handlers:
        - handler: handleEvent
          kind: stellar/EventHandler
          filter:
            # contractId: "" # You can optionally specify a smart contract address here
            topics:
              - "transfer" # Topic signature(s) for the events, there can be up to 4
```

## Overview

### Top Level Spec

| Field | Type | Description |
| --------------- | ------------------------------------------ | --------------------------------------------------- |
| **specVersion** | String | The spec version of the manifest file |
| **name** | String | Name of your project |
| **version** | String | Version of your project |
| **description** | String | Description of your project |
| **runner** | [Runner Spec](#runner-spec) | Runner specs info |
| **repository** | String | Git repository address of your project |
| **schema** | [Schema Spec](#schema-spec) | The location of your GraphQL schema file |
| **network** | [Network Spec](#network-spec) | Detail of the network to be indexed |
| **dataSources** | [DataSource Spec](#datasource-spec)        | The datasources for your project                     |
| **templates**   | [Templates Spec](../dynamicdatasources.md) | Allows creating new datasources from these templates |

### Schema Spec

| Field    | Type   | Description                              |
| -------- | ------ | ---------------------------------------- |
| **file** | String | The location of your GraphQL schema file |

### Network Spec

If you start your project by using the `subql init` command, you'll generally receive a starter project with the correct network settings. If you are changing the target chain of an existing project, you'll need to edit the [Network Spec](#network-spec) section of this manifest.

The `chainId` is the network identifier of the blockchain; [Stellar and Soroban use the network passphrase](https://developers.stellar.org/docs/encyclopedia/network-passphrases) as this identifier. Examples are `Public Global Stellar Network ; September 2015` for Stellar mainnet and `Test SDF Future Network ; October 2022` for the Future Network.

Additionally you will need to update the `endpoint`. This defines the (HTTP or WSS) endpoint of the blockchain to be indexed - **this must be a full archive node**. This property can be a string or an array of strings (e.g. `endpoint: ['rpc1.endpoint.com', 'rpc2.endpoint.com']`). We suggest providing an array of endpoints as it has the following benefits:

- Increased speed - When enabled with [worker threads](../../run_publish/references.md#w---workers), RPC calls are distributed and parallelised among RPC providers. Historically, RPC latency is often the limiting factor with SubQuery.
- Increased reliability - If an endpoint goes offline, SubQuery will automatically switch to other RPC providers to continue indexing without interruption.
- Reduced load on RPC providers - Indexing is a computationally expensive process for RPC providers; by distributing requests among them, you lower the chance that your project will be rate limited.
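
As a minimal sketch of the array form, the `endpoint` from the example above could list several providers (the second URL is a hypothetical placeholder, not a real endpoint):

```yml
network:
  chainId: "Test SDF Future Network ; October 2022"
  endpoint:
    - "https://rpc-futurenet.stellar.org:443"
    - "https://your-private-rpc-provider.example/futurenet" # hypothetical second provider
```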

Public nodes may be rate limited, which can affect indexing speed. When developing your project, we suggest getting a private API key from a professional RPC provider.

| Field | Type | Description |
| ---------------- | ------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **chainId**      | String | A network identifier for the blockchain; [Stellar and Soroban use the network passphrase](https://developers.stellar.org/docs/encyclopedia/network-passphrases) |
| **endpoint** | String | Defines the endpoint of the blockchain to be indexed - **This must be a full archive node**. |
| **port** | Number | Optional port number on the `endpoint` to connect to |
| **dictionary** | String | It is suggested to provide the HTTP endpoint of a full chain dictionary to speed up processing - read [how a SubQuery Dictionary works](../../academy/tutorials_examples/dictionary.md). |
| **bypassBlocks** | Array  | Bypasses the stated block numbers; the values can be a `range` (e.g. `"10-50"`) or an `integer`, see [Bypass Blocks](#bypass-blocks) |
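
For illustration, a network spec combining several of these optional fields might look like the sketch below; the bypassed ledger values are arbitrary examples, and the dictionary line is left as a commented placeholder:

```yml
network:
  chainId: "Test SDF Future Network ; October 2022"
  endpoint: ["https://rpc-futurenet.stellar.org:443"]
  # Skip ledgers that are known to contain no relevant data (arbitrary example values)
  bypassBlocks: [1, 50, "270100-270200"]
  # dictionary: "<HTTP endpoint of a full chain dictionary, if one is available>"
```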

### Runner Spec

| Field | Type | Description |
| --------- | --------------------------------------- | ------------------------------------------ |
| **node**  | [Runner node spec](#runner-node-spec)   | Describes the node service used for indexing |
| **query** | [Runner query spec](#runner-query-spec) | Describes the query service                  |

### Runner Node Spec

| Field | Type | Description |
| ----------- | ------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **name** | String | `@subql/node-stellar` |
| **version** | String | Version of the indexer node service. It must follow the [SEMVER](https://semver.org/) rules or be `latest`; you can also find available versions in the SubQuery SDK [releases](https://github.com/subquery/subql/releases) |

### Runner Query Spec

| Field | Type | Description |
| ----------- | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **name** | String | `@subql/query` |
| **version** | String | Version of the query service. Available versions can be found [here](https://github.com/subquery/subql/blob/main/packages/query/CHANGELOG.md); it must also follow the SEMVER rules or be `latest`. |

### Datasource Spec

Defines the data that will be filtered and extracted and the location of the mapping function handler for the data transformation to be applied.

| Field | Type | Description |
| -------------- | ------------ | ----------------------------------------------------------------------------------------------------------------------------------------- |
| **kind** | string | [stellar/Runtime](#data-sources-and-mapping) |
| **startBlock** | Integer      | This changes your indexing start block (called a ledger on Stellar); set this higher to skip initial ledgers with no relevant data |
| **mapping** | Mapping Spec | |

### Mapping Spec

| Field | Type | Description |
| ---------------------- | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| **handlers & filters** | Default handlers and filters | List all the [mapping functions](../mapping/stellar.md) and their corresponding handler types, with additional mapping filters. |

## Data Sources and Mapping

In this section, we will talk about the default Stellar runtime and its mapping. Here is an example:

```yml
dataSources:
  - kind: stellar/Runtime
    startBlock: 270000 # This is the start block from which you begin indexing
    mapping:
      file: "./dist/index.js"
      handlers:
        ...
```

### Mapping Handlers and Filters

The following table explains filters supported by different handlers.

| Handler | Supported filter |
| ----------------------------------------------------------- | -------------------------------------------------------------------------- |
| [stellar/EventHandler](../mapping/stellar.md#event-handler) | Up to 4 `topics` filters applied as an array, and an optional `contractId` |

```yml
# Example filter from EventHandler
- handler: handleEvent
  kind: stellar/EventHandler
  filter:
    # contractId: "" # You can optionally specify a smart contract address here
    topics:
      - "transfer" # Topic signature(s) for the events, there can be up to 4
```

Default runtime mapping filters are an extremely useful feature for deciding which events will trigger a mapping handler.

Only incoming data that satisfies the filter conditions will be processed by the mapping functions. Mapping filters are optional but are highly recommended as they significantly reduce the amount of data processed by your SubQuery project and will improve indexing performance.

## Real-time indexing (Block Confirmations)

As indexers are an additional layer in your data processing pipeline, they can introduce a massive delay between when an on-chain event occurs and when the data is processed and able to be queried from the indexer.

SubQuery solves this problem by providing real-time indexing of unconfirmed data directly from the RPC endpoint. SubQuery takes the most probable data before it is confirmed and provides it to the app. In the unlikely event that the data isn’t confirmed and a reorg occurs, SubQuery will automatically roll back and correct its mistakes quickly and efficiently - resulting in an insanely quick user experience for your customers.

To control this feature, adjust the [--block-confirmations](../../run_publish/references.md#block-confirmations) command line argument to fine-tune your project, and ensure that [historical indexing](../../run_publish/references.md#disable-historical) is enabled (it is enabled by default).
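
For example, when running locally with Docker, this flag can be passed to the node service in your `docker-compose.yml` alongside the other node flags; the value of `10` below is an assumed illustration, not a recommendation:

```yml
subquery-node:
  image: subquerynetwork/subql-node-stellar:latest
  command:
    - -f=/app
    - --db-schema=app
    - --block-confirmations=10 # assumed example value; tune for your latency vs. safety trade-off
```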

## Validating

You can validate your project manifest by running `subql validate`. This will check that it has the correct structure and valid values where possible, and provide useful feedback as to where any fixes should be made.
90 changes: 90 additions & 0 deletions docs/build/mapping/stellar.md
@@ -0,0 +1,90 @@
# Stellar & Soroban Mapping [Beta]

::: warning Stellar and Soroban are in Beta
Stellar and Soroban support is still in beta and is not ready for production use. You can track progress of [Stellar support](https://github.com/subquery/subql-stellar/issues/2) and [Soroban support](https://github.com/subquery/subql-stellar/issues/3).
:::

Mapping functions define how chain data is transformed into the optimised GraphQL entities that we have previously defined in the `schema.graphql` file.

- Mappings are defined in the `src/mappings` directory and are exported as functions.
- These mappings are also exported in `src/index.ts`.
- The mapping files are referenced in `project.yaml` under the mapping handlers (a sketch of this layout follows below).
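
As a minimal sketch of that layout (the file name `mappingHandlers.ts` is only an assumed convention), a handler lives in `src/mappings` and is re-exported from `src/index.ts` so the handler name referenced in `project.yaml` resolves:

```ts
// src/mappings/mappingHandlers.ts (assumed file name)
import { StellarEvent } from "@subql/types-stellar";

export async function handleEvent(event: StellarEvent): Promise<void> {
  // transform the event into the GraphQL entities defined in schema.graphql
}

// src/index.ts would then re-export every mapping function, e.g.:
// export * from "./mappings/mappingHandlers";
```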

There is only one type of Handler currently supported for Stellar, [event handlers](#event-handler).

## Event Handler

You can use event handlers to capture information when certain events are included in transactions.

During processing, the event handler will receive an event as an argument with the event's typed inputs and outputs. Any type of event will trigger the mapping, allowing activity with the data source to be captured. You should use [Mapping Filters](../manifest/stellar.md#mapping-handlers-and-filters) in your manifest to filter events, to reduce the time it takes to index data and improve mapping performance.

```ts
import { Transfer } from "../types";
import { StellarEvent } from "@subql/types-stellar";

export async function handleEvent(event: StellarEvent): Promise<void> {
  logger.info(`New event at block ${event.ledger}`);

  // Get data from the event
  // The transfer event has the following payload [env, from, to]
  // logger.info(JSON.stringify(event));
  const {
    topic: [env, from, to],
  } = event;

  // Create the new transfer entity
  const transfer = Transfer.create({
    id: event.id,
    ledger: event.ledger,
    date: new Date(event.ledgerClosedAt),
    contract: event.contractId,
    fromId: from,
    toId: to,
    value: BigInt(event.value.decoded!),
  });

  await transfer.save();
}
```

## Third-party Library Support - the Sandbox

SubQuery is deterministic by design; that means that each SubQuery project is guaranteed to index the same data set. This is a critical factor that is required to decentralise SubQuery in the SubQuery Network. This limitation means that, in the default configuration, the indexer runs in a strict virtual machine with access to only a limited number of third-party libraries.

**You can easily bypass this limitation however, allowing you to retrieve data from external API endpoints, make non-historical RPC calls, and import your own external libraries into your projects.** In order to do so, you must run your project in `unsafe` mode; you can read more about this in the [references](../../run_publish/references.md#unsafe-node-service). An easy way to do this while developing (and running in Docker) is to add the following line to your `docker-compose.yml`:

```yml
subquery-node:
  image: subquerynetwork/subql-node-stellar:latest
  ...
  command:
    - -f=/app
    - --db-schema=app
    - --unsafe
  ...
```

When run in `unsafe` mode, you can import any custom libraries into your project and make external API calls using tools like node-fetch. A simple example is given below:

```ts
import { StellarEvent } from "@subql/types-stellar";
import fetch from "node-fetch";

export async function handleEvent(event: StellarEvent): Promise<void> {
  const httpData = await fetch("https://api.github.com/users/github");
  logger.info(`httpData: ${JSON.stringify(await httpData.json())}`);
  // Do something with this data
}
```

By default (when in safe mode), the [VM2](https://www.npmjs.com/package/vm2) sandbox applies the following restrictions (a short example follows this list):

- only certain built-in modules are available, e.g. `assert`, `buffer`, `crypto`, `util` and `path`
- only third-party libraries written in _CommonJS_ can be imported
- external `HTTP` and `WebSocket` connections are forbidden
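
As an illustration of what safe mode permits, here is a minimal sketch that uses the built-in `crypto` module inside an event handler (it assumes the same `StellarEvent` handler shape as the examples above):

```ts
import { createHash } from "crypto";
import { StellarEvent } from "@subql/types-stellar";

export async function handleEvent(event: StellarEvent): Promise<void> {
  // "crypto" is one of the built-in modules allowed in safe mode,
  // so hashing works without the --unsafe flag
  const hash = createHash("sha256").update(event.id).digest("hex");
  logger.info(`Hashed event id: ${hash}`);
}
```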

## Modules and Libraries

To improve SubQuery's data processing capabilities, we have allowed some of NodeJS's built-in modules for running mapping functions in the [sandbox](#third-party-library-support---the-sandbox), and have allowed users to call third-party libraries.

Please note this is an **experimental feature** and you may encounter bugs or issues that may negatively impact your mapping functions. Please report any bugs you find by creating an issue in [GitHub](https://github.com/subquery/subql).
