chore: fix typos
cristaloleg committed Feb 14, 2025
1 parent 6bd2670 commit 67b6758
Showing 12 changed files with 19 additions and 19 deletions.
2 changes: 1 addition & 1 deletion blob/commitment_proof.go
@@ -103,7 +103,7 @@ func (commitmentProof *CommitmentProof) Verify(root []byte, subtreeRootThreshold
}

if subtreeRootThreshold <= 0 {
-return false, errors.New("subtreeRootThreshould must be > 0")
+return false, errors.New("subtreeRootThreshold must be > 0")
}

// use the computed total number of shares to calculate the subtree roots
2 changes: 1 addition & 1 deletion blob/service.go
@@ -653,7 +653,7 @@ func computeSubtreeRoots(shares []libshare.Share, ranges []nmt.LeafRange, offset
return nil, fmt.Errorf("cannot compute subtree roots for an empty ranges list")
}
if offset < 0 {
return nil, fmt.Errorf("the offset %d cannot be stricly negative", offset)
return nil, fmt.Errorf("the offset %d cannot be strictly negative", offset)
}

// create a tree containing the shares to generate their subtree roots
2 changes: 1 addition & 1 deletion das/state.go
@@ -126,7 +126,7 @@ func (s *coordinatorState) handleRetryResult(res result) {
s.failed[h] = nextRetry
}

-// processed height are either already moved to failed map or succeeded, cleanup inRetry
+// processed heights are either already moved to failed map or succeeded, cleanup inRetry
for h := res.from; h <= res.to; h++ {
delete(s.inRetry, h)
}
2 changes: 1 addition & 1 deletion docs/adr/adr-001-predevnet-celestia-node.md
@@ -20,7 +20,7 @@ This ADR describes a basic pre-devnet design for a "Celestia Node" that was deci

The goal of this design is to get a basic structure of "Celestia Node" interoperating with a "Celestia Core" consensus node by November 2021 (devnet).

-After basic interoperability on devnet, there will be an effort to merge consensus functionality into the "Celestia Node" design as a modulor service that can be added on top of the basic functions of a "Celestia Node".
+After basic interoperability on devnet, there will be an effort to merge consensus functionality into the "Celestia Node" design as a modular service that can be added on top of the basic functions of a "Celestia Node".

## Decision

8 changes: 4 additions & 4 deletions docs/adr/adr-003-march2022-testnet.md
@@ -31,7 +31,7 @@ A **bridge** node does not care about what kind of celestia-core node it is conn
it only cares that it has a direct RPC connection to a celestia-core node from which it can listen for new blocks.

The name **bridge** was chosen as the purpose of this node type is to provide a mechanism to relay celestia-core blocks
-to the data availability network.
+to the data availability network.

### **Full Node**

@@ -131,7 +131,7 @@ checkpoint on any new headers.
### `HeaderService` becomes main component around which most other services are focused

Initially, we started with BlockService being the more “important” component during devnet architecture, but overlooked
-some problems with regards to sync (we initially made the decision that a celestia full node would have to be started
+some problems with regard to sync (we initially made the decision that a celestia full node would have to be started
together at the same time as a core node).

This led us to an issue where eventually we needed to connect to an already-running core node and sync from it. We were
@@ -183,7 +183,7 @@ for
the data itself. It is possible to get the namespace for each share encoded in inner non-leaf nodes of the NMT tree.
* Pruning for shares.

-### [Move IPLD from celetia-node repo into its own repo](https://github.com/celestiaorg/celestia-node/issues/111)
+### [Move IPLD from celestia-node repo into its own repo](https://github.com/celestiaorg/celestia-node/issues/111)

Since the IPLD package is pretty much entirely separate from the celestia-node implementation, it makes sense that it
is removed from the celestia-node repository and maintained separately. The extraction of IPLD should also include a
@@ -192,7 +192,7 @@ documentation also needs updating.

### Implement additional light node verification logic similar to the Tendermint Light Client Model

-At the moment, the syncing logic for a **light** nodes is simple in that it syncs each header from a single peer.
+At the moment, the syncing logic for a **light** node is simple in that it syncs each header from a single peer.
Instead, the **light** node should double-check headers with another randomly chosen
["witness"](https://github.com/tendermint/tendermint/blob/02d456b8b8274088e8d3c6e1714263a47ffe13ac/light/client.go#L154-L161)
peer than the primary peer from which it received the header, as described in the
2 changes: 1 addition & 1 deletion docs/adr/adr-006-fraud-service.md
@@ -128,7 +128,7 @@ In addition, `das.Daser`:
```

```go
-// ProofType is a enum type that represents a particular type of fraud proof.
+// ProofType is an enum type that represents a particular type of fraud proof.
type ProofType string
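// Purely illustrative (not taken from this ADR): a string-based "enum" like
// ProofType is conventionally paired with typed constants, for example:
const (
	BadEncodingProofType ProofType = "badencoding" // example value only
)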
// Proof is a generic interface that will be used for all types of fraud proofs in the network.
2 changes: 1 addition & 1 deletion docs/adr/adr-008-p2p-discovery.md
@@ -13,7 +13,7 @@

This ADR is intended to describe p2p full node discovery in celestia node.
P2P discovery helps light and full nodes to find other full nodes on the network at the specified topic(`full`).
-As soon as a full node is found and connection is established with it, then it(full node) will be added to a set of peers(limitedSet).
+As soon as a full node is found and connection is established with it, then it (full node) will be added to a set of peers (limitedSet).
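As a rough sketch of the capped peer set this paragraph refers to: the snippet below uses stand-in names (`PeerID`, `Add`, the `limit` field) rather than celestia-node's actual `limitedSet` type, and only illustrates the "add discovered full nodes until a limit is reached" behaviour.

```go
// Simplified stand-in for a bounded set of discovered full-node peers.
// PeerID substitutes for libp2p's peer.ID to keep the sketch self-contained.
package discovery

import (
	"errors"
	"sync"
)

type PeerID string

type limitedSet struct {
	mu    sync.Mutex
	limit int
	peers map[PeerID]struct{}
}

func newLimitedSet(limit int) *limitedSet {
	return &limitedSet{limit: limit, peers: make(map[PeerID]struct{})}
}

// Add records a newly found full node, refusing once the cap is reached.
func (s *limitedSet) Add(p PeerID) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.peers[p]; ok {
		return nil // already tracked
	}
	if len(s.peers) >= s.limit {
		return errors.New("limitedSet: peer limit reached")
	}
	s.peers[p] = struct{}{}
	return nil
}
```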

## Decision

6 changes: 3 additions & 3 deletions docs/adr/adr-009-public-api.md
@@ -12,7 +12,7 @@
## Context

Celestia Node has been built for almost half a year with a bottom-up approach to
-development. The core lower level components are built first and public API
+development. The core lower-level components are built first and public API
around them is getting organically shaped. Now that the project is maturing and
its architecture is better defined, it's a good time to formally define a set of
modules provided by the node and their respective APIs.
@@ -32,15 +32,15 @@ matching resource constraints of a type.
### Goals

- Ergonomic. Simple, idiomatic and self-explanatory.
-- Module centric(modular). The API is not monolithic and is segregated into
+- Module-centric(modular). The API is not monolithic and is segregated into
different categorized and independent modules.
- Unified. All the node types implement the same set of APIs. The difference is
defined by different implementations of some modules to meet resource
requirements of a type. Example: FullAvailability and LightAvailability.
- Embeddable. Simply constructable Node with library style API. Not an
SDK/Framework which dictates users the way to build an app, but users are those
who decide how they want to build the app using the API.
-- Language agnostic. It should be simple to implement similar module
+- Language-agnostic. It should be simple to implement similar module
interfaces/traits in other languages over RPC clients.

### Implementation
2 changes: 1 addition & 1 deletion docs/adr/adr-011-blocksync-overhaul-part-1.md
@@ -343,7 +343,7 @@ To remove stored EDS `Remove` method is introduced. Internally it:
- Destroys `Shard` via `DAGStore`
- Internally removes its `Mount` as well
- Removes CARv1 file from disk under `Store.Path/DataHash` path
-- Drops indecies
+- Drops indices
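The steps above can be pictured with a short sketch. Everything below is illustrative only: `shardDestroyer`, the `store` struct, and the index map are stand-ins for the DAGStore and Store types, not the actual celestia-node implementation.

```go
package eds

import (
	"fmt"
	"os"
	"path/filepath"
)

// shardDestroyer stands in for the DAGStore call that destroys a shard
// (which also removes its mount).
type shardDestroyer interface {
	DestroyShard(key string) error
}

type store struct {
	path    string              // stand-in for Store.Path
	dag     shardDestroyer      // stand-in for the DAGStore
	indices map[string]struct{} // stand-in for the indices
}

// Remove drops everything associated with an EDS identified by dataHash:
// the shard (and its mount), the CARv1 file on disk, and its index entry.
func (s *store) Remove(dataHash string) error {
	if err := s.dag.DestroyShard(dataHash); err != nil {
		return fmt.Errorf("destroying shard: %w", err)
	}
	if err := os.Remove(filepath.Join(s.path, dataHash)); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("removing CARv1 file: %w", err)
	}
	delete(s.indices, dataHash)
	return nil
}
```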

___NOTES:___

2 changes: 1 addition & 1 deletion docs/adr/adr-012-daser-parallelization.md
@@ -28,7 +28,7 @@ Using multiple coordinated workers running in parallel drastically improves the
To achieve parallelization, the DASer was split into the following core components:

1. The `Coordinator` holds the current state of sampled headers and defines what headers should be sampled next.
-2. `Workers` perform sampling over a range of headers and communicate the results back to the coordinator. Workers are created on demand, when `Jobs` are available. The amount of concurrently running workers is limited by the const `concurrencyLimit`. Length of the sampling range is defined by DASer configuration param `samplingRange`.
+2. `Workers` perform sampling over a range of headers and communicate the results back to the coordinator. Workers are created on demand, when `Jobs` are available. The amount of concurrently running workers is limited by the const `concurrencyLimit`. Length of the sampling range is defined by DASer configuration param `samplingRange`.
3. The `Subscriber` subscribes to network head updates. When new headers are found, it will notify the `Coordinator`. Recent network head blocks will be prioritized for sampling to increase the availability of the most demanded blocks.
4. The `CheckpointStore` stores/loads the `Coordinator` state as a checkpoint to allow for seamless resuming upon restart. The `Coordinator` stores the state as a checkpoint on exit and resumes sampling from the latest state.
It also periodically stores checkpoints to storage to avoid the situation when no checkpoint is stored upon a hard shutdown of the node.
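The coordinator/worker split in the list above is essentially a bounded worker pool. The sketch below is only an illustration of that pattern, under stated assumptions: `job`, `result`, and the `sample` callback are stand-in names, and the real DASer additionally routes results through the `Coordinator` state and checkpoint store.

```go
package das

import "sync"

const (
	concurrencyLimit = 16 // maximum number of workers running at once
	samplingRange    = 64 // number of headers covered by a single job
)

type job struct{ from, to uint64 }

type result struct {
	job
	failed []uint64 // heights that failed sampling and need a retry
}

// runWorkers consumes jobs, sampling each height in a job's range with at most
// concurrencyLimit goroutines in flight, and reports results back.
func runWorkers(jobs <-chan job, results chan<- result, sample func(height uint64) error) {
	var wg sync.WaitGroup
	sem := make(chan struct{}, concurrencyLimit)

	for j := range jobs {
		sem <- struct{}{} // block when the concurrency limit is reached
		wg.Add(1)
		go func(j job) {
			defer wg.Done()
			defer func() { <-sem }()

			res := result{job: j}
			for h := j.from; h <= j.to; h++ {
				if err := sample(h); err != nil {
					res.failed = append(res.failed, h)
				}
			}
			results <- res
		}(j)
	}
	wg.Wait()
	close(results)
}
```

A buffered-channel semaphore like this caps concurrency without a separate pool manager, which matches the "workers are created on demand" behaviour described in the list.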
6 changes: 3 additions & 3 deletions nodebuilder/p2p/routing.go
@@ -18,7 +18,7 @@ func newDHT(
lc fx.Lifecycle,
tp node.Type,
network Network,
-bootsrappers Bootstrappers,
+bootstrappers Bootstrappers,
host HostBase,
dataStore datastore.Batching,
) (*dht.IpfsDHT, error) {
@@ -35,10 +35,10 @@ func newDHT(
// no bootstrappers for a bootstrapper ¯\_(ツ)_/¯
// otherwise dht.Bootstrap(OnStart hook) will deadlock
if isBootstrapper() {
-bootsrappers = nil
+bootstrappers = nil
}

-dht, err := discovery.NewDHT(ctx, network.String(), bootsrappers, host, dataStore, mode)
+dht, err := discovery.NewDHT(ctx, network.String(), bootstrappers, host, dataStore, mode)
if err != nil {
return nil, err
}
2 changes: 1 addition & 1 deletion store/file/ods.go
@@ -129,7 +129,7 @@ func writeAxisRoots(w io.Writer, roots *share.AxisRoots) error {

for _, root := range roots.ColumnRoots {
if _, err := w.Write(root); err != nil {
return fmt.Errorf("writing columm roots: %w", err)
return fmt.Errorf("writing column roots: %w", err)
}
}
